Poor damping in NARX

2 views (last 30 days)
Giuseppe Menga
Giuseppe Menga on 7 Jun 2022
Answered: Krishna on 5 Feb 2024
I'm using NARX nets to estimate joint velocities from electromyographic (EMG) signals of a patient's muscles to control a lower limb exoskeleton.
The net is trained on several trials of postural exercises in which EMG signals and joint velocities are recorded.
The result is a kind of admittance control: the NARX acts as an admittance filter where the input EMG signals are related to the joint torques and the outputs are velocities.
Sometimes I notice poor damping in the NARX response, as in the figure.
At the expense of some lag, I filtered those signals at the output of the net. In particular, the figure uses a Savitzky-Golay smoothing filter of order 1 and frame length 17.
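For reference, the post-hoc smoothing described above can be reproduced with `sgolayfilt` from the Signal Processing Toolbox. This is only a sketch; `vel_raw` is a placeholder name for the velocity signal produced by the net:

```matlab
% Smooth the NARX output velocities with a Savitzky-Golay filter,
% using the order and frame length quoted in the question.
order    = 1;    % polynomial order
framelen = 17;   % frame length (must be odd)
vel_smooth = sgolayfilt(vel_raw, order, framelen);
```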
Then, I have two questions:
  • is it possible to force the net to be more damped?
  • if not, how can the filter be embedded directly in the net design?
Giuseppe

Answers (1)

Krishna
Krishna on 5 Feb 2024
Hello Giuseppe,
Based on what you've described, it appears you are engaged in a project that utilizes a Nonlinear Autoregressive with Exogenous Inputs (NARX) neural network. This network is employed to predict joint velocities from electromyography (EMG) signals, which are then used to control a lower limb exoskeleton. The NARX network serves as an admittance filter by translating EMG signals, indicative of muscle activity and thus joint torques, into joint velocities.
You've encountered an issue where the output of the NARX network occasionally displays insufficient damping, leading to unwanted oscillations or overshoots in the joint velocity response, which is not ideal for the exoskeleton's smooth operation.
To improve the damping of the NARX output, you might try several methods to train the network for a better-damped response:
  1. Train with data that includes instances of effective damping, either by incorporating more of such data or by expanding your dataset with synthesized examples.
  2. Introduce regularization terms like L1 or L2 penalties into your loss function to nudge the network toward learning a more gradual function, potentially enhancing damping.
  3. Change the network's structure, including the number of hidden layers, neurons, or activation function types, as these may affect the system's damping properties.
  4. Implement a feedback mechanism during training that penalizes insufficient damping. This can be done by crafting a custom loss function that factors in damping quality.
  5. Tailor the learning algorithm to emphasize damping by employing a weighted loss function that prioritizes damping characteristics during the optimization.
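As a concrete starting point for option 2, MATLAB's Deep Learning Toolbox supports regularized training of NARX nets out of the box. A minimal sketch, assuming `emg` and `vel` are cell arrays holding the recorded input and target sequences from the postural-exercise trials (the delay and layer sizes below are illustrative, not recommendations):

```matlab
% Train a NARX net with regularization to encourage a smoother mapping.
net = narxnet(1:2, 1:2, 10);      % input delays, feedback delays, hidden units
net.trainFcn = 'trainbr';         % Bayesian regularization training
% Alternatively, keep the default 'trainlm' and add an explicit L2 penalty:
% net.performParam.regularization = 0.1;
[Xs, Xi, Ai, Ts] = preparets(net, emg, {}, vel);
net = train(net, Xs, Ts, Xi, Ai);
```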
To integrate a filter within the network design, consider these strategies:
  1. Develop a specialized neural network layer that executes the Savitzky-Golay filtering process. This could be positioned at the network's conclusion, enabling the combined system (NARX + filter layer) to be trained from start to finish. The filter layer's weights could either be fixed to the Savitzky-Golay filter's coefficients or start with those values and be fine-tuned through training.
  2. Confirm that the filtering process is differentiable, allowing it to be incorporated into the backpropagation routine. Given that the Savitzky-Golay filter is a convolution-based linear operation, it can be fashioned into a differentiable function.
  3. Construct a composite model in which the NARX network generates an initial output, followed by a separate differentiable filter module refining this output. The loss is computed post-filtering during training, with gradients being sent back through the filter to the NARX network.
  4. Modify the loss function to incorporate an element that simulates the Savitzky-Golay filter's impact, such as a term that discourages high-frequency elements in the output, prompting the network to prefer a smoother response.
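Strategy 1 rests on the observation in point 2 that Savitzky-Golay smoothing is a fixed linear FIR operation, so its coefficients can be extracted and applied (or used to initialize a trainable output layer). A minimal sketch, again with `vel_raw` as a placeholder for the net's output:

```matlab
% The Savitzky-Golay smoother is a linear FIR filter: sgolay returns the
% projection matrix, and its center row gives the smoothing coefficients.
order = 1; framelen = 17;
B = sgolay(order, framelen);
g = B((framelen + 1)/2, :);            % FIR coefficients of the smoother
vel_smooth = conv(vel_raw, g, 'same'); % matches sgolayfilt away from the edges
```

Because this is just a convolution, the same coefficients can seed a fixed or fine-tunable linear layer appended to the network.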
It's crucial to acknowledge that introducing a filter directly into the network's architecture demands a thoughtful analysis of the filter's characteristics and the network's training dynamics. I have described several approaches; you may need to try them and implement the ones that best suit your problem. The filter must be compatible with the gradient-based optimization techniques used for neural network training. Moreover, embedding the filter within the network adds to the model's complexity, potentially necessitating additional data and computational power for effective training.
Hope this helps.

Products


Release

R2021b
