Behavioral Cloning for Prosthetics

Walking is a critical motor skill at the center of human mobility and independence. However, for the many millions of people affected by musculoskeletal disorders, amputations, neurological pathologies, or other health conditions, walking can be a daily struggle or even completely out of reach. Modern assistive robotics has the potential to change the lives of people affected by these conditions for the better.

Healthy human adults walk on average several thousand steps per day, seemingly without effort and with substantial grace and fluency. In doing so, they repeatedly perform variations of the same periodic behavior, adapting in each moment to the state of their body and the conditions of their environment. By taking a behavioral cloning approach, we are able to learn complex and informative models of human locomotion. These models translate directly into control commands for our powered prosthetic device. Our models, much like humans themselves, constantly adapt and interact with the user as they walk, run, jump, or climb stairs. My work shows that probabilistic models can capture the control strategies needed to react quickly and effectively to changes in human movement and environment.
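At its core, behavioral cloning reduces control to supervised learning: observed sensor states are paired with the control signals a human produced in those states, and a model is fit to reproduce that mapping. The sketch below illustrates this with a synthetic dataset and a simple linear policy; the dimensions, feature names, and least-squares fit are illustrative assumptions, not the actual models used in this work.

```python
import numpy as np

# Illustrative sketch: behavioral cloning as supervised regression from
# a sensed gait state to the control signal observed in the human data.
# All dimensions and values below are synthetic, for demonstration only.
rng = np.random.default_rng(0)

# Fake dataset: 200 time steps of a 6-D sensor state (e.g., IMU angles
# and foot forces) and the joint torque observed at each step.
states = rng.normal(size=(200, 6))
true_weights = np.array([1.5, -0.8, 0.3, 0.0, 2.0, -1.2])
torques = states @ true_weights + 0.05 * rng.normal(size=200)

# Fit a linear policy pi(state) -> torque by least squares.
policy_weights, *_ = np.linalg.lstsq(states, torques, rcond=None)

# At run time, the cloned policy maps each new sensor reading
# directly to a prosthesis control command.
new_state = rng.normal(size=6)
command = new_state @ policy_weights
```

A linear fit stands in here for the richer probabilistic models described above, but the training signal is the same: minimize the gap between the model's commands and the human's recorded behavior.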

I incorporated a multitude of body-worn sensors, including inertial measurements of each limb, force measurements from instrumented insoles, and camera sensors able to discern the local environment. I designed custom sensor packs that use an ESP32 microcontroller programmed in embedded C to collect data from the camera, IMU, and insoles and transmit it to a remote PC for storage and processing. One of the sensor packs also links to the robotic prosthesis to enable wireless communication and control. This variety of sensors let me distinguish human, robot, and environmental features over which my models can make decisions. For example, the insole sensors below show how foot forces change with respect to the environment.
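On the PC side, each incoming transmission must be decoded into named sensor fields before storage. The sketch below shows one way this could look; the packet layout (timestamp, six IMU floats, eight insole pressure cells, little-endian) is a hypothetical example, not the actual wire format used by the sensor packs.

```python
import struct

# Hypothetical packet layout, for illustration only: assume each
# sensor-pack frame carries a timestamp (uint32, ms), 6 IMU floats
# (accel x/y/z, gyro x/y/z), and 8 insole pressure values (uint16),
# all little-endian as an ESP32 would naturally emit them.
FRAME_FMT = "<I6f8H"
FRAME_SIZE = struct.calcsize(FRAME_FMT)

def parse_frame(payload: bytes) -> dict:
    """Unpack one binary frame into named sensor fields."""
    fields = struct.unpack(FRAME_FMT, payload[:FRAME_SIZE])
    return {
        "t_ms": fields[0],
        "imu": fields[1:7],      # accel x/y/z, gyro x/y/z
        "insole": fields[7:15],  # 8 pressure cells
    }

# Example: round-trip a synthetic frame as the PC-side receiver would.
raw = struct.pack(FRAME_FMT, 1234, *([0.0] * 6), *([100] * 8))
frame = parse_frame(raw)
```

Fixed-size binary frames like this keep the radio payload small and make the PC-side parser a single `struct.unpack` call per packet.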

One remaining problem, however, is that machine learning models learn from the data they are given but lack any contextual basis for what that data means. It is therefore easy to trick ML models into producing actions outside a robot's operating parameters. These adversarial samples can arise naturally, through sensor or data errors, or unnaturally, through malicious attacks on the prosthesis software. In response, we added an optimization step to our learning framework that specifically targets data points violating predefined constraints. In practical terms, we can set hard constraints on the control outputs, such as torque, velocity, or position, and optimize the models for both accuracy and constraint satisfaction. In this way, we build models that are both adaptive and safe.
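One common way to realize this kind of constrained optimization, sketched below under assumed details, is to add a penalty term to the training loss for any output beyond an actuator limit, and to back it up with a hard clamp on the command actually sent to the hardware. The torque limit, penalty weight, and function names here are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

# Illustrative sketch of constraint-aware learning: a penalty term in
# the loss discourages outputs beyond a hard actuator limit, and a
# final clamp guarantees the runtime command stays in range.
TORQUE_MAX = 40.0  # assumed N*m limit, for illustration only

def constrained_loss(pred, target, lam=10.0):
    """Squared error plus a penalty on constraint violations."""
    mse = np.mean((pred - target) ** 2)
    excess = np.maximum(np.abs(pred) - TORQUE_MAX, 0.0)
    return mse + lam * np.mean(excess ** 2)

def safe_command(pred):
    """Hard output clamp: the last line of defense before the actuator."""
    return np.clip(pred, -TORQUE_MAX, TORQUE_MAX)

target = np.array([10.0, -20.0, 35.0])
ok_pred = np.array([11.0, -19.0, 34.0])   # within limits
bad_pred = np.array([11.0, -19.0, 90.0])  # violates the torque limit

# The violating prediction incurs a much larger training loss, and the
# clamp keeps the runtime command inside the safe envelope regardless.
loss_ok = constrained_loss(ok_pred, target)
loss_bad = constrained_loss(bad_pred, target)
cmd = safe_command(bad_pred)
```

The penalty shapes the model toward constraint satisfaction during training, while the clamp guarantees safety at run time even against an adversarial input the model has never seen.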

We are in the late stages of testing our methods with amputee participants and expect to share new data soon.

Further Information

For further information, please consult the peer-reviewed conference papers below:
