Gesture recognition for transhumeral prosthesis control using EMG and NIR

Abstract: A key challenge associated with myoelectric prosthetic limbs is the acquisition of a good-quality gesture intent signal from the residual anatomy of an amputee. In this study, the authors aim to overcome this limitation by evaluating the classification accuracy of fused wearable electromyography (EMG) and near-infrared (NIR) sensing for eight hand gesture motions across 12 able-bodied participants. As part of the study, they compare the classification accuracy of a multi-layer perceptron neural network, linear discriminant analysis and quadratic discriminant analysis for different sensing configurations, i.e. EMG-only, NIR-only and EMG-NIR. A separate offline ultrasound scan was conducted as part of the study, serving as a ground truth and contrastive basis for the results obtained from the wearable sensors and allowing for a closer study of the anatomy along the humerus during gesture motion. The findings suggest that it could be possible to further develop transhumeral prostheses using affordable, ergonomic and wearable EMG and NIR sensing, without the need for invasive neuromuscular sensors or further hardware complexity.


Introduction
The human hand comprises 21 degrees of freedom and can be viewed as a means of accomplishing various physical activities, navigating the environment and carrying out social interactions [1]. The degree of upper limb loss can be categorised into seven levels depending on the severity of amputation: forequarter, shoulder disarticulation, transhumeral, elbow disarticulation, transradial, wrist disarticulation and transcarpal amputation [1]. An annotated skeletal view of the upper limb can be seen in Fig. 1.
Combined statistics on the levels of upper-limb amputation in the UK and Italy show that (aside from transcarpal amputees) transhumeral amputees account for the largest cohort amongst upper limb amputees at 16%, closely followed by transradial amputees at 12%; a pie chart of this can be seen in Fig. 2 [1]. The brain is in a feedback loop with various nerves, which conduct signals back and forth through the efferent and afferent pathways catering to motor control and feedback sensations. Limb loss through amputation disrupts the closed-loop nature of this feedback system [1]. The National Amputee Statistical Database (NASDAB) cites trauma as the main cause of upper limb amputation, closely followed by neoplasia and vascular infections [2]. Primarily, there are three main kinds of prosthesis: the non-functional and mainly cosmetic prosthesis, the mechanically actuated body-powered prosthesis and the myoelectric prosthesis, on which this research is based [3]. Although there have been significant advances in this area over the last few decades, myoelectric prosthesis users are still affected by technical and ergonomic challenges. Technical challenges include embedding the relevant electronics, motors and actuators into the prosthetic limb while maintaining an acceptable weight, the control system, and the occasional need for surgically implanted electrodes in the anatomy or brain [1]. Ergonomic challenges include the invasiveness of neuromuscular electrodes, the cognitive load associated with operability and overall durability.
Scientific research on the adoption of the EMG signal as an input for prosthesis control dates to 1948, and it has been applied across numerous case studies both clinically and academically, as an acquired EMG signal reflects motoneuron-generated signals from the brain culminating in an electrical action potential, which in turn can be recorded with surface electrodes [31, 37]. Despite the wide and favoured application of surface-acquired EMG signals, common limitations include signal degradation due to muscular fatigue and proneness to crosstalk from neighbouring muscles [38-43]. To compensate for the limitations of EMG, NIR has been proposed by several authors, as it allows for concurrent monitoring of the perfusion level and oxygenation of the muscles, producing a signal with a high spatial resolution [32, 44-46]. NIR in the region of 650-1000 nm is used for human monitoring as it allows for non-invasive tissue monitoring, human tissue being relatively transparent to light in this region, with the main light absorbers being the oxygenated and deoxygenated variations of haemoglobin [47, 48].
The setup for NIR sensing requires a light-emitting diode (LED) infrared emitter and an associated receiver, working on the principle that light scattered through biological tissue can be picked up by an appropriate photodetector [48]. The penetrative depth of the travelling NIR light is approximately half the distance between the infrared transmitter and receiver [48]. Oxygenated and deoxygenated haemoglobin present in the blood possess complementary optical properties with respect to NIR light, which allows the contraction of a muscle to cause a variation in the amount of infrared light acquired by the NIR receiver on the surface of the skin [49, 50]. Thus, it is hypothesised that the fusion of physiological estimates consisting of both EMG and NIR signals would allow for enhanced monitoring of muscle state and contractions from both an electrophysiological and a haemodynamic perspective, and thereby provide a rich signal source to serve as the input for the actuation of a prosthetic limb [31].
In this study, we investigate the design of a myoelectric control system, based on EMG and NIR sensing, for transhumeral amputees, who represent the largest cohort with significant limb amputation. The study aims to take both ergonomic and technical issues into consideration and in particular addresses the following:
• Design and selection of non-invasive and wearable sensors that are affordable, comfortable and can be incorporated into a prosthesis.
• Comparison of classification and gesture recognition accuracy of EMG-NIR, EMG-only and NIR-only sensing.
• Use of ultrasound as an independent contrastive basis to study the anatomy around the humerus.

Myoelectric control architecture
A model of the pattern-recognition-based myoelectric control architecture applied in this study can be seen in Fig. 3; it comprises six key stages between the sensing and acquisition of a muscle signal and the relevant actuation [51]. The signal is acquired with a surface EMG electrode and an NIR transmitter-receiver, followed by the relevant amplification and filtering (details in Section 2.3). A feature extraction exercise is then conducted, followed by a classification process which serves as the motor function determination stage. The final three blocks represent the actuator function selection, which maps the chosen motor function and gesture to a relevant actuator position set point; the motor control, which is the lower-level control of a set of servo drivers with the determined set points and provides the final control signal; and the prosthesis motors, which represent the final stage [51]. A biological feedback takes place at this stage in the form of vision, which informs the central nervous system (the bio-controller) of what further intent stimuli need to be sent to the prosthetic system to complete the task. This stage is carried out iteratively until a satisfactory action is completed. The work done as part of this research focuses mainly on the sensing, signal processing and classification of hand gestures from an acquired signal (blocks 1-4 in Fig. 3); the work of the final three blocks has been deemed out of scope within the context of this paper.

Gesture sets
In order to determine which hand gestures were to be explored as part of this study, an ultrasound scan was conducted along the lower part of the biceps and brachialis region, corresponding to the region of the upper limb where the wearable sensors were to be placed. The ultrasound imaging was carried out by a consultant clinician who specialises in musculoskeletal interventions and who contributed the interpretation of complex limb anatomies. For the ultrasound imaging undertaken as part of this study, the Canon Aplio i900 machine was used with the linear PLU-1204BT 18L7 probe, whose frequency is in the range of 7.2-14 MHz and which is typically used for musculoskeletal imaging. The scanning was done with the probe placed in both longitudinal and transverse positions while the participants repeated the selected gesture sets from the same position as in the wearable data collection. For each of the gestures, four to five repeats were performed by each subject, with the probe held in place at each of three positions on the right upper arm, including the anterior aspect.
The gesture sets used by McIntosh et al. [52] for wrist-based gesture recognition were used as a starting point, but it was seen that although these gesture sets resulted in muscular contractions along the forearm, they produced no resulting muscular contraction in the muscles along the humerus. The gesture sets used in Gaudet et al.'s [53] work with transhumeral amputees, which comprised bulk and compound gesture motions, showed a notable muscular contraction during the qualitative assessment in the ultrasound scan. Thus, a total of eight discrete gestures were selected for this study: elbow flexion (EF), elbow extension (EE), forearm pronation (FP), forearm supination (FS), wrist flexion (WF), wrist extension (WE), hand open (HO) and hand closed (HC) [53].

Electromyography:
An EMG signal is a compound signal which depends on the anatomical properties of an individual and accumulates signals from various motor units [54]. The electrical signals present in muscle tissue are referred to as muscle action potentials, and an acquired EMG signal comprises the muscle action potentials occurring within the segment of muscle tissue [54]. A superimposed combination of the action potentials from the muscular tissue of an individual motor unit is termed the motor unit action potential (MUAP) [54]. A discrete representation of an EMG signal is given as

$x_n = \sum_{r=0}^{N-1} h_r\, e_{n-r} + w_n$

where $x_n$ represents the EMG signal, $e_n$ is the point being processed, $h_r$ is the firing impulse of the MUAP, $w_n$ is the zero-mean additive white noise and $N$ is the number of motor neuron firings. Due to the simultaneous firing of numerous MUAPs, an EMG signal can be seen as a highly variable non-stationary time series, whose output depends on factors such as muscle state, fatigue and extent of contraction [55]. Depending on the level of muscular contraction, the distribution of an EMG signal has been characterised as a zero-mean Gaussian process at high contraction levels and a zero-mean Laplacian distribution during moments of fatigue [56]. The commercially available Myo armband by Thalmic Labs, with dry electrodes, an ARM Cortex-M4 120 MHz processor, eight EMG electrodes, a fixed sampling rate of 200 Hz at 8-bit precision and wireless transmission using a low-energy Bluetooth protocol synchronised with the MyoConnect application on the receiving computer, was used to acquire EMG signals during this work. The streamed data was acquired and stored as a CSV file using the C++ open-source code available from the myoblog website [57]. It is also worth mentioning that, in addition to the raw EMG data, accompanying spatial data from the accelerometers and inertial sensors was also saved, although unused [52, 58]. Fig. 5 shows a sample EMG signal acquired from the wearable armband. From the figure it can be seen that some electrodes acquire relatively clear signals depending on their locations; an example is electrode 1, whose signal comprises peaks corresponding to muscle contractions, while other electrodes (mainly 4 and 5) produce a more convoluted and noisier signal.
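The discrete EMG model described above can be illustrated with a short simulation; the MUAP shape, firing times and noise level below are hypothetical choices for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 200                      # Hz, matching the armband's sampling rate
n = 2 * fs                    # 2 s window of samples

# Hypothetical MUAP impulse response h_r: a short damped oscillation
r = np.arange(20)
h = np.exp(-r / 5.0) * np.sin(2 * np.pi * r / 10.0)

# Firing train e_n: impulses at randomly chosen motor neuron firing times
e = np.zeros(n)
e[rng.choice(n, size=30, replace=False)] = rng.uniform(0.5, 1.5, 30)

# x_n = sum_r h_r * e_{n-r} + w_n, with zero-mean additive white noise w_n
w = rng.normal(0.0, 0.05, n)
x = np.convolve(e, h)[:n] + w
```

Superimposing many such trains, one per active motor unit, reproduces the highly variable, non-stationary character of the acquired signal noted above.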

Near-infrared:
NIR in the wavelength region of 650-1000 nm is typically used for applications involving biological tissue. The theoretical principle behind NIR sensing of haemodynamics is the modified Beer-Lambert law (MBLL), which quantifies the variations in the optical characteristics of the tissue and is based on continuous-wave diffusion of the optical intensity measurements [59]. Concisely put, the MBLL relates variations in the intensity of transmitted light to a resulting change in the absorption of the tissue [60]. The equation for the MBLL is given as

$\mathrm{OD} = \log_{10}\left(\frac{I_o}{I}\right) = \mu_a \, d \, \mathrm{DPF} + G$

where OD represents the optical density, $I$ is the intensity of the detected light, $I_o$ is the incident light intensity, $d$ is the distance between the NIR transmitter and receiver, $\mu_a$ is the coefficient of optical absorption, $G$ accounts for the measurement geometry and DPF stands for the differential pathlength factor, which accounts for the mean pathlength of the detected photons.
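A minimal numerical sketch of the MBLL follows; the intensity values and the differential pathlength factor are illustrative assumptions, not measurements from the study. Taking the difference between two optical-density readings cancels the geometry term $G$, leaving a change in absorption coefficient:

```python
import numpy as np

def delta_mu_a(i_detected, i_incident, d_cm, dpf):
    """Change in the absorption coefficient via the modified Beer-Lambert law.

    OD = log10(I_o / I) = mu_a * d * DPF + G; differencing two readings
    removes the geometry term G, so delta_mu_a = delta_OD / (d * DPF).
    """
    delta_od = np.log10(i_incident / i_detected)
    return delta_od / (d_cm * dpf)

# Illustrative numbers only (not from the study): the armband's 1.5 cm
# source-detector spacing and a nominal differential pathlength factor of 4
print(delta_mu_a(i_detected=0.8, i_incident=1.0, d_cm=1.5, dpf=4.0))
```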
As NIR is a light-based measurement technique, a key limitation in anatomical sensing is the influence of adipose tissue thickness on muscular measurements [12]. The NIR band used in this study builds on the design by McIntosh et al., comprising an armband cut from Hobarts laser rubber containing 14 segments of paired infrared transmitters and receivers spaced at 1.5 cm [52]. The transmitting LEDs were the Osram Opto SFH 4556P with an IR sensing band of 860 nm, while the receiving photodiodes were the Osram Opto BPW34 FS IR. An annotated image of the infrared armband can be seen in Fig. 6. Both the signal acquisition board and the Teensy 3.6 microcontroller were housed in a 3D-printed casing made from polylactic acid, a biodegradable thermoplastic [61].
With an input power supply of 24 V, power is sent to the mini circuit boards which house the infrared transmitters and receivers. The generated signal is amplified using a general-purpose op-amp AD8615AUJZ, regulated with a 100 kΩ resistor and a 10 nF capacitor connected in series, and further smoothed with a 10 nF capacitor prior to being sent to the Teensy 3.6 microcontroller. The Teensy microcontroller possesses a high-resolution 32-bit analogue-to-digital converter with a reference voltage of 3.3 V, and drives the circuitry controlling the pulsing of the LEDs at a rate of 10 samples/s. The LED driving section of the circuit consists of five resistors (15 Ω, 1 kΩ, 4.7 kΩ and two of 10 kΩ), a BC337 switching transistor, three 10 µF capacitors, a BZY88C Zener diode, a 47 µF capacitor and an IRLB3034PbF MOSFET.
The armband containing the LEDs was connected to the signal acquisition and conditioning board using a Rapid Electronics 26-way connector socket. The Teensy microcontroller was programmed using the Arduino 1.8.9 software with the board type specified as Teensy. In the firmware, the microcontroller ports were first defined and the data smoothed with a signal-averaging process, after which it was communicated serially at a baud rate of 9600 bits/s. The serially streamed data was acquired by a Python script which stored a defined number of data samples as comma-separated values.
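A host-side acquisition loop of this kind might look as follows. This is a sketch, not the authors' script: the serial port name, the assumption of one comma-separated line per sample and the 14-channel line format are all hypothetical, and pyserial is used for the serial link:

```python
import csv

def parse_line(raw: bytes, n_channels: int = 14):
    """Decode one serially streamed line of comma-separated NIR readings;
    returns None for malformed or partial lines so they can be dropped."""
    fields = raw.decode("ascii", errors="ignore").strip().split(",")
    if len(fields) != n_channels:
        return None
    try:
        return [float(f) for f in fields]
    except ValueError:
        return None

def acquire(port="/dev/ttyACM0", n_samples=1000, out_path="nir_data.csv"):
    """Read the 9600 bit/s stream from the microcontroller and store as CSV."""
    import serial  # pyserial; imported lazily so parse_line is testable offline
    with serial.Serial(port, 9600, timeout=1) as ser, \
            open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        written = 0
        while written < n_samples:
            sample = parse_line(ser.readline())
            if sample is not None:
                writer.writerow(sample)
                written += 1
```

Dropping malformed lines matters in practice, since opening a serial port mid-stream typically yields a partial first line.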

Experiments and participant information
A total of 12 participants (10 males and 2 females) with an age range of 22-50 years took part in the study. They were recruited using flyers, and ethical approval for the study was granted by The University of Bristol Ethics Committee (project ID 80143).
The EMG and NIR bands were worn simultaneously and in parallel during all data acquisition sessions. The anatomical muscles in the region where the sensors were worn include the brachialis and the biceps brachii in the anterior part of the arm, and the triceps brachii in the posterior. As a first step in this research, the gestures investigated involved the arm remaining static with the elbow placed on a flat surface while the relevant hand gestures were made, as can be seen in Fig. 7.
This part of the study took an average of 30 min per participant and commenced immediately after informed consent was obtained. The first 5 min comprised an introduction of the gesture sets to the participant followed by a brief practice exercise. A total of eight acquisition runs were conducted with each participant, corresponding to the eight selected gestures, with ten repetitions per run and with breaks factored in between acquisitions as necessary. Each data acquisition run was later segmented into ten data chunks during post-processing for both the EMG and NIR sensors.

Signal processing
Key works in the area of combined EMG and NIR sensing have been carried out by Guo et al. [31, 33], Hermann et al. [44, 45] and Attenberger and Wojciechowski [32]. Amongst these, Guo et al. validated their prosthesis control architecture on a combined cohort of 16 participants, three of whom were amputees [33]. For the feature extraction and classification section of their control architecture, the authors extracted the Englehart time-domain features, comprising mean absolute value (MAV), root mean square (RMS), zero crossing (ZC), slope sign change (SSC) and waveform length (WL), from the EMG signal, while three features, namely MAV, WL and variance, were extracted from the NIR signal [33]. For the classification of the various hand gestures, a comparison was carried out between linear discriminant analysis (LDA) and support vector machines (SVM), which showed similar classification accuracies for the respective classifiers, averaging around 95% for the able-bodied individuals and 85% for the amputees [33, 62-65].
In the area of transhumeral prosthesis research, Pulliam et al. [66] worked with five able-bodied participants, a set of intramuscular electrodes, Englehart time-domain features and a neural network, and achieved a classification accuracy, expressed in R², of 0.81. Jarrasse et al. [67] used 12 pairs of wet surface electrodes placed around the residual amputated stump, deltoid and pectoral muscles, extracted the RMS, the first four autoregressive (AR) coefficients, WL and sample entropy (SampEn), and achieved an offline classification accuracy in the range of 78-94.8%. Acquiring data with kinematic sensors and a wearable sEMG sensor consisting of six self-adhesive bipolar electrodes (with isopropyl alcohol skin preparation), and acknowledging the high variability and stochasticity of signals acquired along the humerus, Gaudet et al. [53] extracted both the Englehart time-domain features (MAV, WL, SSC, ZC) and the RMS, mean absolute value slope, Willison amplitude (WAMP), SampEn, AR coefficients and fourth-order cepstral coefficients (CC). The authors used a multi-layer perceptron neural network (MLPNN), and for five amputees obtained classification accuracies in the range of 60-93% [53].
The following features were extracted from the resulting EMG and NIR signals.

EMG features
• MAV: the MAV is denoted in (3), where $N$ is the number of samples in the signal segment and $x_n$ is the $n$th sample of the EMG signal:

$\mathrm{MAV} = \frac{1}{N} \sum_{n=1}^{N} |x_n|$ (3)

• RMS: this gives an indication of the overall power present in a signal. The RMS equation is given as

$\mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} x_n^2}$

• ZC: the ZC quantifies the number of times the signal crosses the y-axis; it is viewed as a feature robust to signal noise, as it only counts crossings above a predefined threshold, and can be expressed as [68]

$\mathrm{ZC} = \sum_{i=1}^{N-1} \mathrm{sgn}(-x_i x_{i+1})$

where $i$ represents the time segment and the sgn function produces 1 if the defined threshold is crossed and 0 otherwise. A threshold of 0.1 mV was adopted for the ZC calculations.
• AR coefficients: an AR model is one where coefficients are linearly combined, regressed and formulated as estimations from previous samples [68]. The estimated coefficients of a selected AR model have been seen to vary with the state of muscle contraction. A fourth-order AR model was computed and used in this study, as this has previously been determined to be sufficient for EMG signals [69]. An equation for the AR model is given as

$\bar{x}(k) = \sum_{j=1}^{N_p} \phi_j x(k-j) + \varepsilon_k$

where $\bar{x}(k)$ denotes the estimated signal, $\phi_j$ represents the AR coefficients, $N_p$ is the order of the AR model and $\varepsilon_k$ is a white noise sequence.
• SampEn: a quantitative measure which gives an indication of the level of regularity and complexity within a signal. An equation for the SampEn is given as [70]

$\mathrm{SampEn}(m, r, N) = -\log\frac{A}{B}$ (9)

where $m$ is the embedding dimension, $r$ is the tolerance and $N$ is the number of data points; $A$ and $B$ are the counts of template matches of length $m+1$ and $m$, respectively. The values for $m$ and $r$ were chosen to be 2 and 0.2, respectively.
• Cepstrum: mainly used in speech processing, cepstral analysis decomposes the signal into two components by applying linear filtration methods, and has been seen to be a useful feature for motion identification [71, 72]. The cepstral coefficients can be obtained recursively from the AR coefficients as

$c_i = -a_i - \sum_{n=1}^{i-1} \left(1 - \frac{n}{i}\right) a_n c_{i-n}$

where $n$ indexes the lag elements, $c_i$ is the $i$th cepstral coefficient and $a_i$ is the $i$th AR coefficient.
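The EMG features above can be sketched in a few lines of numpy. This is illustrative only: the least-squares AR estimator and the naive O(N²) SampEn implementation are reasonable choices, not necessarily those used in the study:

```python
import numpy as np

def mav(x):
    """Mean absolute value: (1/N) * sum(|x_n|)."""
    return np.mean(np.abs(x))

def rms(x):
    """Root mean square, an indicator of overall signal power."""
    return np.sqrt(np.mean(np.square(x)))

def zero_crossings(x, thresh=0.1):
    """Count sign changes whose amplitude step exceeds the threshold."""
    return int(np.sum((x[:-1] * x[1:] < 0)
                      & (np.abs(x[:-1] - x[1:]) >= thresh)))

def ar_coefficients(x, order=4):
    """Fourth-order autoregressive coefficients via least squares."""
    X = np.column_stack([x[order - j: len(x) - j] for j in range(1, order + 1)])
    phi, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return phi

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -log(A/B), a naive O(N^2) sketch; r scales with std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def match_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        total = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            total += int(np.sum(dist <= tol))
        return total

    return -np.log(match_count(m + 1) / match_count(m))
```

The cepstral coefficients would then follow from `ar_coefficients` via the recursion shown above.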

NIR features
• MAV: the NIR MAV gives an indication of the amount of haemodynamic diffusion present within an NIR time series signal. An equation for this is given as [33]

$\mathrm{MAV} = \frac{1}{N} \sum_{n=1}^{N} |x_n|$ (11)

where $N$ is the number of samples in the signal segment and $x_n$ is the $n$th sample of the NIR signal.
• Variance: the NIR variance gives an indication of how much dispersion is present within an NIR time series signal. An equation for this is given as [33]

$\mathrm{VAR} = \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \mu)^2$

where $\mu$ is the sample mean.
• In order to ensure dimensional uniformity of the combined feature vector of the EMG and NIR features, a selection of eight segments of the NIR armband with the highest haemodynamic fluctuation was utilised during the feature extraction stage.
For the ZC, SSC and WAMP features, a threshold of 1 μV was used, as in [53]. For the sensor fusion, with both sets of sensors giving a total of 22 receiving segments, 11 features, 8 gestures and 10 repetitions each, the data amounted to 19,360 data points per participant and a total of 232,320 across all 12 participants.
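The stated data volume can be sanity-checked with a line of arithmetic:

```python
# Fused EMG-NIR dataset size: 22 receiving segments, 11 features per
# segment, 8 gestures and 10 repetitions, for 12 participants
segments, features, gestures, reps, participants = 22, 11, 8, 10, 12

per_participant = segments * features * gestures * reps
total = per_participant * participants
print(per_participant, total)  # matches the 19,360 and 232,320 quoted above
```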

Gesture classification methods
The following is a case study of the classification performance of the MLPNN and of the linear and quadratic variants of discriminant analysis. These classifiers represent the more frequently used methods for gesture recognition in prosthesis control and allow for a comparison between a black-box classification method (the MLPNN) and a parametric method (discriminant analysis) which has a high degree of transparency in its decision-making process [10, 66]. It is worth mentioning that discriminant analysis was chosen ahead of the SVM in this work: although their classification capabilities in gesture recognition are similar, discriminant analysis has been seen to be more computationally efficient, which is an important factor in this area of research [33].
• MLPNN: a multi-layer perceptron is a type of feed-forward neural network typically comprising an input layer, where features are fed into a stacked layer of perceptrons whose function can be seen in (13), a hidden layer with a chosen activation function, and an output layer which outputs the predicted class label [73]

$a_n = s\left(\sum_{i=1}^{m} w_i x_i + b\right)$ (13)

where $a_n$ denotes a node in a hidden layer, $s$ is the activation function, $m$ is the number of inputs, $w_i$ and $x_i$ are the corresponding weights and inputs, and $b$ is a bias term. It is a supervised learning method capable of performing data classification with non-linear classification boundaries [74]. The architecture of the chosen neural network used for gesture classification can be seen in Fig. 8.
The implemented MLPNN was fed 2-11 features depending on the sensing input. A sigmoid activation function, a non-linear activation function given as

$s(x) = \frac{1}{1 + e^{-x}}$

was used in the hidden layer. The output layer comprised the softmax function, which normalises the output of the hidden layer into a readable output, in this case corresponding to a gesture label [73, 74].
The network was trained using the iterative back propagation algorithm which used the chain rule to adjust the weights of the network while taking into account the performance of the network through the loss function [73].
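The training pipeline described here can be sketched with scikit-learn. This is illustrative, not the authors' implementation: the random placeholder data stands in for the extracted feature vectors, and the hidden layer width of 20 is an assumed value (the actual architecture is defined in Fig. 8):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder data standing in for the extracted feature vectors:
# 11 features per observation, 8 gesture classes
X = rng.normal(size=(640, 11))
y = rng.integers(0, 8, size=640)

# Logistic (sigmoid) hidden activation as in the study; MLPClassifier
# applies a softmax output for multi-class problems automatically
clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=500, random_state=0)

# 70/15/15 split: hold out 30%, then halve it into validation and test sets
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_hold, y_hold, test_size=0.5,
                                            random_state=0)

clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

The overfitting-interrupt role of the validation set described above would map to `early_stopping=True` with `validation_fraction` in scikit-learn's implementation.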
The dataset was split using a random division method into 70% for training, 15% for validation and 15% for testing, where the purpose of the validation set is to act as an interrupt which prevents the network from overfitting. The final metric used to characterise the recognition capability of the neural network is the classification accuracy, obtained from the test dataset only and computed with a 10-fold cross-validation method to produce a recognition accuracy for each participant; the final classification accuracy is the average across the 12 participants.
• Discriminant analysis: the LDA assumes that the data has a normal (Gaussian) distribution and classifies data into various classes separated by a linear boundary using the Bayesian probabilistic method. The linear discriminant function and the probabilistic decision rule are given as [75]

$\delta_c(x) = x^{\mathrm{T}} \Sigma^{-1} \mu_c - \frac{1}{2} \mu_c^{\mathrm{T}} \Sigma^{-1} \mu_c + \ln \pi_c, \quad \pi_c = \frac{n_c}{N}$ (15)

where $\delta_c$ represents the discriminant function for a feature vector $x$, $\mu_c$ is the mean vector of the training samples for class $c$, $\Sigma$ is the pooled covariance matrix formed from the class covariance matrices $\Sigma_c$, $C$ is the number of motion classes, $N$ is the total number of training samples and $n_c$ is the number of training samples for class $c$.
The quadratic discriminant analysis (QDA) does not assume the covariance matrix to be the same for each class; as a result, a unique covariance matrix is estimated for every class. Its discriminant function is given as [76]

$\delta_k(x) = -\frac{1}{2} \ln |\Sigma_k| - \frac{1}{2} (x - \mu_k)^{\mathrm{T}} \Sigma_k^{-1} (x - \mu_k) + \ln \pi_k$

where $\delta_k$ is the discriminant function for a feature vector $x$, $\Sigma_k$ is the covariance matrix of class $k$, $\mu_k$ is the mean vector of the training samples for class $k$ and $\pi_k$ is the prior probability of class $k$.
To produce the final classification accuracy for both the LDA and QDA, a ten-fold cross-validation method was used.
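A minimal sketch of the LDA/QDA comparison with ten-fold cross-validation, again with random placeholder data standing in for the extracted features:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder features and gesture labels (11 features, 8 classes)
X = rng.normal(size=(640, 11))
y = rng.integers(0, 8, size=640)

results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    scores = cross_val_score(clf, X, y, cv=10)  # ten-fold cross-validation
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

With features this weakly structured the accuracies hover near chance (1/8); on real EMG-NIR features the two classifiers separate as reported in the results below.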

Results
The results obtained from the MLPNN show an average classification accuracy of 83% for EMG only, 20% for NIR only and a marginal increase to 84% for the fusion of EMG and NIR. The QDA results were 81, 19 and 82%, and the LDA results were 79, 17 and 80%, for EMG only, NIR only and the fusion of EMG and NIR, respectively. The average EMG results were in the range of 79-83% across all three classifiers for all 12 participants, with the MLPNN achieving the highest average classification accuracy. Being a non-linear function approximator, the MLPNN is the most capable of classifying data which requires a non-linear separation boundary. The LDA, which fits solely linear decision boundaries to separate the classes, recorded the lowest average classification accuracy amongst the techniques considered, at 79%, while the QDA, which fits quadratic non-linear boundaries, recorded a slightly better average accuracy of 81%. The NIR results showed the lowest classification accuracy, in the range of 17-20%; this was attributed to the thickness of the adipose tissue within the anatomy surrounding the humerus [77]. This is seen to inhibit the transmission and penetrative depth of the infrared light: the source-detector spacing of 1.5 cm in the NIR armband, which built on the wrist-based gesture recognition work by McIntosh et al. [52], would have afforded a penetrative depth of about 0.75 cm. Although this was a limiting factor in this study, it is possible that an optimisation exercise over the transmitter-receiver spacing to find an optimal penetrative depth could enhance the sensitivity of the NIR sensing and thereby the overall classification accuracy. Further work on the analysis of the NIR signal could also investigate the extraction of additional key signal features to enhance its gesture recognition accuracy.
Despite the limited NIR performance, the combination of the information from both sensors gave the best classification accuracy compared with either single modality. The fusion of the sensing modalities is believed to result in a greater separation of the various gesture classes, making the data points easier to classify and reducing the likelihood of misclassification [10]. A PCA visualisation of five selected gestures can be seen in Fig. 9, and a bar chart displaying the results across all participants can be seen in Fig. 10. The results suggest that if an MLPNN is used, the fusion of data from both EMG and NIR may only yield a small improvement at the cost of an additional sensing module and greater computational demands, since the MLPNN alone appears capable of fitting highly non-linear classification boundaries. A shortcoming of the MLPNN, however, is its extended computation time and its demand for a relatively large volume of training data.
Gestures 1-4 (EF, EE, FP, FS) produced the highest gesture recognition accuracy, likely owing to the bulk muscular movements associated with them, whereas gestures 5-8 (WF, WE, HO, HC), although involving compound muscular contractions, were relatively finer motions in comparison and thus produced less distinct signals.

Conclusion
The primary findings of this paper have shown that, using cheap and affordable wearable sensors, it is possible to recognise eight discrete hand gestures with a classification accuracy in the range of 79-81% (classifier dependent) from signals acquired along the humerus. This was achieved with the Myo armband, which possesses slightly lower specifications than the wearable sensor of Gaudet et al., which had a 16-bit resolution and a sampling rate of 1000 Hz [53]. An enhanced feature vector was constructed from the acquired signal, comprising time-domain, frequency-domain and entropy-based features. The inclusion of data from the NIR band saw an average enhancement of 1-2% in the classification accuracy of the gestures. This could be further improved by the design of an NIR band which allows for greater penetrative depth through the layer of thick adipose tissue present in the upper arm along the humerus. Attaining this could allow for further development of EMG-NIR multi-modal sensing, which would enable a control scheme based solely on data from affordable, ergonomic and wearable sensors, in turn helping to reduce the cost of the prosthesis and improve its human factors.
Two further areas need to be explored as part of this research:
• Further experimental work is required to observe the performance of this approach on transhumeral amputees. The literature suggests that an estimated 10% reduction in classification accuracy is expected with amputated individuals due to the residual nature of the anatomy, which makes for a reduced-quality signal compared with non-amputees. This would also allow for further investigation of the overall consistency of the intent signals produced by phantom limb motions. From here it can be evaluated whether further fine-tuning of the signal processing approach is required, i.e. further signal features, an activation threshold or a different classifier.
• Based on the results of validation on amputated individuals, real-time characterisation of the overall performance can be investigated using the four key algorithm performance metrics reported by Guo et al.: selection time, completion time, completion rate and real-time accuracy [10]. First steps in this area would likely commence with the virtual prosthesis simulator system used by Guo et al., followed by embedded electronic hardware and a benchtop artificial limb using a low-power mixed-signal processing unit, such as the ADS1299 analogue front end by Texas Instruments, responsible for intent decoding and classification and allowing for online visualisation of bioelectric performance and variable adjustment from a remote station [10].

Acknowledgment
The authors thank the following people for their contributions: