Joint angle measurement of manipulator and error compensation based on an IMU sensor

To detect the joint angle of a manipulator accurately, a measuring method based on an IMU sensor is proposed. The sensor's attitude angles and the corresponding rotation matrix are obtained from the data of a three-axis gyroscope and a three-axis accelerometer. During installation, the sensor's Z-axis is kept aligned with the motor's rotation axis, so that the angle of the sensor rotating around its Z-axis, which is the rotation angle of the motor, can be calculated by comparing the rotation relationship between the sensor's initial position and its position after the motor rotates. The largest source of error derives from the misalignment between the sensor's Z-axis and the motor's rotation axis. Consequently, an installation error compensation method is designed that corrects the sensor's Z-axis to the motor's rotation axis by rotating the related motor. Experiments show that the method measures the joint angle of the manipulator accurately and calibrates the installation error effectively. The measurement errors are confined to less than 0.25°.


Introduction
Improving the positioning accuracy of serial manipulators has always been a key issue in robotics [1]. According to [2], the angle deviations of the joints are the main source of terminal positioning errors. Accurately measuring the joint angle therefore has important research significance. Various sensors convert the physical signal of the joint angle into an electrical signal, from which the measurement system obtains accurate joint angle information through a series of calculations. These measurement systems differ from each other in working principle, complexity, precision, and cost [3]. The three most commonly used kinds of joint angle measurement sensor are the magnetic rotary encoder, the optical rotation sensor, and the MEMS IMU sensor [4].
The magnetic rotary encoder [5] is usually composed of multipole magnets and Hall sensors distributed on both sides of the joint. It is low cost, reliable, and widely used, but requires specific magnetic coupling calibration and electromagnetic shielding. Similar to the magnetic encoder, the optical encoder [6] consists of a disc with windows cut equidistantly around its circumference, a light source, and a photosensitive sensor. It has high measurement accuracy, but is very expensive and greatly influenced by the environment (shock, vibration, etc.). Both types of sensor usually need to be installed at the rotating shaft of the mechanical joint (commonly the centre of the joint), which adds extra difficulty to the design and installation of the manipulator. In particular, when the size or structure of the manipulator leaves no room for an encoder, the joint angle cannot be measured at all.
With the development of MEMS inertial sensors [7], measuring the spatial attitude of adjacent connecting rods with a gyroscope and accelerometer to calculate the joint angle has become a low-cost and effective method. Vikas V and Crane C D [8] proposed a measurement system based on an accelerometer and gyroscope array that could measure the joint angle, but it required measuring the length of the manipulator and was therefore not universal. Philipp M [9] used an inertial measurement unit (IMU) to measure the elbow angle of the human upper limb and showed a clear improvement in accuracy compared with visual measurement. Meng D et al. [10] considered the installation error of the IMU sensor on the human body and adopted linear regression and quaternion theory to virtualise the rotation reference axis to reduce the measurement error.
Usually, an IMU sensor, consisting of multi-axis accelerometers and gyroscopes, calculates the attitude of the sensor by integrating the acceleration and angular velocity information of all axes [11]. To compensate for the drift error of the gyroscopes, some commercial products add magnetometers to improve the accuracy of the Euler angles using geomagnetic information [12]. However, the magnetometer is greatly influenced by surrounding magnetic field noise, especially when it works on a robot device consisting of motors, metal connecting rods, and a shell, and the error caused by the change of magnetic field may be much larger than the drift error of the gyroscopes. Apart from the inherent estimation error of the fusion algorithm, the biggest error source of the IMU sensor is that the sensor coordinate frame is not consistent with that of the object to be measured [13].
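The gyroscope/accelerometer fusion described above can be illustrated with a minimal complementary filter sketch. The gain `alpha`, the sample rate, and the constant gyro bias below are illustrative assumptions, not the algorithm of any particular commercial sensor:

```python
# Minimal complementary-filter sketch: blend the integrated gyro rate
# (accurate short-term, but drifts) with the accelerometer tilt angle
# (noisy short-term, but drift-free). All parameters are assumed values.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
dt = 0.01                       # 100 Hz sample rate (assumed)
for _ in range(500):            # 5 s of samples: the gyro reads a constant
    angle = complementary_filter(angle, 0.5, 0.0, dt)  # 0.5 deg/s bias,
                                # while the accelerometer reports 0 deg
# Pure gyro integration would drift to 2.5 deg; the accelerometer term
# bounds the estimate near 0.98 * 0.005 / 0.02 = 0.245 deg instead.
```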
To this end, this paper proposes a method that can measure the manipulator joint angle and compensate for the installation error based on an IMU sensor. First, the direction of the motor rotation axis is computed from the motor rotation. Then, the sensor Z-axis is rotated to the same direction as this axis using a rotation matrix to compensate for the installation error. Finally, the joint angle is obtained from the rotation relationship between the rotated sensor coordinate frame and that of the initial position.

Manipulator
A typical SRS structure of a 7-DOF humanoid manipulator is considered. As shown in Fig. 1, it consists of a base, an upper arm, a lower arm, and a wrist, with seven rotary joints. The first, third, fifth, and seventh joints (from bottom to top) have the same structure and rotate around the vertical direction, while the second, fourth, and sixth joints share another common structure and rotate around the horizontal direction. The first three joints constitute the shoulder joint, and the last four constitute the elbow and wrist joints. When the manipulator is assembled, an incremental encoder is installed in each driving motor to provide feedback control and the relative rotation angle of each joint.

Measuring method
Before solving the attitude, a reference frame needs to be established in the first place. The usual reference coordinate frames include the inertial coordinate frame, carrier coordinate frame, navigation coordinate frame, and world coordinate frame [14]. This paper uses the carrier coordinate frame and the world coordinate frame. As shown in Fig. 2, the carrier coordinate frame is defined as {b}, fixed to the carrier. Its origin O_b is located at the centre of the sensor, the X-axis is along the longitudinal axis of the carrier, and the Y-axis is along the transverse direction of the carrier. The Z-axis follows the right-hand rule, and the unit vectors of the three coordinate axes are $\vec{x}$, $\vec{y}$, and $\vec{z}$, respectively. The world coordinate frame {w} can be defined arbitrarily. Without loss of generality, it is defined to be right-handed with the Z-axis down, namely a North-East-Down coordinate frame. Its unit vectors are $\vec{i}$, $\vec{j}$, and $\vec{k}$, respectively. The transformation relation between the carrier coordinate frame {b} and the world coordinate frame {w} is expressed by the rotation matrix M output by the sensor.
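The relation between {b} and {w} through the rotation matrix M can be sketched as follows. The 30° example rotation and the convention v_b = M·v_w are assumptions made for illustration:

```python
import numpy as np

# Illustrative rotation matrix: the carrier frame {b} rotated 30 deg about
# the world Z-axis (North-East-Down convention, assumed for this example).
theta = np.radians(30.0)
M = np.array([
    [np.cos(theta),  np.sin(theta), 0.0],
    [-np.sin(theta), np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])  # maps world-frame vectors into the carrier frame: v_b = M @ v_w

# A rotation matrix is orthogonal, so its inverse is its transpose.
assert np.allclose(M @ M.T, np.eye(3))

v_w = np.array([1.0, 0.0, 0.0])    # unit North vector in {w}
v_b = M @ v_w                      # the same vector expressed in {b}
v_back = M.T @ v_b                 # transform back with the transpose
assert np.allclose(v_back, v_w)
```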

Compensation of installation error:
After the sensor is installed, as shown in Fig. 3a, the J-axis is the rotation axis of the robot arm, and the J′-axis is the vector parallel to the J-axis that passes through the sensor coordinate origin. There will be a certain deviation between the Z-axis of the sensor and the rotating shaft J of the motor. This deviation needs to be compensated so that the Z-axis is adjusted to the same direction as the J-axis. As this paper only discusses the rotation relationship between the coordinate frames, as shown in Fig. 3b, the origin O_n of the world coordinate frame {w} and the origin O_b of the sensor coordinate frame can be placed at the same point O, which acts as the centre point of the manipulator, with the J-axis as the rotating shaft of the motor. After the sensor is fixed on the manipulator, the relative relationship between its Z-axis and the rotating shaft remains unchanged. At the initial position, a point A is taken at unit distance from the origin on the Z-axis of the sensor, so the coordinates of A in the sensor coordinate frame are (0, 0, 1). According to the transformation relation between {w} and {b}, the coordinates of the point A in {w} can be obtained, since the rotation matrix is orthogonal (its inverse equals its transpose).
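The transformation of A into {w} can be sketched in a few lines; the convention v_b = M·v_w (so that v_w = Mᵀ·v_b) and the example matrix are assumptions for illustration:

```python
import numpy as np

# Sketch, assuming the sensor outputs M with v_b = M @ v_w, so v_w = M.T @ v_b.
# Point A = (0, 0, 1) lies on the sensor's Z-axis, expressed in {b}.
def point_a_in_world(M):
    A_b = np.array([0.0, 0.0, 1.0])
    return M.T @ A_b  # equals the third row of M, by orthogonality

# Example with a 90-deg rotation about the world X-axis (assumed pose):
M = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, -1.0, 0.0],
])
A_w = point_a_in_world(M)  # the third row of M
```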
Rotating the manipulator around the rotation axis J produces n points $A_i$ (i = 1, 2, …, n), all lying on a plane S perpendicular to J. As shown in Fig. 4, the coordinates of $A_i$ in {w} are

$$A_i = (m_{i6},\ m_{i7},\ m_{i8})$$

where $m_{i6}$ is the seventh element (in row-major order) of the rotation matrix $M_i$ relative to the frame {w} at the ith position, i.e. $A_i$ is the third row of $M_i$. The plane equation of S can be fitted from the three-dimensional coordinates of the n coplanar points by the least-squares method. The plane equation can be set as

$$Ax + By + Cz + D = 0, \qquad A^2 + B^2 + C^2 = 1$$

From the definition of the plane equation, the normal vector of the plane S is

$$\vec{n} = (A, B, C)$$

With the coordinates of the n points written as $(x_i, y_i, z_i)$, the distance of any point to the plane is

$$d_i = Ax_i + By_i + Cz_i + D$$

To obtain the best-fitting plane, $\sum_{i=1}^{n} d_i^2$ needs to be minimal. The problem is thus converted to the solution of an extremum of

$$f = \sum_{i=1}^{n} d_i^2 - \lambda (A^2 + B^2 + C^2 - 1)$$

where λ is a constant (Lagrange multiplier). Finally, A, B, and C are obtained from the partial derivatives of f. The direction of the Z-axis of the sensor in the world coordinate frame {w} is

$$\vec{Z}_b = (m_6,\ m_7,\ m_8)$$

Assume that the angle between $\vec{n}$ and $\vec{Z}_b$ is Δ. By the definition of the vector product, the vector $\vec{Z}_b$ can be adjusted into the same direction as $\vec{n}$ by rotating $\vec{Z}_b$ around $\vec{w}$ through the angle Δ, where

$$\cos \Delta = \frac{\vec{Z}_b \cdot \vec{n}}{|\vec{Z}_b|\,|\vec{n}|}, \qquad \vec{w} = \vec{Z}_b \times \vec{n}$$

The unit vector of $\vec{w}$ is $\vec{w}_0$. According to the matrix form of Rodrigues' rotation formula [15], the above rotation can be implemented with a 3 × 3 rotation matrix R, which makes the vector $\vec{Z}_b$ follow the direction of the vector $\vec{n}$ in the world coordinate frame {w}.
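The plane fit and the axis/angle computation above can be sketched numerically. Minimising the sum of squared distances under the unit-normal constraint is equivalent to taking the singular vector of the centred points with the smallest singular value; the circle of points and the tilted sensor axis below are synthetic assumptions:

```python
import numpy as np

# Sketch of the calibration geometry: fit a plane to the traced points A_i,
# then find the axis w and angle Delta that rotate the sensor Z-axis onto
# the plane normal n. The point data here is synthetic.
def fit_plane_normal(points):
    # Lagrange formulation (min sum d_i^2 with |n| = 1) is equivalent to the
    # smallest-singular-value right singular vector of the centred points.
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]  # unit normal (A, B, C)

# Synthetic A_i: 12 points on a circle whose plane has normal (0, 0, 1).
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
points = np.stack([np.cos(t), np.sin(t), np.full_like(t, 0.3)], axis=1)
n = fit_plane_normal(points)
n = n if n[2] > 0 else -n              # fix the sign so n points along +Z

z_b = np.array([0.1, 0.0, 1.0])
z_b /= np.linalg.norm(z_b)             # sensor Z-axis in {w}, slightly tilted
delta = np.arccos(np.clip(z_b @ n, -1.0, 1.0))  # angle between Z_b and n
w = np.cross(z_b, n)                   # rotation axis (vector product)
w0 = w / np.linalg.norm(w)             # unit rotation axis
```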
$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$$

in which

$$\begin{aligned}
r_{11} &= \cos\Delta + w_{0x}^2\,(1-\cos\Delta) \\
r_{12} &= w_{0x} w_{0y}\,(1-\cos\Delta) - w_{0z}\sin\Delta \\
r_{13} &= w_{0y}\sin\Delta + w_{0x} w_{0z}\,(1-\cos\Delta) \\
r_{21} &= w_{0z}\sin\Delta + w_{0x} w_{0y}\,(1-\cos\Delta) \\
r_{22} &= \cos\Delta + w_{0y}^2\,(1-\cos\Delta) \\
r_{23} &= -w_{0x}\sin\Delta + w_{0y} w_{0z}\,(1-\cos\Delta) \\
r_{31} &= -w_{0y}\sin\Delta + w_{0x} w_{0z}\,(1-\cos\Delta) \\
r_{32} &= w_{0x}\sin\Delta + w_{0y} w_{0z}\,(1-\cos\Delta) \\
r_{33} &= \cos\Delta + w_{0z}^2\,(1-\cos\Delta)
\end{aligned}$$

With the rotation of the manipulator, the direction of the rotational shaft J in the world coordinate frame {w} stays unchanged, but the Z-axis of the sensor alters, so the corresponding rotation matrix R is different after every rotation. The sensor's Z-axis can be rotated to the same direction as the manipulator's rotating shaft J using the rotation matrix R, with the sensor's X-axis and Y-axis rotated synchronously. As shown in Fig. 5, after this correction the Z-axis and the motor rotating shaft J point in the same direction, and the XOY-plane is perpendicular to the J-axis. Therefore, the manipulator joint angle equals the rotation angle of the sensor around its Z-axis relative to the initial position. As shown in Fig. 6, the coordinate frame of the sensor in the initial state is O_0-X_0Y_0Z_0, with rotation matrix M_0 relative to {w}; the coordinate frame of the sensor after the manipulator rotation is O_1-X_1Y_1Z_1, with rotation matrix M_1 relative to {w}.
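The correction matrix above can be sketched directly from the matrix form of Rodrigues' formula, R = I·cosΔ + sinΔ·[w₀]ₓ + (1−cosΔ)·w₀w₀ᵀ, whose expansion reproduces the elements r₁₁…r₃₃; the tilted axis used for the check is an assumed example:

```python
import numpy as np

# Sketch of the Rodrigues rotation matrix used as the correction matrix R.
def rodrigues(w0, delta):
    wx, wy, wz = w0
    K = np.array([[0.0, -wz,  wy],
                  [ wz, 0.0, -wx],
                  [-wy,  wx, 0.0]])  # skew-symmetric cross-product matrix
    return (np.cos(delta) * np.eye(3)
            + np.sin(delta) * K
            + (1.0 - np.cos(delta)) * np.outer(w0, w0))

# Example: rotate a slightly tilted sensor Z-axis onto the vertical normal.
z_b = np.array([0.1, 0.0, 1.0])
z_b /= np.linalg.norm(z_b)
n = np.array([0.0, 0.0, 1.0])
delta = np.arccos(z_b @ n)
w0 = np.cross(z_b, n)
w0 /= np.linalg.norm(w0)
R = rodrigues(w0, delta)
# R @ z_b now points along n, as required by the calibration step.
```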

Solving the rotation angle of the sensor:
According to the steps of 1.2.1, the sensor Z-axis can be rotated to the same direction as the rotation axis of the motor using the corresponding correction matrix, with the X-axis and Y-axis rotated synchronously. Let M_2 be the rotation matrix of the calibrated sensor coordinate frame O_1-X_1Y_1Z_1 relative to the sensor's initial coordinate frame O_0-X_0Y_0Z_0. Then the Euler angles (ZYX order) can be obtained from M_2 and the relationship between a rotation matrix and the Euler angles:

$$\gamma = \arctan 2\left(\frac{-m_{21}}{\cos \beta},\ \frac{m_{20}}{\cos \beta}\right)$$

where $m_{20}$ and $m_{21}$ are the first and second elements (in row-major order) of M_2 and β is the pitch angle. As the sensor's Z-axis is in the same direction as the motor's rotating shaft, the rotation angle of the manipulator equals the rotation angle γ of the sensor around its Z-axis.
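The extraction of γ can be sketched numerically. The factorisation M₂ = Rx(α)·Ry(β)·Rz(γ) below is an assumption chosen so that the first row of M₂ matches the formula above (m₂₀ = cos β cos γ, m₂₁ = −cos β sin γ); the test angles are arbitrary:

```python
import numpy as np

# Sketch of recovering the joint angle gamma from the calibrated rotation
# matrix M2 (assumed factorisation M2 = Rx(a) @ Ry(b) @ Rz(g)).
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

alpha, beta_true, gamma_true = 0.2, 0.3, 0.7     # assumed test angles, rad
M2 = rot_x(alpha) @ rot_y(beta_true) @ rot_z(gamma_true)

beta = np.arcsin(M2[0, 2])                       # pitch from the third element
gamma = np.arctan2(-M2[0, 1] / np.cos(beta),     # gamma per the text's formula
                   M2[0, 0] / np.cos(beta))
```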

Error analysis:
As the manipulator used for measurement is fixed on a flat car, the sensor's output parameters may be affected by sliding of the car when the mechanical arm rotates, especially when it rotates rapidly and through large angles, producing an error e_1. As the sensor is installed in a hard plastic box attached to the surface of the manipulator, the sensor's output parameters are also influenced by slight shifts of the sensor's position as the manipulator rotates, producing an error e_2. Because e_1 and e_2 are hard to quantify and the motor does not rotate rapidly or through large angles here, for brevity e_1 and e_2 are ignored in this experiment.
The dominant error of the measurement is the error e_M of the rotation matrix output by the sensor. To calculate the orientation of the rotation shaft, the coordinates of the points $A_i$ are obtained from the sensor's rotation matrix, producing an error e_0 = f_0(e_M). The normal vector $\vec{n}$ consistent with the rotation axis is fitted from these points, so its error is in turn a function of e_0. The rotation matrix R is calculated from cos Δ, sin Δ, and $\vec{w}$: the error of Δ follows from the expression for cos Δ, the error of $\vec{w}$ from the vector product, and the error of R from the Rodrigues formula. Finally, the angle γ is obtained through the arctan function. Since y = arctan(x) is an increasing function, the accumulated error function f_6 is an increasing function of e_M. Thus, the error of the angle measured by this method increases with the error of the sensor's output parameters.

Experiment and data analysis
The experiment is based on the LPMS-B2 attitude sensor from ALUBI, which contains a built-in lithium battery and supports Bluetooth transmission for wireless measurement. It integrates a three-axis gyroscope (±125/±245/±500/±1000/±2000 dps) and a three-axis accelerometer (±2/±4/±8/±16 g), and uses a fusion algorithm combining complementary filtering and an extended Kalman filter to calculate an accurate attitude angle (resolution 0.01°). The sensor's processor can output the 3 × 3 rotation matrix M in real time. After the sensor is correctly installed, its output data are received in real time over Bluetooth by a custom MFC program, which performs the calculation and processing. First, the manipulator is rotated to three different arbitrary positions, and the parameters of the sensor are recorded at each. Then, the three-dimensional coordinates of the point A in {w} are calculated, and from these the direction of the motor rotation axis J in {w} is calculated as well.
The rotation matrix M_0 of the initial position relative to {w} is recorded through an initialisation procedure after rotating the manipulator to the initial position. The sensor's Z-axis is then rotated to the same direction as the motor's rotating shaft J by the calibration algorithm. The rotation angle of the manipulator is obtained by calculating the rotation angle of the calibrated sensor frame relative to the initial position. If the installation error is not compensated and the sensor is initialised directly at the initial position, the measured data will be the rotation angle including the installation error.
The experiment is divided into two comparative experiments. Experiment 1 compares the measurement errors under different sensor installations, and experiment 2 compares the measurement errors under different rotation axis directions. The output angle of the motor encoder is taken as the true value, the average of 15 output values after each rotation is taken as the measured value, and the measurement error is the measured value minus the true value. In each experiment the motor is rotated from −90° to 90° in steps of 2°, giving 91 samples in total, from which the average error and root mean square error are calculated.

Experiment 1: the direction of the motor's rotating shaft is kept unchanged (the angle between the rotation axis J and the horizontal direction is 60°) and the sensor's installation is altered. As shown in Fig. 7, the installation is changed by raising the left, upper, and right sides of the sensor. The measurement results of the four installation methods are shown in Fig. 8. Position 1 is the sensor installed without being raised: the average error ē1 is 0.05° and RMSE1 is 0.04°. Position 2 is the sensor with its left side lifted: the average error ē2 is 0.08° and RMSE2 is 0.05°. Position 3 is the sensor with its upper side lifted: the average error ē3 is 0.07° and RMSE3 is 0.04°. Position 4 is the sensor with its right side lifted: the average error ē4 is 0.08° and RMSE4 is 0.04°. From the curves, the mean errors of the four installations are very close, which shows that the measurement method is not affected by the way the sensor is installed.

Experiment 2: the sensor's installation is kept unchanged and the direction of the motor's rotating shaft is changed; the angle between the rotation axis J and the horizontal direction is α. As shown in Fig. 9, the direction of motor 4's rotation axis is changed by motor 2 of the manipulator. The rotation angles are measured for α = 0°, 20°, 40°, 60°, and 80°, and the experimental results are shown in Fig. 10. When α is 0°, the average error ē1 is 0.24° and RMSE1 is 0.10°; at 20°, ē2 is 0.23° and RMSE2 is 0.12°; at 40°, ē3 is 0.11° and RMSE3 is 0.10°; at 60°, ē4 is 0.03° and RMSE4 is 0.05°; at 80°, ē5 is 0.02° and RMSE5 is 0.05°. It can be concluded that the influence of the motor's rotation axis direction on the measured rotation angle is not remarkable. When the rotation axis is close to the horizontal direction and the rotation angle approaches −90°, the precision becomes lower. The main reason is the mathematical singularity in the definition of the Euler angles as the X-axis approaches the perpendicular [16], which reduces the precision and stability of the sensor's rotation matrix. When the rotation axis does not point in the horizontal direction, the rotation angle measured by this method is both accurate and robust.
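The per-experiment statistics can be reproduced with a short sketch. The synthetic noise model below is an assumption for illustration, not the paper's measured data:

```python
import numpy as np

# Sketch of the error statistics used in both experiments: the joint sweeps
# from -90 to 90 deg in 2-deg steps (91 samples), the encoder angle is the
# true value, and the measured values here are synthetic.
true = np.arange(-90.0, 91.0, 2.0)                    # 91 encoder angles, deg
rng = np.random.default_rng(0)
measured = true + rng.normal(0.05, 0.04, true.size)   # assumed sensor noise
errors = measured - true
avg_error = errors.mean()                             # average error
rmse = np.sqrt(np.mean(errors ** 2))                  # root mean square error
```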
To further verify the effectiveness of the method, it is compared with the method proposed in [17], which measures the rotation angle of a uniaxial rotation device based on an IMU. The measurement results using that method are shown in Fig. 11, where α is the angle between the rotation axis J and the horizontal direction. In [17], the direction of the rotation axis is obtained from the three-dimensional angular velocity, and the rotation angle is derived from the change of the RPY angles before and after the rotation. However, due to the low accuracy of the yaw angle, only changes in the roll and pitch angles are considered. Therefore, when the angle α is less than 45°, the yaw angle changes little before and after the rotation and the measured rotation angle error is small, though slightly worse than the accuracy of the method proposed here (possibly because the experimental environment and operation differ from those in [17], and the sensors used differ in accuracy), as shown by the bottom three curves in Fig. 11. When the angle α exceeds 45°, the yaw angle changes greatly before and after the rotation and can no longer be ignored, leading to a higher measurement error of up to 6°, as shown by the top two curves in Fig. 11. The method proposed here has no such limitation: it is not influenced by the orientation of the rotation axis and has low overall measurement error. Therefore, it has a wider application scope.

Conclusion
An angle measurement method based on an inertial sensor has been proposed, and its effectiveness has been verified by experiments. The experimental data prove that it can effectively compensate for the installation error of the sensor and ensure that the measurement result is not affected by the sensor's installation. The measured average error of the rotation angle is less than 0.25°, in most cases around 0.1° or below, and the root mean square error is near 0.1°, confirming that the measurement method has high accuracy and robustness. In addition, since the accuracy of the joint angle measured by this method is closely related to the precision of the selected sensor, adopting a sensor with higher accuracy will yield a correspondingly higher accuracy of the measured joint angle.