One common problem is that shaft speed is measured indirectly rather than directly. A typical drive system comprises a shaft driven by a motor through a gearbox or pulleys, with an encoder mounted on the motor. Readings from the motor encoder – usually a stream of pulses – are used to calculate the speed of the shaft itself. This encoder output is fed back (in a servo loop) to the motion controller, which adjusts the power delivered to the motor to increase or decrease speed. The difficulty is that although motor and shaft are mechanically coupled, what’s going on at the motor is not the same as what’s going on at the shaft. For example, any gears between the encoder and the shaft are imperfect and subject to wear, backlash, thermal expansion and contraction, mechanical tolerances and clearances. Further effects include mechanical friction (especially ‘stiction’ at lower speeds), variations in lubricant properties, mechanical twist due to torque, shaft bending, shaft concentricity and so on.
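The indirect calculation described above can be sketched as follows. This is a minimal illustration (the function name and figures are hypothetical, not from any particular controller): motor encoder pulses are converted to motor speed, then divided by the gear ratio to infer shaft speed – and it is exactly this division through an imperfect gearbox where the errors creep in.

```python
def shaft_rpm(pulses: int, window_s: float,
              pulses_per_motor_rev: int, gear_ratio: float) -> float:
    """Estimate output-shaft speed from motor encoder pulses.

    gear_ratio is motor revs per shaft rev (e.g. 20.0 for a 20:1 reduction).
    Note: this assumes an ideal gearbox; backlash, wear and torsional twist
    mean the real shaft speed can differ from this estimate.
    """
    motor_rps = pulses / pulses_per_motor_rev / window_s  # motor revs/second
    return (motor_rps / gear_ratio) * 60.0                # shaft rpm

# 2,000 pulses counted in 0.1 s from a 1,000-pulse/rev encoder through
# a 20:1 gearbox implies the shaft turns at 60 rpm.
print(shaft_rpm(2000, 0.1, 1000, 20.0))  # 60.0
```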
Determining what is actually happening at the output shaft requires an encoder on the shaft itself. In practice, this can prove both difficult and expensive, especially if the shaft is large or space constraints are tight. A further difficulty arises when the shaft speed is low, since accurate speed control depends on sufficient measurement information being produced per revolution to give sensible, timely control of the motor. A measuring device with, say, 100 counts per revolution will not permit accurate speed control of a shaft rotating at 1 rpm, since a new count arrives only every 0.6 seconds or so. The lower the shaft speed, the greater the need for high resolution angle information at the output shaft.
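The arithmetic behind that resolution limit is worth making explicit. The short sketch below (assuming simple pulse counting, with no interpolation or pulse-period timing) shows how long the control loop must wait for fresh information at a given speed and count rate:

```python
def update_interval_s(rpm: float, counts_per_rev: int) -> float:
    """Seconds between successive encoder counts at a given shaft speed."""
    counts_per_second = (rpm / 60.0) * counts_per_rev
    return 1.0 / counts_per_second

# At 1 rpm, a 100-count device yields a new count only every 0.6 s,
# far too slow for tight speed control.
print(update_interval_s(1, 100))     # ~0.6 s between counts
# A 10,000-count device at the same speed updates every 6 ms.
print(update_interval_s(1, 10_000))  # ~0.006 s
```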
There are three main methods for measuring shaft position or speed: magnetic, optical and inductive. The most common is magnetic – usually Hall effect – but this is unsuited to high accuracy or low speed applications: it lacks the resolution, whilst magnetic hysteresis and temperature effects degrade measurement performance.
Optical encoders offer good measurement performance but are delicate and unreliable in harsh conditions. Optical sensors are typically rated to only modest temperature ranges (−20 to +70°C is typical); they can fail owing to ingress of foreign matter, and harsh mechanical vibration or shock can damage the optical grating.
Inductive devices such as resolvers and synchros are the traditional choice for high reliability or harsh environments, including military, oil and gas, aerospace, and heavy industrial applications. Whilst the reputation of inductive devices for reliability and accuracy is well founded, they are bulky, heavy and expensive, especially in the larger sizes or at ‘A class’ measurement performance.
New generation inductive technique
A new generation of inductive technique now makes inductive devices a practical choice for mainstream control applications. Rather than the traditional wire windings or spools, these devices use printed, laminar constructions, which dramatically reduce bulk, weight and cost compared with traditional devices. At the same time, accuracy improves and a wide range of sensor shapes and sizes becomes possible. In particular, large bore devices can be provided without massive increases in cost. In turn, this makes direct mounting to the shaft more practical and hence more accurate. Furthermore, the need for high precision gearboxes is eliminated and the gearbox can generally be derated – allowing further cost reductions.