Measuring or controlling the speed of most rotating shafts is straightforward, but significant problems can arise as soon as we start to talk about accuracy. Mark Howard, General Manager, examines the issues involved and suggests some simple solutions to longstanding problems.

There is a famous question that sits at the crossroads of geography, mathematics and philosophy: “How long is the coastline of Britain?” At first glance, a pretty straightforward question – just think back to your geography textbook and dredge up the correct number. Most people say it’s around 11,000 miles – estimated from the perimeter of a triangle formed by Dover, John O’Groats and Land’s End with a ‘fudge factor’ added on for Wales.

For pub quizzes, this answer might be good enough, but is it accurate? Well, you could get all the Ordnance Survey maps and trace the coastline with a piece of string and eventually produce a second estimate. Certainly, the answer will be different to the first, but is it more accurate? The snag is: how do you measure? Where do you trace the string for an accurate measurement? How far up the rivers do you go? Do you trace all the way to the source of the river and then back along the other bank? Do you measure the coastline at high tide or low tide? Do you trace in and out of every rock? At the extreme, you would be measuring around every pebble and every grain of sand, to such an extent that the answer is very much larger than first expected. So the truly accurate answer to the question is: it depends on how you measure it.

The same kind of bizarre logic applies to shaft speed measurement. If you simply need a rough idea of how many revolutions a shaft completes in one minute, it’s pretty straightforward and even the crudest of measuring systems should deliver a fair answer. But what happens when you need to know what the speed is, say, every millisecond and then control the actual speed so that this corresponds to a tightly toleranced set point?

To borrow some terminology from the old vinyl record turntables, speed variations can be described as ‘wow’ and ‘flutter’. Wow typically refers to speed variations over relatively long periods; flutter refers to speed variations over relatively short periods, typically shorter than one revolution. Often both are important, and the requirement to control wow and flutter tightly is common across many sectors of industry: CNC machine tool motion control, aircraft flight controls, radar antennae and weapons control systems. Such control engineering issues do not just relate to complex motion control but also to driving shafts at a constant speed, especially when there are variations in load. When you approach the issue of a shaft that must rotate at a truly constant speed, the same logic that has us measuring around each grain of sand quickly tells you that a perfectly constant shaft speed is, in the extreme, impossible.
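The distinction between wow and flutter can be illustrated numerically. The sketch below is purely illustrative and not from any real drive system: it splits a sampled speed signal into a slow ‘wow’ component (a centred moving average) and a fast ‘flutter’ residual. The function name, window size and signal are all hypothetical assumptions.

```python
# Illustrative only: separating a sampled shaft-speed signal into slow
# 'wow' and fast 'flutter' components with a centred moving average.
# All names and numbers here are hypothetical, not from the article.

def split_wow_flutter(speed_samples, window):
    """Return (wow, flutter): wow is a centred moving average of the
    speed samples; flutter is the fast residual variation."""
    n = len(speed_samples)
    half = window // 2
    wow = []
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        segment = speed_samples[lo:hi]
        wow.append(sum(segment) / len(segment))
    flutter = [s - w for s, w in zip(speed_samples, wow)]
    return wow, flutter
```

A perfectly steady shaft would show zero flutter; a real turntable or machine shaft shows both components to some degree.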


Fig. 1 – Accurate shaft speed or position control is a common requirement for military equipment

One common problem is that the shaft speed is not measured directly but indirectly. A typical drive system includes a shaft driven by a motor through a gearbox or pulleys, with an encoder on the motor. Readings from the motor encoder – usually a stream of pulses – are used to calculate the speed of the shaft itself. In turn, the output from the encoder is fed back (in a servo loop) to the motion controller, which adjusts the power fed to the motor to increase or decrease speed. The significant problem is that, even though motor and shaft are mechanically coupled, what is going on at the motor is not the same as what is going on at the shaft. For example, any gears between the encoder and the shaft are imperfect and subject to wear, backlash, thermal expansion and contraction, mechanical tolerances and clearances. Further effects include mechanical friction (especially ‘stiction’ at lower speeds), variations in lubricant properties, mechanical twist due to torque, shaft bending, shaft concentricity and so on.
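The servo loop described above can be sketched in a few lines. This is a hedged, minimal illustration rather than a real motor drive: the function names, gains and the conversion from encoder counts to rpm are all assumptions made for the example.

```python
# Minimal sketch of the servo loop described above (hypothetical names and
# gains): encoder counts -> speed estimate -> error vs set point -> PI
# correction to the motor drive command.

def encoder_speed_rpm(delta_counts, counts_per_rev, dt_s):
    """Estimate speed in rpm from the change in encoder count over dt_s seconds."""
    revs = delta_counts / counts_per_rev
    return revs / dt_s * 60.0

class PISpeedController:
    """Proportional-integral controller acting on the speed error."""

    def __init__(self, kp, ki, dt_s):
        self.kp, self.ki, self.dt = kp, ki, dt_s
        self.integral = 0.0

    def update(self, set_point_rpm, measured_rpm):
        error = set_point_rpm - measured_rpm
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral  # drive command

# e.g. 1000 counts seen over 0.1 s on a hypothetical 4096-count encoder:
speed = encoder_speed_rpm(1000, 4096, 0.1)  # roughly 146 rpm at the encoder
```

Note that this estimates speed at the encoder, which, as discussed above, is not necessarily the speed at the output shaft.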

To determine what is actually going on at the output shaft requires an encoder on the shaft itself. In practice, this can prove both difficult and expensive, especially if the shaft is large or space constraints are tight. A further difficulty arises when the shaft speed is low, since accurate speed control depends on enough measurement information being produced per revolution to give sensible, timely control of the motor. A measuring device on the shaft with, say, 100 counts per revolution is not going to permit accurate speed control of a shaft rotating at 1 rpm, since the measurement will only be updated every couple of seconds. The lower the shaft speed, the greater the need for high-resolution angle information at the output shaft.
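The arithmetic behind the 100-counts-at-1-rpm example can be checked directly. The helper below is illustrative (the function name is mine, not from any product): at 1 rpm a 100-count device produces a count only every 0.6 s or so, so a speed estimate built from several counts is indeed refreshed only every couple of seconds, whereas a device with hundreds of thousands of counts updates thousands of times faster.

```python
# Time between successive encoder counts at a given shaft speed.
# Function name and figures are illustrative assumptions.

def seconds_per_count(rpm, counts_per_rev):
    """Interval between encoder counts for a shaft turning at `rpm`."""
    counts_per_second = rpm / 60.0 * counts_per_rev
    return 1.0 / counts_per_second

print(seconds_per_count(1, 100))      # ~0.6 s per count at 1 rpm
print(seconds_per_count(1, 262144))   # well under a millisecond per count
```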

There are three main methods for measuring shaft position or speed: magnetic, optical and inductive. The most common is magnetic – usually Hall effect – but this is not used in high-accuracy or low-speed applications, since it lacks the resolution, whilst magnetic hysteresis and temperature effects degrade measurement performance.

Optical encoders offer good measurement performance but are delicate and unreliable in harsh conditions. Optical sensors are typically rated to only modest temperature ranges (-20 °C to +70 °C is typical); they can fail because of contamination by foreign matter, and harsh mechanical vibration or shock can damage the optical grating.

Inductive devices such as inductive resolvers and synchros are the traditional choice for high reliability or harsh environments, including military, oil and gas, aerospace, and heavy industrial applications. Whilst the reputation of inductive devices for reliability and accuracy is well founded, they are bulky, heavy and expensive, especially in the larger sizes or at ‘A class’ measurement performance.

New generation inductive technique

A new generation of inductive technique now enables more and more people to choose inductive devices for mainstream control applications. Rather than the traditional wire windings or spools, this new generation of devices uses printed, laminar constructions, which dramatically reduce bulk, weight and cost compared with traditional devices. At the same time, accuracy increases and a wide range of sensor shapes and sizes becomes possible. In particular, large-bore devices can be provided without massive increases in cost. In turn, this makes direct mounting to the shaft more practical and hence more accurate. Furthermore, the need for high-precision gearboxes is eliminated and generally the gearbox can be derated, allowing additional cost reductions.


Fig. 2 – An example of the new generation inductive devices

Zettlex of Cambridge is a world leader in this new technique and has produced the IncOder (‘Inductive Encoder’). IncOders are purposely designed to fit easily to shafts or around slip-rings. The device is supplied as two rings – a stator and a rotor; the shaft passes through the IncOder’s large bore and the rotor simply screws onto the shaft, either directly or using a grub-screw style collar. The IncOder offers absolute angle measurement, combined with a resettable zero position and a high number of counts per revolution (256k to 512k counts per revolution as standard, with 4 million counts possible). Because the devices are inductive, there are no problems if they get wet or dirty. The absence of any bearings or seals means the devices will operate for long periods without maintenance or servicing. Fitting such devices directly to the shaft gives a measurement that is far more representative of the actual conditions to be controlled for speed and position.
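To put the quoted counts in perspective, they can be converted to angular resolution. The conversion below is standard geometry; the reading of ‘256k’, ‘512k’ and ‘4 million’ as binary counts (2^18, 2^19 and 2^22) is my assumption, not a statement from Zettlex.

```python
# Converting counts per revolution to angular resolution in arc-seconds.
# Treating the quoted counts as binary powers is an assumption.

ARCSEC_PER_REV = 360 * 3600  # 1,296,000 arc-seconds in a full circle

def resolution_arcsec(counts_per_rev):
    """Smallest resolvable angular step for a given count, in arc-seconds."""
    return ARCSEC_PER_REV / counts_per_rev

for counts in (2**18, 2**19, 2**22):  # 256k, 512k and ~4M counts per rev
    print(f"{counts:>8} counts/rev -> {resolution_arcsec(counts):.3f} arc-sec")
```

Under these assumptions, even the standard 256k count resolves steps of a few arc-seconds, and the 4-million-count option brings the step size below a third of an arc-second, which is what makes direct, low-speed measurement at the output shaft practical.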