Preventative maintenance and planned maintenance are widely employed across many industry sectors. They are characterized by regular, predetermined maintenance intervals or a component’s expected lifecycle. At these points, components are exchanged, which is why, for example, a vehicle’s cam belt is changed after a predetermined mileage. However, component failures don’t always run to well-ordered timetables: they strike seemingly at random, and a component that fails without warning can be extremely expensive, as well as inconvenient.
It’s better, then, to keep tabs on the condition of systems and components. The ‘early warning systems’ made possible by the constant monitoring of devices and machines in operation can help either to avoid problems before they arise or to take immediate action against those that have arisen. This concept of predictive maintenance takes the proactive principle to new heights.
If it ain’t broke, it will be
A combination of monitoring systems, sensors, and controllers conspires to simplify processes and anticipate problems as early as possible. This allows for maintenance that is flexible and variable, shaped around the actual state of the equipment, in contrast to the rigidity of the old, tightly fixed maintenance intervals that worked strictly to the timetable. In the example above, the cam belt would only be changed at the point it actually needed to be: no sooner, and, crucially, no later, since the capacity for calamity when a belt fails unannounced is considerable.
So, why fix the things that aren’t broken? Replacing components at the correct time reduces waste and is much more cost-effective. Predictive maintenance, therefore, applies the old adage that ‘if it ain’t broke, don’t fix it’ and adds a new one: if it is going to break, fix it now!
To understand when things do need fixing, statistical predictive maintenance models are created. This is done by importing ‘predictors’: critical values gleaned from sensor data, process measurement data, and ambient data. These values are fed into an analytics tool that recognizes telltale patterns, such as the symptoms that preceded machine failures in the past.
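As a rough sketch of what such a model might look like, the snippet below fits a tiny logistic-regression classifier to two hypothetical predictors (normalized vibration and temperature readings) labeled by whether the machine later failed. The data, feature choices, and training settings are purely illustrative, not taken from any real deployment.

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Fit a minimal logistic-regression failure model by per-sample
    gradient descent. Returns weights and bias."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted failure probability
            err = p - y
            for i in range(n_features):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def failure_risk(w, b, x):
    """Score a new reading: probability-like risk between 0 and 1."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: (normalized vibration, normalized temperature).
healthy = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.1, 0.3)]
failing = [(0.8, 0.9), (0.9, 0.7), (0.85, 0.95), (0.7, 0.8)]
X = healthy + failing
y = [0] * len(healthy) + [1] * len(failing)

w, b = train_logistic(X, y)
high = failure_risk(w, b, (0.9, 0.9))  # clearly degraded readings
low = failure_risk(w, b, (0.1, 0.1))   # clearly healthy readings
```

In practice an analytics platform would fit far richer models on historical failure records; the point here is only the shape of the pipeline: predictors in, failure risk out.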
Pre-emptive problem-solving in action
An excellent example of pre-emptive problem solving comes from the oil and gas industry, where the failure of a drilling system can potentially cost one million dollars per hour.
In oil and gas fields, the motor temperature, motor vibration, and delivery pressure of a pump are monitored in real time so that anomalies are immediately noticeable. On the basis of the collected data (pressure, temperature, vibration), alert rules can also be created so that if, say, vibration increases beyond a certain threshold within ten minutes, the temperature rises, or the voltage drops, a ‘high priority’ alarm is triggered.
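A minimal version of such an alert rule might look like the following sketch. The ten-minute window matches the example above, but the specific thresholds and the `Reading` fields are assumptions for illustration only.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Reading:
    t: float            # seconds since start of monitoring
    vibration: float    # mm/s
    temperature: float  # degrees C
    voltage: float      # V

class AlertRule:
    """Raise a high-priority alarm if vibration rises by more than a
    set amount within a ten-minute window, or if temperature/voltage
    cross absolute limits. All thresholds here are illustrative."""
    WINDOW_S = 600.0    # ten minutes
    VIB_DELTA = 2.0     # mm/s rise that counts as anomalous
    TEMP_MAX = 90.0     # degrees C
    VOLT_MIN = 210.0    # V

    def __init__(self):
        self.window = deque()

    def check(self, r):
        # Keep only readings from the last ten minutes.
        self.window.append(r)
        while self.window and r.t - self.window[0].t > self.WINDOW_S:
            self.window.popleft()
        vib_rise = r.vibration - min(x.vibration for x in self.window)
        if (vib_rise > self.VIB_DELTA
                or r.temperature > self.TEMP_MAX
                or r.voltage < self.VOLT_MIN):
            return "high priority"
        return None

rule = AlertRule()
calm = rule.check(Reading(0, 1.0, 60.0, 230.0))    # nothing unusual
alarm = rule.check(Reading(300, 4.0, 60.0, 230.0)) # vibration jumped in 5 min
```

Real monitoring stacks express such rules in the platform’s own rule engine rather than hand-rolled code, but the window-plus-threshold logic is the same.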
In this case, the predictive maintenance discipline can be applied to make better use of the sensor data collected in drilling operations. Until recently, this data was typically not analyzed until after drilling had finished, by which time it was too late to make money-saving interventions. Using algorithms to analyze data as it is collected allows for ‘real time’ reporting, which in turn creates the possibility of timely interventions. Sometimes the corrective action is the result of a management decision, and sometimes troubleshooting is determined by algorithms.
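One simple way to analyze readings as they arrive, rather than in a post-drilling batch, is a rolling z-score check: flag any value that deviates sharply from the recent rolling mean. This is only a sketch of the streaming idea; the window size and threshold are assumptions.

```python
import math
from collections import deque

def streaming_anomalies(stream, window=20, z_threshold=3.0):
    """Yield (index, value) for readings that deviate strongly from
    the rolling mean of the previous `window` readings, as they
    arrive, instead of waiting for a post-hoc batch analysis."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var) or 1e-9  # guard against a flat window
            if abs(value - mean) / std > z_threshold:
                yield i, value            # reported in real time
        recent.append(value)

# Steady pressure readings with one sudden spike at index 30.
readings = [1.0] * 30 + [10.0] + [1.0] * 5
flagged = list(streaming_anomalies(readings))
```

Because the detector sees each reading as it arrives, the spike is flagged the moment it occurs, which is exactly the property that makes timely interventions possible.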
Using this technique, one US oil producer raised the ‘mean time before failure’ of its pumps by 1%, which was enough to save it $8m a year.
Context is key
The inevitable consequence of increasingly networked machines is that predictive maintenance is playing a growing role in what has been dubbed ‘Industry 4.0’. If Industry 4.0 is defined by machine-driven regulation and the timely automation of events, it follows that real-time data analysis is its foundation. How else would the machines have the knowledge to make decisions?
It is crucial that data is processed intelligently and quickly, since even ‘smart and fast’ data loses its value and validity over time. Efficient event processing and real-time analysis empower users to act before this happens. However, information and knowledge can be rendered useless without context. To use Big Data analytics efficiently, data and data streams - not just historical ones - must be used in the right context, and correlations between individual data sets must be established.
By way of example, data tells you that a tomato is a fruit, but context tells you not to put it in a fruit salad. By the same token, in a machine environment, it is a valuable skill to recognize the critical moments and patterns within the multitude of events or production processes, and to react both immediately and correctly. The context for instant decision-making can be arrived at automatically, by algorithmically controlled reactions to events, or by human governance in response to intelligence displayed on real-time dashboards. Of course, the best context is usually provided by a combination of both.
This is where analytics platforms come in. Their established models and algorithms, or your own, can be applied in real time across multiple scenarios, from oil pumps to manufacturing equipment to aircraft engines. The models give reliable information about changes in failure rates, drawing on everything from previous warranty problems to product quality. Combined, they deliver the most desired outcome: a world where failure can be predicted accurately, efficiently, and effectively.