Every few years, a regularly scheduled frenzy breaks out on the Danube: one of the largest oil and natural gas refineries in Germany, in Ingolstadt, shuts down its entire facility. Over the course of about four weeks, thousands of workers whip the plant into shape—supported by several dozen TÜV SÜD experts from nearby Regensburg. Operating in shifts, they open and clean tanks and pipes, repair condensers and boilers, and inspect safety equipment. Around the clock and according to a precisely scheduled timetable—because time is money, especially when an entire facility is at a standstill. The operators of the Bavarian refinery recently put the costs at tens of millions of euros.
Wouldn’t it be a boon for operators if such efforts could be minimized? If plants no longer had to remain idle for maintenance, or could run longer between check-ups? Jay Lee thought the same thing. Seventeen years ago, the American had a vision: zero downtime in industry is doable. He traveled throughout the United States, presenting company owners with his approach to preventing high failure rates and the associated costs. His idea was an intelligent system that could recognize failures and defects before they led to a problem.
Like a few other pioneers, Lee called his idea “predictive maintenance.” But except for a handful of companies, nobody took him seriously. Today he grins about those days. “We’re closer to it than ever before,” he says.
Lee is the founding partner of the Center for Intelligent Maintenance Systems in Cincinnati, Ohio (USA). It’s a national research organization, networked across several of the country’s universities, that promotes initiatives aimed at near-zero-breakdown industrial performance. “The biggest difference compared to ten or twenty years ago,” Lee says, “is that previously we could only use historical data for our analyses. Today, in the era of big data, we can call on virtually inexhaustible sources of data to investigate systems, including sensors, historical records and expert opinions, and have become much more thorough because of it. The new level is correlating the data, which is making us smarter. We can anticipate problems, so it’s predictive maintenance.”
Normally maintenance takes place according to a fixed schedule. Components are replaced after a certain number of operating hours. Applied correctly, this system ensures a high degree of failsafe performance in factories, power plants and refineries. And it’s relatively simple to implement. Yet maintenance that is oriented toward a fixed timetable also has disadvantages: costs needlessly increase if individual parts are swapped out too early. But if they’re replaced too late, plant availability and safety are jeopardized.
Predictive maintenance is different: components or even entire facilities are monitored in real time using sensors. The data is analyzed with smart software and combined with already existing data. Based on this, predictions can be made about the probability of failure—with specific instructions as to whether and when individual parts must undergo maintenance. Ideally, maintenance costs are reduced, as are outages and downtimes.
The US Department of Energy estimates that maintenance costs could be reduced by up to 30 percent with predictive maintenance and up to 75 percent of operating failures avoided. However, this works only when the following questions can continually be answered based on the collected data: When does a component’s condition become truly critical? How do you dependably predict the probable time of failure? When is the best time to intervene to remedy the problem?
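In its simplest form, answering the second of these questions means extrapolating a degradation trend until it crosses a critical limit. The sketch below illustrates the idea with a single hypothetical vibration sensor whose readings drift upward as a bearing wears; the readings, interval, and critical threshold are all invented for illustration, not taken from any real plant.

```python
# Hypothetical sketch: extrapolate a degradation trend to estimate the
# probable time of failure. All values here are invented for illustration.

def predict_failure_hour(hours, readings, critical_level):
    """Fit a straight line to (operating hour, reading) pairs and return
    the hour at which the reading is projected to reach critical_level."""
    n = len(hours)
    mean_x = sum(hours) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, readings)) / \
            sum((x - mean_x) ** 2 for x in hours)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no upward degradation trend: no failure predicted
    return (critical_level - intercept) / slope

# Vibration amplitude (mm/s) logged every 100 operating hours.
hours = [0, 100, 200, 300, 400]
vibration = [2.0, 2.3, 2.5, 2.8, 3.0]

eta = predict_failure_hour(hours, vibration, critical_level=5.0)
print(f"Critical level projected at ~{eta:.0f} operating hours")  # ~1192
```

Real systems use far richer models than a straight line, but the principle is the same: the predicted crossing point tells the operator when intervention is due, rather than a fixed calendar date.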
Pilot project in India
Intense work is being done to answer exactly these questions in the city of Yamuna Nagar, in northern India. HPGCL, an electric utility, operates a 600-megawatt coal-fired plant there—and since mid-2016 it has been one of the country’s first facilities where predictive maintenance is being tested. At the operator’s request, TÜV SÜD networked the entire system with sensors, from coal delivery to electricity production in the turbines.
The integrated approach is what’s special about this pilot project. While individual components and systems in many power plants are already monitored by the respective manufacturers, an analysis of the entire plant is usually lacking. “As an independent third party that doesn’t have any vested interest in maintenance, parts replacement or new procurement, we can offer our clients true added value,” says Shatanshu Shekhar, director of the TÜV SÜD project in India. “Not to mention our cumulative engineering expertise and more than 150 years of experience in the area of power plants.”
That this blueprint for smarter maintenance is being developed in India of all places is no accident: the instability in the energy supply in many parts of the country is considered one of the major obstacles to economic growth there. Around two-thirds of the electricity comes from coal-fired plants, the majority of which are more than two decades old and, accordingly, often suffer malfunctions. Brownouts and blackouts are thus simply par for the course. For this reason there is great interest in increasing availability via new methods of maintenance.
The project has been collecting data in India since the middle of last year. To start, Shekhar and his staff analyzed where in the power plant sensors needed to be placed. With this information, more than six hundred data transmitters were installed at critical points and on machines, and then the devices were networked. They record pressure, temperatures and vibrations, to name just a few parameters. “There’s a control room that’s staffed 24/7,” Shekhar explains. “We’re in constant contact with the inspectors with real-time data. If a problem arises, we make recommendations.”
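The constant stream of pressure, temperature and vibration readings feeding that control room lends itself to simple automated checks. A minimal sketch of such a limit check is shown below; the parameter names, units and thresholds are assumptions made up for this example, not HPGCL's actual configuration.

```python
# Hypothetical sketch of a real-time limit check on incoming sensor readings.
# Parameter names, units, and limits are assumptions for illustration.

LIMITS = {
    "boiler_pressure_bar":    (150.0, 180.0),  # (warning, critical)
    "steam_temp_celsius":     (520.0, 545.0),
    "turbine_vibration_mm_s": (4.5, 7.1),
}

def check_reading(parameter, value):
    """Return a recommendation for one incoming sensor reading."""
    warn, critical = LIMITS[parameter]
    if value >= critical:
        return "critical: schedule immediate inspection"
    if value >= warn:
        return "warning: increase monitoring frequency"
    return "normal"

print(check_reading("turbine_vibration_mm_s", 5.2))
# → warning: increase monitoring frequency
```

In practice such rules are only the first layer; the predictive value comes from correlating many parameters over time, as Lee describes above.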
It isn’t always possible to install sensors at the exact spot where the measurements should be made. A power plant is a place of extremes: grime, vibrations and high temperatures of up to 500 degrees Celsius would test the limits of any sensor. The measuring devices must therefore be installed close to their targets, but where they can still function. In combination with mathematical models, the condition of the area that actually needs to be examined can then be accurately analyzed. “There are always problems in a demanding environment like a power plant,” Shekhar says, “but we’ve been able to solve all of them so far.”
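One way to combine an offset sensor with a mathematical model, as described above, is to calibrate the relationship between the sensor's reading and the true condition at the target point. The toy version below assumes a simple linear relationship fitted from reference measurements taken at the target itself (for example during commissioning); all numbers are invented for illustration.

```python
# Hypothetical sketch: infer the temperature at an inaccessible target point
# from a sensor mounted a safe distance away, via a calibrated linear model.
# The calibration data below is invented for illustration.

def calibrate(sensor_temps, reference_temps):
    """Least-squares fit: target_temp ≈ a * sensor_temp + b, using
    reference measurements taken directly at the target point."""
    n = len(sensor_temps)
    mx = sum(sensor_temps) / n
    my = sum(reference_temps) / n
    a = sum((x - mx) * (y - my) for x, y in zip(sensor_temps, reference_temps)) / \
        sum((x - mx) ** 2 for x in sensor_temps)
    b = my - a * mx
    return a, b

# One-off calibration: sensor reading vs. true temperature at the target (°C).
sensor_ref = [300.0, 350.0, 400.0]
target_ref = [380.0, 440.0, 500.0]

a, b = calibrate(sensor_ref, target_ref)
estimated = a * 370.0 + b  # live sensor currently reads 370 °C
print(f"Estimated target temperature: {estimated:.0f} °C")  # 464 °C
```

Real plants use physical heat-transfer models rather than a single linear fit, but the principle is the same: the sensor measures where it can survive, and the model bridges the gap to where the condition actually matters.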