Mike Barrett is the Vice President of Sales and Marketing at Eurofins TestOil. He's currently working to help reliability practitioners maintain the health of their rotating equipment through same-day oil analysis services. Mike also specializes in helping teams develop new programs, and in providing guidance and support to get an existing program back on track. Mike recently spoke with Plant Services editor in chief Thomas Wilk about his experiences, as well as what makes programs work, what trouble spots programs run into, and where you might want to take a look at improving your own oil analysis program.
Listen to Mike Barrett on The Tool Belt Podcast
PS: That was a smooth transition into the question of what some of the challenges are. Are there other common challenge points that you see? Do people have trouble taking the same sample from the same location? Is it even before the sampling process takes place? Is it issues with making sure the oil is clean before it goes into machines? What are some of the more common ones?
MB: Yeah, a lot of it is right from the start, setting up the equipment database. We need an equipment roster to get set up before we can do anything. Sometimes it's as simple as they send us the roster and we don't even have the lubricant listed; we don't know what oil is in there to start analyzing. And they don't even know, which is even worse. That's a big problem to start.
And then it's really, where do you get a sample from? Has anyone been trained on how to take a proper sample? The good programs typically have sent their team for MLT (Machinery Lubrication Technician) training, and those are the guys out in the field around the equipment taking the samples. Getting a consistent sample is a big challenge.
And, you know, it's not a fun job for people. People aren't raising their hand to say, "I want to go out and pull 100 samples every month, that sounds like fun." So it's a challenge, and what happens is, you get different people taking the samples, who don't take them the same way or in the same spot. So then you get variance in the data that we're sending back to them.
We have a couple of customers who have dedicated people who take samples, and they may have to take 400 samples a month. There will be two or three of them on their team, and they're paid on sample compliance; they get a bonus based on hitting 98% sample compliance. Now, our analysts look at these reports all the time from them, and when someone goes on vacation, they fill in somebody else to take the samples, and the analysts all of a sudden are looking at the reports and saying, wait a minute, why is all the data out of whack? Either someone's not putting the right labels on the bottles, or someone's taking samples in a different spot. It's so crucial to have consistent sample taking, and even consistent sample takers. Otherwise you just don't get consistency in the program.
I would say, Tom, it comes down to training. When we started this back in the 1990s, there was no training and there weren't really trade shows. It was a lot of education we were doing. And then in 2000, you start to see training programs develop, and conferences. Today, there's a heck of a lot more training and there are all sorts of people certified MLA / MLT. It's really nice to see, I guess, but there are still so many plants who are starting programs, and they don't even know what MLA and MLT are.
PS: You're framing that perspective for me, because I joined Plant Services in 2014, not even 10 years ago. And I look at, say, the Reliable Plant conference, which has been around for 25, 26 years?
MB: 1999 is when it started, I believe.
PS: It feels to me like the show has been around forever, and it's all I know, but clearly 30 years is not forever. It's still a relatively new science in a lot of ways, or discipline, I should say.
MB: It is, and that show has exploded from when it started back in 2000 to where it is today. It's international, you know, and when it started, it was a little show out in Tulsa, OK, and you'd get 100, 150 people there, and now you get 1,500. It's nice to see; there's a lot of awareness around lubrication, keeping your oil clean, lube rooms. People don't understand the importance of having a dedicated room to keep your new oil, and having processes in place to filter it. I walk into plants all the time, and one of the things I ask is, let's go look at your lube room. And they'll say, well, we've got a barrel sitting over there, and there's a barrel in that corner. And whenever they need oil, they just come and get it out of the barrels. That's hardly a lube room. It's all part of the whole lubrication process: are you putting clean, dry oil into your machines?
PS: One of the first eye-openers for me was understanding how quickly contaminants can get into oil. And the contaminants don't even have to be that large, of course; it's just a matter of, if the oil isn't kept in a clean, dedicated lube room, it's going to be a challenge to keep even filtered oil clean.
MB: It is. I mean, you have a system and you have a filter on it, and you could have pretty clean oil in that machine. And now all of a sudden, you're taking new oil that you think is clean, but you haven't filtered it, and you're putting it into the machine; you're putting dirty oil in on top of clean oil. That was a big problem, but these conferences in the early 2000s really stressed this whole oil cleanliness issue. And that is very big. Particle count is a test we run that looks at the overall cleanliness of the oil. In the past that was done on turbine oil or hydraulic oil only. Now we have customers who have big gearboxes, and they want to know how clean their oil is, because they will filter it to improve the cleanliness, and they'll give us a target cleanliness code and say, "we want to keep the oil below that; if it goes above it, you need to flag it, because then we'll bring in a filter cart and filter the oil, because it's so important to us to keep it clean." But that mentality wasn't always there. And it's the training and the conferences that have helped that.
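For readers who want to see what that kind of target check can look like in practice, here is a minimal sketch of comparing a measured ISO 4406 cleanliness code against a customer-supplied target and flagging the sample when it comes back dirtier. The target value, sample value, and flagging rule are illustrative assumptions, not TestOil's reporting logic.

```python
# Minimal sketch: flag an oil sample whose ISO 4406 cleanliness code
# exceeds a customer-supplied target code (e.g., "16/14/11").
# Values and flagging rules here are illustrative assumptions only.

def parse_iso4406(code: str) -> tuple[int, int, int]:
    """Split a code like '18/16/13' into its three range numbers
    (particles >=4 um, >=6 um, >=14 um per mL)."""
    parts = [int(p) for p in code.split("/")]
    if len(parts) != 3:
        raise ValueError(f"Expected three range numbers, got: {code!r}")
    return (parts[0], parts[1], parts[2])

def exceeds_target(measured: str, target: str) -> bool:
    """Flag the sample if any of its three range numbers is above the target."""
    return any(m > t for m, t in zip(parse_iso4406(measured), parse_iso4406(target)))

if __name__ == "__main__":
    target_code = "16/14/11"   # customer-supplied target (assumed)
    sample_code = "18/16/12"   # latest sample result (assumed)
    if exceeds_target(sample_code, target_code):
        print(f"FLAG: {sample_code} is dirtier than target {target_code}; schedule filtration")
    else:
        print(f"OK: {sample_code} meets target {target_code}")
```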
PS: It's good to see that, plus people like yourself and the programs you're developing, that have driven this to where we are today, where we can talk about some of the pinch points and some of the ingredients that go into making a good program.
MB: It's cool, some of the things we do with customers now. A big thing is data ingestion. Some of these bigger corporate accounts that we do multiple plants for have corporate people who don't want to sit there and look at oil analysis reports. They could be looking at 400-500 in a month, but instead they want to take the data, and they have some sort of AI / Power BI type of platform that they can bring the data into, and maybe integrate it with vibration data or infrared data, and do some sort of analysis to really pinpoint the machines across the fleet that are having problems. It's not just looking at an oil analysis report from a gearbox at a power plant. It's much, much bigger thinking.
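To make that data-ingestion idea concrete, here is a small sketch of joining oil analysis results with vibration readings per asset and surfacing the machines flagged by both technologies. The file names, column names, and alarm thresholds are assumptions for illustration, not any real export format or platform.

```python
# Sketch of the kind of cross-technology analysis MB describes:
# merge oil analysis results with vibration readings by asset and
# list the machines flagged by both. All names/thresholds are assumed.
import pandas as pd

# Hypothetical monthly exports, one row per asset.
oil = pd.read_csv("oil_analysis_results.csv")   # columns: asset_id, iron_ppm, oil_alarm
vib = pd.read_csv("vibration_readings.csv")     # columns: asset_id, overall_velocity_in_s

merged = oil.merge(vib, on="asset_id", how="inner")

# Assumed rule purely for the sketch: lab flagged the sample AND
# overall vibration velocity is above 0.3 in/s.
watch_list = merged[(merged["oil_alarm"]) & (merged["overall_velocity_in_s"] > 0.3)]

print(
    watch_list.sort_values("iron_ppm", ascending=False)[
        ["asset_id", "iron_ppm", "overall_velocity_in_s"]
    ]
)
```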
PS: Let me close our interview with a cost question. You mentioned that this is one of the more cost-effective, or the most cost-effective, predictive maintenance approaches. If a champion wants to start a program, or even approve one, and they get challenged by their superiors to estimate the ROI, is this the kind of thing where the average technician or champion can identify an ROI pretty quickly via oil analysis?
MB: It's not that easy. It's easy to look at eliminating oil changes. Most people have time-based oil changes: "we change our oil every year." If you implement an oil analysis program, you should eliminate that immediately and say, we're only going to change the oil when the reports tell us to change it. So you could look at that cost savings.
Now, the other big savings is increased equipment reliability, which means less maintenance cost and more productivity. It's harder to determine an ROI for that. You could say, we're going to have more equipment uptime. Well, how much more? Can you translate 5% more uptime into 5% more productivity output? That's part of the ROI.
It's been difficult to do because, with oil analysis, it's a cost savings based on a piece of machinery not failing. You could look at it like this: all of a sudden, we get a report back and you have high iron, and you say, "you better go out and check that gearbox." So you go check the gearbox and say, "oh, if we hadn't come out here, checked it, and done this, it would have failed in two months and cost us $100,000." So doing oil analysis just saved you $100,000. People understand it, but it's hard to put it on paper.
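For a champion who does need to put something on paper, a back-of-the-envelope sketch of the two savings MB mentions might look like the following. Every number here is a hypothetical placeholder to be replaced with your own plant's figures; only the $100,000 avoided-failure figure comes from MB's example above.

```python
# Back-of-the-envelope ROI sketch: eliminated time-based oil changes
# plus one avoided failure, versus annual program cost. All numbers
# are hypothetical placeholders.

# Program cost (assumed)
samples_per_year = 100 * 12          # 100 samples a month
cost_per_sample = 25.0               # USD per sample, assumed
program_cost = samples_per_year * cost_per_sample

# Savings 1: oil changes done on condition instead of on the calendar (assumed)
oil_changes_eliminated = 40
cost_per_oil_change = 600.0          # oil + labor + disposal, assumed
oil_change_savings = oil_changes_eliminated * cost_per_oil_change

# Savings 2: one avoided gearbox failure caught by a high-iron report
avoided_failure_cost = 100_000.0     # repair + downtime, from MB's example

total_savings = oil_change_savings + avoided_failure_cost
roi = (total_savings - program_cost) / program_cost

print(f"Program cost:  ${program_cost:,.0f}")
print(f"Total savings: ${total_savings:,.0f}")
print(f"ROI:           {roi:.1f}x")
```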
PS: You almost have to look at something like mean time between failure, and look at that time stretch first, before the dollar figures come back, when it comes to uptime and downtime. I hear you loud and clear that the most immediate cost savings that can be calculated on paper would be freeing up resources from doing regular time-based oil changes and saying, OK, we've reduced these routes, and now these folks can go work on other jobs.