
How advanced analytics improves asset risk and cost optimization

Dec. 16, 2020
Use innovative approaches to empower maintenance and reliability leaders to continuously identify improvement opportunities.

Derek Valine, senior manager of reliability engineering for Uptake, has 38 years of experience in maintenance and engineering. John Langskov, section leader of PM programs for Arizona Public Service (APS), is a licensed reactor operator who has worked as a leader in all aspects of work management. During the live Q&A portion of the webinar “Asset Risk & Cost Optimization: Using Advanced Analytics to Enable Operational Excellence,” Valine and Langskov demonstrate how advances in technology, such as dynamic algorithms and artificial intelligence, can help asset-intensive companies advance operational excellence initiatives.

PS: I was listening closely to what you were talking about when it came to using data science engines for ingesting and cleaning data and things like label correction. How does that technology deal with text fields like notes from an operator or from the last mechanic to do maintenance? Because that stuff can get buried deep in the CMMS.

DV: Actually, that's one of the areas where we see a lot of value. We have some methods of mining those textual fields. We can marry those textual fields with other codified fields and work order titles to establish relationships, showing where specific words or phrases are associated with specific types of work.

From there, we can either enrich the data by filling in missing context or add labels to those work orders, or to whatever that transactional data happens to be. Those labels then allow us to categorize and mine that data going forward in a way that's going to be valuable.
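As a rough illustration of the keyword-to-label mining Valine describes, here is a minimal Python sketch. The DataFrame, keyword map, and label names are all hypothetical, not Uptake's actual pipeline.

    import pandas as pd

    # Illustrative mapping from keywords found in free-text notes to failure labels
    LABEL_KEYWORDS = {
        "bearing_failure": ["bearing", "spalling", "brinelling"],
        "seal_leak": ["seal leak", "leaking seal", "packing leak"],
        "misalignment": ["misaligned", "alignment", "coupling wear"],
    }

    def label_work_order(notes):
        """Return every label whose keywords appear in the free-text notes."""
        text = notes.lower()
        return [label for label, words in LABEL_KEYWORDS.items()
                if any(w in text for w in words)]

    work_orders = pd.DataFrame({
        "title": ["Replace pump bearing", "Repack valve stem"],
        "notes": ["Found spalling on inboard bearing race", "Packing leak at stem"],
    })
    work_orders["labels"] = work_orders["notes"].apply(label_work_order)
    print(work_orders[["title", "labels"]])

In practice the keyword lists would themselves be mined from the correlation between codified fields and text, rather than hand-written as they are here.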

PS: When it comes to that 20% reduction in maintenance costs, i.e., eliminating or deferring recommended work orders, is identifying scheduled work that should not be done part of what you're talking about? Specifically, the sort of hidden PMs that are kept around either by accident or by people who just want to walk around and need a break?

DV: I would look at that two ways. One is just deferred maintenance: I have some equipment that I want to take out of service. If I understand the actual condition of that equipment, can I defer that maintenance and not perform it at this time?

The other way I look at it is strictly: do I need to perform this task at all? That's actually a huge part of what the optimization engine will do.

It all comes down to the failure modes, and to how each individual PM task covers those failure modes. What I've seen in my experience, in a lot of cases, is that we perform multiple tasks that address the same failure modes at about the same level, because in many cases it makes us feel good. But when you really look at the value of performing two tasks to cover the same failure mode, that value really isn't there. So the tools give you some hard data that helps you decide which of those tasks is the most cost-effective: “Can I stop doing the other one altogether, or at least extend the periodicity on it?”
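A toy calculation makes that trade-off concrete: compare the annualized cost of two tasks covering the same failure mode against the risk reduction each delivers. All task names and numbers below are invented for illustration.

    tasks = [
        # risk_reduction = fraction of the failure-mode risk each task mitigates
        {"name": "Quarterly vibration route", "cost_each": 150, "per_year": 4, "risk_reduction": 0.80},
        {"name": "Annual teardown inspection", "cost_each": 2400, "per_year": 1, "risk_reduction": 0.85},
    ]

    for t in tasks:
        annual_cost = t["cost_each"] * t["per_year"]
        t["cost_per_unit_risk"] = annual_cost / t["risk_reduction"]
        print(f"{t['name']}: ${annual_cost}/yr, ${t['cost_per_unit_risk']:.0f} per unit of risk reduction")

    keeper = min(tasks, key=lambda t: t["cost_per_unit_risk"])
    print(f"Most cost-effective: {keeper['name']}; consider dropping or extending the other task")

When two tasks cover nearly the same risk, the one with the lower annualized cost per unit of risk reduction is the keeper, and the other becomes a candidate for elimination or a longer interval.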

PS: I've heard people argue that one of the best ways to avoid a problem is not to touch the machine when you don't have to; otherwise, you introduce problems by doing work that's not necessary. Is that part of what you see, specifically that safety improved, and perhaps reliability as well, because people were touching machines only when they had to?

JL: I couldn't tell you that statement is always true. It kind of depends on the equipment you're working on, honestly. Some of your PMs are intrusive: you're actually taking the equipment out of service, and you're lifting and landing leads and doing a number of things. Those are obviously going to be more at risk for maintenance-induced types of failures.

I will say that the preventive program we use, which is core to our business, factors maintenance-induced errors into the maintenance calculations. That helps a lot. On other equipment, you're maybe not as intrusive, so you're not going to see nearly the error rate. You need to be aware of it, but it is definitely a factor. If you track your maintenance-induced errors, you'll see an improvement in that arena as you reduce the amount of intrusive maintenance that you're doing.
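One hedged way to picture factoring induced errors into the calculation: model expected failures per year as the residual failures the PM does not prevent, plus the failures the PM itself induces. The rates below are illustrative, not APS data.

    def expected_failures_per_year(base_rate, pm_effectiveness, pms_per_year, induced_error_prob):
        """Residual failures the PM does not prevent, plus failures the PM itself induces."""
        residual = base_rate * (1.0 - pm_effectiveness)
        induced = pms_per_year * induced_error_prob
        return residual + induced

    # Intrusive PM done quarterly vs. annually (illustrative numbers only):
    print(expected_failures_per_year(0.5, 0.90, 4, 0.02))  # 0.13 failures/yr
    print(expected_failures_per_year(0.5, 0.70, 1, 0.02))  # 0.17 failures/yr

With a higher induced-error probability per visit, the less frequent schedule comes out ahead, which is the effect Langskov describes for intrusive maintenance.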

PS: John just brought up the issue of using data from historians, that is, time-series data, and we also have technology for sensor data. Is there really that much value in work order data alone?

DV: Again, it depends on where each customer may be; obviously some are more mature than others. A less mature customer is going to drive a lot more value from work order data to start with. That's part of our journey: taking our customers up that maturity curve by starting with optimizing their maintenance strategies, and then moving from there into condition-monitoring maintenance work where equipment is sensored.

To be honest with you, what we find ultimately is that most people are going to end up with some kind of blended maintenance strategy. It's not cost-effective to sensor everything on your non-critical equipment, though you may have some monitoring that is possible. But there's still going to be some schedule-based maintenance, whether it's calendar-based, runtime-based, or whatever.

And really, the combination of the two is probably going to get you the most cost-effective maintenance strategy. One thing we try very hard to understand is: when I implement a predictive solution, what failure modes am I now detecting? What failure modes am I mitigating, and what time-based or calendar-based tasks can I now go back and relook at, to see if I can relax them and drive cost savings that way? Back to your original question on how much value is in work order data: I think it's an overlooked source of value, to put it that way.
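A small sketch of that coverage check, with hypothetical task and failure-mode names: once a predictive task is in place, any calendar-based task whose failure modes it fully covers becomes a candidate to relax or extend.

    coverage = {
        "online vibration monitoring": {"bearing wear", "misalignment"},  # new predictive task
        "quarterly lube inspection": {"lubricant degradation"},
        "annual coupling check": {"misalignment"},
    }

    predictive = coverage["online vibration monitoring"]
    for task, modes in coverage.items():
        if task != "online vibration monitoring" and modes <= predictive:
            print(f"Candidate to relax or extend: {task} (covers {modes}, already detected)")

Here the coupling check covers only failure modes the vibration monitoring already detects, so it is flagged; the lube inspection covers an unmonitored mode and stays on the schedule.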

JL: One of the more exciting things that's happened in the last year with the Uptake solution, for us, is that they added the ability to change the wear data in the failure distributions on each failure mechanism. That has added a lot of value for us, because now we can go in on a specific mechanism where maybe the wear-out started at five years at the fifth percentile in a mild environment. When we look at our data, it's more like eight years, and changing that wear time will change our solution. But you have to have the ability to mine your data to do that. I would argue that the more mature your plant is, the better your data is for doing failure mode and effects analysis and things like that.
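As a worked example of Langskov's numbers, consider a Weibull failure distribution whose fifth percentile of time-to-failure moves from five years to eight years. The shape parameter below is assumed for illustration; real values would come from the mined failure data.

    import math
    from scipy.stats import weibull_min

    BETA = 3.0  # assumed wear-out shape parameter (beta > 1 means wear-out)

    def scale_from_percentile(t_p, p, beta):
        """Solve the Weibull CDF F(t) = 1 - exp(-(t/eta)**beta) for the scale eta."""
        return t_p / (-math.log(1.0 - p)) ** (1.0 / beta)

    for t05 in (5.0, 8.0):  # old assumption vs. what the mined data supports
        eta = scale_from_percentile(t05, 0.05, BETA)
        dist = weibull_min(BETA, scale=eta)
        print(f"5th percentile at {t05} yr -> scale = {eta:.1f} yr, median life = {dist.median():.1f} yr")

Shifting the fifth percentile from five to eight years stretches the whole distribution, which is why updating that one wear parameter changes the recommended task intervals downstream.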

Watch the on-demand webinar to learn more
