Kevin Clark is the VP of Marketing and Customer Success at Falkonry. A veteran of asset management, experienced as a practitioner and educated as an engineer, Kevin brings more than 30 years of experience to engineering, maintenance, and predictive analytics. As an advocate in the industrial space, Kevin plays a key role in advancing manufacturing and championing new technologies as a thought leader, keynote speaker, and M&R expert. He has served through decades of leadership in the Society for Maintenance & Reliability Professionals (SMRP) and the International Society of Automation (ISA), and as a long-standing member of Purdue University’s Polytechnic Industry Advisory Board (IAB). Kevin recently spoke with Plant Services editor in chief Thomas Wilk about artificial intelligence's impact on the worlds of operations, maintenance, and reliability.
Listen to Kevin Clark on The Tool Belt Podcast
PS: When it comes to the plants that you've worked with to deploy these solutions, I'm always curious about which roles are in the best position to drive these projects. I've heard, for example, that maintenance is tied up doing very task-oriented work, and IT will do the work only if the project is approved by a champion. So that leaves the Director of Operations, the Director of Reliability, or the Reliability Engineer to drive these things, even if those roles don't have prior experience with AI. Is that your experience too, that it falls to operations and reliability to educate themselves and push this forward?
KC: Yeah, absolutely, it does. It's the argument I'm making right now with a lot of organizations, and inside my own organization, as we watch the culture begin to shift toward a data-driven culture. But if you go into most organizations, there's no role whose job description says they can take two or three hours a day and go analyze anomalies. Nobody has that in their job description.
Whenever we bring AI into a plant, there's a lot of excitement around it – they want to be able to see all that data. But once that data is there and ready for them to analyze, ready for them to maybe act on, most organizations aren't necessarily prepared for that. And it's a challenge. It's interesting watching organizations get prepared for it, but it is a challenge every single time. It's that transformation of moving from, “okay, I've got some data, and I generally make some decisions off this data, or my CMMS data, or data out of my ERP.” But when AI comes in and says, “this is live data, this is what's happening right now on your asset – it's beginning to fail, or it's showing some abnormal signs,” most organizations aren't prepared to act on that, and we're seeing it over and over. The cool thing is you're also watching organizations make the shift – painfully sometimes, but they're making the shift – because the data is so interesting, and so telling, about whether they're moving toward an optimal run, or maybe not so optimal.
PS: I find your comment about people being unprepared for that moment really fascinating. Is it that the processes themselves might not be structured to respond to this kind of data? Is it more due to a reactive culture being turned into a more proactive culture? A little bit of a mix?
KC: It's a mix, yeah, there's no doubt it's a mix. You can go into pretty mature facilities and still get a very reactive response to AI telling them that their process is going south. That's a pretty telling moment, even in that mature facility: either (1) it's telling them something they probably already knew, but could never prove, never confirm; or (2) it's telling them something completely new, and they have no idea why it's happening.
It's a vast range of responses from people. Because the one thing I would want, if AI was coming into my plant, is for it to confirm all the opinions that I have – which is generally a lot of opinions. And I think that's what most maintenance techs think when they bring AI in: is this going to just prove that I was right? Well, often it does, but there are also many times that it proves them wrong. And that's an interesting challenge in and of itself – when our assumptions are proven wrong because the data doesn't back them up. And then you get the opportunity to go dig in and figure out, “okay, I was half right, but this extra piece of information made it a really interesting analysis.”
PS: That can be a tough moment for anyone. I mean, I would hate it if I looked at Google Analytics one day and it told me I was only half right about the kind of content we're developing. But then you've got to get over your ego and be nimble enough to shift and act on that data.
KC: The other thing, too, is that AI doesn't know truth. I think that's a point everybody needs to understand: AI only knows what you feed it. AI only understands what normally happens, so it's looking for the things that are different, but it can't tell you whether the different thing is true or not. It just can't do that. It can tell you that something is actually happening, but often it can't tell you if it's true. Does that make sense?
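Clark's point – that this kind of AI models “normal” and reports deviations, not causes or truth – maps onto a simple statistical baseline. The sketch below is purely illustrative, not Falkonry's method; the class, threshold, and data are all hypothetical.

```python
import numpy as np

# A minimal normal-behavior model: learn per-channel mean and spread from
# sensor history assumed to be mostly routine, then flag readings that
# deviate beyond a threshold. It reports *difference*, not *truth*.

class NormalBehaviorModel:
    def __init__(self, n_sigma: float = 4.0):
        self.n_sigma = n_sigma

    def fit(self, history: np.ndarray) -> None:
        """history: (n_samples, n_channels) of routine operating data."""
        self.mean = history.mean(axis=0)
        self.std = history.std(axis=0) + 1e-9  # avoid divide-by-zero

    def anomaly_score(self, reading: np.ndarray) -> float:
        """Largest per-channel deviation, in standard deviations."""
        return float(np.max(np.abs(reading - self.mean) / self.std))

    def is_anomalous(self, reading: np.ndarray) -> bool:
        return self.anomaly_score(reading) > self.n_sigma


# Usage: train on routine samples (say, temperature and vibration),
# then score live readings as they arrive.
rng = np.random.default_rng(0)
model = NormalBehaviorModel(n_sigma=4.0)
model.fit(rng.normal(loc=[50.0, 0.2], scale=[2.0, 0.05], size=(10_000, 2)))
print(model.is_anomalous(np.array([50.5, 0.22])))  # False: within normal
print(model.is_anomalous(np.array([75.0, 0.90])))  # True: far from baseline
```

Note what the model cannot do: it flags the second reading as different from everything it has seen, but whether that difference is a bearing failure, a sensor fault, or a legitimate process change is a judgment only the team can make.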
PS: It does. I was going to ask, for a final question, about a customer case study that you can think of. Let me preface that by saying that years ago we did a case study article with Falkonry and an ore refinery in Wyoming, where they were having trouble finding out why certain parts of the crushing process were shutting down. It turned out that there was a moment in the crushing and sifting process where it wasn't sifting the particles out finely enough, and some of Falkonry's algorithms successfully overlaid time-series data on top of the operational data to figure out what the problem was. It saved them a lot of downtime. I don't think it was (conveyor) belt tightening, but it was about making sure that the ore was moving through the sifting process efficiently enough. What are one or two examples that you've run into where AI has either solved a problem like that, or pointed out something that a plant hadn't seen, and they had a sort of “aha” course-correction moment?
KC: I'll give you a couple of them, and these are ones I like to refer to because, you know, I'm a reliability engineer at heart, but I'm also a manufacturing guy from many moons ago. One of the things I love is not just the fact that we can identify when an asset has something going on that we can't explain – it might lead to a failure and it might not; it might lead to a delay, it might lead to some other things – and one of those other things is quality.
We've seen it over and over, and it's difficult to capture – you have to have the right data, that's a key point here. If you're monitoring the process and you have X amount of data, but you really needed X and Y data to do full monitoring, the X data will give you enough, but maybe the X and Y would give you all of it, to really be able to make some judgment calls based on what the AI is seeing, what it's learning, and what it's identifying as an anomaly.
Quality is one of those things where, if you have the right data, you can not only identify whether the process is running optimally or not, but, based on what it's learned and what it's seen, at the end of it the AI can tell you whether it's a good product or not. I think that's been the most telling for some of our clients, that it was kind of an unexpected gain – to not only understand whether my asset is going to fail or not, but also to understand whether my product is good or not, as an added benefit.
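One simple way to picture the “quality as a by-product” idea Clark describes: if each historical run pairs process signals with a good/bad result, a classifier can learn which process patterns produce good product. The sketch below is a generic illustration, not Falkonry's implementation; the features, labels, and use of scikit-learn are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical per-run features: mean furnace temperature, temperature
# variance, roll speed, and cooling rate for 500 historical runs.
X = rng.normal(size=(500, 4))
# Toy label for illustration: runs with unstable temperature (high value
# in column 1) tend to produce bad product. 1 = good, 0 = bad.
y = (X[:, 1] < 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
# At the end of a live run, the same features predict good/bad product
# before it ever reaches inspection.
print("predicted quality for a new run:", clf.predict(X_test[:1])[0])
```

The design point is that no new sensors are needed: the same process data collected for asset health can, with end-of-run labels, also answer the quality question.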
PS: Interesting – were those cases in pharma specifically, or food, or sort of across the board?
KC: This was the metals industry. So, I mean, once the AI learns the process – and again, this is dependent upon whether you've got enough data coming in – but once it learns the process and identifies what a good run for the product is, it can also help identify whether it was a good product at the other end. I think that's been one of the most interesting things for me to see: not only performance, but quality.
PS: That's fascinating. This reminds me of a case study that was given for a mine out west, where they had put vibration sensors on their fleet of Caterpillar ore haulers in order to identify how well the machines were performing. Turns out that once they had tightened down the machines and got them to perform optimally, the sensors were picking up not flaws in the machines, but flaws in the road leading out of the mine. And the bigger savings was the secondary benefit of filling the potholes in the road that the sensors were picking up. As you just said, you want to find the fault in the asset, but there's also the surprise benefit – improved quality, or better batch control. And in this case, the mine operator said they saw more savings from increasing throughput than they did from reducing the capital expense of buying new ore haulers.
KC: Yeah, and the other side of that is, if you are able to input the types of materials, the batches of materials that are going through, your AI will be that much smarter, because it'll then be able to go back and associate a good run with the actual material numbers themselves. So you think of recalls and other things, especially in the life sciences industry, like orthopedics or bio-meds of some type: to be able to say that run went through, and I can now identify all those components. We could do it the hard way before, with the data that was there, but it was just super hard, and there's no learning there – you'd have to just go do it as a query. But the (AI) learning would begin to tell you which materials perform better than other materials, and which materials are causing more process problems. That association and that learning that the AI is doing doesn't go away – it just gets better and better.
That's the thing about anomaly detection: the longer it runs and the more data that's flowing through, the more it learns, and the number of anomalies tends to get smaller, because now the anomalies are really the ones that are the problem causers, the things that you need to pay attention to. When they first turn it on, you can see hundreds, possibly thousands, of anomalies a day until you get it tamed down and it's learned, it understands, it's got feedback. Once that happens, then when you see anomalies, you'd better be paying attention, because they're real.
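A rough sketch of the feedback loop Clark describes: flagged events that the team confirms as routine get folded back into the baseline, so the model's notion of “normal” widens and the alert volume shrinks over time. The class and its interface below are hypothetical, invented for illustration; they are not Falkonry's product.

```python
import numpy as np

class FeedbackAnomalyDetector:
    """Baseline anomaly detector that learns from operator feedback."""

    def __init__(self, n_sigma: float = 4.0):
        self.n_sigma = n_sigma
        self.samples: list[np.ndarray] = []

    def fit(self, history: np.ndarray) -> None:
        """history: (n_samples, n_channels) of routine operating data."""
        self.samples = list(history)
        self._refit()

    def _refit(self) -> None:
        data = np.vstack(self.samples)
        self.mean = data.mean(axis=0)
        self.std = data.std(axis=0) + 1e-9

    def is_anomalous(self, reading: np.ndarray) -> bool:
        score = np.max(np.abs(reading - self.mean) / self.std)
        return bool(score > self.n_sigma)

    def confirm_normal(self, reading: np.ndarray) -> None:
        """Operator feedback: this flagged event was actually routine."""
        self.samples.append(np.asarray(reading))
        self._refit()


# Usage: a recurring event flags at first; after the team repeatedly
# confirms it as routine, the baseline absorbs it and it stops flagging.
det = FeedbackAnomalyDetector(n_sigma=4.0)
det.fit(np.random.default_rng(2).normal(size=(5_000, 3)))
event = np.array([5.0, 0.0, 0.0])
print(det.is_anomalous(event))      # True: far outside the initial baseline
for _ in range(200):
    det.confirm_normal(event)       # feedback from 200 similar events
print(det.is_anomalous(event))      # False: "normal" has widened
```

This is the mechanism behind the pattern Clark describes: early on the detector is noisy, but each round of feedback narrows the alerts toward the genuinely unexplained events that deserve attention.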
PS: And then as you said, be open to the fact that you're only half right, and then take action to fix the other half.
KC: Right, and that's a hard thing for all of us, especially those of us that have been around for 30-plus years – you know, we think we've been there and done that, and we have a pretty good grip on it. But AI will teach you some things. And the thing you need to really understand about AI is that you need to teach AI, because AI can be very biased: if your data is bad, your AI is probably going to be bad.
PS: As you said, AI does not know truth.
KC: It does not know truth, right – it only knows what you teach it. Especially generative AI; that's the way it works: whatever you teach it, that's what it knows. That is its truth. Whether it's true or not is irrelevant. That's its truth, because that's what you taught it.
PS: Well, I think when we post this podcast, Kevin, I'm going to find a picture from the “We Are the World” sessions from the 1980s, where Quincy Jones had a big sign saying “leave your ego at the door.”
KC: Yeah! And really, that's what you need to do if you want to build an AI system that works and works well. You need to be able to teach it things that are relevant and things that are helpful.