Podcast: Empowering frontline workers with industrial AI tools that actually work
Key takeaways
- AI strategy must start with frontline users to avoid past digital transformation mistakes.
- Technicians can surface AI use cases by flagging daily workflow pain points.
- Champions on the floor and in IT are key to driving AI adoption and trust.
- Simple, embedded AI tools that solve real problems build lasting habits and buy-in.
This sponsored episode of Great Question: A Manufacturing Podcast focuses on building AI that works for manufacturing teams. It features two thought leaders from MaintainX, a maintenance and asset management platform purpose-built to help frontline industrial teams run more efficient, resilient, and safer operations. Nick Hasse is the co-founder of MaintainX and has spent thousands of hours on the factory floor helping manufacturing and industrial leaders transform their operations with frontline-friendly software and AI. Roshan Satish is the lead product manager for applied AI at MaintainX, leading the development of AI-powered maintenance tools including MaintainX Copilot.
Below is an excerpt from the podcast:
PS: The theme of today's podcast, the Great Question, is: what’s it going to take to make AI useful for frontline teams? Nick, my first question is for you: why should building an effective AI strategy start with the people who will actually be using it, as opposed to anyone else in the company?
NH: Yeah, it’s a great question, Tom. It’s something I’m talking about with leaders across the ecosystem today, trying to help them understand. What I’m starting to see is people falling into the same trap they fell into with the first digital transformation, right? Which is that you’re buying tools for the folks on the carpeted side of the house and not thinking about the folks on the concrete side of the house.
If you don't think about how these tools are actually going to be used by your frontline folks, how the signals and information being generated make it into the hands of someone who has to act on them and turn a wrench or push a button or reset a circuit board, you’re missing the whole plot all over again. This is what we’ve seen with the first digital transformation wave: there’s been a lot of progress with new technology across the board, and in the manufacturing space specifically, but there’s no other industry where you see such a complete juxtaposition of analog workflows and digital workflows. Even on the floor, you’ve got this incredible automation, this incredible machinery, and yet people are still using whiteboards, Post-It notes, clipboards, radios, and paper-heavy processes.
So you have this opportunity to reset with AI and rethink the strategy from first principles. Hopefully, folks can learn from the mistakes that were made with the first wave of digital transformation and focus on the end user as a core stakeholder in these conversations.
PS: Do people on the front line, as you're talking about, know the areas of their workflow or daily process that could benefit most from AI? Or is this something that people on the carpeted side might have more exposure to?
NH: I think today the learning curve is pretty similar in terms of the maturity level of understanding. That creates a great opportunity for folks like us, and like yourself, to try to get ahead of it. Folks have been doing things the same way for the last 25, 30, or 40 years in these environments, and they’ve had glimmers of hope that maybe something will work, but they’ve been burned before, so they might be a little more resistant to some of these changes than if this were the first time technology was being introduced to them.
I think the starting point is trying to understand: where are your bottlenecks? What are the things creating challenges today? One of the lines I use when talking to frontline workers is, "You didn’t get excited about working in this field because you love doing paperwork and submitting POs. What really got you excited about it?" They like to solve problems, they like to work with their hands, they like knowing what’s happening. So I ask, "What are the things that don’t bring you joy in your day-to-day workflow?" and start highlighting those examples.
I don’t need them to go full circle and tell me how AI is going to change that for them. But that helps bring up challenges and issues, and elevates them to people who can connect the dots and surface the right problems so we have a good starting point. Any leader today can slip on a pair of steel toes, walk out on the floor, and ask their team those questions right now, and they’ll probably learn more about their business and bottlenecks, or hear a perspective they hadn’t considered. They can get that information pretty quickly.
That would be my advice on how to get those workflow opportunities out. I’m not asking the frontline teams to come up with AI solutions, but I’m asking them to surface the problems so we can work with them on that front.
PS: Part of building an effective AI strategy always includes identifying and developing champions on the teams, those who, from a cultural perspective, not only embrace the technology but lead others to it. What are some things that you both have seen in terms of how organizations sort out those champions from the skeptics?
NH: There are two stakeholders here; this is a multi-stakeholder conversation. When you're thinking about an ERP transformation, you’ll work very closely with your CFO and CIO, right, and one of them will really take the mantle and run with it. But when you’re talking about AI initiatives, you need someone who is excited about surfacing opportunities from the floor, from the frontline. I’ve seen that be just an AI-enthusiastic technician who’s geeking out with ChatGPT on the weekend and is constantly thinking, "It would be so cool if we could try this and that." Maybe they don’t understand how the technology fully works, but they’ve seen things in their personal exploration that made them say "wow," so they’re willing to be adventurous enough to provide some insights and be a champion once those experiments start to take place.
On the other side, you need someone who’s IT-facing, OT-facing. The folks I’m seeing get really excited about it today are people on the controls engineering side, because they have access to this treasure trove of data, and they’re pretty confident it’s meaningful, which is why they spend so much time aggregating it. They know they’re just scratching the surface of what’s possible. And now that they're starting to see the potential of AI, the promise of what it can do, a lot of them are doing that Judge Judy wrist-tapping motion: "When can I just throw this at my data set and start to do something that I know it can do?"
And then there are the IT architecture folks. They're the ones who have an obligation to become more educated about the ecosystem and understand how to identify, hey, we should not be using the free version of ChatGPT, and to set progressive policies around data security. You don't want to be so restrictive that you're saying "no AI," but you also want to be able to take advantage and experiment with things that can give you opportunities to learn. I'm not asking those folks to open the floodgates and let every team go wild. It's about finding some strategic projects and opportunities to start to build trust across the rest of the organization, and letting that person be the guiding-light champion who's saying, "I’m holding the door, I’m watching to make sure that we're not going crazy here." They don't necessarily need to own it, but you need a champion on the security side to help everyone feel better about it. Without that leader involved, we aren't able to empower those other two folks who really want to drive.
RS: I completely agree with that, Nick. Another side I see, beyond the initial adoption or the intention to buy the AI, is that for the managers I work with and the technicians I speak to most often, there needs to be someone who encourages adoption and builds the habit. It's one thing to have an AI tool in the first place. It's another to have the instinct to use it in your workflows. So I think especially with more tenured folks who are used to a particular practice or workflow, it helps to have somebody who champions the usage of the tool and reminds people, "We have this new way of managing information and this new way of recording notes and mitigations to different issues, so make sure you remember to actually use it." I've seen this take a lot of different forms, even as formal as building work orders that specifically ask you to use AI at the end of your day to record what happened on that given day.
PS: That leads into our next question, maybe Roshan you can start off this one too: what are some of the ways that you're seeing manufacturers use AI in general to tackle problems? What are the opportunities that are surfacing, which people are already addressing with AI?
RS: Honestly, there's a diverse range, that's for sure. I think often of the maturity ladder, with different companies adopting different tools and having different stacks. But at the end of the day, the ones I like the most are often the simplest and the most at-hand. People are starting to record what happens and actually track resolutions much more actively, where before tools like MaintainX and a CMMS were used as a task management platform, not an information tracking system.
So in a lot of these cases it’s smaller features that say, "You might have incorrectly entered this data point," or that ask what actually happened to solve the issue you were flagging in this work order. Those are the ones I see most consistently adopted.
On the other end of the spectrum, of course, at a facility or organizational level, a lot of people are getting value from more intuitive reporting and more intuitive analytics, trying to understand what's happening in an actionable way. I think a lot of products in the past have built around complexity, charts with a lot of different lines and a lot of different knobs. But a transition I really appreciate is the move toward actionability of those insights. Now that I see a chart, give me an interpretation or a recommendation on what to actually change that might move the needle here.
PS: Nick, what are some of the use cases you're seeing out there?
NH: I'll start with the really simple stuff that people don't necessarily think of as AI, like simple voice transcription. If you're speaking and doing voice-to-text on your phone, that’s running some relatively simple AI models that help it understand what words you're saying, maybe adding some contextual elements that help it decide between words that sound really similar. That's an area where I see it helping to enable communication.
Now in the more sophisticated areas, where more digitally mature organizations maybe do have the resources in-house to be a bit more aggressive, there's a temptation to boil the ocean, to try to solve a lot of problems and throw everything into some giant model and see what happens. The ones I'm seeing early signs of success with are folks taking very discrete processes, very narrow, small problems that they have a lot of data around, where they can go in and fine-tune a model to pull out the predictive components. I know folks who have really high-trust models today that can predict failures on very specific processes at 98% confidence, enough that there's not a lot of noise and they're willing to push that to production. It only applies to a very, very small subset, one very specific process on one of their lines, but that's working.
PS: Yeah, I’ve talked to technicians who simply aren't comfortable writing, and to have AI prompts help them create a job plan or create a better job plan, sometimes it's that simple.
NH: Those are the sorts of ways you gently introduce folks to this technology as well. The progression of AI shouldn't be some massive new step function. It should be things that gently introduce easier ways of working, the ones you'll reach for: "I’m definitely not going to forget how that button works, that’s the button that saves me time and makes my life easier." Those are the things that stick.
RS: 100%. Just to echo the product side of this, I really like Nick's point on habit forming, building the comfort and the intuition around these use cases to the point that they blend into the background. I think over time you forget the AI tool is even there, and that's actually the ideal state, rather than something incredibly flashy that you have to go out of your way to use. Once you build this foundation it gets more and more useful, but a little quieter over time. It needs less input and more just validation, and starts to get ahead of the task you're going to do next.
PS: That's a great point. It's a variation of when something becomes inevitable, you know it's here to stay; it has to feel inevitable, feel like part of the natural workflow. Roshan, maybe we can take a deeper dive into MaintainX Copilot. It was released recently, I want to say February 2025. Could you talk a little bit about Copilot, what problems it was built to solve, or the problems it is currently solving?
RS: Copilot is envisioned as a platform at the end of the day, kind of to the point that we were discussing a minute ago. We want something that is at people's fingertips and that is intuitive and easy to use and shows up in more places over time. But the initial problem we wanted to solve was a very simple one. It's just information management at the end of the day.
One of the biggest hurdles we noticed from talking to technicians and managers was that in a lot of these more mission-critical cases, in the situations that cause downtime, people need to move very quickly, and they often do, but there's not a ton of resolution tracking. There's not a ton of information at hand when you're solving an issue you've never seen before. So Copilot at first just wants to pull information from your work history and your OEM manuals and answer questions at that point in the workflow, where you get a work order you've never seen before, it's highly critical, but you don't know how to complete it. Copilot gives you the ability to ask questions about essentially anything: the parts you need, the repair steps, the validation or testing once you've actually completed something, and it keeps that information. It builds over time into a more and more comprehensive knowledge base that you can interact with right from the context of work order creation or work order completion, for both managers and technicians, whether they're building the thing in the first place or trying to complete it and track what exactly happened.
PS: This goes back to your point about AI being useful for identifying incorrect or outlier data points. This is assisting frontline teams in creating a more complete work history and filling in the gaps in these work orders.
RS: Exactly the case, and a lot of this was built with the end user in mind, even down to our rollout patterns. We worked with a handful of pilot customers over the course of many months, had Q&A sessions with all these different personas, and tried to understand: which pieces of this are intuitive? Which actions do you want to complete beyond just finding the information in the first place? Where does this go? What form is it most useful in? Is it just words? Is it diagrams as well? We built the product step by step in that way, trying to understand not what the most ambitious feature we could build is, but what the most continuously useful feature is and what it looks like.
PS: I just got back from a maintenance conference in Tennessee and the conversation there was pretty positive about AI, but there were still a couple of people who brought up the old hallucination issue that was famous about a year and a half ago. How does MaintainX prevent this hallucination issue from creeping into the workflow?