Maintenance Mindset: Integrating AI and human expertise for effective oil analysis

Feb. 26, 2025
AI can address surface-level efficiency, but it requires human insight to ensure it is used ethically and responsibly.

Welcome to Maintenance Mindset, our editors’ takes on things going on in the worlds of manufacturing and asset management that deserve some extra attention. This will appear regularly in the Member’s Only section of the site. This week's column features guest contributor Michael D. Holloway, President of 5th Order Industry, writing part 3 of a 3-part series on AI and oil analysis. 

They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
But the man that is will shadow
The man that pretends to be.
T.S. Eliot

By now it should be apparent that I like to mix science and philosophy. This is what happens when you earn degrees in chemistry and philosophy! The above quote from T.S. Eliot’s "The Rock" can be interpreted as a cautionary reflection on humanity’s overreliance on systems, technologies, or constructs to address profound moral and existential challenges, while neglecting the foundational role of individual responsibility, ethics, and integrity. When juxtaposed with the notion that artificial intelligence (AI) can significantly enhance diagnostics in lubricating oil analysis and condition monitoring, an interesting philosophical tension emerges.

Dreaming of systems so perfect

Eliot critiques the human tendency to design systems so flawless that they appear to eliminate the need for goodness or moral effort. Similarly, the deployment of AI in lubricating oil analysis seeks to create a system of near-perfect accuracy, efficiency, and decision-making. This reflects a broader cultural faith in technology to "solve" human challenges without the need for subjective human judgment or oversight.

However, this raises the question: can we truly rely on AI alone? While AI systems excel at processing data and identifying trends, they cannot replace the responsibility and domain-specific expertise of human operators. Much like Eliot’s warning, the pursuit of perfect systems should not lead us to neglect the role of individual accountability in interpreting results, making decisions, and implementing actions.

Darkness outside and within

In Eliot’s view, humanity’s desire to escape external and internal challenges often leads to misplaced faith in external constructs. In the context of AI, this could be seen as an overreliance on technology to handle complex maintenance tasks, while ignoring deeper systemic issues such as poor training, inadequate infrastructure, or lack of sustainability practices in the industry.

AI can address surface-level efficiency, but it requires human insight to ensure it is used ethically and responsibly. For instance, using AI to optimize analysis diagnostics is powerful, but without a commitment to applying its recommendations properly, backed by trained people and sound maintenance practices, the effort will run into trouble.

The man that is shadows the man that pretends to be

Eliot warns that human authenticity and limitations will always persist beneath the facade of perfection created by systems or technologies. Similarly, while AI might create the appearance of a seamless, objective process, its effectiveness is contingent on human-designed algorithms, high-quality data, and ongoing oversight. The risks of flawed implementation, biases in data, or misinterpretation of AI outputs demonstrate that the “man that is” still casts a shadow over the technological systems we create.

In oil analysis, for example, AI might detect anomalies or predict failures, but if the foundational data is flawed, or the recommendations are applied without understanding the operational context, errors can still occur. This reinforces the need for human expertise to complement and guide AI systems. How many diagnosticians will reject a result because they “know” the analysis was faulty? The buck stops with the diagnostician.

Bridging the philosophical gap

Eliot’s quote does not argue against systems outright but warns of a misplaced reliance on them to the exclusion of human moral and intellectual effort. Similarly, AI in oil analysis and condition monitoring is not a replacement for human involvement but a tool to enhance it. The interplay between human oversight and technological efficiency is crucial:

  • AI as an Enabler, Not a Replacement: AI’s ability to process data and detect patterns is invaluable, but it cannot function independently of human validation and judgment.
  • Responsibility Beyond Systems: Maintenance strategies should not rely solely on AI but also incorporate training, ethical considerations, and proactive planning to address systemic challenges.
  • Technology and Goodness Together: The ultimate goal of integrating AI should be to amplify the human capacity for good—improving sustainability, safety, and operational reliability—not to remove human responsibility altogether.

Eliot’s insights resonate with the modern adoption of AI in industries. They serve as a reminder that while systems like AI can enhance efficiency and decision-making, they must be implemented thoughtfully, with attention to the broader ethical, social, and environmental context. True progress lies in balancing technological innovation with human wisdom and moral accountability, ensuring that the “man that is” collaborates with, rather than hides behind, the systems we create.

In conclusion: Fixing the AI / oil condition monitoring quandary

The pitfalls of utilizing AI instead of human diagnosticians include the risk of over-reliance on algorithms, which can lead to errors if the input data is flawed, incomplete, or biased. As outlined in Part 1 of this series, AI lacks the contextual understanding, intuition, and ethical judgment that human experts bring, potentially resulting in decisions that overlook critical nuances. Moreover, it may foster complacency, diminishing the development of human expertise. 

However, industry experts cite the potential of AI to process vast amounts of data with speed and precision, identify patterns beyond human capability, and provide predictive insights that enable proactive decision-making (see Part 2 of this series). When used as a tool to augment rather than replace human diagnosticians, AI can create a powerful synergy, enhancing accuracy, efficiency, and overall outcomes.

Overcoming the pitfalls of AI and utilizing it effectively requires a balanced approach that integrates human expertise with technological capabilities. Here’s how:

  1. Maintain Human Oversight: Ensure experienced professionals review AI outputs, providing context, ethical judgment, and critical thinking that AI lacks.
  2. Improve Data Quality: Use high-quality, standardized, and diverse datasets to train AI systems, minimizing biases and inaccuracies.
  3. Combine Strengths: Leverage AI for tasks like pattern recognition and predictive analytics, while relying on humans for nuanced decision-making and problem-solving.
  4. Continuous Monitoring and Updates: Regularly update AI algorithms to adapt to changing conditions, incorporate feedback, and align with evolving best practices.
  5. Education and Training: Equip human diagnosticians with knowledge about AI capabilities and limitations, fostering collaboration between humans and machines.
  6. Transparency and Accountability: Design AI systems with clear processes and explainable outputs, ensuring users understand how decisions are made and can address errors effectively.

By treating AI as an assistive tool rather than a replacement, organizations can harness its potential while safeguarding against its limitations, achieving more robust and reliable outcomes.
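To make that division of labor concrete, here is a minimal sketch in Python of a human-in-the-loop screening step for oil analysis results. A simple z-score check against a unit's own sample history stands in for a more capable model; the sample names, the two parameters tracked, and the 3-sigma threshold are hypothetical illustrations, and nothing flagged is acted on until a diagnostician confirms it.

# Minimal sketch (not production code) of "AI as an assistive tool" for oil analysis:
# a simple statistical screen flags unusual samples, and every flag is routed to a
# human diagnostician for review before any maintenance action is taken.
# All names, readings, and thresholds below are hypothetical.

from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class OilSample:
    sample_id: str
    iron_ppm: float        # wear metal concentration, ppm
    viscosity_cst: float   # kinematic viscosity, cSt at 40 C

def flag_anomalies(history, new_sample, z_threshold=3.0):
    """Flag the sample if a reading deviates strongly from this unit's own history."""
    flags = []
    for attr in ("iron_ppm", "viscosity_cst"):
        values = [getattr(s, attr) for s in history]
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and abs(getattr(new_sample, attr) - mu) / sigma > z_threshold:
            flags.append(attr)
    return flags

def review_queue(history, new_samples):
    """The screen only flags; every flagged sample goes to a human diagnostician."""
    for sample in new_samples:
        flags = flag_anomalies(history, sample)
        if flags:
            print(f"{sample.sample_id}: flagged on {flags} -> route to diagnostician")
        else:
            print(f"{sample.sample_id}: within the established trend")

if __name__ == "__main__":
    # Hypothetical sample history and two new samples for one piece of equipment.
    history = [OilSample(f"H{i}", iron_ppm=12 + 0.5 * i, viscosity_cst=46 + 0.1 * i)
               for i in range(12)]
    new_samples = [
        OilSample("S101", iron_ppm=14.0, viscosity_cst=46.9),  # consistent with trend
        OilSample("S102", iron_ppm=85.0, viscosity_cst=52.0),  # abnormal wear reading
    ]
    review_queue(history, new_samples)

The point of the design is the routing, not the statistics: whatever screening model sits behind the flag, its output is a prompt for human review rather than an automated work order.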

About the Author

Michael D. Holloway

Michael D. Holloway is President of 5th Order Industry, which provides training, failure analysis, and designed experiments. He has 40 years of industry experience, beginning with research and product development for Olin Chemical, WR Grace, Rohm & Haas, and GE Plastics, followed by reliability engineering and analysis for NCH, ALS, and SGS. He is a subject matter expert in tribology, oil and failure analysis, reliability engineering, and designed experiments for science and engineering. He holds 16 professional certifications, a patent, an MS in Polymer Engineering, a BS in Chemistry, and a BA in Philosophy. He has authored 12 books and contributed to several others, and his work has been cited in over 1,000 manuscripts and several hundred master’s theses and doctoral dissertations.
