
ENTERPRISE AI ANALYSIS

How to Design an LCS to Create Explainable AI Models for Real-World Applications

With the ever-increasing capabilities of modern AI systems comes growing interest among non-technical stakeholders in employing "AI" to improve their existing systems and workflows. This interest is especially pronounced in industrial settings such as manufacturing, where the use of AI has historically been limited by challenges in gathering suitable data. However, recent efforts to automate machinery have increased the amount of usable data, which, paired with the wish of some stakeholders to automate through "AI", opens up new applications. Nevertheless, many stakeholders, especially those who interact with the system on a daily basis, will not sufficiently trust AI models, hindering their adoption. This issue can be alleviated by using explainable models that create trust through their transparency rather than solely through statistical evaluations. In this extended abstract, past work on determining the specific requirements of various stakeholder groups on model structure is reintroduced, and one result from a real-world case study is discussed. Additionally, an approach to designing a Learning Classifier System that delivers such models is highlighted.

Transforming Enterprise with Explainable AI

Our analysis reveals how Explainable AI, particularly through advanced Learning Classifier Systems like SupRB, can deliver significant operational improvements and foster greater trust in AI deployments across industrial settings.


Deep Analysis & Enterprise Applications


Modern AI systems, particularly in industrial contexts, face adoption hurdles due to a lack of trust from non-technical stakeholders. Explainable AI (XAI) addresses this by providing transparency into model behavior, moving beyond mere statistical evaluations to foster confidence and enable informed decision-making. Utilizing inherently interpretable models, such as rule-based systems, is a core strategy for achieving XAI, ensuring that the 'why' behind an AI's decision is as clear as the 'what'.

Learning Classifier Systems (LCSs) are a prominent evolutionary computation approach for creating rule-based models. Their long research history makes them an obvious choice for generating transparent and human-readable AI. However, achieving true explainability with LCSs requires careful consideration of rule structure, model size, and the number of rules participating in predictions. Systems like SupRB are specifically designed to meet these stringent explainability requirements, producing significantly smaller and more interpretable models than traditional LCSs or other ML methods.
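
To make this rule structure concrete, the following is a minimal sketch of what such a rule-based model can look like: each rule pairs a hyperrectangular condition with a simple linear local model, and a prediction is formed only from the rules that match the input. The class names and the unweighted mixing of matching rules are illustrative assumptions, not the actual SupRB implementation, which weights and optimizes rules far more carefully.

```python
# Minimal sketch of a rule-based model as produced by an LCS such as SupRB.
# Names (Rule, RuleBasedModel) and the mixing scheme are illustrative, not the
# real SupRB API; actual systems weight matching rules, e.g. by estimated error.
from dataclasses import dataclass
import numpy as np

@dataclass
class Rule:
    lower: np.ndarray        # lower bounds of the hyperrectangular condition
    upper: np.ndarray        # upper bounds of the hyperrectangular condition
    coef: np.ndarray         # coefficients of the linear local model
    intercept: float

    def matches(self, x: np.ndarray) -> bool:
        # A rule participates in a prediction only if the input lies
        # inside its hyperrectangle.
        return bool(np.all((self.lower <= x) & (x <= self.upper)))

    def predict(self, x: np.ndarray) -> float:
        return float(self.coef @ x + self.intercept)

class RuleBasedModel:
    def __init__(self, rules: list[Rule]):
        self.rules = rules   # ideally only tens of rules, for explainability

    def predict(self, x: np.ndarray) -> float:
        matching = [r for r in self.rules if r.matches(x)]
        if not matching:
            return 0.0       # fall back to a default prediction
        # Unweighted average of the matching rules' local models.
        return float(np.mean([r.predict(x) for r in matching]))
```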

Determining explainability requirements is highly application- and user-specific. A case study in plastic extrusion manufacturing revealed that stakeholders prioritize small models (ideally under 100 rules), simple rule conditions (ternary or hyperrectangular), and constant or linear local models with few non-trivial coefficients. Explanations should primarily cover the rules matching a given input rather than the full rule set, delivered as short sentences to operators and as more comprehensive text-, data-, and graph-based explanations to engineers.
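
As an illustration of the operator-facing explanations described above, the sketch below turns the rules matching a given input into short sentences. The feature names, thresholds, and wording are hypothetical and not taken from the case study.

```python
# Hypothetical sketch of turning the rules that match a given input into a
# short, operator-facing sentence; feature names, thresholds, and wording are
# invented for illustration and are not taken from the case study.

def explain_matching_rules(x: dict[str, float], rules: list[dict]) -> str:
    """Build one short sentence per matching rule instead of dumping the full rule set."""
    sentences = []
    for rule in rules:
        bounds = rule["condition"]  # feature -> (low, high) hyperrectangle bounds
        if all(low <= x[f] <= high for f, (low, high) in bounds.items()):
            cond = " and ".join(
                f"{f} is between {low} and {high}" for f, (low, high) in bounds.items()
            )
            sentences.append(f"Because {cond}, the predicted quality is {rule['prediction']}.")
    return " ".join(sentences) if sentences else "No rule matches this operating point."

# Example with two invented rules for a plastic extrusion setting:
rules = [
    {"condition": {"melt_temp": (180.0, 200.0), "screw_speed": (40.0, 60.0)},
     "prediction": "within tolerance"},
    {"condition": {"melt_temp": (200.0, 230.0)}, "prediction": "at risk of warping"},
]
print(explain_matching_rules({"melt_temp": 195.0, "screw_speed": 55.0}, rules))
```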

90% Reduction in Model Size with SupRB compared to XCSF

Enterprise Process Flow

Define Stakeholder Explainability Needs
Design LCS for Target Interpretability
Iterate Rule Discovery (RD)
Optimize Solution Composition (SC)
Deploy & Monitor Explainable AI Model
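
The two middle steps of this flow, rule discovery (RD) and solution composition (SC), can be pictured as the alternating loop sketched below. The function names and parameters are placeholders that only mirror the overall two-phase structure; they are not SupRB's actual API.

```python
# Schematic sketch of the "Iterate Rule Discovery / Optimize Solution
# Composition" steps above, loosely following a SupRB-style two-phase loop.
# discover_rules and compose_solution are placeholders, not the real SupRB API.
import random

def discover_rules(X, y, n_new: int) -> list:
    # Placeholder: an evolutionary rule discovery step would localise new
    # rules around poorly covered training examples and fit their local models.
    return [f"rule_{random.randrange(10_000)}" for _ in range(n_new)]

def compose_solution(pool: list, max_rules: int) -> list:
    # Placeholder: a metaheuristic (e.g. a genetic algorithm) would select a
    # small, well-performing subset of the pool; here we simply cap its size.
    return pool[:max_rules]

def train(X, y, n_iter: int = 32, rules_per_iter: int = 4, max_rules: int = 100):
    pool: list = []           # all rules discovered so far
    solution: list = []       # the compact, explainable subset actually deployed
    for _ in range(n_iter):
        pool += discover_rules(X, y, rules_per_iter)       # RD phase
        solution = compose_solution(pool, max_rules)       # SC phase
    return solution           # ideally well under 100 rules, per the case study
```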

Explainable Model Capabilities

Feature comparison: Traditional ML (e.g., Random Forests) vs. SupRB LCS

Model Transparency
  • Traditional ML: Black-box, or partial explanation via post-hoc methods
  • SupRB LCS: Inherently interpretable rule-based models with small, human-readable rule sets

Trust & Adoption
  • Traditional ML: Lower trust due to opacity, hindering adoption
  • SupRB LCS: High trust through transparency, accelerating adoption

Performance on Constrained Models
  • Traditional ML: Often sacrifices accuracy for interpretability
  • SupRB LCS: Achieves competitive performance with highly constrained, explainable models

Rule Set Size
  • Traditional ML: Hundreds to thousands of decision nodes/trees
  • SupRB LCS: Tens to ~100 rules, optimized for conciseness

Local Model Simplicity
  • Traditional ML: Complex splitting criteria and nested logic
  • SupRB LCS: Simple conditions (ternary/hyperrectangular), constant or linear local models

Industrial AI Adoption: The Plastic Extrusion Plant

In a real-world case study, a plastic extrusion manufacturing plant aimed to automate its processes with AI for product quality prediction. Initial reluctance from daily operators to trust 'black-box' AI models highlighted the critical need for explainability. The implementation of a SupRB-based system, designed to meet stakeholder requirements for transparent models, significantly improved trust. This led to a greater willingness among operators and engineers to integrate AI assistance into their workflows, demonstrating that increased explainability directly correlates with higher practical adoption rates and improved operational efficiency.

Advanced ROI Calculator

Estimate the potential financial savings and reclaimed hours for your enterprise by integrating Explainable AI.
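
For orientation, the sketch below shows the kind of back-of-the-envelope arithmetic such a calculator performs; the input values and the simple formula are assumptions for illustration, not figures from the analysis.

```python
# Hypothetical back-of-the-envelope ROI estimate; the inputs and the formula
# are assumptions for illustration, not figures from the analysis.

def estimate_roi(operators: int, hours_saved_per_week: float,
                 hourly_cost: float, weeks_per_year: int = 48) -> tuple[float, float]:
    """Return (annual hours reclaimed, annual savings) for an XAI deployment."""
    hours = operators * hours_saved_per_week * weeks_per_year
    return hours, hours * hourly_cost

hours, savings = estimate_roi(operators=12, hours_saved_per_week=2.5, hourly_cost=45.0)
print(f"Annual hours reclaimed: {hours:,.0f}; estimated annual savings: ${savings:,.0f}")
```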


Your AI Implementation Roadmap

Our structured approach ensures a seamless transition to AI-driven operations, maximizing value at every stage.

Discovery & Requirements

Engage stakeholders to define specific explainability needs and initial AI integration goals.

Duration: 2-4 Weeks

Model Design & Development

Develop custom LCS (e.g., SupRB) architecture, focusing on rule structure and interpretability.

Duration: 6-10 Weeks

Data Integration & Training

Clean, prepare, and integrate enterprise data for model training; iterate on model parameters.

Duration: 4-8 Weeks

Pilot Deployment & Validation

Deploy explainable AI model in a controlled pilot environment; gather feedback and validate performance.

Duration: 3-6 Weeks

Full-Scale Rollout & Monitoring

Scale deployment across the enterprise; establish continuous monitoring for performance and trust.

Duration: Ongoing

Ready to Unlock Your Enterprise AI Potential?

Schedule a personalized consultation with our experts to discuss your unique challenges and discover how explainable AI can drive tangible results for your organization.

Ready to Get Started?

Book Your Free Consultation.
