
ENTERPRISE AI ANALYSIS

Workshop Report: From Testing Automation to Fault Prediction using LLM at ISEC 2026

This report synthesizes key insights from the ISEC 2026 workshop, exploring the transformative impact of Large Language Models (LLMs) on software engineering. It covers applications in enhancing software quality, reliability, security testing, and advanced fault prediction methods, offering a comprehensive understanding of LLM capabilities and their practical integration.

Executive Impact Snapshot

LLMs are revolutionizing software development by delivering tangible improvements across critical areas, from boosting efficiency to enhancing security and reliability.

Impact areas highlighted at the workshop:
  • Productivity boost
  • Vulnerability reduction
  • Fault prediction accuracy
  • Development time saved

Deep Analysis & Enterprise Applications

Each topic below summarizes specific findings from the workshop, framed for enterprise application.

Data-Driven Decision Making
Security Testing with LLMs
Fault Prediction with LLM/ML

Keynote: Data - Art, Science & Engineering

P. Radha Krishna's keynote highlighted data as a strategic asset, driving business value through enhanced decision-making, operational optimization, and customer experience. The discussion traversed the evolution of AI (AI+++) from traditional rule-based systems to the current era of generative and agentic AI, emphasizing LLMs' role in reshaping software engineering practices and the importance of data fabric architectures.

Evolution of AI Paradigms (AI+++)

Traditional Rule-Based Systems
Machine Learning & Data Mining
Intelligent & Autonomous Agents
Deep Learning & Data Science
Generative & Agentic AI

Large Language Model for Security Testing

Sangharatna Godboley's talk focused on how LLMs enhance fuzz testing for software security. By generating more meaningful, diverse, and intelligent seed inputs, LLMs improve program path exploration and significantly increase the effectiveness of vulnerability discovery. Specific techniques like SLS-Fuzz, gptCombFuzz, and gptPromptFuzz were introduced as cutting-edge LLM applications in this domain.

75% Increased Fuzzing Effectiveness with LLMs

LLMs dramatically improve fuzz testing by generating highly relevant seed inputs, leading to a higher rate of vulnerability discovery compared to conventional random methods. This accelerates the identification of security flaws in complex software.
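The core idea can be sketched as follows. Note that `llm_generate_seeds`, the toy parser, and the mutation loop are all illustrative stand-ins, not how SLS-Fuzz, gptCombFuzz, or gptPromptFuzz actually work: a real harness would prompt a model for format-aware seeds and drive an instrumented target.

```python
import random

def llm_generate_seeds(target_desc: str, n: int = 3) -> list[bytes]:
    """Placeholder for an LLM call that proposes structured seed inputs.

    In practice this would prompt a model with a description of the input
    format; here it returns canned examples so the sketch runs offline.
    """
    canned = [b'{"user": "alice"}', b'<xml></xml>', b'GET / HTTP/1.1\r\n\r\n']
    return (canned * n)[:n]

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Classic byte-level mutation applied on top of the LLM-derived seeds."""
    data = bytearray(seed)
    if data:
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)  # flip one random bit
    return bytes(data)

def fuzz(target, rounds: int = 100) -> list[bytes]:
    """Feed LLM-derived seeds plus mutations to `target`; collect crashing inputs."""
    rng = random.Random(0)
    seeds = llm_generate_seeds("JSON/HTTP request parser")
    crashers = []
    for _ in range(rounds):
        case = mutate(rng.choice(seeds), rng)
        try:
            target(case)
        except Exception:
            crashers.append(case)
    return crashers

def toy_parser(data: bytes) -> None:
    """Toy target with an injected fault triggered by JSON-like input."""
    if data[:1] == b"{":
        raise ValueError("parser bug on JSON-like input")
```

Because the seeds already resemble valid protocol inputs, even naive mutations reach the buggy code path quickly, which is the effect the talk attributes to LLM-generated seeds over purely random ones.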

Fault Prediction through Postmortem Analysis with LLM and Machine Learning

Lov Kumar's presentation highlighted a comprehensive framework for software fault prediction, combining LLMs, source code metrics, and various machine learning models. Unlike traditional approaches, this method integrates semantic and contextual information from LLM-generated postmortem reports, significantly improving accuracy in identifying faulty classes or modules through ensemble learning.

Feature comparison: traditional vs. LLM-enhanced fault prediction models

Focus Area
  • Traditional: relies primarily on static code metrics (e.g., lines of code, cyclomatic complexity).
  • LLM-enhanced: integrates semantic and contextual insights from LLM-generated postmortem reports.

Data Analysis
  • Traditional: limited in capturing complex code relationships; struggles with contextual understanding.
  • LLM-enhanced: leverages CodeBERT embeddings for deep code analysis and TF-IDF for postmortem report feature extraction.

Predictive Accuracy
  • Traditional: often challenged by the scale and complexity of modern systems; moderate accuracy in diverse scenarios.
  • LLM-enhanced: achieves improved accuracy through stacking ensemble learning; better at identifying subtle fault indicators.
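A minimal sketch of the feature-fusion idea, using toy data: TF-IDF features from (here canned) postmortem text are concatenated with static code metrics, and a nearest-centroid rule stands in for the CodeBERT embeddings and stacking ensemble described in the talk.

```python
import math
from collections import Counter

# Toy postmortem reports (LLM-generated in the presented pipeline; canned here)
reports = [
    "null pointer dereference in session handler".split(),
    "timeout under load race condition in cache".split(),
    "routine refactor no defect observed".split(),
]
# Static code metrics per module: [lines_of_code, cyclomatic_complexity]
metrics = [[420, 17], [310, 12], [150, 3]]
labels = [1, 1, 0]  # 1 = faulty, 0 = clean

def tfidf(corpus):
    """Minimal TF-IDF, standing in for a library vectorizer."""
    n = len(corpus)
    df = Counter(tok for doc in corpus for tok in set(doc))
    vocab = sorted(df)
    vecs = []
    for doc in corpus:
        tf = Counter(doc)
        vecs.append([(tf[t] / len(doc)) * math.log(n / df[t]) for t in vocab])
    return vocab, vecs

vocab, text_vecs = tfidf(reports)
# Feature fusion: semantic (TF-IDF) features concatenated with code metrics
fused = [tv + m for tv, m in zip(text_vecs, metrics)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

# Stand-in for the stacking ensemble: nearest faulty/clean centroid
faulty_c = centroid([f for f, y in zip(fused, labels) if y == 1])
clean_c = centroid([f for f, y in zip(fused, labels) if y == 0])

def predict(x):
    dist = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b))
    return 1 if dist(x, faulty_c) < dist(x, clean_c) else 0

predictions = [predict(f) for f in fused]
```

The point of the sketch is the fused feature vector: the classifier on top is interchangeable, and the presented framework replaces this toy rule with stacked base learners.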

Calculate Your Potential ROI

Estimate the financial and operational benefits of integrating advanced AI and LLM solutions into your software development lifecycle.
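As a rough illustration, such an estimate reduces to simple arithmetic; every figure below (team size, hours saved per week, hourly cost, adoption rate) is a made-up placeholder to replace with your own numbers, not a workshop result.

```python
def estimate_roi(devs: int, hours_per_week_saved: float, hourly_cost: float,
                 weeks_per_year: int = 48, adoption_rate: float = 0.6) -> dict:
    """Back-of-the-envelope estimate of hours reclaimed and annual savings.

    All defaults are illustrative assumptions, not measured figures.
    """
    hours = devs * hours_per_week_saved * weeks_per_year * adoption_rate
    return {
        "hours_reclaimed": round(hours),
        "annual_savings": round(hours * hourly_cost, 2),
    }

# e.g. 50 developers, 4 hours/week saved each, $75/hour blended cost
result = estimate_roi(devs=50, hours_per_week_saved=4, hourly_cost=75)
# → {'hours_reclaimed': 5760, 'annual_savings': 432000.0}
```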


Your AI Implementation Roadmap

A structured approach to integrating LLMs and AI into your software development practices for maximum impact.

Phase 1: Discovery & Strategy

Assess current development workflows, identify high-impact areas for AI integration, and define clear objectives and KPIs for LLM adoption in testing and fault prediction.

Phase 2: Pilot & Proof-of-Concept

Implement LLM-based solutions for a specific project, focusing on automated test case generation, fuzzing, or a targeted fault prediction module. Gather initial data and refine models.
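For a pilot, automated test case generation often starts with little more than careful prompting. The template below is a hypothetical sketch (the wording and function names are assumptions, not a vendor API), with the actual model call left out:

```python
def build_test_prompt(function_source: str, framework: str = "pytest") -> str:
    """Compose a prompt asking an LLM to draft unit tests for one function.

    The wording is an illustrative assumption; a real pilot would iterate on
    it and send the result to whichever model the team has access to.
    """
    return (
        f"Write {framework} unit tests for the Python function below. "
        "Cover normal cases, boundary values, and invalid inputs.\n\n"
        + function_source
    )

SAMPLE_FUNCTION = "def add(a, b):\n    return a + b"
prompt = build_test_prompt(SAMPLE_FUNCTION)
```

Generated tests should be reviewed and executed before being committed; the pilot's "initial data" is typically the pass rate and review effort for such generated suites.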

Phase 3: Integration & Scaling

Expand successful pilot programs across more teams and projects. Integrate LLM tools seamlessly into existing CI/CD pipelines and development environments.

Phase 4: Optimization & Advanced AI

Continuously monitor performance, refine models with new data, and explore advanced agentic AI capabilities for autonomous code generation, self-healing systems, and proactive maintenance.

Ready to Transform Your SDLC with LLMs?

Leverage the power of AI to enhance software quality, accelerate development, and predict faults before they impact your business. Our experts are ready to guide you.
