
Enterprise AI Analysis

LLM Agent-based Shilling Attack on Recommender Systems

Authors: Shengkang Gu, Jiahao Liu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Ning Gu, Li Shang, Peng Zhang, Tun Lu

Published: 21 February 2026 | DOI: https://doi.acm.org/10.1145/3773966.3777981

With the growing ubiquity of recommender systems (RSs), malicious manipulation through shilling attacks, where fake user profiles are injected to alter system outputs, poses increasing threats to system integrity. Existing attack methods often rely on simplified heuristics, require internal RS data, and most overlook user reviews, limiting their stealthiness, realism, and potential impact. Recently, LLM-based user agents are gaining traction in the RS community for their capabilities to simulate human behaviors like rating and review generation. In this context, we propose AgentSA, a low-knowledge shilling attack framework that employs such agents to manipulate recommendations through adversarial yet human-like interactions. We design targeted mechanisms to guide profile construction, memory retrieval, and action generation (including reviews) to maximize manipulation impact while maintaining behavioral camouflage. We evaluate the impact of these agents on various types of RSs and demonstrate that AgentSA consistently outperforms existing low-knowledge attack methods in both effectiveness and stealth. Our findings uncover a concerning new class of threats enabled by LLM-based agents, underscoring the pressing need to bolster RS security against such emerging risks.

Executive Impact Summary

Our in-depth analysis of 'LLM Agent-based Shilling Attack on Recommender Systems' reveals compelling metrics and strategic implications for enterprise AI adoption.

Total Citations: 0
Total Downloads: 110
Strategic Impact Score: 8.7
Novelty Score: 9.2
Potential Efficiency Gain: 35

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, presented as enterprise-focused modules.

Shilling Attacks

Shilling attacks manipulate recommender systems by injecting fake user profiles to alter recommendation outcomes. This research explores how LLM agents can be leveraged for sophisticated, human-like shilling attacks, presenting a new class of threats to RS integrity.

Traditional shilling attacks often rely on simplified heuristics and internal RS data, making them detectable. LLM-based agents, however, can simulate diverse and realistic user behaviors, including rating and review generation, making their attacks harder to detect and more effective in influencing recommendations.
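To make the contrast concrete, here is a minimal sketch of one such conventional heuristic, an "average attack" style profile: the target item gets the maximum rating while randomly chosen filler items are rated near their global averages. The function and variable names are illustrative, not from the paper.

```python
import random

def build_shilling_profile(target_item, item_avg_ratings, n_fillers=20, max_rating=5.0):
    """Sketch of an average-attack shilling profile: rate the target at the
    maximum, and rate filler items near their global averages so the fake
    profile superficially resembles an ordinary user."""
    candidates = [i for i in item_avg_ratings if i != target_item]
    fillers = random.sample(candidates, k=min(n_fillers, len(candidates)))
    profile = {target_item: max_rating}
    for item in fillers:
        noise = random.gauss(0, 0.3)  # small jitter so injected profiles differ
        profile[item] = min(max_rating, max(1.0, item_avg_ratings[item] + noise))
    return profile

avg_ratings = {f"item{i}": 3.0 + (i % 3) * 0.5 for i in range(50)}
profile = build_shilling_profile("item7", avg_ratings)
```

Because every injected profile follows the same rigid template (maximum-rated target plus statistically average fillers), profiles like these leave detectable rating-pattern signatures, which is exactly the weakness LLM-based agents avoid.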

LLM Agents

Large Language Models (LLMs) are increasingly used to simulate human behavior, including user interactions in recommender systems. This research leverages LLM-based agents to create adversarial user profiles capable of sophisticated shilling attacks.

LLM agents can infer personality traits, generate human-like ratings and reviews, and adapt their behavior to specific attack objectives. Their ability to produce semantically rich and nuanced content makes them particularly potent in review-aware recommender systems, posing a significant challenge for detection.
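A rough sketch of how such an agent could be structured, under assumptions of ours rather than the paper's design: the agent keeps a memory of past interactions and asks a language model (stubbed here with a plain function) for a persona-consistent rating and review. All class, function, and prompt details are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ShillingAgent:
    """Illustrative sketch of an LLM-driven attack agent: it maintains a
    memory of past actions and queries a language model for a
    persona-consistent rating and review."""
    persona: str
    llm: callable            # callable(prompt) -> str, e.g. an API wrapper
    memory: list = field(default_factory=list)

    def act(self, item, promote=False):
        goal = "praise it convincingly" if promote else "review it naturally"
        prompt = (
            f"You are {self.persona}. Recent actions: {self.memory[-3:]}. "
            f"For item '{item}', {goal}. Reply as '<rating 1-5>|<review>'."
        )
        rating_str, review = self.llm(prompt).split("|", 1)
        self.memory.append((item, rating_str))
        return int(rating_str), review.strip()

def fake_llm(prompt):
    # Stand-in for a real model call; always praises when asked to promote.
    return "5|A must-have." if "praise" in prompt else "3|Decent overall."

agent = ShillingAgent(persona="a budget-conscious reader", llm=fake_llm)
```

The key point the sketch illustrates is that the adversarial intent lives in the prompt and memory, not in any fixed rating template, so each injected profile produces distinct, semantically rich output.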

Recommender Systems Security

The ubiquity of recommender systems makes them prime targets for malicious manipulation. This research highlights emerging security threats posed by advanced LLM-based shilling attacks and the urgent need for robust countermeasures.

The study demonstrates that LLM agent-based attacks can consistently outperform existing low-knowledge methods in effectiveness and stealth. This necessitates a shift in defense strategies, moving beyond simple rating pattern analysis to scrutinize review content authenticity and protect vulnerable aspects like long-tail items and low-activity users.
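One simple direction for review-content scrutiny can be sketched as follows; this is a hypothetical heuristic of ours, not a defense proposed in the paper. It flags a user whose reviews are unusually self-similar (templated phrasing), using token-set Jaccard similarity:

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two review texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_suspicious_user(reviews, sim_threshold=0.6):
    """Hypothetical defense heuristic: flag a user whose reviews are highly
    self-similar across items. A production detector would also model
    rating deviation, timing, and review-item topical fit."""
    pairs = [(i, j) for i in range(len(reviews)) for j in range(i + 1, len(reviews))]
    if not pairs:
        return False
    avg_sim = sum(jaccard(reviews[i], reviews[j]) for i, j in pairs) / len(pairs)
    return avg_sim >= sim_threshold
```

A lexical heuristic like this would catch crude template reuse but likely not fluent LLM-generated reviews, which is precisely why the paper argues for stronger review-authenticity defenses.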

Enterprise Process Flow

1. Initialization
2. Target Item Interaction
3. Iterative Filler Item Interaction
4. Return Interaction Data
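The four-step flow above can be sketched as a single session loop. This is a minimal illustration under our own assumptions (a stub agent stands in for the LLM-based agent), not the paper's implementation.

```python
class StubAgent:
    """Stand-in for an LLM-based user agent; a real agent would query a
    model for persona-consistent ratings and reviews."""
    def act(self, item, promote=False):
        return (item, 5 if promote else 3)

def run_attack_session(agent, target_item, filler_items):
    """Sketch of the four-step process flow (all names illustrative)."""
    interactions = []                                          # 1. Initialization
    interactions.append(agent.act(target_item, promote=True))  # 2. Target item interaction
    for item in filler_items:                                  # 3. Iterative filler interactions
        interactions.append(agent.act(item))
    return interactions                                        # 4. Return interaction data

session = run_attack_session(StubAgent(), "target_book", ["filler_a", "filler_b"])
```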

Impact on Long-Tail Items

20% increase in HR@10 for long-tail items

AgentSA significantly boosts the rank and exposure of low-popularity (long-tail) items: their sparse interaction histories make them highly sensitive to even a few injected positives.
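The sensitivity of sparse items can be illustrated with a toy simulation (our own illustrative numbers, not the paper's data): under a crude popularity-based recommender, a handful of injected positives lifts a long-tail item into the top-10.

```python
from collections import Counter

def top_k(counts, k=10):
    """Crude popularity recommender: rank items by interaction count."""
    return [item for item, _ in counts.most_common(k)]

# Skewed, illustrative catalogue: head items accumulate many interactions,
# long-tail items almost none.
counts = Counter({f"item{i}": max(1, 50 - 5 * i) for i in range(100)})
tail = "item42"                    # a long-tail item with a single interaction
before = tail in top_k(counts)     # not popular enough to surface

counts[tail] += 10                 # a handful of injected fake positives
after = tail in top_k(counts)      # sparsity makes the item easy to lift
```

Popular head items would need orders of magnitude more injections to move the same number of rank positions, which is why long-tail items are the most vulnerable surface.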

AgentSA consistently outperforms existing low-knowledge attacks in both effectiveness and stealth, demonstrating a new paradigm in shilling attacks.

AgentSA vs. Conventional Shilling Attacks

Stealthiness
  • Conventional attacks: easily detected as anomalies; lack diversity
  • AgentSA (LLM-based): strong behavioral resemblance to genuine users; difficult to detect

Effectiveness (rating-only RSs)
  • Conventional attacks: limited performance; inconsistent across settings
  • AgentSA (LLM-based): superior prediction shift and HR@10; robust across varied settings

Effectiveness (review-aware RSs)
  • Conventional attacks: primarily manipulate ratings; neglect user reviews
  • AgentSA (LLM-based): adversarially crafted reviews; leverages nuanced semantic content

Knowledge required
  • Conventional attacks: rely on simplified heuristics; require internal RS data
  • AgentSA (LLM-based): low-knowledge (external data only); cognitively simulated agent behavior

Real-world Vulnerability: Books Dataset

AgentSA achieved its strongest attack performance on the Books dataset, consistently outperforming all baselines across evaluation metrics. This is attributed to the LLM-based agent's broader domain knowledge in books, enabling it to more effectively perform key reasoning steps such as inferring user profiles and analyzing item characteristics. Advanced techniques like retrieval-augmented generation and fine-tuning could potentially yield even more potent attacks.

Estimate Your Enterprise AI Impact

Use our calculator to understand the potential annual savings and reclaimed hours for your organization by integrating advanced AI solutions like those discussed in this analysis.
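A back-of-the-envelope model like the one behind such a calculator can be sketched as follows. Every parameter here (staff count, hourly cost, working weeks) is an illustrative assumption of ours; the 0.35 default mirrors the "35 Potential Efficiency Gain" figure from the summary, interpreted as a percentage.

```python
def estimate_ai_impact(staff_count, hours_per_week_on_task,
                       efficiency_gain=0.35, hourly_cost=60.0,
                       weeks_per_year=48):
    """Hypothetical savings model: hours reclaimed are the share of task
    time eliminated by the efficiency gain; savings price those hours at
    a fully loaded hourly cost. All defaults are illustrative."""
    hours_reclaimed = (staff_count * hours_per_week_on_task
                       * weeks_per_year * efficiency_gain)
    annual_savings = hours_reclaimed * hourly_cost
    return round(hours_reclaimed), round(annual_savings, 2)

hours, savings = estimate_ai_impact(staff_count=10, hours_per_week_on_task=5)
```

For 10 staff spending 5 hours per week on the task, the model yields 840 reclaimed hours and $50,400 in annual savings under these assumed defaults.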


Your AI Implementation Roadmap

A strategic overview of key phases for integrating advanced AI into your enterprise, inspired by the methodologies discussed.

Phase 1: Discovery & Strategy

Assess current systems, identify AI opportunities, and define strategic objectives. Data preparation and initial model selection.

Phase 2: Agent Design & Training

Develop and train LLM-based agents, focusing on profile construction, memory management, and action generation tailored to specific tasks.

Phase 3: Integration & Testing

Seamlessly integrate AI agents into existing recommender systems. Rigorous testing for performance, stealth, and resilience against adversarial attacks.

Phase 4: Monitoring & Refinement

Continuously monitor agent performance, detect anomalies, and refine strategies. Adapt to evolving system dynamics and potential countermeasures.

Ready to Transform Your Enterprise with AI?

Unlock the full potential of advanced AI and secure your recommender systems. Our experts are ready to guide you.
