
Enterprise AI Analysis

Impact of Artificial Intelligence on Employee Strain and Insider Deviance in Cybersecurity

Authors: Emmanuel Anti, Duong Dang
Affiliation: University of Vaasa

This paper examines the impact of AI technologies such as Performance Monitoring Tools (PMTs) and Automated Decision-Making Systems (ADMSs) on employee strain and the development of insider deviant behavior. Drawing on General Strain Theory (GST), the study explores how workplace stressors exacerbated by AI-driven PMTs and ADMSs may increase the risk of deviant behaviors such as fraud, sabotage, and social engineering. The study employs a quantitative methodology, using surveys to gather data on how employees perceive the effects of AI-driven PMTs and ADMSs on strain and insider deviance. We expect the findings to show that AI-induced stress and negative emotions increase the likelihood of insider deviance. The study aims to contribute to research on cybersecurity threats and to provide practical insights for organizations implementing AI technologies by offering strategies to mitigate workplace stress and insider threats. Future research will explore the relationship between AI integration, employee strain, and organizational security vulnerabilities.

Executive Impact & Strategic Imperatives

Understanding the intricate relationship between AI adoption and workforce well-being is crucial for maintaining cybersecurity. This analysis highlights key areas for executive attention.

  • Share of workers who fear AI's impact on their roles
  • Potential reduction in insider threats (with proper management)
  • Increased operational efficiency (from AI adoption)

Deep Analysis & Enterprise Applications

The research findings are organized into the following sections: insider deviance, AI technologies, General Strain Theory, AI-induced work strain, AI-induced workload change, AI-induced perceived inequity, and the research model and findings.

Understanding Insider Deviance in the AI Era

Insider deviance refers to violations of organizational norms by trusted individuals that threaten the organization's well-being, typically involving the compromise or manipulation of, or unauthorized access to, systems. It includes behaviors such as fraud, sabotage, and social engineering. Factors such as financial conflicts, job dissatisfaction, social isolation, and resentment can escalate deviant behavior, particularly in response to AI-driven technologies.

AI-driven surveillance can stoke fears of job replacement and loss of control, fostering anxiety and resentment. This aligns with General Strain Theory (GST), in which perceived injustice can lead to deviant behavior as a coping mechanism.

AI Technologies in the Workplace

The integration of AI technologies, such as Performance Monitoring Tools (PMTs) and Automated Decision-Making Systems (ADMSs), is rapidly transforming organizational operations. While these tools enhance efficiency and decision-making, they also introduce new challenges.

AI-driven tools influence hiring, promotions, and disciplinary actions, raising concerns over privacy, technostress, and job insecurity. These systems can lead to increased workplace surveillance, stress, and arbitrary disciplinary actions, potentially causing employees to feel unfairly targeted and increasing psychological distress.

General Strain Theory (GST) in AI Context

General Strain Theory (GST), developed by Agnew (1985, 1992), posits that individuals engage in deviant behavior when exposed to stressors that trigger negative emotions like anger, frustration, or depression. GST identifies three primary forms of strain:

  • Failure to achieve positively valued goals.
  • Removal of positive stimuli (e.g., job loss, career stagnation).
  • Presentation of negative stimuli (e.g., workplace stress, excessive monitoring, unfair treatment).

This study applies GST to explain how AI-driven workplace stressors can increase the likelihood of insider deviance. When AI is poorly managed, strains perceived as unjust or unavoidable can foster negative emotional states, increasing the risk of deviant behavior.

AI-Induced Work Strain

AI-induced work strain (AIWS) is defined as the psychological stress and emotional burden employees experience due to AI technology. This includes anxiety over job security, difficulties adapting to AI tools, and cognitive overload from managing complex AI-driven systems. PMTs and ADMSs create psychological and structural pressure when algorithmic judgments replace human judgment without consultation, or when employees feel overwhelmed by the pace at which they must develop new skills.

  • H1: AI-induced work strain positively affects employee strain.
  • H2: AI-induced work strain positively affects insider deviance.

AI-Induced Workload Changes

AI-induced workload changes (AIWC) refer to the dynamic shifts in job demands, task complexity, and control influenced by AI. While AI technologies may automate routine tasks, they can also increase cognitive demands by requiring employees to interpret and act on AI outputs. Whether employees feel overloaded or under-challenged, the mismatch can lead to psychological strain, emotional distress, and fatigue.

Such changes, especially when perceived as a removal of positive stimuli (like control over tasks or social interactions), can increase the likelihood of deviant behavior as a form of protest or reaction.

  • H3: AI-induced workload changes positively affect employee strain.
  • H4: AI-induced workload changes positively affect insider deviance.

AI-Induced Perceived Inequity

AI-induced perceived inequity (AIPI) refers to employees' perception of unfair treatment concerning AI-driven procedures, outcomes, and interpersonal interactions. This can manifest as unequal access to AI resources, biased performance evaluations, or lack of transparency in decision-making by AI systems. These perceptions can lead to dissatisfaction, mistrust, disengagement, and increased resistance to AI integration.

Perceived inequity represents a failure to achieve positively valued goals, contributing to negative behaviors like cyberloafing, organizational deviance, and retaliation.

  • H5: AI-induced perceived inequity positively affects employee strain.

Research Model & Preliminary Findings

This study employs a quantitative research approach, using a structured survey to gather insights into AI-induced workplace strain, workload changes, perceived inequity, employee strain, and insider deviant behavior. The conceptual framework (Figure 1 in the paper) illustrates how these AI-induced factors contribute to employee strain and ultimately to insider deviance.

A pilot study was conducted to refine the research approach, and 141 responses have been collected for the full study to date. Preliminary results suggest a significant correlation between AI-induced workplace strains and an increase in insider deviant behaviors, with negative emotional responses such as frustration, anxiety, resentment, and distrust playing a key role.

141 Total Survey Responses Received to Date
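For illustration, the minimal sketch below shows one way hypotheses of this kind could be tested once the survey data are compiled, assuming responses are averaged into composite scores named aiws, aiwc, aipi, strain, and deviance (all names and the input file are hypothetical); the paper's actual analysis may rely on different estimators, such as structural equation modeling.

```python
# Hypothetical sketch: testing H1-H5 style relationships with OLS regression.
# Column names (aiws, aiwc, aipi, strain, deviance) and the CSV file are
# assumptions for illustration, not the paper's actual variables or data.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one respondent; columns are composite scores built from survey items.
df = pd.read_csv("survey_responses.csv")

# H1, H3, H5: AI-induced stressors predicting employee strain.
strain_model = smf.ols("strain ~ aiws + aiwc + aipi", data=df).fit()
print(strain_model.summary())

# H2, H4: AI-induced stressors predicting insider deviance, controlling for strain.
deviance_model = smf.ols("deviance ~ aiws + aiwc + strain", data=df).fit()
print(deviance_model.summary())
```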

Research Study Flow

Problem Identification (AI & Cybersecurity)
Theoretical Framing (GST)
Hypothesis Development
Survey Instrument Design & Validation
Data Collection (Global)
Statistical Analysis & Insights

Quantify Your AI Readiness ROI

Estimate the potential efficiency gains and cost savings by strategically managing AI implementation to mitigate employee strain and insider risks.
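As a rough illustration only, the sketch below shows how such an estimate might be computed; the function name, formula, and default figures are assumptions for demonstration, not values from the study or from any particular calculator.

```python
# Hypothetical ROI sketch: the formula and all figures are illustrative
# assumptions, not results from the study.
def estimate_ai_readiness_roi(num_employees: int,
                              hours_saved_per_employee_per_week: float,
                              avg_hourly_cost: float,
                              strain_mitigation_factor: float = 0.8,
                              work_weeks_per_year: int = 48) -> dict:
    """Estimate annual hours reclaimed and cost savings from well-managed AI adoption.

    strain_mitigation_factor discounts gross savings to reflect effort spent
    mitigating employee strain and insider risk (an assumed adjustment).
    """
    gross_hours = num_employees * hours_saved_per_employee_per_week * work_weeks_per_year
    net_hours = gross_hours * strain_mitigation_factor
    return {
        "annual_hours_reclaimed": round(net_hours),
        "estimated_annual_savings": round(net_hours * avg_hourly_cost, 2),
    }

# Example: 200 employees, 2 hours saved per week each, at an average cost of $55/hour.
print(estimate_ai_readiness_roi(num_employees=200,
                                hours_saved_per_employee_per_week=2.0,
                                avg_hourly_cost=55.0))
```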


Your AI Integration Roadmap

Our analysis provides a foundational understanding. The next steps outline our approach to delivering a comprehensive solution.

Theoretical Model & Instrument Validation

Finalizing the conceptual framework and ensuring the survey instrument accurately captures AI-induced strains and deviance. (Completed)

Data Collection & Initial Analysis

Gathering comprehensive data from diverse employee groups and conducting preliminary statistical assessments. (Completed)

Rigorous Statistical Analysis

Applying advanced regression models to thoroughly examine relationships between AI-driven workplace strain and insider deviance. (In Progress - Expected May 2025)

Refinement of Theoretical Insights

Deepening our understanding of the underlying mechanisms and moderating factors influencing AI's impact on employees. (Expected May 2025)

Actionable Recommendations & Manuscript Completion

Developing practical strategies for organizations to mitigate risks and enhance employee well-being, leading to the final paper submission. (Expected May 2025)

Ready to Transform Your AI Strategy?

Don't let unmanaged AI integration lead to unforeseen risks. Partner with us to build a resilient, employee-centric cybersecurity posture.

Ready to Get Started?

Book Your Free Consultation.
