
Enterprise AI Analysis

What's Privacy Good for? Measuring Privacy as a Shield from Harms due to AI Inference of Personal Data

Authors: Sri Harsha Gajavalli, Junichi Koizumi, Rakibul Hasan

Publication: 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), April 13-17, 2026, Barcelona, Spain. ACM.

We propose a harm-centric conceptualization of privacy and operationalize it in the context of using artificial intelligence (AI) in education and employment. In an online study (N=400), US college and university students reported their perceptions of 14 harms (e.g., manipulation) when AI infers personal data (e.g., demographics and personality traits) and uses it in decision-making. We demonstrate that our approach can reliably and consistently measure privacy, sidesteps many limitations of existing frameworks, and captures harms from modern technology that would remain undetected by other frameworks. We surface nuanced perceptions of harms, both across contexts and across participants' demographic factors. Based on these results, we discuss how privacy can be improved equitably and inclusively. This research extends privacy theory and provides practical guidance for improving privacy across technology use domains.

Key Takeaway Statistical analysis demonstrated that all 14 items are internally consistent and jointly measure a single latent construct: privacy harm perception. Thus, this approach of measuring privacy through perceived harms is reliable and can be applied across domains.

Executive Impact: Key Metrics

Understanding the foundational data and reliability of a harm-centric privacy framework.

400 Participants Surveyed
14 Harms Investigated
6 Data Types Analyzed
0.93 Min. Cronbach's Alpha

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

New Framework Privacy defined as a shield against harms from AI inference, offering a nuanced and actionable perspective.

Enterprise Process Flow: Harm-Centric Privacy Study

Survey US College/University Students (N=400)
AI in Education & Employment Contexts
14 Harms, 6 Data Types
Reliability & Validity Analysis (RQ1)
Perception Variation Analysis (RQ2)
Equitable & Inclusive Privacy Insights
0.93 Minimum Cronbach's Alpha, indicating high reliability of harm perception measurement across all items.

The extensive statistical analyses confirmed high consistency and reliability in measuring privacy through harm statements. All 14 harm items jointly measure one underlying latent construct: privacy harm perception. This construct was invariant across contexts, data types, and population segments, validating its use for understanding privacy in diverse settings.
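The reliability claim above rests on Cronbach's alpha. As a minimal sketch of how that statistic is computed from respondents' item ratings (the data below are made up for illustration, not the study's responses):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a set of respondents' item scores.

    responses: list of lists, one inner list per respondent,
    each containing k item ratings (e.g., 1-10 harm ratings).
    """
    k = len(responses[0])
    # Variance of each item across respondents.
    item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
    # Variance of each respondent's total score.
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents rating 3 harm items on a 1-10 scale.
data = [[8, 7, 9], [3, 2, 3], [6, 5, 7], [9, 8, 9]]
print(round(cronbach_alpha(data), 2))  # → 0.99
```

When items rise and fall together across respondents, as in this toy data, the item variances are small relative to the total-score variance and alpha approaches 1; the paper's minimum of 0.93 indicates that the 14 harm items behave this way.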

Perceived Harms: Education vs. Employment Contexts

Demographics
  • Education: Highest concerns overall; perceived as highly problematic for biased decisions.
  • Employment: High concerns but slightly less pronounced; prohibited by law, yet inferences still cause harm.
Personality Traits
  • Education: Significantly higher concerns (M=6.2); associated with bias, inaccurate prediction, and stereotyping.
  • Employment: Lower concerns (M=2.6); considered more acceptable as a "business necessity".
Emotional States
  • Education: High concerns for all individual harm types; fear of manipulation, harassment, and inaccurate decisions.
  • Employment: High concerns, similar to education; focus on ethical implications and how the AI system works.
Motivation & Creativity
  • Education: Perceived as harmful due to potential for inaccurate inference.
  • Employment: Generally acceptable; seen as less harmful.
Physical/Cognitive Impairment
  • Education: Consistently among the highest concerns; fear of manipulation and discrimination.
  • Employment: Consistently among the highest concerns; fear of discrimination and biased decisions.

This comparison reveals that the perceived harmfulness of the same data types can vary significantly depending on the context of use. Data not inherently considered 'private' can still lead to privacy harms when inferred by AI.
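The context effect shown above (e.g., M=6.2 vs. M=2.6 for personality traits) can be quantified as a mean gap plus a Welch t-statistic for two independent samples; a sketch using hypothetical ratings (not the study's data):

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical 1-10 harm ratings for one data type in two contexts.
education = [7, 6, 8, 5, 7]
employment = [3, 2, 4, 2, 3]

def welch_t(a, b):
    """Mean gap and Welch's t-statistic for two independent samples."""
    gap = mean(a) - mean(b)
    # Welch's standard error does not assume equal variances.
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return gap, gap / se

gap, t = welch_t(education, employment)
print(round(gap, 1), round(t, 2))
```

A large positive t suggests the education-context ratings are genuinely higher rather than noise; with real survey data one would also compute degrees of freedom and a p-value, as the paper's analyses do.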

Demographic Differences in Harm Perceptions

Gender
  • Education: Females more concerned (M=7.9) than males (M=2.9) across all data types; differences most pronounced for personality, emotional states, and impairment.
  • Employment: No significant difference between genders in overall concerns; may reflect more established norms in hiring.
Age
  • Education: Older participants more concerned (M=5.6) than younger (M=2.2) across 5 of 6 data types; heightened concerns for emotional state, motivation, or impairment.
  • Employment: Older participants consistently perceived more harms for demographics, personality, emotion, and disability; concerns extended to ethical implications and how the AI system works.
Race
  • Education: White participants more concerned about manipulation via disability inference; non-white participants more concerned about bias, stereotyping, and loss of control for disability.
  • Employment: White participants more concerned about incorrect inferences (motivation, disability, creativity) and unreliability; non-white participants more concerned about harassment (for physical/cognitive disability).
Education Level & Discipline
  • Education: Undergraduates more concerned about inaccurate predictions from motivation; STEM majors more concerned about autonomy when impairment is predicted.
  • Employment: Post-graduate students consistently more concerned across diverse harms (demographics, motivation, disability); non-STEM students concerned about creativity as an unreliable predictor.

These findings underscore the nuanced and diverse nature of privacy harm perceptions, heavily influenced by historical socio-economic factors and lived experiences. This insight is critical for designing equitable and inclusive AI systems.

Advancing Equitable AI Deployment

Our harm-centric framework offers a nuanced view of privacy violations, detecting issues overlooked by other frameworks. It identifies vulnerable population groups, facilitating targeted prevention measures. This advances inclusive and equitable privacy and provides a valuable tool for quantifying privacy perception, predicting behaviors, and designing practical privacy-enhancing mechanisms.

For instance, the findings enable educational institutions to implement preventive measures like prohibiting repurposing trained models for personal data inference and advocate for increased human oversight in AI decision-making. Moreover, understanding differential harm perceptions for specific data types (e.g., motivation/creativity) across contexts (education vs. employment) allows for tailored mitigation strategies. This is crucial for systems deployed at universities serving specific demographics.

The research suggests that focusing on concrete harms provides clarity for privacy protections, moving beyond abstract data protection to tangible user-centric benefits.

Calculate Your Potential AI ROI

Estimate the financial and operational benefits of implementing AI solutions, informed by our research insights.


Your Strategic AI Implementation Roadmap

A structured approach to integrating AI, prioritizing ethical considerations and maximizing privacy protection.

Phase 1: Harm Assessment & Contextual Analysis

Conduct a thorough review of AI use cases within your enterprise, identifying potential privacy harms based on data types and operational contexts. This phase integrates our harm-centric framework to predict vulnerabilities specific to your organization and user base.
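Phase 1 can begin as a lightweight inventory before any formal tooling. A sketch of such an inventory (all use-case names, data types, and the flagging threshold are hypothetical; the harm labels follow the study's examples):

```python
# Harm types drawn from the paper's examples; extend with the full set of 14.
HARM_TYPES = {"manipulation", "discrimination", "stereotyping",
              "inaccurate decisions"}

# Hypothetical inventory: each AI use case maps the personal-data types it
# infers to the harm types that plausibly apply in that context.
use_cases = {
    "resume screening": {"personality traits": ["discrimination",
                                                "stereotyping"]},
    "exam proctoring": {"emotional states": ["manipulation",
                                             "inaccurate decisions"]},
}

def flag_high_risk(use_cases, threshold=2):
    """Flag (use case, data type) pairs carrying >= threshold harm types."""
    flagged = []
    for name, inferences in use_cases.items():
        for data_type, harms in inferences.items():
            assert set(harms) <= HARM_TYPES, "unknown harm type"
            if len(harms) >= threshold:
                flagged.append((name, data_type, harms))
    return flagged

for name, data_type, harms in flag_high_risk(use_cases):
    print(f"{name}: inferring {data_type} risks {', '.join(harms)}")
```

The output of such an audit then feeds Phase 2, where the flagged (use case, data type) pairs are put in front of the stakeholder groups whose harm perceptions differ most.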

Phase 2: Stakeholder Perception Elicitation

Engage diverse internal and external stakeholders to understand their perceptions of privacy harms. Our research highlights the nuanced differences across demographics and contexts, ensuring your strategy addresses the concerns of all affected groups, particularly vulnerable populations.

Phase 3: Mitigation Strategy & Ethical AI Design

Develop and implement targeted privacy-enhancing mechanisms. This includes technical solutions like adversarial censoring for sensitive data and policy adjustments to ensure AI decisions align with ethical guidelines, human oversight, and procedural justice principles.

Phase 4: Continuous Monitoring & Iterative Improvement

Establish continuous auditing processes to evaluate AI model performance and its impact on privacy. Regularly assess perceived harms and adapt strategies to evolving technological landscapes and societal expectations, ensuring equitable and inclusive AI deployment.

Ready to Shield Your Enterprise with Smarter AI Privacy?

Our expertise in harm-centric privacy and AI application can transform your approach to data governance and ethical AI deployment. Let's build a future where AI empowers without compromising privacy.

Ready to Get Started?

Book Your Free Consultation.
