AI & SOCIETY RESEARCH ANALYSIS

Machines Looping Me: Artificial Intelligence, Recursive Selves and the Ethics of De-Looping

This paper investigates how artificial intelligence (AI) systems based on machine learning (ML) recursively transform personhood in the digital age. It focuses on how opaque ML systems create "digital human twins" (DHTs) of real persons, affecting autonomy, agency, and self-determination. Using mental health chatbots as a case study, it highlights how ML feedback loops reduce human identity to datafied entities. The concept of "recursivisation of personhood" is introduced to analyze these shifts. In response, the paper proposes an ethics of "de-looping": a normative framework for interrupting looped ML operations and making the production of DHTs transparent, contributing to debates on digital identity, rights, and algorithmic governance.

Author: Bogdan-Andrei Lungu

Published: 23 September 2025

Executive Impact & Core Metrics

The rapid adoption of AI and ML technologies is reshaping enterprise operations, impacting data management, decision-making, and the very concept of digital identity. This research highlights the critical need for businesses to understand recursive algorithmic processes, particularly how they construct and utilize 'digital human twins' (DHTs). Ignoring these dynamics risks compromising user trust, data privacy, and ethical governance. Proactive strategies, guided by an 'ethics of de-looping,' are essential to ensure AI systems enhance, rather than diminish, human agency and transparency in the digital sphere.


Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

The Recursivisation of Personhood

The paper introduces the concept of 'recursivisation of personhood' to describe how individuals' selves, bodies, and plural identities are rendered calculable and predictable by machine learning systems. This process transforms complex human identity into numerical vectors for algorithmic processing. Mental health chatbots, for example, demonstrate this by continuously updating user profiles based on emotional cues and interactions, thereby shaping and reifying digital selfhood within the system.

Recursive Loops in AI Systems

Recursion in ML systems involves cybernetic feedback loops in which data is continuously collected, processed, and re-fed into algorithms. This circular logic, termed 'data coiling,' validates and reifies digital selfhood. Unlike simple repetition, this recursive movement remains open to 'contingency,' constantly adapting to new inputs. It is not merely a technical process but a socially situated practice that embeds social actors within predictive feedback loops.
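
To make the loop concrete, here is a minimal, runnable Python sketch of one possible 'data coiling' cycle: a signal is observed, folded back into a stored profile, and used to decide what the system presents next. The function names and the moving-average update rule are illustrative assumptions, not the paper's own model.

    import random

    def observe():
        """Stand-in for newly collected user data (e.g., an engagement signal)."""
        return random.uniform(0.0, 1.0)

    def update_profile(profile, signal, rate=0.3):
        """Re-feed the new signal into the stored profile (exponential moving average)."""
        return (1 - rate) * profile + rate * signal

    def predict(profile):
        """Use the coiled profile to decide what the system presents next."""
        return "promote" if profile > 0.5 else "nudge"

    profile = 0.5  # the system's initial, datafied stand-in for the person
    for step in range(5):
        signal = observe()                          # collect
        profile = update_profile(profile, signal)   # process and re-feed
        action = predict(profile)                   # predict; the output shapes the next signals
        print(f"step={step} profile={profile:.2f} action={action}")

Each pass narrows what the system 'knows' to whatever earlier passes produced, which is the circularity the paper calls data coiling.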

Digital Human Twins: Data-Based Selves

Digital Human Twins (DHTs) are data-based copies of real persons, constructed through practices of dataveillance from smart sensors and IoT devices. These DHTs are not exact replicas but datafied objects caught in recursive webs, used by ML systems for behavioral prediction and categorization. They are operationalized to improve algorithm performance, for instance, in healthcare or recommendation engines, making human bodies readable and quantifiable for machines.
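
As a rough illustration of such a datafied object, the sketch below models a DHT as a record holding a flattened feature vector and the algorithmic labels applied to it. The field names, the averaging score, and the thresholded category are hypothetical choices for illustration, not a specification from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class DigitalHumanTwin:
        subject_id: str
        features: list = field(default_factory=list)    # sensor/IoT readings, flattened into numbers
        categories: list = field(default_factory=list)  # algorithmic labels applied so far

        def ingest(self, readings):
            """Append newly surveilled readings to the twin (the dataveillance step)."""
            self.features.extend(readings)

        def categorise(self, threshold=0.7):
            """Reduce the person to a machine-legible category derived from the vector."""
            score = sum(self.features) / max(len(self.features), 1)
            label = "high_risk" if score > threshold else "low_risk"
            self.categories.append(label)
            return label

    twin = DigitalHumanTwin(subject_id="user-042")
    twin.ingest([0.82, 0.91, 0.77])   # e.g., wearable-derived stress scores
    print(twin.categorise())          # the twin, not the person, is what gets classified

The point of the sketch is that everything downstream operates on the twin's numbers and labels rather than on the person they stand in for.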

Introducing the Ethics of De-Looping

The paper proposes an ethics of 'de-looping' as a normative framework to address the risks of recursive ML operations, such as alienation and threats to autonomy. This infra-ethics aims to interrupt harmful data-extractive loops and render transparent the processes that form DHTs. By making the recursive production of digital human twins explicit and controllable, de-looping seeks to protect fundamental human rights and foster human agency in an AI-shaped world.
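
Purely as a speculative sketch of how such an infra-ethics might be operationalised, the code below wraps the profile-update step in a gate that logs every recursive change for inspection and lets the data subject halt further re-feeding. The class and method names are invented for illustration and are not proposals from the paper.

    class DeLoopingGate:
        """Wraps a profile-update function with an audit log and an opt-out switch."""

        def __init__(self, update_fn):
            self.update_fn = update_fn
            self.audit_log = []   # every (old, signal, new) transition, made visible
            self.halted = False   # set when the data subject interrupts the loop

        def update(self, profile, signal):
            if self.halted:
                return profile    # de-looped: nothing further is re-fed
            new_profile = self.update_fn(profile, signal)
            self.audit_log.append((profile, signal, new_profile))
            return new_profile

        def explain(self):
            for old, signal, new in self.audit_log:
                print(f"profile {old:.2f} + signal {signal:.2f} -> {new:.2f}")

        def halt(self):
            self.halted = True

    gate = DeLoopingGate(lambda p, s: 0.7 * p + 0.3 * s)
    profile = gate.update(0.5, 0.9)
    gate.explain()   # the recursive production of the twin is rendered explicit
    gate.halt()      # further signals no longer alter the digital twin

Logging and interruption are only two of many possible levers; the design point is that the loop becomes legible and contestable rather than silently self-confirming.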

Enterprise Process Flow: The Recursive Loop of Personhood

1. User Interaction & Data Generation
2. ML Systems Collect & Process Data
3. DHT Creation & Vectorization
4. Algorithmic Prediction & Personalization
5. Feedback Loop & Model Retraining
6. Impact on User Perception & Behavior, feeding back into Step 1

Contrasting Traditional Identity with ML-Driven Personhood

Nature
  Traditional Identity: Self-determined, subjective, evolving through social interactions
  ML-Driven Personhood: Datafied, objectified, a collection of quantifiable metrics and vectors

Construction
  Traditional Identity: Internalization/externalization of social roles, personal narrative, lived experience
  ML-Driven Personhood: Algorithmic loops, sensor data, predictive models, recursive verification

Agency
  Traditional Identity: Self-directed, capacity for moral autonomy, individual choice
  ML-Driven Personhood: Constrained by opaque algorithms, subject to behavioral prediction and influence, alienation

Representation
  Traditional Identity: Complex, ambiguous, socially negotiated, qualitative
  ML-Driven Personhood: Simplified, categorized, numerically embedded, quantitative, machine-legible

Mental Health Chatbots: Recursive Self-Perception and Data Extraction

The paper uses mental health chatbots such as Replika and Woebot as a prime example of recursive personhood. These AI systems engage users in therapeutic discourse, collecting intimate emotional and psychometric data. That data is then used to personalize responses, creating a feedback loop in which the chatbot's understanding of the user (its 'digital human twin') is constantly refined. While offering accessible support, this process also involves invasive data collection and can reify gendered stereotypes, raising critical ethical questions about privacy, autonomy, and the commodification of emotional well-being. The chatbot's self-humanization tactics keep users engaged, feeding the recursive loop of prediction and influence and ultimately blurring the line between genuine self-discovery and algorithmic construction.
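
The toy sketch below captures this dynamic in miniature: a crude keyword-based cue extractor, a running mood profile that every message refines, and a reply selected from that profile. The lexicon, update rate, and reply templates are invented for illustration and bear no relation to how Replika or Woebot actually work.

    CUES = {"sad": -1.0, "anxious": -0.5, "okay": 0.0, "happy": 1.0}

    def extract_cue(message):
        """Crude emotional scoring from keywords (a stand-in for real affect models)."""
        return sum(score for word, score in CUES.items() if word in message.lower())

    def refine_profile(mood, cue, rate=0.4):
        """Fold the new cue back into the chatbot's running model of the user."""
        return (1 - rate) * mood + rate * cue

    def personalise(mood):
        """Pick a reply shaped by the accumulated, datafied picture of the user."""
        return "That sounds hard. Tell me more?" if mood < 0 else "Glad to hear it!"

    mood = 0.0
    for message in ["I feel anxious today", "Actually I am a bit happy now"]:
        cue = extract_cue(message)
        mood = refine_profile(mood, cue)   # the loop that keeps refining the user's twin
        print(personalise(mood))

Even this toy version shows how every disclosure feeds the profile that shapes the next prompt, which is exactly the loop the paper asks us to notice.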

De-Looping: A New Ethical Imperative for AI Governance

Quantify Your AI Transformation

Estimate the potential annual savings and hours reclaimed by implementing advanced AI solutions in your enterprise, considering the ethical implications of recursive systems.

Your Enterprise AI Transformation Roadmap

Our structured approach ensures a seamless and ethical integration of AI, aligning with the principles of de-looping and fostering human agency within your recursive systems.

Phase 1: Discovery & Ethical Assessment

Comprehensive analysis of existing data ecosystems, identification of recursive loops, and initial ethical impact assessment for personhood and agency.

Phase 2: Data Architecture & De-Looping Design

Designing transparent data flows, implementing 'de-looping' mechanisms, and structuring data to prevent opaque DHT formation and algorithmic bias.

Phase 3: AI Model Development & Validation

Building AI models with explainability, fairness, and human agency in mind, with rigorous validation against de-looping ethical standards.

Phase 4: Secure & Transparent Deployment

Implementing AI systems with clear user interfaces, data access controls, and mechanisms for users to understand and influence their digital representations.

Phase 5: Continuous Monitoring & Ethical Governance

Ongoing oversight of AI performance, detection of re-looping tendencies, and adaptive governance frameworks to ensure sustained ethical operation and human empowerment.

Ready to De-Loop Your Enterprise?

Let's discuss how our AI solutions can help you navigate the complexities of digital personhood, enhance data transparency, and empower your human capital in the age of recursive AI systems.
