
Controlling AI Agent Participation in Group Conversations: A Human-Centered Approach

Unlock Collaborative AI: Design Agents That Truly Fit Your Team

While AI agents excel in one-on-one tasks, their integration into complex group conversations poses unique challenges. This research explores how AI can meaningfully contribute to group ideation without dominating the discussion, revealing critical human-centered requirements for effective collaboration.

Executive Impact: Key Findings for Enterprise AI

Understand the quantifiable benefits and crucial design considerations for deploying AI agents in collaborative enterprise environments.

72.2% OF STUDY 1 USERS PREFERRED A REACTIVE AI
4.46/5 MEAN UTILITY RATING FOR EXPLICIT AI CONTROLS

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

The Proactivity Dilemma
Koala's Evolution
User Control Imperative
The Control Taxonomy

When Should AI Speak in a Group?

In group settings, the dynamics of AI participation become complex. Unlike one-on-one interactions, it's not straightforward to determine when an AI agent should contribute, as its interventions can either enrich discussions or be perceived as intrusive. This research directly addresses the "proactivity dilemma" in multi-party AI interactions.

From Koala I to Koala II: Refining AI Behavior

The study developed "Koala," an LLM-based conversational agent for group ideation in Slack. Initial findings from Koala I revealed user preferences for AI presence but also significant concerns about overwhelming proactivity. This led to "Koala II," an enhanced version with an updated LLM (Llama 3), prompts refined for more targeted suggestions, and the introduction of explicit user controls.

Empowering Users with Dynamic AI Controls

A key finding was the strong user desire for explicit control over AI behavior. Participants valued the ability to adjust the AI's contribution rate, content relevance threshold, and even where its messages appear (in-channel vs. thread). This highlights a shift from fixed AI behaviors to dynamic, user-configurable participation, essential for managing collaborative workflows effectively.
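As a minimal sketch, the controls described above can be modeled as a per-channel settings object; the field names and defaults below are illustrative assumptions, not the schema used in the study.

```python
# Illustrative settings object for user-configurable agent participation.
# Field names and defaults are assumptions, not the study's actual schema.
from dataclasses import dataclass
from typing import Literal

@dataclass
class AgentParticipationSettings:
    enabled: bool = True                                   # on/off toggle for the agent
    contribution_threshold: float = 0.7                    # minimum relevance score (0-1) before the agent may speak
    max_messages_per_10_min: int = 2                       # cap on the agent's contribution rate
    response_location: Literal["channel", "thread"] = "channel"  # where agent messages appear
    compact_display: bool = True                           # shorter messages to reduce visual noise
```

Koala II exposed a comparable set of options through a settings panel (toggle, contribution threshold, response location, message display), letting a group tune when and where the agent joins in without changing the underlying model.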

A New Framework for Human-AI Collaboration

The research culminates in a comprehensive taxonomy for designing proactive AI agent behaviors. It categorizes control aspects into when, what, and where the AI contributes, alongside methods for specifying, managing access to, and implementing these controls. This framework provides AI creators with crucial considerations for building adaptable, human-centered conversational agents.

Enterprise Process Flow: Koala's Decision Logic

User Posts Message in Slack
Backend Handles Event
Check Proactivity / Addressing
Generate & Score Response
Evaluate Contribution Value
Post to Slack Channel
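The sketch below illustrates this flow, assuming a Slack Bolt (Python) backend; the generate_scored_reply helper, the keyword check for addressing, and the threshold value are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the decision flow above, assuming a Slack Bolt (Python) backend.
# generate_scored_reply() is a hypothetical stand-in for the LLM call.
from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")  # placeholder credentials

settings = {"proactive": True, "contribution_threshold": 0.7, "reply_in_thread": False}

def generate_scored_reply(history: str) -> tuple[str, float]:
    """Hypothetical helper: draft a candidate reply and score its value from 0 to 1."""
    return "Have you considered combining the last two ideas?", 0.5  # canned placeholder

@app.event("message")                                  # user posts a message; backend handles the event
def handle_message(event, say):
    if event.get("bot_id"):                            # ignore the agent's own messages
        return
    text = event.get("text", "")
    addressed = "koala" in text.lower()                # check proactivity / direct addressing
    if not settings["proactive"] and not addressed:
        return                                         # reactive mode: stay silent unless addressed

    reply, score = generate_scored_reply(text)         # generate & score a candidate response
    if addressed or score >= settings["contribution_threshold"]:   # evaluate contribution value
        say(text=reply,                                # post to the channel (or its thread)
            thread_ts=event["ts"] if settings["reply_in_thread"] else None)

if __name__ == "__main__":
    app.start(port=3000)
```

The single threshold comparison is where the "contribution value" judgment lands: raising the threshold makes the agent quieter without changing anything else in the pipeline.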

Comparative Analysis: Koala I vs. Koala II Behaviors

Participation Impact
  Koala I (Proactive):
  • Distracting and intrusive
  • Overwhelmed users
  • Dominated conversation
  • Too talkative (length & frequency)
  Koala II (Modified Proactive):
  • Quieter and less invasive
  • Responded at the right pace
  • More comfortable and natural interaction
  • Stayed on topic
  User Preference (Study 1 vs. 2):
  • Study 1: Majority (72.2%) preferred Reactive AI
  • Study 2: No groups switched to Reactive; favored modified proactive engagement

Utility & Contributions
  Koala I (Proactive):
  • Helped get started
  • Perceived as speeding up the process
  • Provided structure and summaries
  • Offered validation and information
  • Mixed effects on human creativity
  Koala II (Modified Proactive):
  • Beneficial, improved user experience
  • Promoted more engagement
  • Provided targeted and collaborative suggestions
  • Constructive criticism on existing ideas
  User Preference (Study 1 vs. 2):
  • Overall: Preferred AI over no AI
  • Koala II: Found useful due to better behavior and controls

Control Mechanisms
  Koala I (Proactive):
  • Users desired explicit controls
  • Attempts to control via natural language chat (e.g., "koala - leave the rest to us")
  • Requests for improved default behaviors
  Koala II (Modified Proactive):
  • Introduced settings panel (toggle, contribution threshold, response location, message display)
  • User utility for controls rated 4.46/5
  • Explored role-based, natural language, and persona controls
  User Preference (Study 1 vs. 2):
  • High value placed on controls
  • Desire for dynamic, in-session adjustment of AI behavior
  • Preference for role-based/persona controls

User Empowerment: The Demand for AI Control

4.46/5 Mean Utility Rating for AI Control

Participants rated the utility of direct AI controls at an average of 4.46 out of 5, strongly indicating that users need and value the ability to customize AI agent behaviors. This emphasizes that AI participation in groups cannot be a one-size-fits-all solution.

A Taxonomy for Human-Centered AI Control

Our work identifies a rich design space for proactive, LLM-powered conversational agents along with the insight that there is no single 'best fit' point in this space: the ideal interactive behaviors of the agent will depend on the individual preferences of group members and their task at hand, requiring a dynamic ability to adjust those behaviors.

The taxonomy considers two high-level concerns: the aspects of the agent's interactive behaviors to be controlled (when, what, and where), and the ways to control them (specification, access, and implementation). This framework empowers AI creators to design adaptable, human-centered conversational agents.
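As a rough sketch, the two axes can be written down as enumerations so that any design decision is indexed by an (aspect, dimension) pair; the wording of each entry paraphrases the framework and is not code from the paper.

```python
# The taxonomy's two axes, written as enumerations for illustration.
from enum import Enum

class BehaviorAspect(Enum):          # which agent behavior is being controlled
    WHEN = "when the agent contributes"
    WHAT = "what content it contributes"
    WHERE = "where its messages appear (channel vs. thread)"

class ControlDimension(Enum):        # how that behavior is controlled
    SPECIFICATION = "how users express the desired behavior (settings, natural language, roles/personas)"
    ACCESS = "which group members are allowed to change it"
    IMPLEMENTATION = "how the system enacts and enforces the choice"

# Example: the pair (BehaviorAspect.WHEN, ControlDimension.ACCESS) locates the
# design question "who in the group may pause or resume the agent mid-session?"
```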

Quantify Your AI Impact

Estimate the potential time savings and financial benefits of implementing intelligently controlled AI agents in your enterprise workflows.

The calculator estimates two outputs: Annual Savings and Hours Reclaimed Annually.
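A back-of-the-envelope sketch of the kind of arithmetic such a calculator might run; the formula, variable names, and example figures below are assumptions for illustration, not results from the research.

```python
# Hypothetical ROI estimate; the formula and example numbers are assumptions.
def estimate_impact(team_size: int,
                    hours_saved_per_person_per_week: float,
                    loaded_hourly_cost: float,
                    working_weeks_per_year: int = 48) -> tuple[float, float]:
    hours_reclaimed = team_size * hours_saved_per_person_per_week * working_weeks_per_year
    annual_savings = hours_reclaimed * loaded_hourly_cost
    return hours_reclaimed, annual_savings

hours, savings = estimate_impact(team_size=10, hours_saved_per_person_per_week=1.5,
                                 loaded_hourly_cost=75.0)
print(f"Hours reclaimed annually: {hours:,.0f}; annual savings: ${savings:,.0f}")
```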

Your Path to Controlled AI Implementation

Based on our research, here's a strategic roadmap for integrating human-centered AI agents into your organization.

Phase 1: Expand AI Contexts

Investigate AI agent performance in diverse group activities beyond ideation, such as decision-making, cooperative learning, and varied collaborative applications to broaden applicability.

Phase 2: Behavioral Data & Prioritization

Conduct empirical studies to identify which AI behaviors are most frequently adjusted by different group types, allowing for prioritization and optimization of control mechanisms.

Phase 3: Dynamic Control Evolution

Develop and implement real-time, dynamic control mechanisms for AI agents, enabling users to co-design AI behaviors and adapt them fluidly throughout interactions.

Ready to Take Control of Your Enterprise AI?

Our human-centered approach ensures your AI agents are not just powerful, but perfectly integrated and controllable within your team's workflow.

Ready to Get Started?

Book Your Free Consultation.
