Enterprise AI Analysis of "Trapped by Expectations: Functional Fixedness in LLM-Enabled Chat Search"

Authors: Jiqun Liu, Jamshed Karimnazarov, Ryen W. White

Source: J. ACM 37, 4, Article 111 (August 2025)

Executive Summary

This pivotal research explores "Functional Fixedness," a cognitive bias in which users' past experiences with technology dictate how they interact with newer, more powerful systems such as Large Language Models (LLMs). The study reveals that employees often fail to unlock the full potential of enterprise AI tools because their interaction patterns are anchored to older, simpler interfaces like traditional search engines or basic virtual assistants. Through a comprehensive study of 450 users interacting with ChatGPT, the authors demonstrate that prior tool usage creates distinct user profiles with predictable behaviors: some exploratory, others rigid and command-based. A critical insight for enterprises is that while this fixedness initially hinders performance and ROI, moments where the AI fails to meet expectations can act as a catalyst for behavioral change. These "healthy frictions" push users to adapt and formulate more sophisticated prompts, ultimately breaking them out of their cognitive ruts. The paper provides a data-backed explanation for why off-the-shelf AI deployments often underdeliver, and it underscores the need for custom-designed AI solutions that actively guide users, manage expectations, and turn moments of failure into opportunities for growth and deeper engagement. For businesses, the key to maximizing AI value lies not just in the technology itself, but in building intelligent, adaptive user experiences that mitigate cognitive biases.

Key Takeaways for Enterprise Leaders

  • AI Underutilization is a Cognitive Problem: Your team's inability to leverage advanced AI features is likely due to Functional Fixedness, not a lack of technical skill. They are using a supercomputer like a simple calculator.
  • One-Size-Fits-All AI Fails: Employees with backgrounds in search, data analysis, or administrative support will use the same AI tool in vastly different, often suboptimal, ways. A custom approach is needed to cater to these ingrained user profiles.
  • 'Failure' Can Be a Feature: When an LLM doesn't immediately provide the right answer, it can be a valuable training moment. These "unmet expectations" are crucial for encouraging users to think more critically about how they interact with AI.
  • ROI is Tied to User Adaptation: The true return on your AI investment will only be realized when users move beyond their initial, limited interaction styles. This requires proactive, intelligent system design, not just passive deployment.

Is Your Team Trapped by AI Expectations?

Unlock the full potential of your enterprise AI investment. Let's discuss a custom strategy to overcome functional fixedness and drive true productivity gains.

Book a Custom AI Strategy Session

Deconstructing Functional Fixedness in the Enterprise Context

Functional fixedness isn't an abstract psychological concept; it's a daily drag on enterprise productivity. It's the reason your team uses a powerful new BI dashboard only to export data to Excel, recreating old workflows. It's why they issue simple, keyword-based queries to a sophisticated generative AI that is capable of complex reasoning and synthesis. The research by Liu et al. provides a powerful lens through which to understand this challenge in the age of LLMs.

The core issue is a mismatch between the user's mental model and the system's actual capabilities. This cycle, as outlined in the paper, perpetuates inefficiency:

Past Tool Experience (e.g., search engines) → Fixated Expectations ("AI is for simple Q&A") → Limited Interaction (simple, rigid prompts) → Unmet Expectations ("The AI is not smart") → Opportunity for Adaptation

Key Research Findings & Enterprise Implications

The study's findings provide a roadmap for diagnosing and treating functional fixedness in an enterprise setting. By understanding how different user backgrounds affect AI interaction, we can build targeted solutions.

The Architect: Deliberate, Structured, and Cautious

These users, accustomed to the precision of traditional search engines, treat LLMs like a database. They spend more time crafting long, keyword-heavy prompts and carefully scrutinize the results. Their approach is incremental, making small adjustments to their queries rather than trying entirely new angles.

Enterprise Risk: Architects can be slow to adapt and may miss the LLM's creative and synthesizing capabilities. They are trapped in a retrieve-and-review mindset, limiting innovation and complex problem-solving.

[Chart: Architect's Behavioral Metrics]

The Explorer: Flexible, Iterative, and Fast

As native users of generative AI, Explorers are comfortable with the conversational nature of LLMs. They issue shorter, more direct prompts and rapidly iterate, showing a willingness to experiment with different phrasing. They understand the system's capabilities and are more likely to use it for complex tasks like brainstorming or drafting content.

Enterprise Opportunity: Explorers are your AI power users. The goal is to cultivate this behavior across the organization. However, the research shows even they can fall into habits, such as under-utilizing conversational context, which can be optimized with custom tools.

[Chart: Explorer's Behavioral Metrics]

The Commander: Directive, Task-Oriented, and Rigid

Shaped by experiences with tools like Alexa or Siri, Commanders treat the LLM as a subordinate to be given direct orders. They use command-like phrases and engage in fewer back-and-forth interactions. Their goal is task execution, not exploration or dialogue.

Enterprise Risk: This is the most constrained user type. Commanders severely limit the LLM's potential, using it for simple tasks that represent a fraction of its capabilities. This leads to significant underutilization of the AI investment.

[Chart: Commander's Behavioral Metrics]
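The three profiles above are distinguished by measurable interaction signals such as prompt length, reformulation behavior, and conversational depth. As an illustration only (the paper does not publish a classifier; the feature names and thresholds below are invented for this sketch), a simple heuristic could tag users from chat-log statistics:

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    avg_prompt_words: float    # mean words per prompt
    reformulation_rate: float  # fraction of prompts that are small edits of the previous one
    followup_turns: float      # mean back-and-forth turns per task
    imperative_rate: float     # fraction of prompts opening with a bare command verb

def classify_user(s: SessionStats) -> str:
    """Hypothetical heuristic mapping interaction signals to the three profiles."""
    if s.imperative_rate > 0.5 and s.followup_turns < 2:
        return "Commander"  # directive phrasing, little dialogue
    if s.avg_prompt_words > 20 and s.reformulation_rate > 0.4:
        return "Architect"  # long, keyword-heavy prompts with incremental edits
    return "Explorer"       # short, fast, experimental prompts

print(classify_user(SessionStats(8.0, 0.2, 1.5, 0.7)))  # → Commander
```

In a real deployment these thresholds would be fit to your own telemetry rather than hand-set; the point is that profile detection can run on signals most chat platforms already log.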

The Turning Point: How Unmet Expectations Drive Growth

The paper's most optimistic finding is that functional fixedness is not permanent. When a user's initial, rigid prompt fails, it creates a moment of "healthy friction." The study found that after a response "does not meet expectations," users instinctively create subsequent prompts that are significantly longer and contain more unique words. This is a crucial, teachable moment.
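This elaboration effect is straightforward to measure in your own chat logs. The sketch below assumes a turn-level log with a satisfaction flag per response; the field names are invented for illustration, not taken from the paper:

```python
def elaboration_delta(turns):
    """For each prompt that follows an unsatisfying response, compare its word
    count and unique-word count to the prompt before it.
    `turns` is a list of dicts: {"prompt": str, "met_expectations": bool}.
    """
    deltas = []
    for prev, curr in zip(turns, turns[1:]):
        if prev["met_expectations"]:
            continue  # only measure reactions to unmet expectations
        p, c = prev["prompt"].split(), curr["prompt"].split()
        deltas.append({
            "extra_words": len(c) - len(p),
            "extra_unique": len(set(c)) - len(set(p)),
        })
    return deltas

log = [
    {"prompt": "summarize report", "met_expectations": False},
    {"prompt": "summarize the Q3 sales report, focusing on regional growth trends",
     "met_expectations": True},
]
print(elaboration_delta(log))  # → [{'extra_words': 8, 'extra_unique': 8}]
```

Tracking these deltas over time gives a concrete proxy for whether users are learning to prompt more richly after friction, which is the adaptation the study describes.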

[Chart: Impact of Unmet Expectations on Next Prompt. When a response fails, users adapt by providing more detail.]

An enterprise AI system should not aim for 100% immediate success on simple queries. Instead, a well-designed system can leverage these moments of friction to guide the user toward more effective interaction patterns, turning a frustrating experience into a learning opportunity that permanently enhances their AI literacy.
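One way to operationalize this is to detect likely unmet-expectation moments, for example a rapid near-rewrite of the previous prompt, and attach a coaching hint to the next response. A minimal sketch; the overlap heuristic, threshold, and hint text are illustrative assumptions, not the paper's design:

```python
def is_frustrated_retry(prev_prompt: str, next_prompt: str) -> bool:
    """Heuristic: the user re-sent largely the same words, suggesting the
    first answer missed the mark (word-overlap Jaccard > 0.6)."""
    a, b = set(prev_prompt.lower().split()), set(next_prompt.lower().split())
    if not a or not b:
        return False
    return len(a & b) / len(a | b) > 0.6

def coaching_hint(prompt: str) -> str:
    """Build a scaffolding tip that nudges the user toward a richer prompt."""
    tips = []
    if len(prompt.split()) < 6:
        tips.append("add context such as audience, format, and constraints")
    if "?" not in prompt:
        tips.append("state the question or goal explicitly")
    return "Tip: " + "; ".join(tips) if tips else ""

if is_frustrated_retry("summarize report", "summarize the report"):
    print(coaching_hint("summarize the report"))
```

The design choice here is deliberate: the hint fires only on detected friction, so confident users are never interrupted, while fixated users get guidance at exactly the teachable moment the research identifies.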

A Strategic Framework for Overcoming Functional Fixedness

At OwnYourAI.com, we translate these academic insights into a practical, three-phase framework we call "Adaptive AI Scaffolding." This approach moves beyond generic AI deployment to create a system that actively teaches and evolves with your users.

Ready to Build an AI That Adapts to Your Team?

Stop settling for underperforming AI tools. We design and implement custom LLM solutions with built-in adaptive scaffolding to break cognitive barriers and unlock your team's full potential.

Schedule Your Custom Implementation Call

Ready to Get Started?

Book Your Free Consultation.
