Enterprise AI Analysis of "Where there's a will there's a way"

A Deep Dive into AI Regulation, Adoption, and the Business Imperative for Custom Governance

Executive Summary

A groundbreaking study by Honglin Bao, Mengyi Sun, and Misha Teplitskiy, titled "Where there's a will there's a way: ChatGPT is used more for science in countries where it is prohibited," reveals a startling reality about AI regulation: geographical restrictions are largely ineffective. By developing a sophisticated, domain-specific classifier for scientific texts, the researchers found that ChatGPT adoption was paradoxically higher in countries where it was officially banned. This phenomenon, driven by a thriving ecosystem of workarounds, offers critical lessons for the enterprise.

For business leaders, this research is a wake-up call. It demonstrates that top-down prohibition of powerful tools, whether by governments or corporations, often fails. Employees, like the scientists in the study, will find ways to access technologies they perceive as beneficial, creating significant "shadow AI" risks related to data security, compliance, and intellectual property. The study also highlights that the primary benefit of this AI usage was increased visibility (attention) rather than improved quality of work (impact), a crucial distinction for measuring ROI. At OwnYourAI.com, we interpret these findings as a clear mandate for a new approach: instead of futile bans, enterprises must implement robust, custom AI governance frameworks that manage risk while empowering innovation.

The Ineffectiveness of AI Bans: Key Research Findings

The paper's central conclusion is that determined users will circumvent restrictions. The data shows a clear and growing divergence in AI tool adoption, with restricted regions leading the charge. This isn't just a niche trend; it's a widespread behavior among highly skilled professionals.

ChatGPT Adoption Rates (by August 2023)

Estimated ChatGPT Usage Over Time

The following chart illustrates the paper's core finding: after ChatGPT's release, estimated usage in countries where it was prohibited quickly surpassed usage in countries where it was legally supported, and the gap continued to widen.

The 'Fingerprint' Method: A Superior Approach to AI Detection

A major contribution of this study is its novel methodology for detecting AI-generated text. Generic, off-the-shelf detectors like GPTZero and ZeroGPT have known biases and often perform poorly on specialized content. The researchers overcame this by building a custom classifier trained specifically on scientific abstracts. This model learned the subtle linguistic "fingerprints" left by early versions of ChatGPT.

  • Overrepresented Words: Terms like "delve," "intricate," and "groundbreaking" appeared more frequently in AI-polished text.
  • Underrepresented Words: Words like "hence," "thus," and "might" were less common in AI outputs compared to human writing.

This domain-specific approach proved far more accurate, highlighting a critical lesson for enterprises: to truly understand AI usage and manage its risks within your organization, you need custom, context-aware monitoring tools, not generic solutions.
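To make the approach concrete, here is a minimal sketch of how a word-level "fingerprint" classifier could be built. This is not the authors' actual ensemble model; the training examples, pipeline, and library choices (scikit-learn's TF-IDF vectorizer plus logistic regression) are illustrative assumptions, shown only to demonstrate how over- and under-represented words become learnable signals.

```python
# Minimal sketch of a domain-specific "fingerprint" classifier.
# Assumes a labeled corpus of abstracts: human-written vs. AI-polished.
# NOT the authors' ensemble model -- only an illustration of the idea that
# words like "delve" or "intricate" can separate the two classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data (replace with your own labeled abstracts).
abstracts = [
    "We delve into the intricate dynamics of this groundbreaking framework.",
    "This groundbreaking approach delves into intricate interactions.",
    "Hence, the results suggest the effect might be modest; thus, caution is warranted.",
    "The sample was small; hence these findings might not generalize.",
]
labels = [1, 1, 0, 0]  # 1 = AI-polished, 0 = human-written

# Word-level TF-IDF features feed a linear classifier, so the learned
# coefficients directly expose over- and under-represented "fingerprint" words.
clf = make_pipeline(
    TfidfVectorizer(lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(abstracts, labels)

# Inspect which words push a text toward the "AI-polished" class.
vec = clf.named_steps["tfidfvectorizer"]
model = clf.named_steps["logisticregression"]
weights = sorted(zip(vec.get_feature_names_out(), model.coef_[0]), key=lambda w: w[1])
print("Most human-indicative words:", weights[:5])
print("Most AI-indicative words:", weights[-5:])
```

In practice, the same pattern scales to an organization's own documents: train on text you know is human-written versus text produced with sanctioned AI tools, and the classifier becomes a context-aware monitoring signal rather than a generic detector.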

AI Detector Accuracy on Scientific Texts

The custom "Ensemble" classifier developed by the researchers significantly outperformed popular commercial tools, demonstrating the power of domain-specific models.

Enterprise Implications: The 'Workaround' Economy and Shadow AI

The scientists in the study used VPNs, purchased accounts, and employed proxy servers to bypass OpenAI's restrictions. Your employees are no different. When a corporate policy bans a useful generative AI tool without providing a viable, sanctioned alternative, a "workaround economy" emerges within the company. This creates massive blind spots for IT and security leaders, known as "Shadow AI."

The ROI of Generative AI: Measuring Attention vs. True Impact

One of the most nuanced findings is that while ChatGPT use was correlated with higher attention metrics (more views and downloads of preprints), it showed no statistically significant link to higher impact metrics (citations or publication in top journals). This suggests the AI's primary benefit was in improving the clarity, language, and "polish" of the abstracts, making them more appealing to readers, rather than fundamentally improving the underlying scientific quality.

For enterprises, this translates to a critical question for ROI analysis: Is your AI strategy designed to enhance marketing and communication (attention), or to drive core innovation and product quality (impact)? Both are valuable, but they require different tools, metrics, and custom solutions.

Interactive ROI Calculator: Attention vs. Impact

Estimate the potential "attention" boost a custom AI writing assistant could provide for your internal R&D or marketing teams. This model is based on the paper's finding of a ~2 percentile point increase in views.
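For readers without access to the interactive calculator, the arithmetic behind this estimate is simple. In the sketch below, the only study-derived input is the ~2 percentile-point boost in views; every other number (document volume, value of a percentile point in views) is a hypothetical placeholder you would replace with your own internal metrics.

```python
# Back-of-the-envelope "attention" estimate mirroring the calculator above.
# Only PERCENTILE_BOOST comes from the paper; all other inputs are made up.

PERCENTILE_BOOST = 2.0  # ~2 percentile-point increase in views reported in the study


def estimated_extra_views(docs_per_year: int, views_per_percentile: float) -> float:
    """Rough extra views per year if each document gains the reported boost.

    views_per_percentile is how many additional views one percentile point of
    ranking is worth in *your* channel -- it must be measured internally.
    """
    return docs_per_year * PERCENTILE_BOOST * views_per_percentile


# Example with placeholder numbers: 300 reports per year and an assumed
# 1.5 extra views per percentile point.
print(estimated_extra_views(300, 1.5))  # -> 900.0 extra views per year
```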

Strategic Roadmap for Enterprise AI Governance: From Prohibition to Enablement

The clear lesson from this research is that prohibition is not a viable strategy. A mature enterprise AI policy must be built on principles of managed access, risk mitigation, and strategic enablement. We propose a three-tiered framework that can be customized to your organization's unique risk appetite and business goals.
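As a purely illustrative sketch of what "managed access" could look like in practice, the snippet below encodes a hypothetical tiered policy as data. The tier names, tools, and data classes are assumptions for demonstration, not the specific framework discussed here; the point is that governance rules become auditable and enforceable once they are expressed as configuration rather than as a memo.

```python
# Hypothetical tiered AI-access policy, for illustration only.
# A real policy would be tailored to your risk appetite and data classification.
from dataclasses import dataclass, field


@dataclass
class AccessTier:
    name: str
    approved_tools: list[str] = field(default_factory=list)
    allowed_data_classes: list[str] = field(default_factory=list)
    requires_human_review: bool = True


# Three example tiers: open use on public data, managed use on internal data,
# and a restricted tier where no generative AI use is permitted.
POLICY = [
    AccessTier("open", ["sanctioned-chat-assistant"], ["public"], False),
    AccessTier("managed", ["sanctioned-chat-assistant"], ["public", "internal"], True),
    AccessTier("restricted", [], [], True),
]


def is_request_allowed(tier: AccessTier, tool: str, data_class: str) -> bool:
    """Check whether a tool/data-class combination is permitted under a tier."""
    return tool in tier.approved_tools and data_class in tier.allowed_data_classes


print(is_request_allowed(POLICY[1], "sanctioned-chat-assistant", "internal"))  # True
print(is_request_allowed(POLICY[2], "sanctioned-chat-assistant", "internal"))  # False
```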

Ready to Move Beyond Ineffective Bans?

The evidence is clear: you can't stop the tide of generative AI. The only winning move is to build a custom strategy that channels its power safely and effectively. Let OwnYourAI.com help you design and implement a bespoke AI governance framework that protects your assets while unlocking unprecedented innovation.

Book Your AI Strategy Session Now
