
ENTERPRISE AI ANALYSIS

Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary?

By David B. Resnik & Mohammad Hosseini. Published in Accountability in Research, DOI: 10.1080/08989621.2025.2481949

This paper develops a framework for when artificial intelligence (AI) use in scientific research and publication should be mandatorily disclosed, optionally disclosed, or deemed unnecessary. It argues that disclosure should be mandatory only when AI use is both intentional and substantial, defining clear criteria for 'substantial' contributions. The framework addresses inconsistencies in existing disclosure policies and highlights the ethical reasons for transparency, accountability, and reproducibility in human-AI collaboration.

Executive Impact: Streamlining AI Disclosure

Implementing a clear AI disclosure framework provides significant benefits for research integrity and operational efficiency.

  • Transparency Boost
  • Reproducibility Gain
  • Researcher Time Saved
  • Policy Harmonization

The proposed framework drives these benefits by clarifying expectations for AI use, reducing ambiguity for researchers and publishers, and reinforcing core ethical principles like proper credit, accountability, and scientific rigor.

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Framework Overview
Mandatory Disclosure
Optional Disclosure
Unnecessary Disclosure

The core of this paper is the distinction between mandatory, optional, and unnecessary AI disclosure, predicated on whether AI use is intentional and substantial.

Intentional AI Use: Deliberately employing a specific AI tool for a research task, with a particular goal or purpose in mind.

Substantial AI Use (Criteria): AI use is substantial if it:

  1. Produces evidence, analysis, or discussion supporting conclusions.
  2. Directly affects the content of the research or publication.

To further define 'substantial,' three key criteria are proposed for mandatory disclosure:

  • a) Decision-Making: AI makes decisions directly affecting research results (e.g., data extraction for systematic reviews).
  • b) Content Generation: AI generates or synthesizes content, data, or images (e.g., writing sections, creating synthetic data).
  • c) Data Analysis: AI analyzes content, data, or images (e.g., genomic data, text, radiological images).

Disclosure is mandatory when AI use is intentional and meets at least one of the substantiality criteria above. Examples include using AI:

  • To formulate questions or hypotheses, design and conduct experiments.
  • To draft parts of the paper, summarize, paraphrase, significantly revise or synthesize textual content.
  • To translate parts or the whole paper.
  • To collect, analyze, interpret or visualize data (quantitative or qualitative).
  • To extract data for review of the literature (systematic or not) and identify knowledge gaps.
  • To generate synthetic data and images reported in the paper or used in research.

These uses directly impact the intellectual contribution and integrity of the research.

For uses that are intentional but less substantial, disclosure can be optional. This allows researchers discretion while still promoting transparency where desired. Examples include using AI:

  • To edit existing text for grammar, spelling or organization.
  • To find references or verify the relevance of human-found references.
  • To find and generate examples for existing content.
  • To brainstorm and offer suggestions for the organization of a paper or the title of a paper/section.
  • To validate and/or offer feedback on existing ideas, text and code.

Such uses generally assist human effort without fundamentally altering research outcomes or intellectual content.

Some AI uses are incidental or non-substantial, and disclosing them is deemed unnecessary as it would not serve a useful purpose and could be distracting. Examples include using AI:

  • To suggest words or phrases that enhance clarity/readability of an existing sentence.
  • In part of a larger operation where AI is not generating or synthesizing content or making research decisions; for example, when AI is integrated into other systems/machines (e.g., genome sequencers).
  • As a digital assistant, for example, to help organize and maintain a project's digital assets and workflows.

These are considered background or incidental uses that do not significantly impact credit, accountability, reproducibility, or trustworthiness.

AI Disclosure Decision Flow

Navigate the decision-making process for AI disclosure based on the proposed framework.

AI Use Identified → Is Use Intentional?
  • No → No Disclosure Needed
  • Yes → Is Use Substantial?
      • Yes → Mandatory Disclosure
      • No → Optional Disclosure
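The decision flow above can be sketched as a small function. This is an illustrative sketch only: the predicate names (`intentional`, `makes_research_decisions`, and so on) are assumptions introduced here, not terminology from the paper, and the three "substantial" flags mirror criteria (a)–(c) from the framework overview.

```python
def disclosure_requirement(intentional: bool,
                           makes_research_decisions: bool,
                           generates_content: bool,
                           analyzes_data: bool) -> str:
    """Classify an AI use as 'mandatory', 'optional', or 'unnecessary' to disclose."""
    if not intentional:
        # Incidental or background use, e.g. AI embedded in a genome sequencer.
        return "unnecessary"
    # Substantial if any of criteria (a) decision-making, (b) content
    # generation, or (c) data analysis applies.
    substantial = makes_research_decisions or generates_content or analyzes_data
    return "mandatory" if substantial else "optional"

# AI extracts data for a systematic review (criterion a):
print(disclosure_requirement(True, True, False, False))   # → mandatory
# AI fixes grammar in an existing draft (intentional but not substantial):
print(disclosure_requirement(True, False, False, False))  # → optional
```

Note that non-intentional use short-circuits to "unnecessary" regardless of the other flags, matching the framework's requirement that mandatory disclosure applies only to use that is both intentional and substantial.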

Ethical Pillars vs. Consequences

The framework is built upon fundamental ethical norms. Understanding these reinforces the need for clear disclosure policies.

Ethical Pillars of Disclosure
  • Promotes proper assignment of credit (Fairness, Honesty)
  • Ensures accountability for research problems (Social Responsibility)
  • Facilitates reproducibility of results (Rigor, Openness)
  • Enhances trustworthiness and integrity (Transparency)
  • Demarcates human vs. machine contributions (Metascience)

Consequences of Non-Disclosure
  • Human authors receive undue credit, AI contribution is obscured.
  • Difficulty identifying responsibility for errors or misconduct stemming from AI.
  • Inability for others to replicate or verify AI-assisted research outcomes.
  • Erosion of trust in research findings and methodology.
  • Obscures the evolution and impact of AI in knowledge production.

Current Policy Ambiguity

Many existing guidelines are inconsistent, creating challenges for researchers and publishers.

According to the paper's review, an estimated 70% of current AI disclosure policies are contradictory, vague, or lack sufficient detail, underscoring the need for a unified framework.

Case Study: AI for Systematic Review Data Extraction

A practical example illustrating mandatory disclosure.

Scenario: Dr. Anya Sharma's lab used an AI tool to automatically extract specific patient demographics and treatment outcomes from hundreds of published clinical trials for a systematic review. The AI was trained on a subset of manual extractions and then applied to the full dataset, making decisions about which data points to select and categorize based on pre-defined criteria. The AI's decisions directly influenced the final dataset used for meta-analysis.

Disclosure: Dr. Sharma's team mandatorily disclosed the specific AI tool (e.g., 'Custom-trained BERT model v3.1'), its role in data extraction, the training methodology, and its limitations within the Methods section of their publication. This allowed reviewers and readers to understand the potential impact of the AI on data integrity and reproducibility, aligning with the 'substantial AI use' criterion (a) involving decision-making affecting research results.

Impact: The transparency fostered trust among the scientific community, and the detailed methodology enabled other researchers to potentially reproduce or build upon their data extraction process, despite the AI's inherent complexities. This reinforced the paper's conclusions regarding accountability and reproducibility.

Advanced ROI Calculator: AI Disclosure Efficiency

Estimate the potential time and cost savings from implementing clear AI disclosure policies within your institution.

Outputs: estimated annual cost savings and annual hours reclaimed.
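A minimal sketch of such a calculator, assuming a simple linear model: every input (researcher headcount, hours saved per researcher, hourly cost) is a hypothetical parameter for illustration, not a figure from the paper or from any real deployment.

```python
def disclosure_roi(num_researchers: int,
                   hours_saved_per_researcher_per_year: float,
                   avg_hourly_cost: float) -> tuple[float, float]:
    """Estimate annual hours reclaimed and cost savings from a clear policy.

    Assumes savings scale linearly with headcount; real institutions
    would calibrate these inputs from their own data.
    """
    hours = num_researchers * hours_saved_per_researcher_per_year
    savings = hours * avg_hourly_cost
    return hours, savings

# Example: 200 researchers, 5 hours saved each per year, $60/hour.
hours, savings = disclosure_roi(200, 5.0, 60.0)
print(f"{hours:.0f} hours reclaimed, ${savings:,.0f} saved")
# → 1000 hours reclaimed, $60,000 saved
```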

Implementation Roadmap: Strategic AI Disclosure Adoption

A phased approach to integrating the AI disclosure framework into your institutional policies.

Phase 1: Policy Review & Gap Analysis

Assess existing institutional and journal AI policies, identify gaps, and understand current researcher practices regarding AI tool use.

Phase 2: Stakeholder Consultation & Framework Customization

Engage researchers, editors, ethicists, and legal counsel to refine the mandatory/optional/unnecessary framework for institutional context and disciplinary nuances.

Phase 3: Guideline Development & Pilot Program

Draft comprehensive disclosure guidelines, create templates, and launch a pilot program with select research groups or journals to gather feedback.

Phase 4: Training & Rollout

Develop and deliver training materials and workshops for all researchers and editorial staff. Officially roll out the new AI disclosure policy institution-wide.

Phase 5: Monitoring, Evaluation & Iteration

Establish metrics for compliance and impact. Regularly review and update policies to adapt to evolving AI technologies and research practices.

Ready to Implement a Robust AI Disclosure Framework?

Ensure ethical rigor and transparency in your research outputs with our tailored guidance and solutions.
