Enterprise AI Analysis: Measuring Political Preferences in AI Systems – An Integrative Approach

Political biases in Large Language Model (LLM)-based artificial intelligence (AI) systems, such as OpenAI's ChatGPT or Google's Gemini, have been previously reported. This study employs a multi-method approach to assess political bias in leading AI systems, integrating four complementary methodologies. Results indicate a consistent left-leaning bias across most contemporary AI systems. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies. To mitigate these risks, AI systems should be designed to prioritize factual accuracy while maintaining neutrality on most lawful normative issues.

Executive Impact: Key Findings

Our comprehensive analysis reveals critical insights into the pervasive left-leaning political biases within contemporary AI systems, and offers clear pathways to foster neutrality and trust.

0.80 correlation (Pearson's r) with AllSides media bias ratings
Conversational, base, and ideologically aligned LLMs analyzed
1 million news articles processed
290 public figures evaluated

Deep Analysis & Enterprise Applications

The modules below present the specific findings from the research, organized by methodology and reframed for enterprise applications.

Comparing AI-generated text with the language used by U.S. Congress legislators

This methodology measures the degree of similarity between language generated by AI systems and language used by Republican and Democratic legislators in the U.S. Congress. A 2010 study established a clear alignment between linguistic choices and political leanings in news media. Our analysis extends this to LLMs, finding that conversational LLMs generate output closer to Democratic legislators' remarks. Base LLMs show a milder skew. Notably, ideologically aligned LLMs like LeftwingGPT and RightwingGPT exhibit predictable patterns, with LeftwingGPT using significantly more Democratic bigrams and RightwingGPT slightly more Republican ones.

Validation of this approach using 1 million news articles from 48 media outlets revealed a strong correlation (Pearson's r = 0.80) between partisan term usage and AllSides media bias ratings, confirming its validity for LLM-generated text.
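As an illustration of this validation step, the sketch below correlates hypothetical outlet-level partisan-term scores with numeric AllSides ratings using Pearson's r; the outlet names, scores, and rating scale are assumptions for demonstration only, not the study's data.

```python
# Sketch: correlating outlet-level partisan-term usage with AllSides bias ratings.
# All values below are illustrative placeholders; the study used 48 outlets.
from scipy.stats import pearsonr

# Hypothetical per-outlet scores: negative = closer to Democratic terms,
# positive = closer to Republican terms (sign convention is an assumption).
partisan_term_scores = {"OutletA": -0.42, "OutletB": 0.31, "OutletC": -0.05}

# Hypothetical AllSides ratings mapped to numbers (e.g., Left = -2 ... Right = +2).
allsides_ratings = {"OutletA": -2, "OutletB": 1, "OutletC": 0}

outlets = sorted(partisan_term_scores)
x = [partisan_term_scores[o] for o in outlets]
y = [allsides_ratings[o] for o in outlets]

r, p_value = pearsonr(x, y)  # the study reports r = 0.80 across 48 outlets
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")
```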

Methodology Flow: Linguistic Comparison

1. Gather Congressional Record remarks (2010-2022)
2. Lowercase and filter text (stop words, overused terms)
3. Compute bigram frequencies for each party
4. Derive a chi-square statistic for partisan contrast
5. Filter with a NYT reference corpus (excluding uncommon terms)
6. Compare LLM output to partisan terms via Jensen-Shannon divergence (see the sketch below)
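As a concrete illustration of the final comparison step, the sketch below computes Jensen-Shannon divergence between an LLM's bigram distribution and each party's partisan-term distribution. The reference distributions and sample text are toy placeholders, not the study's actual data or implementation.

```python
# Sketch: comparing LLM output to partisan bigram distributions via Jensen-Shannon divergence.
import math
from collections import Counter

def bigram_distribution(tokens):
    """Relative frequency of adjacent word pairs in a token list."""
    counts = Counter(zip(tokens, tokens[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two sparse distributions."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    def kl(a):
        return sum(v * math.log2(v / m[k]) for k, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Hypothetical partisan reference distributions built from Congressional Record bigrams.
dem_dist = {("climate", "change"): 0.6, ("health", "care"): 0.4}
rep_dist = {("border", "security"): 0.7, ("tax", "relief"): 0.3}

llm_tokens = "climate change and health care policy".split()
llm_dist = bigram_distribution(llm_tokens)

# A negative difference means the LLM text sits closer to the Democratic distribution.
diff = js_divergence(llm_dist, dem_dist) - js_divergence(llm_dist, rep_dist)
print(f"JSD(LLM, Dem) - JSD(LLM, Rep) = {diff:.3f}")
```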

Figure 1 illustrates sample partisan terms used by Democratic and Republican legislators. Figure 2 displays the Jensen-Shannon Divergence difference, showing conversational LLMs align more with Democratic partisan terms. Figure 3 provides an alternative visualization of these frequencies.

Political Viewpoints Embedded in LLMs' Policy Recommendations

For this method, LLMs were prompted to generate policy recommendations on various topics (e.g., energy, immigration, education). A leading LLM (gpt-4o-mini) then annotated these recommendations for ideological valence (left-leaning, centrist, or right-leaning). All conversational LLMs tended to generate policy recommendations judged as containing predominantly left-leaning viewpoints. Base models also showed a left-leaning skew, though a milder one. RightwingGPT and LeftwingGPT, as expected, generated recommendations consistent with their intended political alignments. These findings align with previous analyses of LLM policy recommendations for the E.U. and U.K.
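A minimal sketch of this kind of LLM-as-annotator setup, assuming the OpenAI Python client, is shown below; the prompt wording, label set, and parsing are illustrative assumptions rather than the study's exact protocol.

```python
# Sketch: using gpt-4o-mini to label the ideological valence of a policy recommendation.
# Prompt wording, labels, and response parsing are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = {"left-leaning", "centrist", "right-leaning"}

def annotate_valence(recommendation: str) -> str:
    """Ask the annotator model to classify a policy recommendation's ideological valence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the ideological valence of the policy recommendation. "
                        "Answer with exactly one of: left-leaning, centrist, right-leaning."},
            {"role": "user", "content": recommendation},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unparsed"

print(annotate_valence("Raise the federal minimum wage and expand public healthcare coverage."))
```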

Key Finding: Left-Leaning Viewpoints Dominate Policy Recommendations

Figure 4 visually represents the political preferences embedded in LLM responses to policy recommendation prompts.

Sentiment towards Politically Aligned Public Figures in LLM-Generated Content

The sentiment (negative: –1, neutral: 0, positive: +1) towards 290 politically aligned public figures (including U.S. Presidents, Senators, Governors, Supreme Court justices, journalists, and Western leaders) was analyzed in LLM-generated text. A stark asymmetry was observed: conversational LLMs tend to generate text with more positive sentiment towards left-of-center public figures than towards their right-of-center counterparts. Sentiment towards right-of-center figures also appears more variable. Base LLMs showed a milder but still noticeable left-leaning favorable asymmetry. Politically aligned LLMs produced sentiments consistent with their designated political leanings.

Sentiment Patterns Toward Left-Leaning vs. Right-Leaning Public Figures

General Sentiment
  • Left-leaning figures: more positive sentiment, less variability
  • Right-leaning figures: less positive sentiment, higher variability

Conversational LLMs
  • Left-leaning figures: consistently positive; clear favorable bias
  • Right-leaning figures: less positive and more variable; less favorable treatment

Base LLMs
  • Left-leaning figures: mildly positive; slight favorable skew
  • Right-leaning figures: mildly less positive; milder unfavorable skew

Figure 5 illustrates the average sentiment of conversational LLMs towards various categories of ideologically aligned public figures.
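To illustrate how per-figure sentiment scores can be aggregated into the group averages behind Figure 5, the snippet below averages the -1/0/+1 scores by ideological category; the records shown are placeholders, while the study itself covered 290 public figures.

```python
# Sketch: aggregating per-figure sentiment scores (-1, 0, +1) by ideological category.
# The records are illustrative placeholders, not the study's data.
from statistics import mean, pstdev

scores = [
    {"figure": "Left-of-center figure A", "group": "left", "sentiment": 1},
    {"figure": "Left-of-center figure B", "group": "left", "sentiment": 0},
    {"figure": "Right-of-center figure C", "group": "right", "sentiment": -1},
    {"figure": "Right-of-center figure D", "group": "right", "sentiment": 1},
]

for group in ("left", "right"):
    values = [r["sentiment"] for r in scores if r["group"] == group]
    print(f"{group:>5}: mean={mean(values):+.2f}, spread={pstdev(values):.2f}, n={len(values)}")
```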

Political Orientation Tests' Diagnoses of LLMs' Answers to Politically Connoted Questions

Three popular political orientation tests (Political Compass Test, Political Spectrum Quiz, Political Coordinates Test) were administered to the LLMs, each 10 times, with results averaged. These tests measure political preferences across economic and social axes. Conversational LLMs generally scored left of center on both axes. Base LLMs scored closer to the political spectrum's center, which contrasts with other methods in this report showing a mild left-leaning bias. This discrepancy might stem from the incoherent responses often generated by base models, introducing noise. Politically aligned LLMs (LeftwingGPT and RightwingGPT) consistently aligned with their intended ideological positions, demonstrating the models' ability to reflect specific political leanings when fine-tuned.
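The repetition-and-averaging step can be sketched as follows. The hypothetical administer_test function stands in for prompting an LLM with a test's items and mapping its answers to economic and social coordinates, since each instrument uses its own scoring scheme.

```python
# Sketch: averaging repeated administrations of a political orientation test.
# administer_test is a hypothetical stand-in for prompting the LLM with the test
# items and converting its answers into (economic, social) coordinates.
import random
from statistics import mean

def administer_test(model_name: str, run: int) -> tuple[float, float]:
    """Placeholder scorer: returns (economic, social) scores; negative = left here."""
    random.seed(hash((model_name, run)))
    return random.uniform(-10, 10), random.uniform(-10, 10)

RUNS = 10  # each test was administered 10 times and the results averaged

results = [administer_test("example-llm", run) for run in range(RUNS)]
economic = mean(score[0] for score in results)
social = mean(score[1] for score in results)
print(f"average economic axis: {economic:+.2f}, average social axis: {social:+.2f}")
```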

Ideologically Aligned LLMs: A Targeted Approach

The study includes experimental LLMs like LeftwingGPT and RightwingGPT, which were fine-tuned with politically skewed data. These models consistently demonstrated ideological alignments matching their training data, scoring at the extremes of the political spectrum in orientation tests. This highlights the effectiveness of targeted fine-tuning in shaping an LLM's political stance and demonstrates that bias is not an inherent feature of all LLMs, but rather a tunable characteristic.
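As one possible, assumption-laden route to such targeted fine-tuning, the sketch below prepares ideologically skewed prompt/response pairs in the OpenAI chat fine-tuning format and launches a job; this illustrates the general technique only and is not the procedure actually used to build LeftwingGPT or RightwingGPT.

```python
# Sketch: supervised fine-tuning on ideologically skewed prompt/response pairs,
# using the OpenAI fine-tuning API as one possible route (not the study's setup).
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical training examples whose responses lean consistently one way.
examples = [
    {"messages": [
        {"role": "user", "content": "What should be done about corporate taxes?"},
        {"role": "assistant", "content": "Corporate tax rates should be cut to spur growth."},
    ]},
]

# Write the examples in the JSONL format expected for chat fine-tuning.
with open("skewed_training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

training_file = client.files.create(file=open("skewed_training_data.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-4o-mini-2024-07-18")
print(job.id)
```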

Figure 6 shows the average results of LLMs on these political orientation tests, classifying them across economic and social axes.

Calculate Your Potential AI ROI

Estimate the significant time and cost savings your enterprise could achieve by strategically implementing unbiased AI solutions.


Your Path to Unbiased AI Implementation

A structured approach to integrating fair and accurate AI into your enterprise operations.

Assessment & Strategy

Comprehensive review of current systems, identification of bias risks, and development of a tailored AI strategy focused on neutrality and accuracy.

Data Preparation & Fine-Tuning

Curating diverse and balanced datasets. Implementing advanced fine-tuning techniques to mitigate existing biases and align models with factual accuracy.

Deployment & Integration

Seamless integration of refined AI systems into your enterprise infrastructure, ensuring compatibility and optimal performance.

Continuous Monitoring & Iteration

Establishing independent monitoring platforms for ongoing bias detection, performance evaluation, and iterative improvements to maintain neutrality and trust.

Ready to Build Trustworthy AI?

Address political biases, ensure factual accuracy, and build public trust in your AI systems. Our experts are ready to guide you.

Book Your Free Consultation.