Opinion & Thinking Issues
On Activism within Academic Computing
Authored by Randy Connolly, Mount Royal University
Executive Impact: Re-evaluating AI Ethics in Academia
This article challenges the narrow view of ethics in academic computing, extending beyond plagiarism to encompass broader societal harms of generative AI. Drawing parallels with historical debates around nuclear weapons, it advocates for a 'reflexive dimension' in computing practice. This involves academics acting as public intellectuals and activists, expressing concerns about technologies that threaten civic institutions, rather than merely optimizing their use. The piece highlights the importance of science engaging with civil society and being willing to 'say no' to technologies deemed irredeemably unethical.
Deep Analysis & Enterprise Applications
The article critiques the prevalent, narrow understanding of ethics in computing, particularly concerning generative AI. It argues that 'harm' extends far beyond plagiarism to include psychological, cognitive, environmental, and broader societal impacts.
Key Assertion: Generative AI and Civic Institutions
“When AI systems are fully embraced and implemented ... they will either destroy these [civic] institutions directly or make them so vulnerable that their demise is inevitable.”
Source: Hartzog and Silbey [14]
Academic computing has an 'instrumental' dimension focused on problem-solving and applying research. The article, however, emphasizes a crucial 'reflexive' dimension: self-criticality and the defense of the wider public interest. This involves questioning the field's 'doxa' and advocating for the public good.
| Type of Practice | Internal Orientation | External Orientation |
|---|---|---|
| Instrumental | Problem-solving and disciplinary research | Applying research in service of external clients |
| Reflexive | Self-criticality; questioning the field's 'doxa' | Defending the wider public interest |
Educator's Responsibility: Saying No to Unethical Tech
“Too often we are tempted to believe that as users of this technology we are not accountable to its consequences. But as knowledge producers, we do have a responsibility to the wider public. So, if our scholarly activity has convinced us of the dangers of a manifestation of a particular computing technology, we should be willing to say no to it.”
Source: Randy Connolly, ACM Inroads
The article draws a parallel between the current AI debate and past scientific activism, particularly around nuclear weapons. Scientists of that era took principled stands against technologies they judged intrinsically harmful, even when government funding for related research was growing.
Historical Precedent: Nuclear Weapons Debate
Gramsci's 'Monsters' and the Interregnum
“The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum, a great variety of morbid symptoms [also translated as 'monsters'] appear.”
Source: Antonio Gramsci [28]
Your Roadmap to Responsible AI Leadership
Implement a strategic approach to AI that prioritizes ethics, fosters critical thinking, and aligns with broader societal well-being.
Phase 1: Ethical Framing & Awareness
Redefine computing ethics beyond plagiarism to include psychological, cognitive, environmental, and civic harms. Promote critical discussions in academic settings (e.g., SIGCSE, ITiCSE) about the true societal costs of AI innovations.
Phase 2: Fostering Reflexive Computing
Encourage a 'reflexive dimension' in computing education and research. This means actively questioning the underlying assumptions ('doxa') of the field and evaluating technologies like generative AI for their intrinsic ethical implications, not just their 'best use.'
Phase 3: Public Intellectual Engagement
Empower academic computing professionals to act as public intellectuals and activists. This includes publicly expressing concerns when convinced of a technology's dangers, publishing critical analyses, and educating the wider public and students about the sociotechnical harms of digital innovation, much like scientists in the nuclear age.
Phase 4: Institutional Responsibility & 'Saying No'
Develop institutional frameworks that support academics in taking principled stands, even if it means opposing the widespread adoption of certain technologies. Cultivate a culture where saying 'no' to inherently unethical computing practices is seen as a valid and essential form of scholarship, deeply engaged with civil society.