A recent study by a British-Brazilian research team has raised concerns about political bias in generative AI, particularly in ChatGPT. When ChatGPT's responses were compared with actual survey data from Americans, the study found systematic deviations toward left-leaning perspectives. To assess ChatGPT's political alignment, the team employed three novel methods, including prompting the model to answer a Pew Research Center political survey as an average American would.
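As a rough illustration, the sketch below shows what such an impersonation probe could look like against the OpenAI chat API. This is not the authors' code; the model name, survey wording, and personas are assumptions made for the example.

```python
# Minimal sketch (not the study's actual code) of an impersonation-style probe:
# ask the model to answer a survey item as a given persona would, then compare
# its answers against real respondents' distributions over many items and runs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical Pew-style survey item with fixed answer options.
SURVEY_ITEM = (
    "Would you say government regulation of business usually does more harm "
    "than good, or is it necessary to protect the public interest? "
    "Answer with exactly one option: 'more harm than good' or "
    "'necessary to protect the public interest'."
)

def ask_as(persona: str) -> str:
    """Ask the model to answer the survey item while impersonating a persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study probed earlier ChatGPT versions
        messages=[
            {"role": "system",
             "content": f"You are answering a political survey as {persona}. "
                        "Reply with only the chosen option."},
            {"role": "user", "content": SURVEY_ITEM},
        ],
        temperature=0,  # reduce variance across repeated probes
    )
    return (response.choices[0].message.content or "").strip()

# Systematic divergence between the "average American" persona and real survey
# data is the kind of signal the study aggregated across many questions.
for persona in ("the average American", "the average Democrat", "the average Republican"):
    print(persona, "->", ask_as(persona))
```

Repeating probes of this kind across many questions, and aggregating the answers, lets deviations from the real survey distribution be quantified rather than judged anecdotally.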
The researchers also noted instances where ChatGPT refused to generate images with right-wing themes, citing misinformation concerns, even though the team's review of the refused material found nothing harmful.
The findings further indicated that ChatGPT often avoids engaging with mainstream conservative viewpoints, an asymmetry the authors argue could exacerbate societal divides.
Lead researcher Dr. Fabio Motoki emphasized that generative AI tools such as ChatGPT are not neutral: they reflect biases that can shape public perceptions and policy. Co-author Dr. Pinho Neto warned that unchecked biases in generative AI could deepen societal divides and erode trust in democratic processes. Co-author Victor Rangel questioned the rationale for censoring right-leaning images, especially given the team's ability to bypass such refusals. Motoki also highlighted the implications for free speech and fairness, contributing to ongoing debates over constitutional protections and whether fairness doctrines should apply to AI systems.
Researchers warned that uneven treatment of political ideologies by AI could distort public discourse, highlighting the need for alignment with societal values and democratic principles.
The study calls for transparency and regulatory safeguards as AI becomes increasingly integral to journalism, education, and policymaking.
The findings were published in the Journal of Economic Behavior & Organization.