On November 14, 2025, at 9:30 a.m., the Arizona House of Representatives will convene a high-profile hearing on artificial intelligence, democratic governance, and election integrity. Chaired by Alexander Kolodin (R-LD3), the session will bring together leading AI and policy experts to probe how the rapid expansion of AI technologies could reshape — and potentially jeopardize — the fairness and transparency of voting systems.
As AI systems increasingly influence information flows, decision-making, and public discourse, the hearing comes at a moment of growing concern: Can democracies keep pace with technologies capable of shaping opinion, disrupting networks, and intervening in electoral processes? With Arizona’s 2026 elections looming, this is not just theoretical. The presence of heavy hitters from the AI alignment, policy, and constitutional spheres signals that the state is treating these threats with urgency.
Rep. @realAlexKolodin Brings Together Leading AI and Policy Experts for Hearing on Protecting Elections in the Age of Artificial Intelligence. @AZHouseGOP Rep. Alexander Kolodin, Chair of the House Ad Hoc Committee on Election Integrity and Florida-style Voting Systems, is… pic.twitter.com/nPD7Nx5z6R
— Arizona House Republicans (@AZHouseGOP) November 7, 2025
Titled “The Implications of Artificial Intelligence for Democratic Governance and How to Preserve Meaningful Elections,” the hearing is scheduled for Friday, November 14, 2025, at 9:30 a.m. in House Hearing Room 4 at the Arizona State Capitol.
Kolodin, who represents Legislative District 3 (North Scottsdale, Fountain Hills, and Rio Verde in Maricopa County), announced the event in a November 7 press release from the Arizona House of Representatives. The Arizona House Republican Majority Caucus also shared the details on X, highlighting the committee’s focus on election integrity and AI-driven threats such as deepfakes, algorithmic bias, and information manipulation.
Expert Witnesses
According to the release, the witness list includes several leading voices on AI ethics, policy, and governance:
- Diane (“Di”) Cooke, Non-Resident AI Fellow in the International Security Program at the Center for Strategic and International Studies (CSIS). Cooke researches AI risks in defense and national security, including generative AI, deepfakes, and human-machine teaming. She has advised both U.S. and U.K. AI policy initiatives and holds degrees from King’s College London (MA in Intelligence and International Security) and the University of St. Andrews (BA).
- David Inserra, Fellow for Free Expression and Technology at the Cato Institute. Inserra specializes in free-speech and tech policy, previously working four years on Meta’s content-policy teams. He holds a Master of Public Policy from George Mason University and a BA from the College of William & Mary.
- Connor Leahy, Founder and CEO of Conjecture, an AI alignment firm established in 2022 to ensure AI systems align with human values. Leahy co-founded EleutherAI and was formerly a machine-learning engineer at Aleph Alpha GmbH.
- Nick Dranias, author of the forthcoming Why Excellence Matters: Building Better AI Through Better Ethics. Dranias serves as General Counsel for Honduras Próspera Inc. and formerly worked as Senior Litigation Counsel at the Arizona Attorney General’s Office and with the Goldwater Institute. He holds a Juris Doctor from Loyola University Chicago and a BA in Economics and Philosophy from Boston University.
- Dr. Robert Epstein, Senior Research Psychologist at the American Institute for Behavioral Research and Technology (AIBRT). Epstein’s work examines AI’s influence on opinions and voting behavior, including the Search Engine Manipulation Effect (SEME), Search Suggestion Effect (SSE), and Answer-Bot Effect (ABE). He has testified before Congress on AI’s potential to shape electoral outcomes.
The committee will examine how emerging AI technologies could distort democratic discourse and election integrity — from synthetic media to algorithmic content curation. Kolodin’s hearing follows Arizona’s recent debates on regulating deepfake content and protecting electoral infrastructure in the digital age.
“It has already popped up in a number of different states where generative artificial technology has led to reproductions of the human voice, and likeness in video, with such convincing clarity that it is hard to distinguish between the person themselves and the deep fake version of them,” Kolodin told the House Municipal Oversight & Elections committee in January when it considered his House Bill 2394 to provide legal recourse against “digital impersonations,” also known as deepfakes. HB 2394 was ultimately passed by the legislature and signed into law in May.
Kolodin said the goal is to ensure “free, fair, secure, and honest elections,” echoing themes he outlined in earlier statements to AZ Free News. “The states cannot be complacent when it comes to the rapid development of AI,” Kolodin said in a statement. “The risk of insufficient oversight of AI is literally what dystopian nightmares are made of. Although it is reasonable to be excited about the prospects of AI to improve human life and society, it is equally critical to be vigilant about the ways it can be abused to erode our freedoms, including threatening democratic governance and our elections.”
The State of Arizona will publicly livestream the hearing, reflecting the state’s growing focus on AI’s impact on democratic institutions.
