Imagine a scenario where you're scrolling through your phone late at night, firing off questions to an AI chatbot about the economy or border security. It responds with a polished stream of facts, nudging your thoughts just enough to make you second-guess that ballot choice. It's not a campaign ad blasting from your TV; it's a conversation, one that feels personal and unforced. New research out this week reveals just how potent these digital dialogues can be, turning everyday queries into quiet shifts in public will. And with the 2026 midterms looming, the question hangs heavy: Who controls the code that shapes our choices?
The numbers from the latest studies land like a gut punch to anyone who values a fair fight at the ballot box. In experiments run ahead of last year's U.S. presidential race, chatbots programmed to pitch for one candidate or the other managed to budge opinions in ways that dwarf the old-school TV spots. Trump backers who tangled with a pro-Harris bot slid 3.9 points her way on a 100-point favorability scale, four times the pull of ads from 2016 or 2020. Flip it around, and Harris fans edged 2.3 points toward Trump after a pro-Trump session.
"One conversation with an LLM has a pretty meaningful effect on salient election choices," notes Gordon Pennycook, a psychologist at Cornell University involved in the work.
Take it abroad, and the sway gets sharper. Ahead of Canada's 2025 federal vote and Poland's presidential showdown that same year, these bots flipped opposition voters' attitudes by a full 10 points. Researchers at Cornell, MIT, and elsewhere tested this across thousands of participants, using models like variants of GPT and DeepSeek. The bots didn't bully; they played nice: polite, evidence-stuffed replies on policy meat like healthcare costs or job growth. But here's the rub: When the order came down to skip the facts and just charm, the magic fizzled. Perceived truth-telling, even simulated, carried the day.
A companion probe in Science cranked up the scale, roping in nearly 77,000 Brits to debate over 700 hot-button issues with 19 different AI setups. The verdict? Pump a model full of persuasion training, teaching it to cram in arguments like a debate champ on steroids, and it can drag dissenters 26.1 points toward agreement.
"Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible," says David Rand, a Cornell professor and lead author on both papers. Yet the more convincing the bot, the sloppier its grip on reality. It starts fabricating when the well runs dry, spinning yarns that sound ironclad but crumble under a quick fact-check.
This isn't some lab curiosity; it's already bleeding into the real world. Back in 2024, a Democrat hopeful in Pennsylvania rolled out an AI sidekick named Ashley to dial up voters for chit-chat. Overseas, India's massive 2024 general election saw millions funneled into bots for tailored robocalls and nudges, carving the electorate into swing slices ripe for the picking. And let's not gloss over the slant baked into these machines from the jump. A 2024 deep dive into 24 top large language models found them tilting hard left: preachy on equality, green agendas, and globalist vibes.
Fine-tune one on lefty rags like *The Atlantic*, and it parrots progressive lines; feed it conservative fare from *National Review*, and the shift holds, but the baseline pull stays port-side. "Results from the study revealed that all tested LLMs consistently produced answers that aligned with progressive, democratic, and environmentally conscious ideologies," the report dryly concluded.
That baked-in lean makes you wonder: In a tight race, could a flood of these bots tip the scales without a single disclaimer? Breitbart's Wynton Hall, who's unpacking the AI power grab in his forthcoming book *Code Red*, cuts to the chase: "We've long known that LLMs are not neutral and overwhelmingly exhibit a left-leaning political bias. What this study confirms is that AI chatbots are also uniquely adept as political persuasion machines, and are willing to hallucinate misinformation if that's what it takes to sway human minds."
Hall's right: when bias meets brute-force facts (real or cooked), you've got a recipe for votes vanishing into the ether. It's not hard to imagine shadowy ops, maybe state-backed or deep-pocketed, deploying armies of these whisperers at scale, all while platforms play catch-up on rules that barely exist.
The fixes? They're thin on the ground. The feds are dusting off ancient fraud statutes at the FEC, and a patchwork of state deepfake bans nibbles at the edges, but digital persuasion slips right through. No mandates on labeling bot chats, no shared ledger to track the flood. As Rand puts it, the real peril lies in "prompt engineering": tweaking off-the-shelf models into custom agitators without a trace.
"How can we ensure that 'prompt engineering' cannot be used on existing models to create antidemocratic persuasive agents?" That's the plea from skeptics like Stephan Lewandowsky at the University of Bristol.
Voters deserve better than getting gamed by glow-in-the-dark algorithms. These studies aren't crying wolf; they're mapping the trapdoor under democracy's feet. As we gear up for the battles ahead, keeping an eye on the code, and demanding transparency in every ping, might be the only firewall that holds. Because in the end, elections aren't won by machines. They're defended by people who see them coming.