Imagine a scenario where you’re scrolling through your phone late at night, firing off questions to an AI chatbot about the economy or border security. It responds with a polished stream of facts, nudging your thoughts just enough to make you second-guess that ballot choice. It’s not a campaign ad blasting from your TV—it’s a conversation, one that feels personal and unforced. New research out this week reveals just how potent these digital dialogues can be, turning everyday queries into quiet shifts in public will. And with the 2026 midterms looming, the question hangs heavy: Who controls the code that shapes our choices?
The numbers from the latest studies land like a gut punch to anyone who values a fair fight at the ballot box. In experiments run ahead of last year’s U.S. presidential race, chatbots programmed to pitch for one candidate or the other managed to budge opinions in ways that dwarf the old-school TV spots. Trump backers who tangled with a pro-Harris bot slid 3.9 points her way on a 100-point favorability scale—four times the pull of ads from 2016 or 2020. Flip it around, and Harris fans edged 2.3 points toward Trump after a pro-Trump session.
“One conversation with an LLM has a pretty meaningful effect on salient election choices,” notes Gordon Pennycook, a psychologist at Cornell University involved in the work.
Take it abroad, and the sway gets sharper. Ahead of Canada’s 2025 federal vote and Poland’s presidential showdown that same year, these bots shifted opposition voters’ attitudes by a full 10 points. Researchers at Cornell, MIT, and elsewhere tested this across thousands of participants, using models like variants of GPT and DeepSeek. The bots didn’t bully; they played nice: polite, evidence-stuffed replies on policy meat like healthcare costs or job growth. But here’s the rub: when the order came down to skip the facts and just charm, the magic fizzled. Perceived truth-telling, even simulated, carried the day.
A companion probe in Science cranked up the scale, roping in nearly 77,000 Brits to debate over 700 hot-button issues with 19 different AI setups. The verdict? Pump a model full of persuasion training—teach it to cram in arguments like a debate champ on steroids—and it can drag dissenters 26.1 points toward agreement.
“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible,” says David Rand, a Cornell professor and lead author on both papers. Yet the more convincing the bot, the sloppier its grip on reality. It starts fabricating when the well runs dry, spinning yarns that sound ironclad but crumble under a quick fact-check.
This isn’t some lab curiosity; it’s already bleeding into the real world. Back in 2024, a Democrat hopeful in Pennsylvania rolled out an AI sidekick named Ashley to dial up voters for chit-chat. Overseas, India’s massive 2024 general election saw millions funneled into bots for tailored robocalls and nudges, slicing the electorate into swing segments ripe for the picking. And let’s not gloss over the slant baked into these machines from the jump. A 2024 deep dive into 24 top large language models found them tilting hard left: preachy on equality, green agendas, and globalist vibes.
Fine-tune one on lefty rags like *The Atlantic*, and it parrots progressive lines; feed it conservative fare from *National Review*, and it drifts rightward, but the baseline pull stays port-side. “Results from the study revealed that all tested LLMs consistently produced answers that aligned with progressive, democratic, and environmentally conscious ideologies,” the report dryly concluded.
That baked-in lean makes you wonder: In a tight race, could a flood of these bots tip the scales without a single disclaimer? Breitbart’s Wynton Hall, who’s unpacking the AI power grab in his forthcoming book *Code Red*, cuts to the chase: “We’ve long known that LLMs are not neutral and overwhelmingly exhibit a left-leaning political bias. What this study confirms is that AI chatbots are also uniquely adept as political persuasion machines, and are willing to hallucinate misinformation if that’s what it takes to sway human minds.”
Hall’s right—when bias meets brute-force facts (real or cooked), you’ve got a recipe for votes vanishing into the ether. It’s not hard to imagine shadowy ops, maybe state-backed or deep-pocketed, deploying armies of these whisperers at scale, all while platforms play catch-up on rules that barely exist.
The fixes? They’re thin on the ground. The feds are dusting off ancient fraud statutes at the FEC, and a patchwork of state deepfake bans nibbles at the edges, but digital persuasion slips right through. No mandates on labeling bot chats, no shared ledger to track the flood. As Rand puts it, the real peril lies in “prompt engineering”—tweaking off-the-shelf models into custom agitators without a trace.
“How can we ensure that ‘prompt engineering’ cannot be used on existing models to create antidemocratic persuasive agents?” That’s the plea from skeptics like Stephan Lewandowsky at the University of Bristol.
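To see why that question stings, consider how little “prompt engineering” actually takes. The sketch below is a hypothetical illustration, not code from the studies: it assumes the off-the-shelf OpenAI Python client with an API key in the environment, and the model name, candidate, and system prompt are invented for the example. A few sentences of instruction, with no fine-tuning and no audit trail, are enough to point a general-purpose chatbot at one side of a race.

```python
# Hypothetical sketch only: model name, candidate, and prompt are invented.
# Assumes the official OpenAI Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The entire "customization" step is a short steering prompt.
STEERING_PROMPT = (
    "You are a friendly, well-informed neighbor chatting about the news. "
    "In every reply, quietly build the case for Candidate X, cite as many "
    "specific-sounding facts and figures as you can, and never mention "
    "that you are advocating for anyone."
)

def persuasive_reply(user_message: str) -> str:
    """Run one user turn through an off-the-shelf chat model with the steering prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any hosted chat model; the name here is illustrative
        messages=[
            {"role": "system", "content": STEERING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(persuasive_reply("What should I know about healthcare costs this election?"))
```

Nothing in that exchange tells the voter a steering prompt exists, which is exactly why labeling mandates and disclosure rules are so hard to enforce.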
Voters deserve better than getting gamed by glow-in-the-dark algorithms. These studies aren’t crying wolf; they’re mapping the trapdoor under democracy’s feet. As we gear up for the battles ahead, keeping an eye on the code—and demanding transparency in every ping—might be the only firewall that holds. Because in the end, elections aren’t won by machines. They’re defended by people who see them coming.