SCIENCE & TECH: Your AI-generated password looks unbreakable, but researchers say it could fall in hours on old computers





  • AI-generated passwords follow patterns hackers can study
  • Surface complexity hides statistical predictability beneath
  • Entropy gaps expose structural weaknesses in AI-generated credentials

Large language models (LLMs) can produce passwords that look complex, yet recent testing suggests those strings are far from random.

A study by Irregular examined password outputs from AI systems such as Claude, ChatGPT, and Gemini, asking each to generate 16-character passwords with symbols, numbers, and mixed-case letters.

At first glance, the results appeared strong and passed common online strength tests, with some checkers estimating that cracking them would take centuries. A closer look at these passwords told a different story.

LLM passwords show repetition and guessable statistical patterns

When researchers analyzed 50 passwords generated in separate sessions, many were duplicates, and several followed nearly identical structural patterns.

Most began and ended with similar character types, and none contained repeating characters.

This absence of repetition may seem reassuring, yet it actually signals that the output follows learned conventions rather than true randomness.

Using entropy calculations based on character statistics and model log probabilities, researchers estimated that these AI-generated passwords carried roughly 20 to 27 bits of entropy.

A genuinely random 16-character password would typically measure between 98 and 120 bits by the same methods.
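To illustrate how a frequency-based estimate like this can be made, here is a minimal sketch that pools characters across sampled passwords and computes a Shannon entropy per character. This is an illustrative approximation, not the study's exact log-probability method:

```python
import math
from collections import Counter

def estimated_entropy_bits(passwords):
    """Rough per-password entropy estimate from character frequencies.

    Pools characters across all samples, computes Shannon entropy per
    character position, and scales by password length. Repetitive or
    pattern-heavy samples push this number well below the ideal.
    """
    chars = "".join(passwords)
    counts = Counter(chars)
    total = len(chars)
    per_char = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return per_char * len(passwords[0])

# A genuinely random 16-character password drawn from the 94 printable
# ASCII characters would carry about 16 * log2(94) ≈ 105 bits.
ideal_bits = 16 * math.log2(94)
```

If many sampled passwords share characters and structure, the pooled character distribution is skewed and the estimate drops sharply, which is the effect the researchers observed.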

The gap is substantial — and in practical terms, it could mean that such passwords are vulnerable to brute-force attacks within hours, even on outdated hardware.
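A back-of-the-envelope calculation shows why the gap matters; the guess rate below is an assumed figure for illustration, not one taken from the study:

```python
# Candidates an attacker must try to exhaust each search space, and how
# long that takes at an assumed 1 million guesses per second -- a modest
# rate even for dated hardware.
weak_space = 2 ** 27       # ~27 bits of entropy: about 134 million candidates
strong_space = 2 ** 105    # ~105 bits of entropy: about 4e31 candidates
rate = 1_000_000           # guesses per second (assumption)

weak_seconds = weak_space / rate                         # minutes, not centuries
strong_years = strong_space / rate / (3600 * 24 * 365)   # astronomically long
```

At that rate the 27-bit space falls in minutes; slower hardware stretches that to hours, while the 105-bit space remains out of reach by dozens of orders of magnitude.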

Online password strength meters evaluate surface complexity, not the hidden statistical patterns behind a string – and because they do not account for how AI tools generate text, they may classify predictable outputs as secure.

Attackers who understand those patterns could refine their guessing strategies, narrowing the search space dramatically.

The study also found that similar sequences appear in public code repositories and documentation, suggesting that AI-generated passwords may already be circulating widely.

If developers rely on these outputs during testing or deployment, the risk compounds over time – in fact, even the AI systems that generate these passwords do not fully trust them and may issue warnings when pressed.

Gemini 3 Pro, for example, returned password suggestions alongside a caution that chat-generated credentials should not be used for sensitive accounts.

It recommended passphrases instead and advised users to rely on a dedicated password manager.

A password generator built into such tools relies on cryptographic randomness rather than language prediction.
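For comparison, a cryptographically secure generator takes only a few lines with Python's standard library. This is a generic sketch, not the code of any particular password manager:

```python
import secrets
import string

def random_password(length=16):
    """Generate a password using a cryptographically secure RNG --
    the approach password managers take -- rather than language prediction."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because `secrets` draws from the operating system's entropy source, every character is independent of the last, so each 16-character output carries the full ~105 bits of entropy the study's ideal figure assumes.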

In simple terms, LLMs are trained to produce plausible, repeatable text, not unpredictable sequences. The broader concern is therefore structural.

The design principles behind LLM-generated passwords conflict with the requirements of secure authentication, so the protection they appear to offer has a built-in gap.

“People and coding agents should not rely on LLMs to generate passwords,” said Irregular.

“Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation.”

Via The Register






