The Senate broke out in heated debate Monday over a controversial rule within President Donald Trump’s One Big Beautiful Bill Act — one carrying enormous implications for the rapidly developing artificial intelligence sector.
The proposal would block any state governments from regulating the AI industry for years.
It has some appeal, because few think it’s a great idea for a patchwork of conflicting state laws to slow AI’s rocketing development.
But passing that rule without any federal regulation would make the AI industry a law-free zone, where Big Tech companies can essentially do whatever they want with an untested, sometimes exploitative new technology.
That’s why the Senate may yet throw out the idea.
And it’s why we still urgently need federal regulation on AI companies. With or without a moratorium on state regulation, we need some uniform federal standards to govern AI.
If the AI industry is going to grow sustainably and responsibly, we need legislation to provide guardrails and clear rules about how to protect the creators of content that AI tools use — publishers, authors, journalists, artists, musicians and creatives of all types.
Right now, those content creators are AI’s victims.
Big Tech and AI companies scrape vast amounts of content to build and operate their generative AI products, which turn content into GenAI outputs for users.
Sometimes they just reproduce content creators’ passages word for word — without credit or compensation.
AI companies admit these unfair and un-American tactics are fundamental to their businesses, but they refuse to pay because it’s cheaper to steal.
Even worse, this predatory behavior lets AI models act as information gatekeepers.
If Big Tech is left to its own devices, Americans will have less access to accurate information, and certainly no one to hold accountable for errors and mistakes.
Reporting on stories that Americans need to know will dwindle as the AI companies undermine the business models of publishers, opening the door to viewpoint suppression and creating opportunities for foreign propagandists.
How dire these problems will become is a matter of guesswork — because AI development is currently a black box.
Developers do not share information on whether or how they are obtaining consent for using publisher content. (News reports suggest that when they do share information about these methods, it is sometimes misleading.)
Publishers must hire experts to reverse-engineer how their content has been taken, a costly process that overburdens small publishers and can’t always identify all works that were used in training the models.
This lack of transparency hinders the enforcement of intellectual property rights and distorts regulatory decisions, business development and more.
Federal legislation could address these issues by requiring recordkeeping and full disclosure.
AI companies must let publishers know whether a generative AI model was trained on their work — and must also explain whether certain publications have been specifically excluded from AI models, so that the public can judge any bias.
Further, AI companies must disclose the sources they use to keep their models’ responses current.
Simple rules such as these will prompt commercial GenAI developers to enter agreements with publishers to use their content — agreements that will likely block AI companies and foreign actors from distorting the news that the public receives.
The benefits will be widespread. These rules would strengthen America’s position in the AI race by making its products more trustworthy and preserving the journalism that lies at its foundation.
Protecting intellectual property and homegrown content is what gives American AI companies an international competitive edge. Strong federal rules will also keep many small media businesses viable, protecting thousands of workers and their communities.
The White House blueprint for AI wisely recognizes that AI development must be responsible and aligned with American values, including respect for intellectual property and the rule of law.
Congress seems to understand that AI needs regulation. It’s time to set this growing but potentially dangerous industry on a solid foundation.
Danielle Coffey is president and CEO of the News/Media Alliance, which represents more than 2,200 publishers nationwide.