SCIENCE & TECH: WeTransfer issues flurry of promises that it’s not using your data to train AI models after its new terms of service aroused suspicion




  • WeTransfer users were outraged when it seemed an updated terms of service implied their data would be used to train AI models.
  • The company moved fast to assure users it does not use uploaded content for AI training.
  • WeTransfer rewrote the clause in clearer language.

File-sharing platform WeTransfer spent a frantic day reassuring users that it has no intention of using any uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used for making or improving machine learning tools.

The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data “for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy.”

That part about machine learning and the general broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with your data, without any specific safeguards or clarifying qualifiers to alleviate suspicions.

Perhaps understandably, a lot of WeTransfer users, who include many creative professionals, were upset at what this seemed to imply. Many started posting their plans to switch away from WeTransfer to other services in the same vein. Others began warning that people should encrypt files or switch to old-school physical delivery methods.

WeTransfer noted the growing furor around the language and rushed to put out the fire. The company rewrote the section of the ToS and shared a blog post explaining the confusion, promising repeatedly that no one’s data would be used without their permission, especially for AI models.

“From your feedback, we understood that it may have been unclear that you retain ownership and control of your content. We’ve since updated the terms further to make them easier to understand,” WeTransfer wrote in the blog. “We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”

While still granting a standard license for improving WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform.

Clarified privacy

If this feels a little like deja vu, that’s because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change to the company’s fine print implied that Dropbox was taking content uploaded by users in order to train AI models. Public outcry led to Dropbox apologizing for the confusion and fixing the offending boilerplate.

The fact that it happened again in such a similar fashion is interesting not because of the awkward legal language used by software companies, but because it implies a knee-jerk distrust in these companies to protect your information. Assuming the worst is the default approach when there’s uncertainty, and the companies have to make an extra effort to ease those tensions.

Creative professionals are sensitive to even the appearance of data misuse. In an era where tools like DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. Given the lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions of corporate data handling, the kind of reassurance WeTransfer offered is probably something tech companies will want to have in place early on, lest they face the wrath of their customers.
