Experts Warn Sharing Medical Information With ChatGPT Health Is Dangerous

ChatGPT Health encourages users to upload their medical records, which cybersecurity experts say is risky.

Think about your most personal information — your health diagnoses, medications you take, medical test results. And now think about uploading some of that sensitive information online.

That’s exactly what ChatGPT Health, a new service from OpenAI, the company behind ChatGPT, wants you to do.

According to OpenAI, medical questions are among the most searched topics on the AI platform, which is why the company decided to launch the health and wellness tool.

ChatGPT Health encourages users to upload health data such as medical records and test results, and connect smart devices and fitness apps to the platform so users can have all of their health data in one place. Users also purportedly get personalized health recommendations and advice (not diagnoses or treatments, an OpenAI press release stresses) from the AI service.

Naturally, cybersecurity experts have concerns about the mass upload of health records and personal wellness data. Here’s what to know:

ChatGPT Health and other AI health providers won’t have to follow the HIPAA privacy regulations that medical providers do.

“The thing that struck me the most, and the thing that concerns me the most, is the fact that OpenAI and ChatGPT, they’re not a bona fide health care provider,” said Robert D’Ovidio, an associate professor of criminology and justice studies at Drexel University in Philadelphia.

“So, what does that mean to you and I? That means that they’re not covered under HIPAA,” D’Ovidio said, referring to the federal law that protects your health data and requires health providers to follow strict privacy and security rules. “The same protections that we get when we go to our doctor’s offices, when we go to an urgent care, when we go to the hospital, they’re not there in terms of when we’re dealing with ChatGPT,” he added.

An OpenAI spokesperson told HuffPost that the company takes privacy seriously, especially when it concerns health information, and that ChatGPT Health users can delete their data at any time.

ChatGPT, like your doctor, may know that you’re struggling with depression or have a heart disease diagnosis. However, unlike your doctor, the online tool would be under no federal obligation to secure that information, said Nathan Evans, the faculty director of the cybersecurity program and co-chair of computer science at the University of Denver.

“So, that’s sort of my number one biggest question mark there,” Evans said.

“And even if they say they’re going to follow the strictest security and privacy standards, if there’s not something that’s enforcing that on them [like the federal government], then it’s possible that they could change their tune in the future.”

Collecting health information may make ChatGPT data a ‘larger target’ in terms of cybersecurity attacks.

One good thing? OpenAI states that ChatGPT Health uses data isolation and data segmentation, so your health-related data and medical uploads aren’t mixed in with what you’re searching and doing on the non-health side of ChatGPT, according to D’Ovidio.

But even with promises of data segmentation and isolation, “You’re still at risk when it comes to data breaches,” D’Ovidio added.

An OpenAI spokesperson said because of the sensitive nature of the data uploaded to ChatGPT Health, the tool uses “purpose-built encryption and isolation” to protect and compartmentalize health conversations and information.

But, “one has to question whether them offering this type of service puts a larger target on their back, knowing how useful such information is, especially when it comes to the sensitivity of such data,” D’Ovidio said. “To me, as a criminologist who studies this, this says, ‘Well, yeah, they’re painting a much larger target on their back by doing this.’”

For ChatGPT Health, OpenAI is partnering with a company called b.well, an “AI-powered digital health platform,” according to its website.

“That certainly adds another complexity to the equation,” D’Ovidio said, because users’ information is with ChatGPT and OpenAI, but people “also have to be concerned about what’s going on at b.well because b.well is now part of the equation.”

This is another potential target in regard to data breaches and privacy issues, he said.

OpenAI isn’t the only AI company entering health care. Anthropic announced Claude for Healthcare just days after the ChatGPT Health announcement. This should be a “call to action” for our elected officials in Washington, according to D’Ovidio.

“Maybe it’s time to rethink HIPAA in the sense of where the boundaries are, who is defined as a health care provider,” he added. “We don’t want to be in a situation where we have every state adopting its own regulation because that just creates chaos.”


Sharing your health information with any company that isn’t your doctor’s office puts your sensitive health data at risk.

Early last year, the online genetic testing company 23andMe filed for bankruptcy, which called into question the fate of the health and ancestry data it held from millions of users.

While this isn’t a current concern for OpenAI, there is no way to predict what future company changes could mean for your sensitive health data, experts say.

“We don’t anticipate that OpenAI is going to run out of money or go out of business, but they are not a profitable company. They’re hemorrhaging money to try to build bigger,” said Evans.

With most of the data ChatGPT collects, the goal is to use it to train bigger and better models, according to Evans. OpenAI and Anthropic have both said users’ health data won’t be used for training.

“There haven’t been huge breaches, but that doesn’t mean that there’s not a possibility that it’s going to happen in the future,” Evans said, adding that stolen data and internal mishandling are data security fears, too.

“Once you give away your data, once you share your data, it’s gone. That is a one-time decision. It is always out there until those files are definitively erased,” which is a difficult task, said Cori Faklaris, an assistant professor in the College of Computing and Informatics at the University of North Carolina at Charlotte.

“Not everybody feels trusting of these companies, and probably for some good reasons, either they’ve been burned in the past with other companies or they’re just not persuaded by their public statements and by the things that are on their website,” Faklaris said.

“It’s not necessarily clear what OpenAI’s goals are for your information,” Evans said. “They make it sound like they’re doing this health thing to improve the world and improve their everyday users, but at the end of the day, it’s probably a monetary play.”

ChatGPT Health isn’t launching in Europe, which one cybersecurity expert said is telling.

ChatGPT Health is currently not being released in the European Union, which is potentially revealing, according to Evans.

The EU has “one of the strongest privacy and user security protections out there today,” Evans added, known as the General Data Protection Regulation, or GDPR.

“It’s kind of telling that you can’t use it if you’re in that place,” he added. “Maybe there are some privacy and security issues to be taken into account because [ChatGPT doesn’t] necessarily feel like they can comply with those very stringent rules.”

There isn’t an equivalent set of privacy and security laws in the U.S., which could make it easier for something like ChatGPT Health to launch here.

If you are going to use AI for personalized health support, keep a few things in mind.

The decision to upload your health records, medical test results, and smartwatch fitness and health data to ChatGPT Health is a big one, and you should treat it as such.

“First of all … never do it just reflexively. Don’t just press the button — try to stop and think, ‘Am I getting more benefits from the insights I’m going to get versus losing track of this data to this company?’” Faklaris said.

Also, remember that AI makes things up (a phenomenon known as hallucination), Faklaris said. So it could give you wrong information, which, if it’s health-related, could be seriously harmful.

Don’t let an AI health service take the place of your actual medical provider, Faklaris added. In its press release, OpenAI also states the service should support, not replace, care from a doctor.

D’Ovidio said that if you do decide to use this tool, you should make sure you have “good security hygiene” when it comes to your ChatGPT account. This means using a unique password that you change regularly and turning on two-factor authentication.

You also shouldn’t share your health data with multiple AI companies, according to Evans. “You’re just exposing yourself even more, right, if you start using multiple different places for this information,” he added.


