
OpenAI & Thrive-backed “hyper-personalised” AI health coach is in the works. Should you be excited or worried?

Tech | 4 min read
In a recent op-ed published in TIME, OpenAI CEO Sam Altman and Thrive Global’s Arianna Huffington present themselves as champions of affordable healthcare. Soft-launching their new company, Thrive AI Health, through the article, the duo speak of using artificial intelligence to help people take charge of their well-being and to make expert health advice accessible to all. Most notably, they suggest that AI could help improve “both our health spans and our lifespans”.

Thrive AI Health will primarily focus on promoting healthy behaviours, like the five foundational ones Altman himself swears by: getting enough sleep, eating well, exercising, spending time in nature and meditating. Furthermore, this AI health coach wouldn’t just tell you to eat better or exercise more; it promises to take your preferences, your schedule and your health data into account. Whoop’s OpenAI-powered coach already exists, while Google and Apple are also planning to roll out similar tools soon.

Related: AI can identify clinically anxious youth based on brain structure: Study

According to the TIME article, the AI will be trained on the “best peer-reviewed science”. And here’s where it starts to get tricky. When it comes to human health, both peer-reviewed and still-under-review literature is riddled with inaccurate and confusing interpretations. The field is ever-evolving, and its studies have long been subject to cherry-picked data and misinterpreted findings.

So, will AI be able to navigate this challenge, especially considering recent episodes of hallucination in even the most advanced large language models? AI definitely holds promise to make affordable healthcare a reality at warp speed, but how can we make it safe if the training data itself is ambiguous?

Dealing with AI hallucinations

It’ll be a hot minute before we purge our minds of the memory of a Google AI search result stating that it was perfectly normal for 5-10 cockroaches to crawl into your penis hole and that this was how they got the name “cock” roach. So forgive us if we don’t jump with joy at the thought of relying on LLM-based technology for any sort of health advice.

We asked Dr Marcus Ranney (longevity physician, founder of Human Edge and formerly General Manager at Thrive Global) what he thought about the limitations of using LLM-based AI for healthcare.

Human Edge, which provides wellness and fitness services, also employs generative AI tools, and Dr Ranney acknowledged that they have had to deal with their share of AI hallucinations and inaccuracies. To tackle the problem, Human Edge takes a “man-in-the-middle” approach during training: an actual healthcare professional intercepts the conversation between the AI and the user, and selects the right answer from the options suggested by the tool.
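For the technically curious, what Dr Ranney describes amounts to a human-in-the-loop review gate. Here is a minimal sketch of what such a step could look like; every name in it (generate_candidates, clinician_review and so on) is our hypothetical illustration, not Human Edge’s actual system.

```python
# Illustrative human-in-the-loop ("man-in-the-middle") review step.
# All names below are hypothetical; this is not Human Edge's code.
from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    source: str  # e.g. the citation the model attached to its draft


def generate_candidates(question: str, n: int = 3) -> list[Candidate]:
    # Placeholder for an LLM call; real code would query a model n times.
    return [
        Candidate(
            text=f"Draft answer #{i + 1} to: {question}",
            source="placeholder citation",
        )
        for i in range(n)
    ]


def clinician_review(candidates: list[Candidate]) -> Candidate:
    # A healthcare professional sees the drafts before the user does and
    # picks one; this selection step is what filters out hallucinations.
    for i, c in enumerate(candidates):
        print(f"[{i}] {c.text} (source: {c.source})")
    choice = int(input("Select an answer to send, or -1 to escalate: "))
    if choice < 0:
        raise RuntimeError("No safe draft; escalate to a live consultation")
    return candidates[choice]


def answer(question: str) -> str:
    drafts = generate_candidates(question)
    approved = clinician_review(drafts)
    return approved.text


if __name__ == "__main__":
    print(answer("How much sleep do I actually need?"))
```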

The other problem is that Thrive AI Health’s responses will obviously depend on the user’s input and what medical data they choose to share about themselves. While Altman and Huffington are thankfully playing it safe and only intend for this app to start off with generalised (and mostly harmless) tips, we do wonder if it could land some people in a soup simply because they forgot to share something or thought it inconsequential.

Impact on doctors

A common misconception is that such tools could replace healthcare providers. One could counter that AI is meant to support the healthcare system, not uproot it. But in a country like India, where we tend to rely on the pharmacist’s advice instead of a doctor’s appointment to save time and money, over-reliance on AI healthcare tools could be a real problem.

However, Dr Ranney stresses that this could serve as the first tier of health advice. It would certainly also help reduce the burden on overworked doctors, who are already busy battling the surge of misinformation on the internet. Cue the tiff between Indian actress Samantha Ruth Prabhu and The Liver Doc.

Read: Samantha Ruth Prabhu faces heavy backlash after recommending hydrogen peroxide inhalation to cure viral fever

When someone as influential as the actress shares the “benefits” of inhaling hydrogen peroxide with her millions of followers, it is bound to make waves, no matter how dubious the source of the information may be. Addressing the barrage of confounding medical advice bandied about on social media, and its many dangers, the Human Edge founder insists that a reliable AI health coach could actually be a much better source of information.

Could an AI health coach lead to data breaches?

Depending on how much you engage with it, this health coach will likely know every tiny detail about you — from how many minutes you spend on your porcelain throne every morning to how many times you stayed up past your bedtime that week. And let’s face it, not all of us are completely okay with this sacred information being made available to others.

Altman and Huffington do seem particular about ensuring “robust privacy and security safeguards”, but the technology will also rely on potentially billions of data points, all drawn from users. So the chances of a data breach, however minuscule, do exist. As Dr Ranney said, we’re going to need a lot of regulation to prevent something like that.

That being said, if Thrive’s AI health coach delivers on its promises, it’s going to be a huge deal that revolutionises healthcare worldwide. While Dr Ranney did share his viewpoints from an industry perspective, he says he’s very excited about Thrive AI Health — and frankly, so are we (even if it didn't seem like it).
