Chatbots have become increasingly popular in healthcare in recent years. One of the most well-known examples is ChatGPT, a large language model trained by OpenAI. Though it is a general-purpose tool rather than a medical one, patients are increasingly using it to learn more about their medical conditions and treatments.
Medtech chatbots like this promise to provide patients with a range of benefits, such as more detailed information than what they might find on a typical online search, and explanations of medical jargon in plain language that non-experts can understand. Additionally, these tools could help clinicians by reducing the burden of paperwork and providing a brainstorming tool that can help guard against mistakes.
Despite the potential benefits, there is debate about whether chatbots can be trusted with sensitive medical information. Some experts, such as Emily Bender, a linguistics professor at the University of Washington, argue that large language models have no legitimate place in medicine. Others believe they could supplement, though not replace, primary care.
One of the main concerns is that chatbots have not been validated for clinical use. While they may prove useful someday, open questions remain: whether this technology should be available to patients as well as to doctors and researchers, and how tightly it should be regulated.
Companies that make these large language models have not publicly identified the sources used for training, which makes the reliability of their output difficult to assess. Furthermore, these models can reproduce racism and other biases embedded in the data they are trained on. There are also worries that people may treat chatbot output as authoritative fact and make decisions based on it.
Despite these concerns, it seems unlikely that patients can be stopped from using these tools. As the technology continues to develop, it will be important to warn people that chatbots can be useful but are prone to mistakes, and that they should not rely solely on chatbot-provided information when making decisions about their health.
One potential application for chatbots in healthcare is as a supplement to primary care. For example, patients could use a chatbot to get more information about a specific condition before going to see their doctor. This could help patients feel more informed and prepared for their appointment, which could improve the quality of care they receive.
Another potential application for chatbots is in medical research. Researchers could use these tools to analyze large amounts of data and identify patterns that might be missed by humans. This could lead to new insights into disease and potential treatments.
In conclusion, chatbots like ChatGPT have the potential to provide patients with more information about their medical conditions and treatments, as well as to supplement primary care and aid medical research. However, there are concerns about their trustworthiness and the potential for biases in the data they're based on. As these technologies continue to develop, it will be important to educate patients about their limitations and potential benefits, and to carefully regulate their use in healthcare.