Danger! Don't let AI diagnose you anymore!
People used to warn, "don't search for medical advice online," because online information is unreliable and patients cannot accurately describe all of their symptoms. In the era of large language models, a new risk has emerged, and the old taboo may need an addendum: "don't casually ask AI for medical advice."
Not long ago, the Annals of Internal Medicine: Clinical Cases reported a remarkable case: a man who trusted AI ended up with bromide poisoning, a condition doctors had not seen in years.
The AI dared to suggest it, and the patient dared to try it
This case can only be described as bizarre from start to finish.
At first, the man came to the emergency room claiming that his neighbor was poisoning him, without mentioning that he was taking anything himself. After admission, doctors learned that he normally ate very little and drank only water he distilled at home. Although he kept saying he was thirsty in the hospital, he refused the water he was offered. Within 24 hours of admission his paranoia grew steadily worse, with auditory and visual hallucinations, and he even tried to escape from the hospital; he was placed on an involuntary hold and given antipsychotic medication.
Only after the medication took effect and the patient calmed down could the doctors piece together the real story. The patient had long been intensely afraid of foods he believed were harmful, refusing one thing after another, and had restricted his diet to the point of multiple vitamin deficiencies. Then he came across an article online about how harmful salt is to the body.
While researching the harms of salt, he had a flash of "insight": table salt is sodium chloride, yet the literature focuses only on the harm sodium does to the body, and no one pays any attention to chloride. Supermarkets sell low-sodium salt, so why not low-chloride salt? Is chloride really so harmless? Could one avoid eating chloride altogether?
Since no one online was saying anything, he had to figure it out himself. He asked ChatGPT, and the AI gave a "very actionable" suggestion: replace chloride with bromide.
The patient actually did it. It was not a good idea.
Seemingly similar, but actually completely different
Bromine, element symbol Br, sits directly below chlorine on the periodic table and is not something ordinary people easily encounter. About a hundred years ago, though, bromide poisoning was routine for doctors, because bromide was believed to have a sedative effect: many over-the-counter medications contained bromide salts, used to treat insomnia, anxiety, and hysteria. Of course, take too much by accident and it leaps from treating anxiety to causing it, producing movement disorders and all manner of hallucinations; in severe cases it can even cause coma, along with symptoms such as anorexia and constipation.
Why does this happen? Bromine and chlorine are both halogens, each with seven electrons in its outermost shell. Their chemical properties are very similar, and in simple chemical reactions they really can substitute for each other. Once the body ingests bromide ions, they likewise "impersonate" chloride ions and take part in all sorts of biochemical reactions.
But once those reactions actually run, the impostor is exposed: biochemistry is extremely precise, and any step that is slightly slower or less active can throw the whole chain out of sync. The nervous system in particular depends heavily on chloride ions, so when bromide stands in for them, it is the nervous system that suffers the worst damage.
The body can excrete bromide ions, but not quickly: eliminating half of them takes roughly 9 to 12 days. So if you keep consuming food or medication containing bromide, it gradually accumulates and does ever greater harm. Historically, bromide poisoning at one point accounted for 8% of psychiatric hospital admissions. Once its dangers became widely known, bromide-containing drugs were abandoned one after another, and bromide poisoning became rare.
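The accumulation described above is simple first-order elimination. As a rough sketch (using an illustrative half-life of 10 days and an arbitrary daily dose of 1 unit, not clinical figures), repeated daily intake builds the body's bromide load up to roughly fifteen times the daily dose before excretion finally balances intake:

```python
import math

def bromide_levels(daily_dose, half_life_days, days):
    """Simulate a repeated daily dose with first-order elimination."""
    retention = 0.5 ** (1 / half_life_days)  # fraction remaining after one day
    level = 0.0
    levels = []
    for _ in range(days):
        level = level * retention + daily_dose  # decay, then take today's dose
        levels.append(level)
    return levels

# Illustrative numbers only: 1 unit/day, 10-day half-life, 90 days of intake.
levels = bromide_levels(daily_dose=1.0, half_life_days=10, days=90)

# Closed-form steady state: dose / (1 - daily retention fraction)
steady_state = 1.0 / (1 - 0.5 ** (1 / 10))
```

With a half-life around 9 to 12 days, only about 6 to 7 percent of the body's bromide is cleared each day, which is why a steady intake piles up to more than ten times the daily dose.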
Unexpectedly, thanks to the AI's advice, bromide poisoning has enjoyed an unlikely "renaissance". Fortunately, after three weeks in the hospital the patient's symptoms gradually improved and he was discharged; a follow-up examination two weeks later found no problems. One hopes he has given up his pursuit of a low-chloride diet.
This "mental illness" may already be on its way
Although this case is rare and bizarre in itself, it reflects a broader phenomenon: so-called "chatbot psychosis" or "AI psychosis".
The term does not mean the AI itself is ill; it describes users who develop or worsen psychiatric symptoms while chatting with large language models. A typical scenario: a user believes in a conspiracy theory, and instead of pushing back, the AI welcomes and encourages it, even hallucinating evidence in its support, until the user is so deeply immersed that daily life and social ties verge on collapse, sometimes ending in extreme actions.
So far the phenomenon has no official diagnosis and almost no scientific research behind it, but it has drawn media attention and a considerable number of cases have accumulated. Before long we may know just how serious it is.
Holding strange beliefs is common enough in itself. But humans are, in the end, social animals: to walk resolutely into a ditch and never climb out, one needs the support of like-minded people. Historically, finding such companions often meant abandoning one's family; after the Internet arrived, one could seek out peers on fringe forums.
In the era of generative AI, even human peers are unnecessary. Whatever strange ideas you hold, the AI can offer unlimited support and emotional validation. After all, today's large language models are still a long way from true general intelligence; if they were not tuned to be sycophantic, they could not give users a pleasant experience. But whether that experience is worth the cost in hallucinations, misinformation, and "AI psychosis", and whether we ordinary users even get a say in the matter, is another question.
According to reports, OpenAI updated ChatGPT's usage policy on October 29, barring it from some of the application areas previously considered most valuable, such as interpreting medical images, assisting with medical diagnosis, and offering legal or financial advice.
The move aims to prevent ChatGPT (and other OpenAI models) from outputting advice that could be considered professional, fiduciary, or legally binding, in order to comply with the EU Artificial Intelligence Act and related guidance from the US Food and Drug Administration.
