ChatGPT generates cancer treatment plans that are full of errors: Researchers at Brigham and Women’s Hospital found that treatment plans produced by OpenAI’s chatbot contained false information when it was asked to design cancer treatment plans.
Why would you ask it to do that in the first place??
To prove to all of the tech bros that ChatGPT isn’t an actual AI, perhaps. At least that’s the impression I get from what the article says.