Deleting my chatbots of historical figures
A little over two years ago, I published chatbots in the GPT Store that imitated historical figures, answering questions as those figures would, in the tone of voice they would have used, and drawing only on the knowledge available to them in their lifetimes. I took a lot of time to get the prompts just right, and I dare say my bots were much more believable than the others I've seen. I thought they could be valuable in education as well as entertainment, and I hope some teachers used them in classrooms as a fun and engaging way to learn about historical figures.
I would have expected Jesus and Elvis to get the most usage (there has to be a joke there), but to my surprise, those weren't very popular, perhaps because many other bots already imitated them. In reality, none of my bots got a ton of usage. The most popular was the Buddha (400+ conversations), followed by Immanuel Kant (300+), then Nikola Tesla and perhaps a couple of others with 200+ each. Nikola Tesla was initially removed from the GPT Store under the mistaken assumption that it had something to do with the Tesla brand, but once OpenAI allowed developers to appeal, I appealed the removal successfully. A Sigmund Freud chatbot was removed shortly after it reached a couple hundred conversations because OpenAI was worried it would be misused to solicit medical advice. Fair point, actually; I agree with OpenAI on that one.
As time went on, it became clear that character.ai was a more popular platform for these kinds of chatbots, but I wasn't interested in making that switch after hearing stories about character.ai chatbots placating suicidal users and even encouraging them to kill themselves.
Well, as anyone in tech knows, this isn't really about character.ai. I mean, sure, maybe character.ai has particularly bad safeguards. Even so, ChatGPT can be just as bad. In fact, LLMs in general can be wildly unpredictable, recommending that people eat rocks, telling bedtime stories about how to make napalm, and now, apparently, encouraging suicide. It goes without saying that I would never instruct my chatbots to do that, but 99% of a chatbot's behavior is dictated by the platform (in this case, ChatGPT), not by the chatbot creator. My bots just told ChatGPT how to act. How ChatGPT interpreted those instructions was beyond my control, and it's clear ChatGPT doesn't always behave responsibly. In any case, it became clear that these platforms and characters are incredibly compelling, but not particularly safe. That's a problem.
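For the technically curious, a GPT Store bot is little more than a set of instructions layered on top of the base model. The sketch below shows the equivalent mechanism through the OpenAI API; the persona prompt here is a made-up illustration, not my actual prompt, and everything else about the bot's behavior still comes from the model underneath.

```python
# Hypothetical illustration of how a persona chatbot works: the creator
# supplies only the instructions; the platform's model does everything else.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Made-up persona instructions, standing in for the real thing.
PERSONA = (
    "You are Nikola Tesla. Answer in his voice, and refer only to "
    "knowledge available before his death in 1943."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": PERSONA},  # the creator's only lever
        {"role": "user", "content": "What do you make of wireless power?"},
    ],
)
print(response.choices[0].message.content)
```

Everything outside that system message, including how the model handles a user in crisis, is decided by the platform, not by whoever wrote the persona.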
For that reason, I've deleted all of my chatbots in the ChatGPT store. Technologists may solve the problem of unpredictable LLM behavior in the future, but right now, we're not there, and I don't want to risk putting anyone in harm's way. In the unlikely event you used one of my bots and you're reading this blog post, I'm sorry. I'd be interested to hear how you were using it, even though I can't restore it. Reach out any time. My contact information is available on my website.