You may have seen this tragic story about a teenager who died by suicide and used ChatGPT to plan it and work up the nerve to go through with it.
-
I've always found social media policies on the topic of suicide frustrating. Among the words creators self-censor, it's at the top of the list: "unalive," "self-end," all of this disgusting, avoidant language.
It's a delicate thing to create spaces where people can express their feelings and get support: first to feel less alone, and then to find a way to go on and thrive.
I understand that a company has no interest in parsing all of that, so it just bans words.
But those banned words and the whole taboo might have kept this kid from speaking to a person who could have helped him.
Another problem is the idea that the moment someone says the word "suicide," you'd better call the cops and turn them over to someone who will restrict their liberties. But when therapy is financially out of reach for most people, who else is there to call?
As is so often the case, it's not the tech that's to blame but the greater negligence and failure to invest.