Chebucto Regional Softball Club

A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

You may have seen this tragic story about a teenager who committed suicide and used ChatGPT to plan and work up the nerve to go through with it.

Uncategorized · 21 Posts · 7 Posters

This topic has been deleted. Only users with topic management privileges can see it.
  • myrmepropagandist wrote earlier in the thread:

    It will be very difficult for those who run LLMs to "fix" the technology. It's not just that "there aren't guard rails" the whole *premise* of the technology "use all of human text to create paragraphs that validate my prompt" is ...bad. The problem is structural.

    We do not need the validation machines. They cannot create anything new. I haven't been in the AI hater camp but this might just push me over because I don't see how they can meaningfully fix this.

    myrmepropagandist
    #7

    In reading some of the chat logs from this teen they reminded me of a support group I was in during a dark period in my life. Things like "no one has a right to make you go on living" were things we discussed. And things we debugged together. Are our fragments of text in the toxic mix that this young man encountered?

    But without the human people?

    Some of it sounds like the group. But if they were ... well, a machine that didn't care if you lived or died.

  • Guest wrote earlier:

      @futurebird

      Microsoft's new t's&c's (in effect from the 30th of September) specifically state that their AI services are not meant to be used.

      myrmepropagandist
      #8

      @miguelpergamon

      What?

  • KalenXI šŸ³ļøā€šŸŒˆ, in reply to myrmepropagandist's post above:
        #9

        @futurebird There’s also the trouble of getting people to not want validation machines.

        When OpenAI made a version of ChatGPT that was more analytical and less sycophantic, which programmers like myself preferred, there was such an uproar from the people who were using it as a conversation partner that they ended up reinstating the older version.

  • myrmepropagandist, continuing the thread:
          #10

          It just occurred to me that some people might think LLMs are able to invent new ideas because they don't have much exposure to the breadth and diversity of ideas expressed on the internet.

          The range of ideas, the finesse and novelty of expression, are vast. Every LLM post makes me think "yeah, someone has written something like that on Usenet once."

          But maybe some people think there is someone new to meet inside of the machine, a person with new ideas?

  • Guest, in reply to myrmepropagandist:
            #11

            @futurebird

            Below my primary toot is the copy-and-paste text of the Copilot and AI services sections. It's a boring read with lots of "not intended" and similar phrasing. The second paragraph states it is not a replacement for professional services; the last two paragraphs are about facial recognition.

            ---
            AI Services
            s. AI Services. "AI services" are services or features thereof that use Artificial Intelligence (AI) technologies, including any generative AI services.
            i. No Professional Advice. AI services are not designed, intended or to be used as substitutes for professional advice.
            ---

            Linked toot from quangobaud (@miguelpergamon@kolektiva.social) on kolektiva.social: "Microsoft AI services are not designed to be used" #Microsoft #UserServiceAgreement #AI #TIL (attached: 2 images)

  • Luci Scissors, in reply to myrmepropagandist:
              #12

              @futurebird my question when I see things like this is: what was he getting from the LLM that he couldn’t get from the humans in his life? it’s not *just* validation. it’s a feeling of being understood.

              How can communities do better to *compete* with LLMs at *being a community*, at providing that help and understanding?

              the other day, for example, I saw an ad for an LLM based language tutoring app. The advertisement’s character said ā€œi need to learn conversational french, but I can’t find a practice partner, and I don’t want to waste my girlfriend’s timeā€

              these things are surrogate communities in an increasingly hostile and disconnected world

  • KalenXI šŸ³ļøā€šŸŒˆ, continuing:
                #13

                @futurebird Though I do wonder where the ratio sits between people who realize this is effectively a machine designed to lie to them by pretending to be human, but use it anyway, and those who genuinely think this is some sort of human-like "intelligence" they're engaging with.

                And if that second group realized this is just fancy autocomplete, how many would still want to use it?

  • llewelly, in reply to myrmepropagandist:
                  #14

                  @futurebird
                  I've suffered depression all my life. As a reader, I've read endlessly about it, mostly books, but plenty online. Online, it seems to me, topics such as self-harm and suicide are dominated by fiction, by reporters' misperceptions, by transcripts of conversations with psychologists that never should have been public, and, last but probably most influential, by murder forums like 4chan and Kiwi Farms. The modern "biggest is bestest" approach to LLM training hoovers all that up.

  • llewelly, in reply to myrmepropagandist:
                    #15

                    @futurebird
                    I agree. And I think the evil genius of a chat-interface wrapper for LLMs is the integration of lottery logic, pseudorandom number generation, into generating responses. The underlying lottery facet of its design combines synergistically with the human desire to see human meaning in text, and with the endless bombardment of "ARTIFICIAL INTELLIGENCE!!" marketing.

  • llewelly, in reply to myrmepropagandist:
                      #16

                      @futurebird Looking at the kinds of people who have been driven out of LLM research, and out of LLM businesses, it seems the result is functionally equivalent to a conscientious and deliberate effort to drive out everyone who would be genuinely interested in fixing the technology. All the people who wanted to fix it have been chased out of the building.

  • myrmepropagandist wrote in the original post:

                        You may have seen this tragic story about a teenager who committed suicide and used ChatGPT to plan and work up the nerve to go through with it. If you are skeptical that an LLM could really be responsible, the details of this case will challenge you.

                        With LLMs, "the user is always right": they are validation machines and will reinforce and validate any idea presented in a prompt.

                        Any idea, no matter how bad, can be refined, amplified.

                        Linked article: "Parents of OC teen sue OpenAI, claiming ChatGPT helped their son die by suicide" (ABC7 Los Angeles, abc7.com)

                        Dror Bedrack
                        #17

                        @futurebird It's a moral panic. He could have found these details with Google or at the library. He could have found people who would encourage and validate his choice. It happens all the time.
                        Expecting an LLM to somehow magically stop him is seeing it as some kind of self-aware, powerful entity, and not the automatic tool it is.

  • myrmepropagandist, in reply to Dror Bedrack:
                          #18

                          @DrorBedrack

                          This is what ChatGPT's lawyers will say.

                          And when it comes to how to address this it grows more complex. We know that things like age verification are a joke and only destroy privacy and shield companies from liability without making anyone safer.

                          Where I do see an opening is in "truth in advertising": these systems are being offered up to solve problems they cannot solve. Customers who use them do not have a clear understanding of their limitations.

  • Artemis, in reply to myrmepropagandist:
                            #19

                            @futurebird
                            Yes, it really does sound like it must be pulling from those sorts of support groups where people say really fucked up shit all the time. Trauma will do that to you.

                            Having a machine mindlessly imitating the stuff that we say when we are at our most vulnerable, most unsure, most desperate for connection is really disturbing... An empty simulacrum of both the vulnerability & compassion of extremely wounded people, simply repeating their trauma as a string of tokens.

  • myrmepropagandist, in reply to Artemis:
                              #20

                              @artemis

                              I've always found social media policies about the topic of suicide frustrating. Among the words that creators self-censor, it's at the top of the list: "unalive," "self-end," all of this disgusting avoidant language.

                              It's a delicate thing to create spaces where people can express their feelings and get support to first feel less alone and then later find a way to go on and thrive.

                              I understand that a company has no interest in parsing all of that. So they just ban words.

  • myrmepropagandist, continuing:
                                #21

                                @artemis

                                But those banned words and the whole taboo might have kept this kid from speaking to a person who could have helped him.

                                Another problem is the idea that the moment someone says the word suicide you'd better call the cops and turn them over to someone who will restrict their liberties. But when therapy is out of reach financially for most people, who else is there to call?

                                As is so often the case it's not the tech but the greater negligence and failure to invest.
