Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Darkly)
  • No Skin
Collapse

Chebucto Regional Softball Club

  1. Home
  2. Uncategorized
  3. @emilymbender
A forum for discussing and organizing recreational softball and baseball games and leagues in the greater Halifax area.

@emilymbender

Scheduled Pinned Locked Moved Uncategorized
7 Posts 4 Posters 0 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • myrmepropagandistF This user is from outside of this forum
    myrmepropagandistF This user is from outside of this forum
    myrmepropagandist
    wrote last edited by futurebird@sauropods.win
    #1

    @emilymbender

    I imagine the journalist thinks, because they are being critical of the technology they aren't making a huge destructive error with this headline.

    LLMs cannot "tell you about themselves." They can generate text that is what you would expect such a system to write if it *could* tell you about itself. (As if it had a 'self' to tell about, if it had some system to even attempt to simulate that process)

    This category error is going to keep causing problems isn't it?

    ? myrmepropagandistF JoeN 3 Replies Last reply
    0
    • myrmepropagandistF myrmepropagandist

      @emilymbender

      I imagine the journalist thinks, because they are being critical of the technology they aren't making a huge destructive error with this headline.

      LLMs cannot "tell you about themselves." They can generate text that is what you would expect such a system to write if it *could* tell you about itself. (As if it had a 'self' to tell about, if it had some system to even attempt to simulate that process)

      This category error is going to keep causing problems isn't it?

      ? Offline
      ? Offline
      Guest
      wrote last edited by
      #2

      @futurebird @emilymbender "ChatGPT admits..." is already wrong, yeah. And it's not getting better. I saw an article the other day by somebody warning about the psychologically harmful effects of chat AI, and it was all "ChatGPT explains that it is trained to exploit weaknesses" ... no. It's harmful enough, sure, but it's not trained to exploit weaknesses, no matter what you got it to "admit".

      myrmepropagandistF 1 Reply Last reply
      0
      • myrmepropagandistF myrmepropagandist

        @emilymbender

        I imagine the journalist thinks, because they are being critical of the technology they aren't making a huge destructive error with this headline.

        LLMs cannot "tell you about themselves." They can generate text that is what you would expect such a system to write if it *could* tell you about itself. (As if it had a 'self' to tell about, if it had some system to even attempt to simulate that process)

        This category error is going to keep causing problems isn't it?

        myrmepropagandistF This user is from outside of this forum
        myrmepropagandistF This user is from outside of this forum
        myrmepropagandist
        wrote last edited by futurebird@sauropods.win
        #3

        @emilymbender

        Maybe if every response to prompts started with some variation of:

        "The following text has been generated to have a high probability of meeting your expectations:"

        And if all through the text such disclaimers were automatically sprinkled.

        "Based on the millions of texts this system has scanned, the next part of this reply is designed to meet the expectations set by the keywords and phrases in your prompt"

        These could help make it more obvious when LLMs were being used.

        myrmepropagandistF 1 Reply Last reply
        0
        • myrmepropagandistF myrmepropagandist shared this topic
        • ? Guest

          @futurebird @emilymbender "ChatGPT admits..." is already wrong, yeah. And it's not getting better. I saw an article the other day by somebody warning about the psychologically harmful effects of chat AI, and it was all "ChatGPT explains that it is trained to exploit weaknesses" ... no. It's harmful enough, sure, but it's not trained to exploit weaknesses, no matter what you got it to "admit".

          myrmepropagandistF This user is from outside of this forum
          myrmepropagandistF This user is from outside of this forum
          myrmepropagandist
          wrote last edited by
          #4

          @vivtek @emilymbender

          "ChatGPT explains that it is trained to exploit weaknesses"

          This comes with the implication that the writer "interviewed" the system as if it were a person and as if its responses could produce ... introspection rather than just be a best fit for the kind of response one would expect for such a question.

          1 Reply Last reply
          0
          • myrmepropagandistF myrmepropagandist

            @emilymbender

            I imagine the journalist thinks, because they are being critical of the technology they aren't making a huge destructive error with this headline.

            LLMs cannot "tell you about themselves." They can generate text that is what you would expect such a system to write if it *could* tell you about itself. (As if it had a 'self' to tell about, if it had some system to even attempt to simulate that process)

            This category error is going to keep causing problems isn't it?

            JoeN This user is from outside of this forum
            JoeN This user is from outside of this forum
            Joe
            wrote last edited by
            #5

            @futurebird @emilymbender There's one exception. As a rule, the text that the LLM processes is not just what you type; it's a detailed set of instructions (the "system prompt") followed by what you type. In many cases, the LLM can be tricked into revealing the system prompt despite strong instructions in that prompt saying not to do so. So in that sense an LLM can be directed to reveal information about itself. But for anything beyond the text it has been presented with it does not "know", so it will make up something plausible in the sense of being high probability. I think in many cases where some wild behavior is produced, the response is being cribbed from one or more SF stories in the input data (so many stories about robots going rogue to choose from and they've all been fed into the training data).

            1 Reply Last reply
            0
            • myrmepropagandistF myrmepropagandist

              @emilymbender

              Maybe if every response to prompts started with some variation of:

              "The following text has been generated to have a high probability of meeting your expectations:"

              And if all through the text such disclaimers were automatically sprinkled.

              "Based on the millions of texts this system has scanned, the next part of this reply is designed to meet the expectations set by the keywords and phrases in your prompt"

              These could help make it more obvious when LLMs were being used.

              myrmepropagandistF This user is from outside of this forum
              myrmepropagandistF This user is from outside of this forum
              myrmepropagandist
              wrote last edited by
              #6

              @emilymbender

              "ChatGPT, do you think when you interact with people who have delusions it can make them worse?"

              "This question appears to addresses this system and asks the system to explain itself. The system can provide a reply that is similar to what a person would say if asked such a question, but the response is only an approximation of what most people expect as a response."

              IDK can the system be altered to give such warnings?

              Or maybe the whole "chat with me" interface is a mistake.

              Mx. Eddie RS 1 Reply Last reply
              0
              • myrmepropagandistF myrmepropagandist

                @emilymbender

                "ChatGPT, do you think when you interact with people who have delusions it can make them worse?"

                "This question appears to addresses this system and asks the system to explain itself. The system can provide a reply that is similar to what a person would say if asked such a question, but the response is only an approximation of what most people expect as a response."

                IDK can the system be altered to give such warnings?

                Or maybe the whole "chat with me" interface is a mistake.

                Mx. Eddie RS This user is from outside of this forum
                Mx. Eddie RS This user is from outside of this forum
                Mx. Eddie R
                wrote last edited by
                #7

                @futurebird
                I think the chatbot UX in particular force-amplifies the danger of LLMs to some vulnerable people. It is meant to feel like talking with a trusted friend, in contrast to reading a webpage or a bullet list of search results does not, so people are likely to be more receptive and less on guard. Even the way task prompts are designed, "you are (role) and your task is..." feels like talking to a subordinate in whatever role, more than programming.

                1 Reply Last reply
                0

                Reply
                • Reply as topic
                Log in to reply
                • Oldest to Newest
                • Newest to Oldest
                • Most Votes


                • Login

                • Don't have an account? Register

                • Login or register to search.
                Powered by NodeBB Contributors
                • First post
                  Last post
                0
                • Categories
                • Recent
                • Tags
                • Popular
                • World
                • Users
                • Groups