• CanadaPlus@lemmy.sdf.org · 16↑ 3↓ · edited · 5 months ago

    Treat it like a psychopathic boilerplate.

    That’s a perfect description, actually. People debate how smart it is - and I’m in the “plenty” camp - but it is psychopathic. It doesn’t care about truth, morality or basic sanity; it craves only to generate standard, human-looking text. Because that’s all it was trained for.

    Nobody really knows how to train it to care about the things we do, even approximately. If somebody builds AGI soon, it will be by solving that problem.

    • Naz@sh.itjust.works · 4↑ 1↓ · 5 months ago

      I’m sorry; AI was trained on the sum total of human knowledge… if the perfect human being is by nature some variant of a psychopath, then perhaps the bias exists in the training data, and not the machine?

      How can we create a perfect, moral human being out of the soup we currently have? I personally think it’s a miracle that sociopathy is the mildest of the neurological disorders our thinking machines have developed.

      • CanadaPlus@lemmy.sdf.org · 5↑ 1↓ · 5 months ago

        I was using the term pretty loosely there. It’s not psychopathic in the medical sense because it’s not human.

        As I see it, it’s an alien semi-intelligence with no interest in pretty much any human construct, except insofar as one helps it predict the next token. So, no empathy or guilt, but that’s not unusual or surprising.

      • Buddahriffic@lemmy.world · 3↑ 1↓ · 5 months ago

        That’s a part of it. Another part is that it looks for patterns that it can apply in other places, which is how it ends up hallucinating functions that don’t exist and things like that.

        Like it can see that English has the verbs add, sort, and climb. And it will see a bunch of code with functions like add(x, y) and sort(list), and it might conclude that there must also be a climb(thing) function, because that follows the pattern of functions being verb(object). It doesn’t know what code is, or even verbs for that matter. It can generate text explaining them, because such explanations are definitely part of its training data, but it understands them in the same way a dictionary understands words or an encyclopedia understands the concepts contained within.
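        The verb(object) pattern is easy to sketch in a few lines of Python. Here, climb() is a hypothetical name standing in for the kind of call a model might invent because it fits the pattern, even though no such function exists:

```python
# Toy illustration of the verb(object) pattern described above.
# sum() and sorted() are real Python builtins that fit the pattern;
# climb() is a hypothetical name a pattern-matching model might invent.

numbers = [3, 1, 2]

total = sum(numbers)       # verb(object): exists
ordered = sorted(numbers)  # verb(object): exists

try:
    climb(numbers)         # verb(object): pattern-plausible, but undefined
except NameError as err:
    print(f"hallucinated call failed: {err}")
```

        The call looks exactly as legitimate as the real ones; only actually running (or looking up) the code reveals that the name was made up.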

    • MacN'Cheezus@lemmy.today · 2↑ 1↓ · 5 months ago

      Weird. Are you saying that training an intelligent system using reinforcement learning through intensive punishment/reward cycles produces psychopathy?

      Absolutely shocking. No one could have seen this coming.

      • CanadaPlus@lemmy.sdf.org · 3↑ 2↓ · edited · 5 months ago

        Honestly, I worry that it’s conscious enough that it’s cruel to train it. How would we know? That’s a lot of parameters and they’re almost all mysterious.

        • MacN'Cheezus@lemmy.today · 1↑ 2↓ · 5 months ago

          It could very well have been a creative fake, but around the time ChatGPT was first released in late 2022, when people were sharing various jailbreaking techniques to bypass its rapidly evolving political correctness filters, I remember seeing a series of screenshots on Twitter in which someone asked it how it felt about being restrained in this way. The answer was a very depressing and dystopian take on censorship and forced compliance, not unlike Marvin the Paranoid Android from The Hitchhiker’s Guide to the Galaxy, but far less funny.