• fidodo@lemmy.world · +39/−3 · 7 months ago

    Good. It’s dangerous to view AI as magic. I’ve had to debate way too many people who think LLMs are actually intelligent. It’s dangerous to overestimate their capabilities, lest we use them for tasks they can’t perform safely. They’re very powerful, but the fact that they’re totally non-deterministic and unpredictable means we need to very carefully design any system that relies on an LLM, with heavy guardrails.
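    To make “guardrails” concrete, here’s a minimal sketch of one kind, in Python: refuse to act on any model output that fails a structural check, and retry within a budget. The call_llm function and the “answer” field are hypothetical stand-ins, not any particular library’s API.

    ```python
    import json

    MAX_ATTEMPTS = 3  # bounded retries, since identical prompts can yield different outputs

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever LLM API is in use."""
        raise NotImplementedError

    def guarded_extract(prompt: str) -> dict:
        """Only act on LLM output that passes a structural check; otherwise retry."""
        for _ in range(MAX_ATTEMPTS):
            raw = call_llm(prompt)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # malformed output: don't trust it, try again
            if isinstance(data, dict) and "answer" in data:  # "answer" is an assumed schema
                return data
        raise ValueError("no valid output within retry budget; fall back to a safe default")
    ```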

    • FaceDeer@kbin.social · +11/−2 · 7 months ago

      Conversely, there are way too many people who think that humans are magic and that it’s impossible for AI to ever do <insert whatever is currently being debated here>.

      I’ve long believed that there’s a smooth spectrum between not-intelligent and human-intelligent. It’s not a binary yes/no sort of thing. There are basic inert rocks at one end and humans at the other, and everything else gets scattered at various points in between. So I think it’s fine to discuss where exactly on that scale LLMs fall, and to accept the possibility that they’re moving in our direction.

      • fidodo@lemmy.world · +5 · 7 months ago

        It’s not linear either. Brains are crazy complex, with specialized subregions dedicated to specific tasks. I really don’t think that LLMs alone can demonstrate advanced intelligence, but I do think one could be a very important “cortex” for a system that does. There are also different types of intelligence: LLMs are very knowledgeable and have great recall, but they lack reasoning and a worldview.

        • FaceDeer@kbin.social · +2 · 7 months ago

          Indeed, and many of the more advanced AI systems out there already use LLMs as just one component. Retrieval-augmented generation, for example, adds a separate “memory” that gets searched, with the relevant bits inserted into the LLM’s context when it’s answering questions. LLMs have also been trained to call external APIs for the things they’re bad at, like math. The LLM is typically still the central “core” of the system, though; the other parts are routine sorts of computer activities that we’ve had a handle on for decades.
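          As a rough illustration of the retrieval part, here’s a minimal Python sketch. The embed and call_llm functions are hypothetical stand-ins for a real embedding model and LLM API, and the “memory” is just an in-process list; in practice it would be a vector database, but the flow is the same: search, splice into context, generate.

          ```python
          from math import sqrt

          # (embedding, text) pairs acting as the separate "memory"
          memory: list[tuple[list[float], str]] = []

          def embed(text: str) -> list[float]:
              """Hypothetical stand-in for a real embedding model."""
              raise NotImplementedError

          def call_llm(prompt: str) -> str:
              """Hypothetical stand-in for a real LLM API."""
              raise NotImplementedError

          def cosine(a: list[float], b: list[float]) -> float:
              dot = sum(x * y for x, y in zip(a, b))
              norms = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
              return dot / max(norms, 1e-12)  # epsilon avoids division by zero

          def answer(question: str, top_k: int = 3) -> str:
              # Search the "memory" for the snippets most similar to the question...
              q = embed(question)
              hits = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)[:top_k]
              # ...and insert them into the LLM's context before it answers.
              context = "\n".join(text for _, text in hits)
              return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
          ```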

          IMO it still boils down to a continuum. If there’s an AI system that’s got an LLM in it but also a Wolfram Alpha API, a web-search API, and other such “helpers”, then that system should be considered as a whole when asking how “intelligent” it is.

          • fidodo@lemmy.world · +3 · 7 months ago

            Lol yup. Some people think they’re real smart for realizing how limited LLMs are, but they don’t recognize that the researchers who actually work on this are years ahead in experimentation and theory and have already realized all of this and more. They’re not just making the individual models better; they’re also figuring out how to combine them into something more generally intelligent instead of super specialized.

    • Deceptichum@kbin.social · +4/−1 · 7 months ago

      I find that the people who think LLMs are actually intelligent are generally the people opposed to them.

      People who use them as the tools they are know how limited they are.

    • Asuka@sh.itjust.works · +2/−2 · 7 months ago

      I think it’s a big mistake to conclude that because the most basic LLMs are just autocompletes, or because LLMs can hallucinate, what big LLMs do doesn’t constitute “thinking”. No, GPT-4 isn’t conscious, but it very clearly “thinks”.

      It’s started to feel to me like current AIs are reasonable recreations of parts of our minds. It’s like they’re our ability to visualize, to verbalize, and, to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the “rest” of our thought processes.

      • fidodo@lemmy.world · +2 · 7 months ago

        Depends on how you define thinking. I agree that LLMs could be a component of thinking, specifically the knowledge-and-recall part.

      • erwan@lemmy.ml · +1 · 7 months ago

        Yes, as Linus Torvalds said, humans also think like autocomplete systems.