List of icons/services suggested:

  • Calibre
  • Jitsi
  • Kiwix
  • Monero (Node)
  • Nextcloud
  • Pi-hole
  • Ollama (should at least be able to run TinyLlama 1.1B; see the sketch after this list)
  • OpenMediaVault
  • Syncthing
  • VLC Media Player (media server)
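
As a rough sketch of what "able to run TinyLlama 1.1B" looks like in practice, the snippet below queries a local Ollama server over its HTTP API. It assumes Ollama is running on its default port (11434) and that the model has already been pulled with `ollama pull tinyllama`; the prompt is just a placeholder.

```python
import json
import urllib.request

# Assumes a local Ollama server on the default port and that
# `ollama pull tinyllama` has already been run.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "tinyllama",
    "prompt": "Explain what a reverse proxy does in one sentence.",
    "stream": False,  # return a single JSON object, not a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```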
  • abbadon420@lemm.ee · 3 months ago

    Then what are the minimal specs to run Ollama (llama3 8B, or preferably 27B) at a decent speed?

    I have an old PC that now runs my Plex and arr suite. I was thinking of upgrading it a bit and running Ollama on it as well. It doesn’t have a GPU, so what else does it need? I don’t have a big budget, so no new Nvidia card for me.

    • Smokeydope@lemmy.world (OP) · edited · 3 months ago

      “Decent speed” depends on your subjective opinion and what you want it to do. I think it’s fair to say that if it can generate text at around your slowest tolerable reading speed, that’s the bare minimum for real-time conversational use. If you want a task done and don’t mind stepping away to get a coffee, it can be much slower.
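
      To put a rough number on “slowest tolerable reading speed” (my arithmetic, not a figure from this thread): average silent reading is often cited at around 240 words per minute, and a word averages roughly 1.3 tokens in common LLM tokenizers, which puts the real-time floor at about 5 tokens per second:

      ```python
      # Back-of-envelope floor for "generates as fast as I read".
      # Both constants are rough published estimates, not measurements
      # from this thread.
      words_per_minute = 240   # typical adult silent reading speed
      tokens_per_word = 1.3    # common rule of thumb for LLM tokenizers

      tokens_per_second = words_per_minute * tokens_per_word / 60
      print(f"~{tokens_per_second:.1f} tokens/s to keep pace with reading")
      # ~5.2 tokens/s, which lines up with the 5-6 T/s called comfortable below
      ```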

      I was pleasantly surprised to get anything at all working on an old laptop. When I think of AI, my mind imagines supercomputers, thousand-dollar rigs, and data centers, not mobile computers like my ThinkPad. But sure enough, the technology is there, and your old POS can adopt a powerful new tool if you have realistic expectations about matching model capacity to specs.

      TinyLlama will work on a smartphone, but it’s dumb. Llama 3.1 8B is very good and will work on modest hardware, but you may have to be patient with it, especially if your laptop wasn’t top of the line when it was made ten years ago. Then there are all the models in between.

      The dual-core 2.6 GHz i7-6600U in my laptop was just barely a passing grade for real-time conversation when running 8B: at 1.2-1.7 T/s it could say a short word, or half of a complex one, per second. When it needed to process something or recalculate context, it took a hot minute or two.

      That got kind of annoying once you were getting into what it was saying. Bumping the PC up to a six-core AMD Ryzen 5 2600 was a night-and-day difference: it spits out sentences faster than my average reading speed, at 5-6 T/s. I’m still working on getting the 4 GB RX 580 GPU used for offloading, so those numbers are from the CPU bump alone. RAM also matters: DDR5 will beat DDR4 speed-wise.
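
      If you’d rather measure T/s than eyeball it, Ollama’s non-streaming response includes token counts and timings you can divide out. A minimal sketch, assuming a local Ollama server with `llama3.1:8b` already pulled; the `num_gpu` option (how many layers to offload) only takes effect once Ollama detects a usable GPU backend:

      ```python
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"

      payload = json.dumps({
          "model": "llama3.1:8b",
          "prompt": "Write a short paragraph about self-hosting.",
          "stream": False,
          # num_gpu = layers offloaded to the GPU; 0 keeps it CPU-only.
          "options": {"num_gpu": 0},
      }).encode("utf-8")

      req = urllib.request.Request(
          OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
      )
      with urllib.request.urlopen(req) as resp:
          body = json.load(resp)

      # eval_count = generated tokens, eval_duration = nanoseconds spent.
      print(f"{body['eval_count'] / (body['eval_duration'] / 1e9):.1f} tokens/s")
      ```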

      Here’s a tip: most software sets the model’s default context size to 512, 2048, or 4096. Part of what makes Llama 3.1 so special is that it was trained with 128k context, so bump that up to 131072 in the settings so it isn’t recalculating context every few minutes…
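
      In Ollama that setting is the `num_ctx` parameter, which can be passed per request (sketched below) or baked into a Modelfile with `PARAMETER num_ctx 131072`. Keep in mind that a 131072-token KV cache eats a lot of memory on modest hardware:

      ```python
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"

      payload = json.dumps({
          "model": "llama3.1:8b",
          "prompt": "Summarize our conversation so far.",
          "stream": False,
          # Raise the context window from the small default so long chats
          # don't trigger constant context recalculation.
          "options": {"num_ctx": 131072},
      }).encode("utf-8")

      req = urllib.request.Request(
          OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["response"])
      ```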

      • abbadon420@lemm.ee · 3 months ago

        I have a 4-core i7, 16 GB of RAM, and no GPU yet. I haven’t tried anything yet, because I need to wipe Windows and install Mint first, but it sounds promising. Thanks for the details.