• 1 Post
  • 7 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Well, OpenERP or Openbravo are what I would have recommended ten years ago, but due to their commercialization they aren’t really relevant any longer. If I were setting this up for myself, I would probably use Redmine plus a plugin that adds invoicing functionality. I wouldn’t call it simple for a first-timer to pull off, but once you’ve mastered Redmine you’ll find it very extensible and customizable to any particular project’s needs.
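    One reason I call it extensible: Redmine also exposes a REST API, so invoice-like records can be scripted from outside the plugins. A minimal sketch using the python-redmine package; the server URL, API key, and project identifier below are placeholders, not anything from this comment:

    ```python
    from redminelib import Redmine

    # Placeholder URL and API key; a real key is generated under "My account" in Redmine.
    redmine = Redmine("https://redmine.example.com", key="YOUR_API_KEY")

    # Create a tracked item in a hypothetical "invoicing" project.
    issue = redmine.issue.create(
        project_id="invoicing",
        subject="Invoice #2023-041",
        description="Draft invoice for Q3 consulting hours.",
    )
    print(issue.id)
    ```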




  • Also, since you’re asking about multi-GPU: I have a few other cards stuffed in my backplane. The GeForce GTX 1050 Ti has 4 GB of VRAM and is comparable to the P40 in performance. I have split a larger 33B model across the two cards. Splitting a large model is of course slower than running on one card alone, but it is much faster than CPU (even with 48 threads). Speed when splitting does depend on the PCIe bus speed, which for me is limited to gen 1 for now; if you have a faster/newer PCIe generation, you’ll see better results than I do. A rough sketch of the split is just below.
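    For reference, this is roughly how the split is expressed if you drive llama.cpp through the llama-cpp-python bindings; a minimal sketch, with the model path and split ratios as illustrative placeholders rather than my exact settings:

    ```python
    from llama_cpp import Llama

    # Placeholder GGUF path; ratios roughly follow the 24 GB P40 vs 4 GB 1050 Ti split.
    llm = Llama(
        model_path="models/33b-model.Q4_K_M.gguf",
        n_gpu_layers=-1,            # offload all layers to the GPUs
        tensor_split=[0.85, 0.15],  # proportion of the model placed on each card
        n_ctx=2048,
    )

    out = llm("Briefly explain why PCIe bandwidth matters for multi-GPU inference.", max_tokens=128)
    print(out["choices"][0]["text"])
    ```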




  • I have a P40 I’d be glad to run a benchmark on, just tell me how (a rough timing sketch is at the end of this comment). I have Ooba and llama.cpp installed on Ubuntu 22.04. The box is a Dell R620 with two 12-core 3.5 GHz Xeons (2 threads per core, 48 threads total), 256 GB of RAM @ 1833 MHz, and a 20-slot PCIe gen 1 backplane. The PCIe bus speed might affect the loading time of the larger models, but it doesn’t seem to affect inference speed.

    I went for the P40 for its cost per GB of VRAM; speed was less important to me than being able to load the larger models at all. Including the fan and fan coupling, I’m all in at about $250 per card. I’m planning on adding more in the future; I too suffer from having too many PCIe slots.

    I don’t think the CUDA version will become an issue anytime soon, but it’s coming, to be sure.
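    If a concrete harness helps, here is a minimal sketch of how the benchmark could be timed with llama-cpp-python; the model path, prompt, and token count are placeholders, and tokens per second is just generated tokens divided by wall-clock time:

    ```python
    import time
    from llama_cpp import Llama

    # Placeholder model path; swap in whichever GGUF model should be benchmarked.
    llm = Llama(model_path="models/benchmark-model.gguf", n_gpu_layers=-1, n_ctx=2048)

    prompt = "Write a short paragraph about server backplanes."
    start = time.perf_counter()
    out = llm(prompt, max_tokens=256)
    elapsed = time.perf_counter() - start

    generated = out["usage"]["completion_tokens"]
    print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.2f} tokens/s")
    ```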