@Wilshire@lemmy.world to Technology@lemmy.world · English · 4 months ago
The first GPT-4-class AI model anyone can download has arrived: Llama 405B (arstechnica.com)
@raldone01@lemmy.world · English · 2 points · edited · 4 months ago
My specs because you asked:
CPU: Intel(R) Xeon(R) E5-2699 v3 (72) @ 3.60 GHz
GPU 1: NVIDIA Tesla P40 [Discrete]
GPU 2: NVIDIA Tesla P40 [Discrete]
GPU 3: Matrox Electronics Systems Ltd. MGA G200EH
Memory: 66.75 GiB / 251.75 GiB (27%)
Swap: 75.50 MiB / 40.00 GiB (0%)
sunzu · 1 point · 4 months ago
OK, this is a server. 48 GB across the cards and 67 GB of RAM? For the model alone?
@raldone01@lemmy.world · English · 2 points · 4 months ago
Each card has 24 GB, so 48 GB of VRAM total. I use ollama; it fills whatever VRAM is available on both cards and runs the rest on the CPU cores.
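The split described above (GPU layers first, remainder on CPU) is what ollama does by default, and it can also be pinned explicitly via the `num_gpu` parameter in a Modelfile. A minimal sketch, assuming a hypothetical layer count chosen for two 24 GB P40s — the right value depends on the model and quantization:

```
# Hypothetical Modelfile sketch: cap how many layers are offloaded to VRAM;
# whatever does not fit on the GPUs runs on the CPU cores.
FROM llama3.1:70b
PARAMETER num_gpu 40   # layers to offload across the two P40s (assumed value)
```

Build and run it with `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`; leaving `num_gpu` out lets ollama estimate the split from available VRAM, as described in the comment.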