r/LocalLLaMA
Posted by u/Dismal-Effect-1914
Nemo 30B is insane. 1M+ token CTX on one 3090
Tools 386 points
105 comments
1 month ago
Been playing around with llama.cpp and some 30-80B parameter models with CPU offloading. Currently have one 3090 and 32 GB of RAM. I'm very impressed by Nemo 30B: 1M+ token context cache, runs on one 3090 with the experts offloaded to CPU, and it does 35 t/s, which is faster than I can read at least. Models are usually slow as fuck at this large a context window, but feed this one a whole book or research paper and it's done summarizing in a few minutes. This really makes long context windows on local hardware possible. The only other contender I've tried is Seed OSS 36B, and it was slower by about 20 t/s.
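For anyone wanting to try the same setup: a minimal sketch of the kind of llama.cpp launch this describes. The model filename, context size, and tensor regex here are assumptions, not my exact command — `-ot`/`--override-tensor` is the real llama.cpp flag that pins matching tensors (here the MoE expert weights) to CPU RAM while everything else goes to the GPU.

```shell
# Sketch only: model path and regex are assumptions for illustration.
# -ngl 99 offloads all layers to the 3090; -ot then overrides that for
# the MoE expert tensors, keeping them in system RAM.
llama-server -m nemo-30b-q4_k_m.gguf \
  -c 1000000 \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU"
```

The regex matches the `ffn_*_exps` expert tensors in the GGUF; tweak it to your model's tensor names (check with `gguf-dump` or the load log).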