jeffalo — 2/1/2025, 8:18:26 AM

everyone with capable tech should experience running an LLM locally - it sort of blows your mind.

my 16GB M2 MacBook Air can handle models that are big enough to be shockingly good.

if you’re anything like me, you’ll have an epiphany and your entire perception of technology will change.

MMW, in a short while local LLMs will be ChatGPT level.

♥ 13 ↩ 0 💬 4 comments

comments

lily:

my mate tried deepseek locally, it’s actually mental

i am kinda scared for our future though

2/3/2025, 2:06:34 PM
chiroyce:

yea even the 8b one is pretty smart given it runs at 11tok/s on my 5 year old 16GB M1 Air

2/4/2025, 7:29:31 AM
-gr:

which one should i run?

ryzen 7 7800x3d, 32gb ddr5, rtx 4060

2/1/2025, 2:11:12 PM
jeffalo:

try any of these top models: https://ollama.com/search. i also found an uncensored model called wizard-vicuna-uncensored, which is so much fun to play with
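(for anyone following along: getting started with ollama is roughly two commands. a minimal sketch - the exact model tag is an assumption, check ollama.com/search for current names and pick a size that fits your RAM/VRAM:)

```shell
# Download a model's weights (an 8B model needs roughly 5GB of disk)
ollama pull llama3:8b

# Start an interactive chat session with it, entirely offline
ollama run llama3:8b
```

on a 16GB machine, 7-8B models at 4-bit quantization are usually the sweet spot; bigger quantized models will load but may swap.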

2/2/2025, 6:35:53 PM