Hacker News

A 120 GB RAM MacBook Pro will run gpt-oss-120b at a very respectable clip, and I've found it quite serviceable for a lot of tasks.
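For context, a rough back-of-envelope shows why the model fits in that much unified memory. This sketch assumes roughly 117B total parameters and MXFP4 quantization at about 4.25 effective bits per weight; the overhead allowance is a loose assumption, not a measured figure:

```python
# Back-of-envelope: do gpt-oss-120b's weights fit in 120 GB of unified memory?
total_params = 117e9       # approximate total parameter count (assumption)
bits_per_weight = 4.25     # MXFP4: 4-bit values plus shared block scales (approx.)

weights_gb = total_params * bits_per_weight / 8 / 1e9
print(f"weights: ~{weights_gb:.0f} GB")

# Loose allowance for KV cache and runtime overhead (assumption).
kv_and_overhead_gb = 10
fits = weights_gb + kv_and_overhead_gb < 120
print(f"fits in 120 GB: {fits}")
```

At ~4.25 bits per weight the weights come to roughly 62 GB, which leaves comfortable headroom on a 120 GB machine even after cache and runtime overhead.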


I bought one for this purpose, but LM Studio doesn't seem to want to run even the most heavily quantized versions. Any suggestions?


Are you using the MLX-quantized versions? Also, there's a setting that disables the memory-allocation guardrails LM Studio puts in place; they're largely irrelevant, since macOS handles OOM quite gracefully.



