Gemma4:26b is also worth trying. I find it runs much faster on my hardware.
Edit: Qwen3.6:35B might be the sweet spot. It’s bigger than the 27B, but actually more lightweight when running. TIL the 27B is not a MoE model; it’s a dense model. The 35B is a MoE model with only 3B active params.
So far, I think Qwen3.6:35B might be giving me better results than Gemma4:26B. It’s a bit slower than Gemma4:26B, but definitely faster than Qwen3.6:27B.
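To make the dense-vs-MoE point concrete: per-token compute scales roughly with *active* parameters (a common rule of thumb is ~2 FLOPs per active parameter per token), while memory footprint scales with *total* parameters. A back-of-the-envelope sketch, using the parameter counts above (the 2-FLOPs constant and the exact active-param figure are approximations, not measured numbers):

```python
# Rough sketch: why an MoE model with few active params can run
# faster per token than a smaller dense model. Compute per token
# tracks ACTIVE parameters; memory tracks TOTAL parameters.

def per_token_gflops(active_params_billions: float) -> float:
    """Approximate forward-pass compute per token, in GFLOPs
    (~2 FLOPs per active parameter, a common rule of thumb)."""
    return 2 * active_params_billions

dense_27b = per_token_gflops(27)  # dense: all 27B params active
moe_35b = per_token_gflops(3)     # MoE: only ~3B params active per token

print(f"dense 27B: ~{dense_27b:.0f} GFLOPs/token")
print(f"MoE 35B:   ~{moe_35b:.0f} GFLOPs/token")
print(f"compute ratio: ~{dense_27b / moe_35b:.0f}x")
```

So even though the 35B has more total weights to hold in memory, each token only touches ~3B of them, which is why it can be quicker than the dense 27B.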