Hey, has anyone here used Qwen3.5-27B-NVFP4-GGUF with llama.cpp yet?

reddit-localllama · www.reddit.com · 3 pts · 15 replies · 1d

Hey! I was wondering if any of you have used Qwen3.5-27B-NVFP4-GGUF on an RTX 5090 with llama.cpp?
