Two TUIs to Help You Run qwen3.5-v2-turbo-pro-final-next and Choose Between GGUF-Q4_K_M and AWQ-Int4-g128

Two annoying problems, two TUIs

Model naming is a mess. Every provider does it differently, and it changes weekly. You see a blog post about Codex v5.3-final-for-real-turbo and your first question is: how do I actually point to this in an API call?

models browses 2,000+ AI models across 85+ providers. Pricing, context windows, benchmarks. It also tells you how much it costs to run your agent swarm and get zero productivity out of it.

brew tap arimxyer/tap
brew install models
models
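The pricing side is simple arithmetic once you have the per-token rates, which is exactly what a tool like models looks up for you. Here's a rough sketch of that math; the prices and token counts below are made-up illustration numbers, not real quotes for any model:

```python
def run_cost(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one API call, given $/1M-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# A hypothetical "agent swarm": 20 agents, 50 calls each,
# at $3/1M input and $15/1M output tokens.
calls = 20 * 50
cost = calls * run_cost(8_000, 1_000, 3.00, 15.00)
print(f"${cost:.2f}")  # $39.00 -- and possibly zero productivity
```

The point of the tool is that the two price parameters change weekly and differ per provider, so the lookup is the hard part, not the multiplication.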

The other problem: you want to run Qwen 3.5 locally. But which of the eighty Hugging Face uploads, each with a different quantization strategy, will actually fit on your machine? And which one should you pick?

llmfit detects your hardware and tells you what will actually run on your machine right now.

brew tap AlexsJones/llmfit
brew install llmfit
llmfit
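The back-of-the-envelope version of what llmfit automates looks like this. The numbers here are rough assumptions of mine, not llmfit's actual formula: weights cost bits/8 bytes per parameter, plus a fudge factor (~20%) for KV cache and activations:

```python
def fits(params_b: float, bits_per_weight: float, vram_gb: float,
         overhead: float = 1.2) -> bool:
    """Rough check: does a model fit in vram_gb of memory?

    params_b: parameter count in billions.
    bits_per_weight: effective bits per weight after quantization.
    overhead: multiplier for KV cache and activations (assumed, not exact).
    """
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb * overhead <= vram_gb

# A 32B model on a 24 GB card. GGUF Q4_K_M is roughly 4.8 bits/weight
# effective; FP16 is 16.
print(fits(32, 4.8, 24))  # 19.2 GB of weights, ~23 GB total -> True
print(fits(32, 16, 24))   # 64 GB of weights -> False
```

The real tool does the part this sketch skips: detecting your actual hardware, knowing each quant's true effective bits per weight, and accounting for context length in the KV cache.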

Both are little tools I find genuinely joyful to use. They live in your terminal, they're fast, and they solve something that's honestly annoying.