What happened?
Hi, I'm running llama.cpp on a remote machine serving Qwen3-coder-next. The llama.cpp chat web app works fine on port 8080, but when I connect ProxyAI to that endpoint using the Ollama provider, model listing works while chats don't. In theory it should work, since Ollama and llama.cpp both expose the OpenAI API.
I'm always getting empty replies.
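
To show what I mean by the endpoint being OpenAI-compatible, here is a minimal probe of the server (the host name is a placeholder for my remote machine; the routes are llama.cpp's documented /v1 endpoints):

```python
import json
import urllib.request

# Placeholder for my remote llama.cpp server; adjust host/port as needed.
BASE = "http://remote-host:8080"

def call(url, payload=None):
    """GET when payload is None, otherwise POST it as JSON; return the parsed reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Model listing via the OpenAI-compatible route -- the part that also
# works through ProxyAI.
print(call(f"{BASE}/v1/models"))

# Chat completion via the same OpenAI-compatible API -- this is where
# ProxyAI shows an empty reply.
print(call(f"{BASE}/v1/chat/completions", {
    "model": "Qwen3-coder-next",
    "messages": [{"role": "user", "content": "Hello"}],
}))
```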
Relevant log output or stack trace
Steps to reproduce
- set up a remote llama.cpp server
- select the model in ProxyAI
- write a chat message
- send; the reply comes back empty (see the endpoint sketch below)
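
One thing that might help triage: if ProxyAI's Ollama provider calls Ollama's native /api/chat route rather than the OpenAI-compatible /v1/chat/completions, llama.cpp would have nothing to answer with. That's only my assumption about ProxyAI's internals; this sketch just reports which route the server actually responds on:

```python
import json
import urllib.error
import urllib.request

BASE = "http://remote-host:8080"  # same placeholder host as above
PAYLOAD = {
    "model": "Qwen3-coder-next",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Ollama's native chat route vs. the OpenAI-compatible route that
# llama.cpp's llama-server serves; print the HTTP status of each.
for path in ("/api/chat", "/v1/chat/completions"):
    req = urllib.request.Request(BASE + path,
                                 data=json.dumps(PAYLOAD).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req) as resp:
            print(path, "->", resp.status)
    except urllib.error.HTTPError as err:
        print(path, "->", err.code)
```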
CodeGPT version
3.7.5-241.1
Operating System
Linux