
GenAI Support

LLM support can be provided through different strategies:

  • Access to cloud-based LLMs via their API interfaces (e.g. OpenAI)
  • Local LLMs
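For the local route, a minimal sketch of talking to an Ollama server from Python might look like this. It assumes Ollama is running on its default port (11434) and uses only the standard library; the model name and prompt are illustrative:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    # stream=False asks the server for one complete JSON reply
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("qwen3:0.6b", "Explain in one sentence what an LLM is."))
```

The same payload shape works against any model pulled with `ollama pull`; only the `model` field changes.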

As a first approach, we implemented Ollama models, in our case Qwen3:0.6B, a very small but, in my opinion, capable model. We then accessed it via a Python script, and as a third step we customized it through hyperparameter tuning and a system prompt in a Modelfile.
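The Modelfile customization mentioned above might look like the sketch below. The parameter values and system prompt here are illustrative assumptions, not the project's actual settings:

```
FROM qwen3:0.6b

# Hyperparameters (illustrative values)
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 4096

# System prompt baked into the custom model
SYSTEM "You are a concise, helpful assistant. Answer briefly and accurately."
```

Building and running the customized model then uses the standard Ollama CLI, e.g. `ollama create my-qwen -f Modelfile` followed by `ollama run my-qwen`.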