- Add run_async to LlamaCppChatGenerator (#2821)
- Llama.cpp - pin transformers test dependency; fix type error (#2784)
- Update Llama CPP components to auto call `warm_up` in `run` (#2748)
- Change pytest command (#2475)
- Remove Readme API CI workflow and configs (#2573)
- Make fmt command more forgiving (#2671)
- [breaking] Llama_cpp - drop Python 3.9 and use X|Y typing (#2710)
- Enhancement: Adopt PEP 585 type hinting (part 4) (#2527)
- Add pydoc configurations for Docusaurus (#2411)
- Download pre-built wheels for llama-cpp-python on macOS (#2235)
- Fix llama.cpp types (#2271)
- Feat: `LlamaCppChatGenerator` - update tools param to `ToolsType` (#2438)
- Add image support to LlamaCppChatGenerator (#2197)
- Standardize readmes - part 2 (#2205)
- `LlamaCppChatGenerator` streaming support (#2108)
- Remove black (#1985)
- Fix llama.cpp types; add py.typed; Toolset support (#1973)
- Test llama.cpp with python 3.12 (#1601)
- Review testing workflows (#1541)
- Remove Python 3.8 support (#1421)
- Use Haystack logging across integrations (#1484)
- Update ChatGenerators with `deserialize_tools_or_toolset_inplace` (#1623)
- Align core-integrations Hatch scripts (#1898)
- Chore: remove `jsonschema` dependency from `default` environment (#1368)
- [breaking] Llama.cpp - unified support for tools + refactoring (#1357)
- Llama.cpp - gently handle the removal of ChatMessage.from_function (#1298)
- Make llama.cpp Chat Generator compatible with new `ChatMessage` (#1254)
- Do not retry tests in `hatch run test` command (#954)
- Adopt uv as installer (#1142)
- Update ruff linting scripts and settings (#1105)
- Unpin `llama-cpp-python` (#1115)
- Fix linting/isort (#1215)
- Use text instead of content for ChatMessage in Llama.cpp, Langfuse and Mistral (#1238)
- Chore: llama_cpp - ruff update, don't ruff tests (#998)
- Fix: pin `llama-cpp-python<0.3.0` (#1111)
- Replace DynamicChatPromptBuilder with ChatPromptBuilder (#940)
- Retry tests to reduce flakiness (#836)
- Update ruff invocation to include check parameter (#853)
- Pin `llama-cpp-python>=0.2.87` (#955)
- CI: install `pytest-rerunfailures` where needed; add retry config to `test-cov` script (#845)
- Fix: pin `llama-cpp-python` to an older version (#943)
- Refactor: introduce `_convert_message_to_llamacpp_format` utility function (#939)
- Llama.cpp: change wrong links and imports (#436)
- Fix order of API docs (#447)
- Update category slug (#442)
- Small consistency improvements (#536)
- Disable-class-def (#556)
- [breaking] Rename model_path to model in the Llama.cpp integration (#243)
- Generate api docs (#353)
- `model_name_or_path` > `model` (#418)
- Llama.cpp - review docstrings (#510)
- Llama.cpp - update examples (#511)
- Make tests show coverage (#566)
- Remove references to Python 3.7 (#601)
- Chore: add license classifiers (#680)
- Chore: change the pydoc renderer class (#718)
- Basic implementation of llama.cpp chat generation (#723)
- Update import paths for beta5 (#233)
- Mount llama_cpp in haystack_integrations (#217)
- Add Llama.cpp Generator (#179)