54 changes: 54 additions & 0 deletions gallery/index.yaml
@@ -1,4 +1,58 @@
---
- name: "gemma-4-26b-a4b-it"
url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
urls:
- https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
description: |
Hugging Face |
GitHub |
Launch Blog |
Documentation

License: Apache 2.0 | Authors: Google DeepMind

Gemma is a family of open models built by Google DeepMind. Gemma 4 models are multimodal, handling text and image input (with audio input supported on the smaller models) and generating text output. This release includes open-weights models in both pre-trained and instruction-tuned variants. Gemma 4 features a context window of up to 256K tokens and maintains multilingual support for over 140 languages.

Featuring both Dense and Mixture-of-Experts (MoE) architectures, Gemma 4 is well-suited for tasks like text generation, coding, and reasoning. The models are available in four distinct sizes: **E2B**, **E4B**, **26B A4B**, and **31B**. Their diverse sizes make them deployable in environments ranging from high-end phones to laptops and servers, democratizing access to state-of-the-art AI.

Gemma 4 introduces key **capability and architectural advancements**:

* **Reasoning** – All models in the family are designed as highly capable reasoners, with configurable thinking modes.

...
license: "apache-2.0"
tags:
- llm
- gguf
- gemma
icon: https://ai.google.dev/gemma/images/gemma4_banner.png
overrides:
backend: llama-cpp
function:
automatic_tool_parsing_fallback: true
grammar:
disable: true
known_usecases:
- chat
mmproj: llama-cpp/mmproj/gemma-4-26B-A4B-it-GGUF/mmproj-F32.gguf
options:
- use_jinja:true
parameters:
min_p: 0
model: llama-cpp/models/gemma-4-26B-A4B-it-GGUF/gemma-4-26B-A4B-it-UD-Q4_K_M.gguf
repeat_penalty: 1
temperature: 1
top_k: 64
top_p: 0.95
template:
use_tokenizer_template: true
files:
- filename: llama-cpp/models/gemma-4-26B-A4B-it-GGUF/gemma-4-26B-A4B-it-UD-Q4_K_M.gguf
sha256: b8707e57f676d8dd1b80f623b45200cc92e6966b0e95275e606f412095a49fde
uri: https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF/resolve/main/gemma-4-26B-A4B-it-UD-Q4_K_M.gguf
- filename: llama-cpp/mmproj/gemma-4-26B-A4B-it-GGUF/mmproj-F32.gguf
sha256: ce12ca17e5f479ff292cd66817960e4d3f1b09671f744e415c98a55d7725c9ed
uri: https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF/resolve/main/mmproj-F32.gguf
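Each `files` entry pins a `sha256` digest alongside its download `uri`, so a fetched artifact can be verified before the backend loads it. A minimal sketch, assuming a local download path; the digest is copied from the entry above, and the helper names are illustrative, not part of LocalAI's API:

```python
import hashlib

# Digest pinned in the gallery entry above for gemma-4-26B-A4B-it-UD-Q4_K_M.gguf.
EXPECTED_SHA256 = "b8707e57f676d8dd1b80f623b45200cc92e6966b0e95275e606f412095a49fde"

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so multi-GB GGUF weights never sit fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected=EXPECTED_SHA256):
    """Raise if the file on disk does not match the pinned digest."""
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"checksum mismatch for {path}: got {actual}, want {expected}")
    return True
```

Streaming the hash matters here because the quantized weights are several gigabytes; reading the whole file into memory first would be wasteful and can fail on constrained hosts.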
- name: "qwen3.6-35b-a3b-apex"
url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
urls:
@@ -1026,7 +1080,7 @@
- "offload_to_cpu:false"
- "offload_dit_to_cpu:false"
- "init_lm:true"
- "lm_model_path:acestep-5Hz-lm-0.6B" # or acestep-5Hz-lm-4B
- "lm_backend:pt"
- "temperature:0.85"
- "top_p:0.9"
@@ -1405,7 +1459,7 @@
known_usecases:
- tts
tts:
voice: Aiden  # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee
parameters:
model: Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice
- !!merge <<: *qwen-tts
@@ -1417,7 +1471,7 @@
known_usecases:
- tts
tts:
voice: Aiden  # Available speakers: Vivian, Serena, Uncle_Fu, Dylan, Eric, Ryan, Aiden, Ono_Anna, Sohee
parameters:
model: Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice
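Both CustomVoice entries above hard-code `voice: Aiden` and list the valid speakers only in a comment, so a typo would surface at inference time rather than at config-review time. A small validation sketch; the speaker set is copied from that comment, and the function name is illustrative:

```python
# Speakers listed for the Qwen3-TTS CustomVoice models in the entries above.
KNOWN_SPEAKERS = {
    "Vivian", "Serena", "Uncle_Fu", "Dylan", "Eric",
    "Ryan", "Aiden", "Ono_Anna", "Sohee",
}

def check_voice(voice: str) -> str:
    """Return the voice unchanged, or raise naming the allowed speaker set."""
    if voice not in KNOWN_SPEAKERS:
        raise ValueError(
            f"unknown voice {voice!r}; choose one of {sorted(KNOWN_SPEAKERS)}"
        )
    return voice
```

Running such a check in CI over the gallery would catch an invalid `voice` value before the entry ships, the same way yamllint already gates the comment formatting.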
- &fish-speech
@@ -5925,7 +5979,7 @@
- gemma3
- gemma-3
overrides:
# mmproj: gemma-3-27b-it-mmproj-f16.gguf
parameters:
model: gemma-3-27b-it-Q4_K_M.gguf
files:
@@ -5943,7 +5997,7 @@
description: |
google/gemma-3-12b-it is an open-source, state-of-the-art, lightweight, multimodal model built from the same research and technology used to create the Gemini models. It is capable of handling text and image input and generating text output. It has a large context window of 128K tokens and supports over 140 languages. The 12B variant has been fine-tuned using the instruction-tuning approach. Gemma 3 models are suitable for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes them deployable in environments with limited resources such as laptops, desktops, or your own cloud infrastructure.
overrides:
# mmproj: gemma-3-12b-it-mmproj-f16.gguf
parameters:
model: gemma-3-12b-it-Q4_K_M.gguf
files:
@@ -5961,7 +6015,7 @@
description: |
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. Gemma-3-4b-it is a 4 billion parameter model.
overrides:
# mmproj: gemma-3-4b-it-mmproj-f16.gguf
parameters:
model: gemma-3-4b-it-Q4_K_M.gguf
files: