
Commit 753371b

feat: update granite library examples to use Granite 4.1 3B adapters. (#981)
* Bump docs/examples/intrinsics to 4.1. The commented-out code in intrinsics.py still needs to be changed.
* Change query intrinsic examples to use 4.1.
* Change back to SWITCH to make later search/replace easier.
* Update examples for 3 intrinsics:
  * context_relevance: use 4.0 and leave a comment explaining why.
  * requirement check: switch to 4.1.
  * uncertainty: switch to 4.1.
* Update AGENTS.md to Granite 4.1.
* Mention context-relevance model availability in AGENTS.md; long term it probably makes sense to add another column to the intrinsics list.
* Add 3b/8b/30b models to BASE_MODEL_TO_CANONICAL_NAME (sketched below).
* Add 4.1-3b to _LOCAL_BASE_MODELS for the intrinsics formatter tests.
* Change tests and examples from 4.0 to 4.1.
* Rename requirement_check -> requirement-check.
* A little bit of style cleanup.

Signed-off-by: Nathan Fulton <gitcommit@nfulton.org>
Signed-off-by: Jake LoRocco <jake.lorocco@ibm.com>
Co-authored-by: Jake LoRocco <jake.lorocco@ibm.com>
1 parent 6785a65 commit 753371b
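The BASE_MODEL_TO_CANONICAL_NAME bullet above refers to a model-id mapping inside the library that is not part of the hunks shown on this page. The sketch below only illustrates the shape of that mapping: the 4.1-3b Hugging Face id comes from the AGENTS.md diff further down, while the 8b/30b ids are assumptions based on the granite-4.1-{3b,8b,30b} README URL pattern.

```python
# Hypothetical illustration only: the real BASE_MODEL_TO_CANONICAL_NAME lives in
# the mellea intrinsics code and is not reproduced in this diff. The 4.1-3b id
# matches the AGENTS.md change below; the 8b/30b ids are assumed from the
# granite-4.1-{3b,8b,30b} README path pattern.
BASE_MODEL_TO_CANONICAL_NAME = {
    "ibm-granite/granite-4.1-3b": "granite-4.1-3b",
    "ibm-granite/granite-4.1-8b": "granite-4.1-8b",    # assumed Hugging Face id
    "ibm-granite/granite-4.1-30b": "granite-4.1-30b",  # assumed Hugging Face id
}
```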

19 files changed: 31 additions & 38 deletions
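Most of the per-file changes below follow the same pattern: example scripts that previously loaded the Granite 4.0 Micro base model now load Granite 4.1 3B, which the updated adapters target. A minimal sketch of the recurring change, assuming the examples' existing `start_backend` helper and `model_ids` constants (their imports sit outside the hunks shown here):

```python
# Sketch of the recurring edit in docs/examples/intrinsics/*.py.
# `start_backend` and `model_ids` come from the examples' existing imports,
# which are outside the diff hunks on this page.

# Before: the examples loaded the Granite 4.0 Micro base model.
# ctx, backend = start_backend(
#     "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
# )

# After: the examples load Granite 4.1 3B.
ctx, backend = start_backend(
    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
)
```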

AGENTS.md

Lines changed: 3 additions & 3 deletions
@@ -178,7 +178,7 @@ Intrinsics are specialized LoRA adapters that add task-specific capabilities (RA
 | `rag` | `rewrite_question(question, context, backend)` | Rewrite question into a retrieval query |
 | `rag` | `clarify_query(question, documents, context, backend)` | Generate clarification or return "CLEAR" |
 | `rag` | `find_citations(response, documents, context, backend)` | Document sentences supporting the response |
-| `rag` | `check_context_relevance(question, document, context, backend)` | Whether a document is relevant (0–1) |
+| `rag` | `check_context_relevance(question, document, context, backend)` | Whether a document is relevant (0–1); only supported for granite-4.0, not granite-4.1 |
 | `rag` | `flag_hallucinated_content(response, documents, context, backend)` | Flag potentially hallucinated sentences |
 
 ```python

@@ -187,7 +187,7 @@ from mellea.stdlib.components import Message
 from mellea.stdlib.components.intrinsic import core
 from mellea.stdlib.context import ChatContext
 
-backend = LocalHFBackend(model_id="ibm-granite/granite-4.0-micro")
+backend = LocalHFBackend(model_id="ibm-granite/granite-4.1-3b")
 context = (
     ChatContext()
     .add(Message("user", "What is the square root of 4?"))

@@ -223,5 +223,5 @@ https://huggingface.co/ibm-granite/granitelib-rag-r1.0/blob/main/{intrinsic_name
 
 Core and Guardian intrinsics (include model subfolder):
 ```
-https://huggingface.co/ibm-granite/granitelib-{core,guardian}-r1.0/blob/main/{intrinsic_name}/granite-4.0-micro/README.md
+https://huggingface.co/ibm-granite/granitelib-{core,guardian,rag}-r1.0/blob/main/{intrinsic_name}/granite-4.1-{3b,8b,30b}/{lora,alora}/README.md
 ```
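For context, the AGENTS.md snippet above pairs a `LocalHFBackend` with a `ChatContext` and the intrinsic modules. Below is a minimal sketch of a RAG intrinsic call against the new 4.1 base model, assuming the table's functions are exposed directly on the `rag` module and that `LocalHFBackend` is imported from mellea's Hugging Face backend (that import sits outside the hunk above):

```python
from mellea.stdlib.components import Message
from mellea.stdlib.components.intrinsic import rag
from mellea.stdlib.context import ChatContext

# Assumption: the LocalHFBackend import path is not shown in the hunk above.
from mellea.backends.huggingface import LocalHFBackend

# Load the Granite 4.1 3B base model that the 4.1 intrinsic adapters target.
backend = LocalHFBackend(model_id="ibm-granite/granite-4.1-3b")

# Build a small chat context, as in the documented snippet.
context = ChatContext().add(Message("user", "What is the square root of 4?"))

# Per the table above: rewrite the user's question into a retrieval query.
retrieval_query = rag.rewrite_question(
    question="What is the square root of 4?",
    context=context,
    backend=backend,
)
print(retrieval_query)
```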

docs/examples/aLora/example_readme_generator.py

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 if __name__ == "__main__":
     generate_readme(
         dataset_path="stembolt_failure_dataset.jsonl",
-        base_model="granite-4.0-micro",
+        base_model="granite-4.1-3b",
         prompt_file=None,
         output_path="stembolts_model_readme.md",
         name="your-username/stembolts-alora",

docs/examples/intrinsics/answerability.py

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 from mellea.stdlib.components.intrinsic import rag
 
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/citations.py

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 from mellea.stdlib.components.intrinsic import rag
 
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/context_attribution.py

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 from mellea.stdlib.components.intrinsic import core
 
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/context_relevance.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 ctx, backend = start_backend(
     "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
 )
-# NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.
+# NOTE: This example uses Granite 4.0 Micro because there is no context_relevance intrinsic for Granite 4.1.
 
 question = "Who is the CEO of Microsoft?"
 document = (
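Because the context-relevance adapter is only available for Granite 4.0 (see the AGENTS.md note above), this example intentionally keeps the 4.0 Micro backend. A minimal sketch of the unchanged call path, assuming the example's existing `start_backend`/`model_ids` imports and the `rag.check_context_relevance` signature from the AGENTS.md table; the document text here is purely illustrative:

```python
from mellea.stdlib.components.intrinsic import rag

# `start_backend` and `model_ids` come from the example's existing imports,
# which are outside the hunk shown above.
ctx, backend = start_backend(
    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
)

question = "Who is the CEO of Microsoft?"
# Illustrative document text; the example's real document is truncated in this diff.
document = "Satya Nadella is the chief executive officer of Microsoft."

# Per the AGENTS.md table, this judges whether the document is relevant (0-1).
relevance = rag.check_context_relevance(
    question=question,
    document=document,
    context=ctx,
    backend=backend,
)
print(relevance)
```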

docs/examples/intrinsics/factuality_correction.py

Lines changed: 1 addition & 1 deletion
@@ -82,7 +82,7 @@
 
 # Create the backend.
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/factuality_detection.py

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@
 
 # Create the backend.
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/guardian_core.py

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 from mellea.stdlib.components.intrinsic import guardian
 
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.

docs/examples/intrinsics/hallucination_detection.py

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 from mellea.stdlib.components.intrinsic import rag
 
 ctx, backend = start_backend(
-    "hf", model_id=model_ids.IBM_GRANITE_4_MICRO_3B, context_type="chat"
+    "hf", model_id=model_ids.IBM_GRANITE_4_1_3B, context_type="chat"
 )
 # NOTE: This example can also be run with the OpenAIBackend using a GraniteSwitch model. See docs/examples/granite-switch/.
