Commit c76dbfb

Update to v0.2.0
1 parent 0008f31 commit c76dbfb

File tree

4 files changed: +163 −20 lines changed

README.md — 76 additions, 6 deletions

````diff
@@ -2,18 +2,30 @@
 
 Python framework for AI agents logic-only coding with streaming, tool calls, and multi-LLM provider support.
 
+Only the **"fairly stable"** versions are published on PyPI; to get the latest experimental versions, clone this repository and install it!
+
 ## Installation
 
 ```bash
 pip install open-taranis --upgrade
 ```
+For the package on **PyPI**
+
+**or**
+
+```bash
+git clone https://github.com/SyntaxError4Life/open-taranis && cd open-taranis/ && pip install .
+```
+For the latest version
 
 ## Quick Start
 
+<details><summary><b>Simplest</b></summary>
+
 ```python
 import open_taranis as T
 
-client = T.clients.openrouter("api_key")
+client = T.clients.openrouter()  # API_KEY in env var
 
 messages = [
     T.create_user_prompt("Tell me about yourself")
@@ -22,31 +34,75 @@ messages = [
 stream = T.clients.openrouter_request(
     client=client,
     messages=messages,
-    model="mistralai/mistral-7b-instruct:free",
+    model="nvidia/nemotron-3-nano-30b-a3b:free",
 )
 
 print("assistant : ", end="")
 for token, tool, tool_bool in T.handle_streaming(stream):
     if token:
         print(token, end="")
 ```
+</details>
+
+<details><summary><b>To create a simple display using gradio as backend</b></summary>
 
-To create a simple display using gradio as backend:
 ```python
 import open_taranis as T
 import open_taranis.web_front as W
 import gradio as gr
 
 gr.ChatInterface(
     fn=W.chat_fn_gradio(
-        client=T.clients.openrouter(API_KEY),
+        client=T.clients.openrouter(),  # API_KEY in env var
         request=T.clients.openrouter_request,
-        model="mistralai/mistral-7b-instruct:free",
+        model="nvidia/nemotron-3-nano-30b-a3b:free",
         _system_prompt="You are an agent named **Taranis**"
     ).create_fn(),
     title="web front"
 ).launch()
+```
+</details>
+
+<details><summary><b>Make a simple agent with a context window over the last 6 turns</b></summary>
+
+```python
+import open_taranis as T
+
+class Agent(T.agent_base):
+    def __init__(self):
+        super().__init__()
+
+        self.client = T.clients.openrouter()
+        self._system_prompt = [T.create_system_prompt(
+            "You're an agent named **Taranis**!"
+        )]
+
+    def create_stream(self):
+        return T.clients.openrouter_request(
+            client=self.client,
+            messages=self._system_prompt + self.messages,
+            model="nvidia/nemotron-3-nano-30b-a3b:free"
+        )
+
+    def manage_messages(self):
+        self.messages = self.messages[-12:]  # Each turn has 1 user and 1 assistant message
+
+My_agent = Agent()
+
+while True:
+    prompt = input("user : ")
+
+    print("\n\nagent : ", end="")
+
+    for t in My_agent(prompt):
+        print(t, end="", flush=True)
+
+    print("\n\n", "=" * 60, "\n")
 ```
+</details>
 
 ---
 
@@ -70,19 +126,33 @@ gr.ChatInterface(
 - [X] v0.0.1: start
 - [X] v0.0.x: Add and confirm other API providers (in the cloud, not locally)
 - [X] v0.1.x: Functionality verifications in [examples](https://github.com/SyntaxError4Life/open-taranis/blob/main/examples/)
-- [ ] > v0.2.0: Add features for **logic-only coding** approach
+- [ ] > v0.2.0: Add features for the **logic-only coding** approach, starting with `agent_base`
+- [ ] v0.3.x: Add a full agent in **TUI** and upgrade web client **deployments**
 - The rest will follow soon.
 
 ## Changelog
 
+<details><summary><b>v0.0.x : The start</b></summary>
+
 - **v0.0.4** : Add **xai** and **groq** providers
 - **v0.0.6** : Add **huggingface** provider and args for **clients.veniceai_request**
+</details>
+
+<details><summary><b>v0.1.x : Gradio, commands and TUI</b></summary>
+
 - **v0.1.0** : Start the **docs**, add the **update-checker** and prepare for the continuation of the project...
 - **v0.1.1** : Code to deploy a **frontend with gradio** added (no complex logic at the moment, e.g. tool_calls)
 - **v0.1.2** : Fixed a display bug in the **web_front** and experimentally added **ollama as a backend**
 - **v0.1.3** : Fixed the memory reset in the **web_front** and replaced the **ollama module** with the **openai front** (works 100 times better)
 - **v0.1.4** : Fixed `web_front` for native use on huggingface, as well as `handle_streaming`, which had tool-retrieval issues
 - **v0.1.7** : Added a **TUI** and **commands**, detection of **env variables** (API keys) and tools in the framework
+</details>
+
+<details><summary><b>v0.2.x : Agents</b></summary>
+
+- **v0.2.0** : Add `agent_base`
+</details>
 
 ## Advanced Examples
````
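The README's `manage_messages` relies on a simple slicing trick: since each turn appends one user and one assistant message, `self.messages[-12:]` keeps exactly the last 6 turns. A self-contained sketch of that idea (plain dicts mirror the framework's message format; nothing here imports `open_taranis`, and `trim_to_last_turns` is a hypothetical helper name):

```python
# Hypothetical helper illustrating the context-window trick from the README's
# Agent.manage_messages(): 2 messages per turn, so [-2 * turns:] keeps
# the last `turns` user/assistant pairs.

def trim_to_last_turns(messages, turns=6):
    """Keep only the last `turns` user/assistant pairs."""
    return messages[-2 * turns:]

# Build 10 fake turns (20 messages).
history = []
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

recent = trim_to_last_turns(history, turns=6)
print(len(recent))           # 12 messages = 6 turns
print(recent[0]["content"])  # oldest kept message: "question 4"
```

The slice is safe when fewer than 6 turns exist: Python slicing simply returns the whole list.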

pyproject.toml — 1 addition, 1 deletion

```diff
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "open-taranis"
-version = "0.1.7"
+version = "0.2.0"
 description = "Minimalist Python framework for AI agents logic-only coding with streaming, tool calls, and multi-LLM provider support"
 authors = [{name = "SyntaxError4Life", email = "lilian@zanomega.com"}]
 dependencies = ["openai", "bs4"]
```
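This `version` bump is what the framework's update-checker (the changelog mentions one, and `__init__.py` imports `from packaging import version`) would compare against. A minimal sketch of such a comparison, assuming only the `packaging` library; the PyPI lookup is stubbed out, so this shows the comparison alone, not the project's actual checker:

```python
# Sketch of a version comparison like the one hinted at by the
# `from packaging import version` import; `latest` would normally
# come from a PyPI request, hardcoded here for illustration.
from packaging import version

installed = "0.1.7"
latest = "0.2.0"  # stub: the real checker would fetch this

if version.parse(latest) > version.parse(installed):
    print(f"Update available: {installed} -> {latest}")
```

`packaging.version.parse` handles pre-releases and multi-digit components correctly, which naive string comparison does not (e.g. `"0.10.0" > "0.9.0"` fails as strings).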

src/open_taranis/__init__.py — 85 additions, 5 deletions

````diff
@@ -7,7 +7,7 @@
 import inspect
 from typing import Any, Callable, Literal, Union, get_args, get_origin
 
-__version__ = "0.1.7"
+__version__ = "0.2.0"
 
 import requests
 from packaging import version
@@ -192,7 +192,7 @@ def huggingface(api_key: str=None) -> openai.OpenAI:
     Use `clients.generic_request` for call
     """
     if os.environ.get('HF_API'):
-        os.environ.get('HF_API')
+        api_key = os.environ.get('HF_API')
     return openai.OpenAI(api_key=api_key, base_url="https://router.huggingface.co/v1")
 
 @staticmethod
@@ -440,9 +440,89 @@ def create_user_prompt(content:str) -> dict[str, str] :
     return {"role": "user", "content": content}
 
 # ==============================
-# Agents coding (v0.2.0)
+# Agents base
 # ==============================
 
-class agent:
+class agent_base:
     def __init__(self):
-        pass
+        self._system_prompt = [create_system_prompt(None)]
+        self.messages = []
+
+        self.meta = {
+            "create_stream": True
+        }
+
+    def create_stream(self):
+        """
+        # TO IMPLEMENT
+
+        like this:
+        ```python
+        return clients.Your_request(
+            client=clients.Your,
+            messages=self._system_prompt + self.messages,  # Needs to be kept!
+            model="Your model"
+        )
+        ```
+        but with your customisation
+
+        You can define your client in `__init__()`
+        """
+        if self.meta["create_stream"]:
+            raise NotImplementedError("You MUST define 'agent_base.create_stream()'")
+
+    def manage_user_prompt(self, prompt):
+        """
+        # TO IMPLEMENT if needed
+        """
+        return prompt
+
+    def manage_assistant_response(self, response):
+        """
+        # TO IMPLEMENT if needed
+        """
+        return response
+
+    def manage_messages(self):
+        """
+        Example to always store only the 2 last turns:
+        ```python
+        self.messages = self.messages[-4:]
+        ```
+        """
+        pass
+
+    def execute_tools(self, tool_calls):
+        pass
+
+    def __call__(self, prompt):
+        run = True
+
+        self.messages.append(create_user_prompt(
+            self.manage_user_prompt(prompt)
+        ))
+
+        while run:
+            response = ""
+            for token, tool_calls, run in handle_streaming(self.create_stream()):
+                if token:
+                    yield token
+                    response += token
+
+            self.messages.append(create_assistant_response(
+                self.manage_assistant_response(response)
+            ))
+
+            self.manage_messages()
+
+            if run:
+                self.execute_tools(tool_calls)
````
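The interesting part of `agent_base.__call__` is that the third value yielded by `handle_streaming` doubles as the loop flag: a final chunk carrying tool calls leaves `run` truthy, so the agent executes the tools and queries the model again. A self-contained sketch of that control flow with a stubbed stream (nothing here imports `open_taranis`; `fake_handle_streaming` and `MiniAgent` are hypothetical stand-ins):

```python
# Self-contained sketch of the generator loop in agent_base.__call__.
# fake_handle_streaming stands in for T.handle_streaming and yields
# (token, tool_calls, more); here `more` is always False, so the
# while-loop runs a single round.

def fake_handle_streaming(stream):
    """Stand-in for handle_streaming: yields (token, tool_calls, more)."""
    for token in stream:
        yield token, None, False

class MiniAgent:
    def __init__(self):
        self.messages = []

    def create_stream(self):
        # Stand-in for the LLM request; echoes the last user message.
        last = self.messages[-1]["content"]
        return iter(["echo: ", last])

    def __call__(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        run = True
        while run:
            response = ""
            # The for-loop rebinds `run` on every chunk; its final value
            # decides whether another model round (tool execution) follows.
            for token, tool_calls, run in fake_handle_streaming(self.create_stream()):
                if token:
                    yield token
                    response += token
            self.messages.append({"role": "assistant", "content": response})
            if run:
                pass  # agent_base would call self.execute_tools(tool_calls) here

agent = MiniAgent()
print("".join(agent("hi")))  # echo: hi
```

Note that because `__call__` is a generator, nothing runs until the caller iterates it, which is exactly how the README's `for t in My_agent(prompt):` loop consumes it.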

src/open_taranis/web_front.py — 1 addition, 8 deletions

```diff
@@ -20,7 +20,7 @@ def create_stream(self, messages):
 
     return self.request(
         self.client,
-        messages=messages,
+        messages=self._system_prompt + messages,
         model=self.model
     )
 
@@ -36,13 +36,6 @@ def fn(message, history, *args):
         messages.append(T.create_user_prompt(user))
         messages.append(T.create_assistant_response(assistant))
     messages.append(T.create_user_prompt(message))
-
-    stream = self.request(
-        self.client,
-        messages=self._system_prompt + messages,
-        model=self.model
-    )
 
     stream = self.create_stream(
         messages=messages
```
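The removed `stream = self.request(...)` block duplicated what `create_stream` now does (prepending the system prompt), so `fn` is left with just the conversion from gradio-style `(user, assistant)` history pairs into a message list. A sketch of that conversion using plain dicts (the helper name `history_to_messages` is hypothetical; `open_taranis` is not imported):

```python
# Sketch of the history-to-messages conversion kept in web_front's fn,
# with plain dicts in place of T.create_user_prompt /
# T.create_assistant_response. History is a list of (user, assistant)
# tuples, as gradio's ChatInterface passes it.

def history_to_messages(history, message):
    messages = []
    for user, assistant in history:
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": assistant})
    # The new user message goes last; the system prompt is prepended
    # later, inside create_stream.
    messages.append({"role": "user", "content": message})
    return messages

msgs = history_to_messages([("hi", "hello!")], "how are you?")
print(len(msgs))         # 3
print(msgs[-1]["role"])  # user
```

Centralizing the system-prompt handling in `create_stream` means every caller gets it for free, which is presumably what motivated this deduplication.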

0 commit comments