README.md — 2 additions & 0 deletions
@@ -39,6 +39,7 @@ Jupyter notebooks and interactive tutorials covering:
| agent_reasoning_demo | Interactive demo of 11 cognitive architectures (CoT, ToT, ReAct, Self-Reflection, and more) for agent reasoning | Ollama, agent-reasoning | [Notebook](./notebooks/agent_reasoning_demo.ipynb) |
| oracle_agentic_rag_hybrid_search | Agentic RAG with vector, keyword, and hybrid search in a single SQL query using a LangGraph ReAct agent | Oracle AI Database, langchain-oracledb, LangGraph, OpenAI | [Notebook](./notebooks/oracle_agentic_rag_hybrid_search.ipynb) |
| f1_miami_strategy_oracle_26ai | F1 Miami GP strategy intelligence for 2026 — SQL, hybrid vector+keyword search, JSON documents, and property graph in one Oracle 26ai database using real FastF1 data | Oracle AI Database, FastF1, sentence-transformers, Plotly | [Notebook](./notebooks/f1_miami_strategy_oracle_26ai.ipynb) |
+ | multicloud/ | AWS, Azure, Google Cloud, and MongoDB API samples running Oracle AI Database outside OCI | Oracle AI Database + AWS / Azure / Google / MongoDB | [Notebooks](./notebooks/multicloud) |
### 📚 **Guides** (`/guides`)
@@ -48,6 +49,7 @@ Comprehensive documentation, reference materials, and conference presentations c
| Building the Brain and Backbone of Enterprise AI Agents | Advanced reasoning and infrastructure strategies for enterprise AI agents. Covers the 2026 agent stack (layered architecture), reasoning patterns (Chain of Thought, Tree of Thoughts, Self-Reflection, Least-to-Most, Decomposed Prompting), and context/belief updates. Presented at DevWeek SF 2026 by Nacho Martinez. | [Slides](./guides/brain_backbone_enterprise_agents_devweek_sf_2026.pdf) |
| Memory Engineering: The Discipline Behind Memory-Augmented Agents | Deep dive into memory engineering as a discipline for AI agents — the science of helping agents remember, reason, and act. Covers the memory ecosystem, form factors, and key disciplines shaping memory-augmented agents. Presented at DevWeek SF 2026 (Keynote) by Richmond Alake. | [Slides](./guides/memory_engineering_devweek_sf_2026.pdf) |
+ | Agent Memory with Oracle AI Database | Agent memory architectures and Oracle AI Database as the memory core for AI agents. Presented at the AI Developer Conference hosted by DeepLearning.AI in April 2026 by Eli Schilling. | [Slides](./guides/dlai_aidev_agent_memory.pptx) |
<p align="center"><strong>Transform standard open-source LLMs into reliable problem solvers with 16 advanced cognitive architectures. From predicting the next token to predicting the next thought.</strong></p>
- <div align="center">
**[View Interactive Presentation](docs/slides/presentation.html)** | Animated overview of the project
The **Reasoning Layer** is the cognitive engine of the AI stack. While traditional LLMs excel at token generation, they often struggle with complex planning, logical deduction, and self-correction.
- This repository transforms standard Open Source models (like `gemma3`, `llama3`) into robust problem solvers by wrapping them in advanced cognitive architectures. It implements findings from key research papers (CoT, ToT, ReAct) to give models "agency" over their thinking process.
+ This repository transforms standard Open Source models (like `gemma3`, `llama3`) into reliable problem solvers by wrapping them in advanced cognitive architectures. It implements findings from key research papers (CoT, ToT, ReAct) to give models "agency" over their thinking process.
>**"From predicting the next token to predicting the next thought."**
* 🧩 **Decomposition & Least-to-Most**: Planning and sub-task execution.
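The decompose-then-solve loop behind Least-to-Most prompting can be sketched in a few lines. This is a hypothetical illustration, not the repository's implementation: `call_llm` is a stub standing in for a real model call (e.g. via Ollama), so the control flow runs without a model server, and the decomposition it returns is hard-coded.

```python
# Hypothetical sketch of Least-to-Most prompting: decompose a problem into
# ordered sub-questions, then solve them in sequence, feeding each earlier
# answer forward as context for the next step.

def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real implementation would query e.g. gemma3."""
    if prompt.startswith("Decompose:"):
        return "1. Find the unit price\n2. Multiply by the quantity"
    # Echo the last line so the control flow is visible without a model.
    return f"answer to: {prompt.splitlines()[-1]}"

def least_to_most(problem: str) -> str:
    # Step 1: ask the model to break the problem into ordered sub-questions.
    plan = call_llm(f"Decompose: {problem}")
    steps = [ln.split(". ", 1)[1] for ln in plan.splitlines() if ". " in ln]

    # Step 2: solve each sub-question, appending prior answers as context.
    context = problem
    answer = ""
    for step in steps:
        answer = call_llm(f"{context}\n{step}")
        context += f"\n{step} -> {answer}"
    return answer  # the final sub-answer resolves the original problem

print(least_to_most("3 apples cost $6; what do 5 apples cost?"))
```

Swapping the stub for a real Ollama call turns this same loop into a working Least-to-Most agent; the architecture lives entirely in the prompting schedule, not the model.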
@@ -575,7 +583,7 @@ class MyNewAgent(BaseAgent):
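The `MyNewAgent(BaseAgent)` hunk above touches the extension point for adding a new architecture. The excerpt does not show the actual `BaseAgent` interface, so the sketch below assumes a hypothetical `run`/`think` template-method shape with a stubbed LLM call; the repository's real class may differ.

```python
# Hypothetical sketch of extending the framework with a new agent, assuming
# BaseAgent exposes a run(problem) template method delegating to a think hook.

class BaseAgent:
    def __init__(self, model: str = "gemma3:270m"):
        self.model = model

    def call_llm(self, prompt: str) -> str:
        # Stand-in for an Ollama call so the example runs without a server.
        return f"[{self.model}] {prompt}"

    def run(self, problem: str) -> str:
        return self.think(problem)

    def think(self, problem: str) -> str:
        raise NotImplementedError  # each architecture overrides this

class MyNewAgent(BaseAgent):
    """A toy architecture: restate the problem precisely before solving it."""
    def think(self, problem: str) -> str:
        restated = self.call_llm(f"Restate precisely: {problem}")
        return self.call_llm(f"Solve step by step: {restated}")

agent = MyNewAgent()
print(agent.run("What is 2 + 2?"))
```

The point of the pattern is that a new cognitive architecture only overrides `think`; model selection, the LLM transport, and the public `run` entry point stay in the base class.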
* **Model Not Found**: Ensure you have pulled the base model (`ollama pull gemma3:270m`).
* **Timeout / Slow**: ToT and Self-Reflection make multiple calls to the LLM. With larger models (Llama3 70b), this can take time.
- * **Hallucinations**: The default demo uses `gemma3:270m` which is extremely small and prone to logic errors. Switch to `gemma2:9b` or `llama3` for robust results.
+ * **Hallucinations**: The default demo uses `gemma3:270m` which is extremely small and prone to logic errors. Switch to `gemma2:9b` or `llama3` for reliable results.
---
@@ -709,8 +717,7 @@ MIT License - see [LICENSE](LICENSE) for details.