Summary
This issue tracks user-facing differences between Go and Python runtimes. Solutions to the following issues can either be enhanced documentation or feature additions to the Go runtime. Please comment below if any more differences are found when you are using the Go runtime.
Action Items
**Tighten embedding model compatibility for the Go runtime.** Python memory embeddings support `openai`, `azure_openai`, `ollama`, `gemini`/Vertex AI, and `bedrock`. Go currently supports only `openai` and `azure_openai`, even though translated embedding configs can point at other providers. Either validation or additional support could work here. The Python embedding models could also use a refactor to move them out of `_memory_service.py`.

**Clarify or fix Bedrock model support in the Go runtime.**
Python Bedrock uses the Bedrock Converse API and supports Bedrock model families broadly. Go currently routes Bedrock through the Anthropic Bedrock client, so Bedrock is effectively Anthropic-only today.
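As a sketch of the validation option from the embedding item above: a translated embedding config could be rejected up front when it points at a provider the Go runtime does not implement, instead of failing later at request time. The `EmbeddingConfig` type, its field names, and the provider-list variable here are illustrative assumptions, not the project's real API:

```go
package main

import (
	"fmt"
	"slices"
)

// EmbeddingConfig is a stand-in for the translated embedding config;
// the real type and field names in the project may differ.
type EmbeddingConfig struct {
	Provider string
	Model    string
}

// goSupportedProviders lists what the Go runtime implements today.
var goSupportedProviders = []string{"openai", "azure_openai"}

// validateEmbeddingProvider fails fast on providers the Go runtime
// cannot serve, rather than surfacing the gap at request time.
func validateEmbeddingProvider(cfg EmbeddingConfig) error {
	if slices.Contains(goSupportedProviders, cfg.Provider) {
		return nil
	}
	return fmt.Errorf("embedding provider %q is not supported by the Go runtime (supported: %v)",
		cfg.Provider, goSupportedProviders)
}

func main() {
	fmt.Println(validateEmbeddingProvider(EmbeddingConfig{Provider: "openai"}))
	fmt.Println(validateEmbeddingProvider(EmbeddingConfig{Provider: "bedrock"}))
}
```

The alternative (additional support) would replace the error branch with real client construction per provider.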
**Implement `apiKeyPassthrough` in the Go runtime.** Users can enable it today, but Go does not currently apply it to LLM clients.
**Apply model TLS settings in the Go runtime.** `ModelConfig` TLS fields are translated, but Go does not currently honor them when creating model clients.

**Honor `ollama.options` in the Go runtime.** Python uses them; Go currently ignores them. A solution would be to use the Ollama SDK, if available.
**Decide whether context compaction is a must-fix or a documented Go limitation.**
**Decide whether memory auto-save parity is required.** Python auto-saves periodically; Go currently relies more on explicit `save_memory` tool usage.

**Decide whether skills sandboxing parity is required.**
Python uses `srt`; Go currently executes skills shell commands directly.