Commit fdc5237
feat: add .env.example-full and fix .env.example (MemTensor#1502)
## Description

Please include a summary of the change, the problem it solves, the implementation approach, and relevant context. List any dependencies required for this change.

Related Issue (Required): Fixes #issue_number

## Type of change

Please delete options that are not relevant.

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (does not change functionality, e.g. code style improvements, linting)
- [ ] Documentation update

## How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

- [ ] Unit Test
- [ ] Test Script Or Test Steps (please provide)
- [ ] Pipeline Automated API Test (please provide)

## Checklist

- [ ] I have performed a self-review of my own code
- [ ] I have commented my code in hard-to-understand areas
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have created related documentation issue/PR in [MemOS-Docs](https://github.com/MemTensor/MemOS-Docs) (if applicable)
- [ ] I have linked the issue to this PR (if applicable)
- [ ] I have mentioned the person who will review this PR

## Reviewer Checklist

- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] Made sure Checks passed
- [ ] Tests have been provided
2 parents 96a1dd6 + 3402c38 commit fdc5237

2 files changed

Lines changed: 267 additions & 218 deletions

File tree

docker/.env.example

Lines changed: 44 additions & 218 deletions
```diff
@@ -1,223 +1,49 @@
-# MemOS Environment Variables (core runtime)
-# Legend: [required] needed for default startup; others are optional or conditional per comments.
-
-## Base
-TZ=Asia/Shanghai
-MOS_CUBE_PATH=/tmp/data_test # local data path
-MEMOS_BASE_PATH=. # CLI/SDK cache path
-MOS_ENABLE_DEFAULT_CUBE_CONFIG=true # enable default cube config
-MOS_ENABLE_REORGANIZE=false # enable memory reorg
-# MOS Text Memory Type
-MOS_TEXT_MEM_TYPE=general_text # general_text | tree_text
-ASYNC_MODE=sync # async/sync, used in default cube config
-
-## User/session defaults
-# Top-K for LLM in the Product API(old version)
-MOS_TOP_K=50
-
-## Chat LLM (main dialogue)
-# LLM model name for the Product API
-MOS_CHAT_MODEL=gpt-4o-mini
-# Temperature for LLM in the Product API
-MOS_CHAT_TEMPERATURE=0.8
-# Max tokens for LLM in the Product API
-MOS_MAX_TOKENS=2048
-# Top-P for LLM in the Product API
-MOS_TOP_P=0.9
-# LLM for the Product API backend
-MOS_CHAT_MODEL_PROVIDER=openai # openai | huggingface | vllm | minimax
-OPENAI_API_KEY=sk-xxx # [required] when provider=openai
-OPENAI_API_BASE=https://api.openai.com/v1 # [required] base for the key
-# MiniMax LLM (when provider=minimax)
-# MINIMAX_API_KEY=your-minimax-api-key # [required] when provider=minimax
-# MINIMAX_API_BASE=https://api.minimax.io/v1 # base for MiniMax API
-
-## MemReader / retrieval LLM
-MEMRADER_MODEL=gpt-4o-mini
-MEMRADER_API_KEY=sk-xxx # [required] can reuse OPENAI_API_KEY
-MEMRADER_API_BASE=http://localhost:3000/v1 # [required] base for the key
-MEMRADER_MAX_TOKENS=5000
-
-## Embedding & rerank
-# embedding dim
+# Apply through Alibaba Cloud Bailian platform
+# https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2f2165b08fRk4l&tab=api#/api
+# After successful application, obtain API_KEY and BASE_URL, example configuration below
+
+# OpenAI API Key (use Bailian's API_KEY)
+OPENAI_API_KEY=you_bailian_api_key
+# OpenAI API Base URL
+OPENAI_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
+# Default model name
+MOS_CHAT_MODEL=qwen3-max
+
+# Memory Reader LLM Model
+MEMRADER_MODEL=qwen3-max
+# Memory Reader API Key (use Bailian's API_KEY)
+MEMRADER_API_KEY=you_bailian_api_key
+# Memory Reader API Base URL
+MEMRADER_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
+
+# For Embedder model names, refer to the link below
+# https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2f2165b08fRk4l&tab=api#/api/?type=model&url=2846066
+MOS_EMBEDDER_MODEL=text-embedding-v4
+# Configure embedding backend, two options: ollama | universal_api
+MOS_EMBEDDER_BACKEND=universal_api
+# Embedder API Base URL
+MOS_EMBEDDER_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
+# Embedder API Key (use Bailian's API_KEY)
+MOS_EMBEDDER_API_KEY=you_bailian_api_key
+# Embedding vector dimension
 EMBEDDING_DIMENSION=1024
```
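The embedder section above switches between an OpenAI-compatible endpoint (`universal_api`) and Ollama via `MOS_EMBEDDER_BACKEND`, with `MOS_EMBEDDER_API_BASE`/`MOS_EMBEDDER_API_KEY` required only in the former case. A minimal sketch of how such variables might be consumed; the `build_embedder_config` helper is hypothetical, not the actual MemOS API:

```python
def build_embedder_config(env: dict) -> dict:
    """Hypothetical helper: map MOS_EMBEDDER_* variables to a backend config dict."""
    backend = env.get("MOS_EMBEDDER_BACKEND", "universal_api")
    if backend == "universal_api":
        # OpenAI-style endpoint (e.g. DashScope compatible mode); base URL is required
        return {
            "backend": backend,
            "model": env.get("MOS_EMBEDDER_MODEL", "text-embedding-v4"),
            "api_base": env["MOS_EMBEDDER_API_BASE"],
            "api_key": env.get("MOS_EMBEDDER_API_KEY", "EMPTY"),
            "dimension": int(env.get("EMBEDDING_DIMENSION", "1024")),
        }
    if backend == "ollama":
        # Ollama needs only a local server address, no API key
        return {
            "backend": backend,
            "model": env.get("MOS_EMBEDDER_MODEL", "text-embedding-v4"),
            "api_base": env.get("OLLAMA_API_BASE", "http://localhost:11434"),
            "dimension": int(env.get("EMBEDDING_DIMENSION", "1024")),
        }
    raise ValueError(f"unknown embedder backend: {backend}")
```

The defaults mirror the values in the new `.env.example`; real code would read these from `os.environ` instead of a dict.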
```diff
-# set default embedding backend
-MOS_EMBEDDER_BACKEND=universal_api # universal_api | ollama
-# set openai style
-MOS_EMBEDDER_PROVIDER=openai # required when universal_api
-# embedding model
-MOS_EMBEDDER_MODEL=bge-m3 # siliconflow → use BAAI/bge-m3
-# embedding url
-MOS_EMBEDDER_API_BASE=http://localhost:8000/v1 # required when universal_api
-# embedding model key
-MOS_EMBEDDER_API_KEY=EMPTY # required when universal_api
-OLLAMA_API_BASE=http://localhost:11434 # required when backend=ollama
-# reranker config
-MOS_RERANKER_BACKEND=http_bge # http_bge | http_bge_strategy | cosine_local
-# reranker url
-MOS_RERANKER_URL=http://localhost:8001 # required when backend=http_bge*
-# reranker model
-MOS_RERANKER_MODEL=bge-reranker-v2-m3 # siliconflow → use BAAI/bge-reranker-v2-m3
-MOS_RERANKER_HEADERS_EXTRA= # extra headers, JSON string, e.g. {"Authorization":"Bearer your_token"}
-# use source
-MOS_RERANKER_STRATEGY=single_turn
-
-
-# External Services (for evaluation scripts)
-# API key for reproducting Zep(compertitor product) evaluation
-ZEP_API_KEY=your_zep_api_key_here
-# API key for reproducting Mem0(compertitor product) evaluation
-MEM0_API_KEY=your_mem0_api_key_here
-# API key for reproducting MemU(compertitor product) evaluation
-MEMU_API_KEY=your_memu_api_key_here
-# API key for reproducting MEMOBASE(compertitor product) evaluation
-MEMOBASE_API_KEY=your_memobase_api_key_here
-# Project url for reproducting MEMOBASE(compertitor product) evaluation
-MEMOBASE_PROJECT_URL=your_memobase_project_url_here
-# LLM for evaluation
-MODEL=gpt-4o-mini
-# embedding model for evaluation
-EMBEDDING_MODEL=nomic-embed-text:latest
-
-
-## Internet search & preference memory
-# Enable web search
-ENABLE_INTERNET=false
-# Internet search backend (bocha | tavily)
-INTERNET_SEARCH_BACKEND=bocha
-# API key for BOCHA Search
-BOCHA_API_KEY= # required if ENABLE_INTERNET=true and backend=bocha
-# API key for Tavily Search
-TAVILY_API_KEY= # required if ENABLE_INTERNET=true and backend=tavily
-# default search mode
-SEARCH_MODE=fast # fast | fine | mixture
-# Slow retrieval strategy configuration, rewrite is the rewrite strategy
-FINE_STRATEGY=rewrite # rewrite | recreate | deep_search
-# Whether to enable preference memory
-ENABLE_PREFERENCE_MEMORY=true
-# Preference Memory Add Mode
-PREFERENCE_ADDER_MODE=fast # fast | safe
-# Whether to deduplicate explicit preferences based on factual memory
-DEDUP_PREF_EXP_BY_TEXTUAL=false
-
-## Reader chunking
-MEM_READER_BACKEND=simple_struct # simple_struct | strategy_struct
-MEM_READER_CHAT_CHUNK_TYPE=default # default | content_length
-MEM_READER_CHAT_CHUNK_TOKEN_SIZE=1600 # tokens per chunk (default mode)
-MEM_READER_CHAT_CHUNK_SESS_SIZE=10 # sessions per chunk (default mode)
-MEM_READER_CHAT_CHUNK_OVERLAP=2 # overlap between chunks
-
-## Scheduler (MemScheduler / API)
-# Enable or disable the main switch for configuring the memory scheduler during MemOS class initialization
-MOS_ENABLE_SCHEDULER=false
-# Determine the number of most relevant memory entries that the scheduler retrieves or processes during runtime (such as reordering or updating working memory)
-MOS_SCHEDULER_TOP_K=10
-# The time interval (in seconds) for updating "Activation Memory" (usually referring to caching or short-term memory mechanisms)
-MOS_SCHEDULER_ACT_MEM_UPDATE_INTERVAL=300
-# The size of the context window considered by the scheduler when processing tasks (such as the number of recent messages or conversation rounds)
-MOS_SCHEDULER_CONTEXT_WINDOW_SIZE=5
-# The maximum number of working threads allowed in the scheduler thread pool for concurrent task execution
-MOS_SCHEDULER_THREAD_POOL_MAX_WORKERS=10000
-# The polling interval (in seconds) for the scheduler to consume new messages/tasks from the queue. The smaller the value, the faster the response, but the CPU usage may be higher
-MOS_SCHEDULER_CONSUME_INTERVAL_SECONDS=0.01
-# Whether to enable the parallel distribution function of the scheduler to improve the throughput of concurrent operations
-MOS_SCHEDULER_ENABLE_PARALLEL_DISPATCH=true
-# The specific switch to enable or disable the "Activate Memory" function in the scheduler logic
-MOS_SCHEDULER_ENABLE_ACTIVATION_MEMORY=false
-# Control whether the scheduler instance is actually started during server initialization. If false, the scheduler object may be created but its background loop will not be started
-API_SCHEDULER_ON=true
-# Specifically define the window size for API search operations in OptimizedScheduler. It is passed to the ScherderrAPIModule to control the scope of the search context
-API_SEARCH_WINDOW_SIZE=5
-# Specify how many rounds of previous conversations (history) to retrieve and consider during the 'hybrid search' (fast search+asynchronous fine search). This helps provide context aware search results
-API_SEARCH_HISTORY_TURNS=5
-MEMSCHEDULER_USE_REDIS_QUEUE=false
-
-## Graph / vector stores
-# Neo4j database selection mode
-NEO4J_BACKEND=neo4j-community # neo4j-community | neo4j | polardb | postgres
-# Neo4j database url
-NEO4J_URI=bolt://localhost:7687 # required when backend=neo4j*
-# Neo4j database user
-NEO4J_USER=neo4j # required when backend=neo4j*
-# Neo4j database password
-NEO4J_PASSWORD=12345678 # required when backend=neo4j*
-# Neo4j database name
-NEO4J_DB_NAME=neo4j # required for shared-db mode
-# Neo4j database data sharing with Memos
+# Reranker Backend (http_bge | etc.)
+MOS_RERANKER_BACKEND=cosine_local
+
+# Neo4j Connection URI
+# Available options: neo4j-community | neo4j | nebular | polardb
+NEO4J_BACKEND=neo4j-community
+# Required when backend=neo4j*
+NEO4J_URI=bolt://localhost:7687
+NEO4J_USER=neo4j
+NEO4J_PASSWORD=12345678
+NEO4J_DB_NAME=neo4j
 MOS_NEO4J_SHARED_DB=false
```
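Both the old and new files use plain dotenv syntax: `KEY=VALUE` lines, full-line `#` comments, and blank separators. A minimal loader sketch, not the parser MemOS or docker-compose actually uses; real dotenv implementations also handle quoting, `export` prefixes, and the inline `# ...` comments the old file relied on:

```python
def parse_env(text: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines, full-line '#' comments, blanks skipped."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        # split on the FIRST '=' only, so values may themselves contain '='
        key, sep, value = line.partition("=")
        if sep:  # ignore malformed lines that have no '='
            env[key.strip()] = value.strip()
    return env
```

Note the limitation: a line like `NEO4J_URI=bolt://localhost:7687 # required` would keep the trailing comment in the value, which is one reason the rewritten file moves every comment onto its own line.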
```diff
-QDRANT_HOST=localhost
-QDRANT_PORT=6333
-# For Qdrant Cloud / remote endpoint (takes priority if set):
-QDRANT_URL=your_qdrant_url
-QDRANT_API_KEY=your_qdrant_key
-# milvus server uri
-MILVUS_URI=http://localhost:19530 # required when ENABLE_PREFERENCE_MEMORY=true
-MILVUS_USER_NAME=root # same as above
-MILVUS_PASSWORD=12345678 # same as above
-
-# PolarDB endpoint/host
-POLAR_DB_HOST=localhost
-# PolarDB port
-POLAR_DB_PORT=5432
-# PolarDB username
-POLAR_DB_USER=root
-# PolarDB password
-POLAR_DB_PASSWORD=123456
-# PolarDB database name
-POLAR_DB_DB_NAME=shared_memos_db
-# PolarDB Server Mode:
-# If set to true, use Multi-Database Mode where each user has their own independent database (physical isolation).
-# If set to false (default), use Shared Database Mode where all users share one database with logical isolation via username.
-POLAR_DB_USE_MULTI_DB=false
-# PolarDB connection pool size
-POLARDB_POOL_MAX_CONN=100
-
-## Related configurations of Redis
-# Reddimq sends scheduling information and synchronization information for some variables
-MEMSCHEDULER_REDIS_HOST= # fallback keys if not using the global ones
-MEMSCHEDULER_REDIS_PORT=
-MEMSCHEDULER_REDIS_DB=
-MEMSCHEDULER_REDIS_PASSWORD=
-MEMSCHEDULER_REDIS_TIMEOUT=
-MEMSCHEDULER_REDIS_CONNECT_TIMEOUT=
-
 
-## Nacos (optional config center)
-# Nacos turns off long polling listening, defaults to true
-NACOS_ENABLE_WATCH=false
-# The monitoring interval for long rotation training is 60 seconds, and the default 30 seconds can be left unconfigured
-NACOS_WATCH_INTERVAL=60
-# nacos server address
-NACOS_SERVER_ADDR=
-# nacos dataid
-NACOS_DATA_ID=
-# nacos group
-NACOS_GROUP=DEFAULT_GROUP
-# nacos namespace
-NACOS_NAMESPACE=
-# nacos ak
-AK=
-# nacos sk
-SK=
+# Whether to use Redis schedule
+DEFAULT_USE_REDIS_QUEUE=false
 
-# chat model for chat api
-CHAT_MODEL_LIST='[{
-"backend": "deepseek",
-"api_base": "http://localhost:1234",
-"api_key": "your-api-key",
-"model_name_or_path": "deepseek-r1",
-"support_models": ["deepseek-r1"]
-}]'
+# Enable Chat API
+ENABLE_CHAT_API=true
 
-# RabbitMQ host name for message-log pipeline
-MEMSCHEDULER_RABBITMQ_HOST_NAME=
-# RabbitMQ user name for message-log pipeline
-MEMSCHEDULER_RABBITMQ_USER_NAME=
-# RabbitMQ password for message-log pipeline
-MEMSCHEDULER_RABBITMQ_PASSWORD=
-# RabbitMQ virtual host for message-log pipeline
-MEMSCHEDULER_RABBITMQ_VIRTUAL_HOST=memos
-# Erase connection state on connect for message-log pipeline
-MEMSCHEDULER_RABBITMQ_ERASE_ON_CONNECT=true
-# RabbitMQ port for message-log pipeline
-MEMSCHEDULER_RABBITMQ_PORT=5672
+CHAT_MODEL_LIST=[{"backend": "qwen", "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1", "api_key": "you_bailian_api_key", "model_name_or_path": "qwen3-max-preview", "extra_body": {"enable_thinking": true}, "support_models": ["qwen3-max-preview"]}]
```
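`CHAT_MODEL_LIST` carries a JSON array in a single env value: the old file wrapped it in single quotes and spread it over several lines, the new one keeps it bare on one line, but either way the part after `=` must decode as JSON. A hedged parsing sketch; the required-field list below is inferred from the fields shown in this diff, not from a documented MemOS schema:

```python
import json

def parse_chat_model_list(value: str) -> list:
    """Parse CHAT_MODEL_LIST: strip optional surrounding quotes, then JSON-decode."""
    value = value.strip()
    # tolerate dotenv-style quoting, as in the old file's CHAT_MODEL_LIST='[...]'
    if len(value) >= 2 and value[0] in ("'", '"') and value[-1] == value[0]:
        value = value[1:-1]
    models = json.loads(value)
    for entry in models:
        # fields observed in this diff; treated as required here (an assumption)
        for key in ("backend", "api_base", "api_key",
                    "model_name_or_path", "support_models"):
            if key not in entry:
                raise KeyError(f"CHAT_MODEL_LIST entry missing {key!r}")
    return models
```

Keeping the value on one line matters because docker-compose `env_file` parsing is line-oriented; a multi-line value like the old quoted form only works with loaders that support it.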
