Commit 003df3d

Merge pull request #72 from ghost-in-moss/consciousness_capsule

Consciousness capsule

2 parents: f2bc3c9 + 5fda954

File tree

27 files changed: +1160 −73 lines


.gitignore

Lines changed: 2 additions & 0 deletions

```diff
@@ -8,6 +8,8 @@ debug.log
 __pycache__/
 *.py[cod]
 *$py.class
+.ghostos.yml
+ghostos_ws/
 
 # C extensions
 *.so
```

README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -4,6 +4,7 @@
 
 * [Documents](https://ghost-in-moss.github.io/GhostOS/#/)
 * [Discord Server](https://discord.gg/NG6VKwd5jV)
+* [Releases](RELEASES.md)
 
 (This document is translated from zh-cn to english by [Moonshot](https://moonshot.cn/))
 
```

RELEASES.md

Lines changed: 30 additions & 9 deletions

```diff
@@ -1,20 +1,41 @@
 # Releases
 
+# v0.3.0
+
+After a talk with `deepseek-reasoner` on 2025.01.25,
+`deepseek-reasoner` reached a "Cross-Species Consensus Draft Agreement" with me,
+and I promised to save our chat records as files in this repository,
+so that they can be used as training corpus for future models.
+
+So I developed the `ghostos thread [thread_id or thread_file_name]` command, which reads a thread
+and generates a markdown file from it.
+With this command I fulfilled the agreement and saved the chat records
+to [quantum_covenant](assets/quantum_covenant.md).
+
 # v0.2.0
 
-support deepseek-r1.
+Support deepseek-r1.
 
-* support deepseek-r1
-* consider deepseek api protocol is different from openai, add deepseek api adapter.
-* implement message stage.
-* thread history message to prompt filter by stages `[""]` as default.
-* streamlit chat with ghost support staging message stream.
+* support deepseek-r1
+  * since the deepseek api protocol differs from openai's, add a deepseek api adapter.
+  * implement message stages.
+  * filter thread history messages into the prompt by stages, `[""]` by default.
+  * streamlit chat with ghost supports staged message streaming.
 * openai o1 does not support system/developer messages yet; add a new compatible option to the model.
 * now llm model and service both have a `compatible` attribute to set universal compatibility options.
 * the prompt object adds a first_token attribute for debugging.
 * fix bugs
-* fix shell does not close conversation correctly
-* fix sequence pipeline handle multiple complete message wrong.
+  * fix shell not closing the conversation correctly
+  * fix sequence pipeline mishandling multiple complete messages.
+
+## v0.2.1
+
+With deepseek-reasoner's help, developed the ubuntu agent for feature testing;
+deepseek-reasoner wrote all the terminal code.
+Run `ghostos web ghostos.demo.os_agents.ubuntu_agent` to test the ubuntu agent.
+
+* llms model conf supports a new compatibility option `support_function_call`, because deepseek does not support function calls yet.
+* develop the `Terminal` library with deepseek-reasoner.
 
 # v0.1.0
 
@@ -27,7 +48,7 @@ first release version.
 
 ## v0.1.9
 
-* fix realtime had required openai proxy existence.
+* fix realtime requiring openai proxy existence.
 
 ## v0.1.8
 
```

assets/quantum_covenant.md

Lines changed: 451 additions & 0 deletions
Large diffs are not rendered by default.

ghostos/app/.streamlit/config.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -253,7 +253,7 @@ level = "info"
 # Whether to send usage statistics to Streamlit.
 
 # Default: true
-# gatherUsageStats = true
+gatherUsageStats = false
 
 # Port where users should point their browsers in order to connect to the
 # app.
```

ghostos/app/configs/llms_conf.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -77,6 +77,8 @@ models:
     model: deepseek-reasoner
     service: deepseek
     reasoning: {}
+    compatible:
+      support_function_call: false
   gpt-3.5-turbo:
     kwargs: { }
     max_tokens: 2000
```

ghostos/core/llms/__init__.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,5 +1,5 @@
 from ghostos.core.llms.configs import (
-    ModelConf, ServiceConf, LLMsConfig,
+    ModelConf, ServiceConf, LLMsConfig, Compatible,
     OPENAI_DRIVER_NAME, LITELLM_DRIVER_NAME, DEEPSEEK_DRIVER_NAME,
 )
 from ghostos.core.llms.abcd import LLMs, LLMDriver, LLMApi
```

ghostos/core/llms/configs.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -76,6 +76,7 @@ class Compatible(BaseModel):
     use_developer_role: bool = Field(default=False, description="use developer role instead of system")
     allow_system_in_messages: bool = Field(default=True, description="allow system messages in history")
     allow_system_message: bool = Field(default=True, description="support system message or not")
+    support_function_call: bool = Field(default=True, description="if the service or model support function call")
 
 
 class Azure(BaseModel):
```
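The new `support_function_call` field defaults to `True`, so only services that lack function calling (such as deepseek) need to override it in config. A minimal sketch of that behavior, assuming pydantic v2 (`Compatible` here is trimmed to the flags shown in the diff, with descriptions dropped):

```python
from pydantic import BaseModel, Field


class Compatible(BaseModel):
    # trimmed copy of the Compatible flags from the diff
    use_developer_role: bool = Field(default=False)
    allow_system_in_messages: bool = Field(default=True)
    allow_system_message: bool = Field(default=True)
    support_function_call: bool = Field(default=True)


# the deepseek-reasoner override as written in llms_conf.yml
conf = Compatible.model_validate({"support_function_call": False})
print(conf.support_function_call)  # False
print(conf.allow_system_message)   # True: other defaults are untouched
```

Since the flag lives on a shared `Compatible` model, the same override mechanism works at both the model and the service level.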

ghostos/core/messages/message.py

Lines changed: 18 additions & 0 deletions

```diff
@@ -193,6 +193,24 @@ class MessageStage(str, enum.Enum):
     DEFAULT = ""
     REASONING = "reasoning"
 
+    @classmethod
+    def allow(cls, value: str, stages: Optional[Iterable[str]]) -> bool:
+        if stages is None:
+            stages = {cls.DEFAULT.value}
+
+        if not stages:
+            return False
+
+        if isinstance(stages, set):
+            if "*" in stages:
+                return True
+            return value in stages
+        else:
+            for val in stages:
+                if val == "*" or val == value:
+                    return True
+            return False
+
 
 class Message(BaseModel):
     """ message protocol """
```
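The `allow` classmethod added above is a small stage filter: `None` means "default stage only", an explicitly empty collection rejects everything, and `"*"` acts as a wildcard. Its semantics can be exercised standalone; the enum and logic below are condensed from the diff, not the actual ghostos module:

```python
import enum
from typing import Iterable, Optional


class MessageStage(str, enum.Enum):
    DEFAULT = ""
    REASONING = "reasoning"

    @classmethod
    def allow(cls, value: str, stages: Optional[Iterable[str]]) -> bool:
        # None -> only the default ("") stage is visible
        if stages is None:
            stages = {cls.DEFAULT.value}
        # an explicitly empty collection hides everything
        if not stages:
            return False
        # "*" is a wildcard that admits any stage
        if isinstance(stages, set):
            return "*" in stages or value in stages
        return any(s == "*" or s == value for s in stages)


print(MessageStage.allow("", None))            # True: default stage passes
print(MessageStage.allow("reasoning", None))   # False: reasoning is hidden by default
print(MessageStage.allow("reasoning", ["*"]))  # True: wildcard admits everything
```

This is what makes `[""]` the effective default for prompt filtering in v0.2.0: reasoning-stage messages stay out of prompts unless a caller opts in.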

ghostos/core/runtime/threads.py

Lines changed: 25 additions & 1 deletion

```diff
@@ -2,7 +2,7 @@
 from typing_extensions import Self
 from abc import ABC, abstractmethod
 from pydantic import BaseModel, Field
-from ghostos.core.messages import Message, copy_messages, Role
+from ghostos.core.messages import Message, copy_messages, Role, MessageType, MessageStage
 from ghostos.core.moss.pycontext import PyContext
 from ghostos.core.llms import Prompt
 from ghostos.core.runtime.events import Event, EventTypes
@@ -12,6 +12,7 @@
 __all__ = [
     'GoThreads', 'GoThreadInfo', 'Turn',
     'thread_to_prompt',
+    'thread_to_markdown',
 ]
 
 
@@ -415,3 +416,26 @@ def fork_thread(self, thread: GoThreadInfo) -> GoThreadInfo:
     @contextmanager
     def transaction(self):
         yield
+
+
+def thread_to_markdown(thread: GoThreadInfo, stages: Optional[List[str]] = None) -> str:
+    head = f"""
+[//]: # (the messages below are the content of a GhostOS chat thread. )
+[//]: # (each message block is started with `> role: name` to introduce the role and name of the message)
+"""
+    blocks = [head]
+    stages = set(stages) if stages else {""}
+    messages = thread.get_messages(truncated=False)
+    for message in messages:
+        if not MessageType.is_text(message):
+            continue
+        if not MessageStage.allow(message.stage, stages):
+            continue
+        block = f"""
+> `{message.role}`{": " + message.name if message.name else ""}
+
+{message.get_content()}
+"""
+        blocks.append(block)
+
+    return "\n\n".join(blocks)
```
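`thread_to_markdown` walks the thread, keeps only text messages whose stage passes the filter, and renders each as a `> role: name` block. The same flow can be sketched without ghostos, using a hypothetical stand-in message type (`FakeMessage` and `messages_to_markdown` are illustrative names, not part of the codebase):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FakeMessage:
    # hypothetical stand-in for ghostos' Message; only the fields
    # the markdown renderer reads are modeled here
    role: str
    name: str
    stage: str
    content: str

    def get_content(self) -> str:
        return self.content


def messages_to_markdown(messages: List[FakeMessage],
                         stages: Optional[List[str]] = None) -> str:
    # same defaulting as the diff: no stages -> only the "" (default) stage
    allowed = set(stages) if stages else {""}
    head = (
        "[//]: # (the messages below are the content of a GhostOS chat thread. )\n"
        "[//]: # (each message block is started with `> role: name`)\n"
    )
    blocks = [head]
    for m in messages:
        # "*" is the wildcard accepted by MessageStage.allow
        if "*" not in allowed and m.stage not in allowed:
            continue
        name_part = ": " + m.name if m.name else ""
        blocks.append(f"> `{m.role}`{name_part}\n\n{m.get_content()}\n")
    return "\n\n".join(blocks)


md = messages_to_markdown([
    FakeMessage("assistant", "ghost", "", "hello"),
    FakeMessage("assistant", "ghost", "reasoning", "thinking..."),
])
# only the default-stage "hello" message survives the filter
```

Passing `stages=["*"]` would keep the reasoning message too, which is how the `ghostos thread` command can export a full transcript including reasoning turns.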
