**GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation**

GraphGen is a framework for synthetic data generation guided by knowledge graphs. Please check the [**paper**](https://arxiv.org/abs/2505.20416) and the [best practice](https://github.com/open-sciencelab/GraphGen/issues/17) guide.

It begins by constructing a fine-grained knowledge graph from the source text, then identifies knowledge gaps in LLMs using the expected calibration error (ECE) metric, prioritizing the generation of QA pairs that target high-value, long-tail knowledge.
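
For intuition, expected calibration error measures how far a model's stated confidence drifts from its actual accuracy. Below is a minimal sketch of the metric (a standard equal-width binning formulation, shown for illustration; it is not GraphGen's own implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence, then average the gap between
    mean confidence and empirical accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy usage: high confidence but mixed correctness -> large ECE,
# flagging knowledge the model only thinks it has.
print(expected_calibration_error([0.9, 0.95, 0.85, 0.9], [1, 0, 0, 1]))
```

Knowledge whose answers show high ECE is exactly the long-tail material worth prioritizing when generating QA pairs.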

Furthermore, GraphGen incorporates multi-hop neighborhood sampling to capture complex relational information and employs style-controlled generation to diversify the resulting QA data.
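
As a sketch of the idea, multi-hop neighborhood sampling expands outward from a seed entity so that sampled subgraphs span relations several edges away. The breadth-first expansion below is a generic illustration under assumed data structures, not code from the GraphGen repository:

```python
import random

def sample_multi_hop(adj, seed, hops=2, per_hop=3, rng=None):
    """Expand `seed` outward `hops` times, keeping at most `per_hop`
    previously unseen neighbors for each node on the frontier."""
    rng = rng or random.Random(0)
    visited, frontier = {seed}, [seed]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            candidates = [n for n in adj.get(node, []) if n not in visited]
            for n in rng.sample(candidates, min(per_hop, len(candidates))):
                visited.add(n)
                next_frontier.append(n)
        frontier = next_frontier
    return visited

# Toy knowledge graph: entity -> related entities.
adj = {
    "aspirin": ["COX-1", "pain relief"],
    "COX-1": ["prostaglandins"],
    "pain relief": ["inflammation"],
}
print(sample_multi_hop(adj, "aspirin"))  # reaches 2-hop facts like "prostaglandins"
```

QA pairs written over such a subgraph can ask about chains of facts rather than isolated triples.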

After data generation, you can use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for fine-tuning.
</details>

## Effectiveness of GraphGen

### Pretrain

Inspired by ByteDance Seed's [Reformulation for Pretraining Data Augmentation](https://arxiv.org/abs/2507.15752) (the MGA framework) and Kimi-K2's [Improving Token Utility with Rephrasing](https://arxiv.org/pdf/2507.20534), GraphGen added a **rephrase pipeline**: instead of repeating the same corpus redundantly, an LLM reformulates it into diverse variants that express the same knowledge.
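
The core step is a style-conditioned rewrite of each document. The sketch below shows the shape of such a pipeline; the prompt, style list, and model name are illustrative assumptions (any OpenAI-compatible endpoint works), not GraphGen's actual interface:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical target styles; real pipelines tune these carefully.
STYLES = ["an encyclopedia entry", "a Q&A dialogue", "a set of study notes"]

def rephrase(document: str, style: str) -> str:
    """Ask an LLM to restate the same facts in a different surface form."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text as {style}. "
                        "Preserve every fact; add nothing new."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

def rephrase_corpus(docs: list[str]) -> list[str]:
    # One variant per style: more diverse token sequences from the
    # same source knowledge, instead of verbatim repetition.
    return [rephrase(doc, style) for doc in docs for style in STYLES]
```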
**Setup:** Qwen3-0.6B trained from scratch on [SlimPajama-6B](https://huggingface.co/datasets/DKYoon/SlimPajama-6B).

Both rephrase strategies lift the average benchmark score by ~1 point over the baseline with **zero additional data**: all of the gain comes from how the same knowledge is expressed.

### SFT
Here are the post-training results, in which **over 50% of the SFT data** comes from GraphGen and our data-cleaning pipeline.