Commit 1d02c37

Copilot and anxiangsir committed
Add YouTube video to Introduction section
Co-authored-by: anxiangsir <31175974+anxiangsir@users.noreply.github.com>
1 parent b1c1f21 commit 1d02c37

1 file changed

Lines changed: 6 additions & 0 deletions

File tree

README.md

@@ -44,6 +44,12 @@
 
 ## 🔍 Introduction
 
+<div align="center">
+  <a href="https://www.youtube.com/watch?v=PPBmfUoIxJ8">
+    <img src="https://img.youtube.com/vi/PPBmfUoIxJ8/maxresdefault.jpg" alt="OneVision Encoder Video" width="800" style="max-width: 100%;">
+  </a>
+</div>
+
 **Hypothesis.** Artificial general intelligence is, at its core, a compression problem, and effective compression demands resonance: deep learning scales best when its architecture aligns with the fundamental structure of the data. Yet modern vision architectures have strayed from these principles: visual signals are highly redundant, while discriminative information, the surprise, is sparse. Current models process dense pixel grids uniformly, wasting vast compute on static background rather than focusing on the predictive residuals that define motion and meaning. We argue that to solve visual understanding, we must align our architectures with the information-theoretic principles of video, i.e., codecs.
 
 **Method.** OneVision-Encoder encodes video by compressing predictive visual structure into semantic meaning. Through Codec Patchification, it abandons uniform computation and focuses exclusively on the 3.1%-25% of regions rich in signal entropy. To unify spatial and temporal reasoning under irregular token layouts, OneVision-Encoder employs a shared 3D RoPE and is trained with a large-scale cluster-discrimination objective over more than one million semantic concepts, jointly capturing object permanence and motion dynamics.
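The codec-patchification idea in the Method paragraph can be sketched as follows. This is a minimal illustrative toy, not OneVision-Encoder's actual implementation: the function name `select_codec_patches`, the squared inter-frame residual as the "entropy" score, the 16-pixel patch size, and the keep ratio are all assumptions made for the example.

```python
import numpy as np

def select_codec_patches(prev_frame, cur_frame, patch=16, keep_ratio=0.10):
    """Toy codec-style patch selection (hypothetical, for illustration only):
    keep only the patches whose inter-frame residual energy is highest,
    i.e. the sparse "surprise" regions rather than static background."""
    H, W = cur_frame.shape[:2]
    residual = (cur_frame.astype(np.float32) - prev_frame.astype(np.float32)) ** 2
    gh, gw = H // patch, W // patch
    # Sum residual energy inside each non-overlapping patch -> (gh, gw) grid.
    energy = residual[: gh * patch, : gw * patch].reshape(gh, patch, gw, patch, -1)
    energy = energy.sum(axis=(1, 3, 4))
    k = max(1, int(keep_ratio * gh * gw))
    # Row-major indices of the k highest-energy patches.
    flat = np.argsort(energy.ravel())[::-1][:k]
    return np.stack([flat // gw, flat % gw], axis=1)  # (k, 2) patch coordinates

# Usage: two 224x224 RGB frames; a moved block produces high-residual patches.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 255, (224, 224, 3), dtype=np.uint8)
f1 = f0.copy()
f1[64:96, 64:96] = 255  # simulated motion in a 2x2 block of 16-pixel patches
coords = select_codec_patches(f0, f1, patch=16, keep_ratio=0.05)
```

Under this scoring, only the four patches covering the changed block carry residual energy, so they dominate the selection while the static background is discarded, mirroring the "compute only where the signal is" argument above.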
