
Commit bb71a5e

Copilot and anxiangsir authored
Add autoplay video embed to README Introduction (#94)
* Initial plan
* Add YouTube video to Introduction section
* Add instructions for direct video playback in README
* Improve video embed instructions with bilingual guide
* Add autoplay video embed to Introduction section
* Add accessibility title to video element

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: anxiangsir <31175974+anxiangsir@users.noreply.github.com>
1 parent b02db5b commit bb71a5e

File tree: 1 file changed (+8 −0 lines)


README.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -44,6 +44,14 @@
 
 ## 🔍 Introduction
 
+<div align="center">
+
+<video src="https://github.com/anxiangsir/asset/raw/main/OneVision/residual_mv.mp4" controls autoplay loop muted playsinline width="800" style="max-width: 100%;" title="OneVision Encoder demonstration video showing residual motion vectors">
+
+Your browser does not support the video tag.
+
+</video>
+
+</div>
+
 **Hypothesis.** Artificial general intelligence is, at its core, a compression problem. Effective compression demands resonance: deep learning scales best when its architecture aligns with the fundamental structure of the data. These are the fundamental principles. Yet, modern vision architectures have strayed from these truths: visual signals are highly redundant, while discriminative information, the surprise, is sparse. Current models process dense pixel grids uniformly, wasting vast compute on static background rather than focusing on the predictive residuals that define motion and meaning. We argue that to solve visual understanding, we must align our architectures with the information-theoretic principles of video, i.e., Codecs.
 
 **Method.** OneVision-Encoder encodes video by compressing predictive visual structure into semantic meaning. By adopting Codec Patchification, OneVision-Encoder abandons uniform computation to focus exclusively on the 3.1%-25% of regions rich in signal entropy. To unify spatial and temporal reasoning under irregular token layouts, OneVision-Encoder employs a shared 3D RoPE and is trained with a large-scale cluster discrimination objective over more than one million semantic concepts, jointly capturing object permanence and motion dynamics.
```
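The Method paragraph above describes keeping only the 3.1%–25% of regions rich in signal entropy. A minimal sketch of that idea, using frame-difference residuals as a crude stand-in for codec motion-compensated residuals; the function name and parameters are hypothetical, not from the OneVision-Encoder codebase:

```python
import numpy as np

def select_high_entropy_patches(frames, patch=16, keep_frac=0.25):
    """Keep only the patches with the highest temporal residual energy.

    frames: (T, H, W) grayscale video; H and W must be divisible by `patch`.
    Returns per frame-pair the indices of the kept patches.
    Hypothetical sketch, not the actual Codec Patchification implementation.
    """
    # Predictive residual: difference between consecutive frames.
    residual = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    t, h, w = residual.shape
    # Sum residual energy within each non-overlapping patch.
    tiles = residual.reshape(t, h // patch, patch, w // patch, patch)
    energy = tiles.sum(axis=(2, 4)).reshape(t, -1)  # (T-1, num_patches)
    k = max(1, int(keep_frac * energy.shape[1]))
    # Indices of the top-k most "surprising" patches per frame pair;
    # the remaining (static) patches would simply be skipped.
    return np.argsort(energy, axis=1)[:, -k:]

# Example: 4 frames of 64x64 noise -> 16 patches per frame, keep 4.
frames = np.random.rand(4, 64, 64)
idx = select_high_entropy_patches(frames)
print(idx.shape)  # (3, 4)
```

With `keep_frac=0.25` this retains a quarter of the patches, matching the upper end of the sparsity range quoted above; the real encoder would then feed only those tokens, with their irregular layout handled by the shared 3D RoPE.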
