Use the `Picture` component to display images. If you have a figure in PDF format, you can pass in its path as the `src` prop and it will be converted into a web-friendly image automatically.

<Picture src={dogsDiffc} alt="Photo of two running dogs, lossily compressed using the DiffC algorithm" />

### Wide visual

I constrained the maximum width of the text column to make sure it's readable even when the screen is wide. But images and other visuals should "break out" and take up more horizontal space: for this, try wrapping them in the `Wide` component.

<Wide>
<Picture src={dogsDiffc} alt="Photo of two running dogs, lossily compressed using the DiffC algorithm" />
</Wide>

### Figure with caption

Wrap a visual in a `Figure` component to add a caption.

<Figure>
<Picture slot="figure" src={dogsDiffc} alt="Photo of two running dogs, lossily compressed using the DiffC algorithm" />
<Fragment slot="caption">A photo of two dogs running side-by-side in shallow water, lossily compressed using the [DiffC algorithm](https://jeremyiv.github.io/diffc-project-page/).</Fragment>
</Figure>

### Comparison slider
Use the `Comparison` component to compare two visuals with an interactive slider.

<Comparison client:idle>
<Picture slot="itemOne" src={dogsDiffc} alt="Photo of two running dogs, lossily compressed using the DiffC algorithm" />
<Picture slot="itemTwo" src={dogsTrue} alt="True photo of two dogs running side-by-side in shallow water" />
</Comparison>

### Tab panels
Tab panels are a great way to display a labeled collection of related visuals without taking up too much space.

<Tabs>
<TabsContent value="true">
<Picture
src={dogsTrue}
alt="True photo of two dogs running side-by-side in shallow water"
/>
</TabsContent>
<TabsContent value="diffc">
<Picture
src={dogsDiffc}
alt="Photo of two running dogs, lossily compressed using the DiffC algorithm"
/>
</TabsContent>
<TabsContent value="msillm">
<Picture
src={dogsMsillm}
alt="Photo of two running dogs, lossily compressed using MS-ILLM"
/>
</TabsContent>
</Tabs>
### Carousel

A carousel is another useful pattern for a collection of visuals that don't necessarily have meaningful labels.

<Carousel>
<CarouselSlide>
<Picture src={dogsTrue} alt="True photo of two dogs running side-by-side in shallow water" />
</CarouselSlide>
<CarouselSlide>
<Picture src={dogsDiffc} alt="Photo of two running dogs, lossily compressed using the DiffC algorithm" />
</CarouselSlide>
<CarouselSlide>
<Picture
src={dogsMsillm}
alt="Photo of two running dogs, lossily compressed using MS-ILLM"
/>
</CarouselSlide>
</Carousel>

## Two columns
Use the two columns component to display two columns of content. In this example, the first column contains a YouTube video embed and the second column contains an interactive 3D model viewer. By default, they display side by side, but if the screen is narrow enough (for example, on mobile), they're arranged vertically.
<TwoColumns>
<ModelViewer slot="right" src="/BoxVertexColors.glb" alt="A cube colored with a rainbow gradient" />
</TwoColumns>
## $\LaTeX$

You can also add $\LaTeX$ formulas, rendered during the build process using [$\KaTeX$](https://katex.org/) so they're quick to load for visitors of your project page. You can write them inline, like this: $a^2 + b^2 = c^2$. Or, you can write them as a block:

$$
\int_a^b f(x) \, dx
$$
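A `$$` block can also hold multi-line environments that $\KaTeX$ supports, such as `aligned`, which is handy for short derivations. For example, relating the integral above to its antiderivative via the fundamental theorem of calculus:

$$
\begin{aligned}
F(x) &= \int_a^x f(t) \, dt \\
\int_a^b f(x) \, dx &= F(b) - F(a)
\end{aligned}
$$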
## Citation

If you copy your bibliography into `./bibliography.bib`, you can cite papers using their BibTeX keys, and they'll be automatically formatted and included in the bibliography section at the end. For example: [@vaswani2017attention]. Note that these citations are distinct from footnotes[^mynote], which show up in a separate section if you use them.

[^mynote]: On the web, they should really be called "endnotes", since each page constitutes a full document.
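To make the example concrete, the key `vaswani2017attention` would resolve against an entry in `./bibliography.bib` along these lines (a hand-written sketch of the well-known reference; double-check the fields against your own bibliography file):

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and
               Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N. and
               Kaiser, {\L}ukasz and Polosukhin, Illia},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2017}
}
```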
## Table

Add simple tables using [GitHub Flavored Markdown syntax](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/organizing-information-with-tables):

| Model | Accuracy | F1 score | Training time (hours) |
| :--- | :---: | :---: | :---: |
| BERT-base | 0.89 | 0.87 | 4.5 |
| RoBERTa-large | 0.92 | 0.91 | 7.2 |
| DistilBERT | 0.86 | 0.84 | 2.1 |
| XLNet | 0.90 | 0.89 | 6.8 |
## BibTeX citation

Displaying the BibTeX entry for your paper in a code block makes it easy to copy and paste.

```bibtex
@misc{yu2025usf,
  title = {Unified {Spherical} {Frontend}: {Learning} {Rotation}-{Equivariant} {Representations} of {Spherical} {Images} from {Any} {Camera}},
  author = {Yu, Mukai and Dabhi, Mosam and Xie, Liuyue and Scherer, Sebastian and Jeni, László A.},
  year = {2025},
  month = nov,
  publisher = {arXiv},
  doi = {10.48550/arXiv.2511.18174},
  url = {https://arxiv.org/abs/2511.18174},
  eprint = {2511.18174},
  archiveprefix = {arXiv},
  primaryclass = {cs.CV},
  abstract = {Modern perception increasingly relies on fisheye, panoramic, and other wide field-of-view (FoV) cameras, yet most pipelines still apply planar CNNs designed for pinhole imagery on 2D grids, where pixel-space neighborhoods misrepresent physical adjacency and models are sensitive to global rotations. Traditional spherical CNNs partially address this mismatch but require costly spherical harmonic transform that constrains resolution and efficiency. We present Unified Spherical Frontend (USF), a distortion-free lens-agnostic framework that transforms images from any calibrated camera onto the unit sphere via ray-direction correspondences, and performs spherical resampling, convolution, and pooling canonically in the spatial domain. USF is modular: projection, location sampling, value interpolation, and resolution control are fully decoupled. Its configurable distance-only convolution kernels offer rotation-equivariance, mirroring translation-equivariance in planar CNNs while avoiding harmonic transforms entirely. We compare multiple standard planar backbones with their spherical counterparts across classification, detection, and segmentation tasks on synthetic (Spherical MNIST) and real-world (PANDORA, Stanford 2D-3D-S) datasets, and stress-test robustness to extreme lens distortions, varying FoV, and arbitrary rotations. USF scales efficiently to high-resolution spherical imagery and maintains less than 1\% performance drop under random test-time rotations without training-time rotational augmentation, and enables zero-shot generalization to any unseen (wide-FoV) lenses with minimal performance degradation.}
}
```