Commit c36b8bd

Docs cleanup (#115)

* Cleaning up markdown, getting it to conform to markdownlint.
* Adding a lint config line. I'm not going with all of them.
* More MD cleanup.
* Removing warning; we think the HDR section is pretty good.
* Updated the link.

Signed-off-by: Sam.Richards@taurich.org <Sam.Richards@taurich.org>

1 parent: 3cea459

27 files changed (+476 -499 lines)

.markdownlint.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+# Disable line length rule
+MD013: false
+
+# Allow inline HTML
+MD033: false
+
+#Images requiring alt text.
+MD045: false
+
+#Its incorrectly flagging multiple root headings.
+MD025: false

CONTRIBUTING.md

Lines changed: 6 additions & 9 deletions
@@ -1,14 +1,14 @@
 # Contributing to the ORI Encoding Guidelines
 
-## Get Connected.
+## Get Connected
 
 Please reach out to us, particularly if you are having a problem, since its likely somebody else is sharing that problem, and if we have not documented it, it may be something that needs to be updated.
 
 Reach out using one of the following approaches:
-* ASWF Slack - [#open-review-initiative](#open-review-initiative)
-* [Github ORI Encoding Guidelines - discussions ](https://github.com/AcademySoftwareFoundation/EncodingGuidelines/discussions)
-* If there are particular issues - [Github Issues](https://github.com/AcademySoftwareFoundation/EncodingGuidelines/issues)
 
+* ASWF Slack - [#open-review-initiative](#open-review-initiative)
+* [Github ORI Encoding Guidelines - discussions](https://github.com/AcademySoftwareFoundation/EncodingGuidelines/discussions)
+* If there are particular issues - [Github Issues](https://github.com/AcademySoftwareFoundation/EncodingGuidelines/issues)
 
 ## Documentation Style
 
@@ -20,13 +20,10 @@ The test suite has been developed using python-3, and has been designed to work
 
 ## How to Contribute a Bug Fix or Change or additional Documentation
 
-To contribute code to the project, first read over the [governance policies] page to understand the roles involved.
+To contribute code to the project, first read over the [governance policies] page to understand the roles involved.
 
 ORI Encoding Guidelines is licensed under the [Apache License 2.0](LICENSE.md) license. Contributions should abide by that standard license.
 
 Project committers will review the contribution in a timely manner, and advise of any changes needed to merge the request.
 
-
-[governance policies]: GOVERNANCE.md
-[copyright and license headers]: https://github.com/AcademySoftwareFoundation/tac/blob/main/process/contribution_guidelines.md#license
-[Developer Certificate of Origin signoff]: https://github.com/AcademySoftwareFoundation/tac/blob/main/process/contribution_guidelines.md#contribution-sign-off
+[governance policies]: GOVERNANCE.md

ColorPreservation.md

Lines changed: 18 additions & 13 deletions
@@ -9,6 +9,7 @@ parent: Encoding Overview
 
 
 # RGB to YCrCb Conversion <a name="yuv"></a>
+
 We would like ffmpeg to do as little as possible in terms of color space conversion. i.e. what comes in, goes out. The problem is that most of the codecs prefer to convert from RGB to YUV conversion (technically YCrCb). Do be aware that a number of codecs do support native RGB encoding (including h264, hevc, vp9, av1), but they are not typically supported in web browsers.
 
 The main problem is that ffmpeg by default assumes that any unknown still image format has a color space of [rec601](https://en.wikipedia.org/wiki/Rec._601) which is very unlikely to be the color space your source media was generate in. So unless you tell it otherwise it will attempt to convert from that colorspace producing a color shift.
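The color shift described above can be illustrated numerically: BT.601 and BT.709 weight the R, G, and B channels differently when forming luma, so material encoded with one matrix and decoded with the other lands on different code values. A minimal Python sketch (an editorial illustration, not part of the commit; the coefficients are the published BT.601 and BT.709 luma weights):

```python
# Luma (Y') coefficients from the two specs:
# BT.601:  Kr = 0.299,  Kb = 0.114
# BT.709:  Kr = 0.2126, Kb = 0.0722

def luma(r, g, b, kr, kb):
    """Compute Y' for normalized R'G'B' under the given matrix coefficients."""
    kg = 1.0 - kr - kb  # green weight is whatever remains
    return kr * r + kg * g + kb * b

# Pure red, normalized to [0, 1]:
y601 = luma(1.0, 0.0, 0.0, 0.299, 0.114)    # 0.299
y709 = luma(1.0, 0.0, 0.0, 0.2126, 0.0722)  # 0.2126

# The same pixel gets a visibly different luma under each matrix,
# which is the shift you see when ffmpeg guesses the wrong one.
print(f"BT.601 luma: {y601:.4f}, BT.709 luma: {y709:.4f}")
```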
@@ -22,9 +23,11 @@ For more information, see: [https://trac.ffmpeg.org/wiki/colorspace](https://tra
 For examples comparing these see: [here](https://academysoftwarefoundation.github.io/EncodingGuidelines/tests/chip-chart-yuvconvert/compare.html)
 
 ## colormatrix filter
-```
+
+```console
 -vf "colormatrix=bt470bg:bt709"
 ```
+
 This is the most basic colorspace filtering. bt470bg is essentially part of the bt601 spec. See: [https://www.ffmpeg.org/ffmpeg-filters.html#colormatrix](https://www.ffmpeg.org/ffmpeg-filters.html#colormatrix)
 
 Example:
@@ -46,7 +49,7 @@ comparisontest:
 value: max_error
 between: 0.37125, 0.37126
 -->
-```
+```console
 ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 -pix_fmt yuv444p10le -vf "colormatrix=bt470bg:bt709" \
 -c:v libx264 -preset placebo -qp 0 -x264-params "keyint=15:no-deblock=1" \
@@ -55,13 +58,16 @@ ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 ```
 
 There are a couple of issues with this filter:
-* only supports 8bpc (8-bit per component) pixel formats
-* Its slower than the alternatives.
+
+* only supports 8bpc (8-bit per component) pixel formats
+* Its slower than the alternatives.
 
 ## colorspace filter
-```
+
+```console
 -vf "colorspace=bt709:iall=bt601-6-625:fast=1"
 ```
+
 Using colorspace filter, better quality filter, SIMD so faster too, can support 10-bit too. The second part `-vf "colorspace=bt709:iall=bt601-6-625:fast=1"` encodes for the output being bt709, rather than the default bt601 matrix. iall=bt601-6-625 says to treat all the input (colorspace, primaries and transfer function) with the bt601-6-625 label). fast=1 skips gamma/primary conversion in a mathematically correct way. See: [https://ffmpeg.org/ffmpeg-filters.html#colorspace](https://ffmpeg.org/ffmpeg-filters.html#colorspace)
 
 Example:
@@ -78,18 +84,17 @@ comparisontest:
 value: max_error
 less: 0.00195
 -->
-```
+```console
 ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 -pix_fmt yuv444p10le -vf "colorspace=bt709:iall=bt601-6-625:fast=1" \
 -c:v libx264 -preset placebo -qp 0 -x264-params "keyint=15:no-deblock=1" \
 -color_range tv -colorspace bt709 -color_primaries bt709 -color_trc iec61966-2-1 \
 ./chip-chart-yuvconvert/spline444colorspace.mp4
 ```
 
-
 ## zscale filter
 
-```
+```console
 -vf "zscale=m=709:min=709:rangein=full:range=limited"
 ```
 
@@ -111,7 +116,7 @@ comparisontest:
 value: max_error
 less: 0.00195
 -->
-```
+```console
 ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 -pix_fmt yuv444p10le -vf "zscale=m=709:min=709:rangein=full:range=limited" \
 -c:v libx264 -preset placebo -qp 0 -x264-params "keyint=15:no-deblock=1" \
@@ -121,9 +126,10 @@ ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 
 ## libswscale filter
 
-```
+```console
 -vf "scale=in_range=full:in_color_matrix=bt709:out_range=tv:out_color_matrix=bt709"
 ```
+
 Using the libswscale library. Seems similar to colorspace, but with image resizing, and levels built in. [https://www.ffmpeg.org/ffmpeg-filters.html#scale-1](https://www.ffmpeg.org/ffmpeg-filters.html#scale-1)
 
 This is the recommended filter.
@@ -142,7 +148,7 @@ comparisontest:
 value: max_error
 less: 0.00195
 -->
-```
+```console
 ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 -pix_fmt yuv444p10le \
 -vf "scale=in_range=full:in_color_matrix=bt709:out_range=tv:out_color_matrix=bt709" \
@@ -151,5 +157,4 @@ ffmpeg -y -i ../sourceimages/chip-chart-1080-noicc.png \
 ./chip-chart-yuvconvert/spline444out_color_matrix.mp4
 ```
 
-
-Note, there are a lot of other flags often used with the swscale filter (such as -sws_flags spline+full_chroma_int+accurate_rnd ) which really have minimal impact in the RGB to YCrCb conversion, if you are not resizing the image. For more details on this see [SWS Flags](EncodeSwsScale.html) section.
+Note, there are a lot of other flags often used with the swscale filter (such as -sws_flags spline+full_chroma_int+accurate_rnd ) which really have minimal impact in the RGB to YCrCb conversion, if you are not resizing the image. For more details on this see [SWS Flags](EncodeSwsScale.html) section.
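The `rangein=full:range=limited` and `in_range=full:out_range=tv` options used in the filters in this file remap full-range code values (0-255 in 8-bit) into the limited broadcast range (16-235 for luma). A sketch of that standard 8-bit luma mapping (an editorial illustration, not part of the commit):

```python
def full_to_limited_8bit(y_full):
    """Map a full-range 8-bit luma value (0-255) onto limited/TV range (16-235)."""
    # Limited range spans 219 code values (235 - 16) for luma.
    return round(16 + y_full * 219 / 255)

# Full-range black and white land on the broadcast legal limits:
print(full_to_limited_8bit(0), full_to_limited_8bit(255))  # 16 235
```

Getting this range flag wrong in either direction shows up as washed-out or crushed blacks and whites, which is why the examples above state the range explicitly on both input and output.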

EditorialWorkflow.md

Lines changed: 13 additions & 20 deletions
@@ -14,7 +14,6 @@ parent: Encoding Overview
 
 # Timecode and Editorial Workflow
 
-
 <details open markdown="block">
 <summary>
 Table of contents
@@ -30,22 +29,20 @@ This can be done with the -timecode flag:
 
 E.g.
 
-
-```
+```console
 ffmpeg -r 24 -start_number <STARTFRAME> -i inputfile.%04d.png \
 -vf "scale=in_color_matrix=bt709:out_color_matrix=bt709" \
 -frames:v 100 -c:v libx264 -preset slow -pix_fmt yuv420p \
 -timecode <STARTFRAMETIMECODE>
 outputfile.mp4
 ```
 
-
 There are three approaches for what to use for the timecode:
+
 * [Convert the start frame number](#start-frame-as-timecode) to the related timecode.
 * Use the timecode from the [original plate](#start-frame-as-original-plate-timecode)
 * A "fixed" timecode for all deliverables
 
-
 ## Start Frame as Timecode
 
 It's extremely common to use a start frame of 1001 for a shot at the beginning of production, rather than frame 0. The three big reasons for this are:
@@ -60,7 +57,7 @@ By remapping the frame number to a timecode number, e.g. frame 1001 to 00:00:41:
 
 Converting the frame number to timecode can be done using OTIO:
 
-```
+```console
 import opentimelineio as otio
 start_frame = 1001
 frame_rate = 24.0
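The OTIO-based conversion is shown only in part in the diff above; the same frame-to-timecode arithmetic can be sketched in plain Python (a hypothetical helper, assuming non-drop-frame timecode at an integer frame rate):

```python
def frame_to_timecode(frame, fps=24):
    """Convert an absolute frame number to HH:MM:SS:FF (non-drop-frame)."""
    ff = frame % fps                 # leftover frames within the second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Frame 1001 at 24 fps gives the timecode cited in this document:
print(frame_to_timecode(1001))  # 00:00:41:17
```

For production use, drop-frame rates such as 29.97 need the dedicated drop-frame algorithm, which is one reason the document reaches for OTIO rather than hand-rolled arithmetic.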
@@ -73,22 +70,19 @@ Another scenario is that the client is delivering a single clip, that your facil
 
 This has a similar benefit in terms of conform, you can add or remove frames, and the conform will do the right thing, but it does require *a lot* more tracking, since if the frames are trimmed off the beginning, you will need to calculate the new timecode. Equally problematic is if you have multiple plates since you would need to track which clip is the baseline in terms of timecode and make sure any deliveries for the shot are appropriately using that timecode.
 
-
 ## Reel Name
 
 While tracking the timecode for dailies may be too complex, it can be extremely useful for making proxies for source camera files. But the timecode alone is not enough, you also would need the reel-name, which typically is closely mapped to the filename of the original camera files.
 
 For a QuickTime the reel name can be defined with the -metadata:s:v:0 flag:
 
-```
+```console
 ffmpeg -f lavfi -i testsrc -t 1 -timecode 01:00:00:00 -metadata:s:v:0 reel_name=ABCD123 OUTPUT.mov
 ```
 
-
 For a Op1a mxf file it can be defined with a -metadata flag:
 
-
-```
+```console
 ffmpeg -f lavfi -i testsrc -t 1 -timecode 01:00:00:00 -metadata reel_name=ABCD123 OUTPUT.mxf
 ```
 
@@ -100,7 +94,7 @@ However, metadata for reel-name is not consistently supported across the applica
 </td>
 <td>Resolve
 </td>
-<td>AVID MC
+<td>AVID MC
 </td>
 <td>Premiere
 </td>
@@ -157,17 +151,14 @@ However, metadata for reel-name is not consistently supported across the applica
 </tr>
 </table>
 
-
 To get resolve to import the Reel-name you need to change how the reel name is defined, which is set under the project settings (see below). NB this can be done after the media has been added to the media pool.
 
 ![Resolve Project Settings](sourceimages/ResolveProjectSettings.png)
 
-
 For media composer you will find much more flexibility wrapping the MXF file in an AAF (see below).
 
 For examples of the conform workflow, see: [VFX Subclipping relink](https://www.youtube.com/watch?app=desktop&v=gbReqyofLLE).
 
-
 ## AVID Media Composer Workflows
 
 Deciding on whether to create Op1a vs. OpAtom does depend on which version of media composer you are using. Newer ones tend to prefer op-atom, but you should check with your editor.
@@ -176,7 +167,7 @@ For details on creating MXF files, see [OpAtom](EncodeDNXHD.html#op-atom-mxf) an
 Part of the decision is whether you want a single file to also contain the audio, and whether you want to additionally use AAF files (see below).
 
 If an AVID imports a media file with no timecode, it will default to 01:00:00:00.
-For this reason it can be desirable to do one of the above approaches, but do work with editorial to confirm what they would like.
+For this reason it can be desirable to do one of the above approaches, but do work with editorial to confirm what they would like.
 
 [OpAtom](EncodeDNXHD.html#op-atom-mxf) files do not get directly imported into the AVID, instead you copy them directly into the /Users/Shared/AvidMediaComposer/Avid MediaFiles/MXF/{NUMBER} folder (e.g. /Users/Shared/AvidMediaComposer/Avid MediaFiles/MXF/2) on OSX or C:\Avid MediaFiles\MXF\{NUMBER} on windows. You can make a higher number, but Media Composer will also scan existing folders. Media composer will scan for new files and create (or update) a msmMMOB.mdb file, which is a database of the MOB ID's of the files. This can then be dragged into a Avid Bin to import the new files.
 
@@ -189,7 +180,8 @@ If you are tightly integrating your pipeline into an AVID workflow, you should c
 Ideally with AAF files, you would be importing MXF files (like the example above) to minimize the import time to the AVID (so it doesn't require any media transcoding).
 
 A simple example of this is to convert all your clips to raw DNxHD files, e.g.:
-```
+
+```console
 ffmpeg -y -i <INPUTFILE> -pix_fmt yuv422p \
 -sws_flags lanczos -pix_fmt yuv422p \
 -vf "scale=in_range=full:in_color_matrix=bt709:out_range=tv:out_color_matrix=bt709" \
@@ -200,6 +192,7 @@ ffmpeg -y -i <INPUTFILE> -pix_fmt yuv422p \
 ```
 
 and then to wrap these resulting files in an AAF with:
+
 ```python
 import aaf2
 import os, sys
@@ -245,8 +238,8 @@ for filename in sys.argv[1:]:
 # mob.import_audio_essence("sample.wav", edit_rate) #Modify if you have audio too.
 ```
 
-In this simplistic example, I'm overwriting the Shot and Scene metadata columns, which should then show up in the bin, when the resulting AAF files are dragged into a bin. For a more complex version of this see: [aaf_embed_media_tool](https://github.com/markreidvfx/pyaaf2/blob/main/examples/aaf_embed_media_tool.py).
+In this simplistic example, I'm overwriting the Shot and Scene metadata columns, which should then show up in the bin, when the resulting AAF files are dragged into a bin. For a more complex version of this see: [aaf_embed_media_tool](https://github.com/markreidvfx/pyaaf2/blob/main/examples/aaf_embed_media_tool.py).
 
 ## See Also
-* [Feature Turnover Guide](https://www.evanschiff.com/articles/feature-turnover-guide-vfx/)
-
+
+* [Feature Turnover Guide](https://www.evanschiff.com/articles/feature-turnover-guide-vfx/)
