Purple Writes

Can you say that 12-bit compressed RAW formats (BRAW, ARRIRAW, etc.) are true 16-bit? (No, but that's ok)

Last updates:
15 Jul, 2025 - Corrected highlight recovery to recovery of dark areas
07 Jun, 2025 - Updated some incorrect information about RED's R3D format

Introduction

While I was looking at video camera options, I often heard YouTubers talk about how their 12-bit files become "16-bit" when decompressed (in DaVinci Resolve). This led them to draw the conclusion that their camera is actually recording in 16-bit, resulting in claims like "Don't worry guys, once the video is imported you can access the full 16-bit".

The thing is, this is not actually true: a 12-bit pixel format can't magically become full 16-bit. 4 bits have been lost along the way... but what's actually going on behind the scenes? In this post I'll do a "more than shallow, less than deep"-dive into how the pixel data is stored, and explain how camera manufacturers still try to get the most out of their 12-bit formats.

Basics

Lots of high(er)-end video cameras use 12-bit compressed non-linear[1] "raw" formats (such as ARRI with ARRIRAW, and Blackmagic with BRAW[2]); I don't believe any true 16-bit formats exist. ARRI has, however, recently released a camera that is advertised as using a 13-bit non-linear format.

Refresher on bit-depth

While looking for cameras, bit-depth is probably something you've seen mentioned before. However, just in case, here is a very quick refresher.

Bit-depth refers to the precision of a colour or value: the more bits a value has, the finer the gradations it supports.

Bit-depth   Range
8-bit       0-255
9-bit       0-511
10-bit      0-1023
12-bit      0-4095
16-bit      0-65535
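The ranges in the table follow directly from the bit count: n bits give 2^n levels, so the maximum stored value is 2^n - 1. A quick sketch:

```python
# Each extra bit doubles the number of representable levels:
# n bits give 2**n values, ranging from 0 up to 2**n - 1.
for bits in (8, 9, 10, 12, 16):
    print(f"{bits}-bit: 0-{2**bits - 1}")  # e.g. 12-bit: 0-4095
```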

It makes sense to record in a higher bit-depth format so that we have some room to make gain/exposure modifications in post without causing colour banding.

If we were to increase the exposure by +1 stop in post, we lose one bit.[3] That means 8-bit footage would be left with only 7 bits (not great!).
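To see why a +1 stop push costs a bit (under the simplified "one stop equals one bit, no gamma" assumption from the footnote), consider that doubling every value in linear 8-bit footage leaves only even output values possible. A small sketch of this idea:

```python
# Pushing linear 8-bit footage by +1 stop doubles every value (clipping at 255).
# Afterwards only even values (plus the clipped 255) can occur, so the number
# of distinct levels is roughly halved: about 7 bits of real precision remain.
pushed = {min(2 * v, 255) for v in range(256)}
print(len(pushed))  # 129 distinct values, close to the 128 levels of 7-bit
```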

However, the downside of a high bit-depth is that it requires more space, so it's a tradeoff manufacturers have to make.

Linear and non-Linear

Before we dig into the ARRIRAW format, I also have to introduce you to the concept of non-linear and linear storage formats. The examples below use a 12-bit space, which means a pixel has a value between 0 and 4095.

linear: The light that hits the sensor is stored linearly, with 50% grey represented as 2047 (right in the middle of 0-4095). Pure black would be 0, pure white 4095.
non-linear: The stored lightness value does not map directly to a linear space; it has likely undergone some transformation formula. Pure black and white are still 0 and 4095, but the lightness values in between can follow different ratios. This means 50% grey could, for example, sit well above (or below) the 2047 of its linear counterpart.

Diving into how ARRIRAW 12-bit non-linear files work

Camera manufacturers use 12-bit instead of 16-bit to save space. Computers, however, prefer powers of 2; especially when you're processing a lot of data, it often makes sense to convert your data to align with that.
Additionally, working on non-linear pixel data can introduce extra complexity, so before we can use our footage, we (or rather, our software) need to convert it to 16-bit linear. This allows the software to manipulate the video data the way we're used to.

Converting 12-bit non-linear to 16-bit linear

SMPTE RDD 31:2024 is a technical document describing how ARRIRAW files work; it's what I used in my research. My description below is a slightly simplified version of it, as it does not go into the other transformations happening in the colour pipeline (such as those related to colour balance and ISO).
The concepts explained below should also apply to BRAW and other similar formats, possibly in slightly different proportions (I didn't check).

The 12-bit non-linear (0-4095) data gets converted to 16-bit linear (0-65535) using the following formula:

v_p = v_i                               if v_i < 1024   (linear)
v_p = (1024 + 2o + 1) · 2^(q−2) − 1     if v_i ≥ 1024   (non-linear)

where

q = ⌊v_i / 512⌋   (integer division)
o = v_i mod 512

If maths is not your strong suit, that's ok! I'll explain what's happening here.

There are two ways the data can get transformed:

  1. If the pixel value is less than 1024 (i.e. 1023 and below), the data is not transformed and is used as-is.
  2. If the pixel value is equal to or greater than 1024, the data undergoes the transformation.
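Based on my reading of the formula above (simplified from SMPTE RDD 31, so treat the exact constants as my interpretation), the decode step looks roughly like this:

```python
def decode_arriraw_12_to_16(vi: int) -> int:
    """Map a 12-bit non-linear code value (0-4095) to 16-bit linear.

    Simplified sketch of the SMPTE RDD 31:2024 decode; the colour-space
    transformations (colour balance, ISO, ...) are not included.
    """
    if not 0 <= vi <= 4095:
        raise ValueError("expected a 12-bit value (0-4095)")
    if vi < 1024:
        return vi                  # linear part: used as-is
    q, o = divmod(vi, 512)         # q = floor(vi / 512), o = vi mod 512
    return (1024 + 2 * o + 1) * 2 ** (q - 2) - 1

print(decode_arriraw_12_to_16(1023))  # 1023  (below the knee, unchanged)
print(decode_arriraw_12_to_16(1024))  # 1024  (curve is continuous at the knee)
print(decode_arriraw_12_to_16(3072))  # 16399
print(decode_arriraw_12_to_16(4095))  # 65503
```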

If we map this out on a graph, each pixel is represented this way:

[Graph showing how the 12-bit ARRIRAW input is mapped to 16-bit linear]

There are a few observations to be made here, which I'll go through in the sections below.

Why store non-linearly?

As we can see above, this transformation changes how much precision is used for each light value: the brighter something was when it was recorded, the less data is spent on storing it.

You might be aware that the human eye is rather good at distinguishing very minor differences in dark areas, but much less so in bright ones.

The difference between 0% black and 1% black can be noticeable to the eye, especially in gradients, but the difference between 99% white and 100% white is basically imperceptible.

Another noteworthy benefit is that dark areas are far less likely (read: near impossible) to show visible colour banding with how they're stored in ARRIRAW. The 0-1023 non-linear range (0-25% of the input) covers only 0-1.56% of the linear space, a 16-times difference! In comparison, the 3072-4095 non-linear range (75-100% of the input) covers roughly 75% of the linear space (16399-65503).
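Using the decode formula described earlier, we can check these proportions ourselves (a sketch with my own numbers, not figures from the spec):

```python
# How much of the 16-bit linear range each quarter of the 12-bit
# non-linear input covers, per the decode formula described above.
def decode(vi: int) -> int:
    if vi < 1024:
        return vi
    q, o = divmod(vi, 512)
    return (1024 + 2 * o + 1) * 2 ** (q - 2) - 1

for lo, hi in [(0, 1023), (1024, 2047), (2048, 3071), (3072, 4095)]:
    share = (decode(hi) - decode(lo)) / 65535
    print(f"{lo}-{hi} -> {decode(lo)}-{decode(hi)} ({share:.1%} of linear)")
```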

This also means that the really dark areas (0-1.56% brightness) are in fact stored with full 16-bit accuracy, so there is some truth to the claim... However, it also means that the really bright areas are effectively stored with less precision than a linear 12-bit format would give them.
The camera does, however, read out the sensor at 16-bit precision internally (pre-ARRIRAW).

Conclusion

12-bit RAW formats such as ARRIRAW and BRAW are what they say they are: 12-bit RAW formats.

However, the bits they do have are used a lot more effectively than linear 12-bit video formats use theirs, making the footage nearly indistinguishable from footage shot on linear 16-bit cameras.

So why does DaVinci show 16-bit? Because DaVinci works with 16-bit footage internally, after the non-linear to linear conversion.
Technically, though, some data has been lost, so it's not "true" 16-bit; I don't believe it would be entirely honest to call the format 16-bit for that reason.


Personal opinion: I think using 12-bit non-linear is a great choice to save some space, and I really wouldn't bother with higher bit-depths; you are very unlikely to actually benefit from them. This is shown by people all over the world still using 10-bit non-linear cameras and creating great footage with them :)

Any questions or input on this subject? Contact me over at mastodon: @Purple@woof.tech

🏳️‍⚧️


  1. Note: Logarithmic footage (log) is a type of non-linear format, and what we're about to discuss here is essentially how log formats work.

  2. Note: BRAW is not true raw, as additional lossy compression is used in the pipeline. This is, however, outside the scope of this blog post and not relevant to the concepts described on this page.

  3. This depends on some factors, but to keep things simple, let's assume a stop is always equal to a bit and gamma curves aren't used.

#Videography