Untonemapping, and other stupid tricks
October 2nd, 2016
I've been meaning to write something about this for years but never got around to it. I don't claim there's any great use for this stuff; it's just one of those little oddities us graphics programmers like to collect.
You all know what tonemapping is - converting an HDR (High Dynamic Range) image into an LDR (Low Dynamic Range) image for display. What might not be immediately obvious is that it's reversible.
We can formalize this relationship using the following notation:
L(x) = 1 - exp2(-k*x)
That's the standard formula for an exponential tone mapper. There are plenty of other functions you can use (Reinhard, etc.) but for the purposes of today's article the choice doesn't matter, so let's just pick the simplest one to work with. For final display of course it may well matter, but we're not talking about final display here. The reason it doesn't matter is that it cancels out.
Note that I'll always be using exp2, not exp/log/ln etc, and you should too. GPUs often only support exp2, with the others requiring an extra multiply to convert bases. So if we work in base 2 ourselves we can save that multiply.
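For example, if you had to build a base-e exp() out of exp2(), you'd be paying for that base-conversion multiply yourself. Just a sketch to show where the multiply goes (the function name is mine):
// A base-e exp() built on exp2() costs an extra multiply,
// because e^x == 2^(x * log2(e)).
float myExp(float x)
{
    const float LOG2_E = 1.442695f; // log2(e)
    return exp2(x * LOG2_E);
}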
So if the formula above is tonemapping, what's untonemapping? Well it's just a simple inverse:
H(x) = log2(1 - x)/-k
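As shader code, the pair might look something like this (a quick sketch; the function names are mine, and k is whatever exposure constant you've picked):
float3 tonemap(float3 x, float k)   // L(x): HDR -> LDR
{
    return 1 - exp2(-k * x);
}
float3 untonemap(float3 x, float k) // H(x): LDR -> HDR
{
    return log2(1 - x) / -k;
}
Feed a value through tonemap() and then untonemap() with the same k and you get back exactly what you started with (floating-point precision aside).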
Ok so far. But what use is it? Let's try an example.
Let's say you want to render a nice image of a wooden teapot (hey why not). So you make a beautiful Photoshop image, like so:
Then you slap it on a basic lit mesh. First let's compare how it looks when you render it using an old LDR engine, then using a standard HDR engine with tonemapping:
// LDR:
float3 diffuse = tex2D(diffuseTex, uv).rgb;
float3 color = diffuse * lighting;
return color; // no tonemap
// HDR:
float3 diffuse = tex2D(diffuseTex, uv).rgb;
float3 color = diffuse * lighting;
float k = 2.0f;
return 1 - exp2(color * -k); // exponential tonemap
Eurgh. Both of these images kinda suck. Our texture looked so nice in Photoshop, but now it's been distorted in both renderings. The LDR one preserves the vibrant orange colors well, but because it's an LDR engine it can't light the thing properly, and clips the colors badly.
The HDR engine, on the other hand, has captured the full lighting range, but at the expense of draining the contrast and saturation from the texture. This does depend on which tonemapping curve you use (some fare better than others), but they all tend to exhibit this problem. Why is this?
The problem is that we're using this photo as a diffuse map, but it isn't a diffuse map. What the photo really is, is the output of another renderer. (In this case, the renderer was the real world and a camera.)
This means the source photo is already tonemapped. We need to reverse the process to recover the original diffuse map. We can do this by assuming the photo was taken under some standardized lighting conditions, and simply running it through the untonemapping operator.
But, you ask, how can I untonemap it if I don't know the value of k to use? That's the cool part: it doesn't matter. Just pick one (1.0 works well). That'll be our reference exposure value. The exposure values we use to render our scene will then be defined relative to that base.
// HDR using untonemapping to correct the diffuse texture
float3 diffuse = tex2D(diffuseTex, uv).rgb;
float k = 2.0f;
diffuse = -log2(1-diffuse); // untonemap (using the reference exposure, k=1)
float3 color = diffuse * lighting;
return 1 - exp2(color * -k); // tonemap
So how does that look now?
Well, that's a lot better. It now matches the source map exactly - the output of our renderer is identical to the artist's image, and we now have a good mathematical framework for taking our output results and working on them.
Before we go any further I'm going to make one small but important tweak. There's a lot of "1-" going on here, and it's kinda annoying. Let's get rid of it. We don't need it.
L(x) = exp2(-k*x)
H(x) = log2(x)/-k
This means all our LDR images will be inverted, but we can just flip them back before display. I'll call this space inverted-LDR, and that's what I'll be using for the rest of the article.
This now means that in LDR space, black represents infinitely bright. This turns out to be surprisingly useful. In fact, it makes me wonder whether that isn't the natural image representation we should all use by default.
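Here's the inverted-LDR pair written out as shader helpers, plus the flip back for display (a sketch; the names are mine):
float3 tonemapInv(float3 x, float k)   // HDR -> inverted-LDR
{
    return exp2(-k * x);
}
float3 untonemapInv(float3 x, float k) // inverted-LDR -> HDR
{
    return log2(x) / -k;
}
float3 toDisplay(float3 x) // flip inverted-LDR back to regular LDR for display
{
    return 1 - x;
}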
And now for my next trick
So what are the consequences of this? Well, now that we have a more rigorous definition of how to convert to/from LDR space, we can convert some common HDR operations so that they work in LDR space directly.
So, for instance: in HDR space, if you want to add two colors together, you just add them. Let's write that out:
Ah(x, y) = x+y
We can get the LDR equivalent by tone mapping it:
Al(x, y) = L(Ah(x, y))
Expanding that out:
Al(x, y) = L(x+y)
Al(x, y) = exp2(-k*(x+y))
= exp2(-k*x + -k*y)
Now here's the trick. We can use the laws of logarithms to split that apart:
= exp2(-k*x) * exp2(-k*y)
Do you see what's happened here? It's equivalent to tone mapping the two colors individually, then multiplying them. Just to spell that out for you:
Given two inverted-LDR images, you add them together by just multiplying them.
ADDinv(x, y) = x * y
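In shader terms, assuming both inputs are already inverted-LDR images, that's just (a sketch, not any particular engine's API):
// Add two HDR contributions that are each stored in inverted-LDR form.
// e.g. with k=1: HDR 1+2=3 becomes exp2(-1)*exp2(-2) = 0.5*0.25 = 0.125 = exp2(-3).
float3 addInvLdr(float3 a, float3 b)
{
    return a * b;
}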
What would happen if we were using regular-LDR instead of inverted-LDR? Let's write it out with the 1-x's in:
ADDreg(x, y) = 1-((1-x)*(1-y))
Oh look, that's the Photoshop 'screen' blend mode. I don't know if that's something the Photoshop designers intentionally thought of; if not, it's certainly an interesting coincidence.
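Written as code, the regular-LDR version really is just the screen blend formula (sketch; name is mine):
// 'Screen' blend: adding the underlying HDR values while staying in regular LDR.
float3 screenBlend(float3 a, float3 b)
{
    return 1 - (1 - a) * (1 - b);
}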
But wait! There's more!
That's addition taken care of. What about multiply?
In HDR-space, a multiply looks like this:
Mh(x, y) = x * y
We can do the same tricks as before. First let's tonemap it to get it into LDR.
Ml(x, y) = L(Mh(x, y))
Ml(x, y) = L(x*y)
Ml(x, y) = exp2(-k*x*y)
And then use the power rule of exponents (2^(a*b) = (2^a)^b) to split it apart again:
Ml(x, y) = exp2(-k*x)^y
What does this mean? It means that if you have an inverted-LDR image and an HDR image, you can multiply them together by raising the LDR image to the power of the HDR one.
e.g.
MULinv(xl, yh) = xl^yh
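Which in shader code is just a pow() (sketch; xl is an inverted-LDR color, yh is a plain HDR one):
// Multiply an inverted-LDR image by an HDR value without untonemapping it first.
// pow(exp2(-k*x), y) == exp2(-k*x*y), which is the tonemapped product.
float3 mulInvLdrByHdr(float3 xl, float3 yh)
{
    return pow(xl, yh);
}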
An Example
Here's an example of how you might throw all this together. Let's imagine you're starting with an inverted-LDR diffuse texture, and you want to do some HDR lighting with it. We can use the "multiply" rule to do the diffuse lighting, then the "add" rule to add on the specular lighting. Note that the diffuse texture remains in inverted-LDR space throughout, and the final result needs no tonemapping, because it is already in LDR space.
float3 diffuse = tex2D(diffuseTex, uv).rgb;
float3 diff_lighting = calculateHdrDiffuseLighting();
float3 spec_lighting = calculateHdrSpecularLighting();
float3 ldr_output = pow(diffuse, diff_lighting) * tonemap(spec_lighting);
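The tonemap() call there is assumed to be the inverted-LDR version from earlier, i.e. something like this (the calculateHdr*Lighting() functions are just placeholders for whatever lighting code you already have):
static const float k = 2.0f; // scene exposure, relative to the reference

float3 tonemap(float3 x)
{
    return exp2(-k * x); // HDR -> inverted-LDR
}
And since ldr_output is in inverted-LDR space, remember to flip it (1 - ldr_output) before it hits the screen.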
I'll summarize the inverted-LDR-space rules in a table here:
Rule | Formula |
---|---|
HDR to LDR | exp2(-k*x) |
LDR to HDR | log2(x)/-k |
HDR + HDR (inputs given in inverted-LDR) | x*y |
LDR * HDR (x in inverted-LDR, y in HDR) | x^y |
So there it is. As I said, I don't know if this is going to be especially useful to anyone, but I thought it was interesting how you can do mathematics in LDR-space and yet get the correct results of HDR lighting.
Written by Richard Mitton,
software engineer and travelling wizard.
Follow me on twitter: http://twitter.com/grumpygiant