High Dynamic Range (HDR)

Introduction

Dynamic range has two parts to it. The range defines the smallest and largest values that can be quantified. When the quantifiable range is large, it is called high dynamic range (abbr. HDR). When it is small, it is called low dynamic range (abbr. LDR). Often these ranges are too large to experience both the low end and the high end at the same time; we need time to adjust to the environment. For example, your vision goes black when you turn off bright lights in a room. After a moment, your eyes begin to adjust and you can make out details. This is the dynamic nature of our senses. Digital equipment works in a similar way through exposure control, but it often records only a portion of the luminosity, making it impossible to recover details lost in the recording. When you work with HDR data, you have greater control over the luminosity and can apply a tone mapping algorithm to improve the dynamic range of the LDR image that gets rendered. This article explores the inner workings of HDR, what it's for, and how you can implement it to improve the quality of your renders.

What is HDR?

In computer graphics, dynamic range refers to the luminosity of the scene. Luminosity values are encoded as 0.0 for black, 1.0 for white, and anything in between as shades of grey. The problem is defining what is black and what is white. A light bulb is bright, but so is the sun, and the difference in luminosity between the two is so large that both cannot be represented within a limited range. Take any camera, for example, and adjust the exposure settings.
It's the same environment, but due to the limited dynamic range of the camera, you either show details in the sky at the expense of making everything else darker, or you brighten the subject at the expense of overexposing the sky. The problem comes after the fact: once you take a photograph, what you see is what you get. This is where HDR comes into play. Our digital devices are limited in the range of luminosity they can display, so, simply put, our devices are low dynamic range. That doesn't mean your workflow should be limited to that range as well. HDR is not just about having a greater range of luminosity information to work with; it's also a process for post-processing the result according to your needs. This process is called tone mapping. The tone mapped image exposes more detail in the shadows and highlights, allowing you to see more of what was recorded.

What is Tone Mapping?

HDR data by itself is not useful, because we can only see a limited brightness range. Tone mapping compresses the HDR image down into the low dynamic range in a way that maximizes detail. Any sort of compression can result in quality loss, and in the case of tone mapping it can create very surreal images. Sometimes this is done for artistic reasons, but generally a good tone mapping algorithm, with careful use of its parameters, will help you make the most of your HDR data.

An Example

Figure 1: LDR Image

The burned out (overexposed) centre in the LDR image is a result of only a portion of the range being visible. Let's take a look at the histogram. From left to right, the white line that divides the histogram into two parts is where the LDR ends. You can see that the rest of the image was clipped automatically to white. If you only had the LDR data, you would not be able to discover what lies beyond that point. With the HDR data, you can apply a tone mapping algorithm to compress the HDR data into an LDR image that exposes as much detail as possible.

Figure 2: HDR Tone Mapped Image

The beauty of HDR imaging lies in its configurability. Not only does it give you greater control over how the data gets compressed into LDR, but you can use this information to simulate the nature of our senses by automatically adjusting the luminosity as you enter brighter or darker areas, a technique known as auto exposure.

Reinhard Tone Mapping

The following focuses on the minimal version of Reinhard's tone mapping algorithm. This version only factors in luminosity and does not discuss the post-recovery stage, where dodge and burn is applied to the image to restore more detail. A link to his paper is provided in the references should you wish to learn more about this tone mapping algorithm. Reinhard's tone mapping algorithm has four parts.

Equation 1:

$$L_w = 0.2126R + 0.7152G + 0.0722B$$

where $R$, $G$, and $B$ are the linear colour components of a pixel and $L_w$ is its luminance.

Equation 2:

$$\bar{L}_w = \exp\!\left(\frac{1}{N}\sum_{x,y}\ln\bigl(\delta + L_w(x,y)\bigr)\right)$$

This formula iterates over all the pixels in the image and calculates the log-average luminance, where $N$ is the number of pixels and $\delta$ is a small value that avoids taking the logarithm of pure black pixels.

Equation 3:

$$L(x,y) = \frac{a}{\bar{L}_w}\,L_w(x,y)$$

where $a$ is the key, or exposure, of the scene; 0.18 is a common starting value.

Equation 4:

$$L_d(x,y) = \frac{L(x,y)\left(1 + \dfrac{L(x,y)}{L_{white}^2}\right)}{1 + L(x,y)}$$

where $L_{white}$ is the smallest luminance that will be mapped to pure white; setting it to the maximum luminance in the scene prevents burn-out.

Colour Space

The final step is to apply the tone mapped luminance $L_d$ to each pixel's colour without disturbing its hue. This is done by converting from RGB to XYZ and on to xyY, which separates luminance ($Y$) from chromaticity ($x$, $y$).

Equation 5, converting from RGB to XYZ to xyY:

$$\begin{bmatrix}X\\Y\\Z\end{bmatrix} = \begin{bmatrix}0.4124 & 0.3576 & 0.1805\\0.2126 & 0.7152 & 0.0722\\0.0193 & 0.1192 & 0.9505\end{bmatrix}\begin{bmatrix}R\\G\\B\end{bmatrix},\qquad x = \frac{X}{X+Y+Z},\quad y = \frac{Y}{X+Y+Z}$$

Once you have your xyY value, apply your tone mapped luminance as follows.

Equation 6:

$$Y' = L_d(x,y)$$

Equation 7, converting from xyY to XYZ to RGB:

$$X = \frac{x}{y}\,Y',\qquad Z = \frac{1-x-y}{y}\,Y',\qquad \begin{bmatrix}R\\G\\B\end{bmatrix} = \begin{bmatrix}3.2406 & -1.5372 & -0.4986\\-0.9689 & 1.8758 & 0.0415\\0.0557 & -0.2040 & 1.0570\end{bmatrix}\begin{bmatrix}X\\Y\\Z\end{bmatrix}$$
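To make the colour space step concrete, here is a minimal GLSL sketch of equations 1 and 5 through 7, assuming linear sRGB/Rec. 709 primaries; the function names are illustrative rather than taken from the demo source.

```glsl
// Equation 1: luminance of a linear-light RGB pixel (Rec. 709 weights).
float luminance(vec3 rgb)
{
    return dot(rgb, vec3(0.2126, 0.7152, 0.0722));
}

// Equation 5: linear sRGB -> CIE XYZ (D65). GLSL matrices are
// column-major, so each group of three values below is one column.
vec3 rgbToXyz(vec3 rgb)
{
    mat3 m = mat3(0.4124, 0.2126, 0.0193,
                  0.3576, 0.7152, 0.1192,
                  0.1805, 0.0722, 0.9505);
    return m * rgb;
}

// Equation 7 (second half): CIE XYZ -> linear sRGB.
vec3 xyzToRgb(vec3 xyz)
{
    mat3 m = mat3( 3.2406, -0.9689,  0.0557,
                  -1.5372,  1.8758, -0.2040,
                  -0.4986,  0.0415,  1.0570);
    return m * xyz;
}

// Equations 5-7 end to end: replace a pixel's luminance with the tone
// mapped value Ld while keeping its chromaticity (x, y) unchanged.
vec3 replaceLuminance(vec3 rgb, float Ld)
{
    vec3 xyz = rgbToXyz(rgb);
    float s  = max(xyz.x + xyz.y + xyz.z, 1e-6);
    vec2 xy  = xyz.xy / s;                       // chromaticity
    float Y  = Ld;                               // equation 6
    return xyzToRgb(vec3(xy.x * Y / max(xy.y, 1e-6),
                         Y,
                         (1.0 - xy.x - xy.y) * Y / max(xy.y, 1e-6)));
}
```

Keeping the chromaticity fixed and changing only $Y$ is what stops the tone mapping from shifting hues.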
Implementation

Step 1: Floating Point Textures

To implement HDR, you will need floating point texture support. You no longer need to clamp your pixels into the 0.0 to 1.0 range; just let them take any value. The sun in your game could have a pixel brightness of 1000, for instance. Floating point textures do pose problems, though. They consume four times the texture memory, some video cards are incapable of generating mipmaps in this format, and, most importantly, not all platforms support them. WebGL in particular only supports floating point textures through an extension, which means the feature is not guaranteed to be available. As an alternative, the RGBE format can be used; the WebGL demo that accompanies this article uses this format. Simply put, RGBE is a standard 32 bit image that stores compressed floating point RGB values. It was created by Greg Ward Larson for his Radiance renderer at a time when floating point textures were not feasible. RGBE does have its limitations, though. It compresses the floating point values, so there will be some quality loss. You also have to generate your mipmaps manually, and you have to filter the pixels manually. It can also create rings and bands, as shown below.

Figure 3: HDR Encodings

You can see how RGBE and LogLuv (another format that stores floating point pixels in a 32 bit image) can affect image quality. Most often, though, these defects will be covered up by the amount of other visual detail in your render. This article does not go into detail about the RGBE format, but you can learn more about it at Cornell University's website, or you can read the source code for this article. One thing to note about the RGBE file format itself is that it is not a good format to ship. It uses RLE compression, which is a weak compression algorithm, and the file format is not supported by web browsers, making it a more difficult format to support in your WebGL games. Instead, you should encode the RGBE values into a PNG file. PNG offers much better compression and can be loaded directly into WebGL. The download for this article comes with a command line executable that you can use to convert RGBE files generated from Blender 3D into PNGs.
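On the GPU, decoding RGBE back to floating point is cheap. The sketch below is a minimal GLSL decoder, assuming the RGBE data was uploaded as a plain 8-bit RGBA texture (which the PNG approach above produces); the function name is illustrative.

```glsl
// Decode an RGBE texel into linear floating point RGB.
// rgbe comes from texture2D on an 8-bit RGBA texture, so every
// channel is already normalized to [0, 1]; the alpha channel holds
// a shared exponent biased by 128.
vec3 decodeRGBE(vec4 rgbe)
{
    if (rgbe.a == 0.0) {
        return vec3(0.0);             // an exponent byte of 0 means black
    }
    float e = rgbe.a * 255.0 - 128.0; // recover and unbias the exponent
    return rgbe.rgb * exp2(e);        // apply the shared scale
}
```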
Step 2: Adjusting your Workflow

Implementing HDR requires you to conform to a fully HDR pipeline. This means that if you use lightmaps, those textures must be in an HDR format. You don't need to convert your standard colour textures, normal maps, or other such textures. If you don't use lightmaps, you can skip this step.

Step 3: Create Two Floating Point Framebuffer Objects

The first framebuffer object will store the results of the rendered scene in an HDR texture. The second framebuffer object will store the log luminance values of your HDR texture; this data is required for the tone mapping stage. To calculate the log luminance, use a simple shader that computes the log of equation 1 for each pixel, converting from HDR to log luminance.
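A minimal sketch of that pass follows, assuming the HDR scene texture and texture coordinates are bound as uHdrTexture and vTexCoord (illustrative names) and that the target is a floating point render target, since log luminance can be negative. The green channel carries the raw luminance so the next step can also track the maximum.

```glsl
precision mediump float;

uniform sampler2D uHdrTexture;  // the rendered HDR scene
varying vec2 vTexCoord;

void main()
{
    const float delta = 0.0001;  // avoids log(0) on pure black pixels
    vec3 hdr = texture2D(uHdrTexture, vTexCoord).rgb;
    // Equation 1: luminance from linear RGB.
    float Lw = dot(hdr, vec3(0.2126, 0.7152, 0.0722));
    // Red: the term summed in equation 2. Green: raw luminance,
    // kept for the maximum-luminance reduction in step 4.
    gl_FragColor = vec4(log(delta + Lw), Lw, 0.0, 1.0);
}
```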
Step 4: Find the Average and Maximum Luminance

To find the average and maximum luminance values, you need to create a mipmap of your log luminance texture, downsampling it repeatedly until you reach a 1x1 pixel that contains your answer.

Mipmapping the luminance texture.

To find the average luminance, you bilinearly filter adjacent pixels until you reach a 1x1 pixel. To find the maximum luminance, you keep the pixel with the highest luminance value at each reduction. This requires converting each RGB pixel into a floating point luminance value using equation 1 and carrying the highest value forward.
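Both reductions can share one pass. The sketch below is a single 2x2 reduction step under the packing used above (log luminance in red, raw luminance in green); uLumTexture and uTexelSize are illustrative names for the previous level and its reciprocal resolution. Run it repeatedly, halving the size each time, until you reach 1x1.

```glsl
precision mediump float;

uniform sampler2D uLumTexture;  // previous reduction level
uniform vec2 uTexelSize;        // 1.0 / resolution of the previous level
varying vec2 vTexCoord;

void main()
{
    // Fetch the 2x2 block of the previous level this pixel covers.
    vec4 a = texture2D(uLumTexture, vTexCoord + uTexelSize * vec2(-0.5, -0.5));
    vec4 b = texture2D(uLumTexture, vTexCoord + uTexelSize * vec2( 0.5, -0.5));
    vec4 c = texture2D(uLumTexture, vTexCoord + uTexelSize * vec2(-0.5,  0.5));
    vec4 d = texture2D(uLumTexture, vTexCoord + uTexelSize * vec2( 0.5,  0.5));
    // Red: running average of log luminance (becomes the log-average).
    float avgLog = (a.r + b.r + c.r + d.r) * 0.25;
    // Green: running maximum of the raw luminance.
    float maxLum = max(max(a.g, b.g), max(c.g, d.g));
    gl_FragColor = vec4(avgLog, maxLum, 0.0, 1.0);
}
```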
Step 5: Tone Mapping

Apply Reinhard tone mapping to your HDR image. Substitute the average luminance calculated in step 4 into equation 3 and the maximum luminance into equation 4, then apply the resulting luminance to each pixel's colour as described in the colour space section.
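A sketch of the final full-screen pass, assuming uAvgLum holds exp() of the 1x1 red channel (undoing the log to recover the log-average) and uMaxLum holds the 1x1 green channel. For brevity it scales the colour by the luminance ratio rather than doing the full xyY round trip shown earlier; both preserve chromaticity.

```glsl
precision mediump float;

uniform sampler2D uHdrTexture;  // the rendered HDR scene
uniform float uAvgLum;          // log-average luminance (equation 2), from step 4
uniform float uMaxLum;          // maximum scene luminance, from step 4
uniform float uKey;             // the key "a" in equation 3, e.g. 0.18
varying vec2 vTexCoord;

void main()
{
    vec3 hdr = texture2D(uHdrTexture, vTexCoord).rgb;
    float Lw = dot(hdr, vec3(0.2126, 0.7152, 0.0722));          // equation 1
    float L  = (uKey / uAvgLum) * Lw;                           // equation 3
    float Lwhite = (uKey / uAvgLum) * uMaxLum;                  // white point in scaled units
    float Ld = (L * (1.0 + L / (Lwhite * Lwhite))) / (1.0 + L); // equation 4
    // Scale the colour by the luminance ratio; this keeps the
    // chromaticity, like the RGB -> xyY -> RGB conversion above.
    vec3 ldr = hdr * (Ld / max(Lw, 1e-6));
    gl_FragColor = vec4(ldr, 1.0);
}
```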
Conclusion

HDR improves the quality of your renders by better simulating human vision. With access to a higher range of luminosity data, you can apply a tone mapping algorithm to compress that data into an LDR image that provides the most detail. You also get a free auto exposure system that better mimics how our vision reacts to varying light levels. HDR can be implemented using a platform independent format such as RGBE, but you are also free to take direct advantage of the floating point textures that mainstream video cards have supported for quite some time; you just need to weigh their larger memory requirements.

References

Erik Reinhard, Michael Stark, Peter Shirley, and James Ferwerda. "Photographic Tone Reproduction for Digital Images". ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002).
Greg Ward Larson. The RGBE (Radiance) image format. Cornell University Program of Computer Graphics.