
HDR Image Processing - Understanding and Working with High Dynamic Range


Understanding Dynamic Range - The Spectrum of Light and Dark

Dynamic range refers to the ratio between the darkest and brightest luminance levels an image can represent. The human eye spans approximately 20 stops (EV) of dynamic range, simultaneously perceiving brightness variations from dark interiors to bright outdoor scenes. Conventional SDR (Standard Dynamic Range) images can represent only approximately 6-8 stops.

This limitation forces photographers to choose between blown-out skies or silhouetted subjects in backlit scenes. HDR (High Dynamic Range) imaging breaks this constraint, capturing detail in both bright and dark areas within a single image.

Numerical understanding of dynamic range:

Dynamic range is measured in stops (EV), and each stop doubles the light level: 6-8 stops corresponds to roughly a 64:1 to 256:1 contrast ratio, while 20 stops approaches 1,000,000:1. HDR imaging's essence is "more faithfully recording real-world light intensity." Direct sunlight is tens of thousands of times brighter than indoor lighting, and HDR images can preserve this difference accurately as numerical values. That extra information dramatically increases the freedom of exposure adjustment and tone mapping in post-processing.
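The arithmetic behind these figures is simple enough to sketch in a few lines of Python (the stop counts themselves are the approximations cited above):

```python
# The stop scale is logarithmic: each stop (EV) doubles the light,
# so n stops span a 2**n : 1 contrast ratio.
def stops_to_ratio(stops: float) -> float:
    """Contrast ratio covered by the given number of stops."""
    return 2.0 ** stops

# SDR (~6-8 stops) vs. the human eye (~20 stops):
print(f"6 stops  -> {stops_to_ratio(6):,.0f}:1")   # 64:1
print(f"8 stops  -> {stops_to_ratio(8):,.0f}:1")   # 256:1
print(f"20 stops -> {stops_to_ratio(20):,.0f}:1")  # 1,048,576:1
```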

HDR Image File Formats - Choosing by Use Case

Multiple file formats exist for storing HDR images, each with distinct characteristics and intended uses. Selecting the appropriate format based on project requirements is essential for optimal workflow efficiency.

OpenEXR (.exr):

Developed by ILM (Industrial Light & Magic), this is the standard in the film and VFX industries. It stores each channel as 16-bit (half) or 32-bit floating point, covering a dynamic range far beyond any physical scene. Supports multi-layer storage (depth, normals, albedo in one file), tiled structure (partial loading), and diverse compression methods (ZIP, PIZ, DWAA). File sizes are large, but quality is uncompromised.

Radiance HDR (.hdr):

A historic format developed by Greg Ward. Stores each pixel in 4 bytes using RGBE (Red, Green, Blue, Exponent) encoding. Less precise than EXR but smaller file sizes, widely used for distributing environment maps (IBL: Image-Based Lighting). Frequently used as environment light sources in web 3D rendering (Three.js, Babylon.js).
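The RGBE idea is compact enough to sketch. Below is a minimal illustration of Ward's shared-exponent encoding for a single pixel; real .hdr files add a header and run-length compression on top of this:

```python
import math

def rgbe_encode(r: float, g: float, b: float) -> tuple[int, int, int, int]:
    """Pack a linear RGB triple into 4-byte RGBE (shared-exponent) form."""
    v = max(r, g, b)
    if v < 1e-32:                       # too dark to represent: pure black
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)  # v = mantissa * 2**exponent
    scale = mantissa * 256.0 / v        # all three channels share one exponent
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_decode(r: int, g: int, b: int, e: int) -> tuple[float, float, float]:
    """Unpack RGBE back to linear RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - (128 + 8))  # 2 ** (e - 136)
    return (r * scale, g * scale, b * scale)
```

Because the exponent is shared, precision is limited by the brightest channel, which is why the text above calls the format less precise than EXR.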

AVIF (HDR-capable):

An AV1 codec-based image format supporting 10-bit and 12-bit HDR content. Compatible with PQ (Perceptual Quantizer) and HLG (Hybrid Log-Gamma) transfer functions, suitable for HDR display output. Browser support is advancing, making it a strong candidate for web HDR image delivery.
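The PQ transfer function (SMPTE ST 2084) maps absolute luminance up to 10,000 cd/m² into a 0-1 signal. A direct transcription of its defining constants, as a sketch:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Absolute luminance (0-10000 cd/m^2) -> PQ signal in [0, 1]."""
    l = (nits / 10000.0) ** M1
    return ((C1 + C2 * l) / (1.0 + C3 * l)) ** M2

def pq_decode(signal: float) -> float:
    """PQ signal in [0, 1] -> absolute luminance in cd/m^2."""
    p = signal ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)
```

Unlike a gamma curve, PQ encodes absolute luminance, which is what makes it suitable for mastering content for HDR displays.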

JPEG XL (.jxl):

Designed as a next-generation image format with native HDR content support. Enables lossless transcoding from existing JPEG and features Gain Map capability storing both HDR and SDR in a single file. Browser support remains limited but technically the most advanced format available.

Tone Mapping - Displaying HDR on SDR Displays

HDR image dynamic range exceeds standard display (SDR) capabilities, causing blown highlights or crushed shadows when displayed directly. Tone mapping compresses HDR's wide dynamic range into SDR display's limited range while preserving visual quality.

Global tone mapping:

Applies an identical transformation function across the entire image. Fast to compute and simple to implement, but tends to lose local contrast. Representative operators include Reinhard, Drago, and Mantiuk.
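As a concrete instance, the Reinhard global operator compresses scaled luminance with L/(1+L). A minimal NumPy sketch; the `key` parameter is the conventional middle-grey target:

```python
import numpy as np

def reinhard_global(hdr: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Reinhard global tone mapping: L_out = L_s / (1 + L_s), per pixel."""
    # Rec. 709 luminance
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # geometric mean luminance
    scaled = key / log_avg * lum                   # map scene average to "key"
    mapped = scaled / (1.0 + scaled)               # compress into [0, 1)
    # Rescale RGB by the luminance ratio to preserve hue
    return hdr * (mapped / np.maximum(lum, 1e-6))[..., None]
```

Every pixel goes through the same curve, which is exactly why strong local contrast in the original scene tends to flatten out.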

Local tone mapping:

Varies transformation parameters based on local image brightness. Brightens dark regions and darkens bright regions, maintaining local contrast while compressing dynamic range. Closer to human eye adaptation mechanisms but computationally expensive and prone to unnatural halos (light bleeding artifacts) around high-contrast edges.
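A minimal illustration of the local idea, using a box-filtered neighborhood average as the adaptation level (production operators use edge-aware filters precisely to suppress the halos mentioned above):

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Mean filter over a (2r+1)^2 window via a summed-area table."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    s = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return (s[k:, k:] - s[k:, :-k] - s[:-k, k:] + s[:-k, :-k]) / (k * k)

def local_tonemap(lum: np.ndarray, radius: int = 8) -> np.ndarray:
    """Divide each pixel by its *local* adaptation level instead of a
    single global constant. Blurring across strong edges is what
    produces halo artifacts around them."""
    adaptation = box_blur(lum, radius)
    return lum / (1.0 + adaptation)
```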

HDR Capture Techniques - Bracket Shooting and Merge Processing

Camera sensors have limited dynamic range capturable in a single exposure. The most common method for creating HDR images is "bracket shooting + merge" - capturing multiple exposures and combining them in software.

Bracket shooting basics:

Capture 3-7 frames of identical composition at different exposures, typically at 2EV intervals, with a minimum of three shots: underexposed (for highlight detail), properly exposed, and overexposed (for shadow detail). Use a tripod to fix the camera position, keep the aperture constant, and vary only the shutter speed; changing the aperture alters depth of field, causing bokeh mismatches during compositing.
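The exposure-time arithmetic is straightforward: each +1 EV doubles the shutter time at fixed aperture and ISO. A small helper (the 1/60 s base is an arbitrary example):

```python
def bracket_shutter_speeds(base: float, evs: list[float]) -> list[float]:
    """Shutter speeds in seconds for a bracket around a base exposure.
    Aperture and ISO stay fixed; +1 EV doubles the time, -1 EV halves it."""
    return [base * 2.0 ** ev for ev in evs]

# Three-shot bracket at 2 EV intervals around a 1/60 s base exposure,
# giving 1/240 s (highlights), 1/60 s, and 1/15 s (shadows):
speeds = bracket_shutter_speeds(1 / 60, [-2, 0, +2])
```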

HDR merge algorithms:

Algorithms generating HDR images from multiple exposures select and combine the most reliable exposure value for each pixel. The representative method is Debevec & Malik's (1997) camera response function estimation, which reverse-engineers the camera's response curve from pixel values across exposures to recover true radiance values.
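A merge along these lines can be sketched under the simplifying assumption of a linear camera response; estimating the actual response curve is precisely what Debevec & Malik add on top of this step:

```python
import numpy as np

def merge_exposures(images: list[np.ndarray], times: list[float]) -> np.ndarray:
    """Merge normalized exposures (floats in [0, 1]) into a radiance map,
    assuming linear camera response. A 'hat' weight trusts mid-tones most
    and distrusts near-black and near-white (clipped) pixels."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weighting, peaks at 0.5
        num += w * (img / t)               # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Each exposure votes for the radiance it saw, and the weighting implements the "select the most reliable exposure per pixel" idea described above.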

Deghosting:

When subjects move during bracket shooting, ghost artifacts (semi-transparent afterimages) appear in composited images. Motion detection algorithms identify moved regions, using information from only one exposure for those areas. Software like Adobe Lightroom and Photomatix Pro include automatic deghosting features.
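A naive motion detector can be sketched as a per-pixel comparison against a reference frame; real implementations additionally align the frames first and feather the resulting mask:

```python
import numpy as np

def ghost_mask(images, times, ref_idx=1, threshold=0.1):
    """Flag pixels that moved between exposures. Each frame is divided by
    its exposure time so frames are compared in (approximately) linear
    radiance units; pixels whose radiance disagrees with the reference
    frame by more than the threshold are treated as motion, and the merge
    should use only the reference exposure there."""
    ref = images[ref_idx] / times[ref_idx]
    mask = np.zeros(ref.shape, dtype=bool)
    for i, (img, t) in enumerate(zip(images, times)):
        if i != ref_idx:
            mask |= np.abs(img / t - ref) > threshold
    return mask
```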

Single-shot HDR:

Modern smartphones and mirrorless cameras generate HDR images from single shutter presses. Technologies include assigning different exposure times to individual sensor pixels (Quad Bayer, Dual Gain) and computational photography techniques that composite rapidly captured multiple frames in real-time. Google's HDR+ and Apple's Smart HDR exemplify this approach.

HDR Images on the Web - Browser and CSS Support Status

As HDR displays proliferate, web browsers are gaining the ability to display HDR content. However, support is still evolving, making fallback strategies essential for production deployment.

HDR display adoption:

As of 2026, MacBook Pro (Liquid Retina XDR), iPhone (Super Retina XDR), many OLED TVs, and select Windows laptops support HDR display. Displays with peak brightness above 1000 nits benefit most from HDR content. SDR display peak brightness is typically 300-400 nits.

CSS color-gamut and dynamic-range media queries:

@media (dynamic-range: high) detects HDR-capable displays for conditional HDR content delivery. @media (color-gamut: p3) detects Display P3 gamut support for more vivid colors. Combining these media queries enables progressive enhancement delivering optimal images based on device capabilities.

Gain Map HDR:

Proposed by Apple and adopted by Google, Gain Map embeds HDR information as additional data within SDR images. SDR devices display the normal SDR image while HDR devices apply the Gain Map for HDR rendering. Available in both JPEG and AVIF, this practical approach provides HDR experiences while maintaining backward compatibility.
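A heavily simplified sketch of how a renderer applies a gain map; the actual Gain Map metadata additionally carries min/max log2 gain, per-channel offsets, and a gamma for the map itself, and `headroom_stops` here is an illustrative stand-in for that metadata:

```python
import numpy as np

def apply_gain_map(sdr_linear: np.ndarray, gain: np.ndarray,
                   headroom_stops: float = 2.0) -> np.ndarray:
    """Recover an HDR rendition from a linear SDR base image plus a gain map.
    Simplified model: each per-pixel gain value in [0, 1] selects a boost of
    up to `headroom_stops` stops above the SDR value."""
    return sdr_linear * 2.0 ** (gain * headroom_stops)
```

An SDR device simply ignores the map and shows the base image, which is where the backward compatibility comes from.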

Canvas API HDR rendering:

Canvas API's getContext('2d', { colorSpace: 'display-p3' }) enables wide-gamut rendering. WebGL/WebGPU use floating-point textures for HDR rendering with tone mapping applied at final output. Adoption is advancing in games and 3D visualization applications.

Practical HDR Image Processing - Tools and Workflows

Concrete tools and workflows for handling HDR images in real projects. Building a consistent HDR workflow from capture to final output is key to maintaining quality throughout the pipeline.

Desktop tools:

Adobe Lightroom and Photomatix Pro, both noted above for their automatic deghosting, are the standard desktop choices for merging bracketed exposures and tone mapping.

Programmatic HDR processing:

Python's OpenCV provides cv2.createMergeDebevec() for HDR merge and cv2.createTonemap() for tone mapping. Node.js's sharp library supports HEIF (HDR) read/write operations. C++ uses the OpenEXR library as the industry standard for production pipelines.

Web delivery workflow:

Recommended workflow for web HDR delivery: maintain master images in EXR or 16-bit TIFF; generate SDR versions in JPEG/WebP; generate HDR versions in AVIF (10-bit, PQ) or Gain Map JPEG; use <picture> elements with media attributes to serve HDR/SDR appropriately; ensure automatic SDR fallback for non-HDR environments.

Performance considerations:

HDR images tend to have larger file sizes than SDR (10-bit data is approximately 1.25x the size of 8-bit). CDN caching strategies, lazy loading, and appropriate compression settings minimize the performance impact. The Gain Map approach adds only a few KB of extra data to an SDR image, the smallest file-size increase among the available options.

Related Articles

Color Space Fundamentals - Understanding the Differences Between sRGB, Display P3, and Adobe RGB

Learn the essential concepts of color spaces in web and design, with detailed comparisons of sRGB, Display P3, and Adobe RGB characteristics.

Image Format Comparison - JPEG/PNG/WebP/AVIF/GIF/BMP Features and Use Cases

Compare technical characteristics of 6 major image formats. Organized comparison of compression methods, color depth, transparency, animation, and browser support with optimal format selection by use case.

HDR Tone Mapping Types and Selection Guide - Global to Local Comparison

Systematic comparison of HDR tone mapping operators including Reinhard, Drago, and Mantiuk. Covers principles, characteristics, and use-case recommendations with examples.

RAW vs JPEG - Choosing the Right Format for Your Photography

Compare RAW and JPEG formats in terms of image quality, file size, and editing flexibility. Learn which format to choose for different shooting scenarios.

What is HEIC? How to Convert iPhone Photos to JPG

Learn about the HEIC format used by iPhones and how to convert to JPG. Understand why Apple uses HEIC, compatibility issues, and solutions.

Layer Compositing Fundamentals - Complete Blend Mode Guide with Practical Techniques

Explains image layer blend modes at the mathematical formula level. Covers the principles of Multiply, Screen, Overlay and other key modes with practical use cases and examples.
