Disparity Map

An image recording the horizontal pixel displacement (disparity) between corresponding points in a stereo image pair. Depth can be computed from disparity via triangulation, making it an intermediate representation for 3D reconstruction.

A disparity map is the output of stereo matching, storing for each pixel the horizontal offset (disparity) between its position in the left image and its corresponding position in the right image. The relationship between disparity d and depth Z is given by Z = f * B / d, where f is the focal length in pixels and B is the baseline distance between cameras.
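The relationship Z = f * B / d can be sketched in a few lines of NumPy. This is a minimal illustration with made-up calibration values (focal length and baseline are assumptions, not from any real camera), masking out zero disparity, which corresponds to no match or a point at infinity:

```python
import numpy as np

# Hypothetical calibration values (assumptions for illustration only).
f = 700.0  # focal length in pixels
B = 0.12   # baseline between cameras, in meters

# A tiny example disparity map, in pixels.
disparity = np.array([[35.0, 70.0],
                      [14.0,  0.0]])

# Z = f * B / d; zero disparity means no match / infinite depth.
with np.errstate(divide="ignore"):
    depth = np.where(disparity > 0, f * B / disparity, np.inf)

# For d = 70 px: Z = 700 * 0.12 / 70 = 1.2 m
```

Note the inverse relationship: halving the disparity doubles the depth, which is why depth resolution from stereo degrades quadratically with distance.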

Disparity maps are commonly visualized as grayscale images in which brighter pixels indicate larger disparity (i.e., points closer to the camera). They are typically stored as 16-bit integers or floating-point values to preserve sub-pixel precision. OpenCV's StereoBM and StereoSGBM return fixed-point values scaled by 16, so the actual disparity is obtained by dividing the raw output by 16.

Disparity maps are essential in autonomous driving (obstacle detection and distance estimation), AR/VR (scene understanding and occlusion handling), and robotics (grasp planning and navigation). Commercial depth cameras like Intel RealSense and Stereolabs ZED internally compute disparity maps through hardware-accelerated stereo matching before converting to depth.
