
Camera Calibration Fundamentals - Practical Guide to Intrinsic Parameters and Distortion Correction

9 min read

What Is Camera Calibration - Why It Is Needed

Camera calibration is the process of estimating mathematical parameters that describe a camera's optical and geometric characteristics. It is essential for accurately predicting where 3D world points project onto 2D images, forming the foundation for virtually all computer vision tasks.

Where calibration is needed:

- 3D measurement and reconstruction (stereo vision, depth estimation)
- Augmented reality (aligning virtual objects with the real scene)
- Robotics and SLAM (estimating camera pose)
- Industrial inspection and dimensional measurement
- Image correction (distortion removal, perspective correction)

Camera parameter classification:

- Intrinsic parameters: focal length (fx, fy), principal point (cx, cy), and lens distortion coefficients
- Extrinsic parameters: rotation R and translation t describing the camera's pose relative to the world

Intrinsic parameters remain constant as long as the camera-lens combination is unchanged, though zoom lenses require calibration at each focal length. Extrinsic parameters change whenever the camera moves. Standard calibration captures a known pattern (chessboard) from multiple angles to simultaneously estimate all parameters.

The Pinhole Camera Model - Mathematical Description of Projection

The pinhole camera model mathematically describes an ideal camera where light passes through a single point (pinhole) to project onto the image plane, ignoring lens thickness. It serves as the widely-used approximation model for real cameras in computer vision.

Projection equation: The relationship between a 3D point (X, Y, Z) and its image projection (u, v) is expressed as:

s[u, v, 1]^T = K[R|t][X, Y, Z, 1]^T

where K is the intrinsic parameter matrix (camera matrix), [R|t] is the extrinsic parameters (3x4 matrix), and s is a scale factor.

Intrinsic parameter matrix K:

K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

Typical intrinsic parameter examples:

- iPhone 14 Pro (main camera): fx ≈ 3000 px, cx ≈ 2016 px, cy ≈ 1512 px (4032x3024 resolution)
- GoPro Hero 11: fx ≈ 1500 px (shorter focal length due to the wide angle)
- Industrial camera (5 MP, 8 mm lens): fx ≈ 2800 px

A larger focal length means telephoto (narrow FOV); a smaller one means wide-angle (wide FOV).
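The projection equation can be checked numerically. Below is a minimal sketch with made-up intrinsics (1000 px focal length, 1280x720 image) and an identity pose (R = I, t = 0); the values are illustrative, not from the article:

```python
import numpy as np

# Hypothetical intrinsics for a 1280x720 camera (example values)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Extrinsics: camera at the world origin looking down +Z (R = I, t = 0)
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])

# A 3D point 1 m in front of the camera, slightly off-axis, in homogeneous form
X = np.array([0.1, 0.2, 1.0, 1.0])

p = K @ Rt @ X        # s * [u, v, 1]^T
u, v = p[:2] / p[2]   # divide by the scale factor s (= depth Z here)
print(u, v)           # 740.0 560.0
```

Dividing by the scale factor s is the perspective division: points twice as far away move half as far from the principal point.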

Lens Distortion Models - Radial and Tangential Distortion

Real lenses produce deviations (distortion) from the ideal pinhole model. Accurately modeling and correcting distortion dramatically improves measurement precision. By default, OpenCV uses five distortion coefficients (k1, k2, p1, p2, k3).

Radial Distortion: Distortion that increases with distance from the lens center, affecting pixels more at image periphery:

x_distorted = x(1 + k1*r² + k2*r⁴ + k3*r⁶)

y_distorted = y(1 + k1*r² + k2*r⁴ + k3*r⁶)

where r² = x² + y² (distance from center in normalized coordinates). Typical values: wide-angle k1 ≈ -0.3, standard lens k1 ≈ -0.05, telephoto k1 ≈ 0.01.

Tangential Distortion: Occurs when lens and sensor are not perfectly parallel, depending on manufacturing precision and typically smaller than radial distortion:

x_distorted = x + 2*p1*x*y + p2*(r² + 2*x²)

y_distorted = y + p1*(r² + 2*y²) + 2*p2*x*y

Typical values: p1, p2 ≈ ±0.001. Negligible for high-quality lenses but correction needed for inexpensive lenses and webcams. Ultra-wide lenses like GoPro may require higher-order coefficients (k4, k5, k6) or thin prism models.
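The radial and tangential equations combine into OpenCV's standard 5-coefficient model. A small sketch in plain NumPy-free Python, with coefficient values drawn from the typical ranges above (purely illustrative):

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the 5-coefficient radial + tangential model to normalized coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Wide-angle-like coefficients from the typical ranges above (example values)
x_d, y_d = distort(0.5, 0.5, k1=-0.3, k2=0.0, k3=0.0, p1=0.001, p2=0.001)
```

Note that the image center (x = y = 0) maps to itself: distortion grows with r, which is why correction matters most near the image edges.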

Zhang's Method Calibration Procedure

Zhang's method (2000) estimates all camera parameters by simply photographing a planar pattern (chessboard) from multiple angles. Requiring no special equipment and fully implemented in OpenCV, it is the most widely used calibration method in practice.

Required equipment:

- A chessboard (checkerboard) pattern with a known square size, printed and mounted on a flat, rigid surface
- The camera and lens to be calibrated, with focus and zoom fixed
- Even, diffuse lighting so corners stay sharp and high-contrast

Capture guidelines:

- Capture 10-20 images with the pattern at varied angles, distances, and positions
- Cover the entire frame, including corners and edges where distortion is strongest
- Keep the pattern fully visible, in focus, and free of motion blur

OpenCV implementation steps:

1. Detect corners in each image with cv2.findChessboardCorners()
2. Refine them to subpixel accuracy with cv2.cornerSubPix()
3. Pair the detected corners with their known 3D board coordinates
4. Estimate K, distortion coefficients, and per-view poses with cv2.calibrateCamera()
5. Evaluate the reprojection error

Reprojection error evaluation: Calibration quality is assessed by reprojection error - reprojecting 3D points using estimated parameters and computing distance to detected points. Good calibration achieves 0.1-0.5 pixels; acceptable is below 1.0 pixel. Above 1.0 requires reviewing capture conditions or corner detection.

Distortion Correction Implementation and Applications

This section covers how to remove image distortion using calibration-derived distortion coefficients. Distortion correction is a mandatory preprocessing step for 3D measurement, with correction accuracy determining downstream processing precision.

OpenCV distortion correction:

- cv2.undistort(): one-call correction for single images
- cv2.initUndistortRectifyMap() + cv2.remap(): compute the correction map once, then apply it to every frame (efficient for video streams)

Corrected image size: Distortion correction causes edge cropping, so cv2.getOptimalNewCameraMatrix() computes a new camera matrix. alpha=0 completely removes black regions (smaller image), alpha=1 retains all pixels (black regions remain). Adjust alpha between 0-1 based on application requirements.

Fisheye lens correction: Standard distortion models cannot handle fisheye lenses with 180°+ field of view. OpenCV's cv2.fisheye module uses equidistant projection model for ultra-wide lens calibration and correction via cv2.fisheye.calibrate() and cv2.fisheye.undistortImage().

Saving calibration results: Save estimated parameters in YAML or JSON format for application use. OpenCV's cv2.FileStorage class provides easy save/load functionality. Results remain valid as long as the camera-lens combination and the focus/zoom setting are unchanged.

High-Precision Calibration Techniques and Troubleshooting

This section covers techniques for achieving high-precision calibration (reprojection error below 0.1 pixels) required in industrial and research applications, along with solutions to common problems.

High-precision techniques:

- Use more views (20-30) with strong pose variation, especially tilted views
- Refine corners to subpixel accuracy and verify that the pattern is truly flat
- Consider ChArUco boards, whose corners are detected more robustly than plain chessboard corners
- For strongly distorting lenses, enable the rational model (k4-k6) instead of forcing the 5-coefficient model

Troubleshooting:

- Corner detection fails: improve lighting and contrast, and eliminate glare on the pattern
- Reprojection error stays above 1.0 pixel: check for motion blur, a warped pattern, or an incorrect square size
- Parameters vary between runs: add views near the image edges and increase pose diversity

Auto-calibration: SLAM systems use self-calibration that automatically estimates camera parameters from feature point tracking. ORB-SLAM3 and COLMAP can estimate intrinsic parameters from natural scene features. Precision is lower than pattern-based methods but requires no preparation, making it practical for many applications.

Related Articles

Monocular Depth Estimation Technology and Applications - Inferring Depth from a Single Image

Systematic guide to depth map generation from MiDaS and DPT models to autonomous driving and AR applications. Covers principles through practical implementation.

Panorama Stitching Algorithm Deep Dive - From Feature Detection to Seamless Blending

Detailed explanation of panorama synthesis from multiple images. Covers feature matching, homography estimation, image warping, and multi-band blending at implementation level.

Stereo Vision and Distance Measurement - Recovering 3D Information from Disparity

Complete guide to stereo vision from principles to implementation. Covers epipolar geometry, stereo matching, and depth calculation from disparity maps with code examples.

Image Deblurring Principles and Practice - From Motion Blur to Defocus Recovery

Systematic guide to image deblurring techniques covering Wiener filtering, blind deconvolution, and state-of-the-art deep learning methods with implementation details.

Image Processing for Industrial Inspection - From Visual Inspection to Dimensional Measurement

Systematic guide to image processing in manufacturing quality control covering defect detection, dimensional measurement, pattern matching, and deep learning anomaly detection.

Perspective Correction Principles and Practice - Accurately Fixing Architectural Photo Distortion

From the mathematical principles of projective transformation to practical software correction procedures. Learn to accurately fix perspective distortion in architectural photos and document scans.
