Harris Corner Detection
An algorithm for detecting corners in images by analyzing the eigenvalues of the local gradient autocorrelation matrix, identifying points with significant intensity changes in two directions.
Harris corner detection, proposed by Chris Harris and Mike Stephens in 1988, identifies interest points in images where intensity changes significantly in multiple directions. A corner is distinguished from an edge (change in one direction only) and a flat region (no change) by examining how a local patch's intensity varies under small shifts in all directions.
The algorithm centers on the structure tensor (autocorrelation matrix) M, a 2x2 matrix whose entries are Gaussian-weighted sums, over a local window, of the products Ix^2, Iy^2, and IxIy of the image gradients Ix and Iy. When both eigenvalues λ1 and λ2 of this matrix are large, the point exhibits strong variation in two independent directions and qualifies as a corner.
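A minimal sketch of building the entries of M with NumPy and SciPy; the `sigma` window scale and the function name are illustrative choices, not part of the original formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma=1.0):
    """Per-pixel entries of the 2x2 structure tensor M.

    `sigma` controls the Gaussian weighting window (illustrative default).
    """
    img = img.astype(float)
    # Simple central-difference gradients; np.gradient returns (d/drow, d/dcol)
    Iy, Ix = np.gradient(img)
    # Gaussian-weighted sums of the gradient products over the local window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    # M at each pixel is [[Sxx, Sxy], [Sxy, Syy]]
    return Sxx, Syy, Sxy
```

At a corner both diagonal sums are large (strong variation in two directions); in a flat region all three entries are near zero.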
- Corner response function: Rather than computing eigenvalues directly, Harris uses R = det(M) - k * trace(M)^2, where k is typically set between 0.04 and 0.06. Points with R above a threshold are corner candidates
- Non-maximum suppression: Among neighboring response values, only local maxima within a window (commonly 3x3 or 5x5) are retained, eliminating duplicate detections in close proximity
- Rotation invariance: Since eigenvalues of the structure tensor are invariant under rotation, Harris corners remain stable when the image is rotated. However, the detector is not inherently scale-invariant
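The response function and non-maximum suppression steps above can be sketched end-to-end; the specific threshold, `sigma`, and function names below are illustrative assumptions, with `k` and the NMS window chosen from the ranges mentioned:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(img, k=0.05, sigma=1.0):
    # Structure-tensor entries: smoothed gradient products
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2      # det(M) = λ1 * λ2
    trace = Sxx + Syy               # trace(M) = λ1 + λ2
    return det - k * trace ** 2     # R = det(M) - k * trace(M)^2

def detect_corners(img, k=0.05, sigma=1.0, rel_thresh=0.01, nms_size=3):
    R = harris_response(img, k, sigma)
    # Non-maximum suppression: keep only local maxima in an nms_size window
    local_max = maximum_filter(R, size=nms_size) == R
    # Threshold relative to the strongest response (illustrative choice)
    strong = R > rel_thresh * R.max()
    return np.argwhere(local_max & strong)  # (row, col) corner candidates
```

Note how edges are rejected automatically: there det(M) is near zero while trace(M) is large, so R is negative and fails any positive threshold.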
In OpenCV, cv2.cornerHarris() implements this algorithm with parameters for block size, Sobel kernel size, and the k constant. The Harris detector remains widely used in real-time tracking, camera calibration, and as a preprocessing step for feature matching pipelines due to its computational efficiency and detection stability.