Kernel
A small numerical matrix used in convolution operations. The kernel's values determine the filter type - blur, edge detection, sharpening, or emboss.
A kernel (also called a filter kernel or convolution matrix) is a small numerical matrix used in image convolution. Typically sized 3×3, 5×5, or 7×7 (odd dimensions), each value defines the weight applied to surrounding pixels. By changing kernel values alone, entirely different effects - blur, sharpen, edge detection, emboss - can be achieved.
Fundamental kernel design principles:
- Normalisation: Kernels whose elements sum to 1 preserve image brightness (blur-type filters). Kernels summing to 0 detect edges
- Symmetry: Centre-symmetric kernels produce isotropic (direction-independent) filters
- Size: Larger kernels affect wider areas but increase computational cost quadratically
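These design principles can be checked numerically. A minimal sketch (NumPy assumed; the Gaussian and Sobel kernels are standard textbook values):

```python
import numpy as np

# Blur-type kernel: elements sum to 1 after normalisation,
# so overall image brightness is preserved.
gaussian = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16.0
assert gaussian.sum() == 1.0

# Edge-detection kernel: elements sum to 0, so flat
# (constant-intensity) regions produce zero response.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
assert sobel_x.sum() == 0.0

# Centre symmetry: the Gaussian equals itself rotated 180 degrees,
# which is why the blur acts the same in every direction (isotropic).
assert np.array_equal(gaussian, np.rot90(gaussian, 2))
```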
Practical kernel examples:
- 3×3 Gaussian blur: [[1,2,1],[2,4,2],[1,2,1]], normalised by 1/16. Natural smoothing
- 3×3 Sharpen: [[0,-1,0],[-1,5,-1],[0,-1,0]]. Emphasises the centre pixel to enhance edges
- 3×3 Sobel (horizontal): [[-1,0,1],[-2,0,2],[-1,0,1]]. Detects vertical edges
- 3×3 Emboss: [[-2,-1,0],[-1,1,1],[0,1,2]]. Creates a raised relief effect
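To see one of these kernels in action, the sketch below (NumPy assumed; the tiny 3×6 test image is made up) correlates the horizontal Sobel kernel with an image containing a vertical edge:

```python
import numpy as np

# Made-up test strip: dark left half, bright right half
# (a vertical edge between columns 2 and 3).
img = np.array([[0, 0, 0, 9, 9, 9],
                [0, 0, 0, 9, 9, 9],
                [0, 0, 0, 9, 9, 9]], dtype=float)

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def response(y, x):
    """Correlation of the kernel with the 3x3 patch centred at (y, x)."""
    patch = img[y - 1:y + 2, x - 1:x + 2]
    return float((patch * sobel_x).sum())

# Zero in the flat regions, a strong response straddling the edge.
responses = [response(1, x) for x in range(1, 5)]
print(responses)  # [0.0, 36.0, 36.0, 0.0]
```

The kernel's left and right columns cancel wherever intensity is constant, so only positions where brightness changes horizontally produce a non-zero output.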
In code, OpenCV applies custom kernels via cv2.filter2D(img, -1, kernel), where -1 keeps the output depth the same as the input's. Define the kernel as a NumPy array and pass it to the function - designing custom filters requires no specialised knowledge beyond basic linear algebra.
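The operation itself needs nothing beyond array arithmetic. A dependency-free NumPy sketch of what cv2.filter2D computes (a 2-D correlation; the reflected-border padding here is an assumption, since OpenCV offers several border modes):

```python
import numpy as np

def filter2d(img, kernel):
    """Sketch of 2-D correlation as performed by cv2.filter2D.
    np.pad's 'reflect' mode approximates OpenCV's default
    reflected-border handling (an assumption for this sketch)."""
    kh, kw = kernel.shape
    py, px = kh // 2, kw // 2
    padded = np.pad(img, ((py, py), (px, px)), mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = padded[y:y + kh, x:x + kw]
            out[y, x] = (patch * kernel).sum()
    return out

gaussian = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16.0

# A blur kernel whose weights sum to 1 leaves a constant image unchanged:
flat = np.full((5, 5), 100.0)
blurred = filter2d(flat, gaussian)
print(np.allclose(blurred, 100.0))  # True
```

In practice the OpenCV call is preferable, since it is vectorised and handles multi-channel images; this loop version only makes the mechanics explicit.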
In deep learning, kernel values are not hand-designed but learned automatically from training data. Each CNN layer contains dozens to hundreds of kernels: lower layers learn edge and colour detectors, while higher layers recognise object parts and semantic concepts.