Keypoint Matching

What is Keypoint Matching?

Matching: Find corresponding features between images

Applications:

  • Image stitching: Panoramas
  • Object recognition: Find object in scene
  • 3D reconstruction: Stereo vision
  • Visual tracking: Follow objects
  • Image registration: Align images

Process: Detect → Describe → Match → Filter

Matching Approaches

Brute-Force (BF) Matching:

  • Compare each descriptor in set1 with all in set2
  • Find closest match
  • Simple but slow for large sets

FLANN Matching:

  • Fast Library for Approximate Nearest Neighbors
  • Tree-based search (KD-tree, LSH)
  • Much faster, slightly less accurate

Distance Metrics

For float descriptors (SIFT, SURF):

  • L1 (Manhattan): Σ|d1[i] - d2[i]|
  • L2 (Euclidean): √(Σ(d1[i] - d2[i])²)

For binary descriptors (ORB, BRIEF):

  • Hamming: Count of differing bits
  • Hamming2: for ORB with WTA_K = 3 or 4 (compares 2-bit units instead of single bits)

Lower distance = better match
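The metrics above are easy to verify by hand on toy descriptors (the values below are made up purely for illustration; real SIFT descriptors are 128-D and real ORB descriptors are 32 bytes):

```python
import numpy as np

# Toy float descriptors (4-D for readability)
d1 = np.array([1.0, 2.0, 3.0, 4.0])
d2 = np.array([1.0, 0.0, 3.0, 1.0])

l1 = np.sum(np.abs(d1 - d2))          # Manhattan: 0 + 2 + 0 + 3 = 5
l2 = np.sqrt(np.sum((d1 - d2) ** 2))  # Euclidean: sqrt(0 + 4 + 0 + 9)

# Toy binary descriptors packed as bytes (ORB-style)
b1 = np.array([0b10110010], dtype=np.uint8)
b2 = np.array([0b10010110], dtype=np.uint8)
hamming = int(np.unpackbits(b1 ^ b2).sum())  # count of differing bits
```

XOR marks the differing bits, so `np.unpackbits(b1 ^ b2).sum()` is exactly the Hamming distance.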

Brute-Force Matcher

  1. import cv2
  2. # BF matcher for ORB (binary)
  3. #ans: bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  4. # BF matcher for SIFT (float)
  5. #ans: bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
  6. # crossCheck: mutual best match
  7. #ans: ensures A's best match is B AND B's best match is A
  8. # match descriptors
  9. #ans: matches = bf.match(des1, des2)
  10. # returns DMatch objects
  11. #ans: each has .distance, .queryIdx, .trainIdx

DMatch Object

  1. matches = bf.match(des1, des2)
  2. #ans: m = matches[0]
  3. # properties
  4. #ans: distance = m.distance # lower is better
  5. #ans: queryIdx = m.queryIdx # index in des1
  6. #ans: trainIdx = m.trainIdx # index in des2
  7. # sort by distance
  8. #ans: matches = sorted(matches, key=lambda x: x.distance)
  9. #ans: best matches first (lowest distance)

Drawing Matches

  1. # simple match drawing
  2. orb = cv2.ORB_create()
  3. kp1, des1 = orb.detectAndCompute(gray1, None)
  4. kp2, des2 = orb.detectAndCompute(gray2, None)
  5. bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  6. matches = bf.match(des1, des2)
  7. matches = sorted(matches, key=lambda x: x.distance)
  8. # draw top 30 matches
  9. #ans: img_matches = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None, flags=2)
  10. #ans: flags=2 means cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS (unmatched keypoints are skipped)

kNN Matching

k-Nearest Neighbors: Find k best matches for each descriptor

Purpose: Enable ratio test (Lowe's method)

  1. # kNN with k=2 (2 best matches)
  2. bf = cv2.BFMatcher(cv2.NORM_HAMMING)
  3. #ans: matches = bf.knnMatch(des1, des2, k=2)
  4. #ans: returns list of lists: [[best, 2nd_best], ...]
  5. # each descriptor gets 2 matches
  6. #ans: for matches_for_one in matches:
  7. #ans:     m, n = matches_for_one # best and 2nd best

Ratio Test (Lowe's Test)

Purpose: Filter ambiguous matches

Method:

  • If best match much closer than 2nd best → good match
  • If similar distances → ambiguous, reject

Threshold: Typically 0.7-0.8

  1. bf = cv2.BFMatcher(cv2.NORM_HAMMING)
  2. matches = bf.knnMatch(des1, des2, k=2)
  3. # apply ratio test
  4. good_matches = []
  5. #ans: for m, n in matches:
  6. #ans:     if m.distance < 0.75 * n.distance:
  7. #ans:         good_matches.append(m)
  8. #ans: keeps only distinctive matches

FLANN Matcher

Advantages:

  • Much faster for large descriptor sets
  • Approximate search (trade accuracy for speed)
  • Different algorithms for different descriptor types

Algorithms:

  • KD-tree: For SIFT, SURF (float)
  • LSH: For ORB, BRIEF (binary)

FLANN for SIFT

  1. # FLANN parameters for SIFT (KD-tree)
  2. FLANN_INDEX_KDTREE = 1
  3. index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
  4. search_params = dict(checks=50)
  5. #ans: flann = cv2.FlannBasedMatcher(index_params, search_params)
  6. # trees: number of KD-trees
  7. #ans: more trees = better accuracy, slower
  8. # checks: number of leaf nodes to search
  9. #ans: higher = better accuracy, slower
  10. # match
  11. #ans: matches = flann.knnMatch(des1, des2, k=2)

FLANN for ORB

  1. # FLANN parameters for ORB (LSH)
  2. FLANN_INDEX_LSH = 6
  3. index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1)
  4. search_params = dict(checks=50)
  5. #ans: flann = cv2.FlannBasedMatcher(index_params, search_params)
  6. # LSH works on the raw uint8 binary descriptors
  7. #ans: no float32 conversion needed (that workaround is only for using a KD-tree index on binary data)
  8. #ans: note: LSH knnMatch may return fewer than k matches per descriptor
  9. # match
  10. matches = flann.knnMatch(des1, des2, k=2)

Homography Filtering

Homography: Perspective transformation matrix

RANSAC: Random Sample Consensus

  • Iteratively find transformation
  • Identify inliers (good matches) vs outliers

Use case: Filter geometric outliers after matching

Finding Homography

  1. # need at least 4 matches
  2. # extract matched point coordinates
  3. src_pts = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
  4. dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
  5. # find homography with RANSAC
  6. #ans: M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
  7. # M: 3x3 transformation matrix
  8. #ans: mask: inliers (1) vs outliers (0)
  9. # filter matches using mask
  10. #ans: inliers = [m for m, msk in zip(good_matches, mask.ravel()) if msk == 1]

Exercises - Part 1 (Concepts)

  1. # what is keypoint matching?
  2. #ans: finding corresponding features between images
  3. # what distance for binary descriptors?
  4. #ans: Hamming distance
  5. # what is crossCheck in BFMatcher?
  6. #ans: ensures mutual best match (bidirectional)
  7. # what is ratio test?
  8. #ans: compare 1st and 2nd best match, reject if similar
  9. # typical ratio test threshold?
  10. #ans: 0.75 (or 0.7-0.8 range)

Exercises - Part 2 (Concepts)

  1. # BF vs FLANN?
  2. #ans: BF exact but slow, FLANN approximate but fast
  3. # what is kNN matching?
  4. #ans: find k nearest neighbors for each descriptor
  5. # why k=2 for ratio test?
  6. #ans: need 2nd best match to compare with best
  7. # what is RANSAC?
  8. #ans: random sampling to find inliers, filter outliers
  9. # what is homography?
  10. #ans: perspective transformation matrix (3x3)

Exercises - Part 3 (Coding)

  1. # BF matcher for ORB
  2. #ans: bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  3. #ans: matches = bf.match(des1, des2)
  4. #ans: matches = sorted(matches, key=lambda x: x.distance)
  5. # kNN matching
  6. bf = cv2.BFMatcher(cv2.NORM_HAMMING)
  7. #ans: matches = bf.knnMatch(des1, des2, k=2)
  8. # ratio test
  9. good = []
  10. #ans: for m, n in matches:
  11. #ans:     if m.distance < 0.75 * n.distance:
  12. #ans:         good.append(m)

Exercises - Part 4 (Coding)

  1. # FLANN for SIFT
  2. FLANN_INDEX_KDTREE = 1
  3. index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
  4. search_params = dict(checks=50)
  5. #ans: flann = cv2.FlannBasedMatcher(index_params, search_params)
  6. #ans: matches = flann.knnMatch(des1, des2, k=2)
  7. # FLANN for ORB
  8. FLANN_INDEX_LSH = 6
  9. index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1)
  10. #ans: flann = cv2.FlannBasedMatcher(index_params, search_params)
  11. #ans: matches = flann.knnMatch(des1, des2, k=2) # LSH takes the uint8 descriptors directly

Exercises - Part 5 (Mixed)

  1. # complete matching pipeline
  2. orb = cv2.ORB_create()
  3. kp1, des1 = orb.detectAndCompute(gray1, None)
  4. kp2, des2 = orb.detectAndCompute(gray2, None)
  5. bf = cv2.BFMatcher(cv2.NORM_HAMMING)
  6. matches = bf.knnMatch(des1, des2, k=2)
  7. #ans: good = [m for m, n in matches if m.distance < 0.75 * n.distance]
  8. #ans: img_matches = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)

Exercises - Part 6 (Advanced)

  1. # homography filtering
  2. good_matches = [] # from ratio test
  3. src_pts = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
  4. dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
  5. #ans: M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
  6. #ans: inliers = [m for m, msk in zip(good_matches, mask.ravel()) if msk == 1]
  7. #ans: print(f"Inliers: {len(inliers)}/{len(good_matches)}")
