Published on March 7, 2014
Author: adelkhawaji

SIFT vs. SURF Feature Matching

Adel Abdulrahman Khwaji
Institute of Information & Mathematical Sciences
Massey University at Albany, Auckland, New Zealand
Email:

Abstract: In this study I compare the Scale Invariant Feature Transform (SIFT) descriptor and the Speeded-Up Robust Features (SURF) descriptor. To compare them, I first detect the features: for SIFT I use the OpenCV difference-of-Gaussians detector, while for SURF I use the OpenCV Hessian detector. The data used for the comparison is a sub-image from the KITTI training dataset. For the evaluation of the descriptors, I use a single image from this dataset and manipulate it to form the test dataset. The test images consist of the image rotated by 90, 180, and 270 degrees, plus a scaled image at twice the size of the original. Because the manipulations are known, I can compute whether a feature point matched between the original image and a manipulated image is correct. If the matched feature point is no farther than 1 pixel from its actual position, it is marked as a correct match. The descriptors are evaluated by the percentage of good matches.

Keywords: SIFT; SURF; feature detectors; feature descriptors; feature point matching.

1 Introduction

In order to match images, some unique features have to be found in them. Each such feature at some location in the image is denoted a feature point. Given a greyscale image, the objective is to search for unique features that are robust to detection errors. [4] highlights some of the common feature detectors in use, including the Shi-Tomasi-Kanade feature detector [9], difference of Gaussians (DoG) [6], and fast Hessian [1]. In this study, I cover only the DoG and Hessian detectors.

The DoG detector first constructs a scale space. To do so, it uses Gaussian filters of different sizes; convolving each of these filters with the input image produces Gaussian-smoothed images at different scales.
Subtracting adjacent pairs of these smoothed images then gives the DoG images at various scales (an approximation of the Laplacian of Gaussian). The detector looks for candidate points within the DoG images, discarding points that lie on edges or have low contrast [10]. The fast Hessian detector first computes an integral image, in which each pixel stores the sum of all pixel intensities in the rectangle between the image origin and that pixel, so that the sum over any local neighbourhood can be obtained in constant time. The detector is based on the Hessian matrix and is used to detect blob-like structures: blobs are detected at locations where the determinant of the Hessian matrix is maximal, which also leads to a good scale selection.

To match feature points among images, each is first described by a descriptor. A descriptor captures some unique or salient features of a point, also called attributes. A feature point can be described by a number of attributes; the larger the number of unique attributes, the better the distinctiveness. To reduce the effect of image brightness or shadows, the hue, saturation, value (HSV) colour space is often used [3]; however, I limit myself to greyscale images. Another alternative for handling illumination changes is to apply a gradient operator to the input images before detecting feature points; examples of such operators are the Canny edge detector [2] and the census transform [11].
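The constant-time box sums that make the fast Hessian detector efficient can be illustrated with a short sketch (a minimal pure-Python version, not the OpenCV implementation; the function names are my own):

```python
def integral_image(img):
    """Build an integral image: ii[y][x] holds the sum of all pixels
    in the rectangle from (0, 0) to (x, y) inclusive."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the box [x0, x1] x [y0, y1], inclusive,
    using at most four lookups regardless of box size."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

This is why SURF can evaluate its box-filter responses at any scale for the same cost: only the box corners change, not the number of lookups.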

Table 1: DoG detector and SIFT descriptor inputs

Input type | Input
Minimum # of feature points | 400
Octave layers | 5
Contrast threshold | 0.03
Edge threshold | 10
Sigma | 1.6

SIFT constructs the descriptor at a feature point detected by the DoG detector as a 3-dimensional histogram. The three dimensions are the two spatial location coordinates and the gradient orientation; the gradient magnitude is used to weight the votes cast into the location and orientation bins [7]. The SURF descriptor is quite similar to SIFT. However, it does not need to apply Gaussian filters of different sizes to produce differently scaled images. Within a scale-dependent neighbourhood, SURF describes the distribution of pixel intensities around a feature point detected by the fast Hessian detector.

2 Methodology

I implemented the SIFT descriptor with the DoG detector and the SURF descriptor with the fast Hessian detector using OpenCV v2.4.0. For the evaluation I used a sub-image from the KITTI training dataset as the input image [5]. Figure 1 illustrates the input image.

Figure 1: Input image used for evaluation of descriptors.

Because SIFT and SURF are renowned for being scale and rotation invariant, I created my own test data to evaluate exactly that. To match the features between the input and test images, I used the OpenCV-based Fast Library for Approximate Nearest Neighbours (FLANN) [8].
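The magnitude-weighted orientation binning at the heart of such a histogram descriptor can be sketched as follows (a deliberate simplification with a single spatial bin rather than SIFT's full layout; pure Python, function names my own):

```python
import math

def orientation_histogram(patch, bins=8):
    """Accumulate gradient orientations over a greyscale patch into
    `bins` orientation bins, each sample weighted by its gradient
    magnitude, as in the SIFT descriptor's histogram voting."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            dy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(dx, dy)
            angle = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    return hist
```

A patch whose intensity increases purely left-to-right puts all of its weight into the first (0-radian) bin, which is what makes the descriptor respond to edge direction rather than raw brightness.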

2.1 Inputs

For the OpenCV DoG implementation in SIFT, the inputs used for the detector are shown in Table 1. Similarly, for the OpenCV implementation of the SURF descriptor and the fast Hessian detector, the inputs used are shown in Table 2.

Table 2: Fast Hessian detector and SURF descriptor inputs

Input type | Input
Hessian threshold | 400
Number of octaves | 4
Octave layers | 3
Extended | True
Upright | False

Table 3: Detection and matching performance of detectors and descriptors

Test image | SIFT: # detected | SIFT: % correct | SURF: # detected | SURF: % correct
90° | 1071 | 98.88 | 1856 | 89.55
180° | 1071 | 95.24 | 1856 | 87.17
270° | 1071 | 96.73 | 1856 | 91.50
Scaled | 1071 | 73.76 | 1856 | 56.52

2.2 Test Images

The test data consists of four different images, all created from the input image:

- Input image rotated by 90°, giving test image A
- Input image rotated by 180°, giving test image B
- Input image rotated by 270°, giving test image C
- Input image scaled to twice the original size, giving test image D

Figure 2 illustrates the test images A, B, C, and D.

2.3 Algorithm for Evaluating the Matching Accuracy of a Descriptor

Algorithm 1 shows the main methodology followed for the evaluation of a descriptor.
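The four test images are simple index remappings of the input, which can be sketched on images stored as nested lists (a minimal pure-Python illustration with my own function names; in practice the OpenCV transpose/flip and resize routines would be used):

```python
def rotate90(img):
    """Rotate an image (list of rows) by 90 degrees clockwise;
    180 and 270 degree rotations are repeated applications."""
    h, w = len(img), len(img[0])
    return [[img[h - 1 - y][x] for y in range(h)] for x in range(w)]

def scale2x(img):
    """Double the image size by nearest-neighbour replication
    (each pixel becomes a 2x2 block)."""
    return [[px for px in row for _ in range(2)]
            for row in img for _ in (0, 1)]
```

Test image A is `rotate90(img)`, B is `rotate90` applied twice, C three times, and D is `scale2x(img)`.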

2.4 Results

Table 3 shows the matching accuracy differences between my input image and the test images. Rotation and scaling do not affect the detection of features. However, a scaled object is much more difficult to match with both the SIFT and the SURF descriptor. In all scenarios, the SIFT descriptor outperforms the SURF descriptor. Figures 3, 4, 5, and 6 illustrate the matched points, joined by lines, between the input image (on the left) and the test image (on the right). The top row in each figure is for the SIFT descriptor and the bottom one for SURF.

Algorithm 1: Match features between input image and test image.

Load the input image I; w = width(I); h = height(I);
Compute the test image;
Detect the feature points and compute their descriptors for the input image;
Detect the feature points and compute their descriptors for the test image;
Match feature points of the test image with feature points of the input image using the FLANN matcher;
Initialize fnc = 0 and fn = 0;
for each feature point i detected in the test image do
  Feature point j is the matched feature point in the input image;
  fn = fn + 1;
  {Reposition i in the test image to the input image as feature point k}
  if test image is A then
    kx = iy; ky = h - ix - 1;
  end if
  if test image is B then
    kx = w - ix - 1; ky = h - iy - 1;
  end if
  if test image is C then
    kx = w - iy - 1; ky = ix;
  end if
  if test image is D then
    kx = ix / 2; ky = iy / 2;
  end if
  if abs(jx - kx) <= 1 and abs(jy - ky) <= 1 then
    fnc = fnc + 1;
  end if
end for
Report (fnc · 100) / fn as the percentage of correctly matched points;
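The repositioning step and 1-pixel correctness check can be written out directly (a pure-Python sketch assuming the 90° rotation is clockwise; function names are my own):

```python
def reposition(ix, iy, test, w, h):
    """Map a point (ix, iy) in test image A, B, C, or D back to the
    coordinates (kx, ky) of the w-by-h input image."""
    if test == "A":                       # 90 degree rotation
        return iy, h - ix - 1
    if test == "B":                       # 180 degree rotation
        return w - ix - 1, h - iy - 1
    if test == "C":                       # 270 degree rotation
        return w - iy - 1, ix
    if test == "D":                       # 2x scaling
        return ix / 2, iy / 2
    raise ValueError("unknown test image: " + test)

def is_correct(jx, jy, kx, ky):
    """A match is correct if the matched input-image point j lies
    within 1 pixel of the true repositioned point k."""
    return abs(jx - kx) <= 1 and abs(jy - ky) <= 1
```

Because the true transform is known, no manual ground truth is needed: each detected point in a test image has exactly one correct location in the input image.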

3 Discussion

The advantage of the fast Hessian detector used with the SURF descriptor is that it detects relatively more feature points than the DoG detector does. This can be advantageous when the object to be matched is either very small or highly blurred: in such scenarios DoG may fail to locate strong features, whereas the Hessian detector is more likely to detect some. The likelihood of a blurred, or even small, matching object increases with the distance of the object from the camera. A common application where this scenario occurs is real-time object tracking; there too, the Hessian detector is comparable with other detectors such as the Lucas-Kanade detector and the Harris corner detector.

In future work, it would be interesting to pair the Hessian detector with the SIFT descriptor and the DoG detector with the SURF descriptor. This would give a better understanding of whether the SIFT descriptor itself outperforms the SURF descriptor, or whether the difference is also due to the underlying detectors.

4 Conclusions and Summary

Although the DoG detector detects fewer feature points (1071) than the fast Hessian detector (1856), the SIFT descriptor is the better matching descriptor compared to SURF. Its drawback, however, is that SIFT is slow. So, depending on the application, the choice of feature point matcher may vary: for object recognition, the SIFT descriptor may be preferred over SURF, whereas for real-time applications like object tracking, SURF might be preferred over SIFT.

References

[1] H. Bay, T. Tuytelaars, and L. V. Gool. SURF: Speeded up robust features. In Proc. European Conf. Computer Vision (ECCV), pages 408–417, 2006.
[2] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), no. 6, pages 679–698, 1986.
[3] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, and S. Sirotti. Improving shadow suppression in moving object detection with HSV color information. In Proc. IEEE Intell. Transportation Systems (ITS), pages 334–339, 2001.
[4] S. Gauglitz, T. Höllerer, and M. Turk. Evaluation of interest point detectors and feature descriptors for visual tracking. Int. J. of Computer Vision, vol. 94, no. 3, pages 335–360, Springer, 2011.
[5] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR), 2012.
[6] D. G. Lowe. Object recognition from local scale-invariant features. In Proc. IEEE Int. Conf. Computer Vision (ICCV), pages 1150–1157, 1999.
[7] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), vol. 27, no. 10, pages 1615–1630, 2005.
[8] M. Muja and D. G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In Proc. Int. Conf. Computer Vision Theory and Applications (VISAPP), pages 331–340, 2009.
[9] J. Shi and C. Tomasi. Good features to track. In Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pages 593–600, 1994.
[10] A. Yilmaz, O. Javed, and M. Shah. Object tracking: A survey. ACM Computing Surveys (CSUR), vol. 38, no. 4, article 13, 2006.
[11] R. Zabih and J. Woodfill. Non-parametric local transforms for computing visual correspondence. In Proc. European Conf. on Computer Vision (ECCV), pages 151–158, 1994.

Figure 2: Test images used for evaluation of descriptors. A: top left, C: top right, B: middle, and D: bottom.

Figure 3: Test image A.

Figure 4: Test image B.

Figure 5: Test image C.

Figure 6: Test image D.
