Demystifying OpenCV keypoint in Python

OpenCV, which stands for Open Source Computer Vision, is a popular Python library for computer vision. Using the OpenCV library, we can process real-time images and videos for recognition and detection.

When humans see an image of a particular person or animal, we can recognize it even when its orientation is changed or it is rotated. The reason is that, as humans, we can easily identify the crucial keypoints in a given image. Thus, even if the image is rotated, we can still relate it to the original picture because our brain picks out those keypoints. Similarly, when processing images on a computer, we need keypoints to identify the important points in an image. We achieve that using OpenCV keypoints.

Importance of keypoints

While performing image processing, a computer should be able to identify similar characteristics in a given image irrespective of the transformations and rotations it goes through.

The computer should even be able to find similarities between different images belonging to a similar category. This is possible by observing the key points in a given image.

For example, for a human face, the key points are the two eye corners, two mouth corners, the chin, and the nose tip.

The main idea is that despite all the changes in an image, the computer should find the same key points in the new image.

The computer studies the pixel values surrounding a given key point and identifies the same when the images are modified.

OpenCV Keypoint functions

The OpenCV KeyPoint() class is used to store the keypoints found by the various keypoint detectors. A keypoint has characteristics such as 2D position, scale, and orientation.

The syntax of the OpenCV KeyPoint() is:

cv.KeyPoint(x, y, size[, angle[, response[, octave[, class_id]]]])

The Parameters are:

x: It is the position of the keypoint with respect to the x-axis

y: It is the position of the keypoint with respect to the y-axis

size: It is the diameter of the keypoint

angle: It is the orientation of the keypoint

response: The strength of the keypoint, i.e. the keypoint detector's response at that point; it can be used to select the most salient keypoints

octave: It is the pyramid octave in which the keypoint has been detected

class_id: It is the object id

Apart from octave and class_id parameters, which have integer values, the other arguments have float values.
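As a minimal sketch, a keypoint can also be constructed by hand; the coordinate, size, and angle values below are purely illustrative.

import cv2

# Construct a keypoint by hand (illustrative values):
# x=150, y=200, size=10, angle=45, response=0.8, octave=0, class_id=-1
kp = cv2.KeyPoint(150.0, 200.0, 10.0, 45.0, 0.8, 0, -1)

print(kp.pt)        # (150.0, 200.0), the 2D position
print(kp.size)      # 10.0, the diameter of the keypoint
print(kp.angle)     # 45.0, the orientation in degrees
print(kp.response)  # 0.8, the strength reported by the detector

In practice, you rarely build KeyPoint objects yourself; a detector such as ORB or SIFT returns a list of them.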

For drawing keypoints, we use the drawKeypoints() function. Its syntax is:

cv.drawKeypoints(image, keypoints, outImage[, color[, flags]])

Parameters:

image: The source image on which the keypoints were detected

keypoints: These are the obtained keypoints from the source image

outImage: The output image; in Python, you can pass None and use the returned image instead

color: The color for the keypoints.

flags: Flags that control how the keypoints are drawn; for example, cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS draws each keypoint with its size and orientation.
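As a minimal sketch of drawKeypoints(), assuming a blank canvas and two hand-made keypoints purely for illustration:

import cv2
import numpy as np

# A blank canvas and two made-up keypoints, just to have something to draw.
canvas = np.zeros((240, 320, 3), dtype=np.uint8)
kps = [cv2.KeyPoint(80.0, 120.0, 30.0), cv2.KeyPoint(200.0, 60.0, 15.0)]

# Draw the keypoints in green; the rich-keypoints flag also draws size and orientation.
out = cv2.drawKeypoints(canvas, kps, None, color=(0, 255, 0),
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('keypoints_demo.png', out)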

Keypoint algorithms

SIFT, SURF, and ORB are algorithms used for finding keypoints while processing an image. A detector finds many keypoints in an image, but we typically keep only the most important ones. A keypoint descriptor then describes the region around each keypoint so that it can be recognized again in other images.

SIFT, SURF, and ORB detect keypoints as well as describe them. These techniques are designed to be largely invariant to scale changes, rotation, and other transformations and distortions of the image.

SIFT stands for Scale-Invariant Feature Transform. SIFT describes each keypoint by its ‘x’ coordinate, its ‘y’ coordinate, and its scale. It first builds a scale space by repeatedly blurring the image and shrinking its size, which makes the detected keypoints scale invariant. It then achieves rotation invariance by calculating the orientation of the pixels around each keypoint. Finally, the features of the prominent keypoints can be matched with points in other images.
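A minimal sketch of using SIFT in OpenCV, assuming OpenCV 4.4 or newer (where SIFT is included in the main modules) and a placeholder image file:

import cv2

img = cv2.imread('monkey.jpg', 0)            # read the image in grayscale
sift = cv2.SIFT_create()                     # SIFT detector and descriptor
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)     # N keypoints, (N, 128) descriptors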

SURF stands for Speeded Up Robust Features. Compared to SIFT, SURF is more robust and faster while processing images. It uses square-shaped (box) filters because filtering an image with box filters is much faster. In addition, it uses a blob detector based on the Hessian matrix to find the keypoints. Overall, the SURF algorithm has the same basic steps as SIFT, but certain techniques make it faster.
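SURF is patented and lives in the contrib xfeatures2d module, so the following sketch only works if your OpenCV build has the non-free algorithms enabled; the threshold value is illustrative:

import cv2

img = cv2.imread('monkey.jpg', 0)
# The Hessian threshold controls how many blobs are kept as keypoints.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)     # SURF descriptors are 64-dimensional by default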

ORB stands for Oriented FAST and Rotated BRIEF. First, it uses the FAST algorithm to find keypoints. Then, it applies the Harris corner measure to keep the top N points among them. Finally, ORB uses an image pyramid to produce multiscale features.

Example of OpenCV KeyPoint

Keypoint detection is a fundamental step in many computer vision applications. Here, we will plot keypoints on a given image using the ORB algorithm. First, we will import the cv2 library and the cv2_imshow() function (Colab's replacement for cv2.imshow()).

from google.colab.patches import cv2_imshow
import cv2

Now, we shall read the image using the imread() function. We have used a monkey’s image, which is a colored image, so we will convert it to grayscale by passing the flag value as zero (cv2.IMREAD_GRAYSCALE).

[Monkey image. Photo by Jamie Haughton on Unsplash]
img = cv2.imread('monkey.jpg', 0)

Now, we will use the cv2.ORB_create() function. We shall pass 200 as the maximum number of keypoints we want.

orb = cv2.ORB_create(200)

Now we will use orb.detectAndCompute() to detect the keypoints and compute the descriptors. We pass the image and None (for the optional mask) as arguments. It returns two values: the keypoints and the descriptors.

keypoint, des = orb.detectAndCompute(img, None)

Using the drawKeypoints() function, we will plot all the keypoints. We pass the image, the keypoints, None for the output image, and the flag value as arguments. The DRAW_RICH_KEYPOINTS flag draws each keypoint with its size and orientation.

img_final = cv2.drawKeypoints(img, keypoint, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

Finally, we will display all the keypoints on our image by using cv2_imshow().

cv2_imshow(img_final)

The output image will be:

[Output: the monkey image with the detected ORB keypoints drawn as circles]

As we can see above, important keypoints such as the mouth, the two eyes, the nose, the right ear, and the fingers have been detected by the computer.
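To tie this back to the idea that the same keypoints should be found even after rotation, here is a hedged sketch that matches ORB descriptors between the original image and a rotated copy; the 45-degree rotation and the matcher settings are illustrative and not part of the original example.

import cv2

img = cv2.imread('monkey.jpg', 0)

# Rotate the image by 45 degrees to simulate the scenario described earlier.
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))

orb = cv2.ORB_create(200)
kp1, des1 = orb.detectAndCompute(img, None)
kp2, des2 = orb.detectAndCompute(rotated, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary).
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), 'matches between the original and the rotated image')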

FAQs on OpenCV Keypoint

What are the different applications of opencv keypoint?

OpenCV keypoints are used in several computer vision applications, such as human pose detection, human face detection, hand gesture detection, etc.

What are detectors and descriptors in keypoint algorithms?

The detector is used to locate the keypoints in a given image, whereas descriptors are used to describe the area around a potential keypoint.
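As a small illustration of this split, the ORB detector from the example above can run its detect() and compute() stages separately instead of calling detectAndCompute():

import cv2

img = cv2.imread('monkey.jpg', 0)
orb = cv2.ORB_create(200)

# Detector: locate candidate keypoints.
keypoints = orb.detect(img, None)

# Descriptor: encode the patch around each keypoint so it can be matched later.
keypoints, descriptors = orb.compute(img, keypoints)
print(len(keypoints), descriptors.shape)     # ORB descriptors are 32 bytes each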


That was everything about OpenCV Keypoints. If you have anything to share, let us know in the comments.

Till then, Keep Learning!
