How to Convert Camera Pixels to Real-World Coordinates

In this tutorial, I'll show you how to convert camera pixels to real-world coordinates (in centimeters). A common use case for this is in robotics (e.g. along a conveyor belt in a factory) where you want to pick up an object from one location and place it in another location using nothing but a robotic arm and an overhead camera.

Prerequisites

To complete this tutorial, it is helpful if you have completed the following prerequisites. If you haven't, that is fine. You can still follow the process I will explain below.

  • You have set up a Raspberry Pi with the Raspbian operating system.
  • You have OpenCV and a Raspberry Pi Camera Module installed.
  • You know how to determine the pixel location of an object in a real-time video stream.
  • You completed this tutorial to build a two-degree-of-freedom robotic arm (optional). My eventual goal is to be able to move this robotic arm to the location of an object that is placed somewhere in its workspace. The location of the object will be determined by the overhead Raspberry Pi camera.

You Will Need

Here are some extra components you'll need if you want to follow along with the physical setup we put together in the prerequisites (above).

  • 1 x VELCRO Brand Thin Clear Mounting Squares, 12-pack | 7/8 inch (you can also use Scotch tape)
  • 1 x Overhead Video Stand Phone Holder
  • 1 x 1cm Grid Write N Wipe Board (check eBay….also search Amazon for 'Centimeter Grid Dry Erase Board')
  • 1 x Ruler that can measure in centimeters (or measuring tape)

Mount the Camera Module on the Overhead Video Stand Phone Holder (Optional)

Grab the Overhead Video Stand Phone Holder and place it above the grid like this.

[Image: phone holder positioned above the grid]

Using some Velcro adhesives or some tape, attach the camera to the holder's end effector so that it is pointing downward towards the center of the grid.

[Image: attaching the camera with tape]
[Image: camera pointing downward at the grid]


Here is how my video feed looks.

[Image: live video feed from the overhead camera]

I am running the program on this page (test_video_capture.py). I'll retype the code here:

# Credit: Adrian Rosebrock
# https://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/

# Import the necessary packages
from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time # Provides time-related functions
import cv2 # OpenCV library

# Initialize the camera
camera = PiCamera()

# Set the camera resolution
camera.resolution = (640, 480)

# Set the number of frames per second
camera.framerate = 32

# Generates a 3D RGB array and stores it in rawCapture
raw_capture = PiRGBArray(camera, size=(640, 480))

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(0.1)

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):

    # Grab the raw NumPy array representing the image
    image = frame.array

    # Display the frame using OpenCV
    cv2.imshow("Frame", image)

    # Wait for keyPress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF

    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)

    # If the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

What is Our Goal?

Assuming you've completed the prerequisites, you know how to detect the location of an object in the field of view of a camera, and you know how to express that location in terms of the pixel location along both the x-axis (width) and y-axis (height) of the video frame.

In a real use case, if we want a robotic arm to automatically pick up an object that enters its workspace, we need some way to tell the robotic arm where the object is. In order to do that, we have to convert the object's position in the camera reference frame to a position that is relative to the robotic arm's base frame.
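
The exact transformation depends on where your arm is bolted down relative to the camera's field of view, so I won't prescribe one here. As a minimal sketch, assuming the two frames differ only by a translation (the offset values below are hypothetical; you would measure them on your own rig):

# Hypothetical offsets (in cm) from the camera frame's origin
# (the upper-left of the image) to the robotic arm's base frame.
# Measure these on your own setup.
BASE_OFFSET_X_CM = 16.0
BASE_OFFSET_Y_CM = 24.0

def camera_to_base(x_cm, y_cm):
    """Convert a point in the camera frame (cm) to the arm's base frame (cm).
    Assumes a pure translation; add a 2D rotation here if your base frame
    is rotated relative to the camera frame."""
    x_base = x_cm - BASE_OFFSET_X_CM
    y_base = BASE_OFFSET_Y_CM - y_cm  # image y grows downward, so flip it
    return x_base, y_base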

Once we know the object's position relative to the robotic arm's base frame, all we need to do is calculate the inverse kinematics to set the servo motors to the angles that will enable the end effector of the robotic arm to reach the object.
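
For a two-degree-of-freedom planar arm like the one in the optional prerequisite, the inverse kinematics can be written in closed form. The following is a generic textbook sketch, not the code from that tutorial; the link lengths A1 and A2 are hypothetical and must be measured on your own arm:

import math

A1 = 10.0  # shoulder-to-elbow link length in cm (hypothetical)
A2 = 10.0  # elbow-to-end-effector link length in cm (hypothetical)

def inverse_kinematics(x, y):
    """Return (theta1, theta2) in radians that place the end effector of a
    two-link planar arm at (x, y), using one of the two possible solutions."""
    d = (x**2 + y**2 - A1**2 - A2**2) / (2 * A1 * A2)
    if abs(d) > 1:
        raise ValueError("Target point is outside the arm's workspace")
    theta2 = math.acos(d)
    theta1 = math.atan2(y, x) - math.atan2(A2 * math.sin(theta2),
                                           A1 + A2 * math.cos(theta2))
    return theta1, theta2

# Example: reach a point 12 cm out and 8 cm up from the base
print([round(math.degrees(a), 1) for a in inverse_kinematics(12.0, 8.0)])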

What is the Field of View?

Before we get started, let's take a look at what field of view means.

The field of view for our Raspberry Pi camera is the extent of the observable world that it can see at a given point in time.

In the figure below, you can see a schematic of the setup I have with the Raspberry Pi. In this perspective, we are in front of the Raspberry Pi camera.

[Image: schematic of the camera's field of view]

In the Python code, we set the size of the video frame to be 640 pixels in width and 480 pixels in height. Thus, the matrix that describes the field of view of our camera has 480 rows and 640 columns.

From the perspective of the camera (i.e. the camera reference frame), the first pixel in an image is at (x=0, y=0), which is in the far upper-left. The last pixel (x = 640, y = 480) is in the far lower-right.
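
If you want to convince yourself of this convention, here is a small check you could drop into the capture loop of test_video_capture.py (this snippet is a sketch, not part of the original program). Note that NumPy arrays are indexed row-first and zero-based, so the lower-right pixel actually lives at index [479, 639]:

# OpenCV images are NumPy arrays indexed [row, column] = [y, x]
print(image.shape)              # (480, 640, 3): 480 rows, 640 columns, 3 channels

top_left = image[0, 0]          # pixel at (x=0, y=0), far upper-left
bottom_right = image[479, 639]  # last pixel; indices are zero-based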

Calculate the Centimeter to Pixel Conversion Factor

The first thing you need to do is run test_video_capture.py.

Now, grab a ruler and measure the width of the frame in centimeters. It is difficult to see in the image below, but my video frame is about 32 cm in width.

[Image: measuring the width of the frame with a ruler]

We know that in pixel units, the frame is 640 pixels in width.

Therefore, we have the following pixel-to-centimeter conversion factor:

32 cm / 640 pixels = 0.05 cm / pixel

We will assume the pixels are square-shaped and the camera lens is parallel to the underlying surface, so we can use the same conversion factor for both the x (width) and y (height) axes of the camera frame.
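
To see the conversion in isolation before it appears in the full program below, here is a tiny worked example. The pixel coordinates are hypothetical values, chosen so that the output matches the wallet measurements later in this post:

# Pixel-to-centimeter conversion factor, as derived above
CM_TO_PIXEL = 32.0 / 640   # 0.05 cm per pixel

# Hypothetical detection at pixel (x=242, y=255)
x_px, y_px = 242, 255

x_cm = x_px * CM_TO_PIXEL  # 12.1 cm
y_cm = y_px * CM_TO_PIXEL  # 12.75 cm
print(x_cm, y_cm)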

When you're done, you can close down test_video_capture.py.

Test Your Conversion Factor

Now, let's test this conversion factor of 0.05 cm / pixel.

Write the following code in your favorite Python IDE or text editor (I'm using Gedit).

This program is the absolute_difference_method.py code we wrote in this post, with some small changes. This code detects an object and then prints its center coordinates to the video frame. I called this program absolute_difference_method_cm.py.

# Author: Addison Sears-Collins
# Description: This algorithm detects objects in a video stream
#   using the Absolute Difference Method. The idea behind this
#   algorithm is that we first take a snapshot of the background.
#   We then identify changes by taking the absolute difference
#   between the current video frame and that original
#   snapshot of the background (i.e. first frame).

# Import the necessary packages
from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time # Provides time-related functions
import cv2 # OpenCV library
import numpy as np # Import NumPy library

# Initialize the camera
camera = PiCamera()

# Set the camera resolution
camera.resolution = (640, 480)

# Set the number of frames per second
camera.framerate = 30

# Generates a 3D RGB array and stores it in rawCapture
raw_capture = PiRGBArray(camera, size=(640, 480))

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(0.1)

# Initialize the first frame of the video stream
first_frame = None

# Create kernel for morphological operation. You can tweak
# the dimensions of the kernel.
# e.g. instead of 20, 20, you can try 30, 30
kernel = np.ones((20, 20), np.uint8)

# Centimeter to pixel conversion factor
# I measured 32.0 cm across the width of the field of view of the camera.
CM_TO_PIXEL = 32.0 / 640

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):

    # Grab the raw NumPy array representing the image
    image = frame.array

    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Close gaps using closing
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

    # Remove salt and pepper noise with a median filter
    gray = cv2.medianBlur(gray, 5)

    # If first frame, we need to initialize it.
    if first_frame is None:
        first_frame = gray

        # Clear the stream in preparation for the next frame
        raw_capture.truncate(0)

        # Go to top of for loop
        continue

    # Calculate the absolute difference between the current frame
    # and the first frame
    absolute_difference = cv2.absdiff(first_frame, gray)

    # If a pixel is less than ##, it is considered black (background).
    # Otherwise, it is white (foreground). 255 is the upper limit.
    # Change the number after absolute_difference as you see fit.
    _, absolute_difference = cv2.threshold(absolute_difference, 50, 255, cv2.THRESH_BINARY)

    # Find the contours of the object inside the binary image
    contours, hierarchy = cv2.findContours(absolute_difference, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    areas = [cv2.contourArea(c) for c in contours]

    # If there are no contours
    if len(areas) < 1:

        # Display the resulting frame
        cv2.imshow('Frame', image)

        # Wait for keyPress for 1 millisecond
        key = cv2.waitKey(1) & 0xFF

        # Clear the stream in preparation for the next frame
        raw_capture.truncate(0)

        # If "q" is pressed on the keyboard,
        # exit this loop
        if key == ord("q"):
            break

        # Go to the top of the for loop
        continue

    # Find the largest moving object in the image
    max_index = np.argmax(areas)

    # Draw the bounding box
    cnt = contours[max_index]
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 3)

    # Draw a circle in the center of the bounding box
    x2 = x + int(w / 2)
    y2 = y + int(h / 2)
    cv2.circle(image, (x2, y2), 4, (0, 255, 0), -1)

    # Calculate the center of the bounding box in centimeter coordinates
    # instead of pixel coordinates
    x2_cm = x2 * CM_TO_PIXEL
    y2_cm = y2 * CM_TO_PIXEL

    # Print the centroid coordinates (we'll use the center of the
    # bounding box) on the image
    text = "x: " + str(x2_cm) + ", y: " + str(y2_cm)
    cv2.putText(image, text, (x2 - 10, y2 - 10),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow("Frame", image)

    # Wait for keyPress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF

    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)

    # If "q" is pressed on the keyboard,
    # exit this loop
    if key == ord("q"):
        break

# Close down windows
cv2.destroyAllWindows()

To get the object's center in centimeter coordinates rather than pixel coordinates, we had to add the cm-to-pixel conversion factor to our code.

When you first launch the code, be sure there are no objects in the field of view and that the camera is not moving. Also, make sure that the level of light is fairly uniform across the board with no moving shadows (e.g. from the sun shining through a nearby window). Then place an object in the field of view and record the object's x and y coordinates.

Here is the camera output when I first run the code:

[Image: camera output before placing the wallet]

Here is the output after I place my wallet in the field of view:

[Image: camera output after placing the wallet, with coordinates overlaid]
  • x-coordinate of the wallet in centimeters: 12.1 cm
  • y-coordinate of the wallet in centimeters: 12.75 cm

Get a ruler, and measure the object's x coordinate (measure from the left side of the camera frame) in centimeters, and see if it matches up with the x-value printed to the camera frame.

[Image: measuring the x coordinate in centimeters with a ruler]

Get a ruler, and measure the object's y coordinate (measure from the top of the camera frame) in centimeters, and see if it matches up with the y-value printed to the camera frame.

[Image: measuring the y coordinate in centimeters with a ruler]

The measurements should match up pretty well.

That's it. Keep building!

References

Credit to Professor Angela Sodemann for teaching me this stuff. Dr. Sodemann is an excellent instructor (she runs a course on RoboGrok.com).

Source: https://automaticaddison.com/how-to-convert-camera-pixels-to-real-world-coordinates/
