
Sight detection project

 

Introduction

 

The overall aim of the project is to servo-control the position of a concrete 3D printer. 3D printing refers to processes used to create a three-dimensional object. The principle remains close to that of a conventional printer: the volume is built up by adding successive layers. The goal of the Université Clermont Auvergne, with the participation of SIGMA and Polytech, is to build a concrete 3D printer. This new printer will be used in building construction.

 

This technological breakthrough requires a camera to work. The project therefore fits well within the digital imaging specialization.

 

example of a structure produced by a concrete 3D printer

Source: www.parentgalactique.fr

 

This picture shows an example of a concrete 3D printer created by the American Andrey Rudenko. At present there is no industrial concrete 3D printer, but in the near future we should see constructions built with this technology.

 

To carry out this project, that is, to control the position of a concrete print head, computer vision was chosen.

 

The objective is to be able to detect a fixed sight on the ground. By mounting a camera on the mobile part of the 3D printer, we will then be able to infer the position of the printer in its environment.

 

To make the localization of the camera in space easier, a sight is placed on the ground, and our task is to detect it. To do so, we have to implement an interest-point detection and description method on a picture, and then match these points with the sight model.

 

Choice of tools

OpenCV’s logo

 

We chose the OpenCV library for the project: OpenCV is a free library specialized in image processing.

 

To understand the detection principle, we started from a sample script provided with the OpenCV library. We used the GitHub website to download the library and thus obtain all the scripts we needed.

Python’s logo

 

Moreover, we used version 3.0 of Python. We chose to work with Python for the whole project because it has the advantage of being a high-level language, which makes it easy to understand the main ideas of the examples.

Spyder’s logo

 

Finally, we used version 2 of the Spyder IDE to open our project.

 

Script plane_ar

 

We found an augmented-reality example in this library.

Named plane_ar, this script adds a 3D element to the scene in augmented reality.

example of our objective

 Source: https://www.youtube.com/watch?v=pzVbhxx6aog

As we can see in the video, the user of the code has to choose an area by drawing the green rectangle around it.

 

Then the script analyzes this sub-region of the picture, looking for interest points. After that, it anchors the base of the added 3D element to the corners of the rectangle drawn previously.

 

By analyzing how the positions of the interest points evolve, the code deduces the orientation of the selected object relative to its initial position.

 

By computing the rotation matrix and the translation matrix, the code calculates the projection matrix. This part of the code is exactly the most interesting one with regard to our goal, since it is how the code follows the position of the object.

 

To recall our objective: we want the script to automatically detect a sight that it knows from the start. We will need a detection/description method for interest points in a picture, and then a way to match the points found in an image (using the previous method) with the known model of the sight, in order to confirm whether or not the sight is present in the camera view.

 

Choice of the sight

 

For the choice of the sight, we had to find a logo that will later be used in the simulation. The first simulation will take place on 14 December 2017, and its main goal is to simulate the movement of a concrete print head. This print head carries a video camera at the end of the crane. The main objective of the video sensor is to detect the logo as soon as it identifies it, thanks to suitable image processing. The chosen logo is the following one:

 Logo Polytech Genie Electrique

 

Thanks to a video we made for a first experiment, we saw that once selected, the logo is well detected and the augmented reality works correctly for reasonable angles. The chosen sight needs a large number of interest points, such as outlines, to ensure reliable detection even if external conditions disturb the working environment (for instance the wind, which can produce a pendulum effect at the end of the crane).

 

Problem reading a video

 

After an update of the OpenCV library, the script we used could no longer read the video. Indeed, since the update, the server (master mécatronique) we use to develop in Python with the Spyder software no longer allows videos to be read. This is due to a bug between the Python libraries and the video decoding.

We needed another way to read our video, and we found a script capable of reading a sequence of images. The difficulty was to apply this script and analyze it in order to understand how it works.

 

The first step was to convert our video. To do this, we used the website www.filezigzag.com/, which, in our case, converts an .mp4 video into a sequence of images (.jpg format) extracted from the initial video.

website’s logo

 

After that, we tested the following script with our test video:

script reading an image sequence

 

After analyzing this script, we noticed that the line that reads our “video” is the following one: cap = cv2.VideoCapture(‘im/img%04d.jpg’).

Therefore, in our main script (plane_ar.py) we replaced the line that caused the issue (self.cap = video.create_capture(“video.mov”)) with the line defined in the image-sequence script: self.cap = cv2.VideoCapture(‘the_video_mp4/img%04d.jpg’)

 

Analysis of the script and our test:

 

Once the video was converted, we used the modified plane_ar.py script to take the image sequence into account. We could then launch the video (in reality the sequence of pictures played end to end) in order to select the logo in it.

Then the script overlays a 3D house, using augmented reality, on the selected area; in our case, on our logo. If we select an area with few or no interest points, the script is not able to detect the selected area in the following pictures and cannot draw the house. Indeed, the script works by comparing the interest points of each image of the video with those of the selected model: if there are not enough interest points, the script cannot find matches between the points of the area selected at the beginning and those of the next image.

 

The following extract of the video shows the house added by augmented reality and the different interest points detected in our logo by the script:

extract of the video showing the house (augmented reality) and the interest points on the selected area

 

Conclusion

 

For the moment, we manage to detect in a video the area selected at the beginning, provided this zone has enough interest points.

During the development, we will isolate the parts of the script that find the sight and track its evolution in 3D space. The next goal is to save the sight model and use it in the script to detect the sight automatically in the video, without the initial selection.

After the test of 14 December, which will simulate real conditions, we will be able to validate our logo. Indeed, if our script correctly detects the logo and renders the augmented-reality house (after an initial selection of the logo) throughout the whole video simulating the movement of a concrete print head, this will demonstrate that our logo can be detected in every possible case of print-head movement. In other words, we will validate our logo if the script manages to follow it throughout the simulation video. For now, we are still forced to select the logo at the beginning of the video, but we are working in parallel on detecting it automatically from a known model of the logo.