Computer Vision Detection and Controlling
Faculty of Mechatronic Engineering, University Malaysia Pahang, Malaysia
Project Supervisor: Dr. Ng Liang Shing
Degree Researcher: Clement Lai Tun Hao
Abstract
Technological innovations such as robots that can be programmed to perform human tasks help to build a better life for people. Computer vision lets the robot detect specified objects and produces a data output for the corresponding response command. The data detected by the robot can be uploaded to a server and sent to users for further investigation. An image processing method is implemented to improve the robot's vision; it is based on the similarity of colors, edges, and shapes compared with real-time images. Users can monitor the robot from a computer or phone and download or upload picture data through the local host. In this research, computer vision is applied to acquire, process, analyze, and understand the image of the detected object and to produce a data output for the response command. The methodology of this project is as follows: first, the RGB values are set and the target picture is chosen before the robot is turned on; second, a threshold method is set up to identify the object and estimate its distance; third, the pixel coordinates of the object drive the movement decision toward the required object so that the robot can pick it up.
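As an illustration of these three steps, the following is a minimal C++/OpenCV sketch, assuming a default camera, a hand-tuned HSV color range, and illustrative pixel margins; the real thresholds would be tuned per target object, and the printed command would in practice be sent to the microcontroller over serial.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Illustrative color bounds (reddish hues in HSV); tuned per target object in practice.
    cv::Scalar lower(0, 100, 100), upper(10, 255, 255);

    cv::VideoCapture cap(0);                      // default camera
    if (!cap.isOpened()) return -1;

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, lower, upper, mask);     // threshold: keep pixels inside the color range

        // Centroid of the thresholded blob gives the pixel coordinates of the target.
        cv::Moments m = cv::moments(mask, true);
        if (m.m00 > 500) {                        // enough matching pixels -> object detected
            int cx = static_cast<int>(m.m10 / m.m00);

            // Simple movement decision from where the centroid sits in the frame.
            int mid = frame.cols / 2;
            if (cx < mid - 40)      std::cout << "turn left\n";
            else if (cx > mid + 40) std::cout << "turn right\n";
            else                    std::cout << "move forward\n";
        }
        if (cv::waitKey(30) == 27) break;         // Esc to quit
    }
    return 0;
}

The blob area (m.m00) can also serve as a rough distance cue: a larger blob generally means the object is closer to the camera.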
Identification and detection of the specified object images by the designed robot will be more precise with the installation of a camera and image processing.
Computer vision can also solve problems caused by the limitations of human eyesight, for example in traffic CCTV, security cameras, and object tracking; the police would then be able to track wanted persons.
Furthermore, the robot can be tested in disaster mitigation settings such as radioactive areas, earthquake zones, and other disaster sites. The robot would be able to detect human life forms and ease the job of rescue teams.
The robot's data (pictures and videos) can be uploaded to the server and sent to other experts to investigate the problems.
Problem statement
To improve the vision of the robot by implementing an image processing method based on the similarity of colors, edges, and shapes compared with real-time images.
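As a sketch of how edge and shape similarity could be scored, the snippet below compares the largest contour of a stored reference picture with the largest contour of a captured frame using Canny edges and OpenCV's Hu-moment shape matching; the file names and the 0.2 acceptance threshold are assumptions for illustration, not values from this project.

#include <opencv2/opencv.hpp>
#include <vector>

// Largest external contour found after Canny edge detection.
static std::vector<cv::Point> largestContour(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 80, 160);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point> best;
    double bestArea = 0.0;
    for (const auto& c : contours) {
        double a = cv::contourArea(c);
        if (a > bestArea) { bestArea = a; best = c; }
    }
    return best;
}

int main() {
    // Reference picture of the target object and one real-time frame (file names illustrative).
    cv::Mat ref   = cv::imread("template.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (ref.empty() || frame.empty()) return -1;

    std::vector<cv::Point> refShape  = largestContour(ref);
    std::vector<cv::Point> liveShape = largestContour(frame);
    if (refShape.empty() || liveShape.empty()) return -1;

    // Lower score = more similar silhouettes (Hu-moment comparison).
    double score = cv::matchShapes(refShape, liveShape, cv::CONTOURS_MATCH_I1, 0.0);
    return score < 0.2 ? 0 : 1;   // 0.2 is an illustrative acceptance threshold
}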
Through the local host network, users can monitor the robot from a computer or phone. Besides that, users can download and upload picture data from the local host.
Methodology
1). Using Visual Studio C/C++ programming to test the OpenCV software.
2). Building the local host using PHP, HTML, CSS, and Java.
3). Programming the Arduino Mega microcontroller chip in the Arduino language (a sketch of the command-receiving side is shown after this list).
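A minimal sketch of item 3, assuming the vision program sends one-character commands ('F', 'L', 'R', 'S') over the USB serial link and that the motor driver inputs are on pins 8 and 9; both the command set and the wiring are assumptions for illustration:

// Arduino Mega side: read a command character from serial and drive the motors.
const int LEFT_MOTOR  = 8;   // hypothetical motor driver input pins
const int RIGHT_MOTOR = 9;

void setup() {
    Serial.begin(9600);
    pinMode(LEFT_MOTOR, OUTPUT);
    pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
    if (Serial.available() > 0) {
        char cmd = Serial.read();
        switch (cmd) {
            case 'F': digitalWrite(LEFT_MOTOR, HIGH); digitalWrite(RIGHT_MOTOR, HIGH); break; // forward
            case 'L': digitalWrite(LEFT_MOTOR, LOW);  digitalWrite(RIGHT_MOTOR, HIGH); break; // turn left
            case 'R': digitalWrite(LEFT_MOTOR, HIGH); digitalWrite(RIGHT_MOTOR, LOW);  break; // turn right
            case 'S': digitalWrite(LEFT_MOTOR, LOW);  digitalWrite(RIGHT_MOTOR, LOW);  break; // stop
        }
    }
}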
Conclusion:
By implementing a camera and image processing on the robot, the robot will be more precise in identifying the image and recognizing the object.
Literature Reviews:
1). Title: Self-Organizing Incremental Associative Memory-Based Robot Navigation (10 October 2012).
Author: Sirinart Tangruamsub, A.L, M.T, & O.H.
Method: Input patterns are stored as memory and clustered in a layer; each node holds pattern data. The winner and second-winner nodes for an input x are selected by distance:
s1 = arg min_c ||x − W_c||
s2 = arg min_{c ≠ s1} ||x − W_c||
where W_c is the weight (stored pattern) of node c, so s1 is the nearest node to the input and s2 is the second nearest.
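In plain C++ terms, s1 and s2 are simply the nearest and second-nearest stored nodes to the input; the sketch below is a generic illustration of that selection step, assuming each node stores one weight vector of the same dimension as x, and is not code from the cited paper.

#include <vector>
#include <cmath>
#include <limits>
#include <utility>

// Returns the indices of the nearest (s1) and second-nearest (s2) nodes to input x,
// measured by Euclidean distance ||x - W_c||.
std::pair<int, int> selectWinners(const std::vector<double>& x,
                                  const std::vector<std::vector<double>>& W) {
    int s1 = -1, s2 = -1;
    double d1 = std::numeric_limits<double>::max();
    double d2 = std::numeric_limits<double>::max();
    for (int c = 0; c < static_cast<int>(W.size()); ++c) {
        double d = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i)
            d += (x[i] - W[c][i]) * (x[i] - W[c][i]);
        d = std::sqrt(d);                               // ||x - W_c||
        if (d < d1)      { d2 = d1; s2 = s1; d1 = d; s1 = c; }
        else if (d < d2) { d2 = d;  s2 = c; }
    }
    return {s1, s2};
}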
2). Title: Perceiving, learning, and exploiting object affordances for autonomous pile manipulation (26 September 2013).
Author: Dov Katz, A.V, M.K, J.A.B & A.S.
Method: Facet segmentation by computing depth discontinuities, estimating surface normals, and applying color-based image segmentation.
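For context, depth discontinuities are often found by flagging pixels whose depth jumps sharply relative to a neighbour; the sketch below is a generic OpenCV illustration assuming a metric depth image and a 2 cm jump threshold, not the authors' actual implementation.

#include <opencv2/opencv.hpp>
#include <cmath>

// Marks pixels where depth changes abruptly, a simple way to outline facet boundaries.
cv::Mat depthDiscontinuities(const cv::Mat& depth) {   // depth: CV_32F, in metres
    cv::Mat mask = cv::Mat::zeros(depth.size(), CV_8U);
    for (int y = 1; y < depth.rows; ++y) {
        for (int x = 1; x < depth.cols; ++x) {
            float d  = depth.at<float>(y, x);
            float dx = std::fabs(d - depth.at<float>(y, x - 1));
            float dy = std::fabs(d - depth.at<float>(y - 1, x));
            if (dx > 0.02f || dy > 0.02f)               // 2 cm jump = assumed threshold
                mask.at<uchar>(y, x) = 255;
        }
    }
    return mask;
}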