Original Article

JJCIT. 2018; 4(1): 58-79


Unmanned Ground Vehicle with Virtual Reality Vision

Mahmood Al-Khalil, Rami Abu-Rhayyem, Ahmad Hammoudeh, Talal A. Edwan.




Abstract

This paper describes the design and implementation of a smartphone-based virtual reality (VR) head-mounted display (HMD) for search and rescue (SAR) robots. The system enables visual situation awareness by giving the operator the feel of being "head on rover", while sending the video feeds to a separate operator computer for object detection and 3-D modelling of the objects surrounding the robot. The smartphone-based HMD captures head movements in real time using its inertial measurement unit (IMU) and transmits them to three motors mounted on a rover, which reproduce the movement about three axes (pitch, yaw and roll). The operator controls the motors via the HMD or a gamepad. Three on-board cameras provide video feeds that are transmitted to the HMD and to the operator computer, where software performs object detection and builds a 3-D model from the captured 2-D images. The realistic design constraints were identified, and the hardware/software functions that meet these constraints were then listed.
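As a rough illustration of the head-tracking pipeline described above (reading the phone's IMU orientation and driving three axis motors), the conversion could be sketched as follows. This is not code from the paper: the quaternion convention, the ±90° servo travel, and the function and field names are all assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class ServoSetpoints:
    """Target angles (degrees) for the pitch, yaw and roll motors."""
    pitch_deg: float
    yaw_deg: float
    roll_deg: float

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def imu_to_servo(quat):
    """Convert a unit quaternion (w, x, y, z) from the phone's IMU
    into three servo setpoints, clamped to an assumed +/-90 degree
    mechanical travel per axis."""
    w, x, y, z = quat
    # Standard Tait-Bryan (roll-pitch-yaw) extraction from a quaternion.
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    sinp = 2.0 * (w * y - z * x)
    # Guard against numerical drift pushing asin out of domain.
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1.0 else math.asin(sinp)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    to_deg = 180.0 / math.pi
    return ServoSetpoints(
        pitch_deg=clamp(pitch * to_deg, -90.0, 90.0),
        yaw_deg=clamp(yaw * to_deg, -90.0, 90.0),
        roll_deg=clamp(roll * to_deg, -90.0, 90.0),
    )
```

In a real system, these setpoints would be serialized and streamed over the wireless link to the rover's motor controller on every IMU update.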
The robot was designed and implemented in a laboratory environment and tested over soft and rough terrain. Results showed that the robot has higher visual-inspection capabilities than other existing SAR robots. Furthermore, its maximum speed of 3.3 m/s, six-wheel differential-drive chassis, and spiked air-filled rubber tires gave the rover high manoeuvrability in open rough terrain compared to other SAR robots reported in the literature. The high visual-inspection capabilities and relatively high speed of the robot make it a good choice for planetary exploration and military reconnaissance. The three-motor unit and stereoscopic camera can easily be mounted as a separate module on a chassis that uses a different locomotion mechanism (e.g. legged or tracked) to extend the functionality of a SAR robot. The design can also be used to build disparity maps and construct 3-D models, or for real-time face recognition, real-time object detection, and autonomous driving based on disparity maps.
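The six-wheel differential-drive chassis mentioned above steers by commanding different speeds to the left and right wheel sides. A minimal sketch of that mapping, assuming a hypothetical track width and using the 3.3 m/s maximum speed reported in the abstract as the cap:

```python
def diff_drive(v, omega, track_width=0.4, v_max=3.3):
    """Map a commanded forward speed v (m/s) and turn rate omega (rad/s)
    to (left, right) wheel-side speeds for a differential-drive chassis.
    track_width (m) is an assumed value; v_max = 3.3 m/s is the maximum
    speed reported for the rover."""
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    # If either side exceeds v_max, scale both sides down together
    # so the commanded turn curvature is preserved.
    peak = max(abs(left), abs(right))
    if peak > v_max:
        scale = v_max / peak
        left *= scale
        right *= scale
    return left, right
```

Driving straight commands equal speeds to both sides; any nonzero turn rate introduces a speed difference, and spinning in place corresponds to v = 0 with opposite wheel-side speeds.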

Key words: UGV, Virtual Reality, Search and Rescue, Robotics, Human-Robot Interaction






