Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/monsiw/object-detection-yolo
The project aims to use a trained YOLO model to detect objects from a robot platform equipped with a computer running ROS. ROS manages the individual packages used in the project.
- Host: GitHub
- URL: https://github.com/monsiw/object-detection-yolo
- Owner: monsiw
- Created: 2023-01-12T08:19:29.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-22T18:00:56.000Z (almost 2 years ago)
- Last Synced: 2024-08-01T03:16:45.118Z (6 months ago)
- Topics: darknet, detection-model, linux, robotics, ros, ros-noetic, yolov3
- Language: C
- Size: 9.91 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Object Detection with Robotic Platform
The idea of this project is to implement object recognition on an Intel NUC10i5FNB computer, which is part of a car-like robot platform (image below). The robot moves on four wheels: the two front wheels are steered by a servo, and each of the two rear wheels is driven by a motor with a nominal voltage of 12 V. The entire platform is powered by a 5000 mAh GensAce Bashing lithium polymer battery, which can be charged by connecting an external power supply to the dashboard. At the very top of the chassis there is a HAMA camera for image recording. The board with the programmed controller accepts signals from the computer and distributes them to the wheels.
## Start
In the workspace you need to build the packages and source the environment:
*catkin_make*
*source devel/setup.bash*
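A minimal sketch of the full sequence, assuming a standard catkin workspace at *~/catkin_ws* (the path is an assumption; adjust it to your setup):

```bash
# Assumed workspace path; adjust to your own setup.
cd ~/catkin_ws

# Build all packages in the workspace (darknet_ros, cv_camera, teleop, etc.).
catkin_make

# Overlay the freshly built packages onto the current shell environment.
source devel/setup.bash
```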
## Inference with the *darknet_ros* package
To start inference with the chosen model, run *roslaunch darknet_ros darknet_ros.launch* in the terminal.
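Once the node is up, the detections can be inspected from a second terminal. The topic names below are the *darknet_ros* defaults (an assumption; verify with *rostopic list* if the launch configuration was customized):

```bash
# Start YOLO inference with the default launch file.
roslaunch darknet_ros darknet_ros.launch

# In a second terminal: stream the detected bounding boxes
# (class, probability, and pixel coordinates for each detection).
rostopic echo /darknet_ros/bounding_boxes
```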
## Robot movement
Keyboard control is provided by *rosrun teleop_twist_keyboard teleop_twist_keyboard.py* (a sketch of the resulting velocity messages follows the key list below).
The keys represent the following maneuvers:
- __*u*__ drives forward with the front wheels turned to the right,
- __*i*__ drives forward with the front wheels in the starting position,
- __*o*__ drives forward with the front wheels turned to the left,
- __*j*__ turns the front wheels to the right,
- __*k*__ stops movement,
- __*l*__ turns the front wheels to the left,
- __*m*__ drives backwards with the front wheels turned to the right,
- __*,*__ drives backwards with the front wheels in the starting position,
- __*.*__ drives backwards with the front wheels turned to the left.
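*teleop_twist_keyboard* publishes *geometry_msgs/Twist* messages on the */cmd_vel* topic by default, which the controller board translates into servo and motor commands. A quick way to see how each key maps to a velocity command:

```bash
# Watch the velocity commands while pressing the keys above.
# linear.x encodes forward/backward drive, angular.z the steering direction.
rostopic echo /cmd_vel
```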
Running *rosrun rqt_graph rqt_graph* should display the node graph shown below.
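The same structure can also be checked without the GUI; a minimal sketch (the exact node and topic names depend on your launch configuration):

```bash
# List the running nodes (camera driver, darknet_ros, teleop, etc.).
rosnode list

# List the active topics; expect the camera image topic,
# the /darknet_ros/* topics, and /cmd_vel.
rostopic list
```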
## Citing
[1] Arguedas M., et al.: ROS OpenCV camera driver – https://github.com/OTL/cv_camera.
[2] Baltovski T., et al.: teleop_twist_keyboard – https://github.com/ros-teleop/teleop_twist_keyboard.
[3] Bjelonic M.: YOLO ROS: Real-Time Object Detection for ROS – https://github.com/leggedrobotics/darknet_ros.