Merge pull request #127 from AutoMecUA/dev
New release
manuelgitgomes authored Mar 15, 2022
2 parents 446e298 + 6aeb0ea commit f219e2c
Showing 38 changed files with 1,217 additions and 776 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -33,7 +33,7 @@ MANIFEST
*.manifest
*.spec

# Installer logs
# Installer log
pip-log.txt
pip-delete-this-directory.txt

252 changes: 6 additions & 246 deletions README.md
@@ -1,250 +1,10 @@
# AutoMec-AD

The goal of this project is to develop a fully autonomous car to compete in the National Robotics Festival of Portugal. This repository hosts the main code of the University of Aveiro car.
This repository serves as the main repository for the autonomous RC car team of Automec. It consists of ML code
for lane detection and template-matching code for signal recognition, all incorporated in a ROS framework.

![alt text](https://i.imgur.com/7FCETQ1.png)

# Dependencies
Make sure you have these installed before starting:

- Python 3
- Ubuntu 20.04
- ROS Noetic
- numpy==1.20.1
- opencv-python==4.5.1.48
- matplotlib==3.3.4
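
The pinned Python packages can be installed with pip in one line (versions taken from the dependency list above):

```bash
pip3 install numpy==1.20.1 opencv-python==4.5.1.48 matplotlib==3.3.4
```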

# How to run
The code in this repository is meant to drive a real-world car; you can do so by following the instructions in [Manual Driving](https://github.com/DanielCoelho112/AutoMec-AD/tree/readme/core/src/ManualDriving#manual-driving). However, using ROS we can also simulate an environment in which to test our robot (car). To do so, continue with the Setting up the ROS Environment section.

First, we need to create a catkin **workspace** to build our project. Choose an appropriate location for this.

## Setting up the ROS Environment
```bash
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make
```
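If `catkin_make` fails with a command-not-found error, source the base ROS environment first (standard ROS Noetic install path):

```bash
source /opt/ros/noetic/setup.bash
```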
Next, you need to source the workspace setup script.

If you are using a bash terminal:

```bash
$ source ~/catkin_ws/devel/setup.bash
```

If you are using zsh:
```bash
$ source ~/catkin_ws/devel/setup.zsh
```

Next, move into the catkin `src` folder if you haven't already, and clone the repo. Note that `git checkout dev` must be run inside the cloned repository:
```bash
cd ~/catkin_ws/src
git clone https://github.com/DanielCoelho112/AutoMec-AD.git
cd AutoMec-AD
git checkout dev
cd ../..
```
Now you have the **main code** of our application, but we still need to add some extra dependencies: the repositories below include the TurtleBot files and the arena used for simulation.

```bash
cd ~/catkin_ws/src
git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs
git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations
git clone https://github.com/ROBOTIS-GIT/turtlebot3
git clone https://github.com/callmesora/AutoMec-Deppendencies
git clone https://github.com/prolo09/pari_trabalho3
cd ..
```

All that's left to do is run `catkin_make`; this will build your ROS packages and install any dependencies they declare. Dependencies pulled in by previously built packages will still be available to new ones, so be careful when creating a new package.

```bash
catkin_make
```
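
After building, re-source the workspace so the newly built packages are on your ROS package path:

```bash
source ~/catkin_ws/devel/setup.bash
```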

Add the following to your .bashrc file
```bash
export TURTLEBOT3_MODEL=waffle_pi
```


If you don't know how to edit your `.bashrc` file, type:

```bash
nano ~/.bashrc
```

And add the previous line to the end of the file. The procedure is the same if you use a zsh terminal, but with `.zshrc`.
To save, press Ctrl+O, then Enter; press Ctrl+X to exit.
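
Alternatively, append the line from the command line; this one-liner is equivalent to editing the file manually:

```bash
echo 'export TURTLEBOT3_MODEL=waffle_pi' >> ~/.bashrc
source ~/.bashrc
```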

# Ackermann dependencies

```bash
sudo apt-get install ros-noetic-ros-controllers ros-noetic-ackermann-msgs
```

# Launch with Ackermann

```bash
roslaunch ackermann_vehicle_gazebo ackermann_robot_with_arena_conversion.launch
```

# Running the simulation environment

Execute the following in one terminal:
```bash
roslaunch robot_bringup bringup_gazebo.launch
```

In a second terminal:

```bash
roslaunch robot_bringup spawn.launch
```
The first command launches Gazebo; the second spawns the robot car.
After this, you should see Gazebo open with the racing track as shown.
![Gazebo](https://i.imgur.com/w7EFh7k.png)

# How to drive
To test drive the car in Gazebo, run:
```bash
rqt
```
Now go to **Plugins --> Robot Tools --> Robot Steering** and select /robot/cmd_vel as its topic.
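
To confirm the steering commands are actually being published while you drive, you can watch the topic with the standard ROS CLI:

```bash
rostopic echo /robot/cmd_vel
```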

![Driving](https://i.imgur.com/ME4mgl7.png)

After this, we need to add the camera. Go to **Plugins --> Visualization --> Image View** and select */robot/camera/rgb/image_raw*.
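
To verify the camera topic is streaming, you can check its publish rate with the standard ROS CLI:

```bash
rostopic hz /robot/camera/rgb/image_raw
```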

# Vision Code
During this project, multiple approaches are being tested: raw computer vision with OpenCV and a machine learning approach.

## Lane_Recognition
To test, first change into `/core/src/VisionCode/Lane_Recognition`. A minimal sketch of the command, assuming the repository was cloned into `~/catkin_ws/src` as in the setup above:
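
```bash
# path assumes the repo was cloned into ~/catkin_ws/src; adjust if different
cd ~/catkin_ws/src/AutoMec-AD/core/src/VisionCode/Lane_Recognition
```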

Then launch Gazebo and the robot as in the **Running the simulation environment** section, and run the script:

```bash
rosrun core gazebo_lines.py
```

You should see multiple camera windows with different filters show up.

## Signal Recognition
To test, change into `/core/src/VisionCode/Signal Recognition`. A minimal sketch of the command, assuming the repository was cloned into `~/catkin_ws/src` as in the setup above (note the escaped space):
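
```bash
# path assumes the repo was cloned into ~/catkin_ws/src; adjust if different
cd ~/catkin_ws/src/AutoMec-AD/core/src/VisionCode/Signal\ Recognition
```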

Open a terminal and run

```bash
roscore
```
In a second terminal:
```bash
python3 SignalRecognition.py
```


# Code structure
This project uses ROS as its building blocks. It is divided into 4 packages.

## core

This package is divided into 4 folders:

- **ArduinoCode**: all Arduino code.
- **VisionCode**: all attempts at vision code.
- **ManualDriving**: only the files needed for manual driving: the latest version of the Arduino code and the twist converter.
- **CommonFiles**: files that do not fit elsewhere, such as initialization files.

## robot_bringup
Code to launch Gazebo and the robot in the simulation.

## robot_core
Vision code ready to be implemented in Gazebo.

Automatic driving using an ML approach.

## robot_description

All files regarding the robot description and stats.

# ML driving

CatBoost:
https://streamable.com/ol18mb

https://streamable.com/acich8


With images delivered to the CatBoost model:
https://streamable.com/kfi7j6

CNN:
https://streamable.com/ysugtn

## Dependencies for CNN

```bash
sudo apt install python3-pip
pip3 install opencv-python
pip3 install pandas
pip3 install scikit-learn
pip3 install tensorflow
pip3 install imgaug
```

## Get sample data for ML training

```bash
roslaunch ackermann_vehicle_gazebo ackermann_robot_with_arena_conversion_mltrain.launch folder:=/cleantrack1
```

The `folder:=/name` argument is mandatory.

The folder must be created beforehand inside the "data" directory, as sketched below.
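
A minimal sketch of creating that folder; the exact location of the `data` directory is an assumption here, so adjust it to wherever `data` actually sits in your checkout:

```bash
# hypothetical location of the "data" folder; adjust to your checkout
mkdir -p ~/catkin_ws/src/AutoMec-AD/robot_core/data/cleantrack1
```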

This launches everything:

- The Gazebo world
- The car
- The twist-to-Ackermann converter
- The data-capture node
- The rqt node for driving the car during the training laps

## Training ML model with sample data

```bash
roslaunch robot_core training.launch folder:=/cleantrack1 model:=cleantrack1.h5
```

Both arguments are mandatory:

- folder:=/name (do not forget the leading slash)
- model:=name.h5 (do not forget the .h5 extension)

## Driving with ML model

Just drive, no signals:

```bash
roslaunch ackermann_vehicle_gazebo ackermann_robot_with_arena_conversion_mlsolo.launch model:=cleantrack1.h5
```

## Drive with two signals

```bash
roslaunch ackermann_vehicle_gazebo ackermann_robot_with_arena_conversion_mlsignal.launch model:=cleantrack1.h5
```

Do not forget the .h5 extension.



## License
https://streamable.com/t5thi0
![alt text](https://raw.githubusercontent.com/AutomecUA/AutoMec-AD/main/images/car.jpeg)

All the setup, commands and methods used are described in the [wiki](https://github.com/AutomecUA/AutoMec-AD/wiki). <br>
This is still a WIP. Some errors are to be expected. If you have any doubts or want to report a bug,
feel free to use the Issues or [send us an email](mailto:[email protected])!
2 changes: 2 additions & 0 deletions cnn/scripts/cnn1/training1.py
@@ -110,6 +110,8 @@ def main():

path = s + '/../../models/cnn1_' + modelname

enter_pressed = input("\n" + "Create a new model from scratch? [Y/N]: ")

if enter_pressed.lower() == "y" or enter_pressed == "":
model = createModel(image_width, image_height)
is_newmodel = True
@@ -1,10 +1,12 @@
<launch>
    <arg name="image_raw_topic" default="/real_camera"/>
    <arg name="signal_cmd_topic" default="/signal_vel"/>
    <arg name="mask_mode" default="False"/>

    <include file="$(find signal_recognition)/launch/signal_panel_recognition.launch">
        <arg name="image_raw_topic" value="$(arg image_raw_topic)"/>
        <arg name="signal_cmd_topic" value="$(arg signal_cmd_topic)"/>
        <arg name="mask_mode" value="$(arg mask_mode)"/>
    </include>

</launch>
2 changes: 2 additions & 0 deletions physical_bringup/launch/signal_usbcam.launch
@@ -2,10 +2,12 @@
    <arg name="camera_topic" default="/real_camera"/>
    <arg name="int_camera_id" default="2"/>
    <arg name="signal_cmd_topic" default="/signal_vel"/>
    <arg name="mask_mode" default="True"/>

    <include file="$(find physical_bringup)/launch/modules/signal_panel_recognition.launch">
        <arg name="signal_cmd_topic" value="$(arg signal_cmd_topic)"/>
        <arg name="image_raw_topic" value="$(arg camera_topic)"/>
        <arg name="mask_mode" value="$(arg mask_mode)"/>
    </include>

    <include file="$(find physical_bringup)/launch/modules/lane_camera.launch">
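
For reference, the arguments touched by this diff can be overridden from the command line with standard roslaunch syntax; the values below are only illustrative:

```bash
# argument names taken from the diff above; values are illustrative
roslaunch physical_bringup signal_usbcam.launch mask_mode:=False int_camera_id:=0
```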
2 changes: 2 additions & 0 deletions physical_bringup/launch/traxxas_drive_signal1.launch
@@ -12,6 +12,7 @@

<arg name="camera_topic" default="/real_camera"/>
<arg name="int_camera_id" default="2"/>
<arg name="mask_mode" default="False"/>

<group if="$(eval arg('model') != '')">

@@ -35,6 +36,7 @@
<include file="$(find physical_bringup)/launch/modules/signal_panel_recognition.launch">
<arg name="signal_cmd_topic" value="$(arg signal_cmd_topic)"/>
<arg name="image_raw_topic" value="$(arg camera_topic)"/>
<arg name="mask_mode" value="$(arg mask_mode)"/>
</include>

<include file="$(find physical_bringup)/launch/modules/lane_camera.launch">
2 changes: 2 additions & 0 deletions physical_bringup/launch/traxxas_drive_signal2a.launch
@@ -12,6 +12,7 @@

<arg name="camera_topic" default="/real_camera"/>
<arg name="int_camera_id" default="2"/>
<arg name="mask_mode" default="False"/>

<group if="$(eval arg('model') != '')">

@@ -35,6 +36,7 @@
<include file="$(find physical_bringup)/launch/modules/signal_panel_recognition.launch">
<arg name="signal_cmd_topic" value="$(arg signal_cmd_topic)"/>
<arg name="image_raw_topic" value="$(arg camera_topic)"/>
<arg name="mask_mode" value="$(arg mask_mode)"/>
</include>

<include file="$(find physical_bringup)/launch/modules/lane_camera.launch">
11 changes: 3 additions & 8 deletions requirements.txt
@@ -1,13 +1,8 @@
controller_manager_msgs==0.19.4
roslib==1.15.7
sensor_msgs==1.13.1
tensorflow==2.6.0
cv_bridge==1.15.0
rospy==1.15.11
tensorflow==2.7.0
pandas==0.25.3
numpy==1.17.4
tf==1.13.2
numpy==1.19.2
imgaug==0.4.0
opencv_python==4.5.1.48
matplotlib==3.1.2
scikit_learn==0.24.2
pillow==8.3.2
19 changes: 19 additions & 0 deletions robot_driving/launch/physical_robot_joy.launch
@@ -0,0 +1,19 @@
<launch>
    <arg name="twist_dir_topic" default="/android_input_dir"/>
    <arg name="bool_btn_topic" default="/android_input_velin"/>
    <arg name="joy_topic" default="/joy"/>
    <arg name="joy_number" default="0"/>
    <arg name="deadzone" default="0"/>

    <param name="joy_node/dev" value="/dev/input/js$(arg joy_number)"/>
    <node name="joy" pkg="joy" type="joy_node">
        <param name="deadzone" value="$(arg deadzone)"/>
    </node>

    <node name="robot_joy" pkg="robot_driving" type="joy_teleop.py" output="screen">
        <param name="twist_dir_topic" value="$(arg twist_dir_topic)"/>
        <param name="bool_btn_topic" value="$(arg bool_btn_topic)"/>
        <param name="joy_topic" value="$(arg joy_topic)"/>
    </node>

</launch>
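
A quick way to try this new launch file is to override its arguments with standard roslaunch syntax; the deadzone value below is only an illustrative choice:

```bash
# joy_number selects /dev/input/jsN; the deadzone value is illustrative
roslaunch robot_driving physical_robot_joy.launch joy_number:=0 deadzone:=0.05
```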

