UAV_Gimbal

The UAV_Gimbal is a set of packages developed to stabilize a gimbal system using the mountain skyline as a visual reference. The repository packages are described in the following list:

a. Altimeter

b. Gimbal_controller

c. Perception

d. Particle_fusion

Prerequisites

We tested the packages on Ubuntu 18.04 with ROS Melodic; the system ran on a Jetson Nano 2GB.

Datasets

The datasets used to evaluate the current system can be downloaded from this link.

Altimeter

The current package measures the altitude above mean sea level (AMSL).
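
Internally, BMP180 drivers typically convert barometric pressure to altitude with the international barometric formula from the sensor's datasheet. A minimal Python sketch of that conversion (the 1013.25 hPa sea-level reference pressure is an assumption; calibrate it for local conditions):

  # Pressure-to-altitude conversion (international barometric formula,
  # as given in the BMP180 datasheet).
  SEA_LEVEL_HPA = 1013.25  # assumed reference; adjust for local weather

  def pressure_to_altitude(pressure_hpa, sea_level_hpa=SEA_LEVEL_HPA):
      """Return altitude above mean sea level in meters."""
      return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

  print(pressure_to_altitude(900.0))  # roughly 989 m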

1. Prerequisites

The package is built to read a BMP180 sensor. The software requirements are described in the root folder of this repository.

2. Build altimeter package

Clone the repository and build through catkin_make:

  cd ~/catkin_ws/src
  git clone https://github.com/hl49/uav_gimbal.git
  cd ../
  catkin_make
  source ~/catkin_ws/devel/setup.bash

Give execution permission to the altimeter_node.py file:

  cd ~/catkin_ws/src/uav_gimbal/altimeter/scripts
  chmod +x altimeter_node.py

3. Running the Altimeter node

3.1 To run the altimeter node, use the command:

  rosrun altimeter altimeter_node.py

3.2 The BMP180 data is published on the default topic /altimeter/BMP180, a custom message with the structure described below:

  std_msgs/Header header
  float32 altitude
  float32 temperature

The altitude is given in meters and the temperature in degrees Celsius.
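
A minimal rospy subscriber for this topic could look like the sketch below; the message class name (Bmp180) is an assumption, check the msg folder of the altimeter package for the actual name:

  #!/usr/bin/env python
  import rospy
  from altimeter.msg import Bmp180  # assumed message name; check altimeter/msg

  def callback(msg):
      rospy.loginfo("altitude: %.2f m, temperature: %.1f C",
                    msg.altitude, msg.temperature)

  if __name__ == "__main__":
      rospy.init_node("altimeter_listener")
      rospy.Subscriber("/altimeter/BMP180", Bmp180, callback)
      rospy.spin()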

Gimbal Controller

The current package controls the orientation of a gimbal system built with two Dynamixel XL430-W250-T servos; the hardware interface used to communicate between the servos and the Jetson Nano is the OpenCR 2.0.
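
As a quick illustration of how the servos can be commanded once the driver is running (see the steps below), the dynamixel_interface driver accepts sensor_msgs/JointState commands. The command topic /desired_joint_states and the joint names pan and tilt are assumptions; both are defined in dynamixel_interface_controller.launch:

  #!/usr/bin/env python
  # Sketch: command the two gimbal servos through dynamixel_interface.
  # Topic and joint names are assumptions taken from a typical
  # dynamixel_interface configuration; check the launch file for yours.
  import rospy
  from sensor_msgs.msg import JointState

  rospy.init_node("gimbal_command_example")
  pub = rospy.Publisher("/desired_joint_states", JointState, queue_size=1)
  rospy.sleep(1.0)  # give the publisher time to connect

  cmd = JointState()
  cmd.header.stamp = rospy.Time.now()
  cmd.name = ["pan", "tilt"]   # hypothetical joint names
  cmd.position = [0.0, 0.2]    # target angles in radians
  pub.publish(cmd)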

1. Prerequisites

The servo driver and communication packages are required to run the current module. The software dependencies are described in the dynamixel interface repository. The current package requires the sensor information from a BNO055 IMU; the sensor driver is found in the ros_imu_bno055 repository.

2. Build gimbal_controller package

Clone the repository into your default workspace folder (in this example catkin_ws):

  cd ~/catkin_ws/src
  git clone https://github.com/hl49/uav_gimbal.git
  cd ../
  catkin_make
  source ~/catkin_ws/devel/setup.bash

Give execution permission to the gimbal_controller_node.py file:

  cd ~/catkin_ws/src/uav_gimbal/gimbal_controller/scripts
  chmod +x gimbal_controller_node.py

3. Running the Gimbal Controller Node

3.1 Connect the OpenCR to the PC and turn on the servo power supply.

3.2 Run the BNO055 ROS node to read the data from the inertial sensor:

rosrun bno055 bno055_ros.py

3.3 Run the custom launch file required to initialize the servo driver:

roslaunch gimbal_controller dynamixel_interface_controller.launch

3.4 Run the controller node:

rosrun gimbal_controller gimbal_controller_node.py

3.5 Position reference from input topics

The gimbal controller holds the initial position until a reference position is sent by the perception node and the BNO055 ROS node. The gimbal_controller_node subscribes to the topics /perception_rpy and /imu/data. The first topic provides the reference position relative to the skyline computed by the perception node, and the second contains the IMU measurements published by the ros_imu_bno055 node.
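
The input side of the controller can be sketched as follows; the message type of /perception_rpy is an assumption (geometry_msgs/Vector3 carrying roll, pitch, and yaw in its x, y, z fields), while /imu/data uses the standard sensor_msgs/Imu:

  #!/usr/bin/env python
  # Sketch of the controller's subscriptions; /perception_rpy's type is an
  # assumption, check the node source for the actual message.
  import rospy
  from geometry_msgs.msg import Vector3
  from sensor_msgs.msg import Imu
  from tf.transformations import euler_from_quaternion

  reference_rpy = [0.0, 0.0, 0.0]

  def on_reference(msg):
      reference_rpy[:] = [msg.x, msg.y, msg.z]

  def on_imu(msg):
      q = msg.orientation
      roll, pitch, _ = euler_from_quaternion([q.x, q.y, q.z, q.w])
      error = [reference_rpy[0] - roll, reference_rpy[1] - pitch]
      rospy.logdebug("roll/pitch error: %s", error)  # fed to the servo loop

  rospy.init_node("gimbal_controller_sketch")
  rospy.Subscriber("/perception_rpy", Vector3, on_reference)
  rospy.Subscriber("/imu/data", Imu, on_imu)
  rospy.spin()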

Perception

The following package identifies the skyline and the ground plane in outdoor images in real time. It takes a stabilized image as a reference, detects the motion of subsequent frames, and sends the correction signal to the servos.
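
The skyline-to-attitude step can be illustrated with a short sketch: given a binary sky mask from the segmentation network, take the lowest sky pixel in each column as the skyline, fit a line through those points, and read the roll angle from its slope. This is a simplified illustration, not the exact method implemented in perception_node.py:

  import numpy as np

  def skyline_roll(sky_mask):
      """Estimate roll (radians) from a binary sky mask (sky=1, ground=0)."""
      h, w = sky_mask.shape
      cols = np.arange(w)
      # row index of the lowest sky pixel in each column
      rows = h - 1 - np.argmax(sky_mask[::-1, :], axis=0)
      valid = sky_mask.any(axis=0)  # skip columns without any sky
      slope, _ = np.polyfit(cols[valid], rows[valid], 1)
      return np.arctan(slope)  # image rows grow downward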

1. Prerequisites

Download and install the HELLO AI WORLD NVIDIA JETSON repository, for running the segmentation neural network, and the ROS DEEP LEARNING repository, for the ROS1 interface. Focus mainly on Semantic Segmentation with SegNet and on running the live-camera semantic segmentation demo. A Raspberry Pi camera (MIPI CSI camera) was used to record our demo video. The pretrained network model is in the model folder; it was trained on the Skyfinder Dataset.

2. Build gimbal_perception package

Clone the repository into your default workspace folder (in this example catkin_ws):

  cd ~/catkin_ws/src
  git clone https://github.com/hl49/uav_gimbal.git
  cd ../
  catkin_make
  source ~/catkin_ws/devel/setup.bash

2.1 Include the modified pretrained network: first, copy the content of the model folder into the jetson-inference/python/examples directory. Then, modify the content of segnet.ros1.launch according to the following lines:

TODO

3. Running the Gimbal Perception Node

3.1 Run the live camera segmentation network:

roslaunch ros_deep_learning segnet.ros1.launch

3.2 Run the perception node:

rosrun perception perception_node.py

Particle_Fusion

The following package identifies the skyline and the ground plane in outdoor images in real time. It takes a stabilized image as a reference, detects the motion of subsequent frames, and sends the correction signal to the servos.
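
The package name and the paper cited below point to a particle filter that fuses IMU and skyline cues. The following generic bootstrap-filter sketch shows the fusion idea on the roll angle only; it does not reproduce the adaptive sampling scheme of the paper:

  import numpy as np

  # Generic bootstrap particle filter over the roll angle: predict with the
  # IMU gyro rate, weight by the skyline roll measurement. Schematic only.
  N = 200
  particles = np.zeros(N)  # roll hypotheses (rad)

  def predict(gyro_roll_rate, dt, noise=0.01):
      global particles
      particles = particles + gyro_roll_rate * dt + np.random.randn(N) * noise

  def update(skyline_roll, sigma=0.05):
      global particles
      w = np.exp(-0.5 * ((particles - skyline_roll) / sigma) ** 2)
      w /= w.sum()
      # systematic resampling
      u = (np.arange(N) + np.random.rand()) / N
      idx = np.searchsorted(np.cumsum(w), u).clip(0, N - 1)
      particles = particles[idx]

  def estimate():
      return particles.mean()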

1. Prerequisites

Download and install the HELLO AI WORLD NVIDIA JETSON repository, for running the segmentation neural network, and the ROS DEEP LEARNING repository, for the ROS1 interface. Focus mainly on Semantic Segmentation with SegNet and on running the live-camera semantic segmentation demo. A Raspberry Pi camera (MIPI CSI camera) was used to record our demo video. The pretrained network model is in the model folder; it was trained on the Skyfinder Dataset.

2. Build gimbal_perception package

Clone the repository into your default workspace folder (in this example catkin_ws):

  cd ~/catkin_ws/src
  git clone https://github.com/hl49/uav_gimbal.git
  cd ../
  catkin_make
  source ~/catkin_ws/devel/setup.bash

2.1 Include the modified pretrained network: first, copy the content of the model folder into the jetson-inference/python/examples directory. Then, modify the content of segnet.ros1.launch according to the following lines:

TODO

3. Running the Gimbal Perception Node

3.1 Run the live camera segmentation network:

roslaunch ros_deep_learning segnet.ros1.launch

3.2 Run the perception node:

rosrun perception perception_node.py

We would greatly appreciate it if you cite the paper below when you use the code and the "skyline" dataset.

@inproceedings{kang2023adaptive,
  title={Adaptive sampling-based particle filter for visual-inertial gimbal in the wild},
  author={Kang, Xueyang and Herrera, Ariel and Lema, Henry and Valencia, Esteban and Vandewalle, Patrick},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={2738--2744},
  year={2023},
  organization={IEEE}
}
