This repository contains simulation environments and exercises designed for the "AI in Robotics" course at AUT University in 2024. The focus is on integrating artificial intelligence techniques with robotics using the Webots simulation environment.
In this exercise series, students will learn:
- How to work with Webots.
- Inverse kinematics and position control of the UR5e manipulator.
- Pick and place using a vacuum gripper.
- Object detection with a camera.
- Automatic pick-and-place of objects using the manipulator.
- Classical model-based control of the UR5e.
The simulation environment features a UR5e manipulator equipped with a "ROBOTIQ 2F-140 Gripper". The UR5e is a versatile 6-DoF robotic arm manufactured by Universal Robots, widely used in industrial automation and research due to its precision and flexibility. Try it online.
Students will be introduced to solving the inverse kinematics of the UR5e manipulator: calculating, numerically or analytically, the joint angles required to reach a desired end-effector position.
Pick up the cylindrical object from the table at position P1 = [-0.6, 0.2, 0.8] and place it on another table at position P2 = [0.65, -0.3, 0.6]. The UR5e robot's origin is placed at P_base = [0, 0, 0.6]. Video
You can find the kinematic parameters here.
For simplicity, the DH parameters of the UR5e are:

d = [0.1625, 0, 0, 0.133, 0.0997, 0.101]
a = [0, -0.425, -0.39225, 0, 0, 0]
alpha = [pi/2, 0, 0, pi/2, -pi/2, 0]
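As a minimal sketch, these DH parameters can be turned into forward kinematics and a simple numerical IK, assuming the standard (non-modified) DH convention. The names `dh_transform`, `forward_kinematics`, and `solve_ik_position` are illustrative, and the BFGS-based optimization is one possible choice, not the required solution method:

```python
import numpy as np
from scipy.optimize import minimize

# DH parameters copied from the table above (standard DH convention assumed).
d     = np.array([0.1625, 0.0, 0.0, 0.133, 0.0997, 0.101])
a     = np.array([0.0, -0.425, -0.39225, 0.0, 0.0, 0.0])
alpha = np.array([np.pi / 2, 0.0, 0.0, np.pi / 2, -np.pi / 2, 0.0])

def dh_transform(theta, d_i, a_i, alpha_i):
    """Homogeneous transform between consecutive links (standard DH)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha_i), np.sin(alpha_i)
    return np.array([
        [ct, -st * ca,  st * sa, a_i * ct],
        [st,  ct * ca, -ct * sa, a_i * st],
        [0.0,      sa,       ca,      d_i],
        [0.0,     0.0,      0.0,      1.0],
    ])

def forward_kinematics(q):
    """End-effector pose in the base frame for joint angles q (radians)."""
    T = np.eye(4)
    for i in range(6):
        T = T @ dh_transform(q[i], d[i], a[i], alpha[i])
    return T

def solve_ik_position(target, q0=None):
    """Position-only numerical IK: minimize the end-effector position error.
    A nonzero initial guess avoids starting at the stretched-out singularity."""
    q0 = np.full(6, 0.1) if q0 is None else q0
    cost = lambda q: np.sum((forward_kinematics(q)[:3, 3] - target) ** 2)
    return minimize(cost, q0, method="BFGS").x

# Example: P1 is given in world coordinates; express it in the base frame
# by subtracting P_base = [0, 0, 0.6] before solving.
q = solve_ik_position(np.array([-0.6, 0.2, 0.8]) - np.array([0.0, 0.0, 0.6]))
print(np.round(q, 3))
```

Note that a position-only objective leaves the end-effector orientation free; for a top-down grasp you would extend the residual with an orientation term.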
In this exercise, students will use the UR5e robot equipped with a Robotiq EPick Gripper to perform a pick-and-place task. The positions of the objects are known and provided manually.
This gripper is a vacuum gripper, 13 cm long, attached to the UR5e end effector.
For the rest of the exercises, use "worlds/UR5e_vacuumGripper_camera.wbt". A Python example controller is available in "controllers/my_controller/ur5e_vacuumGripper_camera.py".
- Attach the Robotiq EPick Gripper to the UR5e robot in the simulation environment.
- Manually set the positions of the objects on the table.
- Program the robot to pick up each object and place it in the baskets (see the sketch after this list).
- Verify the robot's ability to pick and place all objects accurately.
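A possible control flow is sketched below using the helper class from the provided controller (documented later in this README). The import path is an assumption, and `solve_ik` is a placeholder for your own IK routine from the previous exercise:

```python
import numpy as np
from ur5e_vacuumGripper_camera import UR5e  # assumed import path for the provided class

def solve_ik(position):
    """Placeholder for your IK routine (numeric or analytic) from the previous exercise."""
    raise NotImplementedError

robot = UR5e()

pick_positions = [[-0.6, 0.2, 0.8]]   # manually set object positions (world frame)
basket_position = [0.65, -0.3, 0.6]   # drop-off location (world frame)
base = np.array([0.0, 0.0, 0.6])      # robot base origin in the world

for p in pick_positions:
    robot.set_arm_pos(solve_ik(np.array(p) - base))  # move to the object
    robot.set_gripper_pos('on')                      # engage suction
    robot.set_arm_pos(solve_ik(np.array(basket_position) - base))
    robot.set_gripper_pos('off')                     # release into the basket
```

In practice you also need to step the simulation and wait until `get_arm_pos()` has converged to the commanded pose before toggling the gripper.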
This exercise extends the pick-and-place task by incorporating computer vision to detect and locate objects.
- Use a camera in the simulation environment to capture images of the objects on the table.
- Label the captured images and create a dataset for training a neural network.
- Train or fine-tune a pre-trained network to detect objects and estimate their positions (a classical baseline is sketched after this list).
- Program the robot to use the detected positions to perform the pick-and-place task.
- Validate the performance of the vision system and the robot's ability to complete the task.
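Before training a network, a classical baseline can help validate the rest of the pipeline. The sketch below segments an object by color and back-projects its pixel centroid using the depth map; the HSV thresholds and the camera intrinsics are assumptions you must adapt to your scene (Webots reports the camera's FOV and resolution, from which fx, fy, cx, cy follow):

```python
import numpy as np
import cv2

def detect_centroid(rgb, lo=(0, 120, 70), hi=(10, 255, 255)):
    """Pixel centroid (u, v) of the pixels inside an HSV color range,
    or None if nothing matches. The default thresholds target a red object."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z to a 3-D point in the camera
    frame (pinhole model). Transform it to the world frame using the
    camera's pose in the simulation."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

Once a learned detector replaces `detect_centroid`, the same back-projection step converts its bounding-box centers into 3-D pick positions.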
Output example:
The provided Python controller "my_controller/ur5e_vacuumGripper_camera.py" simplifies access to the UR5e robot and includes methods for interacting with the robot's joints, gripper, and sensors, as described below.
- `get_rgb_frame() -> np.ndarray`: Captures an RGB frame from the camera and converts it to a NumPy array.
- `get_depth_frame() -> np.ndarray`: Captures a depth frame from the range finder and converts it to a NumPy array.

UR5e Class:

- `__init__(self, name="my_robot")`: Initializes the UR5e robot and sets up the motor devices, vacuum gripper, and GPS.
- `set_arm_torques(self, torques)`: Sets the torques for the arm joints.
- `set_gripper_pos(self, state='on')`: Activates or deactivates the vacuum gripper; `state` can be `'on'` or `'off'`.
- `set_arm_pos(self, pos)`: Sets the target positions for the arm joints.
- `get_arm_pos(self) -> list`: Returns the current positions of the arm joints.
- `get_gripper_pos(self) -> list`: Returns the current positions of the gripper joints.
- `get_EE_position(self) -> list`: Returns the current position of the end effector using GPS data.
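A minimal usage sketch of this API is shown below; the import path is an assumption, so adapt it to your project layout:

```python
import numpy as np
from ur5e_vacuumGripper_camera import UR5e, get_rgb_frame, get_depth_frame

robot = UR5e(name="my_robot")

# Read sensors: camera, range finder, joint encoders, and end-effector GPS.
rgb = get_rgb_frame()      # H x W x 3 NumPy array
depth = get_depth_frame()  # H x W depth map
print("joints:", robot.get_arm_pos())
print("end effector:", robot.get_EE_position())

# Command a joint-space pose and toggle the vacuum gripper.
robot.set_arm_pos([0.0, -np.pi / 2, np.pi / 2, -np.pi / 2, -np.pi / 2, 0.0])
robot.set_gripper_pos('on')   # grasp
robot.set_gripper_pos('off')  # release
```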