Fusing Web3 primitives with AI, Community and Speed to compete in the first ever autonomous racing league


Ocean Autopilot Initiative 🤖

A2RL is the first autonomous racing league: a competition that brings together technology and speed, in which 8 different AI-powered vehicles compete against each other.

Ocean Protocol Foundation sees an opportunity to leverage its existing Data Challenges program to build a crowd-sourced metamodel for autonomous racing sports. The driver behind this initiative is to utilize & promote the use of our technology within the automotive industry, but also to collect & make valuable datapoints available for research purposes, all while strengthening the ties with our partner Mercedes-Benz.

THE TEAM

More information about team members & allocated resources is available here

VEHICLE INFORMATION

These combustion-engine supercars use a Dallara chassis and are equipped with 7x Sony IMX728 cameras (360° coverage), 4x ZF ProWave radars, 3x Seyond Falcon Kinetic FK1 lidars and 1x Neousys RGS-8805GC computer - collecting approximately 5 GB of data per second.
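At ~5 GB/s, raw sensor data accumulates quickly. A rough back-of-the-envelope sketch of what that rate implies for storage (the 5 GB/s figure comes from the spec above; the session lengths are illustrative assumptions, not official race formats):

```python
# Back-of-the-envelope storage estimate for the sensor suite described above.
# The ~5 GB/s aggregate rate is taken from the vehicle spec; session lengths
# below are illustrative assumptions, not official A2RL race formats.

DATA_RATE_GB_PER_S = 5

for label, minutes in [("practice stint", 2), ("qualifying run", 10), ("full race", 60)]:
    raw_gb = DATA_RATE_GB_PER_S * minutes * 60
    print(f"{label:>15}: {minutes:>3} min -> ~{raw_gb:,} GB (~{raw_gb / 1024:.1f} TB) raw")
```

Even a single hour on track approaches 18 TB of raw data, which puts the data-monetization goal in the Execution Plan below into perspective.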

OTHER RACING LEAGUES/CHALLENGES

Source: https://www.indyautonomouschallenge.com/teams

ACTIVE TEAMS

Technical University of Munich | Email
Constructor University | Email
Università di Modena Unimore | Email
Politecnico di Milano Polimove | Email
HUMDA Lab | Email
Kinetiz | Email
Code19 | Email
Khalifa University | Email
Massachusetts Institute of Technology | Email
University of California, Berkeley | Email
Auburn University | Email
Purdue University | Email
University of Virginia | Email
Korea Advanced Institute of Science and Technology | Email
Ocean Protocol coming next!

INACTIVE TEAMS

Indian Institute of Technology Madras | Email
Ariel University | Email
Graz University of Technology | Email
University of Berkeley | Email
Embry-Riddle Aeronautical University | Email
University of Florida
University of North Carolina at Chapel Hill | Email
California Polytechnic State University | Email
Kennesaw State University
University of Michigan-Dearborn | Email
Amrita Vishwa Vidyapeetham | Email
Colorado State University | Email
Texas A&M University | Email
University of Wisconsin-Madison | Email
Warsaw University of Technology | Email

Execution Plan

  • Data Challenges to announce a new initiative focused on Autonomous Racing, particularly around Deep Learning models

  • This initiative will initially be supported by Ocean Protocol Foundation - particularly by Stream 2 - with the goal of becoming a standalone, independent initiative that uses the Ocean stack and promotes visibility for Ocean Protocol Foundation and its ecosystem.

  • This initiative should eventually sustain itself via sponsorships, donations and revenue generated by monetizing the hundreds of GB of data collected on track.

  • Ocean Protocol Foundation will do the initial heavy lifting by creating a collaborative framework that allows crowdsourcing of Autonomous Racing models from its data science community. This will mostly be supported through internal efforts, as well as by establishing at least 7 Sprints, each focusing on a targeted area. This structure should allow new participants to enter the challenge at any time and eventually attract more and more universities as well.

  • Ocean Protocol Foundation to propose a partnership to its collaborator Acentrik - which is sponsored by Mercedes-Benz Singapore - to form the first Web3 Racing Team and compete in the 2025 A2RL championship

  • Ocean Protocol Foundation to eventually establish a RacingDAO with the goal of democratizing access to racing sports, allowing more AI talent to join the program, contribute to research and possibly govern the model.

Sprints

  • Sprint 1: Planning Phase
  • Sprint 2: Static & Dynamic Object Detection
  • Sprint 3: Localization & Mapping
  • Sprint 4: Trajectory Planning
  • Sprint 5: Vehicle Control System
  • Sprint 6: Race Regulation Planning
  • Sprint 7: Creation of Racing Metamodel (see the sketch after this list)
  • Sprint 8: Simulations & Testing
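To make Sprint 7 more concrete, here is a minimal sketch of what a crowd-sourced racing metamodel could look like: a weighted ensemble that fuses control predictions from several community-submitted models. Every name, the control-vector layout and the score-based weighting are hypothetical assumptions for illustration, not a committed design:

```python
import numpy as np

# Hypothetical sketch of a Sprint 7 "Racing Metamodel": each crowd-sourced
# model maps sensor features to a control vector (steering, throttle, brake);
# the metamodel fuses them, weighting each model by its validation score.
# All names and the weighting scheme are illustrative assumptions.

class RacingMetamodel:
    def __init__(self, models, validation_scores):
        scores = np.asarray(validation_scores, dtype=float)
        self.models = models
        self.weights = scores / scores.sum()  # convex combination of models

    def predict(self, features):
        preds = np.stack([m(features) for m in self.models])  # (n_models, 3)
        return self.weights @ preds  # weighted-average control vector

# Toy stand-ins for community-submitted models.
model_a = lambda feats: np.array([0.10, 0.80, 0.0])  # steering, throttle, brake
model_b = lambda feats: np.array([0.05, 0.90, 0.0])

meta = RacingMetamodel([model_a, model_b], validation_scores=[0.7, 0.9])
print(meta.predict(features=None))  # fused control vector
```

A score-weighted average is only one possible fusion rule; gating networks or per-regime model selection (e.g., different models for overtaking vs. hot laps) would fit the same interface.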

Research Labs

Datasets

  • Udacity - Udacity driving datasets released for the Udacity Challenges. Contains ROSBAG training data (~80 GB).
  • Comma.ai - 7.25 hours of largely highway driving. Consists of 10 video clips of variable length recorded at 20 Hz with a camera mounted on the windshield of an Acura ILX 2016. Alongside the videos, measurements such as the car's speed, acceleration, steering angle, GPS coordinates and gyroscope angles were also recorded. These measurements are transformed into a uniform 100 Hz time base (see the resampling sketch after this list).
  • Oxford RobotCar - over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer term changes such as construction and roadworks.
  • Oxford Radar RobotCar - radar extension to The Oxford RobotCar Dataset providing data from a Navtech CTS350-X Millimetre-Wave FMCW radar and Dual Velodyne HDL-32E LIDARs with optimised ground truth radar odometry for 280 km of driving.
  • Oxford Road Boundaries - contains 62605 labelled samples, of which 47639 samples are curated. Each of these samples contain both raw and classified masks for left and right lenses. The data contains images from a diverse set of scenarios such as straight roads, parked cars, and junctions.
  • KITTI Vision Benchmark Suite - 6 hours of traffic scenarios at 10-100 Hz, captured using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system.
  • University of Michigan North Campus Long-Term Vision and LIDAR Dataset - consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot.
  • University of Michigan Ford Campus Vision and Lidar Data Set - dataset collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system.
  • DIPLECS Autonomous Driving Datasets (2015) - recorded by placing an HD camera in a car driving around the Surrey countryside. The dataset contains about 30 minutes of driving. The video is 1920x1080 in colour, encoded with the H.264 codec. Steering is estimated by tracking markers on the steering wheel, and the car's speed is estimated via OCR of the car's speedometer (the accuracy of this method is not guaranteed).
  • Velodyne SLAM Dataset from Karlsruhe Institute of Technology - two challenging datasets recorded with the Velodyne HDL64E-S2 scanner in the city of Karlsruhe, Germany.
  • SYNTHetic collection of Imagery and Annotations (SYNTHIA) - consists of a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations for 13 classes: misc, sky, building, road, sidewalk, fence, vegetation, pole, car, sign, pedestrian, cyclist, lanemarking.
  • Cityscapes Dataset - focuses on semantic understanding of urban street scenes. A large-scale dataset containing a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames plus a larger set of 20,000 weakly annotated frames, making it an order of magnitude larger than similar previous efforts. Details on annotated classes and examples of the annotations are available.
  • CSSAD Dataset - several real-world stereo datasets exist for developing and testing perception and navigation algorithms for autonomous vehicles, but none was recorded in a developing country, so they lack characteristics typical of those streets and roads, such as abundant potholes, speed bumps and peculiar pedestrian flows. This stereo dataset was recorded from a moving vehicle and contains high-resolution stereo images complemented with orientation and acceleration data from an IMU, GPS data, and data from the car computer.
  • Daimler Urban Segmentation Dataset - consists of video sequences recorded in urban traffic: 5000 rectified stereo image pairs at a resolution of 1024x440. 500 frames (every 10th frame of the sequence) come with pixel-level semantic class annotations for 5 classes: ground, building, vehicle, pedestrian, sky. Dense disparity maps are provided as a reference; however, these are not manually annotated but computed using semi-global matching (SGM).
  • Self Racing Cars - XSens/Fairchild Dataset - the files include measurements from the Fairchild FIS1100 6 Degree of Freedom (DoF) IMU, the Fairchild FMT-1030 AHRS, the Xsens MTi-3 AHRS, and the Xsens MTi-G-710 GNSS/INS. The files from the event can all be read in the MT Manager software, part of the MT Software Suite, available here.
  • MIT AGE Lab - a small sample of the 1,000+ hours of multi-sensor driving datasets collected at AgeLab.
  • Yet Another Computer Vision Index To Datasets (YACVID) - a list of frequently used computer vision datasets.
  • KUL Belgium Traffic Sign Dataset - a large dataset with 10000+ traffic sign annotations covering thousands of physically distinct traffic signs. 4 video sequences recorded with 8 high-resolution cameras mounted on a van, totaling more than 3 hours, with traffic sign annotations, camera calibrations and poses. About 16000 background images. The material was captured in Belgium, in urban environments in the Flanders region, by GeoAutomation.
  • LISA: Laboratory for Intelligent & Safe Automobiles, UC San Diego Datasets - traffic sign, vehicles detection, traffic lights, trajectory patterns.
  • Multisensory Omni-directional Long-term Place Recognition (MOLP) - a dataset for autonomous driving recorded using omni-directional stereo cameras over one year in Colorado, USA. paper
  • Lane Instance Segmentation in Urban Environments - a semi-automated method for labelling lane instances. 24,000-image set available. paper
  • Foggy Zurich Dataset - Curriculum Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding. 3.8k high-quality foggy images in and around Zurich. paper
  • SullyChen AutoPilot Dataset - a dataset collected by SullyChen in and around California.
  • Waymo Training and Validation Data - one terabyte of data with 3D and 2D labels.
  • Intel's dataset for AD conditions in India - a dataset for autonomous driving conditions in India (road scene understanding in unstructured environments), consisting of 10k images finely annotated with 34 classes, collected from 182 drive sequences on Indian roads (by Intel & IIIT Hyderabad).
  • nuScenes Dataset - a large dataset with 1,400,000 images and 390,000 lidar sweeps from Boston and Singapore. Provides manually generated 3D bounding boxes for 23 object classes.
  • German Traffic Sign Dataset - a large dataset of German traffic sign recognition data (GTSRB) with more than 40 classes in 50k images, plus detection data (GTSDB) with 900 image annotations.
  • Swedish Traffic Sign Dataset - a dataset with traffic signs recorded on 350 km of Swedish roads, consisting of 20k+ images, about 20% of which are annotated.
  • Argoverse 3D Tracking Dataset - a large dataset with ~1M images and ~1M labeled 3D cuboids from Miami and Pittsburgh. Provides HD maps and imagery from 7 ring cameras, 2 stereo cameras, and LiDAR.
  • Argoverse Motion Forecasting Dataset - a large dataset with trajectories of tracked objects across 324,557 scenes, mined from 1006 hours of driving.
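Several of the datasets above ship measurements on a uniform time base (the Comma.ai entry, for instance, resamples its log measurements to 100 Hz). A minimal sketch of that resampling step, assuming linear interpolation and using a synthetic signal in place of real logs:

```python
import numpy as np

# Minimal sketch: resample irregularly timestamped measurements (speed,
# steering angle, ...) onto a uniform 100 Hz time base, as the Comma.ai
# dataset description above mentions. The synthetic signal stands in for
# a real log; linear interpolation is an assumption, not Comma.ai's method.

rng = np.random.default_rng(0)
t_raw = np.sort(rng.uniform(0.0, 10.0, size=400))  # irregular timestamps (s)
speed_raw = 30.0 + np.sin(t_raw)                   # stand-in speed signal (m/s)

t_uniform = np.arange(0.0, 10.0, 1.0 / 100.0)      # uniform 100 Hz grid
speed_100hz = np.interp(t_uniform, t_raw, speed_raw)

print(t_uniform.shape, speed_100hz.shape)  # (1000,) (1000,)
```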

Open Source Software
