Visual model-predictive localization for computationally efficient autonomous racing of a 72-g drone

Shuo Li*, Erik van der Horst, Philipp Duernay, Christophe De Wagter, Guido C.H.E. de Croon

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

16 Citations (Scopus)
153 Downloads (Pure)

Abstract

Drone racing is becoming a popular e-sport all over the world, and beating the best human drone race pilots has quickly become a new major challenge for artificial intelligence and robotics. In this paper, we propose a novel sensor fusion method called visual model-predictive localization (VML). Within a small time window, VML approximates the error between the model-predicted position and the visual measurements as a linear function. Once the parameters of this function are estimated with the RANSAC algorithm, the error model can be used to compensate the prediction in the future. In this way, outliers are handled efficiently and the vision delay is compensated as well. Theoretical analysis and simulation results show a clear advantage over Kalman filtering when dealing with the occasional large outliers and vision delays that occur in fast drone racing. Flight tests are performed on a tiny racing quadrotor named “Trashcan,” which was equipped with a Jevois smart camera for a total of 72 g. An average speed of 2 m/s is achieved, with a maximum speed of 2.6 m/s. To the best of our knowledge, this flying platform is currently the smallest autonomous racing drone in the world, while still being one of the fastest autonomous racing drones.
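The abstract only summarizes the VML idea; as a rough illustration (not the paper's implementation), the core step of fitting a linear error model with RANSAC and using it to correct the prediction could look like the sketch below. The function names, the 1-D error signal, and the inlier threshold are illustrative assumptions.

```python
import numpy as np

def fit_error_model(t, prediction_error, n_iters=100, inlier_thresh=0.05, rng=None):
    """Fit e(t) ~ a*t + b to prediction-vs-vision errors with a simple RANSAC loop.

    t: sample times in the window; prediction_error: model prediction minus
    visual measurement at those times (1-D, one axis, for illustration).
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = None
    for _ in range(n_iters):
        # Sample a minimal set (two points) and hypothesize a line through them.
        i, j = rng.choice(len(t), size=2, replace=False)
        if t[i] == t[j]:
            continue
        a = (prediction_error[j] - prediction_error[i]) / (t[j] - t[i])
        b = prediction_error[i] - a * t[i]
        residuals = np.abs(prediction_error - (a * t + b))
        inliers = residuals < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the inlier set with least squares for the final parameters.
    a, b = np.polyfit(t[best_inliers], prediction_error[best_inliers], deg=1)
    return a, b

def corrected_position(predicted_pos, a, b, t_now):
    """Compensate the model prediction with the extrapolated error model."""
    return predicted_pos - (a * t_now + b)
```

Because the error model is extrapolated forward in time, delayed vision measurements can still correct the current prediction, and measurements rejected as RANSAC outliers never contaminate the estimate.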

Original language: English
Pages (from-to): 667-692
Number of pages: 26
Journal: Journal of Field Robotics
Volume: 37
Issue number: 4
DOIs
Publication status: Published - 2020

Keywords

  • autonomous drone race
  • visual model-predictive localization
