Learning visuomotor policies for autonomous systems from event-based cameras

Editor’s note: This research was conducted by Sai Vemprala, Senior Researcher, and Ashish Kapoor, Partner Researcher, both of Microsoft Research, along with Sami Mian, who was a PhD researcher at the University of Pittsburgh and an intern at Microsoft at the time of the work.

Autonomous systems are built around complex perception-action loops: observations of the world must be processed in real time to produce safe and effective actions. A significant amount of research has focused on creating perception and navigation algorithms for such systems, often using visual data from cameras to reason about which action to take given the platform and the task at hand.
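To make the observe-decide-act cycle concrete, here is a minimal, hypothetical Python sketch of a perception-action loop. All names here (`DummySensor`, `DummyActuator`, `policy`) are illustrative placeholders, not components of the work described in this post.

```python
import numpy as np

class DummySensor:
    """Stand-in for a camera: returns a random 64x64 grayscale frame."""
    def read(self):
        return np.random.rand(64, 64)

class DummyActuator:
    """Stand-in for the platform's control interface."""
    def apply(self, action):
        pass  # a real system would send motor or velocity commands here

def policy(observation):
    """Placeholder visuomotor policy mapping an observation to an action."""
    return np.zeros(4)  # e.g. roll, pitch, yaw rate, thrust

def perception_action_loop(sensor, pol, actuator, steps=100):
    """Closed loop: observe the world, infer an action, act, repeat."""
    for _ in range(steps):
        obs = sensor.read()       # perception: acquire an observation
        action = pol(obs)         # decision: run policy inference
        actuator.apply(action)    # action: command the platform

perception_action_loop(DummySensor(), policy, DummyActuator())
```

In a deployed system, the loop's update rate is bounded by the slowest stage, which is why sensing and inference latency matter so much for safety.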

While there have been a lot of improvements in how this

