I’m Dinesh Atchuthan. I have a background in state estimation as well as point cloud processing and filtering. During my work experience, I have developed methods for IMU-based state estimation as well as features designed to tackle very specific issues related to autonomous vehicles. The sections below describe my past experiences in more detail.
During my experience as an R&D Perception Engineer at EasyMile, my work was centered on three questions:
This part was addressed by designing specific filters to process the point cloud. I also designed and implemented a novel intersection validation feature that takes into account not only the obstacles detected in an intersection but also the possibility that specific types of obstacles (pedestrians, cyclists, cars) are present but not visible given the lidar configuration.
My current work topics include the design of intrinsic and extrinsic calibration methods for lidars and cameras. Accurate sensor calibration is a strong requirement for developing processing techniques that combine data acquired from multiple sensors.
Autonomous goods transportation solutions such as the TractEasy are expected to operate with high availability in all weather conditions. In this context, it is necessary to assess the performance of the stack in terms of target detection capabilities as well as weather artifact filtering in order to elaborate a degraded-mode strategy. Several studies have been conducted (from strategy definition to data acquisition and analysis) to provide a comprehensive overview of the capabilities of the platforms.
Weather classification using lidar sensors is still an open and challenging question. This classification is necessary to define a long-term solution allowing the vehicles to operate in degraded weather conditions. This subject is currently being addressed by a PhD student whom I am supervising.
I completed my PhD in the Gepetto research group at LAAS-CNRS in Toulouse, France. My research was devoted to the use of preintegration theory for inertial estimation, making proper use of the power provided by Lie group theory. On the one hand, I worked on explaining and developing the preintegration method applied to Inertial Measurement Units (IMUs) using proper derivation methods; this work is intended to be used on humanoid robots for real-time state estimation based on least-squares optimization rather than the filters that are usually employed. On the other hand, I implemented the method in a not-yet-published C++ library and successfully applied it to odometry for pedestrian navigation using IMU data only. This work is the first building block of a more challenging research focus that is to be pursued by a new PhD student.
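For readers unfamiliar with preintegration, the sketch below gives the standard discrete form of the preintegrated deltas between two keyframes $i$ and $j$, as commonly written in the literature (the notation is mine and is only an illustration, not an excerpt from the thesis): gyroscope and accelerometer measurements $\boldsymbol{\omega}_k$ and $\mathbf{a}_k$, corrected by the bias estimates $\mathbf{b}_g$ and $\mathbf{b}_a$, are integrated on the rotation manifold through the exponential map, producing motion increments that do not depend on the initial state and can be used directly as factors in a least-squares optimization.

$$
\Delta \mathbf{R}_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\!\big((\boldsymbol{\omega}_k - \mathbf{b}_g)\,\Delta t\big), \qquad
\Delta \mathbf{v}_{ij} = \sum_{k=i}^{j-1} \Delta \mathbf{R}_{ik}\,(\mathbf{a}_k - \mathbf{b}_a)\,\Delta t,
$$

$$
\Delta \mathbf{p}_{ij} = \sum_{k=i}^{j-1} \Big[ \Delta \mathbf{v}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta \mathbf{R}_{ik}\,(\mathbf{a}_k - \mathbf{b}_a)\,\Delta t^2 \Big].
$$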
In November 2018, I started working as a postdoctoral researcher in the same Gepetto team. My work was devoted to using optimal estimation methods for visual-inertial localization and applying them to humanoid robots in order to close the control loop and provide a state estimate of the robot in its environment. I worked on the implementation of a visual SLAM estimator based on AprilTags and trained the new PhD student who will continue the investigations initiated in my thesis. Our work is illustrated in the video below.
My research is done in collaboration with Joan Solà and Angel Santamaria-Navarro for the vision- and inertial-based estimation aspects, and with Nicolas Mansard and Olivier Stasse for the robotics perspective. During my postdoctoral stay, I worked with Médéric Fourmy and supervised his first months as a PhD student.
Decentralized Finance (DeFi) Deep Dive
Coursera
Deep Learning (2022, in progress)
Coursera
PhD in Robotics, 2018
LAAS-CNRS and Université de Toulouse
Engineering Degree, 2015
Télécom Physique Strasbourg
MSc Imaging, Medical and Surgical Robotics (Imagerie, Robotique Médicale et Chirurgicale - IRMC), 2015
Télécom Physique Strasbourg
MSc Imaging, Robotics and Engineering for the Living (Imagerie, Robotique, Ingénierie pour le Vivant - IRIV), 2014
Télécom Physique Strasbourg