Mayank Sharma

Computer Vision Engineer

University of Maryland

Biography

I am Mayank Sharma, a passionate roboticist with a strong background in computer vision and autonomous systems, currently pursuing a Master of Engineering in Robotics at the University of Maryland, College Park. My expertise lies in developing autonomous systems, designing algorithms for robot navigation, and optimizing 3D reconstruction techniques. Most recently, I worked as a Computer Vision Engineer Intern at Kick Robotics, where I contributed to the development of an autonomous mobile robot for warehouse exploration and carbon monoxide level monitoring. I also gained substantial research experience at the Robotics Algorithms & Autonomous Systems Lab, focusing on Next-Best-View (NBV) planning and deep learning techniques for object mapping.

During my undergraduate studies at NMIMS University, I worked on innovative projects, including the design of a novel UAV docking mechanism at IIT Bombay that significantly extended UAV flight time. I am proficient in Python, C++, and MATLAB, with a strong command of tools like ROS2, Nav2, OpenCV, and PyTorch. My passion for robotics drives me to continually explore new challenges in autonomous systems, computer vision, and path planning. I am actively seeking full-time robotics opportunities starting in September 2024.

Interests
  • Autonomous Systems
  • Computer Vision
  • Path Planning
  • Control Systems
Education
  • Master of Engineering in Robotics, 2024

    University of Maryland

  • B.Tech in Mechatronics, 2022

    NMIMS University

Experience

Computer Vision Engineer Intern
Kick Robotics
February 2024 – Present, College Park, USA
  • Working on integrating Nvidia Isaac Sim with ROS2 to perform Visual SLAM (VSLAM) in a warehouse simulation environment and to create large-scale datasets.
  • Created a custom ROS2 Nav2 costmap plugin for warehouse navigation that uses real-time pixel-level image classification (YOLOv8n optimized with TensorRT) to inject corrections into local costmaps, enabling robots to traverse areas they previously avoided and improving overall navigation efficiency.
  • Streamlined cross-functional CI/CD workflows by containerizing ROS2 packages with Docker, and improved deployment efficiency and reliability by implementing unit and integration tests with ROS2's pytest-based testing framework and troubleshooting issues.
  • Developed an autonomous mobile robot for mapping, navigation, and carbon monoxide monitoring in a warehouse using ROS2 Nav2.
  • Utilized an EKF to fuse wheel odometry and IMU data streams for non-linear state estimation.
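The fusion step above can be sketched as a minimal planar EKF: wheel odometry (linear and angular velocity) drives the prediction through a unicycle motion model, and an IMU yaw reading corrects the heading. All names and noise values here are illustrative, not the production code.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate state x = [px, py, yaw] with a unicycle motion model."""
    px, py, yaw = x
    x_pred = np.array([px + v * dt * np.cos(yaw),
                       py + v * dt * np.sin(yaw),
                       yaw + w * dt])
    # Jacobian of the motion model w.r.t. the state
    F = np.array([[1, 0, -v * dt * np.sin(yaw)],
                  [0, 1,  v * dt * np.cos(yaw)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_yaw(x, P, z_yaw, R):
    """Correct the heading with an IMU yaw measurement z = yaw + noise."""
    H = np.array([[0.0, 0.0, 1.0]])   # measurement Jacobian (yaw only)
    y = z_yaw - x[2]                  # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K.flatten() * y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

In practice the same update structure extends to further sensors; the key property is that the yaw covariance shrinks after each IMU correction.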
Graduate Student Researcher
Robotics Algorithms & Autonomous Systems (RAAS) Lab
March 2024 – August 2024, College Park, USA
  • Worked on optimizing 3D reconstruction and object mapping with Next-Best-View (NBV) planning by estimating image-based uncertainty to maximize information gain, and using deep learning with Gaussian splats to predict full models from partial views.
Robotics Engineer Intern
Lighter than Air Systems Lab, IIT Bombay
July 2021 – August 2022, Mumbai, India
  • Built battery-swapping mechanisms and integrated them with the UAV docking mechanism, cutting recharge turnaround time by 45% compared with conventional charging techniques.
  • Developed firmware for a robust arresting mechanism to lock the UAVs in all six degrees of freedom.
Control Systems Research Intern
NMIMS University
May 2021 – July 2021, Mumbai, India
  • Researched nonlinear BLDC motor speed-control methods and implemented a speed-control algorithm based on the sliding mode reaching law (SMRL) in MATLAB Simulink.
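The reaching-law idea behind that controller can be sketched as a discretized simulation: the sliding variable s (e.g. the speed error) obeys s_dot = -eps * sign(s) - q * s, so it decays exponentially far from the surface and at a constant rate near it. The gains and step size below are illustrative, not the Simulink model's.

```python
import numpy as np

def simulate_reaching_law(s0, eps=0.5, q=2.0, dt=1e-3, steps=5000):
    """Forward-Euler simulation of the exponential reaching law
    s_dot = -eps * sign(s) - q * s, returning the trajectory of s."""
    s = s0
    history = [s]
    for _ in range(steps):
        s = s + dt * (-eps * np.sign(s) - q * s)
        history.append(s)
    return history
```

Starting from a large error, the trajectory converges to a small chattering band around zero whose width scales with eps * dt, which is why the reaching-law gains trade convergence speed against chatter.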

Projects

3D Time to Collision using Sensor Fusion
Detected and tracked objects in 3D space from the benchmark KITTI dataset using camera and lidar measurements, and computed time-to-collision with both sensors by projecting 3D lidar points onto the camera image.
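As an illustration of the lidar side of this computation, a constant-velocity time-to-collision estimate from two consecutive range readings (a simplified sketch, not the full pipeline):

```python
def ttc_lidar(d0: float, d1: float, dt: float) -> float:
    """Constant-velocity TTC: with range d0 in the previous frame and d1
    in the current frame, taken dt seconds apart, the closing speed is
    (d0 - d1) / dt and TTC = d1 / closing_speed."""
    closing_speed = (d0 - d1) / dt
    if closing_speed <= 0:          # not approaching: TTC is undefined
        return float("inf")
    return d1 / closing_speed
```

For example, d0 = 8.0 m and d1 = 7.9 m over dt = 0.1 s give a closing speed of 1 m/s and a TTC of 7.9 s. The camera-based variant replaces ranges with the scale change of the tracked bounding box between frames.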
Ariac Agility Challenge
Used MoveIt motion planning and ROS services to pick and place bin parts with a UR5 robot and submitted orders using AGVs.
Biomimicry Robotic Snake
Designed and simulated a robotic snake achieving snake-like locomotion.
Camera Calibration
Implemented Tsai’s method for camera calibration, using QR decomposition to recover the camera intrinsics.
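The intrinsics-recovery step can be sketched as follows, assuming a known 3x4 projection matrix P = K[R|t]: the left 3x3 block M = KR is split into an upper-triangular K and a rotation R via an RQ decomposition built from NumPy's QR. Function names here are illustrative.

```python
import numpy as np

def rq(M):
    """RQ decomposition: M = R @ Q with R upper triangular, Q orthogonal."""
    Pm = np.flipud(np.eye(3))            # row-reversal permutation
    Qh, Rh = np.linalg.qr((Pm @ M).T)    # QR of the flipped, transposed M
    R = Pm @ Rh.T @ Pm                   # back-permute to upper triangular
    Q = Pm @ Qh.T
    S = np.diag(np.sign(np.diag(R)))     # force a positive diagonal on R
    return R @ S, S @ Q                  # S @ S = I, so (R@S) @ (S@Q) == M

def intrinsics_from_projection(P):
    """Split the left 3x3 block of P = K[R|t] into intrinsics K and rotation R."""
    K, R = rq(P[:, :3])
    return K / K[2, 2], R                # normalise so K[2, 2] == 1
```

Since an RQ decomposition with a positive-diagonal triangular factor is unique for an invertible matrix, the recovered K and R match the ground truth exactly up to floating-point error.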
Cyber Shopper
Executed pick-and-place operations for grocery items using a mobile robot with a UR5 arm in ROS2 Gazebo, employing inverse kinematics for precise manipulation. Used Peter Corke’s Robotics Toolbox in MATLAB to validate the forward and inverse kinematics of the UR5 arm, and introduced proportional control for the robot’s mobile-base movement.
Human Detector and Tracker
Followed an Agile software development process to detect and track humans in a frame using a HOG feature descriptor and an SVM in C++14.
Implicit Neural Representations
Improved image reconstruction by adding positional encoding to a basic feed-forward network (FFN), achieving 29 dB PSNR over the baseline model.
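The positional-encoding ingredient can be sketched as a Fourier-feature mapping: each input coordinate x is lifted to [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..L-1, which lets a small MLP fit high-frequency image detail. The frequency count below is illustrative.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map coordinates x of shape (N, D) in [0, 1] to Fourier features
    of shape (N, 2 * D * num_freqs)."""
    x = np.asarray(x, dtype=float)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # (L,)
    angles = x[..., None] * freqs                 # (N, D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(x.shape[0], -1)
```

The encoded features replace the raw (x, y) pixel coordinates as network input; without this lifting, a plain FFN is biased toward low-frequency reconstructions.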
Optimal Control Robot Manipulator
Applied an optimal control approach (an LQR controller) to solve the robust control problem for a robot manipulator.
Robotic Grasping
Implemented Proximal Policy Optimization (PPO) and Deep Q-Network (DQN) reinforcement learning algorithms to optimize KUKA robot pick-and-place tasks in PyBullet using OpenAI Gym.
Semantic Segmentation using SegFormer
Trained a SegFormer model on the Cityscapes dataset and tested it on edge devices, achieving 45% mIoU.
Structure From Motion
Reconstructed a 3D scene from a given set of images by feature correspondence with RANSAC-based outlier rejection along with triangulation and nonlinear optimization techniques for robust camera pose estimation.
Superpixel Image Segmentation
Built a SLIC superpixel image segmentation network using ResNet18, achieving 85% accuracy.

Competitions

ABU Robocon 2021

Co-led a team of 70 people, overseeing the Manufacturing, Design, and Simulation departments, to fabricate and assemble two robots from scratch that shoot arrows into a pot placed at a distance, achieving a national rank of 11.