Mayank Sharma

Computer Vision Engineer

University of Maryland

Biography

Hi! I’m Mayank Sharma, an AI & Robotics Engineer with an M.Eng. in Robotics from the University of Maryland (’24). At Ryght Inc. I built LLM‑powered diagnostic tooling, including a side‑by‑side chatbot‑comparison framework and a radiology triage module that together cut manual review time by roughly 35 percent. Earlier, as a Computer Vision Engineer at Kick Robotics, I designed the perception stack and autonomy logic for a warehouse AMR that navigates aisles, avoids obstacles, and monitors carbon‑monoxide levels. My research at the Robotics Algorithms & Autonomous Systems Lab focused on Next‑Best‑View planning and deep‑learning‑driven 3‑D reconstruction to improve object mapping, and at IIT Bombay I co‑designed a magnetic UAV docking mechanism that extended quad‑rotor flight time by 40 percent. I am proficient in Python, C++, ROS2/Nav2, OpenCV, PyTorch, MATLAB, TypeScript, React, and Tailwind CSS. Passionate about the intersection of autonomy, perception, and large language models, I am actively exploring new opportunities and am open to relocation.

Interests
  • Autonomous Systems
  • Computer Vision
  • Path Planning
  • Control Systems
Education
  • Master of Engineering (M.Eng.) in Robotics, 2024

    University of Maryland

  • B.Tech in Mechatronics, 2022

    NMIMS University

Experience

Ryght Inc
AI Engineer
September 2024 – July 2025, Anaheim, CA, USA
  • Built a chat app in React for side-by-side LLM comparison with editable prompt templates and settings, improving testing efficiency.
  • Revamped product features across Ryght AI’s clinical trial platform by modernizing legacy UI with React, TypeScript, Next.js 13, and Tailwind CSS, delivering a cohesive and user-friendly interface for sponsors and partners.
 
Kick Robotics
Computer Vision Engineer
June 2024 – May 2025, College Park, MD, USA
  • Developed an autonomous mobile robot (AMR) using Nvidia Jetson, ROS2 Nav2 for mapping, navigation, and carbon monoxide monitoring, with EKF-based fusion of wheel odometry and IMU data for robust state estimation.
  • Implemented a custom ROS2 Nav2 costmap plugin for the AMR that used YOLOv8n for real-time pixel-level segmentation.
  • Optimized the segmentation model with TensorRT, achieving 30% faster inference hence improving warehouse navigation efficiency.
  • Automated model training on AWS by using CloudFormation to orchestrate EC2 instances, managing data on S3, triggering SNS notifications via Lambda, and monitoring jobs on CloudWatch.
  • Streamlined CI/CD by containerizing ROS2 packages with Docker and building SIL simulations with unit/integration tests, ensuring reliable AMR deployment and robust performance in mapping, navigation, and monitoring.
 
Robotics Algorithms & Autonomous Systems (RAAS) Lab
Graduate Student Researcher
March 2024 – August 2024, College Park, MD, USA
  • Worked on optimizing 3D reconstruction and object mapping with Next-Best-View (NBV) planning by estimating image-based uncertainty to maximize information gain, and using deep learning with Gaussian splats to predict full models from partial views.
 
Lighter than Air Systems Lab, IIT Bombay
Robotics Research Intern
July 2021 – August 2022, Mumbai, India
  • Achieved ~20 cm landing accuracy with a precision-landing algorithm using AprilTags and OpenCV pose estimation in ROS and PX4.
  • Increased UAV flight time by ~720x (1 hour to 30 days) by designing the CAD, manufacturing the hardware, and developing the firmware for a novel modular mid-air docking and battery-swapping mechanism.
 
NMIMS University
Control Systems Research Intern
May 2021 – July 2021, Mumbai, India
  • Researched nonlinear BLDC motor speed-control methods and implemented a speed-control algorithm based on the sliding mode reaching law (SMRL) in MATLAB Simulink.
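For context, the reaching-law idea behind SMRL can be sketched as a discrete-time simulation of the exponential reaching law; a minimal Python sketch (the gains `k` and `q` and the time step are illustrative placeholders, not the values used in the project):

```python
import numpy as np

def smrl_step(s, k=2.0, q=5.0, dt=1e-3):
    """One Euler step of the exponential sliding-mode reaching law.

    Drives the sliding variable s toward zero via
    s_dot = -q*s - k*sign(s); k and q are illustrative gains.
    """
    return s + dt * (-q * s - k * np.sign(s))

# Simulate the sliding variable converging from s = 1
s = 1.0
for _ in range(5000):
    s = smrl_step(s)
```

Once |s| is small, the -k*sign(s) term produces the characteristic chattering around zero, bounded here by roughly k*dt per step.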

Projects

3D Time to Collision using Sensor Fusion
Detected and tracked objects in 3D space from the benchmark KITTI dataset based on camera and lidar measurements, and computed time-to-collision from both sensors by projecting 3D lidar points onto the camera image.
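The lidar-based TTC in this kind of pipeline typically comes from a constant-velocity model over two consecutive frames; a minimal sketch (function name and inputs are illustrative, not the project's actual code):

```python
def ttc_from_lidar(d_prev, d_curr, dt):
    """Constant-velocity time-to-collision from two range measurements.

    d_prev, d_curr: nearest-point distances (m) in consecutive frames
    dt: time between frames (s)
    """
    closing = d_prev - d_curr          # distance closed during dt
    if closing <= 0:
        return float('inf')            # not approaching
    return d_curr * dt / closing       # TTC = d1 * dt / (d0 - d1)
```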
Ariac Agility Challenge
Used MoveIt motion planning and ROS Services to pick and place bin parts with a UR5 robot and submitted orders using AGVs.
Biomimicry Robotic Snake
Designed and simulated a robotic snake achieving serpentine locomotion.
Camera Calibration
Implemented Tsai's method for camera calibration, using QR (RQ) decomposition to recover the camera intrinsics.
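The intrinsics-recovery step can be sketched as decomposing the left 3×3 block of the projection matrix into an upper-triangular K (intrinsics) and a rotation R via RQ decomposition; a minimal NumPy sketch under that assumption, not the project's actual implementation:

```python
import numpy as np

def rq(A):
    """RQ decomposition: A = K @ R with K upper-triangular, R orthonormal."""
    P = np.flipud(np.eye(3))              # permutation that reverses rows
    Q, U = np.linalg.qr((P @ A).T)        # QR of the reversed, transposed matrix
    K = P @ U.T @ P                       # upper-triangular factor
    R = P @ Q.T                           # orthonormal factor
    D = np.diag(np.sign(np.diag(K)))      # enforce positive diagonal on K
    return K @ D, D @ R                   # D @ D = I, so (K@D)(D@R) = A

# Example: recover intrinsics and rotation from a known K0 @ R0
K0 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
K, R = rq(K0 @ R0)
```

Enforcing a positive diagonal on K makes the decomposition unique, so the recovered factors match the original intrinsics and rotation.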
Cyber Shopper
Executed pick-and-place operations for grocery items using a mobile robot with a UR5 arm in ROS2 Gazebo, employing inverse kinematics for precise manipulation. Used Peter Corke's Robotics Toolbox in MATLAB to validate the UR5 arm's forward and inverse kinematics, and introduced proportional control for the robot's mobile base.
Human Detector and Tracker
Used an Agile software development process to detect and track humans in a frame using a HOG feature descriptor and SVM in C++14.
Implicit Neural Representations
Improved image reconstruction by adding positional encoding to a basic FFN, achieving 29 dB PSNR versus the baseline model.
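The positional-encoding step maps low-dimensional coordinates to Fourier features before the FFN, which is what lets the network fit high-frequency image detail; a minimal sketch (the frequency count is an illustrative choice, not the project's setting):

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Map coordinates x to [sin(2^k * pi * x), cos(2^k * pi * x)] features."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # pi, 2pi, 4pi, ...
    angles = x[..., None] * freqs                   # shape (..., num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
```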
LangGraph Code Agent
A streamlined code-generation agent built with LangGraph and Groq, implemented as a synchronous LangGraph graph. The agent analyzes user requests, generates Python code, performs quality-assurance checks, automatically saves the results to a file, and traces all workflow steps to the Judgeval dashboard for monitoring and evaluation.
Optimal Control Robot Manipulator
Applied an optimal-control approach, i.e. an LQR controller, to solve the robust control problem for a robot manipulator.
Robotic Grasping
Implemented Proximal Policy Optimization (PPO) and Deep Q-Networks (DQN) reinforcement learning algorithms to optimize Kuka robot pick-and-place tasks in PyBullet using OpenAI Gym.
Semantic Segmentation using Segformer
Trained a SegFormer model on the Cityscapes dataset and tested it on edge devices, achieving 45% mIoU.
Structure From Motion
Reconstructed a 3D scene from a given set of images by feature correspondence with RANSAC-based outlier rejection along with triangulation and nonlinear optimization techniques for robust camera pose estimation.
Superpixel Image Segmentation
Built a SLIC superpixel image segmentation network using ResNet18, achieving 85% accuracy.

Competitions

ABU Robocon 2021

Co-led a team of 70 people, overseeing departments such as Manufacturing, Design, and Simulation, to assemble and fabricate two robots from scratch that shot arrows into a pot placed at a distance, achieving a national rank of 11.