Thuvaraga Krishnarajah

Vetted Talent

Thuvaraga is a seasoned Senior Robotics Engineer at Proliant InfoTech in Bangalore, spearheading an Autonomous Driving project for DRDO. With a wealth of experience since 2019, Thuvaraga has a proven track record in the robotics industry. Notable achievements include the development of a SLAM map using RF sensors, integration of Autoware.ai with CarMaker, and contributions to ROS2 Nav2. Adept at bridging technologies, Thuvaraga has worked on ROS2 interfaces for deep learning models and reinforcement learning for self-driving cars in the CARLA simulator.

Prior to this, Thuvaraga held the role of ROS Engineer at RoshAi Pvt Ltd, where expertise was demonstrated in the ROS2-based Drive-By-Wire system for the Jeep Compass Autonomous vehicle. Contributions included the development of DBW interfaces, user interfaces, and testing interfaces. Automation setup and software upgrades on the DBW control system were also part of the accomplished portfolio.

Thuvaraga's journey in robotics began as an Assistant Software Engineer at InGen Dynamics, focusing on ROS development for the home assistant robot Aido. Educational credentials include a B Tech in Mechanical Engineering from SASTRA University, Tamil Nadu. Thuvaraga's skills span a spectrum of tools and languages, from ROS and Autoware to Python and C++, as well as hardware like NVIDIA GPUs. With a comprehensive background in both the software and hardware aspects of robotics, Thuvaraga is an invaluable asset to any team.

  • Role

    Robotics Simulation Developer

  • Years of Experience

    3.6 years

  • Professional Portfolio

    View here

Skillsets

  • Ubuntu - 3.5 Years

Vetted For

5 Skills

  • Role: Robotics Simulation Developer (AI Screening)
  • Result: 53%
  • Skills assessed: Large Language Models, Isaac Sim, NVIDIA Omniverse, Problem Solving Attitude, Python
  • Score: 48/90

Professional Summary

3.6 Years
  • Mar, 2023 - Present 2 yr 6 months

    Senior Robotics Engineer

    Proliant InfoTech, Bangalore
  • Aug, 2022 - Jan, 2023 5 months

    ROS Engineer

    RoshAi Pvt Ltd, Kochi
  • Nov, 2021 - May, 2022 6 months

    Assistant Software Engineer

    InGen Dynamics, Bangalore
  • Mar, 2020 - Aug, 2021 1 yr 5 months

    Developer

    Freelance
  • Sep, 2020 - Jan, 2021 4 months

    Robotics Engineer Intern

    Admatics Solutions

Applications & Tools Known

  • Python
  • OpenCV
  • TensorFlow
  • MySQL
  • C++

Work History

3.6 Years

Senior Robotics Engineer

Proliant InfoTech, Bangalore
Mar, 2023 - Present 2 yr 6 months
    • Currently working on an Autonomous Driving project at DRDO.
    • Developed a SLAM map using RF sensors and worked on ROS2 Nav2.
    • Developed a bridge between Autoware.ai and CarMaker.
    • Worked on a ROS2 interface for deep-learning models and an RL model for self-driving cars in the CARLA simulator.

ROS Engineer

RoshAi Pvt Ltd, Kochi
Aug, 2022 - Jan, 2023 5 months
    • Worked on the ROS2-based Drive-By-Wire system of the Jeep Compass autonomous vehicle.
    • Developed DBW interfaces, user interfaces & testing interfaces.
    • Set up automation on the DBW control system and performed software upgrades.

Assistant Software Engineer

InGen Dynamics, Bangalore
Nov, 2021 - May, 2022 6 months
    • Worked as a ROS developer for the home assistant robot Aido.
    • Worked on LiDAR-based navigation, environment sensors & robot movements.
    • Developed a remote-control system for a UAV using Jetson Nano, Arduino Mega, and ESP32.

Robotics Engineer Intern

Admatics Solutions
Sep, 2020 - Jan, 2021 4 months
    • Created an automated tutorial for ROS Melodic in an Ubuntu 18 Docker container.
    • Documented protocols & hardware interfaces of the PCA9698DGG, ATtiny841, ATtiny85, ESP32, and STM32.
    • RoboChef: designed the blueprint for the RoboChef control panel in AutoCAD and assembled the control panel.

Developer

Freelance
Mar, 2020 - Aug, 2021 1 yr 5 months
    • Worked on Python Selenium, RESTful API, AWS, Google Place API, and MongoDB.

Major Projects

3 Projects

Autonomous Driving Project, DRDO

Proliant InfoTech
Mar, 2023 - Present 2 yr 6 months
    • This project aims to develop deep-learning neural networks and algorithms for autonomous driving.
    • Worked on a ROS1-based POC project to build a map using sonar/radio-frequency sensors instead of LiDAR.
    • Developed a bridge between Autoware.ai and CarMaker using ROS Melodic to transfer sensor data and vehicle control commands.
    • Deployed a Prius vehicle in a custom Gazebo world and developed a script to synchronize the sensor (LiDAR, camera, GPS) data to train a deep-learning model and control the vehicle.
    • Worked on developing a PPO-based RL algorithm for self-driving cars using CARLA structured and unstructured environments.
    • Tools: ROS2 Humble & Melodic, CARLA
    • Languages: Python3, C++
    • Hardware: NVIDIA Titan & NVIDIA Quadro.

Drive By Wire System

RoshAI Pvt Ltd
Aug, 2022 - Jan, 2023 5 months
    • This project aims to develop a control system to actuate the accelerator, gear, steering, and brake of a Jeep Compass via external actuators.
    • Built a Modbus protocol library in ROS2 Humble to control the accelerator, brake, and steering.
    • Established serial communication between DBW interfaces (System Activation, Battery Management, Driver Notification, Joystick, OBD).
    • Autonomous stack: converted PCAP data to PCD format using ROS2.
    • Mobile control: established communication between the DBW system interface & an iOS app.
    • Tools: ROS2 Humble, Autoware, CARLA simulator
    • Languages: C++, Python3 & Embedded C
    • Hardware: Velodyne LiDAR, NovAtel GNSS, NVIDIA RTX 3060 dual GPU & i5 10th Gen CPU.

Aido, Home Assistant Robot

InGen Dynamics
Nov, 2021 - May, 2022 6 months
    • Worked on Hector SLAM & GMapping SLAM and stored navigation paths in MySQL.
    • Developed a UAV control system using Arduino Mega, ESP32, and Jetson Nano; worked on rosserial, ESP32 CAN, ESP32 self-hotspot, and the ROS-bridge WebSocket suite.
    • ML & DL: developed a weapon-detection model using TensorFlow Faster R-CNN; converted emotion detection, YOLO, SSD & FOMO DL models to the TensorRT framework using ONNX and TensorRT.
    • Tools: ROS Melodic, jetson-inference, TensorRT, TensorFlow & OpenCV, Arduino IDE
    • Languages: Python3, Python2, Embedded C
    • Hardware: NVIDIA Jetson Nano, RPLiDAR A1M8, Arduino Mega, ESP32

Education

  • B Tech Mechanical Engineering

    SASTRA University (2019)

AI Interview Questions & Answers

Hi. Could you help me understand more about your background by giving a brief introduction of yourself?

Yes. I graduated in 2019 from a B Tech Mechanical background. Initially I worked as a freelance developer, where I used Node.js, MongoDB, and Python to automate WhatsApp messaging and similar tasks. After that I joined Admatics Solutions, where I worked in embedded systems using ESP32, STM32, and PCA boards, working with the I2C and CAN protocols. Then I joined InGen Dynamics, where I worked on ROS: SLAM, gmapping, navigation, and hardware and software integration. After that, at RoshAi Pvt Ltd, my work was converting a semi-automatic Jeep Compass into a fully autonomous vehicle. There I worked on the Drive-By-Wire system using the Modbus protocol, and I also worked with Autoware to simulate sensor data and build maps. Then I joined Proliant InfoTech, where I work on simulations: I used Gazebo to plot a map using a sonar sensor, and Ignition Gazebo to set up an environment, navigate the vehicle, and read radar, camera, and GPS data for deep learning. I also worked on reinforcement learning for self-driving, using the CARLA simulator and PPO. I have worked in Python, ROS, C++, deep learning, and reinforcement learning. This is my main background.

For Python-based approaches, I would use computer vision, deep learning, and reinforcement learning, and Python can also be used for automation. Take collaborative robots, which are mainly used in mechanical manufacturing and production environments. In quality testing, we have to verify that the product is of proper quality, and we can use deep learning and computer vision to detect defects. In the same environment, we can also use deep learning or computer vision models to detect whether a human has had an accident, and whether people are following all the safety procedures, like wearing a helmet and carrying the required safety equipment. For machine learning and data analysis, we can analyze how the production plant and supply chain work from start to end, calculate where losses are occurring, and determine how to eliminate those losses. We can use Python to analyze all of this.
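As a rough illustration of the vision-based quality check described above (not something from the interview), here is a minimal sketch using OpenCV that flags parts whose silhouette area deviates from a known-good reference; the file name, reference area, and tolerance are hypothetical values.

```python
import cv2

# Hypothetical reference area for a "good" part, in pixels.
REFERENCE_AREA = 12000.0
TOLERANCE = 0.15  # allow 15% deviation from the reference

def part_passes_inspection(image_path: str) -> bool:
    """Very rough quality check: threshold the image, find the largest
    contour, and compare its area against a known-good reference."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    # Binarize so the part stands out from the background.
    _, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False  # no part detected at all
    area = max(cv2.contourArea(c) for c in contours)
    deviation = abs(area - REFERENCE_AREA) / REFERENCE_AREA
    return deviation <= TOLERANCE

if __name__ == "__main__":
    print("PASS" if part_passes_inspection("part.png") else "FAIL")
```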

Can you share how you have approached the incorporation of artificial intelligence models into robot navigation for improved decision making?

Yes. While I was working on reinforcement learning, we used only camera data for obstacle avoidance, lane following, handling intersections on the route, and detecting objects. Say a robot is in a warehouse: it has to navigate along its path to a goal position, where it has to do its own task. During navigation, how many turns are coming and how it is following the lanes are all important. There we can use an RL algorithm or a deep learning algorithm to teach the robot how to follow the road and how to avoid obstacles. And if it has to perform a particular task, we have to guide the robot in how to do that task, so we can use artificial intelligence to train the robot for it.

In what way can reinforcement learning within a simulation framework like Isaac Sim be applied in robotics?

Yes, of course. There are lots of algorithms in reinforcement learning: DQN, PPO, and DDPG. First we have to decide what we are going to provide as input: camera input, LiDAR input, or some other data. Given that data, we have to work out how it will be analyzed, what the reward is, what the penalty is, and what the termination conditions are. Isaac Sim itself provides lots of APIs: camera APIs, LiDAR APIs, obstacle detection APIs, and collision detection APIs for when the robot collides with humans, other robots, or other obstacles, along with the dynamics of objects and obstacles. So we write a reward policy, a penalty policy, and a termination policy based on all these conditions: when the robot completes its task, we give a reward; when it disobeys, we give a penalty; and if it causes a collision or makes a big, unacceptable mistake, we apply the termination condition. Then we need to give input to the reinforcement learning model: using Isaac Sim's APIs, we pass observations from the simulator to the RL model, analyze how it behaves in the simulation, and shape the reward, penalty, and termination policies based on that. Next we have to consider the action space. There are two types of actions, discrete and continuous, and this is a very important concept in reinforcement learning, because the action category decides what kind of distribution we use to make decisions, for example a Gaussian distribution or a beta distribution, depending on whether the action space is continuous or discrete. In Isaac Sim, both continuous and discrete actions are available and suitable, so based on that we can write the algorithm to automate the robot in the simulation. And it is not only Isaac Sim; we did the same in the CARLA simulator as well.
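To make the reward/penalty/termination structure above concrete, here is a minimal sketch written against the generic Gymnasium environment interface rather than Isaac Sim's actual API; the observation layout, thresholds, and reward values are illustrative assumptions, and a real setup would read state from the simulator's sensor and collision APIs instead.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class WarehouseNavEnv(gym.Env):
    """Toy navigation environment illustrating reward, penalty, and
    termination policies; the kinematics here are a synthetic stand-in."""

    def __init__(self):
        # Continuous action: forward velocity and steering, both in [-1, 1].
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        # Observation: robot (x, y) followed by goal (x, y).
        self.observation_space = spaces.Box(low=-10.0, high=10.0, shape=(4,), dtype=np.float32)
        self.goal = np.array([5.0, 5.0], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2, dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        self.pos += 0.1 * np.asarray(action, dtype=np.float32)
        dist = float(np.linalg.norm(self.goal - self.pos))
        reward = -0.01 * dist                 # shaping penalty for being far away
        terminated = False
        if dist < 0.5:                        # task completed -> reward
            reward += 10.0
            terminated = True
        elif np.any(np.abs(self.pos) > 9.5):  # "collision" with boundary -> terminate
            reward -= 10.0
            terminated = True
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.concatenate([self.pos, self.goal])
```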

What steps would you take to ensure a simulation scenario created using LLMs accurately reflects the given real-world problem?

An LLM is a large language model. If, for example, we have a collaborative robot, it has to collaborate with the humans involved. If a human gives a command to the robot, they can give that command in their own language, so we can use an LLM to process the words the human is using, vectorize them, and pass that sentence as a command to the robot. That is where we use LLM models. To build such models we need a lot of data. An example: when we type something, Google automatically autocompletes the words. All of that happens because of the LLM model: based on how people have previously used words, for instance on social media, the data is collected and the LLM is trained, and after deployment it is used to predict and answer your questions, and, when you search for something, to surface the website links that answer them. So, to the question of ensuring a simulation scenario created using LLMs accurately reflects the real-world problem: in a production environment we may ask lots of questions, like how many units were produced and what the quality results are. In the simulation, we can reproduce that production environment, ask the same kinds of questions a manager would ask, and try to resolve those problems in the simulation the same way we would in a real production plant, including how people interact with each other to get the quality and production results. Then we can check how things work in real time against how they work in the simulation.
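One practical step in that direction, sketched below: treat the LLM's scenario output as structured data and validate it against known real-world limits before loading it into the simulator. This is an illustrative sketch, not the interviewee's method; the schema fields and limits are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical physical limits taken from the real plant/vehicle spec.
MAX_SPEED_MPS = 15.0
MAX_OBSTACLES = 50
ALLOWED_WEATHER = {"clear", "rain", "fog"}

@dataclass
class Scenario:
    """Structured form of an LLM-generated simulation scenario."""
    ego_speed_mps: float
    num_obstacles: int
    weather: str

def validate_scenario(s: Scenario) -> list[str]:
    """Return a list of violations; empty means the scenario is
    physically plausible and safe to load into the simulator."""
    problems = []
    if not 0.0 <= s.ego_speed_mps <= MAX_SPEED_MPS:
        problems.append(f"speed {s.ego_speed_mps} outside [0, {MAX_SPEED_MPS}]")
    if not 0 <= s.num_obstacles <= MAX_OBSTACLES:
        problems.append(f"obstacle count {s.num_obstacles} outside [0, {MAX_OBSTACLES}]")
    if s.weather not in ALLOWED_WEATHER:
        problems.append(f"unknown weather {s.weather!r}")
    return problems

# Example: reject an implausible LLM output before it reaches the sim.
bad = Scenario(ego_speed_mps=80.0, num_obstacles=3, weather="clear")
print(validate_scenario(bad))  # -> ['speed 80.0 outside [0, 15.0]']
```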

Yes, async/await works like this: we call a function, and until that function returns, or if some delay happens, we use the await command to wait there until the function returns; only after that do the dependent steps run. If any function or other process depends on the result of that async function, it waits until it has completed, and then the other process starts. But it won't block the whole program; it just waits for that one result. And it is not only in Python; you can see async and await in Node.js as well. Suppose, for example, we are extracting data from a database that is far away, so a delay may happen in getting the data, and our process depends on that data. We define the async function and use the await command until the return value comes: while our query is connecting to the database, the function waits for the response, and only after it receives the response do the dependent steps start. If we didn't use async/await, say while connecting over an HTTP or WebSocket protocol, then an internet delay in getting the response on time could cause the function to terminate or exit, or it would block other processes. Here, that blocking won't happen; it just waits for the result.
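A small runnable sketch of the point above, showing that awaiting a slow call suspends only that coroutine while other work keeps running; the "database call" here is simulated with asyncio.sleep.

```python
import asyncio

async def fetch_record(delay_s: float) -> str:
    """Stand-in for a slow database/network call."""
    await asyncio.sleep(delay_s)  # yields control instead of blocking
    return f"record after {delay_s}s"

async def heartbeat():
    """Keeps running while the 'slow query' is awaited."""
    for i in range(3):
        print(f"heartbeat {i}")
        await asyncio.sleep(0.5)

async def main():
    # Both coroutines run concurrently on one thread: the await inside
    # fetch_record suspends it, letting heartbeat make progress.
    record, _ = await asyncio.gather(fetch_record(1.5), heartbeat())
    print(record)

asyncio.run(main())
```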

Given this piece of code for a robotic simulation where a robot must navigate through a series of waypoints, identify and explain the bug that will cause the robot to skip every other waypoint.

This function is simple; it is just a for loop. The first line defines all the waypoints in an array, and the second line is the for loop, running from 0 to the length of the waypoints minus one. The first item of the array is at index 0, the second index is 1, and the next is 2, so waypoints[0] is the first waypoint and waypoints[1] is the second waypoint. This loop runs over however many waypoints were defined and requests the robot to navigate to all those waypoints. That is the purpose this function is defined for.
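The transcript does not reproduce the snippet in question, so for reference, here is a common shape of a "skips every other waypoint" bug alongside the fix; the waypoint values and function names are illustrative, not the original code.

```python
waypoints = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]

def goto(wp):
    print(f"navigating to {wp}")

# Buggy version: the index advances twice per pass, once as intended
# and once by a stray extra increment, so waypoints 1, 3, ... are skipped.
def navigate_buggy(points):
    i = 0
    while i < len(points):
        goto(points[i])
        i += 1  # intended advance to the next waypoint
        i += 1  # stray extra increment: this is the bug

# Fixed version: advance exactly once per waypoint by iterating directly.
def navigate_fixed(points):
    for wp in points:
        goto(wp)

navigate_buggy(waypoints)  # visits (0, 0), (1, 1), (2, 2) only
navigate_fixed(waypoints)  # visits every waypoint in order
```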

Assuming you are using Isaac Sim for developing a realistic robotics simulation, the following is a snippet of a loop that is meant to execute as long as the simulation is running. However, due to a logical error, the loop runs infinitely even after the simulation should have ended. Identify the logical error.

Initially, simulation running is set to true, and they are creating a while loop: it runs only while simulation running is true, because a while loop runs while its condition is true and stops when the condition is false. Inside, one function performs a simulation step, and there is an if condition that checks the simulation end condition: if it is true, then simulation running becomes false, the simulation cleanup tears down everything that was set up in the simulation, and the while loop ends. The problem is that the simulation cleanup should be outside the while loop. Only when the end-condition check is true does simulation running become false and the simulation stop; apart from that, the while loop keeps running, and while the simulation is running you should not clean up the simulation. That is the logical error: if the simulation cleanup sits inside the while loop, then every time the loop runs it is as if the simulation is restarting everything, which would be a big issue, because we would be unnecessarily cleaning up the simulation every time we run a step.
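The original snippet is not included in the transcript; a sketch of the corrected structure the answer argues for might look like the following, with placeholder functions standing in for the simulator's real calls.

```python
simulation_running = True

def simulation_step():
    """Placeholder for one tick of the simulator."""

def check_end_condition() -> bool:
    """Placeholder: true once the scenario has finished."""
    return True  # ends immediately in this toy example

def simulation_cleanup():
    print("tearing down simulation resources")

while simulation_running:
    simulation_step()
    if check_end_condition():
        # Flip the flag so the loop actually exits...
        simulation_running = False

# ...and clean up exactly once, after the loop, not on every iteration.
simulation_cleanup()
```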

Explain how you would use the principle of concurrency to manage robotic movements in relation to NVIDIA Omniverse.

We would use the principle of concurrency to manage robotic movement by checking how the robot's joints and links are behaving at every concurrent moment. Based on that, we can confirm whether the robot reaches its target, picks the target, and does its task. So concurrency checking is very important for monitoring robotic movements in simulations, and not only in simulation: we can deploy the same approach on the real robot.
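As a sketch of that monitoring idea (using plain Python asyncio rather than Omniverse's actual APIs; the joint names and the read function are hypothetical stand-ins):

```python
import asyncio
import random

JOINTS = ["shoulder", "elbow", "wrist"]

async def read_joint_state(joint: str) -> float:
    """Hypothetical stand-in for querying a joint angle from the sim."""
    await asyncio.sleep(0.1)
    return random.uniform(-1.0, 1.0)

async def monitor_joint(joint: str, ticks: int):
    """Poll one joint concurrently with the others."""
    for _ in range(ticks):
        angle = await read_joint_state(joint)
        print(f"{joint}: {angle:+.2f} rad")

async def move_to_target(ticks: int):
    """Stand-in for the motion command running alongside monitoring."""
    for step in range(ticks):
        print(f"motion step {step}")
        await asyncio.sleep(0.1)

async def main():
    # Driving the motion and monitoring every joint run concurrently,
    # so a stuck joint can be noticed while the movement is in progress.
    await asyncio.gather(
        move_to_target(3),
        *(monitor_joint(j, 3) for j in JOINTS),
    )

asyncio.run(main())
```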