
Shivang Vijay

Vetted Talent

With over four years of experience in the field, I have honed my skills as a developer with expertise in C++, Python, and Agile development methodologies. Throughout my career, I have successfully implemented load balancing techniques to optimize performance and ensure seamless user experiences. My proficiency in these areas, combined with my strong problem-solving abilities, allows me to tackle complex challenges and deliver high-quality solutions. I am passionate about staying up-to-date with the latest industry trends and technologies, enabling me to continuously improve and adapt to the evolving needs of the development landscape.

  • Role

    Senior Robotics Software Engineer - L3

  • Years of Experience

    4 years

  • Professional Portfolio

    View here

Skillsets

  • ROS2
  • PD controller
  • CycleGAN
  • Natural navigation
  • ZED camera
  • Frontier detection
  • VDA5050
  • Multi-agent path planner
  • YOLO
  • TensorFlow
  • SLAM
  • AMQP
  • ROS1
  • Robotics
  • Mosquitto
  • Fleet management system
  • Docker
  • DDS
  • CI/CD
  • Auto-encoders
  • ArUco markers
  • Ant colony optimization

Vetted For

5 Skills
  • Robotics Simulation Developer (AI Screening)
  • 62%
  • Skills assessed: Large Language Models, Isaac Sim, NVIDIA Omniverse, Problem Solving Attitude, Python
  • Score: 56/90

Professional Summary

4 Years
  • Nov, 2023 - Present (1 yr 10 months)

    Senior Robotics Software Engineer - L3

    Unbox Robotics
  • Jul, 2022 - Nov, 2023 (1 yr 4 months)

    Robotics Software Engineer - L2

    Unbox Robotics
  • Aug, 2021 - Jul, 2022 (11 months)

    Software Engineer

    Addverb
  • Sep, 2020 - Feb, 2021 (5 months)

    Internship - Mobile Robotics Department

    Addverb

Applications & Tools Known

  • Python
  • Keras
  • C++

Work History

4 Years

Senior Robotics Software Engineer - L3

Unbox Robotics
Nov, 2023 - Present (1 yr 10 months)
    Created an advanced simulation ecosystem for robotics algorithm testing, migrated the stack from ROS1 to ROS2, developed a Multi-Agent Path Planner for swarm robots, optimized traversable-area usage with Ant Colony Optimization, incorporated junction-based re-planning and lane-relaxation rules, and Dockerized the stack for the CI/CD pipeline.

Robotics Software Engineer - L2

Unbox Robotics
Jul, 2022 - Nov, 2023 (1 yr 4 months)
    Played a pivotal role in developing the Fleet Management System (FMS) for Autonomous Mobile Robots (AMRs), designed and implemented a Multi-Agent Path Planning system, and built communication layers based on DDS, AMQP, and VDA5050 using the Mosquitto library.

Software Engineer

Addverb
Aug, 2021 - Jul, 2022 (11 months)
    Developed an algorithm to automate the manual process of creating an occupancy grid using SLAM techniques, ARUCO markers, and frontier detection.

Internship - Mobile Robotics Department

Addverb
Sep, 2020 - Feb, 2021 (5 months)
    Developed an algorithm to automate the manual process of creating an occupancy grid using SLAM techniques, ARUCO markers, and frontier detection. Received a Pre-placement offer (PPO) from the company.

Major Projects

8 Projects

AI Tools Aggregator Web Application

http://aihubs.co/
    Developed a web application that centralizes a comprehensive list of AI tools, implemented user login functionality, enabled users to curate a personal list of favorite AI tools, and provided additional user-centric functionalities.

ROBOMUSE 5.0

    Designed and developed an Autonomous Mobile Robot (AMR) capable of transporting payloads up to 100 kg between locations using natural navigation. Integrated a ZED Camera for human-robot interaction.

Contactless & Modular Design for Actuation of Elevator Buttons

    Designed a contactless, modular mechanism for actuating elevator buttons.

Sterilization of Escalator Handle using UV rays

    Designed a system to sterilize escalator handrails using UV rays.

Image Super-resolution using Auto-encoders

    Successfully implemented an image super-resolution project using Auto-encoders in the Keras framework, improving image quality and clarity.

Sentiment Analysis using TensorFlow

    Conducted basic sentiment analysis using TensorFlow, gaining insights into the field of natural language processing.

Cycle GAN for Map and Satellite View Conversion

    Developed and implemented a Cycle GAN to facilitate the conversion between map views and satellite views.

Inter IIT Tech Meet

PlutoX Hackathon
Jan, 2018 - Dec, 2018 (11 months)
    • Represented IIT Jammu at the 7th Inter IIT Tech Meet in 2018 during the PlutoX hackathon event hosted at IIT Bombay.
    • Implemented a Proportional-Derivative (PD) controller to effectively minimize errors in infrared (IR) sensor data (a generic PD sketch follows this list).
    • Successfully completed various challenging tasks during the hackathon, including playing table tennis (TT) with a drone and programming a drone to navigate in straight lines under both continuous and discontinuous wall scenarios using IR sensor technology.
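
A generic, minimal sketch of the PD-controller idea referenced above, assuming a toy error signal and illustrative gains (this is not the actual drone code):

    class PDController:
        """Generic PD controller: output = Kp * error + Kd * d(error)/dt."""

        def __init__(self, kp, kd, dt):
            self.kp, self.kd, self.dt = kp, kd, dt
            self.prev_error = None

        def update(self, error):
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.kd * derivative

    if __name__ == "__main__":
        # Toy example: drive a simulated IR-distance error towards zero.
        pd = PDController(kp=0.8, kd=0.05, dt=0.1)
        error = 1.0
        for step in range(6):
            correction = pd.update(error)
            error -= 0.5 * correction        # crude stand-in for the drone's response
            print(f"step {step}: error={error:.3f} correction={correction:.3f}")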

Education

  • B.Tech

    Indian Institute of Technology (IIT) Jammu (2021)

Certifications

  • Mastering System Design: From Low-Level to High-Level Solutions (detailed course syllabus)

AI-interview Questions & Answers

I'm Shivang Vijay. I graduated from IIT Jammu, and from the first year of college I was involved in robotics activities; in my second year I became head of the robotics club. I took part in various national and international competitions, representing IIT Jammu at Techfest IIT Bombay, the Inter IIT Tech Meet, and Exodia at IIT Mandi, and at the IIT Bombay event we took first position. In my second year I also got the opportunity to work with Prof. S. K. Saha on autonomous mobile robots (AMRs) for hospitals that can carry 100 kg from one point to another. My contribution was a human-robot interaction mode using a 3D depth camera, and that gave me my exposure to ROS and industrial robotics; I continued that work through college, and my B.Tech project was in the same area. In my third year I got the opportunity to intern with Addverb Technologies, one of the top robotics companies in India with an international benchmark, in the mobile robotics department, again working on AMRs. I created an algorithm to automate occupancy-grid creation, and I was also involved in simulating robotic arms and AMRs in Gazebo; I explored Omniverse a little at that time, but not very deeply. Because of my performance in that internship I received a pre-placement offer, and after graduation I joined the company full time. There I again worked on AMRs and created a fleet management system controlling more than 50 robots, which we deployed at an international site; I worked with Duality, Omniverse, and Gazebo as simulation platforms, and we ran more than 300 robots in simulation through our fleet management system. After about a year I moved to Unbox Robotics as a senior robotics engineer, where I work on AGVs (automatic guided vehicles) that navigate by scanning QR codes. I am part of the simulation team and the fleet management team for AGVs: I created the simulation architecture, developed the controller and various supporting libraries, and we currently run more than 300 robots in that simulation. I am also a core member of the fleet management system for AGVs, which has been deployed at international as well as Indian sites; at the Indian sites more than 40 robots run through our fleet management system. So that is my overall background in robotics.

On the strategies and tools I would recommend for validating simulation outcomes: first, in simulation there is very little noise and no friction unless we create it, so there will always be a difference between simulation and reality. I will talk about how we can minimize that gap, which parameters and tools help us do so, and how we can get a proper outcome from simulation. In the real world a lot of errors occur, maybe in the fleet management system, maybe on the robots, so how do we tackle them? For this I personally created a Grafana dashboard: whenever any error occurs on a robot or in any other part of the system, its error ID and timestamp are recorded, and we continuously record rosbag files with a rollover mechanism so the bag corresponding to those logs is preserved. The error ID is linked with the cloud and the logs are uploaded directly, which is very helpful for post-analysis; the Grafana dashboard is a great tool for that. Another tool is Traefik, an open-source reverse proxy that maps your IPs to DNS names, so you can access any system directly by name; that tool is really very helpful. There are also GUI tools for unpacking and inspecting rosbag files; since we mostly have custom messages, which need extra handling, such a tool is a great help. Coming to the technical part: in real life there is friction, and there is noise on the velocity curve, so the profile may not look exactly like what we want. In simulation there is no such noise: the controller we wrote should be the same in simulation and in the real world, but the simulation will not show the noise because there are no external disturbances, while the real world definitely will. So we need to mimic that as well: introduce friction and other forces that slow the robot down so it matches the real world. There may also be communication latency, because in simulation everything runs on one system, whereas in the real world the robots and servers communicate over a network, and communication is a very important part. All of these factors are very important.
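
As a small illustration of the point about mimicking real-world noise, friction, and latency in simulation, here is a minimal, hypothetical Python sketch (the class, function, and parameter names are illustrative assumptions, not taken from the candidate's actual stack) that perturbs ideal velocity commands before they reach a simulated robot:

    import random
    import collections

    class SimToRealNoiseModel:
        """Hypothetical sketch: inject noise, friction, and latency into
        ideal simulated velocity commands to better mimic a real robot."""

        def __init__(self, noise_std=0.02, friction_coeff=0.05, latency_steps=3):
            self.noise_std = noise_std            # std-dev of Gaussian noise (m/s)
            self.friction_coeff = friction_coeff  # fraction of velocity lost to friction
            self.latency_steps = latency_steps    # communication delay in control cycles
            self._delay_buffer = collections.deque(maxlen=latency_steps)

        def apply(self, ideal_velocity):
            # 1. Friction: the real robot never reaches the commanded velocity exactly.
            v = ideal_velocity * (1.0 - self.friction_coeff)
            # 2. Noise: real velocity profiles are never perfectly smooth.
            v += random.gauss(0.0, self.noise_std)
            # 3. Latency: commands arrive a few control cycles late over the network.
            self._delay_buffer.append(v)
            if len(self._delay_buffer) < self.latency_steps:
                return 0.0   # nothing has "arrived" at the robot yet
            return self._delay_buffer[0]

    if __name__ == "__main__":
        model = SimToRealNoiseModel()
        for step in range(10):
            commanded = 0.5  # m/s, ideal commanded velocity
            realistic = model.apply(commanded)
            print(f"step {step}: commanded={commanded:.2f} applied={realistic:.2f}")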

On the protocol I would implement for inter-process communication in a distributed robotics simulation: it depends on what the simulation needs. Suppose there is one master, a centralized system, with multiple robots communicating through that master; then the best protocol may be TCP/IP or MQTT, and there is no need for peer-to-peer communication. But when you are talking about drones, or swarm robotics in general, each drone has to talk to all the other drones; the system is decentralized, so you want peer-to-peer communication. For that, the best choice is Fast DDS, because it is very good at communicating across the network; for a decentralized system I think Fast DDS is the best option. For a centralized system I think AMQP, the Advanced Message Queuing Protocol, is the best; you can implement AMQP very easily through RabbitMQ, and for MQTT there are multiple free, open-source libraries, so these protocols are easy to implement. I prefer AMQP because its queuing is quite advanced: it handles all the messages automatically, so your important messages do not get lost easily. All of this can be done in Python: the RabbitMQ framework supports Python, even Kafka supports Python, so both AMQP and Fast DDS can be used from Python. ROS 2 also supports Python, and under the hood of ROS 2 there is Fast DDS, while under the hood of ROS 1 there is a TCP/IP-based mechanism with a centralized master; the disadvantage of that is a single point of failure. One more thing I want to mention: in Fast DDS we can use an initial peer list or a discovery server, which makes Fast DDS behave like a centralized setup where everyone communicates with a single discovery server, which then distributes the messages to every other client or server. So Fast DDS can also be used when the system is centralized.
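
To make the AMQP/RabbitMQ suggestion concrete, here is a minimal sketch using the pika Python client; the queue name and message contents are illustrative assumptions, and a real deployment would add reconnection handling and error recovery:

    import json
    import pika  # RabbitMQ client implementing AMQP 0-9-1

    # Connect to a RabbitMQ broker (assumed to be running on localhost).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Durable queue so robot commands survive a broker restart.
    channel.queue_declare(queue="robot_commands", durable=True)

    # --- Publisher side: the central fleet master sends a command to one robot ---
    command = {"robot_id": 1, "action": "goto", "target": [4.0, 2.5, 0.0]}
    channel.basic_publish(
        exchange="",
        routing_key="robot_commands",
        body=json.dumps(command),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )

    # --- Consumer side: a robot process handles commands from the queue ---
    def on_command(ch, method, properties, body):
        msg = json.loads(body)
        print(f"robot {msg['robot_id']} executing {msg['action']} -> {msg['target']}")
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="robot_commands", on_message_callback=on_command)
    # channel.start_consuming()  # blocking loop; left commented out in this sketch
    connection.close()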

For non-deterministic behavior in a simulation, we can address it both at run time and in post-analysis. For run time, you write some Python scripts so you can properly plot each feature you are implementing, or the messages moving from one process to another; you can draw the graph with a plotting library, or if you are using ROS there is rqt_graph, which plots everything very easily while the system is running. For post-analysis, you record all the data: if you are using ROS there are rosbag files, and if not, you can dump the logs in a JSON format; in post-analysis you play the bag file back, or read that JSON to plot graphs. I even developed a web tool for this: the recorded rosbag/debug file is extracted into JSON, and in that JSON we have stored the robot state at every timestamp; the tool reads those robot states and runs the robot again in the same manner. So when some non-deterministic behavior has happened, the logs have already been generated and dumped into files, and we can run those files again to see what actually happened, how the robot moved, and how things were done previously; we can also record a video while running the tool. We can run the tool again and again on the same files and determine exactly what is happening. Suppose the robots are moving oddly: maybe there is a controller issue or the controller is not very efficient. From observing the replay we can make an assumption, like maybe it is a control issue, put in more logs, do some tuning, and run it again in our simulation. By replaying the recorded robot states in simulation we can see whether the non-deterministic behavior has been solved or not, or at least narrow down where it comes from.
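
A minimal sketch of the record-then-replay idea described above, assuming a hypothetical timestamped robot-state log in JSON Lines format (the file name and field names are illustrative, not the candidate's actual schema):

    import json
    import time

    LOG_PATH = "robot_state_log.jsonl"  # hypothetical log file, one JSON object per line

    def record_state(robot_id, x, y, theta, path=LOG_PATH):
        """Append one timestamped robot state so post-analysis can replay it later."""
        entry = {"t": time.time(), "robot_id": robot_id, "x": x, "y": y, "theta": theta}
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def replay_states(path=LOG_PATH):
        """Read the log back in time order and yield states with the original timing."""
        with open(path) as f:
            states = [json.loads(line) for line in f]
        states.sort(key=lambda s: s["t"])
        prev_t = None
        for state in states:
            if prev_t is not None:
                # Sleep the original inter-sample gap so the replay matches real timing.
                time.sleep(max(0.0, state["t"] - prev_t))
            prev_t = state["t"]
            yield state

    if __name__ == "__main__":
        record_state(1, 0.0, 0.0, 0.0)
        record_state(1, 0.1, 0.0, 0.05)
        for state in replay_states():
            print("replaying", state)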

On deploying robotics simulation at scale: we have used Azure for deploying large-scale robotics simulation. Initially we ran more than 40 robots through that cloud and faced certain problems. Earlier, our system was ROS 1 based: we made the cloud server the master of a centralized system, all the robots communicated through the master, and the master sent commands to all the robots. We ran roscore on the master and the robots acted as nodes, so communication was easy, since ROS 1 gives you that master/node model with TCPROS (TCP/IP) working underneath. When we shifted from ROS 1 to ROS 2 we faced a lot of problems, because under the hood of ROS 2 there is Fast DDS, and with its default discovery, if we run 40 robots then each robot communicates with the other 39 and with everything else present on the network. That became a real problem for running the robots against the cloud, because messages are exchanged between the local systems and the cloud. So, first, we shifted the cloud to the nearest geographical location: our robots run on an Asian network, so we chose the Azure region nearest to India. Then we introduced an initial peer list, which works like a discovery server: there is a single point that every participant communicates with, and the master sends its commands through it. If the master sends a message for robot number 1, the discovery server ensures the message is delivered only to robot 1, not to the other 39 robots. We also reduced the payload, transferring only the important data from the robots to the cloud master, and we used multithreading and batching: instead of calling the communication layer continuously, we send commands only when necessary, on average about one message per robot every 10 seconds, and we put some intelligence on the robot itself so it can make certain decisions without contacting the master. Reducing the payload, batching, and asynchronous multithreading solved the communication issues between the cloud and the local servers. So I have worked with the Azure cloud service, and we also added security by installing Fast DDS (DDS Security) certificates on both the cloud and the robot side.
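
To illustrate the payload-reduction and batching idea, here is a hypothetical Python sketch that throttles commands so each robot receives at most one batched message per interval; the class and field names are assumptions for illustration, not the candidate's actual code:

    import time
    from collections import defaultdict

    class BatchedCommandSender:
        """Hypothetical sketch: queue per-robot commands locally and flush them
        to the cloud master at most once per `interval` seconds per robot."""

        def __init__(self, transport, interval=10.0):
            self.transport = transport           # object with a send(robot_id, payload) method
            self.interval = interval
            self.pending = defaultdict(list)     # robot_id -> queued commands
            self.last_sent = defaultdict(float)  # robot_id -> last flush timestamp

        def enqueue(self, robot_id, command):
            self.pending[robot_id].append(command)

        def flush_due(self, now=None):
            now = time.time() if now is None else now
            for robot_id, commands in list(self.pending.items()):
                if commands and now - self.last_sent[robot_id] >= self.interval:
                    # Send one small batched payload instead of many individual messages.
                    self.transport.send(robot_id, {"commands": commands})
                    self.pending[robot_id] = []
                    self.last_sent[robot_id] = now

    class PrintTransport:
        def send(self, robot_id, payload):
            print(f"-> robot {robot_id}: {payload}")

    if __name__ == "__main__":
        sender = BatchedCommandSender(PrintTransport(), interval=10.0)
        sender.enqueue(1, {"action": "goto", "target": [2, 3]})
        sender.enqueue(1, {"action": "lift", "height": 0.1})
        sender.flush_due(now=time.time() + 11)  # simulate that 11 s have passed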

On reinforcement learning: let me first talk a little about what reinforcement learning is. Suppose you want to move a robot in a particular direction: there is a point A and a point B, the robot starts at A and starts taking a path towards B. When it is far from B, the reward we give it is not very good; as it slowly comes towards B we start giving it good rewards, and if it moves away from B we give it bad rewards. The model is trained accordingly: because it wants good rewards, it starts moving towards B. We do not hand-specify the behavior; the layers we created in the reinforcement learning model produce it. We only do a little mathematics to define the reward and build the model, and with that model the robot starts moving to B. Once the first task is complete, we change point B; the model tunes its parameters again, and gradually it becomes so accurate that whichever A and B we give, the robot moves from A to B very accurately. Now, why is this important compared with traditional methods like A*, D*, or Dijkstra? Because reinforcement learning learns its parameters on its own; what happens inside the model is very complex, but we do not have to do much to implement it, apart from some slightly tricky hyperparameter tuning. With reinforcement learning we can use approaches such as mapless navigation, and even vision-based navigation has been achieved very accurately. I only talked about path learning, but in a simulation framework reinforcement learning has great advantages beyond path planning: it can be used on sensor data, for the controller, and even for perception activities such as detecting objects or running a robot accurately with a monocular camera. These things are hard to achieve but comparatively easy to implement through reinforcement learning. So reinforcement learning plays a very important role in robotics and in simulation frameworks like Isaac Sim, and it ultimately improves robot behavior.
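
As a small illustration of the distance-based reward shaping described above, here is a hypothetical reward function for an agent navigating from point A towards point B; the scaling constants and function name are arbitrary assumptions, not a specific RL library's API:

    import math

    def goto_reward(prev_pos, curr_pos, goal, goal_tolerance=0.1):
        """Reward the agent for getting closer to the goal, penalize moving away,
        and give a large bonus when the goal is reached. Purely illustrative."""
        prev_dist = math.dist(prev_pos, goal)
        curr_dist = math.dist(curr_pos, goal)

        if curr_dist < goal_tolerance:
            return 100.0                      # reached B: big positive reward
        progress = prev_dist - curr_dist      # > 0 means we moved towards B
        step_penalty = -0.01                  # small cost per step to encourage short paths
        return 10.0 * progress + step_penalty

    if __name__ == "__main__":
        goal = (5.0, 0.0)
        print(goto_reward((0.0, 0.0), (0.5, 0.0), goal))    # moved towards B -> positive
        print(goto_reward((0.5, 0.0), (0.2, 0.0), goal))    # moved away from B -> negative
        print(goto_reward((4.95, 0.0), (4.98, 0.0), goal))  # within tolerance -> 100.0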

In the context of simulating robotic arm movement using NVIDIA Omniverse, the pseudo-code snippet has a current position, a target position, and a movement speed, and it loops while the current position is not equal to the target position. First, direction = normalize(target_position - current_position), and ultimately you give the robot the new position you want it to move to. get_robot_arm_position should return 3D coordinates, so current_position should be a 3D coordinate, and get_target_position should likewise return 3D coordinates; normalize should then output a 3D vector, so if target_position and current_position are 3D, direction will be 3D too. The snippet adds current_position and direction multiplied by movement_speed directly in one expression; I do not think that can be done so casually, since direction is a 3D coordinate, so you should use a separate line for the calculation current_position + direction * movement_speed, get the new position, and then update the robot arm position with it. For the distance check, you are comparing the target you want to reach with the current position, subtracting them, and breaking out of the loop when the result comes under the tolerance. But distance is not defined anywhere in the snippet, so you need to define it: subtract the 3D pose of the current position from the 3D pose of the target position and take its norm, and if that comes under the tolerance you can say the arm has reached its final position.
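
Here is one way the corrected loop could look in Python with NumPy, following the fixes described above (the step computed on its own line and the distance defined explicitly). The get/set functions are stand-ins for whatever simulator API is actually used, not real Omniverse calls:

    import numpy as np

    # Stand-in state for the sketch; in a real simulator these would be API calls.
    _arm_position = np.array([0.0, 0.0, 0.0])

    def get_robot_arm_position():
        return _arm_position.copy()            # 3D coordinates (x, y, z)

    def set_robot_arm_position(pos):
        global _arm_position
        _arm_position = np.asarray(pos, dtype=float)

    def move_arm_to(target_position, movement_speed=0.05, tolerance=1e-3, max_steps=10_000):
        target = np.asarray(target_position, dtype=float)
        for _ in range(max_steps):
            current = get_robot_arm_position()
            # Distance is defined explicitly: norm of the 3D difference vector.
            distance = np.linalg.norm(target - current)
            if distance < tolerance:
                return True                    # reached the target within tolerance
            direction = (target - current) / distance         # normalized 3D direction
            step = direction * min(movement_speed, distance)  # don't overshoot the target
            new_position = current + step      # computed on its own line, then applied
            set_robot_arm_position(new_position)
        return False                           # did not converge within max_steps

    if __name__ == "__main__":
        print(move_arm_to([0.3, 0.2, 0.5]))    # True
        print(get_robot_arm_position())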

So you are using Isaac Sim for developing a realistic robotics simulation. There may be a logical error in two places. The first is the simulate step: while the simulation is running, the while loop keeps executing, and the simulate step should be driving the state towards the end of the simulation. If the simulate step is tending towards some other case and not towards the condition where the simulation should stop, the loop never ends. Suppose the end condition is that theta becomes less than 0, but the simulate step keeps increasing theta; then theta never becomes less than 0, the loop runs continuously, and it keeps occupying the CPU and RAM. There is also no sleep in this while loop: whatever loop we write, in Python, C++, or any other language, we should put in some kind of sleep, because a sleep that seems very small to us is very significant for the computer and reduces CPU utilization very effectively. That is slightly beside the question, but it is important. Coming back to the logical error: it can be in the simulate step or in the check-simulation-end-condition step; maybe the end condition you have written is different from the one you actually need, so the flow never reaches it. Looking at the loop again: while simulation_running, do the simulation step, check the end condition, and set simulation_running to false when it is met; only when it becomes false does the loop exit, so if that never happens, the simulation cleanup will never run either.
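
A minimal sketch of how the loop could be structured so the end condition is actually reachable, the CPU is not spun at 100%, and cleanup always runs; the step and cleanup logic here are illustrative placeholders, not Isaac Sim API calls:

    import time

    def run_simulation(theta=1.0, dt=0.01, max_steps=10_000):
        simulation_running = True
        step = 0
        try:
            while simulation_running:
                # simulate_step must drive the state TOWARDS the end condition;
                # here theta decreases every iteration instead of growing forever.
                theta -= 0.01

                # check_simulation_end_condition: consistent with how theta evolves.
                if theta <= 0.0 or step >= max_steps:
                    simulation_running = False

                step += 1
                time.sleep(dt)  # yield the CPU; even a tiny sleep matters for utilization
        finally:
            # Cleanup runs even if the loop exits early or an exception is raised.
            print(f"simulation cleanup after {step} steps, theta={theta:.3f}")

    if __name__ == "__main__":
        run_simulation()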

On using the principle of concurrency to manage robotic arm movement: concurrency provides parallel execution of multiple code paths. Suppose, for a robotic arm (for example in RoboDK), there are forward kinematics and inverse kinematics to compute, and there are multiple frames, origins, or parameters to calculate, such as x, y, theta or x, y, z, theta. You can do those calculations in different threads so the computation is faster, then merge the results in one place and use that fast calculation to move the robotic arm. With concurrency you need to take care of threading properly, because the developer has very little control over thread scheduling, so you need proper locking mechanisms. In Python there is a limitation because of the GIL (Global Interpreter Lock): the GIL just time-slices the threads and gives the feel of multithreading, but it is not actual parallelism. If you want proper multithreading or concurrency, you should use C++ or another language that is good at it. Recently, though, I have seen work where developers have removed the GIL in Python; there is an upgraded build of Python, not yet very reliable and driven by people outside the usual official releases, and with it we could use real Python multithreading as well. For robotic arm movement, I think we can do different calculations in different threads, for example inverse kinematics in one thread and forward kinematics in another, then join the threads, merge the calculations in one place, and finally send the resulting movement command to the arm.
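
A small sketch of computing forward and inverse kinematics concurrently and joining the results before commanding the arm, using Python's concurrent.futures. The kinematics functions are trivial 2-link placeholders, and, as noted above, CPU-bound work in CPython is still limited by the GIL, so C++ or process-based parallelism would be used for real speedups:

    import math
    from concurrent.futures import ThreadPoolExecutor

    def forward_kinematics(joint_angles):
        """Placeholder 2-link planar FK: returns the end-effector (x, y)."""
        l1, l2 = 1.0, 0.8
        t1, t2 = joint_angles
        x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
        return (x, y)

    def inverse_kinematics(target_xy):
        """Placeholder 2-link planar IK (elbow-down solution)."""
        l1, l2 = 1.0, 0.8
        x, y = target_xy
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        t2 = math.acos(max(-1.0, min(1.0, c2)))
        t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
        return (t1, t2)

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=2) as pool:
            # Run both calculations concurrently, then join their results.
            fk_future = pool.submit(forward_kinematics, (0.3, 0.5))
            ik_future = pool.submit(inverse_kinematics, (1.2, 0.6))
            current_pose = fk_future.result()   # implicit join
            target_joints = ik_future.result()
        print("current end-effector pose:", current_pose)
        print("joint command to reach target:", target_joints)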