Result-oriented Senior Technical Lead with 8+ years of experience developing and leading the implementation of innovative solutions. Experienced in project management activities including project scoping, estimation, planning, finalization of technical/functional specifications, and resource administration. Strong expertise in web development technologies such as Node.js, React.js, Express.js, TypeScript, Python, C#, and Angular. Knowledge of SQL and NoSQL databases and of CI/CD pipelines and tools. In-depth knowledge of Docker and AWS (Lambda, EC2, CodeDeploy) for managing and scaling applications. Proven experience in architectural decision-making and system design. Solid planning and organizational skills in coordinating all aspects of each project from inception through completion. Strong team builder and facilitator who fosters an atmosphere that encourages talented professionals to combine high-level skills with maximum productivity.
Technical Lead, Turing Pvt Ltd
Senior Engineer, Saagas AI Inc
Technical Lead, IQ-Line Pvt Ltd
Software Development Engineer II, Smiths Detection
Senior Backend Developer, BizzTM Technologies Pvt Ltd
Senior Backend Engineer, DarioHealth Pvt Ltd
Software Engineer, Blackmagic Design
Full Stack Developer, CusXP
NodeJS
AWS Lambda
AWS EC2
Apollo GraphQL
MongoDB
Azure DevOps
Docker Swarm
Qt
C++
C#
jQuery
Bootstrap
Virtuoso
Ansible
Azure Data Factory
Bubble
1. Managed the backend infrastructure for Genie Connections, a dating app popular in the UK
2. Integrated ChatGPT into the product.
3. Reduced the company's cloud costs by 20% by consolidating related modules into a single microservice.
4. Created cron jobs for marketing and promotion activities
5. Optimized database performance by replacing chains of individual queries with MongoDB aggregation pipelines (see the sketch below).
Technology used - NodeJS, Python, AWS, OpenAI SDK, MongoDB, Mongoose
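A minimal sketch of the kind of consolidation point 5 describes, assuming a hypothetical orders collection; the collection and field names are illustrative, not from the original project:

```javascript
// Before: one query per user to count orders, issued in a loop (N round trips).
// After: a single aggregation pipeline that groups and sums server-side.
const { MongoClient } = require('mongodb');

async function monthlyOrderTotals(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const orders = client.db('app').collection('orders');

  // Group orders by user and month, summing amounts in one round trip.
  const totals = await orders.aggregate([
    { $match: { status: 'completed' } },
    {
      $group: {
        _id: { userId: '$userId', month: { $month: '$createdAt' } },
        total: { $sum: '$amount' },
        count: { $sum: 1 },
      },
    },
    { $sort: { total: -1 } },
  ]).toArray();

  await client.close();
  return totals;
}
```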
1. Designed and developed an image viewer application with features like drawing shapes, labelling, transformations, support for touch devices, pressure sensitivity, and color coding
Technology used - JavaScript, Canvas SDK
Developed a web application to classify transactions based on transaction codes read from messages on the phone, and used the results to build monthly budgets with AI
Technology used - Python, Scikit-learn
Hi, everyone. I'm a software development professional. My educational background: I did my BTech in computer science engineering, joining in 2010 and graduating in 2014. After that, I went for my master's and completed an MTech in computer science engineering from the prestigious IIT Bombay, graduating in 2016. Since then, I have been working as a software developer for a few companies, and my total work experience is more than 7 years. I started my career as a .NET developer, with the front-end technology being Angular, and I have around 4 years of work experience as a .NET developer. For the past 3 years, I have been working mostly in the JavaScript domain, and I have full-stack experience in Node.js as well as React.js. Apart from this, I have good experience in both SQL and NoSQL databases, and I also have good experience with Docker.
To mitigate SQL injection risk in a Node.js application, we can use an ORM, or we can use parameterized queries rather than hard-coding user input into query strings. Other than that, for the inputs received from the front end, we should check them before running the query: we do sanity checks to test whether each input is valid or not. That is also one technique.
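A minimal sketch of the parameterized-query approach, assuming a PostgreSQL database accessed through the node-postgres (pg) driver; the table and column names are illustrative:

```javascript
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Unsafe: concatenating user input invites SQL injection.
//   pool.query(`SELECT * FROM users WHERE email = '${email}'`)

// Safe: the driver sends the query text and the values separately,
// so the input is always treated as data, never as SQL.
async function findUserByEmail(email) {
  const result = await pool.query(
    'SELECT id, email, name FROM users WHERE email = $1',
    [email]
  );
  return result.rows[0] ?? null;
}
```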
To improve the load time of a single-page application built on React with many components, we can do code splitting: split the bundle into smaller chunks that can be loaded on demand. We can do lazy loading using React Router. We can also optimize the images and other assets, and make use of CDNs, which will also help. We can minimize the bundle size by minifying and bundling the JavaScript and CSS. And we can do caching. All of these activities would help improve the load time.
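A minimal sketch of route-level code splitting with React.lazy and Suspense (React Router v6 assumed); the page components are illustrative:

```javascript
import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Each lazy() import becomes its own chunk, fetched only when
// the matching route is actually visited.
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));

export default function App() {
  return (
    <BrowserRouter>
      {/* The fallback renders while a chunk is still downloading. */}
      <Suspense fallback={<div>Loading…</div>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/dashboard" element={<Dashboard />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
```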
We would use a separate interface for MongoDB and a separate one for SQL, and bring them together behind a common API. What I imagine is an API that needs access to data in both of these databases: it would connect to each of them using their own respective ORMs, and it would fetch or push data through those separate interfaces. Then, based on the data received from both sources, we can take action on that data: we can modify it or apply our business logic to it.
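A minimal sketch of that shape, assuming a hypothetical endpoint that combines a profile from MongoDB with an order total from PostgreSQL; every name here is illustrative:

```javascript
const { MongoClient } = require('mongodb');
const { Pool } = require('pg');

// Each database gets its own thin interface...
const mongo = new MongoClient(process.env.MONGO_URL);
const pg = new Pool({ connectionString: process.env.PG_URL });

async function getProfile(userId) {
  return mongo.db('app').collection('profiles').findOne({ userId });
}

async function getOrderTotal(userId) {
  const { rows } = await pg.query(
    'SELECT COALESCE(SUM(amount), 0) AS total FROM orders WHERE user_id = $1',
    [userId]
  );
  return Number(rows[0].total);
}

// ...and a common API combines them and applies the business logic.
async function getUserSummary(userId) {
  await mongo.connect(); // no-op if already connected (driver v4+)
  const [profile, orderTotal] = await Promise.all([
    getProfile(userId),
    getOrderTotal(userId),
  ]);
  return { ...profile, orderTotal };
}
```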
For a content management system, to design a MongoDB schema that handles multiple content types and their associations, we can store each item as a document and query on that document. We can name the collection contents, and the documents would carry different content types: one content type can be an article or other text-based content, another can be image-based content, another can be video or document-based content. Each document would also store an associations array recording which related content it is linked to, with each association having a type and a content ID. We can also track who the author is: there can be an authors collection, and each content document would be tagged to one author. We can also tag articles or videos with tags, so we can have another entity, a tags collection. This should be a good schema design in MongoDB to handle multiple content types and their associations.
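A minimal Mongoose sketch of that schema; the field names are assumptions for illustration:

```javascript
const mongoose = require('mongoose');

const contentSchema = new mongoose.Schema({
  contentType: { type: String, enum: ['article', 'image', 'video'], required: true },
  title: String,
  body: String,     // used by text-based types
  mediaUrl: String, // used by image/video types
  author: { type: mongoose.Schema.Types.ObjectId, ref: 'Author' },
  tags: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Tag' }],
  // Associations to related content: each entry records the
  // relationship type and the related document's ID.
  associations: [{
    type: { type: String }, // e.g. 'related', 'translation'
    contentId: { type: mongoose.Schema.Types.ObjectId, ref: 'Content' },
  }],
}, { timestamps: true });

const authorSchema = new mongoose.Schema({ name: String });
const tagSchema = new mongoose.Schema({ label: { type: String, unique: true } });

module.exports = {
  Content: mongoose.model('Content', contentSchema),
  Author: mongoose.model('Author', authorSchema),
  Tag: mongoose.model('Tag', tagSchema),
};
```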
Here, we can make use of something like Elasticsearch, or we can have an in-memory cache such as Redis. We can use it to store the most recently or most frequently accessed data. For repetitive queries, we make sure the result is stored in the cache and served from the cache rather than by making a DB query. That is how we can implement the caching mechanism.
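A minimal cache-aside sketch; a plain in-process Map with a TTL stands in for Redis here, and fetchFromDb is a hypothetical placeholder for the real DB query:

```javascript
const cache = new Map(); // key -> { value, expiresAt }
const TTL_MS = 60 * 1000;

async function getCached(key, fetchFromDb) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // served from cache, no DB round trip
  }
  const value = await fetchFromDb(key); // cache miss: hit the DB once
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Usage: repeated calls within the TTL never touch the database.
// const user = await getCached(`user:${id}`, () => db.users.findOne({ id }));
```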
In this particular code snippet, we're not checking whether the query actually returned a user object or not, so we have to add a null check there. Only if the user exists should we return res.status(200).json(user); if the user object doesn't exist, the code as written would result in a 500 error.
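The snippet under review isn't reproduced here, but this is a sketch of the fix, assuming an Express + Mongoose handler along these lines:

```javascript
const express = require('express');
const mongoose = require('mongoose');

const app = express();
const User = mongoose.model('User', new mongoose.Schema({ name: String, email: String }));

app.get('/users/:id', async (req, res) => {
  try {
    const user = await User.findById(req.params.id);
    if (!user) {
      // The missing null check: without it, code that assumes a user
      // was found blows up downstream and surfaces as a 500.
      return res.status(404).json({ error: 'User not found' });
    }
    return res.status(200).json(user);
  } catch (err) {
    return res.status(500).json({ error: 'Internal server error' });
  }
});
```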
The situation here is that we're trying to find the user and update their email. We need to add a check around this findByIdAndUpdate call, because it might be that the user does not exist at all. The condition checking the user should gate the update: only if the user exists should we update. That should be the issue.
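Again the snippet isn't shown, but one idiomatic way to express this guard in Mongoose is to check what findByIdAndUpdate returns, since it resolves to null when no document matches:

```javascript
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());
const User = mongoose.model('User', new mongoose.Schema({ email: String }));

app.patch('/users/:id/email', async (req, res) => {
  try {
    const user = await User.findByIdAndUpdate(
      req.params.id,
      { email: req.body.email },
      { new: true, runValidators: true } // return the updated doc, run schema validation
    );
    if (!user) {
      // findByIdAndUpdate resolved to null: no document matched the ID.
      return res.status(404).json({ error: 'User not found' });
    }
    return res.status(200).json(user);
  } catch (err) {
    return res.status(500).json({ error: 'Internal server error' });
  }
});
```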
In one of our previous roles, we had the entire application on a Node.js backend, and the DB was MongoDB. We ran into a situation where the number of users was scaling up and our MongoDB database was getting slow. So we used the built-in feature of MongoDB to increase the cluster size. Earlier we had only one node, and we scaled up to three nodes: one primary and two secondaries. It would auto-scale on that basis: where earlier there was only one node, as the number of requests to the database increased, it would scale out to multiple nodes. This was a feature of MongoDB itself, and that's how we managed to resolve it.
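For illustration only, this is roughly what connecting to such a topology looks like from Node.js, assuming a three-member replica set; the host names are made up:

```javascript
const { MongoClient } = require('mongodb');

// One primary plus two secondaries; the driver discovers the topology
// and fails over automatically if the primary goes down.
const uri =
  'mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017' +
  '/app?replicaSet=rs0';

const client = new MongoClient(uri, {
  // Let reads be served by secondaries to spread the query load.
  readPreference: 'secondaryPreferred',
});

async function main() {
  await client.connect();
  const users = client.db('app').collection('users');
  console.log(await users.countDocuments());
  await client.close();
}

main().catch(console.error);
```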
To ensure code quality, we run unit tests and measure the code coverage. We also make use of various hooks provided by GitLab to ensure that the code is of good quality and that all the tests pass; only then is the feature merged into the main branch. This kind of code-quality assurance is provided by repository providers like GitHub or GitLab. We use GitLab, and that is where we have set this up.
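A minimal sketch of one way to wire such a gate, assuming Jest: with the coverage thresholds below, npm test fails when coverage drops, so a GitLab pipeline that runs it will block the merge request:

```javascript
// jest.config.js: the test run fails if coverage falls below these floors,
// so a CI job that runs `npm test` will block the merge when they are missed.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};
```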
For managing data consistency, we use a message queue. Any change is pushed onto the queue, and the different services each consume from these queues, pulling messages one at a time under the queue architecture, so every service ends up applying the same changes in the same order. That is how we manage data consistency.
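A minimal sketch of that pattern using RabbitMQ through the amqplib package; the queue name and event shape are illustrative:

```javascript
const amqp = require('amqplib');

const QUEUE = 'user-events';

// Producer: publish each change as a durable message.
async function publishChange(event) {
  const conn = await amqp.connect(process.env.AMQP_URL);
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify(event)), { persistent: true });
  await ch.close();
  await conn.close();
}

// Consumer: pull one message at a time and ack only after the change
// has been applied, so messages are neither lost nor processed out of order.
async function consumeChanges(applyChange) {
  const conn = await amqp.connect(process.env.AMQP_URL);
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.prefetch(1); // at most one unacknowledged message per consumer
  await ch.consume(QUEUE, async (msg) => {
    await applyChange(JSON.parse(msg.content.toString()));
    ch.ack(msg);
  });
}
```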