Dhairya Verma

Vetted Talent

Experienced Software Engineer with a demonstrated history of working in the software industry. Skilled in Java, AWS, and distributed systems. Strong engineering professional with a Bachelor's degree in Computer Science.

  • Role

    Golang Developer

  • Years of Experience

    6 years

  • Professional Portfolio

    View here

Skillsets

  • Python
  • TypeScript
  • Terraform
  • System Design
  • Svelte
  • SQS
  • MongoDB
  • FFmpeg
  • Golang
  • Microservices
  • AWS
  • Java
  • Amazon S3
  • Redis
  • DynamoDB
  • SNS
  • Spring
  • Kafka

Vetted For

9 Skills
  • Roles & Skills
  • Results
  • Details
  • Media Engineer (Python and Golang) - Remote
    AI Screening
  • 60%
  • Skills assessed: Akamai, Fastly, Media Streaming, Terraform, AWS, Docker, Golang, Kubernetes, Python
  • Score: 54/90

Professional Summary

6 Years
  • Aug, 2024 - Present (1 yr 9 months)

    Senior Software Engineer

    Jacks Club
  • May, 2024 - Jul, 2024 (2 months)

    Backend Engineer

    Freelance Developer
  • Mar, 2021 - May, 2024 (3 yr 2 months)

    Founding Backend Engineer

    Fanclash
  • Aug, 2019 - Mar, 2021 (1 yr 7 months)

    Software Development Engineer

    Amazon

Applications & Tools Known

  • TypeScript
  • Golang
  • Kafka
  • Redis
  • Lambda
  • MongoDB
  • Node.js
  • CockroachDB
  • AWS
  • Java
  • AWS Glue
  • S3
  • Spring
  • JSP

Work History

6 Years

Senior Software Engineer

Jacks Club
Aug, 2024 - Present (1 yr 9 months)
    Designed scalable backend solutions for processing events, integrated third-party casino APIs, and developed a Telegram chat moderation bot.

Backend Engineer

Freelance Developer
May, 2024 - Jul, 2024 (2 months)
    Built a 24/7 live streaming platform and implemented real-time game state management.

Founding Backend Engineer

Fanclash
Mar, 2021 - May, 2024 (3 yr 2 months)
    Key founding engineer contributing to user base growth, built an esports RAG chatbot, reduced CPU usage, and developed a highlight generation SaaS and real-time esports fantasy platform.

Software Development Engineer

Amazon
Aug, 2019 - Mar, 2021 (1 yr 7 months)
    Integrated a Java-based rule engine and implemented shadow mode workflow for API migration.

Achievements

  • Implemented a non-transactional approach and optimized performance with Redis integration
  • Architected a multi-region betting backend using CockroachDB, Kafka, and AWS, emphasizing GDPR compliance and scalability
  • Scaled data pipelines for daily jobs that generated customer reports
  • Implemented a shadow-mode workflow for the migration of the refund API, automating daily mismatch reports using AWS Kinesis Stream, AWS Glue, and S3
  • Key founding engineer at FanClash with a user base of 3 Million
  • Reduced CPU usage by 10-20% with serverless event-driven architecture
  • Resolved concurrent slot booking conflicts by 99% using Redis
  • Developed GPT-based chatbot for esports knowledge
  • Engineered real-time esports fantasy platform
  • Led development and design of app features including payments and taxation
  • Built social graphs with ArangoDB for user engagement
  • Revamped application for B2B capabilities

Major Projects

2 Projects

Real-time esports highlight generation SaaS

    Built a SaaS to automate esports highlight generation using OBS Studio, RTMP servers, OpenCV, FFmpeg, and GPU acceleration.

RAG Chatbot for esports knowledge

    Developed a chatbot using GPT, Pinecone, Langchain, and AWS Lambda for message processing and summarizing.

Education

  • Bachelor of Technology in Computer Science and Engineering

    IIT Mandi (2019)

Interests

  • Skateboarding
  • Drama Club
AI-Interview Questions & Answers

    I am a back-end developer. I started working in 2019 with Amazon, where I worked for one and a half years, mostly with Java and AWS services. After Amazon, I moved to a very early-stage fantasy gaming startup in Bangladesh. It was seed-funded at the time, and I joined as a founding back-end engineer. We scaled that fantasy gaming app to 3 million users, or 100,000 concurrent users, and I contributed to raising Series A and B afterwards. But after a few years, government regulations came in around the taxation of fantasy gaming, so we discontinued that app and started focusing more on AI applications. Recently, we built automatic highlight generation using OpenCV with Python and FFmpeg with hardware acceleration. In esports such as CS:GO, streams are broadcast by the organizer, and it takes them a lot of time to cut clips and upload them to their social media handles. So we provide a live highlight platform: highlights are generated automatically in a matter of minutes, and organizers were able to reduce their video posting time from an hour to 10 minutes. Currently, I am working with an Amsterdam-based startup, a crypto casino, building a live video stream of a certain game that users will bet on: the stream plays, and users bet on the players fighting on that stream. I am using Golang for it and FFmpeg for headless streaming. In my previous company, Fanclash, I was using TypeScript, Golang, Python, AWS, Kafka, AWS Lambda, and many other services.

    Let's take an example function. This is essentially the single responsibility principle: I would break that function into multiple functions, each performing only one responsibility. If there is a function that posts a comment, it should only post the comment and do nothing else. For example, there might be a flow where I am posting a comment and also sending an event to Kafka indicating that a comment was posted on a certain post. That should be two functions: one posts the comment, another sends the event to Kafka. So that's the approach: break the function into multiple functions, each with a single responsibility.
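
    The split described above can be sketched in Python. All the names here (`publish`, `post_comment`, `emit_comment_event`) are hypothetical, and the Kafka producer is stubbed with a list so the sketch runs standalone:

```python
# Hypothetical sketch of the single-responsibility split: one function only
# posts the comment, another only emits the Kafka event. `publish` stands in
# for a real Kafka producer (e.g. confluent_kafka's Producer.produce).

events = []

def publish(topic: str, payload: dict) -> None:
    """Stand-in for a Kafka producer; records the event it would send."""
    events.append((topic, payload))

def post_comment(post_id: str, text: str) -> dict:
    """Single responsibility: create the comment record, nothing else."""
    return {"post_id": post_id, "text": text}

def emit_comment_event(comment: dict) -> None:
    """Single responsibility: notify other services a comment was posted."""
    publish("comment-posted", comment)

# The caller composes the two responsibilities explicitly.
comment = post_comment("post-42", "nice play!")
emit_comment_event(comment)
```

    The point is that posting and notifying can now change, fail, and be tested independently.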

    For processing live video streams in Python, the most useful data structure is probably a hash map, or a buffer that stores the video continuously until it fills up. A specialized algorithm might be more effective, but I don't have much knowledge of those. For live streams, I have mostly worked with OpenCV, where I read the video frame by frame and we had certain logic defined on each frame.

    First, I would definitely use buffers for reading live video streams. A buffer could be a few megabytes; it always reads and pushes data out, which definitely improves memory management. Another approach: if somebody is subscribed to our video service, I could manage connections directly from our service, but I would consider moving delivery to a CDN and having people read the video from the CDN, which takes a lot of load off our servers.
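
    The bounded-buffer idea can be sketched with a `collections.deque`; the chunk size and capacity below are illustrative, not values from a real deployment:

```python
# Minimal sketch of a fixed-size buffer: keep only the last N chunks of a
# live stream in memory so usage stays bounded however long the stream runs.
from collections import deque

class StreamBuffer:
    def __init__(self, max_chunks: int):
        # deque with maxlen evicts the oldest chunk automatically on append
        self.chunks = deque(maxlen=max_chunks)

    def push(self, chunk: bytes) -> None:
        self.chunks.append(chunk)

    def size_bytes(self) -> int:
        return sum(len(c) for c in self.chunks)

buf = StreamBuffer(max_chunks=3)
for _ in range(10):          # simulate 10 incoming 1 KB chunks
    buf.push(bytes(1024))
# Only the 3 most recent chunks are retained; memory stays bounded.
```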

    CloudWatch basically shows you metrics as graphs. One thing I would use CloudWatch for is logs: I would log every error, debug, or info message relevant to our performance. Another useful thing is metric graphs, such as processing time: each time we receive live video, how long does the processing logic take? That's definitely one of the metrics. Another is the error rate: the number of errors seen over the past one-, two-, or five-minute window. And latency over any crucial part of our code, for example when we send the processed video stream downstream or upload the video to S3. So I would track all of these processing-time, latency, and error metrics.
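
    The sliding-window error rate mentioned above can be sketched in plain Python. A real setup would publish the value to CloudWatch (e.g. with boto3's `put_metric_data`); this shows only the windowing logic, with made-up timestamps:

```python
# Error fraction over a trailing time window, in seconds.
from collections import deque

class ErrorRate:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, is_error) pairs, oldest first

    def record(self, now: float, is_error: bool) -> None:
        self.samples.append((now, is_error))

    def rate(self, now: float) -> float:
        # Evict samples older than the window, then compute the error fraction.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()
        if not self.samples:
            return 0.0
        errors = sum(1 for _, is_err in self.samples if is_err)
        return errors / len(self.samples)

m = ErrorRate(window_seconds=60)
m.record(0, False)
m.record(10, True)
m.record(20, False)
m.record(100, True)
current = m.rate(110)  # only the sample at t=100 is inside the 60 s window
```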

    I would opt for a serverless architecture on AWS if the stream is not running 24 hours a day; if it's, say, once a day for one or two hours, then I would definitely opt for serverless. In my previous/current project, where I am streaming live games to all users, we are using AWS IVS, which is kind of serverless: it gives you an RTMP endpoint, you send your video to the RTMP URL, and AWS gives you back a playback URL, an m3u8 playlist, which is very easy to play without much effort. So if you want to deploy fast, serverless is really good, as long as the stream is not running 24 hours, because I think a 24-hour stream gets expensive; then we should start looking for other solutions.

    Availability here is mostly related to the memory or CPU assigned to the container. CPU is definitely the key factor: it might be that the container doesn't have all the memory it needs, or the CPU isn't meeting the requirement. And there are only two replicas, so both containers might exit before a new one is spawned.

    For a stateful application, I would use something like a blue-green deployment: save the state of the stateful application and replicate it somewhere else, then spawn a new instance. Only when the new instance is fully up, and we are sure it is completely equivalent to the one we are upgrading, do we destroy the previous one and route new traffic to the new instance.
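
    The blue-green switch described above can be sketched as a toy sequence; the instance names, state shape, and health check are all illustrative:

```python
# Toy sketch of a blue-green cutover for a stateful service: replicate state
# into the new ("green") instance, verify it, and only then retire the old
# ("blue") one. Everything here is a placeholder for real orchestration.

def blue_green_switch(blue: dict, spawn_green, healthy) -> dict:
    green = spawn_green(blue["state"])   # replicate state to the new instance
    if not healthy(green):               # never cut over to an unverified instance
        raise RuntimeError("green failed verification; keeping blue live")
    green["live"] = True                 # route new traffic to green
    blue["live"] = False                 # retire blue only after the switch
    return green

blue = {"name": "v1", "state": {"sessions": 12}, "live": True}
green = blue_green_switch(
    blue,
    spawn_green=lambda state: {"name": "v2", "state": dict(state), "live": False},
    healthy=lambda inst: inst["state"] == blue["state"],
)
```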

    For a load-balancing strategy, round robin is fine, but if some instances are heavily loaded it falls short. So I would use round robin, or better, something load-based such as least CPU, where requests always go to the container with the least CPU utilization.
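
    The least-loaded strategy amounts to a one-line selection; container names and CPU figures below are made up for illustration:

```python
# Pick the container with the lowest reported CPU utilization instead of
# cycling round-robin. `containers` maps name -> utilization in [0.0, 1.0].

def pick_least_loaded(containers: dict) -> str:
    return min(containers, key=containers.get)

containers = {"app-1": 0.82, "app-2": 0.35, "app-3": 0.57}
target = pick_least_loaded(containers)
```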

    CI/CD, definitely, yes. I have used a Jenkins pipeline where we would deploy, and based on the strategy we defined, it would stop the current containers and keep spawning new ones while ensuring high availability. For example, if there are 4 replicas, it exits 2 of them, waits until the new ones are online, and only then exits the other two. So for CI/CD, a Jenkins pipeline or AWS CodePipeline, I am fine with either.

    I think it's only a benefit that it is a service already available in AWS. If we tried to implement it on our own, it would take time, so that would be reinventing the wheel. If we have to deliver our project fast, we should use it; in the future, we can always replace it with something of our own and test that.