Experienced Software Engineer with a demonstrated history of working in the software industry. Skilled in Java, AWS, and distributed systems. Strong engineering professional with a Bachelor's degree in Computer Science.
Senior Software Engineer
Jacks Club

Backend Engineer
Freelance Developer

Founding Backend Engineer
Fanclash

Software Development Engineer
Amazon
TypeScript
Golang
Kafka
Redis
Lambda
MongoDB
Node.js
CockroachDB
AWS
Java
AWS Glue
S3
Spring
JSP
I am a back-end developer. I started working in 2019 at Amazon and stayed for about a year and a half, mostly working with Java and AWS services. After Amazon, I moved to a very early-stage fantasy gaming startup in Bangladesh. It was seed-funded at the time, and I joined as a founding back-end engineer. We scaled the fantasy gaming app to 3 million users and around 100,000 concurrent users, and I contributed to raising the Series A and B afterwards. After a few years, government regulations around the taxation of fantasy gaming came in, so we discontinued the app and shifted our focus to AI applications.

Recently we built automatic highlight generation using OpenCV with Python and FFmpeg with hardware acceleration. Esports organizers stream games like CS:GO, and cutting clips and uploading them to their social media handles takes them a lot of time, so we provide a live highlight platform: highlights are generated automatically in a matter of minutes, and organizers were able to reduce their video posting time from an hour to about 10 minutes.

Currently I am working with an Amsterdam-based startup, a crypto casino. I am building a live video stream of a certain game for them, which users will bet on: the stream plays, and users bet on the people fighting on it. I am building that stream with Golang and FFmpeg for headless streaming. In my previous company, Fanclash, I was using TypeScript, Golang, Python, AWS, Kafka, AWS Lambda, and many other services.
Take an example function; this is essentially the single responsibility principle. I would break that function into multiple functions, with each function performing only one responsibility. If there is a function that posts a comment, it should only post the comment and do nothing else. There might be a flow where I post a comment and also send an event to Kafka indicating that a comment was posted on a certain post. That should be two functions: one posts the comment, the other sends the event to Kafka. That is the kind of thing I would do: break the function into multiple functions, each with a single responsibility.
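A minimal sketch of that split, assuming kafka-python for the producer; the `save_comment` call, the `store` object, and the topic name are illustrative placeholders, not an actual codebase.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def post_comment(store, post_id: str, user_id: str, text: str) -> dict:
    """Single responsibility: persist the comment, nothing else."""
    comment = {"post_id": post_id, "user_id": user_id, "text": text}
    store.save_comment(comment)          # hypothetical storage call
    return comment

def publish_comment_event(comment: dict) -> None:
    """Single responsibility: emit the 'comment posted' event to Kafka."""
    producer.send("comment-posted", comment)

def handle_post_comment(store, post_id: str, user_id: str, text: str) -> None:
    """Thin orchestrator that only wires the two functions together."""
    comment = post_comment(store, post_id, user_id, text)
    publish_comment_event(comment)
```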
For processing live video streams in Python, the most useful data structure is probably a hash map, or a buffer that we keep filling with video and draining as it fills up. It may also be that the algorithm matters more than the data structure; I don't have a strong answer on the algorithm side for live streams. I have mostly worked with OpenCV, where I read the video frame by frame and apply some logic defined on each frame.
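A rough sketch of that frame-by-frame pattern, assuming OpenCV (cv2) and a fixed-size deque as the rolling buffer; `process_frame` is a placeholder for whatever per-frame logic applies (for example, the detection used for highlights).

```python
from collections import deque
import cv2

def process_frame(frame) -> None:
    pass  # placeholder: per-frame logic goes here

def read_stream(url: str, buffer_frames: int = 300) -> None:
    buffer = deque(maxlen=buffer_frames)   # oldest frames fall out automatically
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        buffer.append(frame)               # rolling window, e.g. for cutting a clip
        process_frame(frame)
    cap.release()
```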
The first thing I would do is use buffers for reading live video streams. The buffer could be a certain number of megabytes; it keeps reading data in and pushing it out, which definitely improves memory management. Another approach: if people subscribe to our video service and we are serving those connections directly, I would consider moving playback to a CDN and having viewers read the video from the CDN, which takes a lot of load off the origin servers.
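A sketch of the bounded-buffer idea: read the stream in fixed-size chunks and push each chunk downstream immediately, so memory stays at roughly one chunk no matter how long the stream runs. `source` and `sink` are illustrative file-like objects (socket, pipe, an upload to the CDN origin, and so on).

```python
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB per read

def relay_stream(source, sink) -> None:
    while True:
        chunk = source.read(CHUNK_SIZE)
        if not chunk:                # an empty read means the stream ended
            break
        sink.write(chunk)            # hand the chunk to the next hop right away
```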
CloudWatch basically shows you metrics as graphs. One thing I would use CloudWatch for is logs: I would log every error, debug, and info message that is relevant to performance. The other useful thing is the graph metrics. One would be processing time: each time we receive live video, how long did we take to process it and move it on. Another would be the error rate: the number of errors seen in the past one, two, or five minutes, as a metric over that window. And latency over any crucial part of the code, for example when we send the processed video stream on to someone else or upload it to S3. So I would watch all of these: processing time, latency, and error metrics.
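A hedged sketch of pushing those custom metrics (processing time, error count, upload latency) to CloudWatch with boto3; the namespace, dimension values, and metric names are made up for illustration.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def emit_metric(name: str, value: float, unit: str = "Milliseconds") -> None:
    cloudwatch.put_metric_data(
        Namespace="LiveVideo/Processing",   # assumed namespace
        MetricData=[{
            "MetricName": name,
            "Value": value,
            "Unit": unit,
            "Dimensions": [{"Name": "Service", "Value": "stream-worker"}],
        }],
    )

# Examples of the metrics discussed above:
# emit_metric("ProcessingTimeMs", 842.0)
# emit_metric("Errors", 1, unit="Count")
# emit_metric("S3UploadLatencyMs", 1310.0)
```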
I would opt for a serverless architecture on AWS if the stream is not running 24 hours a day. If it runs, say, once a day for an hour or two, I would definitely go serverless. In my current project, where I am streaming live games to users, we are using AWS IVS, which is essentially serverless: it gives you an RTMP ingest endpoint you send your video to, and AWS gives you back a playback URL, an m3u8 playlist, which is very easy to play without much effort. So if you want to deploy fast, serverless is really good, as long as the stream is not running 24 hours, because a 24-hour stream gets expensive; at that point we should start looking at other solutions.
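A rough sketch of the headless-streaming side: pushing a source to an RTMP ingest endpoint (for example, the one IVS hands back) with FFmpeg through subprocess. The endpoint, stream key, and input URL are placeholders.

```python
import subprocess

def push_to_rtmp(input_url: str, ingest_endpoint: str, stream_key: str) -> None:
    rtmp_url = f"{ingest_endpoint}/{stream_key}"
    subprocess.run([
        "ffmpeg",
        "-re",                 # read the input at its native rate
        "-i", input_url,       # source: file, device, or another stream
        "-c:v", "libx264",     # H.264 video
        "-preset", "veryfast",
        "-c:a", "aac",         # AAC audio
        "-f", "flv",           # RTMP carries an FLV container
        rtmp_url,
    ], check=True)
```

Playback then happens from the m3u8 playback URL the service returns, so our own servers never fan video out to viewers.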
Availability here mostly comes down to the memory and CPU assigned to the container; CPU is the key factor. It might be that the container does not have all the memory it needs, or the CPU is not meeting the requirement. And with only two replicas, both containers might exit before a new one is spawned.
It is essentially a blue/green deployment where you save the state of the stateful application and replicate it somewhere else. You spawn a new instance of the stateful application, and only when it is fully ready and we are sure it is completely equivalent to the one we are upgrading do we switch new traffic to it and destroy the previous one; the old instance is torn down only at the very end.
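A minimal sketch of that ordering; the callables (spawn, replicate_state, is_healthy, switch_traffic, destroy) are hypothetical stand-ins for real orchestration and state-replication steps, so only the sequence matters here.

```python
def blue_green_upgrade(blue, new_version, *, spawn, replicate_state,
                       is_healthy, switch_traffic, destroy):
    green = spawn(new_version)          # the old instance keeps serving throughout
    replicate_state(blue, green)        # copy / replay state until green has caught up
    if not is_healthy(green):
        destroy(green)                  # roll back without ever touching blue
        return blue
    switch_traffic(green)               # cut traffic over only after verification
    destroy(blue)                       # tear the old instance down last
    return green
```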
For the strategy: round robin is fine, but it falls short if some instances are more loaded than others. So round robin, or better, a least-utilization policy based on CPU or the number of in-flight requests, where each request goes to the container with the lowest CPU utilization.
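A tiny sketch of "send the request to the least-loaded container" as opposed to plain round robin; the utilization numbers are illustrative and would come from real metrics in practice.

```python
def pick_backend(cpu_by_instance: dict) -> str:
    """Return the instance name with the lowest current CPU utilization."""
    return min(cpu_by_instance, key=cpu_by_instance.get)

print(pick_backend({"api-1": 0.82, "api-2": 0.35, "api-3": 0.61}))  # -> api-2
```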
I would definitely want CI/CD. I have used a Jenkins pipeline where we just deploy and, based on the strategy we have defined, it stops the current containers and keeps spawning new ones while ensuring high availability: if there are 4 replicas, it exits 2 of them, brings the new ones online, and only once those are up does it exit the other two. So CI/CD with a Jenkins pipeline or AWS CodePipeline; either is fine.
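A toy, self-contained walk-through of that rolling strategy: four replicas replaced two at a time, so at least half the fleet is always serving. A real pipeline (Jenkins, CodePipeline) would drive an orchestrator rather than this in-memory list; the names are illustrative.

```python
def rolling_update(replicas, new_version, batch_size=2):
    updated = list(replicas)
    for i in range(0, len(updated), batch_size):
        old_batch = updated[i:i + batch_size]
        print(f"stopping {old_batch}")                # take down at most one batch
        updated[i:i + batch_size] = [f"{new_version}-{i + j}"
                                     for j in range(len(old_batch))]
        print(f"started {updated[i:i + batch_size]}, waiting for health checks")
    return updated

print(rolling_update(["v1-0", "v1-1", "v1-2", "v1-3"], "v2"))
```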
I think it is only a benefit that it is a service AWS already provides. If we tried to implement it on our own, it would take time; it is kind of reinventing the wheel. So if we need to deliver the project fast, we use the managed service, and in the future we can always swap in something of our own and test it.