Vetted Talent

Chandan Singh


Experienced Technical Lead with 9 years in software development and cloud architecture, specializing in AWS, Docker, Golang, and Python. Proven track record of leading teams through complex projects, including the design and implementation of Kubernetes connectors and Software-Defined WAN solutions. Skilled in IaC, microservices, and problem-solving, with hands-on experience in tools such as AWS CodePipeline, Travis CI, and Kubernetes. Adept at deploying and maintaining VNFs, operationalizing machine-learning code, and ensuring adherence to coding standards. Recognized for mentoring junior developers and providing technical expertise on analytics projects. Seeking opportunities to contribute in a dynamic and challenging environment.

  • Role

    Golang Developer

  • Years of Experience

    9.00 years


Skillsets

  • CI/CD Pipelines
  • Security
  • PostgreSQL
  • OpenShift
  • Machine Learning
  • Google Cloud
  • Golang
  • Debugging
  • Code Review
  • Code Design
  • CloudFormation
  • Cloud Services
  • Python - 3 Years
  • Automation
  • Apache Airflow
  • Agile Development
  • AWS - 5 Years
  • Terraform - 0.6 Years
  • Docker - 4 Years
  • Kubernetes - 3 Years

Vetted For

7 Skills
  • Senior GoLang Developer - AI Screening
  • 51%
  • Skills assessed: IaC, Microservices, AWS, Azure, GCP, GoLang, Problem Solving Attitude
  • Score: 46/90

Professional Summary

9.00 Years
  • Mar, 2021 - Present (4 yr 6 months)

    Technical Lead

    Incedo Inc
  • Nov, 2020 - Mar, 2021 (4 months)

    Senior Solution Integrator

    Ericsson
  • Apr, 2019 - Oct, 2020 (1 yr 6 months)

    Senior Platform Engineer

    Quantiphi Analytics
  • Jun, 2015 - Mar, 2019 (3 yr 9 months)

    Associate Consultant

    Atos

Applications & Tools Known

  • Kubernetes
  • Docker
  • OpenShift
  • Apache Airflow
  • Git
  • PostgreSQL
  • Gunicorn
  • AWS CloudFormation
  • Terraform

Work History

9.00 Years

Technical Lead

Incedo Inc
Mar, 2021 - Present (4 yr 6 months)
    Created an automation tool for deploying load-balancer configuration via a custom Kubernetes operator/controller written in Golang. Created design documents and implementation plans for new features. Performed bug fixes and maintained the existing code across Kubernetes versions. Debugged packets to test traffic flow.

Senior Solution Integrator

Ericsson
Nov, 2020 - Mar, 2021 (4 months)
    Deployed and maintained VNFs using Ericsson solutions. Maintained the platform, monitored and resolved customer issues, and triggered change or incident management.

Senior Platform Engineer

Quantiphi Analytics
Apr, 2019 - Oct, 2020 (1 yr 6 months)
    Enabled machine-learning code with REST APIs. Hands-on with the Flask framework and its interaction with AWS services. Exposure to orchestration tools and Kubernetes.

Associate Consultant

Atos
Jun, 2015 - Mar, 2019 (3 yr 9 months)
    Developed and implemented analytics projects. Mentored developers in big data analytics and predictive maintenance for wind turbines.

Achievements

  • AWS Certified Developer - Associate
  • Awarded for outstanding performance and contribution to the development of the product
  • Awarded for exceptional work in stabilizing the project

Major Projects

4 Projects

Thunder Kubernetes Connector

Mar, 2021 - Present (4 yr 6 months)
    Created an automation tool for deploying load-balancer configuration via a custom Kubernetes operator/controller written in Golang.

Activation and maintenance of Software Defined WAN using Orchestrator

Nov, 2020 - Mar, 2021 (4 months)
    Deployed and maintained VNFs using Ericsson solutions while providing customer resolutions and incident management.

Enhanced Optimisation and Forecast Analytics

Apr, 2019 - Oct, 2020 (1 yr 6 months)
    Built REST API-based solutions to operationalize machine learning models.

Wind turbines Predictive Maintenance and Windmill power prediction

Jun, 2015 - Mar, 2019 (3 yr 9 months)
    Developed analytics solutions and proof-of-concept projects based on the latest technologies.

Education

  • Bachelor of Engineering (Computer Science)

    Pune University (2014)

Certifications

  • Microsoft edX certification for Processing Data on HDInsight and Performing Real-Time Analytics using HDInsight on Azure.

AI-interview Questions & Answers

Hi, my name is Chandan. I have 3 years of experience in Golang, within 9 years of overall experience. Before switching to Golang, I worked on different projects based on Python and on cloud technologies; most of my exposure was to AWS, working with different ecosystems. My first project was building an analytics platform using big data analytics tools; it was a VM-based deployment back in 2015. Then I started working on a project productizing machine learning algorithms. The main challenge we faced was the amount of data we were handling: to give perspective, it was around 500 GB, and the operations were compute-heavy since they were machine learning algorithms with optimization solvers. My responsibility was to take the code from the data scientists and ML engineers and convert it into a scalable form using cloud technologies such as ECS, Python, and Flask. After that, I started working in Golang, where I got a chance to create custom controllers and custom resource definitions (CRDs) for Kubernetes.
Currently we have developed seven to eight custom resource definitions, with custom business logic for configuring different third-party applications, built on the standard Kubernetes source code; this involves code generation, code execution, and handling the reconciliation logic of the controllers. The objective of this project is that the client has its own security product, an application load balancer, and our controller configures that security application/load balancer based on the CRD resources deployed in Kubernetes. The main concepts involved are goroutines and work queues; we watch the cluster through the API server, and once resources are available, the configuration is applied.

In a microservice architecture, how would you utilize AWS services to enable service discovery for a Golang-based service?

If I understand correctly, this is essentially a service-mesh question. A service mesh typically needs a load balancer or API gateway in front, so I could use an ALB: we create a gateway and point it at the Golang service. If traffic comes directly to that Golang-based application, then the canonical name (CNAME) or IP address generated by the load balancer can be used as the serving URL to access the service. If it is a multi-cluster setup, where the client sits in a different Kubernetes cluster, then we need an internal load balancer and communication between the two clusters, so the client can use the internal load balancer to reach the Golang-based service. Third, for in-cluster service discovery there is already kube-proxy, so using an AWS service does not make sense there: you get service.namespace.svc.cluster.local (or whatever the domain name of that particular cluster is), and that DNS name can be used to reach the service.
That is one use case I can think of. Secondly, if you want to be very cloud-heavy, you can use Elastic Container Service, which provides better integration with AWS load balancers. On top of that, you can use Istio or Linkerd.
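The in-cluster DNS naming relied on above can be sketched as a small helper. This is a sketch, not a real discovery client: the service and namespace names are hypothetical, and the cluster suffix is assumed to be the default cluster.local.

```go
package main

import "fmt"

// clusterURL builds the kube-dns name for a Service, of the form
// service.namespace.svc.cluster.local, assuming the default cluster domain.
func clusterURL(service, namespace string) string {
	return fmt.Sprintf("http://%s.%s.svc.cluster.local", service, namespace)
}

func main() {
	// Hypothetical Golang service "orders" in namespace "shop".
	fmt.Println(clusterURL("orders", "shop"))
	// → http://orders.shop.svc.cluster.local
}
```

In a real cluster this name resolves via the cluster DNS service, so no AWS-side registry is needed for in-cluster callers.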

Describe a process for migrating an existing Go microservice to an AWS serverless platform with minimal downtime.

I am assuming this particular application is already running on a container-based platform: it could be Kubernetes, Docker, a cloud-specific service, or an in-house on-premise deployment. First, we need to deploy the application on the AWS platform, using Elastic Container Service or EKS (Elastic Kubernetes Service); which one depends on the platform you are using today. Assuming traffic is currently served from the existing deployment, I can think of a rolling-update style of cutover: once the application on AWS is ready to serve external traffic, keep the existing on-premise load balancer and gradually migrate traffic by assigning weights, so that some requests go to the application running on premise and some to the one running on AWS (or any EKS-based platform). In effect there are two replicas of the same version, and with round-robin routing a share of the requests goes to the on-premise application and the rest to the container platform on AWS.
Once sanity checks confirm the traffic is working fine, you change the routing at the source. If you are using a weighted load balancer, which I assume you would, you just change the weights of that route: if you are currently doing a 60/40 split, change it to 100% toward AWS. Sessions need to be taken into consideration: since routing is round-robin, you first stop traffic to the application running on premise, then send all traffic to the application running on AWS. This is essentially a blue-green deployment, but with the same version running on both sides rather than two different versions.

When refactoring a Go service, what practices would you implement to avoid tight coupling between services?

Typically in microservices I assume the databases run separately, either as serverless/managed databases or on a container platform, so data is processed against shared databases. If two services are using the same database, you can break those services down into smaller services: internally they may still use the same database, but the APIs and endpoints are different for the different services inside the application. That is one way to avoid tight coupling. Second, it depends on the scope of the application. If it is polyglot, with multiple languages in use, we can choose between API styles such as REST and gRPC. There are two categories here: for internal service-to-service calls, gRPC (or RPC generally) makes more sense because it is faster and integrates seamlessly across languages; for flexibility, if you want to expose the service to the outside world and make it user-friendly, you would go for REST. These are a couple of ways to avoid tight coupling.
You can also share the same load balancers, ingresses, or gateways, but with routes for different paths. Basically you provide path-based routing: if the microservice is too heavy, each URL path can have its own container and its own application, and traffic is routed to different containers based on the URL, using Ingress or the Gateway API.
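The "split into smaller services behind distinct endpoints, depending on abstractions rather than a shared database client" idea can be sketched like this. All names here (Store, memStore, orderService) are hypothetical, a minimal sketch rather than a prescribed design.

```go
package main

import "fmt"

// Store abstracts the persistence layer, so a service depends on an
// interface rather than on a concrete database client.
type Store interface {
	Get(key string) (string, bool)
}

// memStore is a stand-in implementation; a PostgreSQL- or DynamoDB-backed
// client satisfying the same interface could be swapped in without
// touching the services below.
type memStore struct{ data map[string]string }

func (m memStore) Get(key string) (string, bool) {
	v, ok := m.data[key]
	return v, ok
}

// orderService only knows about Store, so the database can change
// (or the service can be split out) independently.
type orderService struct{ store Store }

func (s orderService) Describe(id string) string {
	if v, ok := s.store.Get(id); ok {
		return "order: " + v
	}
	return "order not found"
}

func main() {
	svc := orderService{store: memStore{data: map[string]string{"42": "2x widgets"}}}
	fmt.Println(svc.Describe("42")) // → order: 2x widgets
}
```

Because the coupling point is the narrow Store interface, two services that today share one database can later be given separate stores with no change to their business logic.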

What are the benefits of using AWS Elastic Container Service to deploy Go microservices, and how would you leverage it?

First of all, AWS Elastic Container Service (ECS) is a managed service. This gives you the flexibility of not having to worry about maintaining the underlying host OS; that is taken care of by ECS, especially with Fargate. If you use ECS on Fargate, maintenance of the platform is owned by Amazon rather than the user: OS patching and updates are Amazon's purview, not the end customer's. Secondly, it has tight integration with other AWS services: you get logging and metrics built in with ECS, plus integration with load balancers, whether network load balancers or application load balancers. Since most of these workloads are L7, an application load balancer can directly serve your application. Because it connects directly to the load balancer, you can create Route 53 records with their own hosted zones, and you can use managed certificates as well. So you can use the entire AWS stack with it.
As for how I would leverage it: first, build a container image of the microservice. Once the container is ready, push it to a registry, either a public repository or the Amazon registry. Once the image is in the registry, use ECS to run it by pointing it at that image and providing the endpoint of the application. You can also apply data-encryption strategies in the containers, and use KMS for storing sensitive credentials (database names, API keys, and so on) that should not be present inside the repository. Once the endpoint is provided, the application starts running and the load balancer serves it. And since the host is managed by AWS, you can change the specs of the host the container runs on, on the fly: I have seen customers move the box the host runs on from 32 GB to 64 GB to 128 GB based on the workload.

What are the most critical aspects to consider when designing Go microservices for a serverless architecture on AWS?

I have not worked on Go microservices in a serverless architecture, so I am unable to fully answer this question. But just to give a perspective, if I am understanding correctly, I think logging, tracing, and event-based triggering would be the way to go.

Review the code and identify the issue with the go func that is supposed to run a long-running task asynchronously.

The first issue I see is that if you just launch this inside main, you will get the "launching task" log but the actual task will never complete, because the goroutine has no wait mechanism such as a WaitGroup. For every go func we need a synchronization primitive: for example, call wg.Add before spawning the goroutine so the program waits for it to finish. If you want to run multiple of these, you need multiple goroutines running the same long-running task in a loop, each tied to the WaitGroup. You pass the WaitGroup pointer into the go func, and once the long-running task completes you mark it done so the WaitGroup counter decreases; that signals the goroutine has finished. Then you use wg.Wait to block until the counter reaches zero.
Secondly, the long-running task itself is a single process; if you want to launch multiple instances of it, the same pattern applies: a go func whose anonymous function receives the WaitGroup and the long-running task as parameters, with defer wg.Done inside, so that multiple processes can run concurrently when there are multiple functions to execute.

The code that processes requests has a reported bug: handleRequests takes a WaitGroup and a buffered channel of Request structs (buffer size 100), but it is not processing the requests concurrently.

The first thing I see is that the requests themselves are missing: the channel is created, but no Request values are ever sent into it, so the input to handleRequests is unavailable. We need to pass the requests, with whatever parameters the processing needs inside the Request struct, into the channel so that handleRequests can pick them up and process them. That, I think, is the main issue. Apart from that, the handler should range over the channel to read and process each request, the channel needs to be closed once all requests have been sent so the loop terminates, and the wg.Done call is already there.

What strategies would you employ to ensure Go microservices comply with Well-Architected Framework principles?

This is a very broad question, but to narrow it down, the framework should include security best practices: the code should be scanned for vulnerabilities, and security scanning of the images should be done properly. That would be my first point. Second, if the setup is heavily cloud-specific, AWS-specific, we can use CodeBuild, CodePipeline, and CodeCommit for deploying the code; otherwise, you can use CI/CD pipelines for building those images, and obviously not with the latest tag. Third, the secrets inside them should be used in encrypted form, stored in a Vault or in KMS. Beyond that, since Well-Architected principles are a very broad subject, I would need more specifics about what exactly we are trying to do.

How do Go channels work in a serverless approach using AWS API Gateway?

I assume there is an API Gateway with a Lambda function behind it written in Go. By default, API Gateway and Lambda provide a context, and once that context is available we can use Go channels to handle the incoming requests: a request coming into the Lambda can be written to a channel. The channel could be buffered, but buffering would take another strategy because we need to be very specific about the size of the channel, so for simplicity take a plain channel. The request is written to that channel, and as requests keep coming in, the code inside the Lambda keeps processing them, pushing the output to another channel. If multiple requests come in, you take each request and process it: for example, a calculate function that takes the request as an argument, runs the custom logic, and pushes the result back to API Gateway as a JSON payload, obviously using structs with omitempty.

How would you fit AI technology features into microservices tooling?