Durga Sai Eswar

Vetted Talent
Python developer with 4.5+ years of experience in backend development of REST APIs, skilled in building and optimizing backend applications.
  • Role: Backend Developer
  • Years of Experience: 5 years

Skillsets

  • Python - 5 Years
  • Azure
  • Git
  • Kubernetes
  • Docker
  • Flask
  • CI/CD
  • Algorithms
  • AWS - 2 Years
  • RESTful APIs - 5 Years

Vetted For

10 Skills
  • Role: Python Developer (AI/ML & Cloud Services) - Remote (AI Screening)
  • Result: 64%
  • Score: 58/90
  • Skills assessed: GCP/Azure, Microservices, Django/Flask, Neo4j, RESTful APIs, AWS, Docker, Kubernetes, Machine Learning, Python

Professional Summary

5 Years
  • Jan 2021 - Present (4 yr 8 mo): Python Developer, KPMG
  • Jan 2019 - Jan 2021 (2 yr): Python Developer, Idexcel Technologies Private Limited

Applications & Tools Known

  • Git
  • Azure Pipelines
  • JFrog
  • Flask
  • Docker
  • Kubernetes
  • Azure DevOps
  • AWS Secrets Manager
  • AWS CloudWatch
  • Sklearn
  • Postman

Work History

5 Years

Python Developer

KPMG
Jan 2021 - Present (4 yr 8 mo)
    Worked on various projects including migration from GitLab to Azure, containerizing and deploying Python applications, and building an orchestration layer for real-time fraudulent transaction screening.

Python Developer

Idexcel Technologies Private Limited
Jan 2019 - Jan 2021 (2 yr)
    Worked on mapping financial terms using an ensemble learning model and integrated machine learning models with REST APIs.

Achievements

  • Participated in an Ideathon and won first prize.
  • Published my first blog on the AWS Blogs portal.

Education

  • Bachelor's (Computer Science)

    REVA University (2019)
  • 12th

    JCNRM Jr. College (2015)

Certifications

  • AWS Machine Learning - Specialty

  • AZ-900 (Azure Fundamentals)

  • DP-900 (Azure Data Fundamentals)

  • AI-900 (Azure AI Fundamentals)

AI Interview Questions & Answers

Hi, I'm Sai Eswar. I have 5 years of experience in backend development, where I build REST APIs using the Flask and FastAPI frameworks and integrate business logic with those APIs. I make sure the application has around 95% code coverage by writing unit and integration test cases. I also write Dockerfiles and Kubernetes manifest files, such as Deployments, ReplicaSets, Pods, and Ingress, and make sure the application is accessible from the browser once it is deployed onto the Kubernetes cluster. In addition, I create CI/CD pipelines to automate deployment to the Kubernetes cluster. So those are the areas I have experience in.

What are the best practices for structuring classes in a large Python codebase? For a large Python codebase, we should write the application code in an object-oriented manner and follow design patterns, so the application stays flexible, open for enhancement, and maintainable. We also have the SOLID principles, like the single responsibility principle and dependency inversion, among others. So when writing object-oriented code, following the SOLID principles and applying design patterns such as the factory pattern or the singleton pattern is, in my view, the best practice for managing a large Python codebase. A rough sketch of this appears below.
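To make the above concrete, here is a minimal Python sketch of the factory pattern with each class keeping a single responsibility; the Exporter classes and format names are illustrative only and are not taken from the candidate's work.

```python
import json
from abc import ABC, abstractmethod


class Exporter(ABC):
    """Each exporter has a single responsibility: one output format."""

    @abstractmethod
    def export(self, data: dict) -> str: ...


class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)


class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())


def exporter_factory(fmt: str) -> Exporter:
    # Factory: adding a new format means adding a class and one mapping entry,
    # not editing every caller (open/closed principle).
    exporters = {"json": JsonExporter, "csv": CsvExporter}
    return exporters[fmt]()


print(exporter_factory("json").export({"status": "ok"}))
```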

What strategy would you recommend for versioning REST APIs when adding new features to an existing...? Usually we can do this through blueprints. When developing APIs in Flask (or the equivalent in FastAPI or another framework), blueprints let us serve different URLs for version 1 and version 2, so the two versions are cleanly segregated. So my answer would be to go with blueprints for versioning the APIs, so that both versions stay available and users can call whichever version they want, as in the sketch below.
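A minimal sketch of the blueprint-based versioning described above, assuming a plain Flask app; the /api/v1 and /api/v2 prefixes and the users route are hypothetical.

```python
from flask import Flask, Blueprint, jsonify

app = Flask(__name__)

# Each API version gets its own blueprint and URL prefix.
v1 = Blueprint("v1", __name__, url_prefix="/api/v1")
v2 = Blueprint("v2", __name__, url_prefix="/api/v2")


@v1.route("/users")
def users_v1():
    return jsonify([{"id": 1, "name": "alice"}])


@v2.route("/users")
def users_v2():
    # v2 adds a field without breaking existing v1 clients.
    return jsonify([{"id": 1, "name": "alice", "active": True}])


app.register_blueprint(v1)
app.register_blueprint(v2)

if __name__ == "__main__":
    app.run()
```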

How do you handle data persistence and fault tolerance in a cloud-hosted Python application using...? Say the application is deployed onto a Kubernetes cluster. We attach a persistent volume to each application pod, and we keep multiple pod instances running, say 5 or 10 depending on the application's usage, through horizontal scaling. Because the volume is attached to the pod, even if the pod crashes the data still persists in the persistent volume, and once the pod is up and running again, whatever data is in the volume can be used by the newly spun-up instance of the application. So my answer would be to go for horizontal scaling and also attach a persistent volume if the application is deployed on Kubernetes.
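As an illustration of attaching persistent storage to the application, here is a rough sketch that creates a PersistentVolumeClaim with the official Kubernetes Python client; the claim name, namespace, and storage size are assumptions, and the pod spec would then mount this claim.

```python
from kubernetes import client, config

# Assumes a valid kubeconfig; in-cluster code would use config.load_incluster_config().
config.load_kube_config()
core = client.CoreV1Api()

# Hypothetical claim for the application's data directory.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

# The Deployment's pod template would reference this claim as a volume,
# so data survives pod restarts.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```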

If tasked to develop a system that requires transactions within Neo4j to adhere to ACID properties, how would you achieve it? We would deploy the Neo4j database as pods in the Kubernetes cluster, running however many instances we want, one, two, four, or more, so that if one instance goes down another is up and running to serve requests. Keeping the ACID properties in mind, like consistency, durability, and integrity, along with availability, having multiple Neo4j instances up and running means we won't face problems and those properties will still hold.

Can you outline your high-level solution for real-time...? It depends on whether the machine learning model we are using for inference is a bigger or a smaller model. For a smaller model, we don't have to use any managed machine learning service; to save cost and maintenance we can just run it inside the Kubernetes cluster or on an EC2 instance that holds the model. For a bigger model that requires a lot of maintenance, we can deploy it to AWS SageMaker and expose it through an API; all of that management is handled by the SageMaker service itself. Once the model is deployed there and enabled through an endpoint, the Python application just makes a request to that SageMaker API, gets the inference, does the processing, and returns the output. To put it in a nutshell: a user makes a request to our API with input data they want predictions for, whether that's an image or a file with text to classify. In our API we do a little cleaning and preprocessing, then send the data to the SageMaker endpoint from within the Python application. SageMaker returns a prediction from the deployed model, and that output is formatted as needed and returned. If we also need to store the data or the predictions, we can put them in an S3 bucket, or in a persistent volume if the Python application is deployed in a Kubernetes cluster. So the flow is: API Gateway in front to filter requests and handle authentication, the Python API running in the Kubernetes cluster, and the model in SageMaker; the pod calls the SageMaker endpoint to get the predictions and returns the output to the UI or user.
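A rough sketch of the "call the SageMaker endpoint from the Python application" step described above, assuming boto3 and an already-deployed endpoint; the endpoint name and payload fields are hypothetical.

```python
import json
import boto3

# Placeholder endpoint name; the model is assumed to be deployed behind a SageMaker endpoint.
ENDPOINT_NAME = "fraud-model-endpoint"

runtime = boto3.client("sagemaker-runtime")


def predict(payload: dict) -> dict:
    """Light preprocessing, then call the SageMaker endpoint and return the prediction."""
    body = json.dumps(payload)  # any cleaning/preprocessing would happen before this
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=body,
    )
    return json.loads(response["Body"].read())


# Example call from the Flask/FastAPI request handler:
# result = predict({"transaction_amount": 120.5, "country": "IN"})
```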

[Code question on calculating a discounted price.] The return statement in the code performs an arithmetic operation, and that formula is mistaken. It should be price minus (price times discount_percent divided by 100): first the percentage amount is calculated from the price, and then that amount is deducted from the price. So the fix is just to modify the discount formula to price - (price * discount_percent / 100); whatever is inside the parentheses is calculated first, that result is deducted from the price, and the final amount is returned.
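For reference, the corrected formula as a small Python function; the function and argument names are assumed, since the original snippet isn't shown here.

```python
def discounted_price(price: float, discount_percent: float) -> float:
    # Parentheses first: compute the discount amount, then subtract it from the price.
    return price - (price * discount_percent / 100)


assert discounted_price(200, 10) == 180.0
```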

In the code snippet below, there is a... I think the single responsibility principle is missing here, which is my guess, since one class is handling the engine start and at the same time also playing music, so two things are happening in one place. My answer would be that the single responsibility principle is violated. The Car class should only have the starting or initiation part, like start_engine, and everything music-related should live in a separate, dedicated class with functionality like play music, raise volume, decrease volume, and stop music. Those media-player functions go in one class, and the starting-related functionality stays in the other, as sketched below. So the single responsibility principle is what's missing here; that would be my answer.
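A minimal sketch of the split described above, with music concerns moved out of the hypothetical Car class; the class and method names are illustrative, since the original snippet isn't reproduced here.

```python
class Engine:
    def start(self) -> str:
        return "engine started"


class MusicPlayer:
    """Music-related behaviour lives in its own class."""

    def play(self) -> str:
        return "playing music"

    def stop(self) -> str:
        return "music stopped"


class Car:
    """Car only coordinates driving-related behaviour; media is delegated to MusicPlayer."""

    def __init__(self, engine: Engine, player: MusicPlayer):
        self.engine = engine
        self.player = player

    def start(self) -> str:
        return self.engine.start()


car = Car(Engine(), MusicPlayer())
print(car.start())
print(car.player.play())
```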

Demonstrate a technique to ensure robust error handling in a distributed system that leverages both API design and the... To my knowledge, we usually put try/except blocks wherever they are needed, wherever the developer expects errors to occur. Mostly we catch errors through try/except itself, and we also write customized exceptions: if something unexpected comes in that isn't a ValueError, KeyError, or another built-in error, we keep the customized errors in a separate file, raise them wherever needed, and handle them in the try/except blocks. So the answer would be to use try/except blocks along with customized error handling, so that almost all of the errors that can be raised are handled according to the developer's expectations. To my knowledge, that's my answer for this.
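A small sketch of the custom-exception approach described above; the exception names and the handle_request function are hypothetical.

```python
class InvalidPayloadError(Exception):
    """Raised when the incoming request body fails validation."""


class UpstreamServiceError(Exception):
    """Hypothetical domain-specific error raised when a dependent service fails."""


def handle_request(payload: dict) -> dict:
    try:
        if "amount" not in payload:
            raise InvalidPayloadError("missing 'amount' field")
        # ... call another service here; wrap its failures in UpstreamServiceError ...
        return {"status": "ok"}
    except InvalidPayloadError as exc:
        return {"status": "error", "code": 400, "detail": str(exc)}
    except UpstreamServiceError as exc:
        return {"status": "error", "code": 502, "detail": str(exc)}
    except Exception as exc:  # last-resort catch so the API returns a clean 500
        return {"status": "error", "code": 500, "detail": str(exc)}


print(handle_request({}))
```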

Discuss the strategy to monitor and document the scaling of Python applications in the cloud using both... We always use CloudWatch to monitor the application logs, and at the same time we also monitor the metrics of the instances that are up and running. We can keep a threshold, say whenever CPU or memory utilization goes above 75%, and trigger an alarm or a notification so that the auto scaling process kicks in and the CPU or memory is increased according to the instance's usage. This can be done automatically. So, using CloudWatch monitoring, we keep a threshold of 75% or 80%, and whenever usage crosses that threshold value, the auto scaling service is initiated to increase the CPU cores or memory of the instance.
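As a rough illustration of wiring a 75% CPU threshold to an alarm with boto3, here is a sketch; the alarm name, Auto Scaling group name, and scaling policy ARN are placeholders, not values from the candidate's environment.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder ARN of an existing Auto Scaling scale-out policy.
SCALE_OUT_POLICY_ARN = "arn:aws:autoscaling:region:account-id:scalingPolicy:example"

# Alarm fires when average CPU stays above 75% for two 5-minute periods,
# then invokes the scale-out policy.
cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-75-percent",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "python-app-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SCALE_OUT_POLICY_ARN],
)
```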

Describe a scenario where you would choose between implementing a graph database design... I would choose Neo4j only when the incoming data has a tree- or family-like structure: one data point is related to another, and a lot of these linked data points are available, forming a tree or nested links from one data point to the next. When the data is in that form, I would definitely go for the Neo4j database, because Neo4j is the best at managing graph-structured data. I haven't worked much with Neo4j, I have just theoretical knowledge, and based on that I would choose Neo4j only if the data has a graph structure or is more linked in nature, with a lot of linking from one point to another. In any other case, I would go for a relational database. So, yeah, that's it.