Vetted Talent

Jitendra Daya

DevOps and Cloud Engineer with nearly 7 years of experience building and managing cloud infrastructure and automation. Skilled in AWS, Terraform, Kubernetes, Docker, OpenShift, OpenStack, Consul, Nomad, Linux, and CI/CD tools, with a focus on improving deployment efficiency and system reliability.

Passionate about driving automation, optimizing processes, and implementing scalable solutions. A proactive problem-solver who enjoys collaborating with teams to deliver innovative, high-performance infrastructure that supports business goals. Eager to tackle new challenges and continuously learn and grow in the DevOps space.

  • Role

    DevOps Engineer

  • Years of Experience

    7 years

Skillsets

  • Test Automation
  • Grafana - 2 Years
  • Linux Server - 7 Years
  • AWS EC2 - 3 Years
  • Deployment Pipelines
  • Azure - 2 Years
  • Docker - 4 Years
  • Infrastructure as Code (IaC) - 2 Years
  • Continuous Deployment (CD)
  • Continuous Integration (CI)
  • AWS - 3 Years
  • Python Scripting
  • OpenShift
  • Containerization
  • Configuration Management
  • Python - 2 Years
  • Kubernetes - 4 Years
  • Terraform - 3 Years

Vetted For

15 Skills
  • Roles & Skills
  • Results
  • Details
  • Senior Software Engineer, DevOps (AI Screening)
  • 60%
  • Skills assessed: Infrastructure as Code, Terraform, AWS, Azure, Docker, Kubernetes, embedded Linux, Python, AWS (SageMaker), GCP Vertex, Google Cloud, Kubeflow, ML architectures and lifecycle, Pulumi, Seldon
  • Score: 54/90

Professional Summary

7 Years
  • Nov, 2022 - Present (3 yr 6 months)

    DevOps Engineer

    Infobeans
  • Apr, 2021 - Nov, 2022 (1 yr 7 months)

    System Engineer (Level 3)

    Cybage Software Pvt. Ltd.
  • Feb, 2019 - Apr, 2021 (2 yr 2 months)

    Linux Administrator (Level 2)

    VSN Internation
  • Jun, 2017 - Dec, 2018 (1 yr 6 months)

    Linux and Windows System Administrator (Level 1)

    Exclusive Securities Ltd.
  • Jul, 2014 - Nov, 2015 (1 yr 4 months)

    Hardware Engineer

    R.D. Computers

Applications & Tools Known

  • Kubernetes
  • Docker
  • Jenkins
  • Git
  • AWS
  • Azure
  • Terraform
  • Prometheus
  • Grafana
  • vCenter
  • GitHub
  • Bitbucket
  • Chef
  • Rancher
  • Nginx
  • VMware
  • Windows Server
  • WordPress

Work History

7 Years

DevOps Engineer

Infobeans
Nov, 2022 - Present (3 yr 6 months)

  • Deploying and managing containerized applications using Kubernetes, Docker, and similar tools.
  • Developing Jenkins jobs and Jenkins Pipelines while ensuring the successful execution of existing jobs.
  • Provisioning new virtual machines on the OpenStack platform.
  • Creating, deploying, and resolving issues related to VMware configurations.
  • Performing Ubuntu server upgrades and verifying proper functionality post-upgrade.
  • Establishing vCenter clusters and promptly addressing any issues encountered.
  • Implementing code modifications, pushing changes to GitHub and Bitbucket, and raising pull requests.
  • Managing upgrades and troubleshooting for Kubernetes clusters.
  • Performing basic tasks on OpenStack and troubleshooting VM-level issues as needed.
  • Configuring Jenkins master and agent nodes and creating new jobs as required.
  • Creating and maintaining documentation for infrastructure, processes, and configurations.
  • Integrating security practices into the DevOps workflow, including vulnerability scanning and code analysis.
  • Driving GitOps practices to manage infrastructure and applications from Git repositories.

System Engineer (Level 3)

Cybage Software Pvt. Ltd.
Apr, 2021 - Nov, 2022 (1 yr 7 months)

  • Managed storage volumes (LUNs): creating, deleting, and assigning them to servers from SAN storage.
  • Installed and configured Rancher and Kubernetes clusters, building multiple Kubernetes clusters within Rancher.
  • Planned and executed Kubernetes cluster upgrades in both production and non-production environments.
  • Performed Windows and Linux server patching and managed vCenter upgrades, VMware host patching, and vCenter cluster deployment.
  • Configured Storage Classes with Pure Storage on Kubernetes clusters and redeployed them as needed.
  • Added and removed Kubernetes hosts as required and troubleshot Kubernetes master and worker node issues.
  • Identified and troubleshot Docker host issues.
  • Built and configured new physical and virtual servers in the infrastructure as needed.

Linux Administrator (Level 2)

VSN Internation
Feb, 2019 - Apr, 2021 (2 yr 2 months)

  • Created, deleted, and assigned storage volumes (LUNs) to servers from SAN storage.
  • Built two-node clustering on Linux, with solid working knowledge of fencing, shared storage, resources, and resource groups.
  • Configured two-node clustering on Windows Server 2016 and created SQL clustering for the data warehouse.
  • Took full organization backups to a tape library and managed the library, modifying and scheduling backups with CA ARCserve software.
  • Managed and configured an Nginx reverse proxy server and a small AWS infrastructure, provisioning AWS EC2 instances as required.
  • Installed and configured Red Hat Virtualization; created, live-migrated, and snapshotted VMs on Red Hat Virtualization Manager.
  • Added new hosts to Red Hat Virtualization and integrated Red Hat Virtualization Manager with Windows Active Directory.
  • Installed and configured cPanel and WordPress on CentOS 7.
  • Migrated MSSQL disks from one disk to another within Windows clustering.
  • Set up GFS2 filesystems within a clustered environment and configured the cluster's logical volume manager on Linux.
  • Moved and exported physical volumes, volume groups, and logical volumes in a Linux environment.
  • Migrated SAN storage volumes from Fujitsu SAN to IBM SAN storage while configuring virtualization between them.
  • Led end-to-end server administration for Linux platforms, ensuring the stability, availability, reliability, and service capacity of Linux servers, including provisioning new Linux servers on diverse hardware.
  • Configured SCSI target and initiator settings and implemented multipathing in Linux.
  • Established and maintained core infrastructure components, technology standards, processes, and policies.
  • Identified and analyzed issues impacting system performance, collaborating closely with various teams to recommend solutions.
  • Detected system discrepancies, assessed associated risks, and implemented solutions while adhering to security standards.

Linux and Windows System Administrator (Level 1)

Exclusive Securities Ltd.
Jun, 2017 - Dec, 2018 (1 yr 6 months)

  • Deploying containerized applications using Kubernetes and Docker.
  • Developing Jenkins jobs and pipelines.
  • Using Terraform and Ansible for infrastructure provisioning and configuration management.
  • Managing Kubernetes clusters and OpenShift environments.
  • Implementing monitoring and alerting using Prometheus and Grafana.
  • Managing cloud environments on AWS and Azure.
  • Integrating security practices into the DevOps workflow.
  • Developing and implementing CI/CD pipelines.
  • Operating system management tasks such as server upgrades, VM provisioning, and troubleshooting.
  • Writing and maintaining scripts in Bash and Python.
  • Handling code changes using Git and raising pull requests.
  • Conducting training and mentoring sessions for junior engineers.
  • Documenting processes, infrastructure, and configurations.
  • Collaborating with cross-functional teams to streamline workflows.
  • Ensuring compliance with enterprise security policies and procedures.
  • Identifying and addressing complex technical challenges.
  • Driving continuous improvement and maintaining knowledge of industry trends, contributing to operational excellence and the reliable delivery of high-quality software releases.

Hardware Engineer

R.D. Computers.
Jul, 2014 - Nov, 2015 (1 yr 4 months)

Achievements

  • Led storage volumes (LUNs) creation
  • Developed Jenkins jobs and pipelines
  • Provisioned virtual machines on Openstack
  • Managed upgrades and troubleshooting for Kubernetes clusters
  • Administered server patches and upgrades
  • Integrated security practices into DevOps workflow
  • Managed large-scale server environments

Education

  • BCA (Computer Application)

    DAVV University, Indore (2014)
  • Intermediate

    Vimal Higher Secondary School, Bhopal (2010)
  • Matriculation

    Vimal Higher Secondary School, Bhopal (2008)

Certifications

  • Red Hat Certified Engineer (RHCE), 2018

  • Red Hat Certified System Administrator (RHCSA), 2018

  • Red Hat Certified Specialist in Containers and Kubernetes (OpenShift I), 2022

  • Hardware & Networking course from Jetking, Indore, 2017

  • AWS Certified Solutions Architect - Associate, 2021

  • Red Hat Certified Specialist in Containers and Kubernetes (OpenShift II), 2023

AI-interview Questions & Answers

Hi. My name is Jitendra. Basically, I have close to 7 years of experience in the IT field. I've worked on multiple tools and technologies over the past 7 years, for example Kubernetes, OpenStack, AWS, Azure, and Terraform. Recently, I moved to a new project working with Nomad and Consul. I successfully upgraded the Nomad cluster; on that cluster we have around 120 clients across 2 data centers in 2 locations. I also upgraded the Consul cluster, which runs multiple services that are critical for us. I also have working experience on the AWS and Azure side: in our project we use both clouds, and to create or manage services on the Azure side we use Terraform code to make changes on the infrastructure side. I also have working experience with storage. In my past roles I worked on multiple types of storage, like SAN and NAS, with providers such as NetApp, Dell, and IBM. I also work on Kubernetes: in our infrastructure around 30 or 40 people use Kubernetes, and we manage the clusters and deploy our applications onto them. Whenever there is an issue on the Kubernetes side, we need to resolve it. The deployment part is managed by the deployment team, but on the infrastructure side we manage the Kubernetes cluster and take care of the services it runs; if any issue occurs in the cluster or on a client node, we resolve it. I have also planned and executed Kubernetes cluster upgrades. I have worked with Ansible on Linux and Windows, with Windows AD, and I have some working experience with LDAP. That's all about me. Thank you.

To deploy a stateful application, for example a database, on a Kubernetes cluster, we can create a StatefulSet: write the manifest for it, create the database application as a StatefulSet, and then create a Service for it. At deployment time we also need to provide the volume, a PV and PVC, for storing data. For backing up and restoring the database's data, we can use volumes on the back-end storage, so backups are taken care of automatically by the storage services. We can also set up a cron job, or a schedule on the storage side, to snapshot the volumes used by the application on a regular basis.
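
The answer above can be sketched as a minimal manifest. This is an illustrative config fragment, not a production setup: the name `demo-db`, the `postgres:16` image, and the storage size are all placeholder assumptions.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db              # hypothetical name
spec:
  serviceName: demo-db       # headless Service that governs the set
  replicas: 1
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # generates a PVC (bound to a PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: demo-db
spec:
  clusterIP: None            # headless Service gives each pod stable DNS
  selector:
    app: demo-db
```

The `volumeClaimTemplates` block is what ties the PV/PVC provisioning mentioned above to the StatefulSet, so each replica keeps its own persistent volume across restarts.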

To optimize the build process for a Python application, we can create a multi-stage Docker build, which means writing a Dockerfile for it. We start by specifying the base image; for Python we can use the official Python image, and copy our Python code into the image with the COPY instruction. In the first stage we install whatever is required to build the code, like the packages listed in requirements.txt. In the second stage we copy over only the built code and the packages required to run it, and build the final Docker image. In that way we create a multi-stage Docker image and reduce the image size.
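
A minimal sketch of such a multi-stage Dockerfile, assuming the application's entry point is a hypothetical `app.py`:

```dockerfile
# Stage 1: install dependencies in the full Python image
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
# Install packages into an isolated prefix we can copy into the final stage
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages and app code into a slim image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "app.py"]
```

Only the second stage ends up in the shipped image, so build-time tooling from the first stage never inflates the final image size.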

For using secrets in a Linux environment, we can use HashiCorp Vault to store them, and retrieve them whenever we need them. On the AWS side, we can also use AWS Secrets Manager, where we can store secret data like passwords, keys, or certificates, and the application can pull them from there when required. In a containerized setup, we can store a key in HashiCorp Vault, and when we deploy the container on the Linux environment, the container fetches the key from Vault and then the service starts.
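
The Vault workflow described above might look like the following command sketch. It assumes a reachable Vault server with a kv-v2 secrets engine mounted at `secret/`; the address, path, and image name are hypothetical.

```shell
export VAULT_ADDR=https://vault.example.com:8200   # hypothetical server
vault login                                        # authenticate first

# Store a database password under a chosen path
vault kv put secret/myapp/db password='s3cr3t'

# Read it back at deploy time and hand it to the container as an env var
DB_PASSWORD=$(vault kv get -field=password secret/myapp/db)
docker run -e DB_PASSWORD="$DB_PASSWORD" myapp:latest
```

In production one would typically use Vault Agent or Kubernetes secret injection rather than shell interpolation, so the secret never lands in shell history.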

Comparing the AWS SDK/CLI with Terraform for a specific use case like network provisioning: Terraform is infrastructure as code, so we write Terraform code to build infrastructure on AWS or any other cloud provider. For example, to create a VPC on AWS with Terraform, we describe in code which VPC we want, in which region, what the subnets will be, and whether a given subnet should allow public access; whatever configuration we want, we declare in the Terraform code, and Terraform creates the infrastructure. With the AWS CLI, by contrast, to create a VPC we need to run an AWS CLI command and provide every configuration option as arguments at the time we run that command.
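
The Terraform side of that comparison can be sketched as a small HCL fragment, assuming the AWS provider is already configured; CIDRs, AZ, and names are illustrative.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "demo-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true   # instances here get public IPs
}
```

The imperative CLI equivalent of just the first resource would be `aws ec2 create-vpc --cidr-block 10.0.0.0/16`, with every further option passed as another flag each time the command runs, which is the contrast drawn above.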

For an automated approach to scale a Kubernetes Deployment in response to increased web traffic, we can use an HPA, the Horizontal Pod Autoscaler. It automatically increases the number of pods in the Deployment as traffic grows, using metrics such as CPU or memory: when usage goes above a threshold like 85 or 90%, it adds pods, depending on our configuration.
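
A minimal HPA manifest matching that description, assuming a Deployment named `web` exists and the metrics server is installed; the names and limits are placeholders.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa               # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 85   # add pods when average CPU exceeds 85%
```

The controller compares observed average CPU against the 85% target and adjusts the replica count between 2 and 10 accordingly.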

I'm not able to understand what error I am getting. I will get the error when I run this code and see what error I am getting. And on that behalf, I will fix this code where it is failing. But by looking at the code, I am not understanding at what point this code will fail.

Reviewing the Terraform module for deploying an AWS instance, the potential security risk here is the key: we are using a default key, and the key is stored inside a variable in the file, so anyone who can read the file can see the key. And if someone also has the IP address or name of a server deployed on AWS, they can access the server. So that type of configuration is a big risk for us. Instead, we can use HashiCorp Vault to store the key, so it is secured and only authenticated people can access it.
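
The fix proposed above can be sketched in HCL. This assumes the Vault provider is configured and a kv-v2 engine is mounted at `secret`; variable and path names are illustrative.

```hcl
# Instead of hardcoding the key in a variable default, mark it sensitive
# and supply the value externally (TF_VAR_..., a tfvars file kept out of
# version control, or a secrets manager).
variable "ssh_public_key" {
  type      = string
  sensitive = true    # keeps the value out of plan/apply output
  # no default: the secret never lives in the module source
}

# Alternatively, read the secret from HashiCorp Vault at plan time
data "vault_kv_secret_v2" "ssh" {
  mount = "secret"
  name  = "myapp/ssh"   # hypothetical path
}
```

Either way, the key no longer sits in plain text in the module file, which was the risk identified in the question.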

To deploy a multi-tier application using Terraform while ensuring high availability: first, we need to write the Terraform code for it. For the compute, we can deploy single instances or use EC2 Auto Scaling, depending on the scenario, or we could even go with a serverless architecture; in this case I will use the instance-based approach rather than serverless. For the multi-tier application, I will first write code to create a VPC with three subnets: two private and one public for the web server. In one private subnet I will deploy the database; in the second I will deploy the REST API; and in the public subnet I will deploy the web server, so only the web server is publicly reachable while the other two tiers stay private. Since the requirement is high availability, I will deploy the application across multiple AZs: for example, the web server in two or three availability zones with one server in each, and likewise the database and the API spread across different availability zones. That way, an issue in one availability zone won't take down our application, and users can still access it.
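
The multi-AZ part of this design can be sketched in HCL. It assumes an `aws_vpc.main` resource is defined elsewhere in the module; the AZ list and CIDR math are illustrative.

```hcl
# Spread private subnets across several AZs so a single-AZ failure
# does not take down the tier deployed in them.
variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_subnet" "private" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index)  # 10.0.0.0/24, 10.0.1.0/24, ...
  availability_zone = var.azs[count.index]
}
```

Instances (or an Auto Scaling group) for each tier would then reference these subnet IDs, giving one placement per availability zone as described above.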

I will create a pipeline for deploying an AI-driven application using containerized technology. First, the developer commits the code; the pipeline builds it and creates an artifact. In the second stage, I store the artifact on the artifact server, copy the artifacts into an image, install the required packages inside it, and upload the image to Docker Hub or any other registry. After uploading the image, I can use it to create a container in the test environment. Then the QA team tests it to ensure everything is working fine. Once testing is complete, the pipeline triggers a notification for production deployment; we can use email or messaging systems for that. Once the message is triggered, the manager approves the deployment, and the application is then automatically deployed to the production environment.
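
Since Jenkins features throughout this profile, the stages above can be sketched as a declarative Jenkinsfile; the registry URL, deployment names, and namespaces are placeholder assumptions.

```groovy
// Declarative pipeline sketch: build, push, test deploy, approval, prod deploy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .' }
    }
    stage('Push') {
      steps { sh 'docker push registry.example.com/app:${BUILD_NUMBER}' }
    }
    stage('Deploy to test') {
      steps { sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER} -n test' }
    }
    stage('Approve') {
      // Pauses the pipeline until a manager confirms promotion
      steps { input message: 'Promote to production?' }
    }
    stage('Deploy to prod') {
      steps { sh 'kubectl set image deployment/app app=registry.example.com/app:${BUILD_NUMBER} -n prod' }
    }
  }
}
```

The `input` step is what implements the manual approval gate mentioned above; everything before and after it runs automatically.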

To optimize a Kubernetes cluster for deploying a computer vision model developed in Python, we can use Prometheus and Grafana for monitoring. We can determine how much load exists within the cluster by examining current usage of resources such as CPU and memory, and identify which node is using the least resources by comparing usage across all nodes. That information lets us make informed decisions about adding new nodes to the cluster, especially in scenarios where the load is high. Regarding the Python application itself and its deployment on the cluster, I'd like to clarify the question to better understand the scenario and the goals of the optimization.