
DevOps and Cloud Engineer with nearly 7 years of experience building and managing cloud infrastructure and automation. Skilled in AWS, Terraform, Kubernetes, Docker, OpenShift, OpenStack, Consul, Nomad, Linux, and CI/CD tooling, with a focus on improving deployment efficiency and system reliability.
Passionate about driving automation, optimizing processes, and implementing scalable solutions. A proactive problem-solver who enjoys collaborating with teams to deliver innovative, high-performance infrastructure that supports business goals. Eager to tackle new challenges and continuously learn and grow in the DevOps space.
DevOps Engineer, Infobeans
System Engineer (Level 3), Cybage Software Pvt. Ltd.
Linux Administrator (Level 2), VSN International
Hardware Engineer, R.D. Computers
Linux and Windows System Administrator (Level 1), Exclusive Securities Ltd.
Kubernetes
Docker
Jenkins
Git
AWS
Azure
Terraform
Prometheus
Grafana
vCenter
GitHub
Bitbucket
Chef
Rancher
Nginx
VMware
Windows Server
WordPress
Hi, my name is... I have a total of 6 to 7 years of experience in the IT field and have worked on multiple tools and technologies in that time, for example AWS, Azure, and Terraform. Recently I moved to a new project built around Nomad and Consul. I successfully upgraded the Nomad cluster, which serves around 120 clients across 2 data centers in 2 locations, and I also upgraded the Consul cluster, which runs multiple services that are critical for us. I have working experience on both the AWS and Azure side: our project uses both clouds, and to create or change services on the Azure side we use Terraform code to manage the infrastructure. I also have working experience on the storage side; in past roles I worked with multiple types of storage, SAN and NAS, from providers such as NetApp, Dell, and IBM. On Kubernetes, around 30 to 40 people in our organization use the clusters we run. The deployment team manages application deployments, but on the infrastructure side we manage the Kubernetes clusters themselves and the services they run: whenever an issue occurs in a cluster or on a client node, we resolve it, and I have planned and carried out Kubernetes cluster upgrades. I have also worked with Ansible on Linux and Windows, with Windows AD, and I have some working experience with LDAP. That's all about me. Thank you.
To deploy a stateful application on a Kubernetes cluster, for example one of the database clusters in our organization, we create a StatefulSet: write the manifest for it, create the database application from the StatefulSet, and create a Service for it. At deployment time we also need to provide the volume, a PV and a PVC, so the database has somewhere to store its data. For backing up and restoring the database's data, the volume can live on back-end storage, where backups are taken care of automatically by the storage services; we can also set up a cron or a schedule on the storage side to snapshot the volume, taking regular backups of the volumes mounted by the stateful application.
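A minimal sketch of such a manifest, assuming a PostgreSQL-style database on port 5432; the names, image, and storage size are illustrative, not from the original answer:

```yaml
# Headless Service that gives the StatefulSet pods stable network identities.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None            # headless: required by the StatefulSet
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16          # assumed database image
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # Kubernetes creates a PVC (and PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```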
To optimize the build process for a Python application, we create a Dockerfile that uses a multi-stage build. In the first stage, we start from a base image, for example the official Python image, copy our Python code into it with the COPY instruction, and install whatever is required to build the code, such as the packages listed in requirements.txt. In the second stage, we start from a clean image and copy in only the code and the packages required to run it. By building the Docker image this way, as a multi-stage build, we reduce the final image size.
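A sketch of such a Dockerfile, assuming the entry point is app.py and the dependencies live in requirements.txt (both assumptions):

```dockerfile
# Stage 1: install build-time dependencies on the full official Python image.
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
# Install packages into an isolated prefix so only the result is carried over.
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages and the code onto a slim base image.
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "app.py"]   # assumed entry point
```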
For using secrets in a Linux environment, we can store them in HashiCorp Vault and read them from there whenever they are needed. On the AWS side, we can alternatively use AWS Secrets Manager to store secret data such as passwords, keys, or certificates, and the application fetches them from there when it needs them. Either way, the secret lives in the secret service rather than in the Linux environment or the application itself. The same pattern works in a containerized setup: we keep the key in HashiCorp Vault, and when we deploy the container on the Linux environment, it pulls the key from Vault and then the service starts.
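Hedged CLI examples of both options; the paths, names, and values are hypothetical, and they assume a reachable Vault server with the KV engine mounted at secret/ plus configured AWS credentials:

```sh
# HashiCorp Vault: store a secret, then read a single field back.
vault kv put secret/myapp db_password='s3cr3t'
vault kv get -field=db_password secret/myapp

# AWS Secrets Manager equivalent.
aws secretsmanager create-secret --name myapp/db_password --secret-string 's3cr3t'
aws secretsmanager get-secret-value --secret-id myapp/db_password \
  --query SecretString --output text
```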
Comparing the AWS CLI/SDK with Terraform for a specific use case like network provisioning: Terraform is infrastructure as code, so we write Terraform code to build infrastructure on AWS or any other cloud provider. For example, to create a VPC on AWS, we describe everything in the Terraform code: which VPC we want to create, which region it goes in, what the subnets for the VPC will be, and whether a particular subnet allows public access or not. Whatever configuration we want is declared in the code, and Terraform creates the infrastructure from it. With the AWS CLI, on the other hand, creating the same VPC is imperative: we run a CLI command to create the VPC, and we have to provide every configuration option as arguments at the time we run that command.
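A hedged Terraform sketch of that VPC example; the region, CIDRs, and names are assumptions:

```hcl
provider "aws" {
  region = "us-east-1"   # assumed region
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true   # this subnet allows public IPs
}
```

The imperative CLI counterpart of the first resource would be a one-off command such as aws ec2 create-vpc --cidr-block 10.0.0.0/16, with every further option passed as more arguments.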
For an automated approach to scaling a Kubernetes deployment in response to increased web traffic loads, we can use an HPA, the Horizontal Pod Autoscaler. It automatically increases the number of pods in the deployment when traffic grows, based on metrics: when CPU or memory utilization goes above a threshold such as 85, 90, or 95%, it adds two or three pods, depending on our configuration. So for the automatic approach, we use the Kubernetes Horizontal Pod Autoscaler.
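A minimal HPA sketch, assuming a Deployment named web and the CPU threshold mentioned above; the replica bounds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 85   # scale out when average CPU crosses 85%
```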
I'm not able to understand what error I would get just by looking at this code. I will run the code, see what error it produces, and fix the code where it is failing on that basis. But by reading the code alone, I cannot tell at what point it will fail.
Looking at the Terraform module for deploying an AWS instance, the potential security risk here is the key: we are using a default key, and the key is stored inside a variable. The risk is that anyone who can read the file can see the key, and if someone also has the IP address or name of the server deployed on AWS, they can access the server. Using that kind of configuration is a big risk for us. Instead of doing it this way, we can store the key in HashiCorp Vault, so it stays secure and only authenticated people can access it.
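A sketch of the risky pattern and one way to avoid it; the variable name and Vault path are hypothetical, and the fix assumes the HashiCorp Vault provider is configured:

```hcl
# Risky: a default value hard-codes the key in a file anyone with repo access can read.
variable "ssh_key" {
  default = "MY-PLAINTEXT-KEY"   # visible to anyone who opens this file
}

# Safer: read the key from HashiCorp Vault at plan time instead of committing it.
data "vault_generic_secret" "ssh_key" {
  path = "secret/infra/ssh_key"   # hypothetical Vault path
}
```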
To deploy a multi-tier application using Terraform while ensuring high availability, we first write the Terraform code for the infrastructure. Depending on the scenario, we can deploy on EC2 instances or go with a serverless architecture; in this case I will use the instance-based infrastructure rather than serverless. For the multi-tier application, I first write code to create a VPC with three subnets: two private and one public. In the first private subnet I deploy the database; in the second private subnet I deploy the REST API; and in the public subnet I deploy the web server. The web server sits in the public subnet, while the other two applications stay in the private subnets and are accessed through it. Since the requirement is to deploy the application in a highly available way, I deploy it across multiple AZs: for example, the web server is deployed in two or three availability zones with one server in each, and the database and the API are spread across different availability zones in the same way. So whenever one availability zone has an issue, the application is not hampered and users can still access it.
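A hedged sketch of that network layout; the AZ names and CIDR arithmetic are assumptions, with one subnet per tier per AZ:

```hcl
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

# Public subnets for the web tier, one per availability zone.
resource "aws_subnet" "web" {
  count                   = 2
  vpc_id                  = aws_vpc.app.id
  cidr_block              = cidrsubnet(aws_vpc.app.cidr_block, 8, count.index)
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true
}

# Private subnets for the API and database tiers, also spread across AZs.
resource "aws_subnet" "private" {
  count             = 4
  vpc_id            = aws_vpc.app.id
  cidr_block        = cidrsubnet(aws_vpc.app.cidr_block, 8, count.index + 10)
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
}
```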
I will create a pipeline for deploying an AI-driven application using containerized technology. First, the developer commits the code, and the pipeline builds it and creates an artifact. In the second stage, I store the artifact on the artifact server, copy the artifacts into an image, unpack and install the required packages inside it, and upload the image to Docker Hub or another registry. After uploading the image, I use it to create a container in the test environment, where the QA team tests it to ensure everything is working fine. Once testing is completed, the pipeline triggers a notification for production deployment; we can use email or a messaging system for this. Once the manager approves the deployment, the application automatically deploys to the production environment. So I need to create a pipeline covering all of this to deploy the application to production.
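A declarative Jenkins pipeline sketch of those stages (Jenkins is the assumed CI tool here; the image name, registry, and deployment names are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps { sh 'docker build -t myregistry/myapp:${BUILD_NUMBER} .' }
        }
        stage('Push to registry') {
            steps { sh 'docker push myregistry/myapp:${BUILD_NUMBER}' }
        }
        stage('Deploy to test') {
            steps { sh 'kubectl -n test set image deployment/myapp myapp=myregistry/myapp:${BUILD_NUMBER}' }
        }
        stage('Approval') {
            // Manual gate: the manager approves before production rollout.
            steps { input message: 'Deploy to production?' }
        }
        stage('Deploy to production') {
            steps { sh 'kubectl -n prod set image deployment/myapp myapp=myregistry/myapp:${BUILD_NUMBER}' }
        }
    }
}
```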
To optimize a Kubernetes cluster for deploying a computer vision module developed in Python, we can utilize Prometheus and Grafana for monitoring purposes. We can determine how much load exists within the Kubernetes cluster by examining the current usage of resources such as CPU and memory. To identify which node is utilizing the least resources, we can compare resource usage across all nodes in the cluster. This information allows us to make informed decisions about adding new nodes to the cluster, especially in scenarios where the load is high. As for the specifics of the Python application and its deployment on the Kubernetes cluster, I'd like to clarify the question to better understand the scenario and the goals of the optimization.
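Hedged examples of the kind of Prometheus queries this per-node comparison could use, assuming node-exporter metrics are being scraped:

```promql
# Average CPU utilization per node over the last 5 minutes.
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))

# Memory still available on each node, in bytes.
node_memory_MemAvailable_bytes
```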