Senior DevSecOps Consultant
EMPLOYERS
Infosys, Bangalore - Senior DevSecOps Consultant
Infosys - GitOps Consultant and Kubernetes Developer
Capgemini - Cloud DevOps Consultant
Tata Consultancy Services
Azure
Azure DevOps Server
Kubernetes
Terraform
PowerShell
Bash
Python
Azure Cloud
AKS
EKS
Docker
Helm
Jenkins
Nexus
SonarQube
ArgoCD
Ansible
Yes. So I am Adi, and I have around 6-7 years of experience as a DevOps engineer. In my current project I designed the deployment architecture for the application as well as the CI/CD architecture. The project was a microservices application that needed to run on an Azure Kubernetes cluster, so we used Azure Kubernetes Service (AKS). The deployment architecture was fairly simple: we used Azure Application Gateway as the entry point of the application, and behind the Application Gateway we had the AKS cluster. Inside the cluster we had around 20 microservices, and we deployed all 20 onto AKS. For each microservice we created a deployment.yaml and a service.yaml; each deployment had 3 replicas for high availability. We used a ClusterIP service for internal load balancing and an Ingress object, behind an ingress controller, to expose the application. ConfigMaps and Secrets were used as well. In the deployment manifests we also implemented readiness probes. The point was that our application took around 2-3 minutes to start: the pod would reach the Running state quickly, but the application was only ready after about 3 minutes because of some startup processing it does. So we used a readiness probe on the container to stop traffic from reaching the pod before the application was actually ready. That was the deployment side. The second thing we did was create CI/CD pipelines for the different microservices using Azure Pipelines. Our code was stored in Azure Repos, and our tasks, user stories, and sprints were managed in Azure Boards, so we used the complete Azure DevOps product. As for the pipeline details, the pipelines had multiple stages: the first stage cloned the repository, the second did the Docker build, then we had a SonarQube static code analysis scan, then we scanned the Docker image with the Trivy vulnerability scanner, and then we pushed the image and deployed it to the Azure Kubernetes Service. That was the complete deployment and CI/CD architecture that I planned and we built. As for the branching strategy, we used the Git Flow branching strategy for the applications. Whenever a developer was working on a new requirement, they created a new feature branch, and once the feature was developed it was merged into the develop branch. The develop branch was deployed onto the dev environment.
Once the dev environment looked good enough, the changes were merged into a release branch, and the release branch was deployed to the QA environment. That QA environment was tested by a separate testing team; they did manual testing, and there was also some automation testing with its own pipeline. So that is how we did it.
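A minimal sketch of what one of those per-microservice deployment manifests could look like, with 3 replicas and a readiness probe as described above; the service name, image, port, and probe path are hypothetical placeholders, not the actual project values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # hypothetical microservice name
spec:
  replicas: 3                     # 3 replicas for high availability, as described above
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: myregistry.azurecr.io/orders-service:v1   # placeholder registry and tag
          ports:
            - containerPort: 8080
          readinessProbe:          # hold traffic until the app is actually ready (~2-3 min startup)
            httpGet:
              path: /health        # assumed health endpoint
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 15
            failureThreshold: 10
```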
Yes. So can you detail the security measures you would implement in a Kubernetes cluster to prevent unauthorized access? Okay. Just give me a second. To prevent unauthorized access to the Kubernetes cluster, the first thing I would do is implement RBAC in the cluster, so that even if someone gets access to the cluster, they have only limited access. We enable RBAC and plan the permissions properly. For example, if some users only need to create pods, we give them only the create-pod permission; if other users only need read access, we give them only a reader role. So we plan it out and grant only what each user needs. The second thing is that we have to protect the token used to communicate with the API server; that token needs to be secured. If we are using a managed Kubernetes cluster on Azure or AWS, the token can only be retrieved through the Azure CLI or the AWS CLI, so that gives some protection. But if we are hosting on bare metal, then we definitely need to protect the master node of the cluster and prevent public connectivity to it. So there are two major things: one, restrict access to the master node; and two, enable RBAC in the cluster.
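A hedged sketch of the kind of RBAC rule mentioned above: a namespaced Role that only allows creating and reading pods, bound to a single user. The namespace and user name are assumptions for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: dev                 # assumed namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch"]   # create plus read-only verbs, nothing else
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: dev
subjects:
  - kind: User
    name: dev-user@example.com   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io
```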
How do you configure horizontal pod autoscaling based on custom metrics? Yeah. So by default, horizontal pod autoscaling is set up using the basic metrics, CPU and memory. But if you want to use custom metrics, there is an option in the HPA configuration for that, and one thing I have actually used is KEDA. In my project we had an Azure Event Hub. The application was continuously reading the messages coming in on an Event Hub topic. The requirement was that when the number of messages went up, we wanted to scale up the number of pods, and when the number of messages went down, we wanted to scale the pods back down. For that we used KEDA: we deployed KEDA in the cluster and authenticated it to the Event Hub by giving it the credentials. KEDA continuously checked how many messages the Event Hub was receiving and fed those metrics to the horizontal pod autoscaler. The autoscaler was configured so that if the spike in messages went above roughly 200 per minute, it would increase the number of pods. So that kind of setup can be done with KEDA, and that is what we set up in the project.
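A rough sketch of the KEDA setup described above, assuming a ScaledObject with KEDA's Azure Event Hub scaler; the target deployment name, consumer group, and the environment variable names holding the connection strings are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: eventhub-consumer               # hypothetical deployment reading the Event Hub topic
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: azure-eventhub
      metadata:
        consumerGroup: "$Default"                   # assumed consumer group
        connectionFromEnv: EVENTHUB_CONNECTION      # env var with the Event Hub connection string
        storageConnectionFromEnv: STORAGE_CONNECTION  # checkpoint storage connection string
        unprocessedEventThreshold: "200"            # scale out when the backlog exceeds ~200 messages
```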
What strategies would you employ to ensure zero-downtime deployments when transitioning from Tanzu Kubernetes to AKS? Okay. So why transition from Tanzu Kubernetes to AKS? Tanzu Kubernetes is, I would say, close to bare metal: we have to manage almost everything ourselves, it is not a fully managed cluster, whereas AKS is a managed cluster. So we are going from a mostly unmanaged cluster to a completely managed one. First, we need to assess the existing environment: what applications are running, how many pods, which deployments, cluster roles, ConfigMaps, Secrets, whatever Kubernetes objects are there, and which ingress controllers. We have to plan which ingress controller was used in Tanzu Kubernetes and which one we will use in Azure Kubernetes Service. So one by one we understand what is running in the existing cluster. Once we have understood everything, we can start migrating things one by one. For example, first deploy the roles and cluster roles and whatever else is required; then look at the ConfigMaps that are running, export them, and apply them on the AKS cluster. Like that, we migrate the objects to AKS one by one, and at the end we migrate all the workloads: the deployments, pods, replica sets, stateful sets, daemon sets, whatever is running in the Tanzu cluster. While we are migrating, we do not take the Tanzu cluster down. We set up the complete application again on AKS, and the cutover happens last. For example, the application is running on one domain name, and that domain has a DNS A record pointing to the public IP address of the Tanzu Kubernetes ingress controller. Once the complete application is deployed on AKS and the ingress controller, ingress objects, and ingress rules are all set up there, I would first verify it using a kubectl port-forward command to access the application running in AKS. If everything looks fine and the migration is complete, then we do a DNS switch from Tanzu Kubernetes to AKS. This ensures zero downtime: as long as the DNS points to the Tanzu cluster, the application keeps being served from there and customers can keep using it, but once the application is completely set up on AKS, we do the DNS switch.
And once the DNS is switched to the AKS ingress controller IP address, all the users will be accessing the application directly from the AKS cluster.
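As an illustration of the final piece, a minimal sketch of an Ingress object on the AKS side that the DNS A record would end up pointing at after the cutover; the host name, ingress class, and backend service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx          # assumed ingress controller chosen for AKS
  rules:
    - host: app.example.com        # the domain whose A record is switched at cutover
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # hypothetical service migrated from Tanzu
                port:
                  number: 80
```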
What are the benefits of using Helm charts in Kubernetes, and how would you manage dependencies in a Helm chart? So the main benefit of using Helm charts in Kubernetes is templating. For each microservice we need to write a deployment.yaml, and in plain Kubernetes manifests we cannot create variables; that is where Helm charts come into the picture. Take a microservices application with 4 microservices: for each one we have a deployment.yaml, service.yaml, ingress.yaml, configmap.yaml, all the things that specific application needs. If we are not using a Helm chart, we have to maintain separate copies of all these manifests. For example, to upgrade an image in a plain manifest, if version v1 is running for one of the microservices, we have to edit the manifest to version v2 and apply it again before the change is reflected in the cluster. But a Helm chart is a templating engine, so we can create variables: for example, we can put a variable for the image in the deployment template and control the values of all those variables from a single values.yaml file. Now if we have 4 environments, we create one Helm chart for our microservices system; the chart stays the same, and the only difference is a separate values.yaml file per environment, however many environments we have. Once it is set up, deployment becomes very easy: with a single Helm chart and different values.yaml files we can deploy to dev, QA, prod, and so on. As for managing dependencies, for example when one microservice depends on another, that can also be handled in the chart: Helm has proper syntax for declaring chart dependencies (the dependencies section of Chart.yaml), and while writing the templates in the chart's templates folder we can plan this and manage the dependencies properly. It is all possible with Helm.
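A small sketch of the templating and dependency ideas above, assuming a chart whose image tag comes from values.yaml and which declares a subchart dependency in Chart.yaml; the chart, repository, and image names are placeholders:

```yaml
# Chart.yaml - chart metadata, with a subchart dependency declared
apiVersion: v2
name: orders-service              # hypothetical chart name
version: 0.1.0
dependencies:
  - name: postgresql              # hypothetical dependent subchart
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
---
# values-dev.yaml - one values file per environment (values-qa.yaml, values-prod.yaml, ...)
image:
  repository: myregistry.azurecr.io/orders-service
  tag: v2                         # bump the tag here instead of editing the manifest
replicaCount: 3
# In templates/deployment.yaml the image line references these values:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm dependency update` then pulls the declared subcharts into the chart's charts/ folder before install or upgrade.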
Okay. What is a Kubernetes operator, and how does it simplify cluster application management? Okay, yeah. To understand this, let's take the example of Prometheus. If you want to deploy Prometheus in the cluster by hand, it is quite difficult: we need to create a deployment, a service, all the pieces required for running Prometheus. But there is a Prometheus Operator, and what it does is help us deploy the complete Prometheus package in the cluster. For that we just need to write one manifest file with kind: Prometheus, and that deploys the complete Prometheus application inside the cluster. It makes it very simple. Now say we have to deploy 4 Prometheus instances in our cluster, one per namespace; we don't want a single Prometheus instance for the whole cluster, we want 4 different ones. Before the Prometheus Operator, we would have to install and manage Prometheus in each namespace one by one. With the operator, we just deploy it in the cluster and it gives us new CRDs. Once the CRDs are in place, we write a manifest with the API version, kind: Prometheus, and some details about the instance we want in the spec, and we apply that manifest in each of the 4 namespaces to deploy and manage Prometheus easily and in a proper way. So that is how a Kubernetes operator helps a lot in managing applications; I just took Prometheus as an example. If we want, we can also create our own Kubernetes operator: for example, if we have an application we want to run in Kubernetes, we can build our own operator, deploy it in the cluster, define the CRDs for it, and then whenever we want to deploy the application we just give the kind and the application details in a manifest; once we apply that manifest, the operator deploys whatever is required for that application.
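A minimal sketch of the kind of custom resource meant above, assuming the Prometheus Operator and its CRDs are already installed in the cluster; the namespace, service account, and selector labels are placeholder values:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus               # CRD provided by the Prometheus Operator
metadata:
  name: team-a-prometheus      # hypothetical per-namespace instance
  namespace: team-a            # repeat this manifest in each namespace for 4 separate instances
spec:
  replicas: 1
  serviceAccountName: prometheus          # assumed pre-created service account
  serviceMonitorSelector:
    matchLabels:
      team: team-a             # scrape only ServiceMonitors labeled for this team
  resources:
    requests:
      memory: 400Mi
```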
Explain the concept of the pod lifecycle and the states a pod can be in. So, first of all, a pod is the smallest unit in a Kubernetes cluster. When we first create a pod, it is in the Pending state; while it is Pending, the scheduler is checking which node is available and so on. Then, if everything is fine, the pod gets scheduled onto a node and moves to the Running state. If everything goes well it stays Running; if the pod is running a one-off command and that command completes, the pod moves to the Completed (Succeeded) state once it has executed. If the pod is meant to run continuously, it keeps running. If the application exits, say an exception or something else happens, the pod goes to the Failed state. Within failures there are different conditions such as CrashLoopBackOff and ImagePullBackOff. If something keeps going wrong in the container running inside the pod, the pod goes into CrashLoopBackOff, and then we need to check the logs to understand and fix the issue. The second thing that can happen is ImagePullBackOff: there is some issue pulling the image from the container registry, which might be an authentication problem, the registry being down, or us not being able to connect to it; whatever the cause, we get ImagePullBackOff, and we can check the events Kubernetes emits using the kubectl describe command to get all the details. There is also one more state, Evicted. For example, a pod is running on a node and that node goes down, or the CPU or memory on that node becomes unavailable; at that point the pod is marked as Evicted, meaning the CPU and memory the pod needed were not available due to some issue on the node. So in total there are around 4-5 stages: it starts from Pending, then it can be Running, Completed, Failed, or Evicted, and under failures we have seen a few more conditions like CrashLoopBackOff and ImagePullBackOff, and there can be more.
What do you need to consider when creating a persistent volume claim in Kubernetes? Yeah. So the first thing we need to decide is where we want the persistent volume to be created. For example, if we are using Azure Kubernetes Service, we have different options: we can create storage in an Azure storage account (Files or Blob), or we can create an Azure Disk. So first we plan where the persistent volume will live, and according to that we need to check whether the corresponding storage driver is available in our cluster. Once we know the storage type, say we have planned to use an Azure Disk and attach that disk to the pod as a persistent volume, we check the storage driver: the CSI driver for Azure Disk has to be available and authenticated to Azure so that it is able to provision a new disk there, otherwise it won't work. If you are using AKS, that part is already taken care of. Once that is done, we just write the manifest files for the persistent volume claim (and persistent volumes if needed), and then we attach the claims to the deployments or pods wherever they are required.
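A minimal PVC sketch for the Azure Disk case described above, assuming AKS's built-in managed-csi storage class; the claim name and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce              # an Azure Disk can only be attached to one node at a time
  storageClassName: managed-csi  # AKS built-in class backed by the Azure Disk CSI driver
  resources:
    requests:
      storage: 10Gi              # placeholder size
```

The claim is then referenced from the pod spec as a volume (persistentVolumeClaim.claimName: app-data) and mounted into the container.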
How would you handle disaster recovery and backup strategies for stateful applications running on Kubernetes in Azure? Okay. So for a stateful application, the first thing is to run it as a StatefulSet. For example, SonarQube is a stateful application; databases are stateful as well. A StatefulSet gives us a few things that help a stateful application run properly. With a database, for instance, we usually want a cluster of database instances, and one instance needs to know what the other instances are doing; for that, the pod names need to be stable so one pod can connect to another easily. The clustering itself is handled by the application, but it relies on those stable names. If we used a Deployment, the pod names and IDs would change, and that is a problem; so we use a StatefulSet. Then we attach a volume to each pod in the StatefulSet. Suppose there are 2 pods running our SonarQube application, pod number 1 and pod number 2, and each pod has one volume attached, so pod 1 has one volume and pod 2 has another. Now, since the question is about backup and disaster recovery: suppose pod 1 goes down. When pod 1 goes down, the StatefulSet creates a replacement pod with the same name and attaches the same volume that was previously connected to that pod. The new pod comes up with the same name, connected to the same volume, reads whatever is in the volume, and continues the work from the point where the previous pod died rather than starting again. That is the StatefulSet side. And for disaster recovery, we can additionally take backups of the volumes. Let's assume those volumes live in an Azure storage account as blob storage.
These are blob containers connected as persistent volumes to those pods. In that case we can enable GRS (geo-redundant) replication on the storage account, so the data is also replicated to a paired secondary region. That is a capability we get directly from Azure, and it keeps the data and the volumes highly available even in the case of a disaster.
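A hedged sketch of the StatefulSet pattern described above, with stable pod names and one volume per pod via volumeClaimTemplates; the SonarQube image tag, mount path, storage class, and sizes are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sonarqube
spec:
  serviceName: sonarqube           # headless service giving stable names like sonarqube-0, sonarqube-1
  replicas: 2
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:lts     # assumed image tag
          volumeMounts:
            - name: data
              mountPath: /opt/sonarqube/data   # assumed data path
  volumeClaimTemplates:            # one PVC per pod; a replacement pod reattaches the same volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # assumed Azure Disk-backed class
        resources:
          requests:
            storage: 20Gi
```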
Describe the advantages of implementing a service mesh in a Kubernetes environment and the considerations for choosing one. Okay. So the advantages are many. In a microservices-based environment we need to know where each microservice's traffic is going and which application is communicating with which application; we need to see the complete mesh, the whole network and traffic flow of the application. That is where a service mesh comes into the picture. In the CNCF landscape there are two well-known options, Linkerd and Istio; I have used Linkerd, so let's talk about Linkerd, though all service meshes work in roughly the same way. Say we have 4 pods running 4 different microservices. When we install Linkerd on the cluster, a sidecar proxy container is injected into every pod that is part of the mesh. There is also a Linkerd dashboard that we can access through its pod; we can create a service and an ingress to expose it as well, that is totally fine. Now, when pod 1 communicates with pod 2, pod 1 cannot reach the main application container in pod 2 directly: all the traffic going to the main container passes through the sidecar container. Because the traffic flows through the sidecars, Linkerd can build a complete map of the mesh in its UI, so we can see which pod is sending traffic to which application and how the communication flows. It gives a complete mesh view of how things are going in the microservices environment, and that is why it helps.
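A small sketch of how the sidecar injection mentioned above is typically switched on with Linkerd, assuming the standard proxy-injector annotation; the namespace name is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop                      # hypothetical namespace holding the 4 microservices
  annotations:
    linkerd.io/inject: enabled    # Linkerd's injector adds the proxy sidecar to pods created here
```

Pods created (or restarted) in this namespace then get the linkerd-proxy sidecar, and the traffic map appears in the Linkerd dashboard.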
How do you approach performance testing for deployments in Kubernetes, and how does it influence capacity planning? Yeah. So for performance testing of deployments, what we did previously was deploy the application and then set up a separate tool, JMeter, on a different virtual machine. The application was running in the cluster with its replicas and everything set up, and from that separate virtual machine we used JMeter to generate a lot of synthetic traffic against the endpoint of the application running in Kubernetes. We had written JMeter tests that define how much traffic to send, the number of threads, and so on; everything was configured in JMeter. While the test was running we monitored the cluster on a Grafana dashboard to see what was happening. Based on that, we planned the capacity of the cluster: we learn how much load the cluster can handle. For example, say we send traffic simulating 10,000 users from JMeter; we run the test for different durations, maybe 15 minutes, 30 minutes, or around an hour, and at different times. If the CPU and memory levels on the Grafana dashboard stay at a normal level, it is fine; otherwise we come to know that this many users can be handled with this size of cluster. If we want to cater to more users, we increase capacity: not just the pods but also the number of nodes in the cluster. If we are using Azure Kubernetes Service we can configure an autoscaling node pool so that nodes scale automatically; with a manual node pool we have to increase the number of nodes ourselves. Second, we may need to increase the resources for each pod. It depends on whether we want horizontal or vertical scaling. Horizontal scaling is used when the application mainly needs to handle more traffic, so if our application has to serve more users, I will increase the number of pods. But if our application is, for example, a machine learning model or algorithm that takes a lot of CPU and memory, then we need to increase the pod resources, the requests and limits, so the pod can run that container properly.
And we can combine both: for a machine learning workload we might need both vertical and horizontal scaling, since a single approach might not be enough. So it totally depends on the requirement. But yes, this is how performance testing influences capacity planning: by running the load tests and checking the dashboards we know how much capacity we have and how many users we need to cater to, and according to that we increase the number of pods and nodes.
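A short sketch tying the two scaling levers above together: per-pod requests/limits as the vertical lever, and an HPA on CPU as the horizontal one. The target deployment, thresholds, and sizes are placeholder values of the kind a load test like the one described would inform:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # hypothetical user-facing deployment
  minReplicas: 3
  maxReplicas: 15                 # ceiling chosen from what the JMeter runs showed the nodes can take
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU crosses 70%
---
# The vertical lever sits in the deployment's pod spec, for example:
#   resources:
#     requests: { cpu: "500m", memory: "512Mi" }
#     limits:   { cpu: "1",    memory: "1Gi" }
```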