Proficient Automation Test Engineer with a strong background in manual and automation QA, team management, and test automation.
Skilled in using Selenium WebDriver, NUnit, and ExtentReports to deliver efficient and reliable test automation solutions. Experienced in implementing and maintaining the Page Object Model (POM) and data-driven frameworks to improve test maintainability and reusability. Proficient in programming languages including C# and Java, enabling seamless development and execution of automated tests.
Well-versed in performance testing, API testing, database testing, manual testing, and functional testing, using a range of tools to ensure comprehensive software quality assurance. Proficient in JIRA and Asana for issue tracking, and skilled in developing strategic test plans. Collaborates easily with cross-functional teams.
Automation Test Engineer
Techbit Solutions
Quality Engineer
ElectroTech Engineers
Selenium WebDriver
NUnit
GitHub
Asana
Visual Studio
Postman
JMeter
Sure. I have about six years of experience working with C# automation, Selenium WebDriver, and Java, and I have worked with TDD and hybrid frameworks. My day-to-day responsibilities include automating test cases and maintaining the automation framework, along with some manual testing where I write and execute test cases. In my current organization I work as a test lead, with three testers reporting to me across the four projects we run. As for my background, I completed my engineering degree at Panjab University. I was previously working in Dehradun with The Creations, and I am currently in Mohali with Technet Solutions, where we provide services to Intel. I work client-side: I face the client daily, give demos, and am regularly involved in the sprint retrospective and sprint planning sessions. One more thing I dedicate time to daily is mentoring and training juniors, because I have experience with performance testing, database testing, and API testing using Postman, JMeter, and Microsoft SQL Server and MySQL through SSMS, so I regularly train juniors who work on other projects in the same organization. Right now we are working in C# with Selenium WebDriver, CI/CD, and Azure DevOps; that is a framework that one of my colleagues and I designed. Because it is newly built, we are still adding test cases, and next we will be moving on to the regression test cases.
Okay, so I think the question is: when constructing test scripts for automation, how do I manage dependencies between test cases — do I build independent or dependent test cases? Let me take an example from my application: there is a Panel module, and inside it an Edit Panel page. Even when I have to reach the Edit Panel page, I try to create independent test cases. There are a couple of reasons, the main one being that if any earlier test case fails, all of the following test cases will fail and be falsely marked as failed. That is the main reason you should not create dependent test cases if you want reliable test execution. Execution does get slower that way, but Selenium's parallel execution functionality can help us recover that speed. So I personally prefer, and I advise my teammates as well, to create independent test cases. And how do I manage that? With utilities: one of the major things I focus on in my framework is creating lots of generic, overloaded utilities that help us drive test cases that are easy to understand.
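A minimal sketch of what I mean, assuming hypothetical page URLs and locators: each test gets a fresh driver and navigates from scratch, and fixtures are marked parallelizable to win back speed.

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable] // this fixture can run alongside other fixtures
public class EditPanelTests
{
    private IWebDriver driver;

    [SetUp]
    public void SetUp()
    {
        // Every test starts from a clean driver and a fresh navigation,
        // so no test depends on state left behind by another.
        driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://app.example.com/panels"); // placeholder URL
    }

    [Test]
    public void EditPanel_Opens()
    {
        driver.FindElement(By.Id("edit-panel")).Click(); // hypothetical locator
        Assert.That(driver.Title, Does.Contain("Edit Panel"));
    }

    [TearDown]
    public void TearDown() => driver.Quit();
}
```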
So the question is about a complex scenario-based testing situation and my automation approach to it. I'll share a recent example. We had just moved from cookie-based authentication on the Intel server to SSO authentication — single sign-on with Microsoft Azure AD. Once that was done, all of our test cases were failing because of the new login system. The intricacy was in the flow: when I entered the application URL and the JavaScript loaded, it redirected us to the SSO username login, then to the password login, and then back to the application. At that final stage, after it redirected us to the application page, we had to do a hard refresh to actually get all of the application's functionality going — that is an application issue we were also working on separately. So how do I create a test case for that in C# Selenium? We do the normal login, and then, because of the hard refresh, I created a login utility that does all the steps that are needed; it takes almost 30 seconds to complete, and it applies to all the applications. There was one more complexity: our test cases run on Intel VMs. When I was running locally on a 15.6-inch screen everything was fine, but on the Intel servers all the test cases were failing. We were quite unsure how to sort it out, because you cannot debug it in the VM — we are not allowed access; we can only deploy, and it runs on a different VM. Finally, we used Chrome options to zoom out to about 60%, and then it started working fine. That kept us confused for quite some time. One more complex scenario: there is a modal box that comes up while updating an electrical tool we have, and when the changes are saved a toast message appears at the bottom. We have to verify that toast message, and when we click its close button, the same change is reflected in the background grid after three or four seconds, which we also have to verify — for that we used explicit waits and fluent waits. These are the regular challenges we face with automation and maintenance, because the application is progressing at a very fast pace.
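A rough sketch of those two fixes, assuming the SeleniumExtras.WaitHelpers package for ExpectedConditions and hypothetical selectors for the toast and grid:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;
using SeleniumExtras.WaitHelpers; // assumption: DotNetSeleniumExtras package

var options = new ChromeOptions();
// Approximates the ~60% zoom-out that made layouts match on the Intel VMs.
options.AddArgument("--force-device-scale-factor=0.6");
IWebDriver driver = new ChromeDriver(options);

// Explicit wait: the toast appears after save, and the grid refreshes a few seconds later.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
var toast = wait.Until(ExpectedConditions.ElementIsVisible(By.CssSelector(".toast"))); // hypothetical selector
toast.FindElement(By.CssSelector(".close")).Click();
wait.Until(ExpectedConditions.TextToBePresentInElementLocated(By.Id("grid"), "Saved")); // hypothetical

driver.Quit();
```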
Testing mobile apps' compatibility with different versions of operating systems — yep, sure. With the Android SDK tooling and Appium, we have the option of choosing different operating systems; and with something like Sauce Labs we can always run our tests on cloud devices with different operating systems, different screen sizes, and so on. So what we can do is set the parameters in the JSON capabilities we pass — the operating system versions and screen sizes we want to run on — and then run our tests across all the permutations and combinations we need, with the help of emulators and simulators.
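A rough sketch of parameterizing OS version and device, assuming the Appium .NET client's v4-style AppiumOptions API, a local Appium server, and hypothetical emulator names:

```csharp
using System;
using NUnit.Framework;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Android;

[TestFixture("11.0", "Pixel_4")]
[TestFixture("13.0", "Pixel_6")] // each fixture runs the suite on a different OS/device combination
public class CompatibilityTests
{
    private readonly string platformVersion;
    private readonly string deviceName;
    private AndroidDriver<AndroidElement> driver;

    public CompatibilityTests(string platformVersion, string deviceName)
    {
        this.platformVersion = platformVersion;
        this.deviceName = deviceName;
    }

    [SetUp]
    public void Start()
    {
        var options = new AppiumOptions();
        options.AddAdditionalCapability("platformName", "Android");
        options.AddAdditionalCapability("platformVersion", platformVersion);
        options.AddAdditionalCapability("deviceName", deviceName);
        options.AddAdditionalCapability("app", "/path/to/app.apk"); // placeholder
        driver = new AndroidDriver<AndroidElement>(new Uri("http://127.0.0.1:4723/wd/hub"), options);
    }

    [TearDown]
    public void Stop() => driver?.Quit();
}
```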
Okay, so let's go with a hybrid framework; let me divide it into layers so it's easy to understand. I'll go with the Page Object Model in Appium combined with a data-driven framework. In the first layer — the first folder or package — we keep our page objects. For every page we have one class, and inside that class we have small, focused functions for the functionality, not very long functions. On top of the layers sits the base class, where the driver initialization happens and where the Appium setup and the reporting are configured (not window maximizing, since this is a mobile application). In the second layer we keep the test cases: one class per page containing all of that page's test cases, written with the NUnit framework on top of Appium. In the third layer we have the utilities — all the generic classes we want, such as JavaScript helpers. Next we have the test data, and inside the test data package we keep the JSON files and Excel sheets we need. And in the final layer we have the reporting framework, with ExtentReports, its HTML reporter, and ReportUnit. This is how we divide the framework for the automation architecture. We can also involve database logging with a database log class, with the help of which we can record our results in the database as well. Email notifications and all those things will also live in the utilities.
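A skeleton of that layering with illustrative names — shown with a plain Selenium driver for brevity; in the Appium framework the base class would initialize an AndroidDriver instead:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Base class: driver initialization (and, in the real framework, reporting setup).
public abstract class BaseTest
{
    protected IWebDriver Driver;

    [SetUp]
    public void Init() => Driver = new ChromeDriver();

    [TearDown]
    public void Cleanup() => Driver.Quit();
}

// Layer 1: one page object class per page, with small focused methods.
public class LoginPage
{
    private readonly IWebDriver driver;
    public LoginPage(IWebDriver driver) => this.driver = driver;

    public void Login(string user, string password)
    {
        driver.FindElement(By.Id("username")).SendKeys(user);   // hypothetical locators
        driver.FindElement(By.Id("password")).SendKeys(password);
        driver.FindElement(By.Id("login")).Click();
    }
}

// Layer 2: one test class per page; in practice the inputs come from JSON/Excel test data.
public class LoginTests : BaseTest
{
    [Test]
    public void ValidLogin_ShowsDashboard()
    {
        new LoginPage(Driver).Login("user", "pass"); // data-driven in the real framework
        Assert.That(Driver.Title, Does.Contain("Dashboard"));
    }
}
```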
So this is about ensuring the accuracy of test data when executing test cases, especially when testing complex business scenarios. Okay, that's interesting. The first thing I would do — and this is what I actually do on a regular basis — is contact the BA and the developer, and ask the developer some questions about the unit test cases he has written, because that is the first layer where he has handled things like null exceptions (in C# we use the nullable "?" operator for that). So that is the first level where we check the accuracy of the code and the test data. Second, while writing the test cases — automated or manual — we keep one column for test data, put our test data in there, and run the test based on that data. As a tester, I should have — and I do have — good knowledge of the application, the ticket I am testing, its impact, and why the customers want it, so we have a base for that, and we can also acquire valid test data from the business analyst. And because I am a tester, I have a clear logic for deciding test data: what is a positive test case, what is a negative test case, and what is an extreme case for any functionality or execution scenario. We can also use techniques like boundary value analysis, where we provide the boundary values to test how robustly the functionality has been built. With that kind of teamwork, I can convert a good test case execution into an excellent one while testing complex business scenarios. And testing complex business scenarios is practically a daily task for us, because we built these applications for Intel — we have four applications, three of them with Intel — and they cover electrical engineering scenarios for electrical engineers.
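Boundary value analysis maps directly onto NUnit test cases. A minimal sketch, assuming a hypothetical field that accepts values from 1 to 100; the inline check stands in for the real validation under test:

```csharp
using NUnit.Framework;

public class QuantityFieldTests
{
    [TestCase(0, false)]   // just below the lower boundary
    [TestCase(1, true)]    // lower boundary
    [TestCase(100, true)]  // upper boundary
    [TestCase(101, false)] // just above the upper boundary
    public void Quantity_IsValidated(int value, bool expectedValid)
    {
        bool actual = value >= 1 && value <= 100; // stand-in for the real validation logic
        Assert.That(actual, Is.EqualTo(expectedValid));
    }
}
```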
So the question asks why this JavaScript function does not correctly update the DOM. The function displays a user profile: it does let profileElement = document.getElementById('user-profile'), and if the user is logged in it sets profileElement.innerText to the user's name and age, otherwise to 'Please log in'; it is then called with isLoggedIn set to true and a name. Going through it, I have a doubt: when I do document.getElementById('user-profile'), am I sure the element actually has the ID 'user-profile'? If it doesn't, profileElement will be null and there will be an error. And then profileElement.innerText — I don't think we can store and display it that way. So I think the issue is with the identifier 'user-profile' and with the profileElement.innerText assignment we are using.
The question is to explain the logic error in this Python Selenium code that causes the test to always pass, even when it is supposed to fail. The code does driver.get on the example login page, then in a try block finds the login element by ID, clicks it, and asserts that "Welcome User" is in driver.page_source, with an except block catching Exception as e. Well, if we are saying the test is passing, I will assume the assert is passing — meaning "Welcome User" is in the page source after login. If we have actually logged in, we would expect a personalized greeting — say the user is Divyanshu, then "Welcome, Divyanshu" — but if the literal text "Welcome User" is on the page, this assert will pass. Now, if the test is always passing, let's assume execution goes into the except block: there we also have a print saying the test passed. So even when there is an exception, it makes us think the test passed — the try/except block swallows the failure. That is the issue.
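The same anti-pattern is easy to reproduce in the C#/NUnit stack we use. A minimal sketch (the page-source string is a placeholder): the catch block swallows the assertion failure, so the method returns normally and NUnit reports a pass.

```csharp
using NUnit.Framework;

public class LoginTestBugDemo
{
    [Test]
    public void Login_AlwaysPasses_Buggy()
    {
        try
        {
            // Assert.That throws an AssertionException on failure...
            Assert.That("page source placeholder", Does.Contain("Welcome User"));
        }
        catch (System.Exception)
        {
            // BUG: ...but the exception is swallowed here, so the test can never fail.
        }
    }
}
```

The fix is simply to remove the try/catch (or rethrow), so assertion failures reach the test runner.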
Yep, so this is what I was talking about: our applications change at a very fast pace because of Intel's requirements. We had an existing framework, and we had to overhaul all of it because the application interface was changing — and if the interface is changing, that simply means the locators are changing. So how did we go about it? We took three measures. Number one was writing generic locators using XPath — choosing something we feel will not change in the coming times either. For example, say I have a login button with a class, the name "login", and an ID containing "button". I am pretty sure there will be only one login button on that page, so I will choose the ID, or I will use an XPath with contains() on an attribute like "login button" — an attribute that is not prone to change. That is more sustainable and will hold for a longer period, instead of walking from a parent element to a child and then to another child, which is more relative, longer, and prone to break with even a slight change from a developer. The second measure: we created utilities. If we need to read a table, we create a utility for it and call that utility instead of fetching the table's elements inside the page object class. Because it is a generic class, it will always work, whether the table has 3 columns and 4 rows or 5 columns and 10 rows. The third measure was to design for metadata: we framed the framework in such a way that even when there is a lot of data coming our way, it keeps working. Those are the three steps we took to get there.
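A sketch of the first two measures, with illustrative names: a contains()-based XPath anchored on a stable attribute fragment, and a generic table reader that works for any grid size.

```csharp
using System.Collections.Generic;
using System.Linq;
using OpenQA.Selenium;

public static class ElementUtility
{
    // Measure 1: anchor on a stable attribute fragment instead of a long
    // parent-to-child path that breaks with small DOM changes.
    public static IWebElement LoginButton(IWebDriver driver) =>
        driver.FindElement(By.XPath("//button[contains(@id,'login')]")); // hypothetical attribute

    // Measure 2: one generic reader for any table, so page objects never
    // hard-code row and column counts.
    public static List<List<string>> ReadTable(IWebDriver driver, By tableLocator)
    {
        return driver.FindElement(tableLocator)
            .FindElements(By.TagName("tr"))
            .Select(row => row.FindElements(By.TagName("td"))
                              .Select(cell => cell.Text).ToList())
            .ToList();
    }
}
```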
So, how do I assess the testability of a new feature and adapt my testing strategy accordingly — that's a good question. One of my habits is that whenever I test any feature or bug, or retest anything, I do it in the UI first, make sure the functionality is right, and then go and check it in the SP view or the table as well; I've been doing it for so long that it is second nature now. We have one application for Intel called Flash that has no UI, only APIs. We use Swagger to run those APIs, sometimes Postman as well, and we have to verify the results in the database only, because there is no UI — there is no automation on it yet (we are still building reports), so all of that database testing is manual, and it is quite a big API. So whenever I look at a task, I do two or three things. First, I check the impact of the task: which pages and which modules will the functionality affect — what are the affected areas? Then I check it cross-application as well, because as I mentioned, our applications are dependent on each other. And then I verify it in the DB too — of course, if it's good in the UI it should be good in the DB, but that's how I go about it. I also make sure to have a call with the BA and the developer for about 90% of the tasks we have. The BA has a fair idea of what the customers want, what they are expecting, and what you should keep in mind while getting the task done — because based on one functionality there will be another functionality tomorrow, and then another built on that, so you don't want this to keep breaking; you want it to be very stable. And the developer, of course, has developed the task, so I go to him and ask for an overview of the code along with the unit test cases, just to keep in check what has been covered, what has not, what could be a good test scenario to break the task down, and what is or isn't acceptable test data. This is how I plan my test cases for almost all of the tasks we have.
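For an API-only application like Flash, the UI check drops out and the assertion moves to the database. A minimal sketch of that pattern — endpoint, connection string, and schema are all hypothetical placeholders:

```csharp
using System.Data.SqlClient; // System.Data.SqlClient package
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

public class ApiDbTests
{
    [Test]
    public async Task CreateItem_PersistsToDatabase()
    {
        // Step 1: exercise the API directly (what we otherwise do via Swagger/Postman).
        using var client = new HttpClient();
        var response = await client.PostAsync(
            "https://flash.example.com/api/items", // placeholder URL
            new StringContent("{\"name\":\"Tool-1\"}", Encoding.UTF8, "application/json"));
        Assert.That(response.IsSuccessStatusCode);

        // Step 2: with no UI, the real verification is against the table the API writes to.
        using var conn = new SqlConnection("<connection string>");
        conn.Open();
        using var cmd = new SqlCommand("SELECT COUNT(*) FROM Items WHERE Name = 'Tool-1'", conn);
        Assert.That((int)cmd.ExecuteScalar(), Is.EqualTo(1));
    }
}
```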
Okay, so that's a good question — automating security testing. To be honest, I don't think I have worked on security testing as such. By security, do you mean penetration testing, SQL injection, something like that? What I have done is more like what happens in the banking and financial sector: session expiry, session timeouts, cookie expiry — all those things. How we test the timeout and expiry is through the cookies: say today I copy the cookies, and after a day or two I use Postman and JMeter with those cookies and hit some APIs to check whether the session has expired the way it should. So that is what we do; I have not done penetration testing, no.
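The same cookie-replay check can be automated in code rather than done by hand in Postman or JMeter. A rough sketch, with the cookie value, domain, and endpoint all placeholders: replay a previously saved session cookie and expect an unauthorized response once it has expired.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

public class SessionExpiryTests
{
    [Test]
    public async Task ExpiredCookie_IsRejected()
    {
        var cookies = new CookieContainer();
        // A session cookie captured a day or two earlier (value is a placeholder).
        cookies.Add(new Cookie("session", "<saved-cookie-value>", "/", "app.example.com"));

        using var handler = new HttpClientHandler { CookieContainer = cookies };
        using var client = new HttpClient(handler);

        var response = await client.GetAsync("https://app.example.com/api/profile"); // placeholder endpoint
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.Unauthorized));
    }
}
```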