Rest-Assured API Testing Automation Interview Questions

  1. What is the difference between an API and a web service?
  2. What is the difference between SOAP and REST APIs?
  3. Can you write a sample API (URL) and JSON?
  4. How do you handle an authentication token?
  5. How many types of authentication are there in Postman / Rest-Assured?
  6. What is the difference between OAuth 1.0 and OAuth 2.0? When, where, and how do you use them? Can you write sample code?
  7. What is baseURI in Rest-Assured?
  8. Can you explain RequestSpecification request = RestAssured.given();?
  9. What will be the return type of response.jsonPath().getJsonObject("XYZ")?
  10. How do you extract values from JSON, and how do you validate a response?
  11. Can you write code to save the response in a JSON file?
  12. How do you validate the headers of a response?
  13. What is the difference between the Headers and Header classes?
  14. What is the difference between the response.header("xyz") and response.headers() methods?
  15. Can you extract all the headers from a response at run time?
  16. What are JSONObject(), request.header("xyz"), response.path("label"), response.body().asString(), response.getBody().prettyPrint(), and RestAssured.given().queryParam("xyz","abc")?
  17. What is the difference between request.get("https://dev-mode.com/api/allcustomers") and request.request(Method.GET, "/allcustomers")?
  18. What is the difference between PUT and PATCH? Have you ever used them, and where?
  19. What are the status codes (2xx, 3xx, 4xx, 5xx) in an API?
  20. How do you print your response in JSON format?
  21. How do you post a body in a POST request, and how many ways are there to post it?
  22. What are the dependencies for Rest-Assured?

– Questions shared by Mr. Hari.
Happy Testing

Finally an online Software Testing Conference with great speakers, great agenda and decent price. #testcon2020

Testcon2020 Virtual Summit on 25th & 26th September.

Register and learn https://bit.ly/Testcon2020

[Must check Speakers and Agenda, even if you don’t want to register. You will love the topics].

Register Now & Get Flat 20% Discount on tickets. Use code [STT20]

  • India: INR 1200/-
  • Other Countries: USD 16

TestCon 2020 is the Software Testing Conference that brings together hundreds of Test Professionals, who seek to improve their skills to fit new market requirements and stay tuned with the latest trends!

Check out the exciting Speaker Lineup and Agenda here: https://bit.ly/Testcon2020

Understanding Parallel Testing in Test Automation (Part 1)

Parallel testing is an automated testing approach in which developers and testers launch multiple tests against different real-device combinations and browser configurations simultaneously. The goal of parallel testing is to ease the time constraint by distributing tests across available resources.

For example, if 20 test cases take a total of 100 minutes to complete, then 10 parallel executions could run 2 test cases each and bring the total testing time down to 10 minutes. Ideally, if you have sufficient resources, say 20 real mobile devices and real browsers for simultaneous execution of all 20 test cases, then you'll be able to shrink the runtime further, to 5 minutes.
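The arithmetic above can be sketched as a small helper. This is a back-of-the-envelope estimate only, assuming every test takes the same average time and tests are evenly distributed:

```java
public class ParallelRuntime {
    // Estimated wall-clock minutes when tests are evenly distributed
    // across workers and each test takes avgMinutes.
    static int estimateMinutes(int testCount, int workers, int avgMinutes) {
        int testsPerWorker = (int) Math.ceil((double) testCount / workers);
        return testsPerWorker * avgMinutes;
    }

    public static void main(String[] args) {
        System.out.println(estimateMinutes(20, 1, 5));   // sequential: 100
        System.out.println(estimateMinutes(20, 10, 5));  // 10 workers: 10
        System.out.println(estimateMinutes(20, 20, 5));  // 20 workers: 5
    }
}
```

In practice the speedup is rarely this clean, since tests vary in duration and workers carry setup overhead, but the ceiling-division model captures the basic idea.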


Benefits of Parallel Testing:

Speed: Sequential testing is time-consuming, while parallel testing lets you divide the invested time by the number of environments. To test your application against ten devices, all you need to do is write ONE script and run it against all your target devices, cutting your testing time tenfold.

Cost-Efficiency: Building, maintaining, and keeping your own test environment up to date can burn a hole in your pocket. With parallel testing, maintenance isn't a headache anymore: you lease the testing environment you need, always up to date. Plus, cloud-based testing grids allow you to run tests at high concurrency, making the cost per test significantly lower.

Better Coverage: It’s always a good idea to run your application through as many platform-device-browser combinations as possible so that no bug sneaks in. Parallel testing will take your test coverage to the next level, giving you a significant ROI boost.

Optimization of Your CI/CD Processes: Parallel testing is the best friend of continuous integration and delivery. By testing in parallel, you can run tests as soon as developers submit new code updates throughout the entire SDLC. Timely reporting and quick feedback in parallel testing will also facilitate better communication between various departments.

Improvement of Testing Practices: Parallel testing improves the QA routine in your company. The reason is crystal clear: by testing at high speed, you can test more. This gives your QA team a chance to improve their testing practices and pinpoint bugs faster.
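As a concrete illustration of the "one script, many environments" idea above, here is what a parallel suite definition might look like in TestNG. TestNG is just one example framework (the source does not prescribe a tool), and the class and parameter names below are placeholders:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="CrossBrowserSuite" parallel="tests" thread-count="3">
  <test name="Chrome">
    <parameter name="browser" value="chrome"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
  <test name="Firefox">
    <parameter name="browser" value="firefox"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
  <test name="Edge">
    <parameter name="browser" value="edge"/>
    <classes><class name="com.example.LoginTest"/></classes>
  </test>
</suite>
```

The same LoginTest class runs three times, once per browser parameter, with parallel="tests" letting the three <test> blocks execute concurrently.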

Read Part 2: Tips for Successful Parallel Testing

Source

Tips for Successful Parallel Testing (Part 2)

Read Part 1: What is Parallel Testing in Automation?

Here are some tips for successful parallel testing:

Tip 1: Test in the Cloud-Based Environment

Implementing a parallel testing strategy using in-house resources is one of the most typical mistakes. Building and maintaining your own infrastructure is not efficient: it is time- and cost-consuming, and you won't be able to make the most of parallel testing, where the ability to test at high concurrency is a major advantage. Keeping your testing environment up to date also requires a lot of resources. To this end, it's a good idea to turn to cloud-based services that give you access to the necessary devices at any time.

Tip 2: Avoid Dependencies Between Tests

Dependencies between test cases are a primary reason why transitioning to parallel testing is so challenging for many teams. Simply put, when test cases depend on each other, you must run them in a particular order, which can destroy any parallel testing strategy. So it is critical to create your test cases to be atomic and independent of each other. Only then will you be able to run them at any time and in any order, making your testing processes free of constraints.
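A minimal sketch of what "atomic" means in practice: each test builds its own fixture instead of relying on a previous test having run first. The names and stand-in helpers below are illustrative, not from the source:

```java
import java.util.UUID;

public class AtomicTests {
    // Each test creates its own user, so the tests can run
    // in any order, or in parallel, without interfering.
    static boolean testUserRename() {
        String user = createUser();          // own fixture, not shared state
        return rename(user, "renamed").equals("renamed");
    }

    static boolean testUserDelete() {
        String user = createUser();          // independent fixture again
        return delete(user);
    }

    // Stand-ins for real API or UI calls.
    static String createUser() { return "user-" + UUID.randomUUID(); }
    static String rename(String user, String newName) { return newName; }
    static boolean delete(String user) { return true; }

    public static void main(String[] args) {
        // Order does not matter: delete-then-rename works just as well.
        System.out.println(testUserDelete() && testUserRename()); // true
    }
}
```

An order-dependent version would have testUserDelete reuse the user created by testUserRename, which forces sequential execution; creating the fixture inside each test removes that constraint.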

Tip 3: Avoid Hard-Coding

Hard-coding is embedding data directly into the source code instead of generating it at runtime. This practice is an enemy of efficient parallelization, since it creates the dependencies between test cases mentioned above. It is strongly recommended to avoid hard-coding values when scripting your test cases, to ensure that each of your tests is self-sufficient and can be run whenever necessary. Use a data-driven approach to configure your test cases when they run in parallel.
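To make the data-driven idea concrete, here is a sketch where the same check runs over a table of data rows instead of one hard-coded username/password pair. The values and the checkLogin stand-in are placeholders:

```java
import java.util.List;
import java.util.Map;

public class DataDrivenLogin {
    // The data lives in a table, not inside the test body; in a real
    // framework this would come from a DataProvider, CSV, or database.
    static final List<Map<String, String>> ROWS = List.of(
        Map.of("user", "alice", "password", "secret1"),
        Map.of("user", "bob",   "password", "secret2")
    );

    // Stand-in for a real login call against the application.
    static boolean checkLogin(String user, String password) {
        return !user.isEmpty() && password.length() >= 6;
    }

    // The single test logic is applied to every row.
    static long passingRows() {
        return ROWS.stream()
                   .filter(r -> checkLogin(r.get("user"), r.get("password")))
                   .count();
    }
}
```

Because no row depends on another, any subset of rows can run on any worker, which is exactly what parallel execution needs.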

Tip 4: Manage Test Data Efficiently

Efficient test data management is the key to successful parallel test execution. But first, you need a strategy. There are two components you might want to include: a creational strategy (which creates the test data a test needs) and a cleanup strategy (which cleans it up afterwards). Still, the only thing that matters is that your strategy is adapted to your particular case. The following ones are quite basic, and perhaps that's what makes these data management strategies so effective:

  • the elementary approach, which has no creational or cleanup strategy;
  • the refresh-your-data approach, which resets your test data between test executions but has no creational strategy;
  • the selfish data generation approach, which has a creational strategy but no cleanup one.

These are the most basic strategies. You can mix and match them to serve your own case or explore alternatives, such as generating other data or refreshing specific data.
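A sketch of the two components in code, with an in-memory set standing in for a real datastore (the names and structure are assumptions, not from the source):

```java
import java.util.HashSet;
import java.util.Set;

public class TestDataLifecycle {
    // Stand-in for a real datastore shared by tests.
    static final Set<String> store = new HashSet<>();

    // Creational strategy: make the data this test needs.
    static String create(String id) {
        store.add(id);
        return id;
    }

    // Cleanup strategy: remove exactly what the test created.
    static void cleanup(Set<String> created) {
        store.removeAll(created);
    }

    static void runTest() {
        Set<String> created = new HashSet<>();
        created.add(create("order-1"));
        created.add(create("order-2"));
        try {
            // ... assertions against the created data would go here ...
        } finally {
            cleanup(created); // leaves the store as the test found it
        }
    }
}
```

Pairing creation with a finally-based cleanup keeps each test from leaking data that would collide with a parallel neighbor.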

Tip 5: Build Tests to be Run in Parallel

You don't want to end up with a bunch of tests that can't be run in parallel right before the release date. Planning correctly from the get-go will save you from nasty surprises when a deadline is due. Creating your test cases with parallelization in mind should be your way to go. And believe us, it sounds harder than it actually is: test cases created for running in parallel are simpler, shorter, and faster to build.

Source

[ebook] Mobile App Testing Strategy Combining Virtual and Real Devices

Mobile Apps Require Rigorous Testing & Smart Strategies

But with such a fragmented landscape and so many device/OS combinations to test, as well as specific device capabilities, how can a team make sure they test effectively?

The way to ensure this is through a well-defined testing strategy in each step of the app development lifecycle. By combining real devices with virtual ones, teams can create a risk-based testing strategy to mitigate the risk of escaped defects.

Download this eBook to learn:

  • The main differences between virtual iOS/Android simulators and emulators and real devices.
    • OS version differences.
    • Platform capabilities and feature support.
    • Varying environment conditions.
    • Using each platform within the right SDLC phase.
  • The differences between iOS and Android virtual platforms.
  • How to build a testing strategy with both real and virtual platforms.
  • How to get started with Appium on real and virtual devices.

Click Here to Download this eBook.

Job Alert: Test Engineer (Automation) – Google – Bengaluru, India

Minimum qualifications:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 6 years of software development and testing experience.
  • Experience with the following coding languages: C, C++, Java or Python.
  • Experience with test methodologies, writing test plans, creating test cases and debugging.

Preferred qualifications:

  • Master’s Degree in Computer Science.
  • Experience with the following coding languages: JavaScript or Shell.
  • Strong development management or testing management experience with a proven track record in scaling highly technical teams.

Responsibilities:

  • Develop test strategies.
  • Automate tests using test frameworks.
  • Take responsibility for monitoring product development and usage at all levels with an eye toward improving product quality.
  • Create test harnesses and infrastructure.

Click here to Apply.

Meeting CI/CD Requirements: Key Factors in Test Automation We Must Consider [Video]

Automated tests are a key component of CI (continuous integration) pipelines. They provide confidence that, as new check-ins are added, the build still works as expected. In some cases, the automated tests have the additional role of gating deployments upon failure.

With such a critical responsibility, it’s important that automated tests are developed to meet the needs of continuous integration.
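A deployment gate of this kind boils down to failing the pipeline step when any test fails; here is a minimal sketch (the result type and test names are hypothetical, and a real pipeline would read results from the test runner):

```java
import java.util.List;

public class DeployGate {
    record Result(String name, boolean passed) {}

    // Gate the deployment: allow it only if every automated test passed.
    static boolean allowDeploy(List<Result> results) {
        return results.stream().allMatch(Result::passed);
    }

    public static void main(String[] args) {
        List<Result> run = List.of(
            new Result("login", true),
            new Result("checkout", false));
        // In a real pipeline, a non-zero exit code from this step
        // is what blocks the deployment stage.
        System.out.println(allowDeploy(run) ? "deploy" : "block deployment");
    }
}
```

CI systems generally implement the same logic implicitly: a failing test makes the test step exit non-zero, and downstream deployment stages are configured to run only on success.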

In this video, Angie Jones (Applitools) and Jessica Deen (Microsoft) discussed key factors that should be considered when writing tests that will run as part of CI.

Angie and Jessica also showed a live demo of integrating frontend tests to your pipeline within an Azure DevOps build — so we can see all those theories come to life.

Job – Test Automation (~10 years exp) (Delhi / NCR)

Job Opening: Immediate Joiner required:
Looking for a test automation expert/architect having ~10 years of experience on multiple automation frameworks.

Location: Delhi/NCR.
Candidate should be from Delhi/NCR only.

Remote working available.
Interested candidates can drop an email (with contact details / resume) at leadershipintech@gmail.com

Testing Cycle in Agile Process

There are typically 3 types of automated tests that are run in a CI/CD pipeline:

1. Sprint Level Tests

  • These are new automated tests written to test the functionality of the sprint. They should be contained in a separate test suite that runs once a day to ensure the newly implemented functionality works as expected.
  • They are later merged into the regression test suite.
  • In the acceptance testing phase, teams usually have a high-level acceptance test plan that ensures the critical functionality of the system still works as expected after the new features have been merged into the main code branch. The smoke and regression tests are also run again in parallel.

2. High Level Smoke Tests

  • These high-level automated tests run on every code check-in to ensure the critical functionality of the system is still working as expected. This could be a mixture of UI, API, and unit tests. The point of these tests is to get quick feedback about the system, and they usually finish running within 5–10 minutes.

3. Daily Regression Tests

  • These tests are run to ensure the new code added to the system did not break existing functionality. They are more detailed than smoke tests, as they cover end-to-end flows through the system. They are usually run at least daily, and probably multiple times before a release.
  • Throughout this process, teams may continue to do manual scripted testing, exploratory testing and/or risk-based testing depending on their level of continuous testing maturity, application complexity and risk tolerance.
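The three suites above differ mainly in what triggers them; that mapping can be sketched as a tiny lookup (the enum and suite names are illustrative, not from the source):

```java
public class SuiteSchedule {
    enum Trigger { EVERY_CHECK_IN, DAILY }

    // Rough mapping of suite type to trigger, per the cycle described above.
    static Trigger triggerFor(String suite) {
        switch (suite) {
            case "smoke":      return Trigger.EVERY_CHECK_IN; // quick 5-10 min feedback
            case "sprint":     return Trigger.DAILY;          // new-feature tests
            case "regression": return Trigger.DAILY;          // end-to-end coverage
            default: throw new IllegalArgumentException("unknown suite: " + suite);
        }
    }
}
```

In a CI system this mapping usually lives in the pipeline configuration: the smoke suite is wired to the push trigger, while sprint and regression suites run on a schedule.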

Source: Getting started with Automation (testIM E-Book). Click here to download.