The Ultimate Guide to Automation Testing: 74 Resources, Tools & Best Practices for Test Automation
Automation testing takes a manual test case and applies a tool or programming language to automate its execution. As more teams shift left, there is a need for tests to run earlier and faster in the development lifecycle.
Traditionally, during regression testing, a manual tester will take an existing test case procedure and execute it step by step. This can be time-consuming and also error-prone, since it is done by hand.
Because of the reasons above, to save time, many companies try to take their manual test cases and convert them to an automated test case. An automated test tool then executes the test steps automatically without human intervention.
Sounds easy, but there are many pitfalls teams encounter when starting their test automation journey. In this tutorial I will address the most common automation issues folks face and help you start your automation project off right.
This is a long guide, so here's an overview of what we're going to cover:
- What Is Automation Testing?
- Issues With Manual Testing
- Why Is Automation Testing Necessary?
- Automation Testing Considerations
- Automation Testing Pitfalls
- Automation Is a Team Effort
- Tests That Should Be Automated
- ROI: The Cost of Test Automation
- What Shouldn’t Be Automated?
- What Is an Automation Testing Framework?
- Automation Testing Design Patterns
- Test Automation Process
- Test Automation Best Practices
- How To Pick an Automation Testing Tool
- Test Automation Metrics
- Automation Testing Frameworks and Tools List
- API Automation Test Tools List
- Run Your Automated Test in the Cloud or on Mobile Devices
- Automation Test Management Tools List
- Automation Testing Courses
- Test Automation Conferences
Let’s start at the beginning by defining automation testing.
What Is Automation Testing?
Functional automation testing uses tools designed specifically for automation to emulate a user interacting with an application and verifying test steps using programming assertions.
Many folks also call these tests automation “checks” or automated checking.
This distinction is made to remind testers that automation is a form of checking; it doesn’t replace your testing strategy. An automated test is also, in a sense, dumb: it can test only what you tell it to test. If you don’t assert it, it doesn’t get checked.
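The “if you don’t assert it, it doesn’t get checked” point is easy to demonstrate. In this hypothetical Python sketch (the discount function is invented for illustration), the first check exercises a buggy function but never asserts anything, so it passes silently; only the second check actually catches the defect.

```python
def apply_discount(price, percent):
    # Buggy implementation: adds the discount instead of subtracting it.
    return price + price * (percent / 100)

def check_without_assertion():
    # Exercises the code but verifies nothing -- it "passes" no matter what.
    apply_discount(100, 10)
    return "passed"

def check_with_assertion():
    # The defect is only found because we explicitly assert the expectation.
    result = apply_discount(100, 10)
    assert result == 90, f"expected 90, got {result}"

print(check_without_assertion())   # passes despite the bug
try:
    check_with_assertion()
except AssertionError as err:
    print(f"caught: {err}")        # only the asserting check finds the defect
```

An automated check is only as thorough as the assertions you write into it.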
Also, it’s important to remember that “automation” does not apply to just user interface (UI) end-to-end tests. In fact, I would say that you get more benefit from a lower-level automated test, like a unit test, than you do from a big bloated test suite of end-to-end tests.
Before we take a more in-depth look at automated testing, let’s touch on some problems with manual testing.
Issues With Manual Testing
There are many reasons why having a testing strategy that relies heavily on just manual testing causes issues. Here are a few:
- It uses a lot of resources like time and testers.
- It’s time-consuming.
- It sometimes lacks proper coverage.
- Due to its repetitiveness, testers may get bored and miss steps when executing manually, leading to possible inconsistencies.
- If you plan on moving toward a continuous integration/continuous deployment model, too many manual tests will slow your teams down.
Automation testing can help.
But one of the first hurdles you might face when introducing automated testing to your organization is the false belief that automation can replace all your testers and manual tests.
Here’s the deal:
Automation Testing Does Not Replace Testers.
Some people assume that because a test activity is automated, it replaces human testers. In fact, the opposite is true. Automated tests are great for running checks precisely and quickly, but they in no way replace human testers.
Automation tests are also great for running the same steps over and over again, but they don’t think.
I like to think of automation in the way that Peter Thiel explains it in the “Man and Machine” chapter of his book Zero to One:
“Computers are complements to humans, not substitutes. The most valuable businesses of the coming decades will be built by entrepreneurs who seek to empower people rather than try to make them obsolete.”
Although we can agree that automation testing does not replace other testing activities, with today’s software development environment and continuous integration practices, it is critical and cannot be ignored.
So what are some reasons for using automated tests?
Why Is Automation Testing Necessary?
Practices like continuous integration and delivery require tests that run quickly and reliably. Lots of manual tests will limit your ability to achieve velocity in your software development.
I’d go so far as to say that in today’s modern development environment, we couldn’t succeed without automation.
Although the main reason teams try to create automated tests is to save the company both time and money, it’s also important to give developers quick feedback so that when they check in code, they are alerted as soon as possible that the change they checked in broke something.
Some other reasons for automated testing are:
- Verification of newer versions of software
- Frees up testers to focus on more exploratory-type testing
- Automated tests are more repeatable
- Data population
- Accurate benchmarking
- Fewer false failures due to human error
- Greater test coverage
- Quicker release of software
- Provides fast feedback to your developers on failing checked-in code
- Saves time
- Ability to leverage programming capabilities
While these are good reasons for automation, many folks fail to factor in the amount of time and money it takes to maintain sizeable automated test suites.
So are there any other downsides to creating automated tests? What’s the real story?
Automation Testing Considerations
Since automated tests usually rely on programming languages for their creation, automation becomes a full-blown development effort. What you are doing is developing a piece of software to test another piece of software.
Automation testing is difficult and complicated, just like most other development software projects. It also presents many of the same issues other software programs do. Treating your automated code just like your development code is essential. Follow the same processes and best practices you would use for any other software development project.
To learn more, John Sonmez covers many of these best practices in his awesome Pluralsight course Creating an Automated Testing Framework With Selenium.
Automation Testing Pitfalls
Teams often claim that automation testing “doesn’t work.” But this attitude is usually caused by poorly designed test automation more than anything else.
If you keep the issues listed below in mind as you create your test automation framework, you can avoid many of these automation pitfalls ahead of time.
Many issues are caused by setting unrealistic goals, such as aiming for 100% UI automated testing coverage. Teams also often believe that automated tests will find lots of new defects, which gives them a false sense of security. Your automation is only as good as your tests.
Teams also underestimate the amount of time it takes to maintain automation. They’ll often create large, end-to-end tests, but tests should be atomic so that when they fail, you know why.
Several other common issues that teams face are:
- Focusing on UI automation only
- Not having a controlled test environment
- Ignoring failing tests
- Not having a test data strategy in place
- Not reusing automation code
- Developers not making their code automatable
- Not using proper synchronization in your tests
- Not making your automated tests readable
- Creating automated tests that add no value
- Hard coding test data
Automation Is a Team Effort
For automation to be successful in an organization, you need to educate everyone on the team as to what the expectation should be for your testing. It’s also critical that you create a whole team approach to your automation efforts, meaning regardless of a person’s role on the team, development and testing (and ultimately, delivering a feature) requires a total team effort. Testing shouldn’t be an activity that is done only at the end of a sprint by a designated tester.
Quality needs to be baked into the software from the beginning, not after the fact. The only way to do this is to have everyone working toward making the application under development as testable as possible. This takes the whole team working together to deliver a quality product.
Many failures I see with automation are not caused by technical issues, but rather by a company’s cultural issues.
Once you have the whole team on board with automation, and your manager’s expectations have been correctly set, it’s time to write your automated tests.
Remember—automation is a time-consuming test activity. You want to use it only when it makes sense. Other testing activities, like exploratory-type work, should be encouraged.
Automation is just one of many types of test activities that can be used by testers.
At this point, another common question I’m frequently asked is, “Which tests should be automated?”
Tests That Should Be Automated
The biggest problem I usually see is that teams start off trying to automate everything. But not everything is automatable. When planning which test cases to automate, you should look for tests that are deterministic, don’t need human interaction, are hard to test manually, and need to run more than once.
You should also seek to automate any manual process that will save engineers time (not necessarily an official “testing” process), along with tests that focus on the money or risk areas of your application.
Other tests that are useful to automate are unit tests, as well as tests that run against different data sets, focus on critical paths of your application, need to run against multiple builds and browsers, and are used for load/stress testing.
The more repetitive the execution is, the better candidate a test is for automation testing. However, every situation is different.
Ultimately, you should consider using automation for any activity that saves your team time. It doesn’t have to be a pure testing activity; you can leverage automation to help reduce any time-consuming activity found anywhere in the software development lifecycle.
At this point, some of you may be asking, “What is the return on investment (ROI) of test automation?”
ROI: The Cost of Test Automation
Determining the ROI of your automation testing efforts can be tricky. Here is a common calculation some folks use to get a rough estimate of their test automation costs. This can also help you decide whether a test case is even worth automating as opposed to testing it manually.
Automation Cost = how much the tools cost + how much the labor costs to create an automated test + how much it costs to maintain the automated tests
Consequently, if your automation cost calculation is lower than the manual execution cost of the test, it’s an indicator that automation is a good choice.
Moreover, ROI quickly adds up with each re-run of your automated test suite.
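The formula above can be turned into a quick back-of-the-envelope script. The dollar figures below are made up purely for illustration; plug in your own tool, labor, and maintenance numbers.

```python
def automation_cost(tool_cost, creation_cost, maintenance_cost_per_run, runs):
    # Automation Cost = tools + labor to create + labor to maintain.
    return tool_cost + creation_cost + maintenance_cost_per_run * runs

def manual_cost(cost_per_manual_run, runs):
    # Manual cost grows linearly with every execution of the test case.
    return cost_per_manual_run * runs

# Hypothetical numbers: $500 tool license share, $400 to script the test,
# $5 upkeep per run, vs. $50 of tester time per manual execution.
for runs in (5, 20, 100):
    auto = automation_cost(500, 400, 5, runs)
    manual = manual_cost(50, runs)
    verdict = "automate" if auto < manual else "stay manual"
    print(f"{runs:>3} runs: automation ${auto} vs manual ${manual} -> {verdict}")
```

Notice how the verdict flips as the run count grows: the up-front automation cost is amortized over every re-run, which is exactly why frequently executed regression tests are the best automation candidates.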
Because it’s critical that you get a good return on your test automation investment, there are some things you shouldn’t automate.
What Shouldn’t Be Automated?
There are exceptions to everything, of course, but in general, you may not want to automate the following test case scenarios:
- One-time tests
- Ad hoc-based testing
- Tests that don’t have predictable results
- Usability testing
- Applications not developed to be testable
In addition to what not to automate, another element of a successful automation project is having an automation framework.
What Is an Automation Testing Framework?
An automation framework is a common set of tools, guidelines, and principles for your tests. Having a framework helps to minimize test script maintenance.
I like to break down an automation testing framework into specific areas of concern, or what I call the “four Ps” of an automation framework: people, planning, process, and performance.
The first one is the people aspect of a test automation framework.
As we already covered earlier, you want to make sure that you have set the expectations of your managers and team about your automation strategy. To help ensure that your automation is collaborative and a whole team effort, I recommend that you include automation on your sprint team’s definition of done.
The next stage is the planning piece of your automation framework.
Before writing one line of code, always check to see if there is an existing library or tool you can use before inventing your own. Break your automation framework into abstraction layers so that if anything changes, you just need to make the change in one place. Using established automation testing design patterns like the ones we cover later in this post should be part of the planning stage.
Separating your tests from your framework will also help when you have to make changes to your framework.
Also, plan on making your methods and utilities reusable to avoid code duplication. Making your test and code as readable as possible (they should read like English) will go a long way to prevent confusion and code duplication.
Finally, when planning your test, it's important to be aware of what test data your tests need. Many times, tests run against different environments that might not have the data you expect, so make sure to have a test data management strategy in place. Including support for mocking and stubbing in your framework can also help with some test data issues.
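One lightweight way to address the test data concern is to generate unique data per test run rather than hard-coding values that may not exist in every environment. A minimal sketch; the field names here are invented for illustration:

```python
import uuid

def make_test_user(**overrides):
    """Build a fresh, unique test user so tests never collide on shared data."""
    unique = uuid.uuid4().hex[:8]
    user = {
        "username": f"testuser_{unique}",
        "email": f"testuser_{unique}@example.com",
        "role": "standard",
    }
    user.update(overrides)  # a test overrides only the fields it cares about
    return user

admin = make_test_user(role="admin")
print(admin["username"], admin["role"])
```

Because every test builds its own data, tests don’t fight over shared records, and the same suite can run against any environment without pre-seeded fixtures.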
Having a process in place that holds team members accountable for automation is another vital piece of a framework. Since automation is just like any other development project, make sure to use the same process and best practices that developers already follow, like using version control and performing code reviews on all automated tests.
Always start your automation framework with the end in mind. Tests need to be reliable and maintainable. They also should run as fast as possible.
Along these lines, make sure that your developers are creating unique IDs for each element that you will have to interact with within your tests. Doing this will help you avoid resorting to lousy automation practices like relying on a coordinate-based way to communicate with an element.
Another top killer of test automation script performance is the failure to use proper synchronization/wait points in your tests. Too many hard-coded waits will slow down your test suite. Use the preferred explicit wait method for synchronization.
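The difference between a hard-coded wait and an explicit wait can be sketched without a browser. The helper below polls a condition and returns as soon as it holds, which is the idea behind Selenium’s WebDriverWait; the timings are illustrative only.

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mimics an explicit wait: the test proceeds the moment the app is
    ready, instead of always sleeping for the worst case the way a
    hard-coded time.sleep(5) would.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulate an element that "appears" 0.3 seconds into the test.
appear_at = time.monotonic() + 0.3
element_visible = lambda: time.monotonic() >= appear_at

start = time.monotonic()
wait_until(element_visible, timeout=2.0)
print(f"proceeded after {time.monotonic() - start:.1f}s, not a fixed 2.0s")
```

Multiply that saved worst-case sleep across hundreds of tests and the suite-level speedup from explicit waits becomes substantial.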
As you write your test scripts, think about how they would perform if you had to run them in parallel. Thinking about possible parallel issues beforehand will avoid problems when you start to scale your test suite runs against a grid or a cloud-based service like Sauce Labs.
To keep your tests as performant as possible, refactor slow or poorly written code whenever you can. Including reporting and logging in your framework will help you quickly identify poorly running code.
To make sure that teams follow all these guidelines, determine a strategy for training and retraining your framework users.
That’s not all …
Automation Testing Design Patterns
It’s a given that your applications are going to change over time. And since you know change is going to happen, you should start off right from the beginning using best practices or design patterns. Doing so will make your automation more repeatable and maintainable.
Here are some common automation testing design patterns that many teams use to help them create more reliable test automation.
One popular strategy to use when creating your test automation is to model the behavior on your application. Creating simple page objects that model the pieces of your software that you are testing against can do this.
So, for example, you would write a page object for login or a page object for a homepage. Following this approach correctly makes use of the single responsibility principle.
If anything changes—say, an element ID—you just need to go to one place to make the change and all of the tests that use the page object will automatically pick up the changes without you doing anything else. The test code needs to be updated in only one place.
Page objects also hide the technical details about HTML fields and CSS classes behind methods that have easy-to-understand names. Being mindful when naming your methods has the extra benefit of helping to create a nice readable test application programming interface (API) that a less technical automation engineer can quickly start using for automation.
Page objects are a good place to start making your test maintainable, but if you’re not careful, they can grow out of control over time. The Screenplay pattern takes page objects and chops them down into really tiny pieces. Some testers tell me that this has made their tests much more maintainable and reliable.
Another significant benefit is that it makes test scripts more readable.
Ports and Adapters
The ports and adapters design strives to make sure you are using the single responsibility principle so that an object should do only one thing and have one reason to change.
When you apply this to test automation, make sure to decouple your test code to allow you to swap slow components with fast simulators so that you can run your test and the app you’re testing in the same process.
Remove all networking and input/output so nothing is slowing down the test suite. Of course, this is not easy to do, but the more you try to do this when creating UI automation, the better off you will be.
Presenter First
Presenter First is a modification of the model-view-controller (MVC) way of organizing code and development behaviors to create completely tested software using a test-driven development (TDD) approach.
If you draw out the MVC pattern as blocks and arrows, you can see that the view, which is your UI, has well-defined channels of communication with the model and the controller. If you can replace those at runtime with models and controllers that your test creates and controls, then there is no reason why you can’t just test that the UI behaves the way you want.
You can also set your model and controller to mimic all sorts of odd behaviors, like a network going down.
Test Automation Process
I like to follow a six-step cyclical process when planning my test automation efforts, consisting of the following phases: Analyze, Write, Execute, Evaluate, Communicate, and Repeat/Refactor.
- Analyze—understand your functional testing objectives. Understand what test data is needed. What needs to be tested? Know what needs to be verified. If you are part of an agile team, an excellent place to start would be in your definition of ready meeting. This will allow you to look at your upcoming sprint and see if all aspects of your automation have been taken into consideration before you start developing new functionality to test.
- Write—turn the requirements into an automated solution. Know what the start and end conditions are for each test. Tests should be completely independent of other tests. Add proper assertion checks to ensure your application is behaving according to your specifications. Each test should have a particular purpose.
- Execute—your tests should be reliable. Run each test at least three times in a row before checking in code. If you plan on using a continuous integration tool like Jenkins, start by getting your first test to execute in the environment you plan on using.
- Evaluate—verify that the automated script is doing what you expect it to do. Have manual testers prove that it is working as expected. Remember—if it’s not asserted, it’s not checked. Is the test reliable?
- Communicate—be sure that everyone on your team is aware of the results. Flaky tests should be fixed ASAP, or you’ll risk your team ignoring your test results.
- Repeat/refactor—if you notice a flaky test, refactor it to make it more reliable. Most importantly, delete any tests that are not reliable and haven’t been fixed within a given time frame. When looking at your automated regression test, ask the team if it is still needed or if it’s adding value. Pruning old tests will save you time in maintenance in the long run and ensure you are only running tests that give your team value.
Test Automation Best Practices
Here are some high-level automation best practices you should follow:
Atomic testing is a strategy for ensuring that each test is entirely self-contained. That is, a test should not depend on the outcome of other tests to establish its state, and another test should not affect its success or failure in any way.
Also, when an automated test fails, you need to know why. Having a well-named atomic test that tests only one thing will help you quickly identify what broke if your test fails.
Furthermore, you should endeavor to get feedback to your developers as quickly as possible, and the best way to do that is with a fast, well-named test.
This is also critical if you plan on running your automation test in parallel in a Selenium Grid.
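An atomic test sets up its own state rather than inheriting it, so it can run alone, in any order, or in parallel. A pytest-style sketch; the inventory class is a toy system invented for the example:

```python
class Inventory:
    """Toy system under test, invented for this example."""
    def __init__(self):
        self._items = {}

    def add(self, sku, qty):
        self._items[sku] = self._items.get(sku, 0) + qty

    def count(self, sku):
        return self._items.get(sku, 0)

def test_add_increases_count():
    # Atomic: builds its own Inventory rather than relying on a previous
    # test (or shared global state) to have created one.
    inventory = Inventory()
    inventory.add("SKU-1", 3)
    assert inventory.count("SKU-1") == 3

def test_missing_sku_counts_zero():
    # Independent of the test above: execution order cannot matter.
    inventory = Inventory()
    assert inventory.count("SKU-404") == 0

# The tests pass in any order, alone, or in parallel.
test_missing_sku_counts_zero()
test_add_increases_count()
print("both atomic tests passed")
```

Each test also checks exactly one behavior with a descriptive name, so a failure points straight at what broke.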
Test size matters because tests need to run quickly.
At this point, many people visualize a traditional test pyramid, which has unit tests as its base, integration tests in the middle, and graphical user interface (GUI) tests at the top.
But I think more in terms of test size. By test size, I’m referring to tests that are faster than others.
While I understand the need to run UI tests, if you have to create one, make it as fast as possible.
A quick point on test code readability—did you know that developers spend more time reading code than actually writing it?
It is rare that the person who wrote code will also be the one that needs to modify it. Even worse, how many times have you written code only to come back to it months later and have no idea what it is doing?
Since, as we mentioned, automation code is software development, you should create your test code with the reader of the code in mind—not the computer.
This will help not only to make your test more maintainable, but also will help ensure that you do not duplicate code because you didn’t know what an existing piece of code was doing.
This might seem like a minor issue, but ignore readability of your automation test at your peril.
The importance of code readability really hit home for me after watching Cory House’s Pluralsight course called Clean Code: Writing Code for Humans.
Testability needs to be baked into our applications right from the start. As a regular part of sprint planning, developers should be thinking about how they can make their application code more testable. They can do this by providing things like unique element IDs for their application fields and APIs to help create hooks into their application(s) that can be used in their automated tests.
They should also be thinking about how any code changes they make to the application are going to impact existing automated tests, and plan accordingly.
If you don’t do this, you’re not going to be successful with automation for very long.
Remember, you can’t automate that which is not testable.
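The value of unique element IDs can be sketched with a toy DOM lookup; the tree structure and field names below are invented for illustration. A positional locator breaks when the layout changes, but a lookup by unique ID survives the redesign:

```python
# A toy "DOM": each element is (tag, id, children).
def find_by_id(node, element_id):
    """Depth-first search for an element by its unique ID."""
    tag, node_id, children = node
    if node_id == element_id:
        return node
    for child in children:
        found = find_by_id(child, element_id)
        if found:
            return found
    return None

v1 = ("form", None, [("input", "email", [])])
# A redesign wraps the field in a div: a positional locator such as
# "form > input[1]" would now break, but the unique ID still resolves.
v2 = ("form", None, [("div", None, [("input", "email", [])])])

print(find_by_id(v1, "email") is not None)  # True
print(find_by_id(v2, "email") is not None)  # still True after the redesign
```

This is why asking developers for stable, unique IDs up front pays off: locators keyed to IDs keep working while coordinate- or position-based locators rot with every UI change.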
This one is a really common hindrance for many teams with their automation efforts.
Without a stable test environment that is always in a known state, it will be tough for your teams to make progress with their automation efforts.
Tests failing due to environmental issues rather than actual application issues will cause your teams to lose confidence in your test feedback quickly.
Once teams start ignoring automation results, your test efforts become useless.
How To Pick an Automation Testing Tool
There is no “correct” test tool for automation testing. Ultimately, it all depends on your team’s unique needs and skill set.
I always recommend that you run a two-week proof of concept (POC) for each tool that you are considering and include your team’s feedback in the process before committing to a tool.
The first place to start is to look at the product roadmap and make sure the tools you select will handle future features and technologies. Don’t skip this step; it will save you from future compatibility issues and tedious framework refactoring.
Next, you should evaluate the cost, including maintenance. If you plan on having your whole team help out with the automation effort, make sure to use a tool that leverages the same tools and languages your developers use.
Don’t just assume a tool will work for you. Create a small POC for each tool and get team feedback before committing to anything. Ask the team: Is the tool extensible? How easy is it to use and get started? Does it provide reporting and debugging capabilities? Does it recognize all the objects in your application? Can it integrate with other tools like version control, test management tools, and continuous integration tools?
Most importantly, find out if the tool has an active user base and select tools that other companies are using. You don’t want to select an open-source solution that is not actively maintained by the community. You may need to ask how much training it will take to get your teams up to speed with the tool. Finally, determine how easy it is to hire folks who have the skills needed to create your automated tests.
Test Automation Metrics
Coming up with metrics that teams can use to make sure they are on track is tough. These should be used just as a quick way to monitor your team’s progress and shouldn’t be used as hard and fast rules.
Mean Time to Diagnosis (MTD)—How long does it take you to debug a failing automated test? A high MTD is an indicator that your automated test code is not high-quality.
Bugs Found by Automation—This can be helpful to determine how much value your automation efforts are bringing.
Flaky Rate—Ideally, this should be zero, but this is a good indicator to know if your automated tests are reliable or not.
Automated to Manual Ratio—The more manual tests you have, the longer it will take to tell if your application is ready for release. This helps you keep a pulse on how long your release efforts will take. It also enables you to gain visibility into whether teams are automating (or not automating) the right things.
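Two of these metrics can be computed directly from your build history. A sketch, with the run-history format and numbers invented for illustration; a test is counted as flaky here if its outcome flip-flopped across builds with no code change:

```python
def flaky_rate(runs):
    """Share of tests whose pass/fail outcome flip-flopped across builds.

    `runs` maps test name -> list of "pass"/"fail" outcomes per build.
    """
    flaky = sum(1 for outcomes in runs.values() if len(set(outcomes)) > 1)
    return flaky / len(runs)

def automated_to_manual_ratio(automated, manual):
    return automated / manual

history = {
    "login_test": ["pass", "pass", "pass"],
    "search_test": ["pass", "fail", "pass"],    # flaky
    "checkout_test": ["fail", "fail", "fail"],  # consistently failing, not flaky
}
print(f"flaky rate: {flaky_rate(history):.0%}")
print(f"auto:manual = {automated_to_manual_ratio(120, 40):.1f}")
```

Note the distinction in the sample data: a consistently failing test is a real signal to investigate, while a flip-flopping one erodes trust in the whole suite.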
Automation Testing Frameworks and Tools
This is not an exhaustive list, but rather a quick summary of some of the more popular test tools that I’m aware of.
- Selenium—This has arguably become the de facto standard for browser-based test automation. Please remember, though: Selenium automates browsers only, so you cannot use it for non-browser applications.
- Appium—Appium is automation for apps. Appium seems to be the winner in the mobile testing space so far.
- Watir—This is an open-source Ruby library for automating tests. Watir interacts with a browser the same way people do: clicking links, filling out forms, and validating text.
- WinAppDriver—Windows Application Driver is a service to support UI Test Automation of Windows Applications.
- White Framework—White is a framework for automating rich client applications based on Win32, WinForms, WPF, Silverlight, and SWT (Java) platforms. It’s .NET based and doesn’t require the use of any proprietary scripting languages. In fact, you can write White test automation programs with whatever .NET language, integrated development environment, and tools you are already using. White also provides a consistent, object-oriented API, hiding the complexity of Microsoft’s UI Automation library (upon which White is based) and Windows messages.
- AutoIt—AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. Many teams integrate AutoIt with Selenium to work around non-browser windows that appear in an automated test workflow.
- Serenity—This is one of my favorite automation frameworks around. Serenity is a great open-source tool because it acts like a wrapper over Selenium and behavior-driven development (BDD) tools like jBehave and Cucumber-JVM. That means that there’s a lot of built-in functionality available to you in Serenity that takes care of many things you would normally have to code from scratch if you had to create your own BDD framework. What Serenity is really awesome at is creating unbelievable reports. Out-of-the-box Serenity creates living documentation that can be used not only to view your Selenium BDD test results, but also as documentation for your application.
- Sahi—The first thing you need to know is that Sahi comes in two flavors: open-source and a pro version. Sahi Pro is the enterprise version of the open-source project. It includes lots of features coveted by larger organizations like pro style reporting.
- Robot Framework—If you want to use Python for your test automation efforts, you can’t go wrong using the Robot Framework. The Robot Framework is a mature solution that was created for testers and uses a keyword-driven approach to make tests readable and easy to create. It also has many test libraries and other tools you can use for editing, running, and building your tests.
- RedwoodHQ—This takes a little bit of a different approach from the other tools on this list. It creates a website interface that allows multiple testers to work together and run their tests from one web-accessible location.
- Galen Framework—If your automation efforts are focused on user experience design/layout testing, Galen might be a perfect fit for your needs.
- Cypress—Cypress is a more developer-centric test automation tool. It is aimed more toward making TDD a reality with developers.
Vendor-based Automation Test tools
- Applitools—Applitools integrates with both vendor and open-source solutions. Where most of the tools listed here are for functional test automation, Applitools helps you with visual validation testing from a user’s perspective.
- Unified Functional Testing (UFT) Pro (LeanFT)—Essentially combines the best of both the vendor-based and open-source worlds by morphing Selenium with some key functionality currently found in UFT (formerly QuickTest Professional [QTP]).
- Microsoft Coded UI—Uses Selenium to help test Chrome and Firefox browsers. But unlike Selenium, which is only for web-based testing, Coded UI is unique in that it allows you to automate a bunch of different technologies and is not limited to browsers.
- IBM Rational Functional Tester—Like most companies, IBM’s test portfolio has grown with the acquisition of tools like Rational and Green Hat. It appears that much of the strength of its functional test tools comes from its support of numerous technologies including Windows, Mac, and mobile platforms.
- Tricentis—Self-billed as “the continuous testing company,” a claim in line with independent tool reviews; Gartner, for example, finds that one of its strengths is its extensive effort to support Agile testing and continuous improvement processes.
- Worksoft—Worksoft is well known for its enterprise resource planning business end-to-end solutions.
- Testplant—One of the few test automation tools listed that has strong support for Apple’s platform. In fact, because of its unique, image-based recognition approach, it has the ability to test hard-to-automate applications—especially those with object recognition issues. Unfortunately, anyone who has done image-based, functional test automation knows how difficult these types of tests can be to maintain, and some customers have noted that as an issue.
- Ranorex—Supports a ton of technologies across all kinds of platforms—all from one tool. Noteworthy, however, is that it lacks a full, end-to-end solution and focuses mainly on functional test automation.
- Progress—For those of you who may not be familiar with this company, Progress recently acquired Telerik, which is the home of the popular free debugging tool Fiddler. Also, I know a few test engineers who actually use Progress’s Test Studio as a front end for their Selenium test automation efforts. Strengths of Progress are its integration with Visual Studio and its supported languages.
- Automation Anywhere—Differentiates itself by being the only robotic process automation platform focused on “bots on demand.” Unlike some of the other tools in this list, which can test a large set of technologies, Automation Anywhere does not support testing packaged applications like SAP or native mobile apps.
- Testim—Leverages machine learning to speed up the authoring, execution, and, most importantly, the maintenance of automated tests. Its goal is to help you start trusting your tests.
API Automation Test Tools
Open-source API tools
- Rest-Assured—Rest-Assured is an open-source Java domain-specific language that makes testing REST services simple. It eliminates the need for boilerplate code to test and validate complex responses, and it supports both XML and JSON requests/responses.
- RestSharp—This is a simple REST and HTTP API Client for .NET.
- Postman—Postman is a REST client that started off as a Chrome browser plugin, but recently came out with native versions for both Mac and Windows.
- SoapUI—This is the world-leading open-source functional testing tool for API testing. It supports multiple protocols such as SOAP, REST, HTTP, JMS, and AMF.
- Fiddler—Fiddler is a tool that allows you to monitor, manipulate, and reuse HTTP requests. Fiddler does many things that allow you to debug website issues, and with one of its many extensions, you can accomplish even more. Check out my article on how to get started with Fiddler.
- Karate—Since Karate is built on top of Cucumber-JVM, you can run tests and generate reports like any standard Java project. But instead of Java, you write tests in a language designed to make dealing with HTTP, JSON, or XML simple.
- Citrus Framework—Use this tool to create automated integration tests for message protocols and data formats for technologies like HTTP, REST, JMS, TCP/IP, SOAP, FTP, SSH, XML, JSON, and more.
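Whichever of the tools above you pick, every API test boils down to the same three steps: send a request, check the status code, and assert on the response body. Here is a minimal, tool-agnostic sketch of that pattern using only Python’s standard library; the in-process stub service and its `/users/42` payload are hypothetical stand-ins, since no real endpoint is assumed:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical in-process stub standing in for the real API under test.
class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 42, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request console logging during the test run.
        pass

def run_api_test():
    # Start the stub server on an ephemeral port in a background thread.
    server = HTTPServer(("127.0.0.1", 0), UserHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # Step 1: issue the request.
        with urlopen(f"http://127.0.0.1:{server.server_port}/users/42") as resp:
            # Step 2: assert on the status code.
            assert resp.status == 200
            # Step 3: parse the payload and assert on its contents.
            payload = json.loads(resp.read())
            assert payload["name"] == "Ada"
    finally:
        server.shutdown()
    return payload

if __name__ == "__main__":
    print(run_api_test())
```

What the dedicated tools add on top of this bare pattern is exactly the boilerplate removal mentioned above: fluent assertion syntax, schema validation, protocol support beyond HTTP, and readable reports.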
Vendor API Tools
- SoapUI Pro—Since the free version is open source, you can gain access to the full source code and modify it as needed. The Pro version is more user-friendly and has additional functionality, including a form editor, an assertion wizard for XPath, and a SQL query builder.
- UFT API—In previous releases, HP had separate products for functional testing. QTP was used for testing GUI applications, and Service Test was for testing non-GUI technologies. HP’s latest test tool release—UFT—combines both products and features a front end that merges the separate tools into one common UI.
Test Execution Report Tools
- Allure—This is an open-source framework designed to create test execution reports that are clear to everyone on the team.
Run Your Automated Test in the Cloud or on Mobile Devices
Here are some vendors that can save you a ton of time by running your tests in the cloud across multiple operating systems, devices, and configurations. They also get rid of the headache of maintaining your own in-house lab/grid.
Automation Test Management Tools
- Zephyr—Manage all aspects of software quality—integrate with JIRA and various test tools, foster collaboration, and gain real-time visibility.
- QASymphony—Offers qTest, a platform of software testing and QA tools built for Agile teams.
Automation Testing Courses
Finally, here’s a list of my favorite automation testing courses:
- Automated Web Testing with Selenium—John Sonmez
- Creating an Automated Testing Framework With Selenium—John Sonmez
- Quick Guide to API Testing with HP’s Unified Functional Testing—Joe Colantonio
- Test Automation with CodedUI—Marcel de Vries
- Selenium 2 WebDriver Basics with Java—Alan Richardson
- Complete Selenium Webdriver with C# – Build a Framework—Nikolay Advolodkin
- Robot Framework—Bryan Lamb
- The Java Selenium Guidebook—Dave Haeffner
Test Automation Conferences
- Automation Guild—Automation Guild is an annual, 100% online conference and community dedicated to helping you perfect the craft of test automation and accelerate your automation career.
- SeleniumConf—SeleniumConf brings together Selenium developers and enthusiasts from around the world to share ideas, socialize, and work together on advancing the present and future success of the project.
- STPCon—The Software Test Professionals Conference is the leading event where test leadership, management, and strategy converge.
- SauceCon—Brings together the global community of Sauce Labs users and automated testing experts.