John asked me to write a post based on my experience with writing automated tests. I’ve been working as a QA Automation Engineer for the past three and a half years, and during this time I’ve had the opportunity to work with Ruby/Watir, C#/WatiN, and Java/Selenium RC. In thinking on a topic, I decided there’s no time like the present to talk about what I’m facing right now, straight from the trench I currently find myself in:
Automation tools that claim to support multiple browsers are LYING
Currently I am in the process of getting 400 automated tests that run in Firefox to also run in Internet Explorer and Chrome. The tests use Selenium RC, a tool that claims to work on all three browser platforms, yet in actuality many tests are failing, and not for good reasons (i.e., actual bugs in the application).
Here’s the rundown on what these overly-ambitious tools aren’t telling you:
- XPath in Browser A isn’t always supported in Browser B
- The DOM in Browser A may vary from the DOM in Browser B
- Browser B may have built-in security settings that interfere with testing
- Bugs in the automation tool itself may cause issues in specific browsers
When it comes to XPath, I fully expected that any given expression was either valid or invalid. But it took no more than one test run in Internet Explorer to suddenly find myself with tests claiming certain elements didn’t exist. While Firefox accepted the provided expression and successfully returned my element, IE claimed the expression itself was invalid. In this particular instance I discovered an issue with Selenium stripping off the end of my XPath expression, thus making it invalid.
But ultimately, whether it’s the browser or the tool doesn’t matter, because the same expression does not work unequivocally in both browsers.
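You can see the same failure mode outside the browser: a standards-based XPath engine rejects the truncated form of an expression outright. Here’s a minimal sketch using the JDK’s built-in javax.xml.xpath; the expressions themselves are hypothetical, not the ones from my suite:

```java
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

public class XpathCheck {
    // Returns true if the XPath engine accepts the expression as syntactically valid.
    static boolean isValidXpath(String expression) {
        try {
            XPathFactory.newInstance().newXPath().compile(expression);
            return true;
        } catch (XPathExpressionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A complete expression compiles...
        System.out.println(isValidXpath("//div[@id='login']/input[2]"));
        // ...but the same expression with its tail stripped off does not.
        System.out.println(isValidXpath("//div[@id='login']/input["));
    }
}
```

If a tool mangles the expression before handing it to one browser’s engine, this is exactly the kind of “invalid expression” error you’ll see, even though the expression you wrote was fine.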
The DOM Issue
The next problem I came across was DOM differences between browsers. In this particular case I had an element that occurred once in Firefox but twice in IE. My test, expecting one instance of the element, failed. While I can’t tell you why this discrepancy exists (it’s currently under investigation by some client-side engineers), I have a fair degree of confidence in saying this won’t be the last time I come across this issue.
In the quest to support many different browsers, it only follows that one is going to come across conditional browser logic, especially if a company is striving to support multiple browser versions (IE 6, 7, 8, 9). If a developer occasionally has to write browser-specific conditional statements, it makes sense that we’d have to do the same to test specific browsers.
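Here’s a minimal sketch of what that browser-conditional logic can look like in a test helper. The browser strings follow Selenium RC’s launcher naming convention, but the element and the locators are hypothetical stand-ins for the real case:

```java
public class Locators {
    // Hypothetical example: suppose a "save" button appears once in Firefox's
    // DOM but twice in IE's, so the IE run has to index the copy it wants.
    static String saveButtonLocator(String browser) {
        if (browser.startsWith("*iexplore")) {
            // IE renders a second copy of the element; pick one explicitly.
            return "xpath=(//input[@name='save'])[2]";
        }
        return "xpath=//input[@name='save']";
    }

    public static void main(String[] args) {
        System.out.println(saveButtonLocator("*firefox"));
        System.out.println(saveButtonLocator("*iexplore"));
    }
}
```

It isn’t pretty, and every one of these branches is extra maintenance, but it mirrors what developers already do in application code.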
The Security Issue
If it seems that I’m picking on IE, it’s only because I can’t get very far running our tests in Chrome. Chrome has built-in security settings that cause a Selenium exception if your test changes domains mid-run. This can be solved by passing a parameter to Chrome that disables the security feature when the test runs; the trick is figuring out when, where, and how to pass that parameter.
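One way to pass such a parameter, assuming you use Selenium RC’s `*custom` launcher (which takes an explicit executable path followed by command-line arguments), is to build the browser string yourself. The Chrome path below is a placeholder; `--disable-web-security` is the Chrome flag that relaxes the cross-domain restriction:

```java
public class ChromeLauncher {
    // Builds a Selenium RC browser-launcher string. The "*custom" launcher
    // lets you supply an executable path plus extra flags, which is one way
    // to get a command-line switch like --disable-web-security into Chrome.
    static String chromeLauncher(String chromePath) {
        return "*custom " + chromePath + " --disable-web-security";
    }

    public static void main(String[] args) {
        String launcher = chromeLauncher("/usr/bin/google-chrome");
        System.out.println(launcher);
        // The string then replaces "*googlechrome" when the client is created:
        // new DefaultSelenium("localhost", 4444, launcher, "http://example.com/");
    }
}
```

Obviously you’d only want a flag like that in a test environment, never in anything resembling production browsing.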
The Bug in the Tool Issue
Finally, the last obstacle I’ve come across in multi-browser testing: the tool itself. Any tool, regardless of how fabulous it is, is going to have bugs. Selenium RC has an issue where its select method (responsible for selecting a value in a dropdown menu) doesn’t trigger the affiliated event handlers in Internet Explorer. To work around this, one can manually call the event handler using Selenium’s runScript(), but you lose some degree of testing confidence in doing so (how do you know the event really fires in response to a particular user action if you’re manually forcing it in automation?).
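Selenium RC also exposes a fireEvent() method that can serve the same purpose as a runScript() call here. Below is a sketch of what an IE-only workaround might look like; the Selenium interface is a minimal stand-in with just the two methods used (so the sketch runs without a Selenium server), and the locator and label are hypothetical:

```java
// Minimal stand-in for com.thoughtworks.selenium.Selenium so this sketch is
// self-contained; a real suite would use the actual Selenium RC client.
interface Selenium {
    void select(String locator, String optionLocator);
    void fireEvent(String locator, String eventName);
}

public class SelectWorkaround {
    // Records calls so the sketch can be exercised without a browser.
    static class RecordingSelenium implements Selenium {
        final java.util.List<String> calls = new java.util.ArrayList<>();
        public void select(String locator, String optionLocator) {
            calls.add("select:" + locator);
        }
        public void fireEvent(String locator, String eventName) {
            calls.add("fireEvent:" + eventName);
        }
    }

    // Select a dropdown value; in IE, manually fire the change handler that
    // select() fails to trigger there. Forcing the event costs confidence:
    // we no longer observe it firing in response to the user action itself.
    static void selectAndFire(Selenium selenium, String locator,
                              String optionLabel, boolean isIE) {
        selenium.select(locator, "label=" + optionLabel);
        if (isIE) {
            selenium.fireEvent(locator, "change");
        }
    }

    public static void main(String[] args) {
        RecordingSelenium s = new RecordingSelenium();
        selectAndFire(s, "id=country", "Canada", true);
        System.out.println(s.calls);
    }
}
```

Note that the workaround itself is browser-conditional, which circles right back to the DOM issue above: supporting multiple browsers means branching for them.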
Another issue exists around multiple browser windows: if you need more than one open for your test, Selenium is unable to select the second window you open, yet another outstanding IE issue that has not been resolved.
Summary
In all fairness, some of these issues have more to do with the browsers themselves than with the automation tool. However, that makes it even more important to realize that when an automation tool says it supports multiple browsers, it’s not without some creativity, troubleshooting, legwork, and maintenance.
Knowing these kinds of issues exist is important in deciding if, and how much, browser-compatibility testing you want to automate. It’s definitely something to realize before choosing a tool just because it claims to support multiple browsers.
Just as there is no silver bullet for writing a web application that works in every browser imaginable, there is no silver bullet for automation either.