Today I would like to cover the topic of parallel testing in the context of test automation: its potential benefits and possible usage scenarios.
Let's start with test automation as we know it. What can we call its most common benefit? The most obvious one is resource savings. Let's stop for a minute and consider the tasks that are most frequently automated: repetitive inputs, UI navigation, performance measurements, and so on. The next question is which product areas are eligible for automation in the first place. Someone aiming for 100% test automation coverage might claim that every bit of the program's functionality can be automated. Fine by me, but is it reasonable? Is it worth spending hours of work to produce an automated test that will run only once or twice, when the same check can be performed manually in a few minutes? Don't expect a unanimous answer, but in the majority of such cases the resources spent are not worth the results. Another hurdle in implementing automated tests is the maintenance of test cases. This problem mostly affects products that change frequently, either during development or over the product's lifecycle.
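The "is it worth automating?" question above is essentially a break-even calculation. Here is a minimal sketch of that arithmetic; the function name and all the numbers are illustrative assumptions, not figures from any real project:

```python
# Hypothetical break-even estimate for automating a single test case.
# All inputs are illustrative assumptions, not measurements.

def automation_break_even(script_hours, maintenance_hours_per_run,
                          manual_minutes, expected_runs):
    """Return True if automating the test is cheaper than running it manually."""
    manual_cost = (manual_minutes / 60) * expected_runs
    automated_cost = script_hours + maintenance_hours_per_run * expected_runs
    return automated_cost < manual_cost

# A check that takes 5 minutes manually but 8 hours to script
# does not pay off if it will only ever run twice...
print(automation_break_even(8, 0.05, 5, 2))    # False
# ...but a regression check executed hundreds of times might.
print(automation_break_even(8, 0.05, 5, 600))  # True
```

The maintenance term is the one the rest of this article is really about: for a frequently changing UI it grows, and the break-even point may never arrive.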
For instance, take the MS Office suite. The change between versions 2003 and 2007 was considerable: the reworked UI layout affected almost every component (except the Outlook client, which was only "ribboned" in version 2010). Here we observe a change during the product lifecycle, but such modifications can just as easily happen between development milestones of a single version. This is especially true for desktop applications with custom GUIs (e.g. built with HTML rather than native Windows controls). In such cases there is a high probability that newly created test scripts will require an immediate update, and thus even more resources. If time and money are not of the essence, the concerns described above are not critical. But what if they are? What if you need to deliver solid testing results for a fast-evolving application across several configurations? This scenario is quite common in compatibility, internationalization (i18n) and localization (L10n) testing, and it can be a real headache to handle with automation.
So what can help in such a case? What would ease the test case maintenance problem? One possible answer is a multi-threaded manual testing approach. What is multi-threaded testing, and how can parallelism resolve common automation issues? Let's return to the example of a rapidly evolving application that has to be tested against, say, 10 different environment configurations. What would the typical automation approach look like? First, prepare the test scripts (probably converted from manual test cases); second, run the tests on a primary environment and stabilize them; third, deploy and run the scripts on the remaining configurations. Even after the last step, the scripts may still need fine-tuning for each build and environment separately. With the multi-threaded (parallel) approach, there is no need to create, stabilize or fine-tune test scripts, for one simple reason: there are no scripts, since the existing manual test cases are reused as-is. Moreover, parallel test execution does not require resources from the automation team; members of the manual test team can pick up parallel testing tools easily. The automation team's freed-up resources can then be allocated to more challenging tasks.
So how does parallelism look in action? The idea is quite simple (the implementation less so): a specially crafted program monitors user activity on one environment and applies it to the application instances running on additional configurations. Suppose we need to perform a compatibility check of an application against Windows 7 x64, Windows 7 SP1, Windows Vista x64, Windows Vista SP2 and Windows Server 2008 R2 SP1. Normally we would have to either run the tests on each environment manually or take the time to create and stabilize scripts. With parallelism, the tester executes the test on one of the environments while the monitoring program transfers the user's actions to all the remaining configurations. Of course, the tester has to pay attention to each environment, so the time allocated to each test may grow somewhat; but since the check runs against 5 instances simultaneously, even a doubled execution time is more than compensated. A similar scenario applies to internationalization testing, when application functionality has to be verified across several OS locales. Parallel testing across different languages has its peculiarities, but they can be dealt with if you know where to focus.
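The capture-and-replay idea described above can be sketched in a few lines. This is a toy model, not a real tool: the names (`UserAction`, `Environment`, `Broadcaster`) are hypothetical, and a real implementation would capture OS-level input events and ship them to agents over the network rather than call methods in-process:

```python
# Minimal sketch of action mirroring: every action performed on the
# primary environment is replayed on all registered secondary ones.
from dataclasses import dataclass, field

@dataclass
class UserAction:
    kind: str       # e.g. "click", "type"
    target: str     # e.g. a control identifier
    payload: str = ""

@dataclass
class Environment:
    """Stands in for one remote configuration under test."""
    name: str
    log: list = field(default_factory=list)

    def apply(self, action):
        # A real agent would synthesize the input event here.
        self.log.append((action.kind, action.target, action.payload))

class Broadcaster:
    """Mirrors each primary-environment action to every secondary."""
    def __init__(self, environments):
        self.environments = environments

    def perform(self, action):
        for env in self.environments:
            env.apply(action)

envs = [Environment(n) for n in
        ("Win7 x64", "Win7 SP1", "Vista x64", "Vista SP2", "Server 2008 R2")]
bc = Broadcaster(envs)
bc.perform(UserAction("click", "File>Open"))
bc.perform(UserAction("type", "filename_box", "report.docx"))
print([len(e.log) for e in envs])  # every environment received both actions
```

The time arithmetic from the paragraph above falls out directly: one pass drives all 5 environments, so even if watching them slows the tester down by a factor of 2, the session still finishes in roughly 2/5 of the sequential time.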
So is parallelism in testing the ultimate cure for failed automation? Again, I won't promise a unanimous answer. Whether to use it depends on the specifics of the project, but there are several cases where this approach is definitely a life-saver. In fact, there is a lot more to discuss on this matter, so any feedback is appreciated.