Protecting investment: Tests that do not break with page changes


  1. "One of the critical failings of automated web application testing comes when existing (previously recorded) test cases break due to changes with web infrastructures and back-end changes. Organizations invest significant time and money in building test infrastructures and recording automated tests, to find that far too often tests break when changes are made to minor content or page structure." Read more at: http://www.jadeliquid.com/blogs/2009/protecting-investment-tests-that-do-not-break-with-page-changes.php

    Threaded Messages (8)

  2. Why not use Selenium?

    Is there any reason why I should choose LiquidTest over Selenium? Selenium uses the same mechanism for locating page elements, supports a wider range of browsers and, most importantly, is free and has a large user community.
  3. Good question, Filip. I'd like to understand the differences myself. One big advantage for LiquidTest is support for recording tests in IE; Selenium IDE is Firefox-only. I don't see anything on the LiquidTest Web site for load and performance testing, so it looks like LiquidTest is a functional test tool. (PushToTest TestMaker repurposes Selenium tests as load and performance tests and business service monitors.) In addition to Selenium, I recommend you consider Windmill and TestGen4Web. Both are competent, well-supported open-source record/playback-oriented tools that provide a good object framework for building tests of Web and Ajax applications. -Frank
  4. Filip, interesting that you ask about Selenium, as many of our clients are ex-Selenium users. We have probably not done a good enough job of explaining the major architectural differences between Selenium and LiquidTest. Some LiquidTest/Selenium differences:

    - LiquidTest runs natively with the browser. Selenium runs in-browser, which means it is limited by JavaScript security restrictions (no file uploads, file downloads, etc.) and hit by cross-site scripting issues.
    - Native browser dialogs (Remember Password prompts, modal Internet Explorer dialogs, etc.) are unsupported in Selenium.
    - LiquidTest has an Eclipse plugin and therefore integrates right into the development environment.
    - LiquidTest runs languages natively; it does not need to translate to an intermediate layer.
    - LiquidTest is server-scalable and able to run parallel browser instances without any corruption.
    - The Selenium recording engine is not robust. Why not write scripts by hand? Because it is a complicated and time-consuming task.
    - LiquidTest is real-browser-event/Expectation based; Selenium is sleep based.
    - The deep native hooks that LiquidTest has into the underlying browsers mean that LiquidTest receives much more information internally than Selenium, and can make more informed decisions about the cause of DOM mutations. This is an important factor that gives LiquidTest much more scope for recording succinct test cases that do not make a "best guess".
    - Selenium has no great reporting capability. LiquidTest integrates with JUnit etc., and therefore works without effort with all Continuous Integration servers.
    - LiquidTest supports headless test-case replay.
    - Selenium lacks support for recording in Internet Explorer.
    - The way that Selenium hooks events on nodes is prone to fail with "new age" JavaScript frameworks (which run their own listeners).
    - LiquidTest is a commercially supported product. You find an issue or a bug and we are here to help!

    You point to the additional browser support. How does this work in reality? On complex pages and complex tests, there is no way that tests will run robustly across multiple browsers past simple pages. Think of the IE vs. Firefox DOM: they are very different. Any use of reasonable XPath locators will break the test across browsers (see the locator sketch after this post). What about JavaScript frameworks that have dynamic element IDs? What about frameworks that use different JavaScript and layout per browser to get a uniform look and feel? How does a Firefox-recorded Selenium test with a reasonably complex structure, including XPath locators, run in Internet Explorer and Safari? Food for thought. Selenium does a great job up to a point. Past that point, we are finding commercial organizations coming to us for LiquidTest.
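    To make the cross-browser locator point concrete: a recorder that captures position-based XPath ties the test to one browser's exact DOM. A minimal sketch using the Selenium RC Java client (the server address, URL, and locators are illustrative, not from any of the tools above):

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;

        public class LocatorExample {
            public static void main(String[] args) {
                // Assumes a Selenium RC server running on localhost:4444.
                Selenium selenium = new DefaultSelenium(
                        "localhost", 4444, "*iexplore", "http://example.com/");
                selenium.start();
                selenium.open("/orders");

                // Recorded, structure-dependent locator: any per-browser
                // difference in the rendered DOM (or a minor layout change)
                // breaks it.
                //   selenium.click("xpath=//table[2]/tbody/tr[5]/td[3]/a");

                // More resilient: target a stable attribute the application
                // itself controls.
                selenium.click("id=order-details-link");
                selenium.waitForPageToLoad("30000");
                selenium.stop();
            }
        }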
  5. Ajax and events?

    Hi Anthon: Good post, lots of useful answers. "LiquidTest is real browser event-Expectation based. Selenium sleep based." For Ajax applications the Selenium IDE recorder misses most Ajax events. Its ClickAndWait, for example, waits for a full page to load. In Ajax applications this normally doesn't work. So I find myself adding WaitForElementPresent and a bunch of other techniques to synchronize the test with the application. How does LiquidTest solve this? -Frank
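    The hand-rolled synchronization Frank describes usually ends up as a polling helper around isElementPresent. A minimal sketch with the Selenium RC Java client (the locator names and timeout are illustrative):

        import com.thoughtworks.selenium.Selenium;

        public class AjaxWait {
            // Poll for an element that Ajax inserts, instead of waiting for
            // a page load that never happens.
            public static void waitForElementPresent(Selenium selenium,
                    String locator, long timeoutMillis) throws InterruptedException {
                long deadline = System.currentTimeMillis() + timeoutMillis;
                while (!selenium.isElementPresent(locator)) {
                    if (System.currentTimeMillis() > deadline) {
                        throw new AssertionError("Timed out waiting for " + locator);
                    }
                    Thread.sleep(250); // short poll, rather than one long fixed sleep
                }
            }
        }

    A test then follows an Ajax-triggering click such as selenium.click("id=refreshQuotes") with waitForElementPresent(selenium, "id=quoteTable", 30000) instead of a fixed sleep.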
  6. HTMLUnit working on this too

    BTW, at the TSS Symposium last month the HTMLUnit team said they were working on a solution for HTMLUnit commands to wait until Ajax calls finish processing. They said that since they are the browser, there are ways to understand the "idle" nature of the Ajax application. -Frank
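    HtmlUnit's NicelyResynchronizingAjaxController is the relevant hook here: it turns asynchronous XMLHttpRequests started by the test's own interactions into synchronous calls, so getPage() returns only after they complete. A minimal sketch (the URL is illustrative):

        import com.gargoylesoftware.htmlunit.NicelyResynchronizingAjaxController;
        import com.gargoylesoftware.htmlunit.WebClient;
        import com.gargoylesoftware.htmlunit.html.HtmlPage;

        public class HtmlUnitAjaxExample {
            public static void main(String[] args) throws Exception {
                WebClient webClient = new WebClient();
                // Resynchronize Ajax calls issued by the test's interactions
                // with the page, so the page is "idle" when getPage() returns.
                webClient.setAjaxController(new NicelyResynchronizingAjaxController());
                HtmlPage page = webClient.getPage("http://example.com/ajax-app");
                System.out.println(page.getTitleText());
                webClient.closeAllWindows();
            }
        }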
  7. Re: Ajax events

    No problem, Frank. A lot of the details that make LiquidTest a "special" framework have been lost in translation, so I am more than happy to clarify where necessary. Given the deep hooks into the browser, we get a lot more useful information from the browsers than one receives using JavaScript injection (Selenium's approach). We use this information to track Ajax and other dynamic content at record time. This means that the scripts LiquidTest outputs (records) contain the Expectations for Ajax and other dynamic content. For more information on the Expectation engine and Ajax with Google Finance, see: http://www.jadeliquid.com/blogs/2009/testing-complex-ajax-content.php

    Determining what makes a "significant event" is no trivial task. You only want a test script to include relevant actions and events, not test cases littered with hundreds of mouse moves that have no relevance. This is where the complexity comes in with LiquidTest. Working out what is important (say, a mouse move that drags Ajax-based content) from what is not is a very difficult task, and one where all the internal information from the browser is required. LiquidTest is not perfect (remember, it is 1.0), but we are working hard to get the recording as spot-on as possible.
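    As a generic illustration of that record-time filtering problem (the names here are invented for the sketch, not LiquidTest's internals): a recorder can keep a mouse move only when the browser attributed a DOM mutation to it.

        import java.util.ArrayList;
        import java.util.List;

        public class EventFilter {
            // A captured browser event, plus whether the browser reported a
            // DOM mutation caused by it.
            record RecordedEvent(String type, String target, boolean causedDomMutation) {}

            // Keep clicks, keystrokes, etc. unconditionally; keep mouse moves
            // only when they actually changed the page.
            static List<RecordedEvent> significantEvents(List<RecordedEvent> raw) {
                List<RecordedEvent> kept = new ArrayList<>();
                for (RecordedEvent e : raw) {
                    boolean irrelevantMouseMove =
                            "mousemove".equals(e.type()) && !e.causedDomMutation();
                    if (!irrelevantMouseMove) {
                        kept.add(e);
                    }
                }
                return kept;
            }
        }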
  8. We have a team of experienced test automation specialists and novice software test engineers. We ran trials of several automated tools in the testing space over a three-month period to try to find the right one for the task. We settled on LiquidTest for our web application projects for several reasons, the main ones being financial:

    Total cost of ownership. TCO is what I need to contain. Time to create or code scripts, extra hardware, training, and script maintenance due to application design changes or breaking scripts all increase the cost of owning tool X. I did not buy any new hardware, training took less than a day and we could do it ourselves, the scripts broke far less often after adjustments to the application's UI, and creating scripts was only around 10% slower than performing the test manually. We decided that LiquidTest had the lowest TCO of the tools we assessed.

    Return on investment. The CFO's all-important ROI. LiquidTest licences were very cheap compared to the likes of Silk and QTP. The open-source tools that we assessed, including Selenium, Watij, Canoo and MaxQ, all required more coding of scripts and were more fragile, and therefore required more scripting by way of maintenance; essentially they would cost the company more in the long run. Getting our hands dirty coding the scripts takes time, and time is money.

    The developers and testers now both use it, and the developers get results directly from the CI server. The testers do not see the build until it passes their recorded tests and ours. I am now able to use my junior team members to create the tests with LiquidTest, I have been able to utilise the experienced team members elsewhere, and we are increasing test coverage faster than I had anticipated, which is awesome.
  9. I'm glad that LiquidTest is putting an emphasis on test maintenance. We have customers that started with the Mercury test tools only to find that they had to spend an equal amount of money rewriting all their tests after the application they were testing changed just 15%. Test maintenance can be a big hidden cost.

    Consider what most application development shops are up against. Agile or waterfall, the development cycles are shrinking again. When an application changes 30% every 8-week release cycle, are you prepared to rewrite 30% of the tests?

    The testing space in IT development has come a long way: we've got developers writing tests, and testers writing code now. But there is little provided in the way of an object-oriented approach to building tests. If my application has a sign-in form, then why should I not write a test unit object that handles the sign-in function? When the sign-in function changes in the application, just update the unit test object and all the uses of the sign-in function update automatically. Sadly, most of the record/playback-oriented test tools (including HP QTP) don't do objects, or they do them very poorly compared with what a Java developer expects.

    TestMaker addresses this with a component approach to test development. You create a set of tests using a record/playback tool and write an orchestration of a functional test as a test use case. Here's an example of a use case that signs in to an ecommerce site, puts something into a shopping basket and signs out. This isn't limited to Selenium; it's easy to write a ScriptRunner in TestMaker that can repurpose LiquidTest, QTP, and other test formats. It also works for JUnit TestCases and other object-oriented test frameworks. And once you express the test use case like this, it is easy to repurpose the test as a load test or business service monitor without changing the tests.

    Windmill is intriguing in its approach to writing object-oriented tests. For example, its output uses the Python module hierarchy to support subclassing:

        client = WindmillTestClient(__name__)
        client.click(id=u'viewNavCenterRight')
        client.waits.sleep(milliseconds=2000)
        client.doubleClick(id=u'hourDiv1-1200')
        client.waits.sleep(milliseconds=2000)

    The only weirdness for Java developers is Windmill's use of Python or JavaScript as a language. But I haven't found that to be a big deal. Of course, I love Jython too. -Frank
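    The sign-in "test unit object" Frank describes is essentially the page object pattern. A minimal sketch in Java with the Selenium RC client and JUnit (class names, locators, and the URL are illustrative, not TestMaker's or Windmill's actual output):

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;
        import junit.framework.TestCase;

        public class CheckoutUseCaseTest extends TestCase {

            // The sign-in "test unit object": when the sign-in form changes,
            // only this class needs updating, and every use case that calls
            // it picks up the change automatically.
            static class SignInPage {
                private final Selenium selenium;

                SignInPage(Selenium selenium) {
                    this.selenium = selenium;
                }

                void signIn(String user, String password) {
                    selenium.open("/signin");
                    selenium.type("id=username", user);
                    selenium.type("id=password", password);
                    selenium.click("id=signin-submit");
                    selenium.waitForPageToLoad("30000");
                }
            }

            public void testSignInAddToBasketSignOut() throws Exception {
                Selenium selenium = new DefaultSelenium(
                        "localhost", 4444, "*firefox", "http://shop.example.com/");
                selenium.start();
                try {
                    new SignInPage(selenium).signIn("demo", "secret");
                    // ... add an item to the basket, then sign out, via
                    // further page objects ...
                } finally {
                    selenium.stop();
                }
            }
        }

    Because the use case is just a JUnit TestCase composed of such objects, the same test can be driven from a CI server or repurposed for load testing without touching the page objects themselves.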