Improving Reliability of Tests

I have been working with Eggplant over the past few months, after being put in charge of building a smoke test suite for our application.

Although Eggplant is fabulous and incredibly powerful when used correctly, I have run into some problems that I am at a loss as to how to fix.

The test suites I have built work. Functionally, on a successful run, they work great.

However, roughly 1 run in 10, or sometimes even 1 in 5, fails because some previously captured image is not recognized (usually resulting in my having to replace the picture).

Now, I use the Tolerant option during all my image captures and make sure that each picture can be identified. When something fails, I capture the screen and find to my dismay that the picture that could not be found was present on the screen (I am not talking about pictures that change, i.e., different icons, text, or fonts), yet it could not be found by the running script.

Although Eggplant is supposed to be resolution-independent, I have frozen my SUT to ensure that there are no external factors that could result in images not being found.

I have experimented with the default wait time for ImageFound and with other timing settings, with varying success.
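For example, this is the kind of timing tweak I have been trying (a rough sketch; I believe the ImageSearchTime global property is what controls the default search wait, and the image name here is made up):

[code]
set the ImageSearchTime to 5 -- allow up to 5 seconds for each image search
if ImageFound("SubmitButton") then -- hypothetical image from my suite
	Click "SubmitButton"
end if
[/code]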

Is a 100% success rate impossible to achieve? What strategies do you use with Eggplant to make sure that a script fails due to a genuine bug and not because the test itself didn’t run correctly?

Hello, bharath:

A few comments and suggestions:
[list=1]
[*] If your images contain text (e.g., menu items, labels, window titles), you should use the Text search type, especially if you are running against Windows XP or any version of Mac OS X. The dynamic anti-aliasing used by these systems can cause text to be drawn using very different pixels from one run to the next – the Text search type is designed to accommodate those differences.

[*] You say that the picture was present on the screen. There is a simple test for whether the item on the screen is different, or whether it just appeared too late for Eggplant to find it: highlight the line that failed and click the Run Selection item on the toolbar. If Eggplant now finds the image, then there is a timing problem; if it doesn’t find the image, then the item was drawn differently than it was when you captured it (see the sketch after this list). Another version of this test is to type the following command into the Ad-Hoc Do Box at the bottom of the Run window:

MoveTo "myImage" -- moves the mouse pointer to the image if it can be found

If you haven’t done so already, you should read the article entitled "Failure to Find an Image that Is on the Screen" on pages 18-20 of the Using Eggplant manual.

[*] You probably don’t expect this from a software company, but we’re here to support you, even if you haven’t purchased the software. When you run into problems like this, you should feel free to contact us via phone or e-mail (or through these forums if you prefer) and we are happy to provide you with as much assistance as you need to resolve any issues you are having. In the particular case where you believe Eggplant failed to find an image that existed on the screen, you can send the screenshot and the image being searched for to support@redstonesoftware.com and we will analyze the images to determine if there is a difference and what that difference is. (I can assure you that if Eggplant isn’t finding it, then one or more pixels are more different than the chosen tolerance can compensate for – I have yet to see a case where Eggplant’s search algorithms don’t work as designed.)
[/list]
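To make that timing test concrete, here is a rough sketch of how you might script the check (the image name and screenshot path are made up for illustration):

[code]
if ImageFound(30, "myImage") then -- wait up to 30 seconds for the image
	Log "Found after waiting – a timing problem, not a rendering difference"
else
	CaptureScreen "/tmp/myImage_failure" -- hypothetical path; send this to support
	LogError "Still not found – the item was probably drawn differently"
end if
[/code]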
You can achieve near 100% reliability (certainly better than the 80-90% you are describing) and we are here to help you do it. Please let us know what we can do to assist you.

Regards,
Matt

Matt,

Could you please explain in more detail how the Text search works? Sample code would be greatly appreciated.

Thanks

Best Regards,

“Text” is a search type that you set, like Tolerant or Pulsing. It’s not text recognition, if that’s what you were hoping for. There is no code – you just pick “Text” as the search type, which tells Eggplant to use a search algorithm that takes anti-aliasing into account.

The problem is that I’m trying to make the suite OS-independent, so I cannot use the Text feature (the software could run on any version of Windows). I don’t expect the Windows-related images to render the same, but the application has the same look and feel across all the Windows platforms.

The only reason I don’t e-mail you directly is that I feel what I say may be useful to others. Let me see if I can tweak my scripts a bit more. Maybe I should start converting all my ImageFound calls to ImageFoundNow; that should save me a little more time.
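Something along these lines is what I have in mind (a rough sketch with a made-up image name; as I understand it, ImageFound waits up to the search timeout, while ImageFoundNow checks the current screen once and returns immediately):

[code]
if ImageFoundNow("OKButton") then -- single immediate check, no waiting
	Click "OKButton"
else
	Log "OKButton not on screen yet" -- handle it instead of blocking
end if
[/code]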

I changed the speed of typing text to .1 from .001, and now my scripts have become very reliable (touch wood).
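In case it helps anyone else, this is roughly the change (I believe the relevant global property is the KeyDownDelay, which defaults to 0.001 seconds – treat the property name as my assumption):

[code]
set the KeyDownDelay to 0.1 -- pause 0.1 seconds between key presses
TypeText "some text for the SUT" -- hypothetical text, now typed at the slower rate
[/code]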

I suppose VNC wasn’t able to relay the text every time.

I think it’s less a case of VNC not receiving or relaying the key press events than it is that the system simply wasn’t able to process them as fast as they were being given to it. The default speed is much faster than any human typist is capable of, so it’s not surprising that some systems or applications simply can’t keep up. It’s also possible that the system or application simply discards keypresses that are so fast that they don’t seem “deliberate” – the default speed may be right on the threshold of what the system will accept and thus some of the key presses make it through and others are ignored.
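If only certain fields are sensitive, you don’t have to slow down every script – you can narrow the change to just the fragile input, along these lines (a sketch, again assuming the KeyDownDelay property; the text is made up):

[code]
put the KeyDownDelay into savedDelay -- remember the current rate
set the KeyDownDelay to 0.1 -- type slowly for the fragile field only
TypeText "fragile input"
set the KeyDownDelay to savedDelay -- restore the original rate
[/code]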

Matt