Reporting of results

Hello,
I want to create reports from the results that are logged.
Could you please guide me in doing so?

Himani:

Can you be more specific about what sort of report you want to create? What would the level of detail be and what specific information would you want to include? What format do you want the report in?

  • Matt

Hi,
I want the report (preferably in Excel format) to show whether each test case has passed or failed, the reason for any failure, and the total passes and failures.

[quote=“Himani”]Hi,
I want the report (preferably in Excel format) to show whether each test case has passed or failed, the reason for any failure, and the total passes and failures.[/quote]

All this information is readily available, except that it’s sometimes difficult to extract the exact reason for a failure. A reason is always given as the last line in logfile.txt, but if the failure is due to a LogError statement being called earlier in the script, then the reason given will just be that an error was logged.
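
As a quick way to see this for yourself, here is a minimal sketch that pulls the last line of the most recent run's log. It uses the same scriptResults() function and Logfile property as the full script below; the script name "myGreatScript" is just a placeholder:

put the last item of scriptResults("myGreatScript") into lastRun
// the last line of the run's log file holds the reported failure reason
put the last line of file (lastRun.Logfile) into reason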

You won’t create the output as an Excel spreadsheet directly, but Excel can easily read comma-separated values (CSV).

Here’s a script that will generate a file containing the information you asked for:

params scriptname, logFileName

// initialize the success/failure counters so the totals are never empty
set outcomes to {Success:0, Failure:0}
// get the top-level result directory for the script
put items 1 to -3 delimited by "/" of (the last item of scriptResults(scriptname)).Logfile into resultDir
repeat with each line of file (resultDir & "/RunHistory.csv")
	// extract the date/time of the run and the result
	// and write them to the custom result file
	put item 1 of it & "," & item 2 of it after file logFileName
	// increment the success or failure count
	add 1 to outcomes.(item 2 of it)
	if item 2 of it is "Failure" then
		// if the result was a failure, extract the reason from the specific result file
		put item 4 delimited by tab of the last line of file (resultDir & "/" & item 7 of it) into reason
		// remove extraneous information from the reason string
		delete characters offset ("Execution", reason) to -1 of reason
		// write the reason to the custom result file
		put "," & reason after file logFileName
	end if
	put return after file logFileName
end repeat
// write the success/failure totals to the custom result file
put "Failures:" && outcomes.Failure & ", Successes:" && outcomes.Success & return after file logFileName
open logFileName with "TextEdit"
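
One small trick worth pointing out: outcomes.(item 2 of it) uses the run's status text ("Success" or "Failure") as the property key, so the same add statement maintains both counters.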

I’ve also attached this script to this post with the name “Results.script”. When you call this script, you need to provide the name of the script that you want to generate a results file for, and the full path and file name for the result file you want to create:

Results "myGreatScript","~/Desktop/myGreatResults.txt"

Let me know how this works for you.

Regards,
Matt

One point to note, though.

While developing a script, you may run and abort it many times, or simply have failures as you work. Make sure you reset the statistics before you start recording them for real, or, if the script has to change constantly, keep the final version of the script separate from the version under development.

Otherwise those numbers can be extremely misleading.


bharath:

You make a good point about scripts being run during the development phase. I don’t know if anyone in the user community has any special techniques for dealing with this issue. My suggestion would be to append a suffix, such as “_dev”, to the names of scripts that are under development. When you are ready to “go live” with a script, do a Save As to make a copy of it (do not simply rename the script, because the results folder will also be renamed).

I should also note that script runs that are aborted are not counted in the suite statistics. They do generate a result folder and can be viewed in the Results tab, but they do not count in the tally of successes, failures, total runs, or average run time.

  • Matt

Thanks… Did not know that. I always thought aborted scripts incremented the counters.

I basically have two folders: one contains code-frozen versions of suites that I develop (something like a baseline), and the other contains suites that I am working on.

This makes maintenance easier, as I don’t run into renaming problems.