Integrating Selenium IDE Tests into CruiseControl.Net

There are many resources describing how to use Selenium in various continuous integration engines, but most of them focus on using Selenium drivers for various programming languages. We were looking for a way to automatically run tests created by the Selenium IDE plugin without converting them to other languages, and found it to be a rather unexplored topic. The most useful example we found is this article.

Selenium IDE has the benefit of recording and viewing test actions directly in your browser window, and tests can be prepared without knowledge of any programming language (although HTML and CSS knowledge is still required for more advanced test cases).

This article will describe our approach for how to automate Selenium IDE tests in the CruiseControl.Net integration engine by using readily available tools and without modifying or customizing them.

List of used tools:

  • Selenium IDE

  • Selenium Server (standalone, which includes the Testrunner)

  • Nant

  • CruiseControl.Net

Preparing tests in Selenium IDE

Selenium IDE is the primary tool we use to write and maintain automated UI test suites. Its major benefits and features are already described on the tool’s main page.

Although it has some limitations (e.g. limited support for large test data sets, branching, or other advanced logic), it is a really handy tool not only for writing and updating your test cases, but also for debugging them in place.

Selenium IDE supports plugins, so missing features (e.g. branching) can be added. However, this means that these plugins will also need to be included in the Selenium Server for them to work in the automated testing process.

I will not delve further into how to use this tool, as it is rather straightforward and self-explanatory, and not the theme of this article.

Note: Using an inspector tool like Firebug with Selenium IDE helps a lot.

Invoking Selenium Testrunner

We are using Nant to iterate through all test suites and gather test results, but first we need to run them using the Selenium Server Testrunner. The Testrunner invocation was put into a separate batch file to avoid Nant reporting test suite errors. We will be doing our own error reporting later from the generated report files, and we want to run all test suites to detect as many problems as possible in one go. Of course, you can invoke the Testrunner directly from Nant if you want your tests to stop after the first failed test suite or do not mind Nant warnings in the build output.

The batch file looks like this:

@ECHO OFF
REM Run test suite with Selenium server
java -jar selenium-server-standalone-2.23.1.jar -trustAllSSLCertificates -htmlSuite %1 %2 %3 %4 >>selenium.log
REM Ignore individual test suite failure and avoid Nant warning in the output
EXIT 0

Note: The -trustAllSSLCertificates option was added to avoid the browser SSL certificate confirmation window.

The parameters for the -htmlSuite switch can be inferred from their usage in the Nant section below; an illustrative call follows.

There were some limitations we ran into when using Selenium Testrunner (at least up to Selenium Server version 2.23.1):

  • Test suite and test case HTML files should reside in the same directory.

  • Test suite and test case file names may not contain spaces or most of the other non-alphanumeric characters allowed by the file system.

  • There is also an issue in the Testrunner which does not allow sharing variables between test cases in the same test suite, even though Selenium IDE has no problem with it (there is a workaround: store the values in JavaScript variables, as sketched below).

Using Nant to run tests and collect test reports

The Nant tool is used to run all test suites and generate a final report for CruiseControl.Net.

Running tests

Here is the build target we use for running tests:

 <!-- Run tests -->
 <target name="runTests" description="Runs Selenium Tests">
    <foreach item="File" property="testSuite">
       <in>
          <items>
             <include name="${testSuitePath}\**\*TestSuite.html" />
          </items>
       </in>
       <do>
          <echo message="Executing ${testSuite}..." />
          <exec program="runtest.cmd">
             <arg value="${browser}" />
             <arg value="${testSiteUrl}" />
             <arg value="${testSuite}" />
             <arg value="${testReportPath}\${path::get-file-name-without-extension(testSuite)}.${reportExtension}" />
          </exec>
       </do>
    </foreach>
 </target>

It scans for HTML files whose names end with the string “TestSuite” (example: ManageUsersTestSuite.html) and, for each of them sequentially, runs the Selenium Testrunner batch file described earlier. The parameters for the build target are:

  • testSuitePath – the root directory where test suites are located

  • browser – which browser to use for the test suite (e.g. “*firefox”, “*iexplore”, …)

  • testSiteUrl – the URL of the site to test

  • testReportPath – the directory where the reports will be placed

  • reportExtension – the extension of the generated report files

We are using the following directory structure for both tests and reports:

  • Projects

    • TestProject1

      • Test1

        • Test1TestSuite.html

        • Test1Case1.html

        • Test1CaseN.html

      • TestN

        • TestNTestSuite.html

        • TestNCase1.html

        • TestNCaseN.html

      • Reports

    • ...

    • TestProjectN

So, for the parameters testSuitePath=“Projects\TestProject1” and testReportPath=“Projects\TestProject1\Reports”, all test suites from Test1TestSuite to TestNTestSuite would be run, and the report files would be placed in the Projects\TestProject1\Reports directory.

Generating final report

Selenium Testrunner generates a report file in HTML format for each test suite that was run for the test project. We need to check whether any of them failed and output some statistics for the CruiseControl.Net build task. The general idea for how this can be done was borrowed from this article, but we modified it to handle the task without using any additional tools. The final report contains the number of test cases run, how many passed, how many failed, and the names of the test suites that failed. More detailed information can be gathered from the report files themselves (which can be merged into the build output if necessary).

Here is the build target we are using to generate the report:

 <!-- Generate report -->
 <target name="generateReport" description="Generate final report">
    <property name="totalTests" value="0" />
    <property name="totalTestsFailed" value="0" />
    <!-- Start with an empty list so the property exists even if no suite fails -->
    <property name="failedTestSuits" value="" />
    <foreach item="File" property="testReport">
       <in>
          <items>
             <include name="${testReportPath}\*.${reportExtension}" />
          </items>
       </in>
       <do>
          <loadfile file="${testReport}" property="report" />
          <if test="${string::get-length(report) == 0}">
             <fail message="Invalid report: ${testReport}" />
          </if>
          <regex pattern="&lt;td&gt;&lt;b&gt;(?'suiteName'\w+)" input="${report}" options="IgnoreCase" />
          <regex pattern="numTestTotal:&lt;/td&gt;\s*&lt;td&gt;(?'numTests'\d+)" input="${report}" options="IgnoreCase" />
          <regex pattern="numTestFailures:&lt;/td&gt;\s*&lt;td&gt;(?'numFailures'\d+)" input="${report}" options="IgnoreCase" />
          <property name="totalTests" value="${int::parse(totalTests) + int::parse(numTests)}" />
          <property name="totalTestsFailed" value="${int::parse(totalTestsFailed) + int::parse(numFailures)}" />
          <if test="${int::parse(numFailures) != 0}">
             <property name="failedTestSuits" value="${failedTestSuits}
${suiteName}" />
          </if>
       </do>
    </foreach>
    <property name="results" value="Tests: ${totalTests} total, ${int::parse(totalTests) - int::parse(totalTestsFailed)} passed, ${totalTestsFailed} failed." />
    <echo message="${results}" />
    <if test="${int::parse(totalTestsFailed) != 0}">
       <fail message="${results}
Test suites that failed:${failedTestSuits}" />
    </if>
 </target>

It loads the report files and extracts the required information using regular expressions. In the end, a summary report is given, and if there were any failed tests, it forces CruiseControl.Net to fail the build process.

Integrating into CruiseControl.Net

The only thing left now is to execute our Nant build file from CruiseControl.Net. This is done by simply invoking the Nant task with the appropriate parameters.

Example:

 <nant description="Run Selenium tests.">
    <buildFile>SeleniumTest.build</buildFile>
    <buildArgs>-D:browser=*firefox -D:testSiteUrl=http://testsite/ -D:testSuitePath=Projects\TestProject</buildArgs>
    <buildTimeoutSeconds>7200</buildTimeoutSeconds>
 </nant>

Note: To run tests on several browsers, you’ll need to invoke the Nant task for each browser separately.

Conclusion

We started using the described process only recently, and it is showing promising results. The fact that tests can be written and updated very quickly by almost anyone with basic knowledge of HTML and CSS is very appealing. This approach is perfect for fast smoke tests that detect whether something is wrong with a project build before running other, more advanced tests, or before manually testing a project with a quality assurance team, therefore potentially saving a lot of time. Of course, it can easily be adapted for any other continuous integration engine, too.