Tuesday, 11 September 2012

Continuous Integration with Jenkins

What is Continuous Integration?

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. (Martin Fowler)

What is Jenkins?

Jenkins is a highly popular open-source continuous integration server, written in Java, for testing and reporting on isolated changes in a larger code base in near real time. Jenkins enables developers to find and fix defects in a code base rapidly and to automate the testing of their builds.


How can Jenkins help testers?

Working in an Agile environment as a tester, you need a robust process where you can run your automated tests efficiently (without manual intervention), publish test execution reports that are accessible to all stakeholders, and save those reports as artifacts so they are available at any time for comparison with previous runs.

As a tester you can integrate your sanity, functional and load/performance tests into Jenkins, run them easily on any environment (CI, test, staging and performance), schedule them for any time, save test result reports for analysis/review, and let Jenkins decide whether a build is stable, unstable or failed.

By integrating automated tests into Jenkins we save a lot of time, provide quick feedback to developers and free up time to write more tests and increase test coverage (which is the ultimate goal).

In Jenkins every task/operation is considered a job, and everything that is done through Jenkins can be thought of as having a few discrete components:
  • Triggers - What causes a job to be run
  • Location - Where do we run a job
  • Steps - What actions are taken when the job runs
  • Results - What is the outcome of the job
While creating jobs in Jenkins, keep these components in mind.

Let's walk through an example of a JMeter test integrated into Jenkins and running against a CI build.

Prerequisites:
  • JDK should be installed.
  • Jenkins should be installed and running to deploy the builds.
  • Apache Ant.
  • Apache JMeter.
Steps:

1. Open Jenkins, select the view/project (optional) and click the "New Job" link.


2. Enter the Job name and select "Build a free-style software project" and click OK.


You can also create a new job by copying the settings from an existing job: click the "Copy existing Job" option, enter the existing job's name in the "Copy from" text field and click OK.

3. Now the job configuration page opens.
4. Give a brief description of the job you are going to create.
5. Select the JDK version you want to use for this job.


6. Select the source code management system and enter the URL where your source code resides.
7. Enter your local directory name (optional).


8. Under "Build Triggers" you can specify multiple criteria to trigger this job. If you want to run your sanity tests after the project is built, select the "Build after other projects are built" option. In my case I want Jenkins to run this job every day at 7 am, so I am selecting "Build periodically" and specifying the time.
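For reference, the "Build periodically" field takes cron-style syntax: five fields for minute, hour, day of month, month, and day of week. A schedule matching "every day at 7 am" would be:

```
# MINUTE HOUR DOM MONTH DOW
0 7 * * *
```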


9. Select "Add timestamps to the Console Output" under Build Environment, so you can see timestamps and trace errors easily.


10. Under the Build section, click the "Add build step" drop-down and select "Invoke Ant". I will be using Ant to run the JMeter tests (see my last post, in which I explained how to run JMeter tests with Ant).



11. Select the Ant version you are going to use from the "Ant Version" drop-down menu.

12. Give the absolute path of your build.xml file in the "Build file" text field.

13. Specify your Ant properties based on build.xml. (These correspond to your command-line Ant options.)
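The properties entered in this field are passed to Ant as -D flags. A minimal sketch of the equivalent command line (the paths and test name here are hypothetical; adjust them to your setup):

```shell
# Hypothetical paths; adjust to your environment.
JMETER_HOME=/opt/JMeter/apache-jmeter-2.6
TEST_NAME=SanityTest
# The "Properties" field maps to -D flags on Ant's command line:
CMD="ant -Djmeter.home=${JMETER_HOME} -Dtest=${TEST_NAME}"
echo "$CMD"
```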


14. Now select a "Post-build Action" for your job.


15. I want to publish a report, so I am selecting "Publish Performance test result report". (This is an additional Jenkins plugin for JMeter, so you need to install it separately.)

You can customize the job much further: set up email notifications so you receive an email when the build fails or is unstable, customize the notification message, and much more.
 
16. Select the type of report you need to publish (JMeter).

 

17. Give the absolute path of your report files.
18. Specify the error percentage thresholds that mark the build unstable or failed.
19. Click the "Apply" and "Save" buttons to save your job configuration settings.


20. If your job is successfully created, you will see a screen similar to the screenshot above.

21. As this build is scheduled to run every day at 7 am, if you want to run it manually first (to check the job is created properly), click "Build Now".


22. Once you click the "Build Now" link you will see your job start and a progress bar appear.


23. To see what is actually running, check the console: click on the progress bar and then "Console Output". This shows your build.xml steps and project checkout steps in detail, with timestamps.

24. Once the job completes without errors, you will see "BUILD SUCCESSFUL" in your console output logs; if the build is stable and has not failed, a green indicator appears.

25. To see your test report, click the "Performance Report" link, which opens the detailed report.

26. The report will appear like this.


We can create multiple jobs like this (sanity, functional, regression, performance and GUI), which can easily be run on any specified environment (CI, test, performance).

I hope this detailed walkthrough helps you create jobs in Jenkins.

Thursday, 6 September 2012

Running JMeter tests with Apache Ant


As we all know, JMeter can run tests very efficiently, but when it comes to reporting and publishing results it is not as good as other tools available in the market.


What I have learned recently is that we can create fancy .html reports of our JMeter test results with Apache Ant and publish them anywhere you like (for example, a wiki or Confluence).

So following are the steps for running JMeter with Apache Ant.
  1. Download apache-ant-1.8.4 and extract it to your “/opt/” folder.
  2. Download xalan-2.1.0.jar & serializer-2.7.1.jar and put them in Ant's “/opt/apache-ant-1.8.4/lib” folder.
  3. Make sure you have ant-jmeter-*.jar in your jmeter/extras folder; if not, download it and save it into the same folder.
  4. Set the "ANT_HOME" environment variable to "/opt/apache-ant-1.8.4" in your ~/.bashrc. Example: export ANT_HOME=/opt/apache-ant-1.8.4
  5. Add Apache Ant's "$ANT_HOME/bin" to your PATH in your ~/.bashrc. Example: export PATH=$JAVA_HOME/bin:$GRADLE_HOME/bin:$JODA:$ANT_HOME/bin:$PATH
  6. To make sure Apache Ant was successfully added to the PATH, type echo $PATH; you should see something similar to Pic 1.
  7. Finally, download this build.xml and replace the one which comes with Ant; save the downloaded build.xml into the "/opt/apache-ant-1.8.4/bin" folder.
Pic1
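The export lines from steps 4 and 5 can be sanity-checked in a fresh shell. A small sketch, assuming the /opt install location used above:

```shell
# Assumed install location from the steps above.
export ANT_HOME=/opt/apache-ant-1.8.4
export PATH="$ANT_HOME/bin:$PATH"
# Confirm Ant's bin directory made it onto the PATH:
echo "$PATH" | grep -c '/opt/apache-ant-1.8.4/bin'
```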
You are now ready to run your JMeter tests with Ant. But first, let's understand what build.xml is doing.

Pic 2

The above build.xml has three properties, jmeter.home, test and test.plan, which we define while running the ant command. jmeter.home is the home directory of JMeter. test is the name of the test script we want to run with Ant. test.plan is the test plan directory where the JMeter scripts exist.
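Putting the three properties together, a full invocation might look like the following sketch (the values are illustrative; substitute your own paths and test name):

```shell
# Illustrative values; each build.xml property maps to a -D flag.
JMETER_HOME=/opt/JMeter/apache-jmeter-2.6
CMD="ant -Djmeter.home=${JMETER_HOME} -Dtest=SanityTest -Dtest.plan=${JMETER_HOME}/bin"
echo "$CMD"
```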

If you need to make your Ant script more flexible and want to pass optional parameters and properties to JMeter, use something similar to the following.

<jvmarg value="-Xincgc"/>
<jvmarg value="-Xmx128m"/>
<jvmarg value="-Dproperty=value"/>
<property name="request.threads" value="5"/>
<property name="request.loop" value="50"/>
<property name="jmeter.save.saveservice.assertion_results" value="all"/>
<property name="jmeter.save.saveservice.output_format" value="xml"/>

Under the target named run-functional-tests you can see the testplans element, which triggers JMeter:

<testplans dir="${jmeter.home}/bin" includes="${test}.jmx"/>

The code under the <xslt> tag performs the transformation for us and creates the fancy .html result report of the JMeter test.

Now let's see the command to run the JMeter test with Ant.

Open a terminal and navigate to /opt/apache-ant-1.8.4/bin.

Type:  ant -Djmeter.home=/opt/JMeter/apache-jmeter-2.6 -Dtest=SanityTest

If everything is set up correctly, you will see the following message in your console.

Pic 3

After the tests run, you will see the BUILD SUCCESSFUL message in your console, like this.



Pic 4

To see the .html report, navigate to the target folder of JMeter and open Sa.html in a browser; it will appear like this.

Pic 5
Tips:
1. If you are seeing NaN in your report under the Min Time & Max Time columns, it means xalan-2.1.0.jar & serializer-2.7.1.jar are not present in Ant's lib folder.

You can also reference them in your build.xml as well.

Well, that's it for now; I hope this is helpful.

Thursday, 24 May 2012

Bromine - Test Management for Selenium


Excellent article on Bromine - Test Management for Selenium. This tool really has some good features, like:


  1. Running your tests simultaneously on different nodes.
  2. Proper traceability between requirements and test cases.
  3. Test execution on different browsers and different operating systems.
  4. Capturing the screen when a test fails.
  5. Integration with Hudson & much more - worth reading.

Friday, 30 March 2012

How to use JMeter for Distributed Testing


In order to run JMeter in a distributed environment for stress testing, you must set up all your host (slave) machines with the same configuration you currently have on your master machine.

Here are the steps you need to follow to set up the master machine:
  1. Install the latest JDK on your master machine.
  2. Install the latest JMeter on your master machine.
  3. Navigate to the “C:\JMeter\apache-jmeter-2.6\bin\jmeter.properties” file.
  4. Find (remote_hosts=127.0.0.1) and replace 127.0.0.1 with your host (slave) IP addresses.
     Example: remote_hosts=192.168.16.239,192.168.16.107,192.168.16.24
  5. Save and close the jmeter.properties file.
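Step 4 can also be scripted. A minimal sketch using GNU sed, with the example IPs from above (the printf line just stands in for the shipped properties file):

```shell
# Create a stand-in for the shipped jmeter.properties (the one relevant line).
printf 'remote_hosts=127.0.0.1\n' > jmeter.properties
# Replace the loopback address with the slave IPs (GNU sed; on macOS use sed -i '').
sed -i 's/^remote_hosts=.*/remote_hosts=192.168.16.239,192.168.16.107,192.168.16.24/' jmeter.properties
cat jmeter.properties
```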


Follow these steps to set up your host (slave) machines:
  1. Install the latest JDK on your host (slave) machine.
  2. Install the latest JMeter on your host (slave) machine.
  3. On the slave systems, go to the jmeter/bin directory and execute jmeter-server.bat (jmeter-server on Unix). On Windows, you should see a DOS window appear with “jre\[version]\bin\rmiregistry.exe”. If this doesn't happen, it means either the environment settings are not right, or there are multiple JREs installed on the system. Note: [version] is the JRE version installed on the system. A screenshot is attached below for reference.

Follow these steps to start the test on the master machine:
  1. Open JMeter on your master machine.
  2. Open your test plan.
  3. Click Run at the top.
  4. Select Remote Start → (now you should be able to see your host machines).
  5. Select the IP address of the host machine you want to run your test from.
     A screenshot is attached below.


Or

6.   If you want to run your test from all your host machines, select “Remote Start All”.
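The same remote run can also be started without the GUI. JMeter's command line supports -n (non-GUI mode), -t (test plan), -R (run on a specific list of hosts) and -r (run on every host in remote_hosts). A sketch with a hypothetical test plan and the example IPs:

```shell
# Hypothetical test plan and slave IPs; -R starts the run on just those hosts.
CMD="jmeter -n -t MyTestPlan.jmx -R 192.168.16.239,192.168.16.107"
echo "$CMD"
# To use every host listed in remote_hosts instead, replace "-R <list>" with -r.
```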

Troubleshooting:

If you are unable to run the test from the above machine and see the error below (screenshot attached), just ask the owner of the host machine to run the jmeter-server.bat file.


 Tips:

In some cases, the firewall may still be blocking RMI traffic.

Symantec Anti-Virus and Firewall

In some cases, the Symantec firewall needs to be stopped from Windows services.
  1. Open Control Panel
  2. Open Administrative Tools
  3. Double-click Services
  4. Go down to Symantec AntiVirus, right click and select Stop.
Windows firewall
  1. Open network connections
  2. Select the network connection
  3. Right click and select properties
  4. Select advanced tab
  5. Uncheck internet connection firewall
Linux
  1. On SUSE Linux, ipchains is turned on by default. For instructions, please refer to “remote testing” in the user manual.
Listener

Use the “Aggregate Report” listener to see the aggregated report from all your host machines.


Limitations:

There are some basic limitations for distributed testing. Here's the list of the known items in no specific order.
  1. RMI cannot communicate across subnets without a proxy; therefore neither can JMeter without a proxy.
  2. Since JMeter sends all the test results to the controlling console, it is easy to saturate the network IO. It is a good idea to use the simple data writer to save the results and view the file later with one of the graph listeners.
  3. Unless the server is a large multi-processor system, in most cases one or two clients are sufficient to overwhelm the server.
  4. A single JMeter client running on a 1.4-3 GHz CPU can handle 100-300 threads, depending on the type of test. The exception to this is web services. XML processing is CPU intensive and will rapidly consume all the CPU cycles. As a general rule, XML-centric applications will perform 4-10 times slower than applications using binary protocols.
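Item 2 above is easy to act on from the command line: the -l flag writes results to a file for later review in a graph listener, instead of streaming everything to a live console. A sketch with hypothetical file names:

```shell
# Hypothetical plan and output file; -l saves results to a .jtl file
# so they can be opened in a graph listener after the run.
CMD="jmeter -n -t MyTestPlan.jmx -l results.jtl"
echo "$CMD"
```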
References:
  1. http://jmeter.apache.org/usermanual/

Friday, 8 July 2011

Why does bad software get released?


Have you ever come across a piece of software that just does not work and wondered, ‘How the heck did this get released?’

Or maybe you have actually worked on a project where you tell the powers that be that the software should not be released, but lo and behold, they go and do it anyway. I had cause to reflect on this question just recently: why does bad code get released?

There are of course a myriad of reasons and circumstances that lead to poor code being released; however, one cause can be the company culture.

If developers are highly valued, and testers are seen as less valuable, then it is easy to see why the voice of the tester can go unheard and their views ignored or trodden over.

If testers are not sufficiently technical, or unable to write a coherent and convincing report, their message can get shot down in the general debate, or be considered of little value due to the poor presentation.

If the release decision takes place behind closed doors, with no opportunity for the test report to be presented and discussed, then the information regarding the true state of the software might never be made available to those making the decisions.

Of course, there may be a number of reasons why, even though the test report is fully understood, the release is still sanctioned and bad code is released. However, these tend to be few and far between; in my experience, the main reason is that the test department, for one reason or another, is excluded from the process.

Who do I blame?   The test department. We need to do more to ensure we are respected, valued and listened to.

The chart below takes a slightly tongue-in-cheek look at an all-too-familiar release decision tree.



Interesting article, originally posted by: TONY SIMMS - TEST MANAGER