<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://training-course-material.com/index.php?action=history&amp;feed=atom&amp;title=Performance_Test_Sample_Report</id>
	<title>Performance Test Sample Report - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://training-course-material.com/index.php?action=history&amp;feed=atom&amp;title=Performance_Test_Sample_Report"/>
	<link rel="alternate" type="text/html" href="https://training-course-material.com/index.php?title=Performance_Test_Sample_Report&amp;action=history"/>
	<updated>2026-05-13T09:29:03Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.1</generator>
	<entry>
		<id>https://training-course-material.com/index.php?title=Performance_Test_Sample_Report&amp;diff=23922&amp;oldid=prev</id>
		<title>Cesar Chew at 16:50, 24 November 2014</title>
		<link rel="alternate" type="text/html" href="https://training-course-material.com/index.php?title=Performance_Test_Sample_Report&amp;diff=23922&amp;oldid=prev"/>
		<updated>2014-11-24T16:50:35Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Cat|JMeter}}&lt;br /&gt;
== Goals of this Report ==&lt;br /&gt;
# To clarify the metrics and factors the Drupal application should comply with&lt;br /&gt;
# Explicitly state the assumptions&lt;br /&gt;
# Describe the process of testing and analysis&lt;br /&gt;
# Suggest improvements&lt;br /&gt;
&lt;br /&gt;
== Assumptions ==&lt;br /&gt;
These assumptions should be reviewed, and revised where necessary, by people close to the business and to the specific parts of the application.&lt;br /&gt;
&lt;br /&gt;
=== Software and Hardware ===&lt;br /&gt;
* CPU&lt;br /&gt;
* Network Connection&lt;br /&gt;
* Hard Drive&lt;br /&gt;
* Memory&lt;br /&gt;
* Version of Operating System&lt;br /&gt;
* Version of Software&lt;br /&gt;
** Web Server&lt;br /&gt;
** Database&lt;br /&gt;
** Application Server&lt;br /&gt;
** Load Balancer&lt;br /&gt;
&lt;br /&gt;
=== Performance requirements ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Metric !! Value !! Description&lt;br /&gt;
|-&lt;br /&gt;
| Average Page Load Time || &amp;lt; 4 sec	|| &lt;br /&gt;
|-&lt;br /&gt;
| Max Page Load Time || &amp;lt; 60 sec || &lt;br /&gt;
|-&lt;br /&gt;
| Minimum Throughput || defined per scenario || See the &amp;#039;&amp;#039;Scenarios Frequency&amp;#039;&amp;#039; table below&lt;br /&gt;
|-&lt;br /&gt;
| Max Number of Registered Users || 30k || &lt;br /&gt;
|-&lt;br /&gt;
| Max Concurrent Connections || 2000 || &lt;br /&gt;
|-&lt;br /&gt;
| Max Records in the DB || 300k || e.g. 10,000 invoices, 3 million customers, etc.&lt;br /&gt;
|-&lt;br /&gt;
| Max DB Size || 11GB || &lt;br /&gt;
|}&lt;br /&gt;
The metrics above are defined for commonly used functionality, i.e. scenarios executed at least once a day by the majority of users.&lt;br /&gt;
&lt;br /&gt;
=== Scenarios Frequency (Throughput per scenario) ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Scenario (Thread Group)&lt;br /&gt;
! Expected Normal Throughput in peaks [scenarios/min] (100%, Load Test) &lt;br /&gt;
! Number of concurrent threads (100%)&lt;br /&gt;
! Throughput in JMeter plan [scenarios/min] for Load Test (100%)&lt;br /&gt;
! Number of concurrent threads (200%)&lt;br /&gt;
! Throughput in JMeter plan [scenarios/min] for Stress Test (200%)&lt;br /&gt;
|-&lt;br /&gt;
| Anonymous user views the front page &lt;br /&gt;
| 120 || 10 || 140 || 20 || 180&lt;br /&gt;
|-&lt;br /&gt;
| Anonymous user browses the course catalogue and a course outline &lt;br /&gt;
| 60 || 10 || 50 || 20 || 65&lt;br /&gt;
|-&lt;br /&gt;
| Training Coordinator edits a node &lt;br /&gt;
| 2 || 1 || 4 || 2 || 8&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Performance Testing ==&lt;br /&gt;
=== Method Used ===&lt;br /&gt;
&lt;br /&gt;
Due to time restrictions, not all aspects of the application will be covered.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# The business users of the application have been asked about their most frequent activities&lt;br /&gt;
# The most frequent activities have been captured in text (human-readable) form and are available at: URL here&lt;br /&gt;
# The scenarios have been recorded using JMeter v2.5.1&lt;br /&gt;
&lt;br /&gt;
== Stages ==&lt;br /&gt;
# Single-threaded (running samples sequentially)&lt;br /&gt;
# Multi-threaded (100% and 200% of the required (assumed) throughput)&lt;br /&gt;
# SOAK test, running the test for 48 hours at 100% capacity&lt;br /&gt;
&lt;br /&gt;
In order to prepare a reliable test plan, we need to focus on the current utilization from a business perspective.&lt;br /&gt;
&lt;br /&gt;
JMeter should mimic the real world and try to run as many of the appropriate scenarios within a given time as occur in the real world.&lt;br /&gt;
&lt;br /&gt;
There are many ways of achieving that, and there is no single best way; each case should be considered separately.&lt;br /&gt;
This means that our test plan should mimic the real-world cases as closely as possible while, on the other hand, staying simple.&lt;br /&gt;
&lt;br /&gt;
In order to achieve the assumed throughput you can change (see the worked example after this list):&lt;br /&gt;
# Number of concurrent threads per scenario (Threads)&lt;br /&gt;
# Delay between scenarios (a timer at the end of the scenario)&lt;br /&gt;
# Delay between samplers (but it should represent a reasonable real-world value)&lt;br /&gt;
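&lt;br /&gt;
The relationship between these knobs is simple arithmetic: with N threads and a target of R scenarios per minute, each thread has to complete one iteration every 60 * N / R seconds, and the timer at the end of the scenario makes up the difference between that cycle time and the time the samplers themselves take. A minimal sketch (not part of the original plan; the 2-second scenario duration is an illustrative assumption), using the front-page figures from the table above (140 scenarios/min in the JMeter plan at 100%, 10 threads):&lt;br /&gt;
&lt;br /&gt;
 def pacing_delay_seconds(target_per_min, threads, scenario_duration_s):&lt;br /&gt;
     # Each thread must finish one iteration every cycle_s seconds&lt;br /&gt;
     # for the whole thread group to reach the target throughput.&lt;br /&gt;
     cycle_s = 60.0 * threads / target_per_min&lt;br /&gt;
     return max(0.0, cycle_s - scenario_duration_s)&lt;br /&gt;
 &lt;br /&gt;
 # Front page at 100% load: 140 scenarios/min spread over 10 threads,&lt;br /&gt;
 # assuming roughly 2 s is spent in the samplers themselves.&lt;br /&gt;
 print(pacing_delay_seconds(140, 10, 2.0))   # about a 2.3 s timer at the end of the scenario&lt;br /&gt;
&lt;br /&gt;
Alternatively, JMeter&amp;#039;s Constant Throughput Timer can pace the test towards a target number of samples per minute, although it counts individual samples rather than whole scenarios.&lt;br /&gt;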
&lt;br /&gt;
== Results (100%) ==&lt;br /&gt;
Sample Response Monitoring (JMeter results)&lt;br /&gt;
&lt;br /&gt;
 Metric           Value   Comment&lt;br /&gt;
 Error Level      0.2%    &lt;br /&gt;
 No of threads    301     Distribution shown in a separate table above&lt;br /&gt;
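&lt;br /&gt;
The per-label figures in the tables below have the shape of JMeter&amp;#039;s Aggregate Report; the same numbers can be rebuilt offline from the raw results. A minimal sketch (not part of the original report), assuming the results were saved as a CSV-format JTL with a header row and JMeter&amp;#039;s default column names (label, elapsed, success); the file name is illustrative:&lt;br /&gt;
&lt;br /&gt;
 import csv&lt;br /&gt;
 import statistics&lt;br /&gt;
 from collections import defaultdict&lt;br /&gt;
 &lt;br /&gt;
 def aggregate(jtl_path=&amp;#039;results.jtl&amp;#039;):&lt;br /&gt;
     # Rebuild per-label aggregates: samples, average, min, max, std. dev., error %.&lt;br /&gt;
     times = defaultdict(list)&lt;br /&gt;
     errors = defaultdict(int)&lt;br /&gt;
     with open(jtl_path, newline=&amp;#039;&amp;#039;) as f:&lt;br /&gt;
         for row in csv.DictReader(f):&lt;br /&gt;
             label = row[&amp;#039;label&amp;#039;]&lt;br /&gt;
             times[label].append(int(row[&amp;#039;elapsed&amp;#039;]))  # response time in ms&lt;br /&gt;
             if row[&amp;#039;success&amp;#039;].lower() != &amp;#039;true&amp;#039;:&lt;br /&gt;
                 errors[label] += 1&lt;br /&gt;
     return {&lt;br /&gt;
         label: {&lt;br /&gt;
             &amp;#039;samples&amp;#039;: len(t),&lt;br /&gt;
             &amp;#039;average&amp;#039;: statistics.mean(t),&lt;br /&gt;
             &amp;#039;min&amp;#039;: min(t),&lt;br /&gt;
             &amp;#039;max&amp;#039;: max(t),&lt;br /&gt;
             &amp;#039;stdev&amp;#039;: statistics.pstdev(t),&lt;br /&gt;
             &amp;#039;error_pct&amp;#039;: 100.0 * errors[label] / len(t),&lt;br /&gt;
         }&lt;br /&gt;
         for label, t in times.items()&lt;br /&gt;
     }&lt;br /&gt;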
&lt;br /&gt;
  &lt;br /&gt;
=== Errors ===&lt;br /&gt;
 Label                                          # Samples  Average [ms]  Min [ms]  Max [ms]  Std. Dev.  Error %&lt;br /&gt;
 CLI_new_business:11 Sort by created date 2     14         989           609       1,660     290        7.1%&lt;br /&gt;
 CLI_new_business:11 Select party elipsis       14         34            18        63        12         7.1%&lt;br /&gt;
 CLI_new_business:11 Confirm policy selection   14         746           31        1,904     415        7.1%&lt;br /&gt;
 CLI_new_business:11 Click no case required     14         359           129       731       162        7.1%&lt;br /&gt;
&lt;br /&gt;
=== Average Response Time Above 4 seconds ===&lt;br /&gt;
 Label                                # Samples  Average [ms]  Min [ms]  Max [ms]  Std. Dev.  Error %&lt;br /&gt;
 PMS:01 Select Scheme                 8          12,260        10,546    14,137    1,117      0.0%&lt;br /&gt;
 PMS:03 enter user details pmsuser3   4          5,462         38        21,729    9,392      0.0%&lt;br /&gt;
&lt;br /&gt;
=== Maximum Response Time above 60 seconds ===&lt;br /&gt;
 Label                                        # Samples  Average [ms]  Min [ms]  Max [ms]  Std. Dev.  Error %&lt;br /&gt;
 Scenario1                                    291        1,183         88        108,534   7,941      0.3%&lt;br /&gt;
 Scenario2                                    155        2,482         22        88,698    8,198      0.6%&lt;br /&gt;
 Scenario2                                    300        1,697         714       87,309    5,395      0.0%&lt;br /&gt;
 CLE_customer service:close task              152        1,559         261       85,362    7,557      0.0%&lt;br /&gt;
 CLE_premium_collections:05 Exit task page    305        590           46        85,259    4,972      0.3%&lt;br /&gt;
 CLE_policy_alterations:02 Select task step   281        644           8         83,813    4,975      1.8%&lt;br /&gt;
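&lt;br /&gt;
The two tables above are simply the aggregate rows that break the requirements stated earlier (average page load under 4 seconds, maximum under 60 seconds). A short sketch of the same filtering, reusing the hypothetical aggregate() helper from the earlier example:&lt;br /&gt;
&lt;br /&gt;
 # Flag labels that break the stated requirements&lt;br /&gt;
 # (average response time under 4 s, maximum under 60 s).&lt;br /&gt;
 stats = aggregate(&amp;#039;results.jtl&amp;#039;)&lt;br /&gt;
 slow_average = {l: s for l, s in stats.items() if s[&amp;#039;average&amp;#039;] &amp;gt; 4000}&lt;br /&gt;
 slow_maximum = {l: s for l, s in stats.items() if s[&amp;#039;max&amp;#039;] &amp;gt; 60000}&lt;br /&gt;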
&lt;br /&gt;
=== The Most Expensive Requests ===&lt;br /&gt;
The column &amp;#039;&amp;#039;Expensive Samples&amp;#039;&amp;#039; is calculated as the product of the number of samples and the average response time (i.e. the total time all users have waited for the response); the sketch after the table shows the same calculation.&lt;br /&gt;
&lt;br /&gt;
 Label        # Samples  Average [ms]  Min [ms]  Max [ms]  Std. Dev.  Error %  Expensive Samples [ms]  Expensive Samples %&lt;br /&gt;
 Scenario10   310        3,194         2,088     7,446     790        0.0%     990,140                 2.46%&lt;br /&gt;
 Scenario20   283        3,465         9         22,511    2,101      1.4%     980,595                 2.44%&lt;br /&gt;
 Scenario30   300        2,833         1,041     37,595    2,574      0.0%     849,900                 2.11%&lt;br /&gt;
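&lt;br /&gt;
A sketch of that calculation on top of the hypothetical aggregate() helper introduced above, assuming the percentage column is taken against the total across all labels:&lt;br /&gt;
&lt;br /&gt;
 # Expensive Samples = samples * average response time, i.e. total waiting time per label.&lt;br /&gt;
 stats = aggregate(&amp;#039;results.jtl&amp;#039;)&lt;br /&gt;
 expensive = {l: s[&amp;#039;samples&amp;#039;] * s[&amp;#039;average&amp;#039;] for l, s in stats.items()}&lt;br /&gt;
 total = sum(expensive.values())&lt;br /&gt;
 expensive_pct = {l: 100.0 * v / total for l, v in expensive.items()}&lt;br /&gt;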
&lt;br /&gt;
== Backend Resources Results ==&lt;br /&gt;
&lt;br /&gt;
=== Web Servers ===&lt;br /&gt;
 &lt;br /&gt;
[[File:Web Server CPU Utilization.png]]&lt;br /&gt;
&lt;br /&gt;
 Time                                            Was 1  Was 2  Was 3&lt;br /&gt;
 Average CPU Utilization (during the test) [%]   27     25     30&lt;br /&gt;
&lt;br /&gt;
=== Free memory on the servers ===&lt;br /&gt;
 &lt;br /&gt;
[[File:Memory Utilization.png]]&lt;br /&gt;
&lt;br /&gt;
== Error Messages ==&lt;/div&gt;</summary>
		<author><name>Cesar Chew</name></author>
	</entry>
</feed>