Bench Reports can be customized using many items (charts, tables, ...). Each of these items can be configured using performance metrics. This section lists all the metrics available in OctoPerf.
## Hit Metrics List
All the following metrics are available in OctoPerf. To know which report items can display a given metric, please refer to the hit metrics availability table. These metrics come in various types (minimum, average, count, rate, etc.); refer to the hit metrics types table for details.
| Metric | Description | Advice |
| --- | --- | --- |
| User load | Number of active users. | Many other metrics should not change as the user load increases. |
| Response time | Time between the request and the end of the response, in milliseconds. The response time includes both the latency and the connect time. | The lower the better. Should stay under 4 seconds. |
| Connect time | Time between the request and the server connection, in milliseconds. | The lower the better. If you get high connect times, your servers may be running out of available sockets, or your database may be overloaded. |
| Latency | Time between the request and the first response byte, in milliseconds. | The lower the better. If you get high response times and low latencies, your servers may be running out of bandwidth. Check the throughput to confirm this. |
| Throughput | Amount of data exchanged between the clients and the servers, in bytes per second. | Must grow along with the user load. If it reaches an unexpected plateau, you may be running out of bandwidth. |
| Errors | Count or rate of errors that occurred. | Errors may happen if you did not validate your Virtual Users. Otherwise, errors may be a sign that your servers or database are overloaded. |
| Hits | Count or rate of hits (requests) that occurred. | Should increase as the user load goes up. |
| Assertions | Count of assertions in error, failed, or successful. | Assertions in error or failed let you know that your servers did not answer as expected. |
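To make the count and rate types concrete, here is a minimal sketch (hypothetical code, not part of OctoPerf) that derives the hit rate and error rate from raw request samples:

```python
# Hypothetical example: deriving hit and error rates from raw samples.
# Each sample is (timestamp_seconds, response_time_ms, is_error).
samples = [
    (0.0, 120, False),
    (0.5, 250, False),
    (1.0, 4100, True),   # response time above the 4-second guideline
    (1.5, 180, False),
]

# Test duration in seconds; fall back to 1.0 to avoid dividing by zero
# when all samples share the same timestamp.
duration = (samples[-1][0] - samples[0][0]) or 1.0
hits = len(samples)                               # total hit count
errors = sum(1 for _, _, err in samples if err)   # total error count

hit_rate = hits / duration      # hits per second
error_rate = errors / hits      # fraction of hits in error
```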
## Hit Metrics Types
Each metric comes in various types. The table below lists all of them.
| Type | Description |
| --- | --- |
| Maximum | Maximum value of a metric. |
| Minimum | Minimum value of a metric. |
| Average | Average value of a metric. |
| Variance | The variance quantifies the dispersion of the metric. A variance close to 0 indicates that the metric values tend to be very close to the mean, while a high variance indicates that the values are spread out over a wider range. Its unit is the square of the metric unit. |
| Standard deviation | Simply the square root of the variance. It is easier to compare with other metric types since it uses the same unit as the metric itself. |
| Percentile 90 | A percentile indicates the value below which a given percentage of observations fall. The 90th percentile is the value below which 90 percent of the observations may be found. |
| Percentile 95 | The value below which 95 percent of the observations may be found. |
| Median | Simply the 50th percentile: the value below which 50 percent of all the values may be found. |
| Total | Count of a metric. Number of occurrences of an event. |
| Rate | Count of a metric per second. |
| Apdex | Apdex (Application Performance Index) defines a standard method for reporting the performance of software applications, by specifying a way to analyze the degree to which measured performance meets user expectations. The score is between 0 and 1; a score of 1 means all users are satisfied. |
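As an illustration of these types, the sketch below (hypothetical Python using only the standard library; the 500 ms Apdex satisfaction threshold is an assumption, not an OctoPerf default) computes them over a small set of response times:

```python
import statistics

# Hypothetical response times in milliseconds for one request.
times_ms = [100, 120, 130, 150, 180, 200, 250, 300, 400, 900]

avg = statistics.mean(times_ms)
var = statistics.pvariance(times_ms)   # unit: milliseconds squared
std = statistics.pstdev(times_ms)      # back to milliseconds
median = statistics.median(times_ms)   # the 50th percentile
# 90th percentile: last cut point when splitting into 10 quantiles.
p90 = statistics.quantiles(times_ms, n=10)[-1]

# Apdex with an assumed 500 ms satisfaction threshold T:
# satisfied samples (<= T) count 1, tolerating (<= 4T) count 0.5.
T = 500
satisfied = sum(1 for t in times_ms if t <= T)
tolerating = sum(1 for t in times_ms if T < t <= 4 * T)
apdex = (satisfied + tolerating / 2) / len(times_ms)
```

Note how the one slow sample (900 ms) inflates the average and variance while barely moving the median, which is why percentiles are often preferred for response times.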
The following table defines the metrics and their associated statistics:

| Metric | Min., Avg. and Max. | Std Dev. and Variance | Med. and Percentiles | Total | Rate | Apdex |
| --- | --- | --- | --- | --- | --- | --- |
## Hit Metrics Availability
The table below displays all performance metrics per type and which report items can display them.
| Metric | Type | Line Chart | Summary | Top Chart | Percentiles Chart | Results Table/Tree |
| --- | --- | --- | --- | --- | --- | --- |
| Response time | Standard deviation | X | X | X | X | |
| Response time | Percentile 90 | X | | | | |
| Response time | Percentile 95 | X | | | | |
| Connect time | Standard deviation | X | X | X | X | |
| Assertions in error | Total | X | X | X | X | |
## Monitoring Metrics List
The following monitoring metrics are collected for each load generator involved in a load test.

| Metric | Description | Advice |
| --- | --- | --- |
| Memory Usage | Memory usage, in percent. | Should stay under 80%. The lower the better. Excessive memory usage can lead to load generator failure. |
| CPU Usage | CPU usage, in percent. | Should stay under 100%. The lower the better. Excessive CPU usage can lead to load generator failure. |
| Network Sent | Outbound network usage. | Must grow along with the user load. If it reaches a plateau, you may be running out of bandwidth. |
| Network Received | Inbound network usage. | Must grow along with the user load. If it reaches a plateau, you may be running out of bandwidth. |
| TCP Connections | Established TCP connections. | Must grow along with the user load. If it reaches a plateau, your server network capacity may be exceeded. |
| TCP Retransmits | TCP segments retransmitted. | The lower the better. If it increases abnormally, your server network capacity may be exceeded. |
Many other dynamic monitoring metrics are available if you configure monitoring for your server infrastructure.
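As a hypothetical sketch of how the memory and CPU guidelines above could be checked (the function name and messages are illustrative, not an OctoPerf API):

```python
# Hypothetical health check based on the guidelines above:
# memory should stay under 80% and CPU under 100%.
def generator_warnings(memory_pct, cpu_pct):
    """Return a list of warnings for a load generator's vitals."""
    warnings = []
    if memory_pct >= 80:
        warnings.append("memory above 80%: load generator failure risk")
    if cpu_pct >= 100:
        warnings.append("CPU saturated: load generator failure risk")
    return warnings

# Example: a generator at 85% memory and 60% CPU raises one warning.
print(generator_warnings(85, 60))
```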
## Monitoring Metrics Availability
The table below displays all monitoring metrics per type and which report items can display them.
| Metric | Type | Line Chart | Summary | Top Chart | Percentiles Chart | Results Table |
| --- | --- | --- | --- | --- | --- | --- |
## Pie chart metrics
There are 3 performance metrics left that can only be displayed in pie charts. These metrics show the distribution of certain data.
| Metric | Description |
| --- | --- |
| HTTP methods | Distribution of HTTP methods (GET, POST, DELETE, ...). |
| HTTP response codes | Distribution of HTTP response codes (2xx, 3xx, 4xx, 5xx, ...). You should avoid error codes such as 4xx and 5xx. |
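For example, the response-code distribution behind such a pie chart could be computed like this (a hypothetical sketch with made-up sample data):

```python
from collections import Counter

# Hypothetical HTTP response codes collected during a test.
codes = [200, 200, 201, 301, 200, 404, 500, 200]

# Group codes by class (2xx, 3xx, ...) to obtain the distribution
# a pie chart would display.
classes = Counter(f"{code // 100}xx" for code in codes)
total = sum(classes.values())
distribution = {cls: count / total for cls, count in classes.items()}
```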