
Performance Metrics

Bench Reports can be customized using many items (charts, tables, ...). Each one of these items can be configured using performance metrics.

This section lists all the metrics available in OctoPerf.

Hit metrics

Hit Metrics List

All the following metrics are available in OctoPerf. To know which report items can display each metric, please refer to the hit metrics availability table.

These metrics come in various types (minimum, average, count, rate, etc.). Refer to the hit metrics types table to learn about them.

| Metric | Description | Performance |
| --- | --- | --- |
| User load | Number of active users. | Most other metrics should not change as the user load increases. |
| Response time | Time between the request and the end of the response, in milliseconds. The response time includes both the latency and the connect time. | The lower the better. Should be less than 4 seconds. |
| Connect time | Time between the request and the server connection, in milliseconds. | The lower the better. If you get high connect times, your servers may be running out of available sockets, or your database may be overloaded. |
| Latency (Server time) | Time between the request and the first response byte, in milliseconds. | The lower the better. If you get high response times but low latencies, your servers may be running out of bandwidth. Check the throughput to confirm this. |
| Network time | Response time minus latency. | The lower the better. If you get high network times, your servers may be running out of bandwidth. Check the throughput to confirm this. |
| Throughput | Data rate, in bytes per second: the amount of data exchanged between the clients and the servers. | Must grow along with the user load. If it reaches an unexpected plateau, you may be running out of bandwidth. |
| Errors | Count or rate of errors that occurred. | Errors may happen if you did not validate your Virtual User. Otherwise, errors may be a sign that your servers or database are overloaded. |
| Hits | Count or rate of hits (requests) that occurred. | Should increase as the user load goes up. |
| Assertions | Count of assertions in error, failed, or successful. | Assertions in error or failed let you know that your servers did not answer as you expected. |
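The timing metrics above are all derived from a few timestamps taken on each request. The sketch below (illustrative only, not OctoPerf code; function and argument names are invented for the example) shows how they relate to each other:

```python
# Illustrative sketch: deriving the hit metrics above from the raw
# timestamps of a single request. All values are in milliseconds.

def derived_metrics(request_start, connect_end, first_byte, response_end):
    """Each argument is a timestamp in milliseconds."""
    response_time = response_end - request_start  # full round trip
    connect_time = connect_end - request_start    # time to open the connection
    latency = first_byte - request_start          # time to first byte (server time)
    network_time = response_time - latency        # time spent downloading the body
    return {
        "response_time": response_time,
        "connect_time": connect_time,
        "latency": latency,
        "network_time": network_time,
    }

# A request sent at t=0 ms, connected at 15 ms, first byte received at
# 120 ms, last byte received at 200 ms:
print(derived_metrics(0, 15, 120, 200))
# network_time is 200 - 120 = 80 ms
```

Note how a high response time with a low latency points at slow body transfer (network time), which is why the table suggests checking the throughput in that case.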

Hit Metrics Types

Each metric comes in various types. The table below lists all of them.

| Type | Description |
| --- | --- |
| Minimum | Minimum value of a metric. |
| Average | Average value of a metric. |
| Maximum | Maximum value of a metric. |
| Variance | Quantifies the dispersion of the metric. A variance close to 0 indicates that the metric values tend to be very close to the mean, while a high variance indicates that the values are spread out over a wider range. Its unit is the square of the metric's unit. |
| Standard deviation | The square root of the variance. It is easier to compare with other metric types since it shares the metric's unit. |
| Percentile 90 | A percentile is the value below which a given percentage of observations fall. The 90th percentile is the value below which 90 percent of the observations may be found. |
| Percentile 95 | The value below which 95 percent of the observations may be found. |
| Percentile 99 | The value below which 99 percent of the observations may be found. |
| Median | Simply the 50th percentile: the value below which 50 percent of all the values may be found. |
| Total | Count of a metric: the number of occurrences of an event. |
| Rate | Count of a metric per second. |
| Apdex | The Apdex (Application Performance Index) defines a standard method for reporting the performance of software applications, by measuring the degree to which performance meets user expectations. The score is between 0 and 1; at 1, all users are satisfied. |
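The statistic types above can be computed from a sample of measurements. The sketch below is illustrative only (not OctoPerf code): it uses the nearest-rank percentile method and the standard Apdex formula, where "satisfied" samples (at or below a threshold T) count fully and "tolerating" samples (at or below 4T) count half; the 500 ms threshold is an assumption for the example.

```python
import math
import statistics

# A small sample of response times, in milliseconds.
samples = [120, 150, 180, 200, 250, 300, 400, 800, 1200, 5000]

minimum = min(samples)                    # 120 ms
maximum = max(samples)                    # 5000 ms
average = statistics.mean(samples)        # 860 ms
variance = statistics.pvariance(samples)  # unit: ms squared
std_dev = statistics.pstdev(samples)      # back to ms
median = statistics.median(samples)       # 50th percentile: 275 ms

def percentile(values, pct):
    """Nearest-rank percentile: the value below which `pct` percent of samples fall."""
    ordered = sorted(values)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

def apdex(values, threshold_ms):
    """Standard Apdex score: (satisfied + tolerating / 2) / total."""
    satisfied = sum(1 for v in values if v <= threshold_ms)
    tolerating = sum(1 for v in values if threshold_ms < v <= 4 * threshold_ms)
    return (satisfied + tolerating / 2) / len(values)

print(percentile(samples, 90))  # 1200: 90% of the samples are below this value
print(apdex(samples, 500))      # 7 satisfied + 2 tolerating out of 10 -> 0.8
```

This also shows why percentiles are usually more useful than the average: the single 5000 ms outlier pulls the average up to 860 ms, far above the median of 275 ms.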

The following table defines the metrics and their associated statistics:

| Metric | Min. Avg. and Max. | Std Dev. and Variance | Med. | Percentile | Total | Rate | Apdex |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Response Time | X | X | X | X | | | X |
| Connect Time | X | X | | | | | X |
| Latency | X | X | | | | | X |
| Network Time | X | X | | | | | |
| Errors | | | | | X | X | |
| Hits | | | | | X | X | |
| Assertions | | | | | X | | |
| Throughput | | | | | X | X | |

Hit Metrics Availability

The table below displays all performance metrics per type and which report items can display them.

| Metric | Type | Line Chart | Summary | Top Chart | Percentiles Chart | Results Table/Tree |
| --- | --- | --- | --- | --- | --- | --- |
| User load | Total | X | | | | |
| Response time | Average | X | X | X | X | X |
| Response time | Maximum | X | X | X | X | X |
| Response time | Minimum | X | X | X | X | X |
| Response time | Variance | X | X | X | | X |
| Response time | Standard deviation | X | X | X | | X |
| Response time | Apdex | | X | X | | X |
| Response time | Median | | | | X | |
| Response time | Percentile 90 | | X | | X | |
| Response time | Percentile 95 | | X | | X | |
| Response time | Percentile 99 | | X | | X | |
| Network time | Average | X | X | X | X | X |
| Network time | Maximum | X | X | X | X | X |
| Network time | Minimum | X | X | X | X | X |
| Network time | Variance | X | X | X | | X |
| Connect time | Average | X | X | X | X | X |
| Connect time | Maximum | X | X | X | X | X |
| Connect time | Minimum | X | X | X | X | X |
| Connect time | Variance | X | X | X | | X |
| Connect time | Standard deviation | X | X | X | | X |
| Connect time | Apdex | | X | X | | X |
| Latency | Average | X | X | X | X | X |
| Latency | Maximum | X | X | X | X | X |
| Latency | Minimum | X | X | X | X | X |
| Latency | Variance | X | X | X | | X |
| Latency | Standard deviation | X | X | X | | X |
| Latency | Apdex | | X | X | | X |
| Errors | Rate | X | X | X | | X |
| Errors | Total | X | X | X | | X |
| Errors | % Error | X | X | X | | X |
| Hits | Rate | X | X | X | X | X |
| Hits | Total | X | X | X | | X |
| Hits | Total Successful | X | X | X | | X |
| Hits | % Successful | X | X | X | | X |
| Throughput | Rate | X | X | X | | X |
| Throughput | Total | X | X | X | | X |
| Response Size | Total | X | X | X | | X |
| Assertions in error | Total | X | X | X | | X |
| Assertions failed | Total | X | X | X | | X |
| Assertions successful | Total | X | X | X | | X |

Monitoring Metrics

Monitoring Metrics List

The following monitoring metrics are collected for each load generator involved during the load tests.

| Metric | Description | Performance |
| --- | --- | --- |
| Memory Usage | Memory usage, in percent. | Should stay under 80%. The lower the better: excessive memory usage can lead to load generator failure. |
| CPU Usage | CPU usage, in percent. | Should stay under 100%. The lower the better: excessive CPU usage can lead to load generator failure. |
| Network Sent | Outbound network usage. | Must grow along with the user load. If it reaches a plateau, you may be running out of bandwidth. |
| Network Received | Inbound network usage. | Must grow along with the user load. If it reaches a plateau, you may be running out of bandwidth. |
| TCP Connections | Established TCP connections. | Must grow along with the user load. If it reaches a plateau, your server network capacity may be exceeded. |
| TCP Retransmits | TCP segments retransmitted. | The lower the better. If it increases abnormally, your server network capacity may be exceeded. |
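A minimal sketch of how the guidance above could be applied when reviewing a test, assuming the 80% memory and 100% CPU thresholds from the table (the function and its messages are invented for the example, not part of OctoPerf):

```python
# Hypothetical health check for a load generator, using the thresholds
# recommended by the monitoring metrics table above.

def generator_warnings(memory_pct, cpu_pct):
    """Return a list of warnings for one monitoring sample."""
    warnings = []
    if memory_pct > 80:
        warnings.append("memory above 80%: risk of load generator failure")
    if cpu_pct >= 100:
        warnings.append("CPU saturated: measurements may be skewed by the generator itself")
    return warnings

print(generator_warnings(memory_pct=85.0, cpu_pct=60.0))
# Flags the memory usage only.
```

An overloaded load generator inflates response times on the client side, so these warnings matter even when the servers under test look healthy.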

Info

Many other monitoring metrics are available if you configure monitoring for your server infrastructure.

Monitoring Metrics Availability

The table below displays all monitoring metrics per type and which report items can display them.

| Metric | Type | Line Chart | Summary | Top Chart | Percentiles Chart | Results Table |
| --- | --- | --- | --- | --- | --- | --- |
| Memory Usage | Maximum | X | | | | |
| CPU Usage | Maximum | X | | | | |
| Network Sent | Maximum | X | | | | |
| Network Received | Maximum | X | | | | |
| TCP Connections | Maximum | X | | | | |
| TCP Retransmits | Maximum | X | | | | |

Pie chart metrics

The 4 remaining performance metrics can only be displayed in pie charts and area charts.

These metrics show the distribution of certain data.

| Metric | Description |
| --- | --- |
| HTTP methods | Distribution of HTTP methods (GET, POST, DELETE, ...). |
| HTTP response codes | Distribution of HTTP response codes (2xx, 3xx, 4xx, 5xx, ...). You should avoid error codes such as 4xx and 5xx. |
| Media types count | Distribution of media types (html, css, json, javascript, xml, ...) by request count. Useful to check the breakdown of resources by type. |
| Media types throughput | Distribution of media types (html, css, json, javascript, xml, ...) by bandwidth usage. Useful to know which resources use your bandwidth. |
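The distribution a pie chart displays can be sketched as follows (illustrative only, not OctoPerf code): raw HTTP response codes are grouped into their classes, then each class's share of the total is computed.

```python
from collections import Counter

# Raw HTTP response codes observed during a test.
codes = [200, 200, 201, 301, 404, 404, 500, 200, 302, 200]

# Group each code into its class (2xx, 3xx, 4xx, 5xx).
classes = Counter(f"{code // 100}xx" for code in codes)

# Each class's share of the total: this is what the pie chart slices show.
total = sum(classes.values())
distribution = {cls: count / total for cls, count in classes.items()}

print(distribution)
# {'2xx': 0.5, '3xx': 0.2, '4xx': 0.2, '5xx': 0.1}
```

The same grouping applies to the other pie chart metrics: by HTTP method, or by media type weighted either by request count or by bytes transferred.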