Advanced Configuration

The following documentation provides guidelines to configure various aspects of OctoPerf Enterprise-Edition (EE), like:

  • System Settings: set up Docker behind a proxy, Elasticsearch system configuration, etc.,
  • Mailing: support resetting account passwords via email,
  • Storage: store resources (recorded requests/responses and more) either on local disk or Amazon S3,
  • High Availability: HA is supported via built-in Hazelcast clustering.

OctoPerf EE is a Spring Boot application which uses YML configuration files.

Docker Proxy Settings

Docker can run behind a proxy if required. The procedure is detailed in the Docker Documentation.
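
For reference, on systemd-based hosts the documented approach is a systemd drop-in file; the sketch below assumes an illustrative proxy URL, to be replaced with your own:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

Then reload systemd and restart Docker: sudo systemctl daemon-reload && sudo systemctl restart docker.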

Elasticsearch

The Elasticsearch database requires the sysctl setting vm.max_map_count to be set to at least 262144:

sysctl -w vm.max_map_count=262144

To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf. To verify after rebooting, run sysctl vm.max_map_count.
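
For example, a minimal sequence to persist and verify the setting (assuming root or sudo access):

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p            # apply without rebooting
sysctl vm.max_map_count   # verify the current value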

See Elasticsearch VM Max Map Count for more information.

Custom Application YML

To define your own configuration settings, you must provide your own application.yml and put it in the right location.

How to define your own configuration

  1. Edit the docker-compose.yml provided in the OctoPerf EE Setup,
  2. Add a new volume to the enterprise-edition service, for example: /home/ubuntu/docker/enterprise-edition/config:/home/octoperf/config (the mapping follows the syntax HostFolder:ContainerFolder, as shown in the sketch below),
  3. Create an application.yml with the wanted configuration and place it in the HostFolder,
  4. Restart the application using docker-compose (beware that docker-compose down deletes all containers, including any data stored inside them).
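
A sketch of the resulting docker-compose.yml fragment, using the host path from step 2 as an example:

  enterprise-edition:
    volumes:
      - /home/ubuntu/docker/enterprise-edition/config:/home/octoperf/config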

The configuration defined in your own application.yml takes precedence over the default built-in configuration.

Environment Variables

When you simply need to override a single setting, defining an environment variable when launching the OctoPerf EE container is the way to go.

How to define an environment variable

  1. Edit the docker-compose.yml provided in the OctoPerf EE Setup,
  2. Locate the environment: section and add the relevant environment variables,
  3. Restart the application using docker-compose.

Environment variables take precedence over settings defined in YML configuration files.
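
For example, Spring Boot maps YML paths to upper-case, underscore-separated environment variables, so server.public.port (see Custom Settings below) could be overridden as follows (the port value is illustrative):

  enterprise-edition:
    environment:
      - SERVER_PUBLIC_PORT=443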

Custom Settings

Advertised Server

The backend advertises itself using server settings:

server:
  scheme: https
  port: 8090 # internal server port
  hostname: api.octoperf.com
  public:
    port: 443 # publicly exposed port

The server port is configurable via:

  • server.port: it defines the internal server port running inside the enterprise-edition container,
  • server.public.port: it defines the port advertised to agents and JMeter containers. If not defined, it falls back to server.port setting.

Info

The server.public.port setting is useful when the backend is behind a load-balancer. Load balancers typically run on port 80 or 443. For security reasons, the internal server cannot listen on a port <= 1024; that's why both settings exist.

The hostname must be accessible from all load generators: the JMeter containers launched on agent hosts by OctoPerf EE need to be able to reach the backend through it. The configuration above shows our SaaS server configuration.

Warning

Never use localhost because it designates the OctoPerf EE container itself.

Server Port

As the backend is behind HAProxy by default, changing the default port (which is 80) requires a few steps. Let's say you want to run the server on port 443:

  • In docker-compose.yml, locate 80:80 and replace it with 443:80,
  • Set the server.public.port environment setting to 443,
  • Stop and destroy the containers by running docker-compose down,
  • Restart the containers by running docker-compose up --build -d.

The HAProxy frontend port must always be the same as the advertised backend server port, because the backend advertises this port to the Docker agents for communication.
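
A sketch of the two docker-compose.yml changes combined (the HAProxy service name may differ in your file):

  haproxy:
    ports:
      - "443:80" # host port 443 mapped to HAProxy's internal port 80
  enterprise-edition:
    environment:
      - SERVER_PUBLIC_PORT=443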

Elasticsearch

General Settings

The backend connects to Elasticsearch database using elasticsearch.hostname setting:

elasticsearch:
  hostname: elasticsearch
  indices:
    prefix: octoperf_

Elasticsearch can be configured with:

  • hostname: This must be configured to an IP or a hostname accessible from the OctoPerf EE container. Multiple hosts can be defined, separated by commas,
  • indices:

    • prefix: octoperf_ by default; indices are created with this prefix. Example: the octoperf_user index contains the user accounts. This setting is useful when pointing to aliases, to upgrade a database with no service interruption.

Info

Never use localhost as the hostname. Use the IP address of the machine hosting your database, even if it's the same machine.
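
For example, a configuration pointing to a multi-node cluster (the IP addresses are illustrative):

elasticsearch:
  hostname: 10.0.0.11,10.0.0.12,10.0.0.13
  indices:
    prefix: octoperf_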

Snapshots Settings

Snapshots are the equivalent of an incremental backup of all the database indices. The backend can trigger an Elasticsearch snapshot creation every night with the configuration below:

elasticsearch:
  snapshots:
    enabled: true 
    repository: nfs-server # name of the repository to backup
    keep-last: 7 # 7 days rolling window
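
Note that the snapshot repository must already be registered in Elasticsearch. A minimal sketch of registering a shared-filesystem repository (the location is illustrative and must be listed in Elasticsearch's path.repo setting):

curl -X PUT "http://elasticsearch:9200/_snapshot/nfs-server" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'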

Docker Registry

Agent and JMeter images are pulled from the registry.hub.docker.com Docker registry by default. A custom Docker registry can be configured as follows:

docker:
  image:
    registry: registry.company.com

This setting affects:

  • Agent launch command-line,
  • And from where the Agent pulls the JMeter docker images.

Info

When changing this setting, all the connected agents will be upgraded using the image from the configured registry.

SMTP Server

An SMTP mail server can be specified in order to support user account password recovery:

spring:
  mail:
    enabled: true
    host: smtp.mycompany.com
    port: 587
    username: username@mycompany.com
    password: passw0rd
    properties:
      mail.smtp.socketFactory.class: javax.net.ssl.SSLSocketFactory
      mail.smtp.auth: true
      mail.smtp.starttls.enable: true
      mail.smtp.starttls.required: true
    from:
      name: MyCompany Support
      email: from@mycompany.com

For a complete list of supported properties, see Javax Mail Documentation.

High Availability

Prerequisites:

  • You need to have at least 3 hosts in a cluster with networking enabled between them.

To enable high-availability, the different backend servers need to form a cluster. Each backend server must be able to communicate with others. To enable HA using Hazelcast, define the following configuration:

clustering:
  driver: hazelcast
  members: enterprise-edition
  quorum: 2

It takes the following configuration:

  • driver: either noop (no clustering), hazelcast (using Hazelcast) or ignite (using Apache Ignite),
  • members: comma-separated list of hostnames or IPs of the cluster members,
  • quorum: minimum number of nodes required to operate. quorum = (n + 1) / 2, where n is the total number of nodes in the cluster. For example, with n = 3, quorum = 2.

3 Containers Setup

Let's describe what's being configured here:

  • driver: hazelcast indicates to use Hazelcast clustering. Hazelcast is a distributed Java framework we use internally to achieve HA,
  • members: hostnames of all the backends, separated by commas,
  • quorum: the minimum number of machines within the cluster required for it to be operational. quorum = (n + 1) / 2, where n is the number of members within the cluster.

members can be either a DNS hostname resolving to multiple IPs, or multiple IPs separated by commas.
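
For example, a three-backend cluster (hostnames are illustrative) could be configured as:

clustering:
  driver: hazelcast
  members: backend-1,backend-2,backend-3
  quorum: 2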

Resources Storage

Resources like recorded requests, recorded responses or files attached to a project are stored inside the container, in the folder /home/octoperf/data:

storage:
  driver: fs
  fs:
    folder: data

The folder location is resolved inside the Docker container.

When no Docker volume mapping is configured, the data is lost when the container is destroyed. To avoid this, set up a volume mapping just like for the configuration folder. Example: /home/ubuntu/docker/enterprise-edition/data:/home/octoperf/data
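
The corresponding docker-compose.yml fragment would look like this:

  enterprise-edition:
    volumes:
      - /home/ubuntu/docker/enterprise-edition/data:/home/octoperf/data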

When running the backend in HA, it's not possible to store data on the local disk: a request to fetch a resource may hit any backend server, while the wanted resource may be stored on another one. In this case, store resources on a global service shared by all backends, like Amazon S3.

Configuring Amazon S3 Storage

storage:
  driver: s3
  s3:
    region: eu-west-1
    bucket: my-bucket
    access-key: 
    secret-key: 

Let's describe what is being configured here:

  • driver: Amazon S3 driver,
  • s3:

    • region: specifies the region where the target S3 bucket is located,
    • bucket: S3 bucket name, usually my-bucket.mycompany.com,
    • access-key: an AWS access key with access granted to the given bucket,
    • secret-key: an AWS secret key associated to the access key.

We suggest setting up an AWS IAM user with permissions granted only to the target S3 bucket. Here is an example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1427454857000",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Put*",
                "s3:Delete*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket.mycompany.com",
                "arn:aws:s3:::bucket.mycompany.com/*"
            ]
        }
    ]
}

User Management

User management can be customised using the following configuration:

users:
  registration: 
    enabled: true # Are new users allowed to create an account?
  password-recovery:
    enabled: true # Can users recover their account password by email?
  explicit-login:
    enabled: true # Can users login using the regular login form?

Single Sign-On (SSO)

OpenID Connect (OAuth2)

OctoPerf EE supports OAuth2 / OpenID Connect authentication through Spring Boot properties. Here is an example application.yml:

spring:
  security:
    oauth2:
      client:
        registration:
          okta:
            client-id: okta-client-id
            client-secret: okta-client-secret
        provider:
          okta:
            authorization-uri: https://your-subdomain.oktapreview.com/oauth2/v1/authorize
            token-uri: https://your-subdomain.oktapreview.com/oauth2/v1/token
            user-info-uri: https://your-subdomain.oktapreview.com/oauth2/v1/userinfo
            jwk-set-uri: https://your-subdomain.oktapreview.com/oauth2/v1/keys
            user-name-attribute: sub

The OAuth2 provider (here okta) must then be declared in the UI configuration file config-ee.json to enable OAuth2 login. The user-name-attribute setting selects the attribute used as the username of the user logging in; it must point to an email attribute. If it does not, please also define:

users:
  oauth2:
    username-suffix: @mydomain.com

Only email addresses are allowed as usernames. With the suffix configured, when user jsmith logs in, an account is created with the username jsmith@mydomain.com.

LDAP

By default, OctoPerf EE creates and manages users inside its own database.

OctoPerf EE supports seamless integration with a third-party authentication server (Single Sign-On, aka SSO) based on the LDAP protocol.

Info

It's strongly recommended to be assisted by your System Administrator when configuring OctoPerf to support LDAP authentication.

Configuring LDAP Authentication

users:
  driver: ldap
  ldap:
    # ldap or ldaps
    protocol: ldap
    hostname: localhost
    port: 389
    base: dc=example,dc=com
    principal-suffix: "@ldap.forumsys.com"

    authentication:
      # anonymous, simple, default-tls, digest-md5 or external-tls
      method: simple
      username: cn=read-only-admin,dc=example,dc=com
      password: password

    # Cache Ldap users
    cache:
      enabled: true
      durationSec: 300

    # user password encryption
    password:
      type: plain
      # sha, ssha, sha-256, md5 etc.
      algorithm: sha

    # LDAP Connection pooling
    pooled: false

    # User mapping
    object-class: person
    attributes:
      id: uid
      name: cn
      mail: mail

    # Ignorable errors
    ignore:
      partial-result: false
      name-not-found: false
      size-limit-exceeded: true

Info

The configuration above shows the default configuration used by the OctoPerf server when the ldap driver is configured.

The LDAP authentication supports a variety of settings:

  • protocol: either ldap or ldaps,
  • hostname: hostname of the LDAP server,
  • port: 389 by default, LDAP server connection port,
  • base: the LDAP query defining the directory tree starting point,
  • principal-suffix: empty by default. Example: @ldap.forumsys.com. Suffix appended to the username when authenticating with the LDAP server,

  • authentication:

    • method: authentication method to use. Can be anonymous, simple, default-tls, digest-md5 or external-tls. The last one requires additional JVM configuration,
    • username: depends on the authentication method. Must be a valid LDAP Distinguished Name (DN),
    • password: depends on the authentication method. Password associated with the given username,
  • cache:

    • enabled: true by default. Enables the user cache to reduce the load on the LDAP server,
    • durationSec: 300 seconds by default. Any change on the LDAP server takes up to this duration to be taken into account,
  • password: Defines how the user password should be encrypted,

    • type: plain by default. Set it to hash and refer to algorithm to pick the right hashing algorithm,
    • algorithm: sha by default; required if type is set to hash. Supports: sha, sha-256, sha-384, sha-512, ssha, ssha-256, ssha-384, ssha-512, md5, smd5, crypt, crypt-md5, crypt-sha-256, crypt-sha-512, crypt-bcrypt and pkcs5s2.
  • pooled: whether to use a connection pool to connect to the LDAP server. Pooling speeds up the connection,

  • object-class: objectClass defines the class attribute of the users. Users are filtered based on this value,

  • attributes:

    • id: required. Maps the given attribute (uid by default) to the user id,
    • name: optional. Maps the given attribute (cn by default) to the user firstname and lastname,
    • mail: optional. Maps the given attribute (mail by default) to the user mail. If empty, it uses the id concatenated with the principal-suffix.
  • ignore:

    • partial-result: optional. Set to true to ignore PartialResultException. AD servers typically have a problem with referrals: normally a referral should be followed automatically, but this does not seem to work with AD servers. The problem manifests itself as a PartialResultException thrown when the server encounters a referral. Setting this property to true provides a workaround,
    • name-not-found: specify whether NameNotFoundException should be ignored in searches. In previous versions, a NameNotFoundException caused by the search base not being found was silently ignored,
    • size-limit-exceeded: specify whether SizeLimitExceededException should be ignored in searches. This is typically what you want if you specify a count limit in your search controls.

Warning

Choosing between Internal and LDAP authentication must be done as soon as possible. Existing internal users won't match LDAP ones if changed later, even if they share the same username.

When using LDAP authentication, a few features related to user management are disabled or non-functional:

  • changing or resetting a user password,
  • editing user profile information,
  • and registering a new user through the registration form.

Virtual User Validation Storage

Virtual user validation HTTP requests and responses are stored on the local disk by default.

validation:
  driver: blob # or elasticsearch
  retention-days: 1 # number of days validation data is kept

Validation HTTP requests can be stored either in blob storage (validation.driver: blob; see the Resources Storage section), or within the database (validation.driver: elasticsearch).

reCAPTCHA

Google's reCAPTCHA can be used to prevent bots from creating accounts on your OctoPerf platform.

recaptcha:
 enabled: true
 endpoint: https://www.google.com
 private.key: your-private-key

It takes the following configuration:

  • enabled: true to activate reCAPTCHA,
  • endpoint: https://www.google.com to use Google's reCAPTCHA service,
  • private.key: your reCAPTCHA private key (obtained from Google's reCAPTCHA admin console).

Warning

The public reCAPTCHA key must be configured in the config-ee.json file of the frontend, using the JSON key "captcha": "your-public-key". Please have a look at the GUI / Frontend configuration section to learn how to map the configuration file.

GUI / Frontend configuration

The file config-ee.json lets you configure the UI of OctoPerf. Its default content is:

{
  "baseUrl": "",
  "docUrl": "/doc",
  "adminEmail": "support@octoperf.com",
  "modules": {
    "login": true,
    "register": true
  }
}

A more complete example of this configuration file:

{
  "baseUrl": "https://api.octoperf.com",
  "docUrl": "https://doc.octoperf.com",
  "adminEmail": "support@octoperf.com",
  "modules": {
    "cloud": true,
    "twitter": true,
    "login": true,
    "social": [
      {
        "id": "google",
        "label": "Google",
        "color": "#d34836",
        "icon": "fab fa-google-plus"
      }
    ],
    "register": true,
    "captcha": "reCAPTCHA_public_key",
    "check": {
      "login": "Login custom label.",
      "profile": "User profile custom label."
    }
  }
}

This file supports a variety of settings:

  • baseUrl: the URL of the backend server, i.e. https://api.octoperf.com for OctoPerf production,
  • docUrl: the URL of the documentation, i.e. https://doc.octoperf.com for OctoPerf's public documentation,
  • adminEmail: the email address the users may contact for Administrator operations such as removing a Workspace,
  • modules: multiple modules may be activated in OctoPerf's frontend,
    • cloud: [true/false] activates a tab in the Administration page that lists all currently started Cloud instances,
    • twitter: [true/false] displays a Twitter feed on the Login page,
    • login: [true/false] (de)activates the login using credentials,
    • social: a list of OAuth2 configurations, used to display Social Login buttons,
    • register: [true/false] (de)activates the registration of new users,
    • captcha: "reCAPTCHA_public_key" your reCAPTCHA public key, see backend's configuration,
    • check: displays custom labels on the login and user profile pages.

You can map a Docker volume onto this file (/usr/share/nginx/html/app/config-ee.json) to customize its content, for example to update the administrator email address.

This must be done in the docker-compose.yml file. Simply edit the enterprise-ui service to map the file:

  enterprise-ui:
    volumes:
      - /path/to/config-ee.json:/usr/share/nginx/html/app/config-ee.json