Server Configuration

OctoPerf Enterprise-Edition (EE) backend configuration can be customized to better fit your needs.

  • Mailing: support resetting account password via email,
  • Storage: store resources (recorded requests/responses and more) either on local disk or Amazon S3,
  • High Availability: HA is supported via built-in Hazelcast clustering.

OctoPerf EE is a Spring Boot application which uses YML configuration files.

Custom Application YML

To define your own configuration settings, you must provide your own application.yml and put it in the right location. The procedure below assumes you have a Rancher server with OctoPerf EE running as a service.

How to define your own configuration

  1. Go to Stacks, then click on Upgrade enterprise-edition service,
  2. Add a new volume, for example /home/ubuntu/docker/enterprise-edition/config:/root/config. The mapping follows the syntax HostFolder:ContainerFolder,
  3. Create an application.yml with the desired configuration and place it in the HostFolder,
  4. Click on Upgrade.

Mapping a new Docker volume to local disk.

The configuration defined in your own application.yml takes precedence over the default built-in configuration.
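
For example, a minimal custom application.yml overriding the advertised server settings and the Elasticsearch hostname could look like the sketch below (the hostnames are placeholders; each setting is described under Custom Settings):

server:
  scheme: https
  hostname: octoperf.mycompany.com
  port: 443

elasticsearch.hostname: elasticsearch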

Environment Variables

When you only need to override a single setting, defining an environment variable when launching the OctoPerf EE container is the simplest approach.

How to define an environment variable

  1. Go to Stacks, then click on Upgrade enterprise-edition service,
  2. Click on "Add Environment Variable", then enter name and value,
  3. Click on Upgrade.

Defining rancher.hostname as an environment variable.

Environment variables take precedence over settings defined in YML configuration files.
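
As an illustration, the same kind of override can be expressed in a docker-compose-style service definition. This is only a sketch: the compose layout is an assumption, and thanks to Spring Boot's relaxed binding the upper-case form RANCHER_HOSTNAME is equivalent to the rancher.hostname property.

enterprise-edition:
  environment:
    # Equivalent to setting rancher.hostname in application.yml
    RANCHER_HOSTNAME: rancher.mycompany.com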

Custom Settings

Advertised Server

The backend advertises itself using server settings:

server:
  scheme: https
  hostname: api.octoperf.com
  port: 443

This must be configured to be accessible from all load generators: the JMeter containers launched on Rancher hosts by OctoPerf EE need to be able to reach the backend through this hostname. The configuration above shows our SaaS server configuration.

Note: Never use localhost because it designates the OctoPerf EE container itself.

Rancher Settings

The backend connects to Rancher using the rancher.hostname setting:

rancher:
  project: Default
  hostname: rancher.octoperf.com
  access-key: 
  secret-key: 

When running OctoPerf EE backend server via our Catalog, rancher.hostname is automatically provided by Rancher through the CATTLE_URL environment variable. You may want to define rancher.hostname if OctoPerf EE is running on a separate Rancher server.

By default, the backend connects to the Default Rancher Environment.

This must be configured to an IP or a hostname accessible from the OctoPerf EE container. When Rancher is secured, an access-key and a secret-key must be provided. See Rancher API Keys for more information.

Note: Never use localhost because it designates the OctoPerf EE container itself.

Elasticsearch Hostname

The backend connects to the Elasticsearch database using the elasticsearch.hostname setting:

elasticsearch.hostname: elasticsearch

This must be configured to an IP or a hostname accessible from the OctoPerf EE container. Multiple hosts can be defined, separated by commas.
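
For example, a three-node Elasticsearch cluster (hostnames below are placeholders) could be referenced as follows:

elasticsearch.hostname: elasticsearch-1,elasticsearch-2,elasticsearch-3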

Note: Never use localhost because it designates the OctoPerf EE container itself.

SMTP Server

An SMTP mail server can be specified to support user account password recovery:

mail:
  enabled: true
  host: smtp.mycompany.com
  port: 587
  username: username@mycompany.com
  password: passw0rd
  from:
    name: MyCompany Support
    email: from@mycompany.com

High Availability

Prerequisites:

To enable high availability, the backend servers need to form a cluster: each backend server must be able to communicate with the others. To enable HA using Hazelcast, define the following configuration:

clustering:
  driver: hazelcast
  hazelcast:
    members: enterprise-edition
    quorum: 2

Typical 3-container setup with Rancher.

Let's describe what's being configured here:

  • driver: hazelcast enables Hazelcast clustering. Hazelcast is a distributed Java framework we use internally to achieve HA,
  • hazelcast.members: hostnames of all the backends, separated by commas,
  • hazelcast.quorum: the minimum number of members that must be up for the cluster to remain operational: quorum = (n + 1) / 2, where n is the number of members within the cluster.

Rancher provides built-in inter-host networking and DNS resolution for Docker containers running on multiple hosts. Specifying enterprise-edition as members is therefore equivalent to entering the IPs of all the enterprise-edition Docker containers running within Rancher.
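
As an illustration of the quorum formula, a hypothetical 5-container deployment would use quorum = (5 + 1) / 2 = 3:

clustering:
  driver: hazelcast
  hazelcast:
    members: enterprise-edition
    quorum: 3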

Resources Storage

Resources like recorded requests, recorded responses or files attached to a project are stored inside the container in folder /home/root/data:

storage:
  driver: fs
  fs:
    folder: data

The folder location is resolved inside the Docker container.

When no Docker volume mapping is configured, the data is lost when the container is destroyed. To avoid this, you need to set up a volume mapping, as you did for the configuration folder. Example: /home/ubuntu/docker/enterprise-edition/data:/root/data
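
For reference, here is a sketch combining the configuration and data volume mappings described above in a docker-compose-style service definition (the compose layout is an assumption; host paths are examples):

enterprise-edition:
  volumes:
    # Custom application.yml (see Custom Application YML above)
    - /home/ubuntu/docker/enterprise-edition/config:/root/config
    # Persisted resources (recorded requests/responses, project files)
    - /home/ubuntu/docker/enterprise-edition/data:/root/data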

When running the backend in HA, it's not possible to store data on the local disk: a request for a resource may hit any backend server, while the resource itself may be stored on another one. In this case, it's better to store resources on a shared service accessible to all backends, such as Amazon S3.

Configuring Amazon S3 Storage

storage:
  driver: s3
  s3:
    region: eu-west-1
    bucket: my-bucket
    access-key: 
    secret-key: 

Let's describe what is being configured here:

  • driver: Amazon S3 driver,
  • s3:

    • region: specifies the region where the target S3 bucket is located,
    • bucket: S3 bucket name, for example my-bucket.mycompany.com,
    • access-key: an AWS access key with access granted to the given bucket,
    • secret-key: an AWS secret key associated to the access key.

We suggest setting up an AWS user with permissions granted only to the target S3 bucket. Here is an example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1427454857000",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Put*",
                "s3:Delete*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket.mycompany.com",
                "arn:aws:s3:::bucket.mycompany.com/*"
            ]
        }
    ]
}

Single Sign-On (SSO)

By default, OctoPerf EE creates and manages users inside its own database.

OctoPerf EE supports seamless integration with a third-party authentication server (Single Sign-On, aka SSO) based on the LDAP protocol.

Info

It's strongly recommended to involve your System Administrator when configuring OctoPerf to support LDAP authentication.

Configuring LDAP Authentication

users:
  driver: ldap
  ldap:
    # ldap or ldaps
    protocol: ldap
    hostname: localhost
    port: 389
    base: dc=example,dc=com
    principal-suffix: "@ldap.forumsys.com"

    authentication:
      # anonymous, simple, default-tls, digest-md5 or external-tls
      method: simple
      username: cn=read-only-admin,dc=example,dc=com
      password: password

    # Cache Ldap users
    cache:
      enabled: true
      durationSec: 300

    # user password encryption
    password:
      type: plain
      # sha, ssha, sha-256, md5 etc.
      algorithm: sha

    # LDAP Connection pooling
    pooled: false

    # User mapping
    object-class: person
    attributes:
      id: uid
      name: cn
      mail: mail

Info

The configuration above shows the default configuration used by the OctoPerf server when ldap driver is configured.

The LDAP authentication supports a variety of settings:

  • protocol: either ldap or ldaps,
  • hostname: hostname of the LDAP server,
  • port: 389 by default, LDAP server connection port,
  • base: an LDAP query that defines the directory tree starting point,
  • principal-suffix: Empty by default. Example: @ldap.forumsys.com. Suffix appended to the user name when authenticating against the LDAP server,

  • authentication:

    • method: the authentication method to use. Can be anonymous, simple, default-tls, digest-md5 or external-tls. The last one requires additional JVM configuration,
    • username: depends on the authentication method. Must be a valid LDAP Distinguished Name (DN),
    • password: depends on the authentication method. Password associated with the given username,
  • cache:

    • enabled: true by default. Enables User cache to reduce the load on the LDAP server,
    • durationSec: 300 seconds by default. Any change on the LDAP server takes up to this duration to be taken into account,
  • password: Defines how the user password should be encrypted,

    • type: plain by default. Set hash and refer to algorithm to set the right hashing algorithm,
    • algorithm: sha by default, required if type is set to hash. Supports: sha, sha-256, sha-384, sha-512, ssha, ssha-256, ssha-384, ssha-512, md5, smd5, crypt, crypt-md5, crypt-sha-256, crypt-sha-512, crypt-bcrypt and pkcs5s2.
  • pooled: whether to use a connection pool to connect to the LDAP server. Pooling speeds up connections,

  • object-class: objectClass defines the class attribute of the users. Users are filtered based on this value,

  • attributes:

    • id: required. Maps the given attribute (uid by default) to the user id,
    • name: optional. Maps the given attribute (cn by default) to the user firstname and lastname,
    • mail: optional. Maps the given attribute (mail by default) to the user mail. If empty, it uses the id concatenated with the principal-suffix.
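
For instance, a sketch of a configuration for a secured LDAP server using hashed passwords could look like this (hostname, base DN and credentials are placeholders; 636 is the conventional LDAPS port):

users:
  driver: ldap
  ldap:
    protocol: ldaps
    hostname: ldap.mycompany.com
    port: 636
    base: dc=mycompany,dc=com
    authentication:
      method: simple
      username: cn=octoperf,ou=services,dc=mycompany,dc=com
      password: secret
    password:
      type: hash
      algorithm: ssha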

Warning

Choosing between Internal and LDAP authentication must be done as soon as possible. Existing internal users won't match LDAP ones if changed later, even if they share the same username.

When using LDAP authentication, a few features related to user management are disabled or non-functional:

  • changing or resetting a user password,
  • editing user profile information,
  • and registering a new user through the registration form.