Benchmark Tool


Introduction

Optimizing performance is hard because it involves many factors, including hardware, infrastructure, and software configuration.

TRAX LRS comes with a benchmark tool that simulates realistic situations in order to help you evaluate the performance of your LRS environment.

Use Cases

The benchmark tool simulates the following situations:

  • UC1. Posting statements one by one (low variability)
  • UC2. Posting statements one by one (high variability)
  • UC3. Posting small batches of statements (low variability)
  • UC4. Posting small batches of statements (high variability)
  • UC5. Posting large batches of statements (low variability)
  • UC6. Posting large batches of statements (high variability)
  • UC7. Posting statements including groups of learners
  • UC8. Getting statements filtered by the agent
  • UC9. Getting statements filtered by the activity
  • UC10. Getting statements filtered by a combination of agent and activity
  • UC11. Getting statements filtered by a combination of agent, verb and activity
  • UC12. Getting statements filtered by the agent with the "related" option
  • UC13. Getting statements filtered by the activity with the "related" option

The variability is the size of the agent and activity datasets (low variability means a small set of agents and activities).
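
To make the read use cases concrete, UC8 boils down to an xAPI GET request on the statements resource, filtered by an agent. Here is a minimal sketch of such a request (not part of the benchmark tool), assuming Guzzle is available, that $endpoint, $username and $password hold the values of your benchmark configuration, and that the standard xAPI API lives under /xapi/std/ as noted in the Settings section; the mailbox is made up:

$client = new \GuzzleHttp\Client();

$response = $client->get($endpoint . '/xapi/std/statements', [
    'auth' => [$username, $password],
    'headers' => ['X-Experience-API-Version' => '1.0.3'],
    'query' => [
        // UC8: filter by agent (illustrative mailbox).
        'agent' => json_encode(['mbox' => 'mailto:learner1@example.org']),
        // UC12 would add the "related" option:
        // 'related_agents' => 'true',
    ],
]);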

Protocol

Statements are generated randomly on top of a common template which includes 3 contextual activities, as well as a contextual agent.
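
The exact template is defined in the benchmark code. As a purely illustrative sketch, a generated statement could look like the following PHP array, with the contextual agent carried as the instructor and the 3 contextual activities in the context (all identifiers below are made up):

$statement = [
    'actor'  => ['mbox' => 'mailto:learner1@example.org'],
    'verb'   => ['id' => 'http://adlnet.gov/expapi/verbs/completed'],
    'object' => ['id' => 'http://example.org/activities/unit-1'],
    'context' => [
        // Contextual agent.
        'instructor' => ['mbox' => 'mailto:instructor@example.org'],
        // 3 contextual activities.
        'contextActivities' => [
            'parent'   => [['id' => 'http://example.org/activities/course-1']],
            'grouping' => [['id' => 'http://example.org/activities/program-1']],
            'category' => [['id' => 'http://example.org/activities/profile-1']],
        ],
    ],
];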

Actors, verbs and activities are generated with a certain level of variability. For example, we may want to draw 1 actor from a pool of 100 or 10,000 potential actors. This variability may affect the performance of some requests.
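
As a rough sketch of this idea (not the tool's actual code), drawing one actor from a pre-generated pool whose size sets the variability could look like this:

// Low variability: a small pool. A high-variability run might use 10000 instead.
$poolSize = 100;

$actors = [];
for ($i = 1; $i <= $poolSize; $i++) {
    $actors[] = ['objectType' => 'Agent', 'mbox' => "mailto:learner{$i}@example.org"];
}

// Each generated statement draws one actor from the pool.
$actor = $actors[array_rand($actors)];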

For each use case, the benchmark makes as many requests as possible for 10 seconds. Then it keeps the average execution time.

When posting batches of statements, the execution time is divided by the number of statements in the batch, so we get the average time to record 1 statement.
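
The measurement loop can be summarized as follows. This is a sketch of the protocol described above, not the tool's actual code, and postBatch() is a hypothetical helper:

$duration = 10;               // seconds allotted to each use case
$start = microtime(true);
$requests = 0;

while (microtime(true) - $start < $duration) {
    postBatch($statements);   // post one batch of generated statements
    $requests++;
}

$elapsed = microtime(true) - $start;

// Average time to record 1 statement: total time divided by the
// total number of statements posted (requests x batch size).
$avgPerStatement = $elapsed / ($requests * count($statements));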

This protocol is executed in 5 contexts in order to evaluate scalability:

  • Empty LRS
  • LRS with 250 000 existing statements
  • LRS with 500 000 existing statements
  • LRS with 1 000 000 existing statements
  • LRS with 2 000 000 existing statements

Settings

The benchmark tool must be configured in the config/trax-dev.php file.

In the following example, we define local_test with its LRS endpoint and credentials, and we indicate that we want to run unit tests 1 to 12 (i.e. all the available unit tests).

'benchmark' => [

    'local_test' => [
        'endpoint' => 'http://extendeddev.test/trax/api/8ac634df-3382-4475-bdd4-0b275b643f00',
        'username' => 'testsuite',
        'password' => 'password',
        'from' => 1,
        'to' => 12,
    ],
],

Note that the endpoint is the endpoint provided by TRAX LRS, without the /xapi/std/ at the end!

You can take a look at the Trax\Dev\Benchmarke\UseCases class if you want to understand the different unit tests and their respective configurations.
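
For example, assuming the same keys, you could add a second configuration alongside local_test that targets another LRS and only runs a subset of the unit tests (the name and values below are hypothetical):

'benchmark' => [

    'local_test' => [
        // ... as above
    ],

    'remote_test' => [
        'endpoint' => 'https://lrs.example.org/trax/api/00000000-0000-0000-0000-000000000000',
        'username' => 'benchmark',
        'password' => 'secret',
        'from' => 1,
        'to' => 6,
    ],
],

You would then select remote_test when the benchmark:run command asks you which configuration to use.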

Commands

Running a testing sequence

The following command asks you which test configuration you want to use (local_test in the example above). Then it runs the tests in order to evaluate all the use cases.

Results are displayed in the console and recorded in a log file: storage/logs/benchmark.log.

php artisan benchmark:run

Generating statements

Before running a test, you may want to generate a large number of statements in the LRS in order to evaluate its scalability.

The following command asks you how many statements you want in your LRS. The generation progress is displayed in the console. It can take a while!

php artisan benchmark:seed

Clearing the LRS

After running a test, you may want to remove all the generated xAPI data and go back to a clean LRS:

php artisan benchmark:clear

Running the full protocol

The full protocol can be run with the following command.

php artisan benchmark:macro

The full protocol steps are:

  1. Clear the LRS.
  2. Run a testing sequence.
  3. Generate 250 000 statements.
  4. Run a testing sequence.
  5. Generate 500 000 statements.
  6. Run a testing sequence.
  7. Generate 1 000 000 statements.
  8. Run a testing sequence.
  9. Generate 2 000 000 statements.
  10. Run a testing sequence.

All the steps and results are displayed in the console and recorded in the log file: storage/logs/benchmark.log.