Benchmark Tool


Introduction

Optimizing performance is hard because it involves many factors, including hardware, infrastructure, and software configuration.

TRAX LRS comes with a benchmark tool that simulates realistic situations in order to help you evaluate the performance of your LRS environment.

Use Cases

The benchmark tool simulates the following situations.

UC1. Posting statements one by one

That's what most LRPs (Learning Record Providers) that send statements in real time do. This use case is divided into 2 sub-cases:

  1. Statements with low variability of actors and activities
  2. Statements with high variability of actors and activities
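
To make the pattern concrete, here is a minimal sketch of such a request using the Guzzle HTTP client. The endpoint, credentials, actor and activity values are placeholders, not values shipped with TRAX LRS.

require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Hypothetical endpoint and credentials: replace with your own LRS values.
$client = new Client([
    'base_uri' => 'http://my-lrs.test/xapi/std/',
    'auth' => ['my_username', 'my_password'],
    'headers' => ['X-Experience-API-Version' => '1.0.3'],
]);

// UC1: each statement is posted individually.
$statement = [
    'actor' => ['mbox' => 'mailto:learner1@example.com'],
    'verb' => ['id' => 'http://adlnet.gov/expapi/verbs/completed'],
    'object' => ['id' => 'http://example.com/activities/course-1'],
];

$client->post('statements', ['json' => $statement]);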

UC2. Posting small batches of statements

That's what some LRPs do when they try to optimize data flows. This use case is divided into 2 sub-cases:

  1. Statements with low variability of actors and activities
  2. Statements with high variability of actors and activities

UC3. Posting large batches of statements

This is a usual situation when transferring data between 2 LRSs. This use case is divided into 2 sub-cases:

  1. Statements with low variability of actors and activities
  2. Statements with high variability of actors and activities
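
Reusing the client from the sketch above, a batch is simply an array of statements sent in a single POST, as allowed by the xAPI specification. The batch size below is arbitrary.

// UC2/UC3: a whole batch of statements is recorded with a single POST.
$batch = [];
for ($i = 1; $i <= 100; $i++) {
    $batch[] = [
        'actor' => ['mbox' => "mailto:learner{$i}@example.com"],
        'verb' => ['id' => 'http://adlnet.gov/expapi/verbs/completed'],
        'object' => ['id' => 'http://example.com/activities/course-1'],
    ];
}

$client->post('statements', ['json' => $batch]);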

UC4. Posting statements including groups of learners

This is a specific situation which may have a significant impact on performance.
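
As a reminder, an xAPI group actor looks like the following sketch; the group name and members are placeholders. Such a statement is posted exactly like the ones above.

// UC4: the actor is an xAPI Group rather than a single Agent.
$statement = [
    'actor' => [
        'objectType' => 'Group',
        'name' => 'Team A',
        'member' => [
            ['mbox' => 'mailto:learner1@example.com'],
            ['mbox' => 'mailto:learner2@example.com'],
        ],
    ],
    'verb' => ['id' => 'http://adlnet.gov/expapi/verbs/completed'],
    'object' => ['id' => 'http://example.com/activities/course-1'],
];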

UC5. Getting statements filtered by the agent

This is a very common use of the standard API.

UC6. Getting statements filtered by the activity

This is a very common use of the standard API.

UC7. Getting statements filtered by a combination of agent and activity

This is a very common use of the standard API.
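
These filters map to the agent and activity parameters of the standard GET statements API. Here is a sketch reusing the client defined above; note that, per the xAPI specification, the agent filter is a JSON-encoded object.

// UC5 to UC7: filtering statements by agent and/or activity.
$response = $client->get('statements', ['query' => [
    'agent' => json_encode(['mbox' => 'mailto:learner1@example.com']),
    'activity' => 'http://example.com/activities/course-1',
]]);

$statements = json_decode($response->getBody(), true)['statements'];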

UC8. Getting statements filtered by the agent with the "related" option

This is a specific use of the standard API which may have a significant impact on performances.

UC9. Getting statements filtered by the activity with the "related" option

This is a specific use of the standard API which may have a significant impact on performances.
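
The "related" options map to the related_agents and related_activities parameters of the standard API, which broaden the match to context and sub-statement data. For example:

// UC8/UC9: the 'related' options extend the search to context
// and sub-statement agents/activities, which makes them more costly.
$client->get('statements', ['query' => [
    'agent' => json_encode(['mbox' => 'mailto:learner1@example.com']),
    'related_agents' => 'true',
]]);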

Protocol

Statements are generated randomly on top of a common template which includes 3 contextual activities, as well as a contextual agent.

Actors, verbs and activities are generated with a certain level of variability. For example, we may want to draw 1 actor from a set of 100 or 10 000 potential actors. This variability may affect the performance of some requests.

For each use case, the benchmark makes as many requests as possible during 10 seconds. Then, it keeps the average execution time.

When posting batches of statements, the execution time is divided by the number of statements in the batch, so we get the average time to record 1 statement.
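
For example, if a batch of 100 statements is posted in 300 milliseconds, the benchmark records an average of 3 milliseconds per statement.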

This protocol is executed in 5 contexts in order to evaluate the scalability:

  • Empty LRS
  • LRS with 250 000 existing statements
  • LRS with 500 000 existing statements
  • LRS with 1 000 000 existing statements
  • LRS with 2 000 000 existing statements

Settings

The benchmark tool must be configured in the config/trax-dev.php file.

In the following example, we define local_test with its LRS endpoint and credentials, and we indicate that we want to run the unit tests 1 to 12 (i.e. all the available unit tests).

'benchmark' => [

    'local_test' => [
        // xAPI standard endpoint of the target LRS.
        'endpoint' => 'http://extendeddev.test/trax/api/8ac634df-3382-4475-bdd4-0b275b643f00/xapi/std',
        // Basic HTTP authentication credentials.
        'username' => 'testsuite',
        'password' => 'password',
        // Range of unit tests to run (1 to 12 = all of them).
        'from' => 1,
        'to' => 12,
    ],
],

You can take a look at the Trax\Dev\Benchmark\Scenario class if you want to understand the different unit tests and their respective configurations.

Commands

Running a testing sequence

The following command asks you which test configuration you want to use (local_test in our above example). Then, it runs the tests in order to evaluate all the use cases.

Results are displayed in the console and recorded in a log file: storage/logs/benchmark.log.

php artisan benchmark:run

Generating statements

Before running a test, you may want to generate a large number of statements into the LRS in order to evaluate its scalability.

The following command asks you how many statements you want in your LRS. The generation progress is displayed in the console. It can take a while!

php artisan benchmark:seed

Clearing the LRS

After running a test, you may want to remove all the generated xAPI data and go back to a clean LRS:

php artisan benchmark:clear

Running the full protocol

The full protocol can be run with the following command.

php artisan benchmark:macro

The full protocol steps are:

  1. Clear the LRS.
  2. Run a testing sequence.
  3. Generate 250 000 statements.
  4. Run a testing sequence.
  5. Generate 500 000 statements.
  6. Run a testing sequence.
  7. Generate 1 000 000 statements.
  8. Run a testing sequence.
  9. Generate 2 000 000 statements.
  10. Run a testing sequence.

All the steps and results are displayed in the console and recorded in the log file: storage/logs/benchmark.log.

Results

TRAX LRS 1.0 / 2.0

To compare TRAX LRS 1.0 and 2.0, we used the basic configuration profile. We ran the tests with a MySQL 5.7 database for both versions, on the same server configuration.

Compared to version 1.0, version 2.0 is significantly faster during writing operations, and slightly slower during reading operations. So overall, it's a good result for version 2.0!

"Relational" configuration profile

Currently, the relational configuration profile is significantly slower both for reading and writing operations.

It's not surprising for writing operations because populating additional tables and indexes has a cost.

It's more surprising for reading operations, but we are still working on it and have several optimization strategies to test.

"Full Pseudo" configuration profile

The full_pseudo configuration profile is significantly slower both for reading and writing operations.

This is an expected behavior because pseudonymization operations have a reading and writing cost.

Impact of PHP versions

Using TRAX LRS 2.0 with PHP 7.4 is slightly faster than using it with PHP 7.2, but nothing spectacular.

We plan to support PHP 8 in the near future. We will see if the new JIT compiler has an impact.

Impact of database engine

We currently run all our tests with MySQL 5.7.

We will compare it with MySQL 8.0, MariaDB 10.4 and PostgreSQL 12 in the near future.