Optimizing performance is hard because it involves many factors, including hardware, infrastructure, software configuration and so on.
TRAX LRS comes with a benchmark tool that simulates realistic situations in order to help you evaluate the performance of your LRS environment.
The benchmark tool simulates the following situations.
This is what most LRPs (Learning Record Providers) do when they send statements in real time. This use case is divided into 2 sub-cases:
This is what some LRPs do when they try to optimize data flows. This use case is divided into 2 sub-cases:
This is a usual situation when transferring data between 2 LRSs. This use case is divided into 2 sub-cases:
This is a specific situation which may have a significant impact on performance.
This is a very common use of the standard API.
This is a very common use of the standard API.
This is a very common use of the standard API.
This is a specific use of the standard API which may have a significant impact on performance.
This is a specific use of the standard API which may have a significant impact on performance.
Statements are generated randomly from a common template which includes 3 contextual activities, as well as a contextual agent.
Actors, verbs and activities are generated with a certain level of variability. For example, we may want to draw 1 actor among a set of 100 or 10000 potential actors. This variability may affect the performance of some requests.
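As a rough sketch, the generation logic could look like the following. Note that the function name, the pool sizes and the exact JSON structure are illustrative assumptions, not TRAX LRS internals:

```python
import random
import uuid

def make_statement(actor_pool=100, verb_pool=10, activity_pool=1000):
    """Build one pseudo-random xAPI statement from a shared template.

    The pool sizes control the variability: e.g. actor_pool=100 draws
    the actor among 100 potential actors, actor_pool=10000 among 10000.
    """
    actor_id = random.randrange(actor_pool)
    verb_id = random.randrange(verb_pool)
    activity_id = random.randrange(activity_pool)
    return {
        "id": str(uuid.uuid4()),
        "actor": {"mbox": f"mailto:actor{actor_id}@example.org"},
        "verb": {"id": f"http://example.org/verbs/{verb_id}"},
        "object": {"id": f"http://example.org/activities/{activity_id}"},
        "context": {
            # Common template: 3 contextual activities and a contextual agent.
            "contextActivities": {
                "category": [
                    {"id": f"http://example.org/category/{i}"} for i in range(3)
                ]
            },
            "instructor": {"mbox": "mailto:instructor@example.org"},
        },
    }

statement = make_statement(actor_pool=100)
```

A lower variability (small pools) means the same actors and activities recur often, which may let database indexes behave very differently than with a high variability.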
For each use case, the benchmark makes as many requests as possible during 10 seconds. Then, it keeps the average execution time.
When posting batches of statements, the execution time is divided by the number of statements in the batch. So we get the average time to record 1 statement.
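The measurement loop described above can be sketched as follows (a minimal illustration, assuming a `request_fn` callable that performs one benchmark request; this is not the actual TRAX LRS implementation):

```python
import time

def bench(request_fn, duration=10.0, batch_size=1):
    """Call request_fn as many times as possible for `duration` seconds,
    then return the average time to record 1 statement."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        request_fn()
        count += 1
    elapsed = time.perf_counter() - start
    avg_per_request = elapsed / count
    # For batched posts, divide by the batch size to get the
    # average time per statement rather than per request.
    return avg_per_request / batch_size

# Usage sketch: bench(lambda: post_batch(statements), batch_size=10)
```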
This protocol is executed in 5 contexts in order to evaluate the scalability:
The benchmark tool must be configured in the config/trax-dev.php file.
In the following example, we define local_test with its LRS endpoint and credentials, and we indicate that we want to run unit tests 1 to 12 (i.e. all the available unit tests).
'benchmark' => [
'local_test' => [
'endpoint' => 'http://extendeddev.test/trax/api/8ac634df-3382-4475-bdd4-0b275b643f00/xapi/std',
'username' => 'testsuite',
'password' => 'password',
'from' => 1,
'to' => 12,
],
],
You can take a look at the Trax\Dev\Benchmarke\Scenario class if you want to understand the different unit tests and their respective configurations.
The following command asks you which test configuration you want to use (local_test in our above example). Then, it runs the tests in order to evaluate all the use cases.
Results are displayed in the console and recorded in a log file: storage/logs/benchmark.log.
php artisan benchmark:run
Before running a test, you may want to generate a large number of statements in the LRS in order to evaluate its scalability.
The following command asks you for the number of statements you want in your LRS. The generation progress is displayed in the console. It can take time!
php artisan benchmark:seed
After running a test, you may want to remove all the generated xAPI data and go back to a clean LRS:
php artisan benchmark:clear
The full protocol can be run with the following command.
php artisan benchmark:macro
The full protocol steps are:
All the steps and results are displayed in the console and recorded in the log file: storage/logs/benchmark.log.
To compare TRAX LRS 1.0 and 2.0, we must use the basic configuration profile.
We ran the test with a MySQL 5.7 database for both versions, on the same server configuration.
Compared to version 1.0, version 2.0 is significantly faster for writing operations, and slightly slower for reading operations. Overall, this is a good result for version 2.0!
Currently, the relational configuration profile is significantly slower for both reading and writing operations.
This is not surprising for writing operations, because populating additional tables and indexes has a cost.
It is more surprising for reading operations, but we are still working on it and have several optimization strategies to test.
The full_pseudo configuration profile is significantly slower for both reading and writing operations.
This is expected behavior, because pseudonymization has both a reading and a writing cost.
Using TRAX LRS 2.0 with PHP 7.4 is slightly faster than using it with PHP 7.2, but nothing spectacular.
We plan to support PHP 8 in the near future. We will see if the new JIT compiler has an impact.
We currently run all our tests with MySQL 5.7. We will compare it with MySQL 8.0, MariaDB 10.4 and PostgreSQL 12 in the near future.