Optimizing performance is hard because it involves many factors, including hardware, infrastructure, software configuration, and so on.
TRAX LRS comes with a benchmark tool that simulates realistic situations in order to help you evaluate the performance of your LRS environment.
The benchmark tool simulates the following situations:
Statements are generated randomly on top of a common template which includes 3 contextual activities, as well as a contextual agent.
Actors, verbs and activities are generated with a certain level of variability: for example, we may want to draw 1 actor among a set of 100 or 10,000 potential actors. The variability is the size of the agents and activities dataset (low variability means a small dataset of agents and activities), and it may affect the performance of some requests.
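To make the idea concrete, here is a minimal sketch of how such a statement might be generated. The function name and URIs are illustrative assumptions, not the actual `Trax\Dev` implementation: the point is only to show a common template (3 contextual activities plus a contextual agent) combined with actor, verb and activity pools whose sizes control the variability.

```php
// Hypothetical sketch: draw the actor, verb and activity from pools of
// configurable size (the "variability"), on top of a fixed context template.
function makeStatement(int $actorPool, int $verbPool, int $activityPool): array
{
    return [
        'actor' => [
            // e.g. 1 actor among 100 or 10,000 potential actors
            'account' => ['homePage' => 'http://benchmark.test', 'name' => 'actor-' . rand(1, $actorPool)],
        ],
        'verb' => ['id' => 'http://benchmark.test/verbs/verb-' . rand(1, $verbPool)],
        'object' => ['id' => 'http://benchmark.test/activities/activity-' . rand(1, $activityPool)],
        'context' => [
            // Common template: 3 contextual activities and a contextual agent.
            'contextActivities' => [
                'category' => [['id' => 'http://benchmark.test/activities/category']],
                'parent'   => [['id' => 'http://benchmark.test/activities/parent']],
                'grouping' => [['id' => 'http://benchmark.test/activities/grouping']],
            ],
            'instructor' => [
                'account' => ['homePage' => 'http://benchmark.test', 'name' => 'instructor'],
            ],
        ],
    ];
}
```

With a small actor pool, the same actors recur across many statements, which is exactly what changes the cost of some queries.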
For each use case, the benchmark makes as many requests as possible during 10 seconds, then keeps the average execution time.
When posting batches of statements, the execution time is divided by the number of statements in the batch, so we get the average time to record 1 statement.
This protocol is executed in 5 contexts in order to evaluate the scalability:
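The measurement protocol above can be sketched as follows. This is not the tool's actual code: the duration is a parameter here (the real tool uses 10 seconds), and the request is an arbitrary callable, so the sketch stays self-contained.

```php
// Minimal sketch of the measurement protocol: run the request as many
// times as possible within $seconds, then return the average time.
// For a batch POST, divide by the batch size to get the average time
// to record 1 statement.
function averageTime(callable $request, float $seconds, int $batchSize = 1): float
{
    $count = 0;
    $start = microtime(true);
    while (microtime(true) - $start < $seconds) {
        $request();
        $count++;
    }
    $elapsed = microtime(true) - $start;
    return $elapsed / $count / $batchSize;
}
```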
The benchmark tool must be configured in the configuration file, under the `benchmark` key. In the following example, we define `local_test` with its LRS endpoint and credentials, and we indicate that we want to run the unit tests 1 to 12 (i.e. all the available unit tests).
```php
'benchmark' => [
    'local_test' => [
        'endpoint' => 'http://extendeddev.test/trax/api/8ac634df-3382-4475-bdd4-0b275b643f00',
        'username' => 'testsuite',
        'password' => 'password',
        'from' => 1,
        'to' => 12,
    ],
],
```
Note that the endpoint is the endpoint provided by TRAX LRS, without the `/xapi/std/` at the end!
You can take a look at the `Trax\Dev\Benchmarke\UseCases` class if you want to understand the different unit tests and their respective configurations.
The following command asks you which test configuration you want to use (`local_test` in our example above). Then, it runs the tests in order to evaluate all the use cases. Results are displayed in the console and recorded in a log file:
```shell
php artisan benchmark:run
```
Before running a test, you may want to generate a large number of statements in the LRS in order to evaluate its scalability.
The following command asks you how many statements you want in your LRS. The generation progress is displayed in the console. It can take time!
```shell
php artisan benchmark:seed
```
After running a test, you may want to remove all the generated xAPI data and go back to a clean LRS:
```shell
php artisan benchmark:clear
```
The full protocol can be run with the following command:
```shell
php artisan benchmark:macro
```
The full protocol steps are:
All the steps and results are displayed in the console and recorded in the log file: