Control-Plane Benchmark

Reference Implementation

You need four nodes, labelled CLIENT, SERVER1, SERVER2, and SUT as in the specification. Place a copy of impl-yyyy-mm-dd.tar.gz on every node, and satisfy its dependencies as listed in the README file. Throughout these instructions, replace SUTIP with the IP address the SUT uses to communicate with the CLIENT, and likewise CLIENTIP with the IP address the CLIENT uses to communicate with the SUT.

The benchmark is run by following these steps:

  1. Set up OpenSER and utilities on SUT.
  2. Set up SIPp on SERVER1 & 2 and CLIENT.
  3. OPTIONALLY Run the echo test.
  4. Calibrate the rate for the SUT.
  5. Perform the actual benchmark.

These are covered in order.

Set up OpenSER and load generator on SUT

Build the SUT and tools:

SUT> tar xzvf impl-yyyy-mm-dd.tar.gz
SUT> cd impl
SUT> ./
If you wish to place artificial background load on the SUT, first determine the load-calibration parameter with iters:
SUT> ./iters
Run SUT with the provided script:
SUT> ./ SUTIP threads memory CPUload CPUiters auxlogfile
Where:

  - threads is the number of threads openser shall use (8 or more recommended),
  - memory is the amount of memory reserved for openser (at least 32M per 1,000 users),
  - CPUload is the desired background CPU load in percent (use 0 for no load),
  - CPUiters is the number reported by iters in the previous step (0 if not needed), and
  - auxlogfile is the file where memory and CPU load statistics are collected.
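As a concrete illustration of the memory sizing rule above (32M per 1,000 users), the value can be computed in shell; the user count here is hypothetical:

```shell
# Hypothetical user count; the 32M-per-1,000-users rule is from the text above.
users=5000
# Round up to the next full block of 1,000 users, then allow 32M per block.
mem=$(( (users + 999) / 1000 * 32 ))
echo "memory=${mem}M"
```

For 5,000 users this yields memory=160M, which would be passed as the memory argument to the run script.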

Set up SIPp on SERVER1 & SERVER2

Build and run on both nodes:

SERVER1> tar xzvf impl-yyyy-mm-dd.tar.gz
SERVER1> cd impl
SERVER1> ./sipp -sn uas
SERVER2> tar xzvf impl-yyyy-mm-dd.tar.gz
SERVER2> cd impl
SERVER2> ./sipp -sn uas

This will place all the scripts and 3rd-party software in the directory impl. Further instructions are relative to that directory.

OPTIONALLY Run the echo test

Build impl on the CLIENT as well, then run the echo test:

CLIENT> tar xzvf impl-yyyy-mm-dd.tar.gz
CLIENT> cd impl
CLIENT> ./sipp -sf options.xml SUTIP

Calibrate the rate for the SUT

First, find out the CPS (Calls Per Second) rate the SUT is capable of handling, using the supplied script:

CLIENT> ./ 900 start step SUTIP CLIENTIP
Replace start with a CPS rate the SUT can easily handle, and step with approximately 2% of start. Larger values of step run faster, but are less accurate.
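The 2% rule of thumb can be computed directly in shell; the start value here is hypothetical, and note that integer arithmetic rounds very small steps down to 0, in which case a manual minimum should be used:

```shell
# Hypothetical easy starting rate; step is ~2% of start, as recommended above.
start=500
step=$(( start * 2 / 100 ))
echo "start=$start step=$step"
```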

Perform the actual benchmark

For the benchmark you will need the CPS rate from the calibration run. It is also recommended to clear the syslog before benchmarking, so that the data to be analyzed contains only the benchmark run.
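One way to clear the log before the run is to truncate it in place (truncating, rather than deleting, keeps syslogd's open file descriptor valid). A temporary file stands in for /var/log/messages here so the sketch is safe to run anywhere:

```shell
# Sketch only: on the SUT the target would be /var/log/messages or
# /var/log/syslog; a temp file is used here for safety.
log=$(mktemp)
printf 'stale entry\n' > "$log"
truncate -s 0 "$log"   # empty the file in place
wc -c < "$log"         # 0 bytes remain
rm -f "$log"
```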

After the benchmark has run, run the log analyzer on the SUT:

SUT> ./alog -o basename -b begin -e end < /var/log/messages

(or /var/log/syslog, or whichever syslog destination is in use). Here begin and end are timestamps between which the benchmark took place, and basename is the prefix for the output filenames; see "alog -h" for details.

Collect statistics and produce graphs from the analyzed log files through:
SUT> ./gencallstats.octave basename-calls.octave
SUT> ./gencallgraphs.octave basename-cpu.octave [x11 | postscript | aqua]
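The begin/end window selection that alog performs can be sketched with awk, assuming classic syslog timestamps (a fixed-width "Mon dd hh:mm:ss" prefix, so lexicographic comparison works within one month); the sample lines below are made up:

```shell
# Made-up sample lines standing in for /var/log/messages.
cat > /tmp/sample.log <<'EOF'
Mar  1 09:59:59 sut openser[123]: before the run
Mar  1 10:00:05 sut openser[123]: call setup
Mar  1 10:09:59 sut openser[123]: call teardown
Mar  1 10:20:00 sut openser[123]: after the run
EOF
# Keep only lines whose 15-character timestamp prefix falls inside the window.
awk -v b='Mar  1 10:00:00' -v e='Mar  1 10:10:00' \
    'substr($0, 1, 15) >= b && substr($0, 1, 15) <= e' /tmp/sample.log
```

Only the two lines from within the benchmark window are printed.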
CPU and memory usage statistics can be obtained by:
SUT> ./ auxlogfile begindate enddate
Where begindate and enddate should be in the form given by the SUT system's date command.
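One way to capture begindate and enddate in exactly that form is to record the output of date immediately before and after the run:

```shell
# Record the benchmark window in the SUT's own `date` format, so the values
# can be passed straight to the statistics script afterwards.
begindate=$(date)
# ... run the benchmark here ...
enddate=$(date)
echo "begin: $begindate"
echo "end:   $enddate"
```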