Add performance tests for GObject primitives

These are basic performance tests for a couple of basic GObject
primitives:

* construction of simple objects. Simple is a bare GObject-derived
  class with no properties, signals or interfaces.
* construction of complex objects. Complex is a GObject subclass
  with construct properties, normal properties, signals, and
  implements an interface.
* run-time type checks of complex objects
* signal emissions

A lot of care is taken to try to make the results reproducible. Each
test is run for multiple "rounds", where we try to make each round
"not too short", so that it is significant with respect to timer
accuracy, but also "not too long", so that some other random event on
the system (interrupts, another process being scheduled, etc.) is
less likely to happen during the round.

The current target round time is 4 msecs, which was picked without
rigour, but seems small with respect to e.g. scheduler time slices.

For each test we then run rounds of the calculated size for 60
seconds, and report the performance based on the minimal time of one
round. The model here is that any random activity that happens during
a round can only slow it down; nothing can make it go faster, so the
minimal time is the best estimate of how fast one round goes.

The result is not ideal: even on an "idle" system the results vary
from round to round, but the variation seems to be less than 1%. So,
any performance difference over 1% reported by this test is probably
statistically significant.

Additionally the tests can be run with or without threads being
initialized. The script tests/gobject/run-performance.sh makes it
easy to produce a performance report for the current checkout.

https://bugzilla.gnome.org/show_bug.cgi?id=557100
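As a rough sketch (not code from this file), the "minimum of many
rounds" model described above amounts to the following, where
run_one_round() stands in for a hypothetical workload scaled to the
target round time:

  static double
  measure_min_round (guint64 num_rounds)
  {
    GTimer *timer = g_timer_new ();
    double min_elapsed = G_MAXDOUBLE;
    guint64 i;

    for (i = 0; i < num_rounds; i++)
      {
        g_timer_start (timer);
        run_one_round ();  /* hypothetical workload of fixed size */
        g_timer_stop (timer);
        min_elapsed = MIN (min_elapsed, g_timer_elapsed (timer, NULL));
      }

    g_timer_destroy (timer);
    return min_elapsed;
  }

The file below implements this model, plus warm-up and round-size
estimation.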

/* GObject - GLib Type, Object, Parameter and Signal Library
 * Copyright (C) 2009 Red Hat, Inc.
 * Copyright (C) 2022 Canonical Ltd.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General
 * Public License along with this library; if not, see <http://www.gnu.org/licenses/>.
 */

#include <math.h>
#include <string.h>
#include <glib-object.h>
#include "../testcommon.h"

#define WARM_UP_N_RUNS 50

/* Always warm up for at least 2 seconds.
 *
 * Despite all the efforts, there still seems to be a lot of noise in
 * the performance measurements. In particular, the first iterations
 * seem to run faster. Maybe that is because the kernel has not yet
 * determined that the process is CPU-bound and is less likely to
 * schedule it out, or maybe burning the cycles heats up the CPU and it
 * gets throttled after a while. It is unclear why, and even whether
 * this really happens, but observation suggests it does. Hence, more
 * warm up:
 *
 * - the first time we enter a test, keep the CPU busy for at least
 *   2 seconds. This additional warm up (WARM_UP_ALWAYS_SEC) is global,
 *   not per test.
 * - for each test, ignore the first 5% of the runs, since those tend
 *   to run faster and thus skew the results.
 * - if the user specifies a "--factor", the warm up operations are the
 *   same and independent from external factors (such as time
 *   measurements).
 *
 * This matters most when running the executable twice in a row and
 * comparing the results. */
#define WARM_UP_ALWAYS_SEC 2.0

#define ESTIMATE_ROUND_TIME_N_RUNS 5
#define DEFAULT_TEST_TIME 15 /* seconds */

/* The time we want each round to take, in seconds. This should be
 * large enough compared to the timer resolution, but small enough
 * that a random slow event on the system is likely to miss the
 * measurement window. */
#define TARGET_ROUND_TIME 0.008
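/* Illustrative arithmetic (not from the original file): with the
 * default test length of 15 seconds and this 8 msec target, run_test()
 * below executes (guint64) (15 / 0.008) + 1 = 1876 rounds per test,
 * each scaled by "factor" so that one round takes roughly
 * TARGET_ROUND_TIME. */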

static gboolean verbose = FALSE;
static gboolean quiet = FALSE;
static int test_length = DEFAULT_TEST_TIME;

/* Optional fixed "factor" for sample runs.
 *
 * By default, a run factor is estimated for each test. This means that
 * if you run the binary under `perf`, the results are not comparable,
 * as the run time depends on the estimated factor. Setting a fixed
 * factor avoids that.
 *
 * There is only one factor argument for all tests, while you would
 * quite possibly want to run each test with a factor appropriate for
 * it. On the other hand, all tests should be tuned so that the same
 * factor gives a similar test duration, so this may not be a concern,
 * or the tests should be adjusted. In any case, the option is most
 * useful when running only one test explicitly. You can get a suitable
 * factor by running the test once with "--verbose".
 *
 * Another use case is running the benchmark under valgrind. Valgrind
 * slows the run down so much that the estimated factor would be quite
 * off, and the chosen code paths would differ from a real run. With a
 * fixed factor, the timing measurements don't affect the executed
 * code. */
static double test_factor = 0;
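/* Usage sketch (invocation and values are illustrative, not from this
 * file): run once with --verbose to see the estimated factor, then pin
 * it for reproducible profiling, e.g.:
 *
 *   ./performance --verbose simple-construction
 *   ./performance --factor 20.5 simple-construction
 */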
static GTimer *global_timer = NULL;

static GOptionEntry cmd_entries[] = {
  {"verbose", 'v', 0, G_OPTION_ARG_NONE, &verbose,
   "Print extra information", NULL},
  {"quiet", 'q', 0, G_OPTION_ARG_NONE, &quiet,
   "Print less information", NULL},
  {"seconds", 's', 0, G_OPTION_ARG_INT, &test_length,
   "Time to run each test in seconds", NULL},
  {"factor", 'f', 0, G_OPTION_ARG_DOUBLE, &test_factor,
   "Use a fixed factor for sample runs (also $GLIB_PERFORMANCE_FACTOR)", NULL},
  G_OPTION_ENTRY_NULL
};
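
/* Sketch of standard GOption usage (the actual main() is outside this
 * excerpt, so the exact wiring is an assumption):
 *
 *   GOptionContext *context = g_option_context_new ("[TEST...]");
 *   GError *error = NULL;
 *   g_option_context_add_main_entries (context, cmd_entries, NULL);
 *   g_option_context_parse (context, &argc, &argv, &error);
 */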

typedef struct _PerformanceTest PerformanceTest;

struct _PerformanceTest {
  const char *name;
  gpointer extra_data;

  gpointer (*setup) (PerformanceTest *test);
  void (*init) (PerformanceTest *test,
                gpointer data,
                double factor);
  void (*run) (PerformanceTest *test,
               gpointer data);
  void (*finish) (PerformanceTest *test,
                  gpointer data);
  void (*teardown) (PerformanceTest *test,
                    gpointer data);
  void (*print_result) (PerformanceTest *test,
                        gpointer data,
                        double time);
};
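
/* Example (hypothetical, not a test from this file): a benchmark plugs
 * into the harness by filling in this vtable; the names below are
 * illustrative only.
 *
 *   static PerformanceTest my_test = {
 *     "my-test",
 *     NULL,
 *     my_setup,
 *     my_init,
 *     my_run,
 *     my_finish,
 *     my_teardown,
 *     my_print_result
 *   };
 *
 * run_test (&my_test) then warms up, estimates a scaling factor, and
 * reports based on the best observed round. */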

static void
run_test (PerformanceTest *test)
{
  gpointer data = NULL;
  guint64 i, num_rounds;
  double elapsed, min_elapsed, max_elapsed, avg_elapsed, factor;
  GTimer *timer;

  if (verbose || !quiet)
    g_print ("Running test %s\n", test->name);

  /* Set up test */
  timer = g_timer_new ();
  data = test->setup (test);

  if (verbose)
    g_print ("Warming up\n");

  g_timer_start (timer);

  /* Warm up the test by doing a few runs */
  for (i = 0; TRUE; i++)
    {
      test->init (test, data, 1.0);
      test->run (test, data);
      test->finish (test, data);

      if (test_factor > 0)
        {
          /* The caller specified a constant factor. That mostly makes
           * sense, to keep the test run independent of external
           * factors. In this case, don't make the warm up dependent on
           * WARM_UP_ALWAYS_SEC. */
        }
      else if (global_timer)
        {
          if (g_timer_elapsed (global_timer, NULL) < WARM_UP_ALWAYS_SEC)
            {
              /* We always warm up for a certain time, during which we
               * keep the CPU busy.
               *
               * Note that when we run multiple tests, this is only
               * performed once, for the first test. */
              continue;
            }
          g_clear_pointer (&global_timer, g_timer_destroy);
        }

      if (i >= WARM_UP_N_RUNS)
        break;

      if (test_factor == 0 && g_timer_elapsed (timer, NULL) > test_length / 10.0)
        {
          /* The warm up should not take longer than 10 % of the entire
           * test run. Note that the warm up time for WARM_UP_ALWAYS_SEC
           * has already passed. */
          break;
        }
    }

  g_timer_stop (timer);
  elapsed = g_timer_elapsed (timer, NULL);

  if (verbose)
    {
      g_print ("Warm up time: %.2f secs\n", elapsed);
      g_print ("Estimating round time\n");
    }

  min_elapsed = 0;

  if (test_factor > 0)
    {
      factor = test_factor;
    }
  else
    {
      /* Estimate time for one run by doing a few test rounds. */
      for (i = 0; i < ESTIMATE_ROUND_TIME_N_RUNS; i++)
        {
          test->init (test, data, 1.0);
          g_timer_start (timer);
          test->run (test, data);
          g_timer_stop (timer);
          test->finish (test, data);

          elapsed = g_timer_elapsed (timer, NULL);
          if (i == 0)
            min_elapsed = elapsed;
          else
            min_elapsed = MIN (min_elapsed, elapsed);
        }

      factor = TARGET_ROUND_TIME / min_elapsed;
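      /* Illustrative example (values hypothetical): if the best of the
       * ESTIMATE_ROUND_TIME_N_RUNS probe runs took 0.4 msec at factor
       * 1.0, then factor = 0.008 / 0.0004 = 20, i.e. each measured
       * round below is scaled up to roughly TARGET_ROUND_TIME. */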
    }

  if (verbose)
    g_print ("Uncorrected round time: %.4f msecs, correction factor %.2f\n", 1000*min_elapsed, factor);

  /* Calculate number of rounds needed */
  num_rounds = (guint64) (test_length / TARGET_ROUND_TIME) + 1;

  if (verbose)
    g_print ("Running %"G_GINT64_MODIFIER"d rounds\n", num_rounds);

  /* Run the test */
  avg_elapsed = 0.0;
  min_elapsed = 1e100;
  max_elapsed = 0.0;
  for (i = 0; i < num_rounds; i++)
    {
      test->init (test, data, factor);
      g_timer_start (timer);
      test->run (test, data);
      g_timer_stop (timer);
      test->finish (test, data);

      if (i < num_rounds / 20)
        {
          /* The first 5% are additional warm up. Ignore. */
          continue;
        }

      elapsed = g_timer_elapsed (timer, NULL);

      min_elapsed = MIN (min_elapsed, elapsed);
      max_elapsed = MAX (max_elapsed, elapsed);
      avg_elapsed += elapsed;
    }

  if (num_rounds > 1)
    avg_elapsed = avg_elapsed / num_rounds;

  if (verbose)
    {
      g_print ("Minimum corrected round time: %.2f msecs\n", min_elapsed * 1000);
      g_print ("Maximum corrected round time: %.2f msecs\n", max_elapsed * 1000);
      g_print ("Average corrected round time: %.2f msecs\n", avg_elapsed * 1000);
    }

  /* Print the results */
  g_print ("%s: ", test->name);
  test->print_result (test, data, min_elapsed);

  /* Tear down */
  test->teardown (test, data);
  g_timer_destroy (timer);
}

/*************************************************************
 * Simple object is a very simple small GObject subclass
 * with no properties, no signals, implementing no interfaces
 *************************************************************/

static GType simple_object_get_type (void);
#define SIMPLE_TYPE_OBJECT (simple_object_get_type ())
typedef struct _SimpleObject SimpleObject;
typedef struct _SimpleObjectClass SimpleObjectClass;

struct _SimpleObject
{
  GObject parent_instance;
  int val;
};

struct _SimpleObjectClass
{
  GObjectClass parent_class;
};

G_DEFINE_TYPE (SimpleObject, simple_object, G_TYPE_OBJECT)
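
/* G_DEFINE_TYPE above generates simple_object_get_type () and the static
 * simple_object_parent_class pointer that the finalizer below chains up
 * through. */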

static void
simple_object_finalize (GObject *object)
{
  G_OBJECT_CLASS (simple_object_parent_class)->finalize (object);
}

static void
simple_object_class_init (SimpleObjectClass *class)
{
  GObjectClass *object_class = G_OBJECT_CLASS (class);

  object_class->finalize = simple_object_finalize;
}

static void
simple_object_init (SimpleObject *simple_object)
{
  simple_object->val = 42;
}
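
/* Illustrative sketch, not part of the benchmark machinery: one round of
 * the simple-object construction test is essentially many repetitions of
 * the following pair of calls:
 *
 *   GObject *obj = g_object_new (SIMPLE_TYPE_OBJECT, NULL);
 *   g_object_unref (obj);
 */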

typedef struct _TestIfaceClass TestIfaceClass;
typedef struct _TestIfaceClass TestIface1Class;
typedef struct _TestIfaceClass TestIface2Class;
typedef struct _TestIfaceClass TestIface3Class;
typedef struct _TestIfaceClass TestIface4Class;
typedef struct _TestIfaceClass TestIface5Class;
typedef struct _TestIface TestIface;

struct _TestIfaceClass
{
  GTypeInterface base_iface;
  void (*method) (TestIface *obj);
};

static GType test_iface1_get_type (void);
static GType test_iface2_get_type (void);
static GType test_iface3_get_type (void);
static GType test_iface4_get_type (void);
static GType test_iface5_get_type (void);
#define TEST_TYPE_IFACE1 (test_iface1_get_type ())
#define TEST_TYPE_IFACE2 (test_iface2_get_type ())
#define TEST_TYPE_IFACE3 (test_iface3_get_type ())
#define TEST_TYPE_IFACE4 (test_iface4_get_type ())
#define TEST_TYPE_IFACE5 (test_iface5_get_type ())

static DEFINE_IFACE (TestIface1, test_iface1, NULL, NULL)
static DEFINE_IFACE (TestIface2, test_iface2, NULL, NULL)
static DEFINE_IFACE (TestIface3, test_iface3, NULL, NULL)
static DEFINE_IFACE (TestIface4, test_iface4, NULL, NULL)
static DEFINE_IFACE (TestIface5, test_iface5, NULL, NULL)
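
/* DEFINE_IFACE is a local helper macro (its definition is not shown in this
 * excerpt) that registers a bare GTypeInterface. As a hedged sketch, calling
 * such an interface method on an implementing instance goes through the
 * interface vtable:
 *
 *   TestIfaceClass *iface = G_TYPE_INSTANCE_GET_INTERFACE (obj, TEST_TYPE_IFACE1, TestIfaceClass);
 *   iface->method ((TestIface *) obj);
 */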

/*************************************************************
 * Complex object is a GObject subclass with properties,
 * construct properties and signals, implementing an interface.
 *************************************************************/

static GType complex_object_get_type (void);
#define COMPLEX_TYPE_OBJECT (complex_object_get_type ())
typedef struct _ComplexObject ComplexObject;
typedef struct _ComplexObjectClass ComplexObjectClass;

struct _ComplexObject
{
  GObject parent_instance;
  int val1;
  char *val2;
};

struct _ComplexObjectClass
{
  GObjectClass parent_class;

  void (*signal) (ComplexObject *obj);
  void (*signal_empty) (ComplexObject *obj);
};

static void complex_test_iface_init (gpointer g_iface,
                                     gpointer iface_data);

G_DEFINE_TYPE_EXTENDED (ComplexObject, complex_object,
                        G_TYPE_OBJECT, 0,
                        G_IMPLEMENT_INTERFACE (TEST_TYPE_IFACE1, complex_test_iface_init)
                        G_IMPLEMENT_INTERFACE (TEST_TYPE_IFACE2, complex_test_iface_init)
                        G_IMPLEMENT_INTERFACE (TEST_TYPE_IFACE3, complex_test_iface_init)
                        G_IMPLEMENT_INTERFACE (TEST_TYPE_IFACE4, complex_test_iface_init)
                        G_IMPLEMENT_INTERFACE (TEST_TYPE_IFACE5, complex_test_iface_init))

#define COMPLEX_OBJECT(object) (G_TYPE_CHECK_INSTANCE_CAST ((object), COMPLEX_TYPE_OBJECT, ComplexObject))

enum {
  PROP_0,
  PROP_VAL1,
  PROP_VAL2,
  N_PROPERTIES
};

static GParamSpec *pspecs[N_PROPERTIES] = { NULL, };

enum {
  COMPLEX_SIGNAL,
  COMPLEX_SIGNAL_EMPTY,
  COMPLEX_SIGNAL_GENERIC,
  COMPLEX_SIGNAL_GENERIC_EMPTY,
  COMPLEX_SIGNAL_ARGS,
  COMPLEX_LAST_SIGNAL
};

static guint complex_signals[COMPLEX_LAST_SIGNAL] = { 0 };
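
/* The ids stored above are filled in by g_signal_new () in class_init
 * below; emitting by id via g_signal_emit () avoids a per-emission
 * signal-name lookup. */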

static void
complex_object_finalize (GObject *object)
{
  ComplexObject *c = COMPLEX_OBJECT (object);

  g_free (c->val2);

  G_OBJECT_CLASS (complex_object_parent_class)->finalize (object);
}

static void
complex_object_set_property (GObject      *object,
                             guint         prop_id,
                             const GValue *value,
                             GParamSpec   *pspec)
{
  ComplexObject *complex = COMPLEX_OBJECT (object);

  switch (prop_id)
    {
    case PROP_VAL1:
      complex->val1 = g_value_get_int (value);
      break;
    case PROP_VAL2:
      g_free (complex->val2);
      complex->val2 = g_value_dup_string (value);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
    }
}

static void
complex_object_get_property (GObject    *object,
                             guint       prop_id,
                             GValue     *value,
                             GParamSpec *pspec)
{
  ComplexObject *complex = COMPLEX_OBJECT (object);

  switch (prop_id)
    {
    case PROP_VAL1:
      g_value_set_int (value, complex->val1);
      break;
    case PROP_VAL2:
      g_value_set_string (value, complex->val2);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
    }
}

static void
complex_object_real_signal (ComplexObject *obj)
{
}

static void
complex_object_class_init (ComplexObjectClass *class)
{
  GObjectClass *object_class = G_OBJECT_CLASS (class);

  object_class->finalize = complex_object_finalize;
  object_class->set_property = complex_object_set_property;
  object_class->get_property = complex_object_get_property;

  class->signal = complex_object_real_signal;

  complex_signals[COMPLEX_SIGNAL] =
    g_signal_new ("signal",
                  G_TYPE_FROM_CLASS (object_class),
                  G_SIGNAL_RUN_FIRST,
                  G_STRUCT_OFFSET (ComplexObjectClass, signal),
                  NULL, NULL,
                  g_cclosure_marshal_VOID__VOID,
                  G_TYPE_NONE, 0);

  complex_signals[COMPLEX_SIGNAL_EMPTY] =
    g_signal_new ("signal-empty",
                  G_TYPE_FROM_CLASS (object_class),
                  G_SIGNAL_RUN_FIRST,
                  G_STRUCT_OFFSET (ComplexObjectClass, signal_empty),
                  NULL, NULL,
                  g_cclosure_marshal_VOID__VOID,
                  G_TYPE_NONE, 0);

  complex_signals[COMPLEX_SIGNAL_GENERIC] =
    g_signal_new ("signal-generic",
                  G_TYPE_FROM_CLASS (object_class),
                  G_SIGNAL_RUN_FIRST,
                  G_STRUCT_OFFSET (ComplexObjectClass, signal),
                  NULL, NULL,
                  NULL,
                  G_TYPE_NONE, 0);

  complex_signals[COMPLEX_SIGNAL_GENERIC_EMPTY] =
    g_signal_new ("signal-generic-empty",
                  G_TYPE_FROM_CLASS (object_class),
                  G_SIGNAL_RUN_FIRST,
                  G_STRUCT_OFFSET (ComplexObjectClass, signal_empty),
                  NULL, NULL,
                  NULL,
                  G_TYPE_NONE, 0);

  complex_signals[COMPLEX_SIGNAL_ARGS] =
    g_signal_new ("signal-args",
                  G_TYPE_FROM_CLASS (object_class),
                  G_SIGNAL_RUN_FIRST,
                  G_STRUCT_OFFSET (ComplexObjectClass, signal),
                  NULL, NULL,
                  g_cclosure_marshal_VOID__UINT_POINTER,
                  G_TYPE_NONE, 2, G_TYPE_UINT, G_TYPE_POINTER);

  pspecs[PROP_VAL1] = g_param_spec_int ("val1", "val1", "val1",
                                        0, G_MAXINT, 42,
                                        G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT | G_PARAM_READWRITE);
  pspecs[PROP_VAL2] = g_param_spec_string ("val2", "val2", "val2",
                                           NULL,
                                           G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

  g_object_class_install_properties (object_class, N_PROPERTIES, pspecs);
}
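
/* Illustrative sketch, assumed usage rather than test code: with the class
 * initialized, the properties and signals registered above can be exercised
 * like this:
 *
 *   ComplexObject *obj = g_object_new (COMPLEX_TYPE_OBJECT,
 *                                      "val1", 7,
 *                                      "val2", "some string",
 *                                      NULL);
 *   g_signal_emit (obj, complex_signals[COMPLEX_SIGNAL], 0);
 *   g_object_unref (obj);
 */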

static void
complex_object_iface_method (TestIface *obj)
{
  ComplexObject *complex = COMPLEX_OBJECT (obj);
  complex->val1++;
}

static void
complex_test_iface_init (gpointer g_iface,
                         gpointer iface_data)
{
  TestIfaceClass *iface = g_iface;
  iface->method = complex_object_iface_method;
}

static void
complex_object_init (ComplexObject *complex_object)
{
  complex_object->val1 = 42;
}

/*************************************************************
 * Test object construction performance
 *************************************************************/

#define NUM_OBJECT_TO_CONSTRUCT 10000

struct ConstructionTest {
  GObject **objects;
  unsigned int n_objects;
  GType type;
};
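
/* The callbacks below plug into the PerformanceTest vtable declared
 * earlier in this file.  As a purely hypothetical sketch (the real
 * field order and getter name are defined by that declaration, not
 * here), a simple-construction entry might be wired up as:
 *
 *   { "simple-construction",          // name
 *     simple_object_get_type,         // extra_data: GType getter
 *     test_construction_setup,
 *     test_construction_init,
 *     test_construction_run,
 *     test_construction_finish,
 *     test_construction_teardown,
 *     test_construction_print_result }
 */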

static gpointer
test_construction_setup (PerformanceTest *test)
{
  struct ConstructionTest *data;

  data = g_new0 (struct ConstructionTest, 1);
  data->type = ((GType (*)(void))test->extra_data)();

  return data;
}

static void
test_construction_init (PerformanceTest *test,
                        gpointer _data,
                        double count_factor)
{
  struct ConstructionTest *data = _data;
  unsigned int n;

  n = (unsigned int) (NUM_OBJECT_TO_CONSTRUCT * count_factor);
  if (data->n_objects != n)
    {
      data->n_objects = n;
      data->objects = g_renew (GObject *, data->objects, n);
    }
}

static void
test_construction_run (PerformanceTest *test,
                       gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  GType type = data->type;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    objects[i] = g_object_new (type, NULL);
}
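
/* Baseline variant: the run1 function below skips g_object_new() and
 * GObject initialization entirely and just slice-allocates the bare
 * instance struct, showing how much of the construction cost is the
 * allocator alone. */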

static void
test_construction_run1 (PerformanceTest *test,
                        gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    objects[i] = (GObject *) g_slice_new0 (SimpleObject);
}
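
/* Three complex-construction variants follow: passing properties to
 * g_object_new() (property marshalling at construct time), setting
 * the instance fields directly after a default construction, and
 * constructing with defaults only.  Comparing them isolates the
 * property-system overhead. */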

static void
test_complex_construction_run (PerformanceTest *test,
                               gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  GType type = data->type;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    objects[i] = g_object_new (type, "val1", 5, "val2", "thousand", NULL);
}

static void
test_complex_construction_run1 (PerformanceTest *test,
                                gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  GType type = data->type;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    {
      ComplexObject *object;

      object = (ComplexObject *)g_object_new (type, NULL);
      object->val1 = 5;
      object->val2 = g_strdup ("thousand");
      objects[i] = (GObject *)object;
    }
}

static void
test_complex_construction_run2 (PerformanceTest *test,
                                gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  GType type = data->type;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    {
      objects[i] = g_object_new (type, NULL);
    }
}

static void
test_construction_finish (PerformanceTest *test,
                          gpointer _data)
{
  struct ConstructionTest *data = _data;

  for (unsigned int i = 0; i < data->n_objects; i++)
    g_object_unref (data->objects[i]);
}

static void
test_construction_finish1 (PerformanceTest *test,
                           gpointer _data)
{
  struct ConstructionTest *data = _data;

  for (unsigned int i = 0; i < data->n_objects; i++)
    g_slice_free (SimpleObject, (SimpleObject *)data->objects[i]);
}

static void
test_construction_teardown (PerformanceTest *test,
                            gpointer _data)
{
  struct ConstructionTest *data = _data;

  g_free (data->objects);
  g_free (data);
}
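
/* Finalization test: reuses struct ConstructionTest, but init
 * pre-constructs all the objects so that the timed run measures
 * nothing but g_object_unref() of the last reference. */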

static void
test_finalization_init (PerformanceTest *test,
                        gpointer _data,
                        double count_factor)
{
  struct ConstructionTest *data = _data;
  unsigned int n;

  n = (unsigned int) (NUM_OBJECT_TO_CONSTRUCT * count_factor);
  if (data->n_objects != n)
    {
      data->n_objects = n;
      data->objects = g_renew (GObject *, data->objects, n);
    }

  for (unsigned int i = 0; i < data->n_objects; i++)
    {
      data->objects[i] = g_object_new (data->type, NULL);
    }
}

static void
test_finalization_run (PerformanceTest *test,
                       gpointer _data)
{
  struct ConstructionTest *data = _data;
  GObject **objects = data->objects;
  unsigned int n_objects;

  n_objects = data->n_objects;
  for (unsigned int i = 0; i < n_objects; i++)
    {
      g_object_unref (objects[i]);
    }
}

static void
test_finalization_finish (PerformanceTest *test,
                          gpointer _data)
{
}

static void
test_construction_print_result (PerformanceTest *test,
                                gpointer _data,
                                double time)
{
  struct ConstructionTest *data = _data;

  g_print ("Millions of constructed objects per second: %.3f\n",
           data->n_objects / (time * 1000000));
}
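
/* Worked example of the figure printed above: if one round constructs
 * 10000 objects in 4 msec, the result is
 * 10000 / (0.004 * 1000000) = 2.5 million objects per second. */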

static void
test_finalization_print_result (PerformanceTest *test,
                                gpointer _data,
                                double time)
{
  struct ConstructionTest *data = _data;

  g_print ("Millions of finalized objects per second: %.3f\n",
           data->n_objects / (time * 1000000));
}

/*************************************************************
 * Test runtime type check performance
 *************************************************************/

#define NUM_KILO_CHECKS_PER_ROUND 50
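
/* n_checks below counts *kilo*-checks: each iteration of the outer
 * loop in test_type_check_run() performs 1000 type checks, which is
 * why the result printer divides by 1000 rather than 1000000 to get
 * millions of checks per second. */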

struct TypeCheckTest {
  GObject *object;
  unsigned int n_checks;
};

static gpointer
test_type_check_setup (PerformanceTest *test)
{
  struct TypeCheckTest *data;

  data = g_new0 (struct TypeCheckTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);

  return data;
}

static void
test_type_check_init (PerformanceTest *test,
                      gpointer _data,
                      double factor)
{
  struct TypeCheckTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_KILO_CHECKS_PER_ROUND);
}

/* Work around g_type_check_instance_is_a being marked "pure",
   and thus only called once for the loop. */
gboolean (*my_type_check_instance_is_a) (GTypeInstance *type_instance,
                                         GType iface_type) = &g_type_check_instance_is_a;
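
/* Calling through this mutable function pointer hides the "pure"
 * attribute from the compiler, so the check cannot be hoisted out of
 * the benchmark loop and every iteration really performs the call. */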

static void
test_type_check_run (PerformanceTest *test,
                     gpointer _data)
{
  struct TypeCheckTest *data = _data;
  GObject *object = data->object;
  GType type, types[5];

  types[0] = test_iface1_get_type ();
  types[1] = test_iface2_get_type ();
  types[2] = test_iface3_get_type ();
  types[3] = test_iface4_get_type ();
  types[4] = test_iface5_get_type ();

  for (unsigned int i = 0; i < data->n_checks; i++)
    {
      type = types[i % 5];
      for (unsigned int j = 0; j < 1000; j++)
        {
          my_type_check_instance_is_a ((GTypeInstance *)object,
                                       type);
        }
    }
}

static void
test_type_check_finish (PerformanceTest *test,
                        gpointer data)
{
}

static void
test_type_check_print_result (PerformanceTest *test,
                              gpointer _data,
                              double time)
{
  struct TypeCheckTest *data = _data;

  g_print ("Million type checks per second: %.2f\n",
           data->n_checks / (1000 * time));
}

static void
test_type_check_teardown (PerformanceTest *test,
                          gpointer _data)
{
  struct TypeCheckTest *data = _data;

  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test signal emissions performance (common code)
 *************************************************************/

#define NUM_EMISSIONS_PER_ROUND 10000

struct EmissionTest {
  GObject *object;
  unsigned int n_checks;
  unsigned int signal_id;
};

static void
test_emission_run (PerformanceTest *test,
                   gpointer _data)
{
  struct EmissionTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_signal_emit (object, data->signal_id, 0);
}
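
/* Variant for signals that take arguments: the emission passes two
 * extra parameters (0 and NULL, matching whatever signature the
 * signal was created with elsewhere in this file), so argument
 * collection and marshalling are part of the measured cost. */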

static void
test_emission_run_args (PerformanceTest *test,
                        gpointer _data)
{
  struct EmissionTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_signal_emit (object, data->signal_id, 0, 0, NULL);
}

/*************************************************************
 * Test signal unhandled emissions performance
 *************************************************************/
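
/* "Unhandled" means no handler is connected to the signal, so these
 * tests measure the bare dispatch overhead of g_signal_emit() when
 * there is nothing to call. */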

static gpointer
test_emission_unhandled_setup (PerformanceTest *test)
{
  struct EmissionTest *data;

  data = g_new0 (struct EmissionTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);
  data->signal_id = complex_signals[GPOINTER_TO_UINT (test->extra_data)];

  return data;
}
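
/* test->extra_data holds an index into the complex_signals array
 * rather than a pointer, so one setup function can serve each of the
 * signal variants being benchmarked. */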

static void
test_emission_unhandled_init (PerformanceTest *test,
                              gpointer _data,
                              double factor)
{
  struct EmissionTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_EMISSIONS_PER_ROUND);
}

static void
test_emission_unhandled_finish (PerformanceTest *test,
                                gpointer data)
{
}

static void
test_emission_unhandled_print_result (PerformanceTest *test,
                                      gpointer _data,
                                      double time)
{
  struct EmissionTest *data = _data;

  g_print ("Emissions per second: %.0f\n",
           data->n_checks / time);
}

static void
test_emission_unhandled_teardown (PerformanceTest *test,
                                  gpointer _data)
{
  struct EmissionTest *data = _data;

  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test signal handled emissions performance
 *************************************************************/

static void
test_emission_handled_handler (ComplexObject *obj, gpointer data)
{
}

static gpointer
test_emission_handled_setup (PerformanceTest *test)
{
  struct EmissionTest *data;

  data = g_new0 (struct EmissionTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);
  data->signal_id = complex_signals[GPOINTER_TO_UINT (test->extra_data)];
  g_signal_connect (data->object, "signal",
                    G_CALLBACK (test_emission_handled_handler),
                    NULL);
  g_signal_connect (data->object, "signal-empty",
                    G_CALLBACK (test_emission_handled_handler),
                    NULL);
  g_signal_connect (data->object, "signal-generic",
                    G_CALLBACK (test_emission_handled_handler),
                    NULL);
  g_signal_connect (data->object, "signal-generic-empty",
                    G_CALLBACK (test_emission_handled_handler),
                    NULL);
  g_signal_connect (data->object, "signal-args",
                    G_CALLBACK (test_emission_handled_handler),
                    NULL);

  return data;
}
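
/* Note: the "handled" emission tests mirror the "unhandled" ones above; the
 * only difference is that a no-op handler is connected to each signal here,
 * so the emission path also has to dispatch into a user closure. Comparing
 * the two results should isolate that dispatch cost. */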

static void
test_emission_handled_init (PerformanceTest *test,
                            gpointer _data,
                            double factor)
{
  struct EmissionTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_EMISSIONS_PER_ROUND);
}

static void
test_emission_handled_finish (PerformanceTest *test,
                              gpointer data)
{
}

static void
test_emission_handled_print_result (PerformanceTest *test,
                                    gpointer _data,
                                    double time)
{
  struct EmissionTest *data = _data;

  g_print ("Emissions per second: %.0f\n",
           data->n_checks / time);
}

static void
test_emission_handled_teardown (PerformanceTest *test,
                                gpointer _data)
{
  struct EmissionTest *data = _data;

  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test object notify performance (common code)
 *************************************************************/

#define NUM_NOTIFY_PER_ROUND 10000

struct NotifyTest {
  GObject *object;
  unsigned int n_checks;
};

static void
test_notify_run (PerformanceTest *test,
                 void *_data)
{
  struct NotifyTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_object_notify (object, "val1");
}

static void
test_notify_by_pspec_run (PerformanceTest *test,
                          void *_data)
{
  struct NotifyTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_object_notify_by_pspec (object, pspecs[PROP_VAL1]);
}
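
/* g_object_notify() has to look up the GParamSpec from the property name on
 * each call, while g_object_notify_by_pspec() is handed the pspec directly;
 * the difference between the two tests is largely the cost of that lookup. */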

/*************************************************************
 * Test notify unhandled performance
 *************************************************************/

static void *
test_notify_unhandled_setup (PerformanceTest *test)
{
  struct NotifyTest *data;

  data = g_new0 (struct NotifyTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);
  return data;
}

static void
test_notify_unhandled_init (PerformanceTest *test,
                            void *_data,
                            double factor)
{
  struct NotifyTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_NOTIFY_PER_ROUND);
}

static void
test_notify_unhandled_finish (PerformanceTest *test,
                              void *data)
{
}

static void
test_notify_unhandled_print_result (PerformanceTest *test,
                                    void *_data,
                                    double time)
{
  struct NotifyTest *data = _data;

  g_print ("Notify (unhandled) per second: %.0f\n",
           data->n_checks / time);
}

static void
test_notify_unhandled_teardown (PerformanceTest *test,
                                void *_data)
{
  struct NotifyTest *data = _data;

  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test notify handled performance
 *************************************************************/

static void
test_notify_handled_handler (ComplexObject *obj, GParamSpec *pspec, void *data)
{
}

static void *
test_notify_handled_setup (PerformanceTest *test)
{
  struct NotifyTest *data;

  data = g_new0 (struct NotifyTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);

  g_signal_connect (data->object, "notify::val1",
                    G_CALLBACK (test_notify_handled_handler), data);
  g_signal_connect (data->object, "notify::val2",
                    G_CALLBACK (test_notify_handled_handler), data);

  return data;
}

static void
test_notify_handled_init (PerformanceTest *test,
                          void *_data,
                          double factor)
{
  struct NotifyTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_NOTIFY_PER_ROUND);
}

static void
test_notify_handled_finish (PerformanceTest *test,
                            void *data)
{
}

static void
test_notify_handled_print_result (PerformanceTest *test,
                                  void *_data,
                                  double time)
{
  struct NotifyTest *data = _data;

  g_print ("Notify per second: %.0f\n",
           data->n_checks / time);
}

static void
test_notify_handled_teardown (PerformanceTest *test,
                              void *_data)
{
  struct NotifyTest *data = _data;

  g_assert_cmpuint (
      g_signal_handlers_disconnect_by_func (data->object,
                                            test_notify_handled_handler,
                                            data), ==, 2);
  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test object set performance
 *************************************************************/

#define NUM_SET_PER_ROUND 10000

struct SetTest {
  GObject *object;
  unsigned int n_checks;
};

static void
test_set_run (PerformanceTest *test,
              void *_data)
{
  struct SetTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_object_set (object, "val1", i, NULL);
}

static void *
test_set_setup (PerformanceTest *test)
{
  struct SetTest *data;

  data = g_new0 (struct SetTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);

  /* g_object_set() will take a reference. Increasing the ref count from 1 to 2
   * is more expensive, due to the check for toggle notifications. We have a
   * performance test for that already. Don't also test that overhead during
   * the "property-set" test and avoid this by taking an additional reference. */
  g_object_ref (data->object);

  return data;
}

static void
test_set_init (PerformanceTest *test,
               void *_data,
               double factor)
{
  struct SetTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_SET_PER_ROUND);
}

static void
test_set_finish (PerformanceTest *test,
                 void *data)
{
}

static void
test_set_print_result (PerformanceTest *test,
                       void *_data,
                       double time)
{
  struct SetTest *data = _data;

  g_print ("Property set per second: %.0f\n",
           data->n_checks / time);
}

static void
test_set_teardown (PerformanceTest *test,
                   void *_data)
{
  struct SetTest *data = _data;

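  /* Two unrefs: one for the extra reference taken in test_set_setup(), one
   * for the object's own reference. */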
  g_object_unref (data->object);
  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test object get performance
 *************************************************************/

#define NUM_GET_PER_ROUND 10000

struct GetTest {
  GObject *object;
  unsigned int n_checks;
};

static void
test_get_run (PerformanceTest *test,
              void *_data)
{
  struct GetTest *data = _data;
  GObject *object = data->object;
  int val;

  for (unsigned int i = 0; i < data->n_checks; i++)
    g_object_get (object, "val1", &val, NULL);
}

static void *
test_get_setup (PerformanceTest *test)
{
  struct GetTest *data;

  data = g_new0 (struct GetTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);

  /* g_object_get() will take a reference. Increasing the ref count from 1 to 2
   * is more expensive, due to the check for toggle notifications. We have a
   * performance test for that already. Don't also test that overhead during
   * the "property-get" test and avoid this by taking an additional reference. */
  g_object_ref (data->object);

  return data;
}

static void
test_get_init (PerformanceTest *test,
               void *_data,
               double factor)
{
  struct GetTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_GET_PER_ROUND);
}

static void
test_get_finish (PerformanceTest *test,
                 void *data)
{
}

static void
test_get_print_result (PerformanceTest *test,
                       void *_data,
                       double time)
{
  struct GetTest *data = _data;

  g_print ("Property get per second: %.0f\n",
           data->n_checks / time);
}

static void
test_get_teardown (PerformanceTest *test,
                   gpointer _data)
{
  struct GetTest *data = _data;

  g_object_unref (data->object);
  g_object_unref (data->object);
  g_free (data);
}

/*************************************************************
 * Test object refcount performance
 *************************************************************/

#define NUM_KILO_REFS_PER_ROUND 100000

struct RefcountTest {
  GObject *object;
  unsigned int n_checks;
  gboolean is_toggle_ref;
};

static void
test_refcount_toggle_ref_cb (gpointer data,
                             GObject *object,
                             gboolean is_last_ref)
{
}

static gpointer
test_refcount_setup (PerformanceTest *test)
{
  struct RefcountTest *data;

  data = g_new0 (struct RefcountTest, 1);
  data->object = g_object_new (COMPLEX_TYPE_OBJECT, NULL);

  if (g_str_equal (test->name, "refcount-toggle"))
    {
      g_object_add_toggle_ref (data->object, test_refcount_toggle_ref_cb, NULL);
      g_object_unref (data->object);
      data->is_toggle_ref = TRUE;
    }

  return data;
}
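
/* Note: with only the toggle reference left, the refcount sits at 1, so each
 * ref/unref pair in the run loop crosses the 1<->2 boundary and takes the
 * toggle-notification path, which is what the "refcount-toggle" variant is
 * meant to exercise. */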

static void
test_refcount_init (PerformanceTest *test,
                    gpointer _data,
                    double factor)
{
  struct RefcountTest *data = _data;

  data->n_checks = (unsigned int) (factor * NUM_KILO_REFS_PER_ROUND);
}

static void
test_refcount_run (PerformanceTest *test,
                   gpointer _data)
{
  struct RefcountTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    {
      g_object_ref (object);
      g_object_ref (object);
      g_object_ref (object);
      g_object_unref (object);
      g_object_unref (object);

      g_object_ref (object);
      g_object_ref (object);
      g_object_unref (object);
      g_object_unref (object);
      g_object_unref (object);
    }
}
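
/* Each iteration above performs 5 ref/unref pairs, matching the factor of 5
 * applied in test_refcount_print_result() below. */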

static void
test_refcount_1_run (PerformanceTest *test,
                     gpointer _data)
{
  struct RefcountTest *data = _data;
  GObject *object = data->object;

  for (unsigned int i = 0; i < data->n_checks; i++)
    {
      g_object_ref (object);
      g_object_unref (object);
    }
}

static void
test_refcount_finish (PerformanceTest *test,
                      gpointer _data)
{
}

static void
test_refcount_print_result (PerformanceTest *test,
                            gpointer _data,
                            double time)
{
  struct RefcountTest *data = _data;

  g_print ("Million refs+unref per second: %.2f\n",
           data->n_checks * 5 / (time * 1000000));
}

static void
test_refcount_teardown (PerformanceTest *test,
                        gpointer _data)
{
  struct RefcountTest *data = _data;

  if (data->is_toggle_ref)
    g_object_remove_toggle_ref (data->object, test_refcount_toggle_ref_cb, NULL);
  else
    g_object_unref (data->object);

  g_free (data);
}

/*************************************************************
 * Main test code
 *************************************************************/

static PerformanceTest tests[] = {
  {
    "simple-construction",
    simple_object_get_type,
    test_construction_setup,
    test_construction_init,
    test_construction_run,
    test_construction_finish,
    test_construction_teardown,
    test_construction_print_result
  },
  {
    "simple-construction1",
    simple_object_get_type,
    test_construction_setup,
    test_construction_init,
    test_construction_run1,
    test_construction_finish1,
    test_construction_teardown,
    test_construction_print_result
  },
  {
    "complex-construction",
    complex_object_get_type,
    test_construction_setup,
    test_construction_init,
    test_complex_construction_run,
    test_construction_finish,
    test_construction_teardown,
    test_construction_print_result
  },
  {
    "complex-construction1",
    complex_object_get_type,
    test_construction_setup,
    test_construction_init,
    test_complex_construction_run1,
    test_construction_finish,
    test_construction_teardown,
    test_construction_print_result
  },
  {
    "complex-construction2",
    complex_object_get_type,
    test_construction_setup,
    test_construction_init,
    test_complex_construction_run2,
    test_construction_finish,
    test_construction_teardown,
    test_construction_print_result
  },
  {
    "finalization",
    simple_object_get_type,
    test_construction_setup,
    test_finalization_init,
    test_finalization_run,
    test_finalization_finish,
    test_construction_teardown,
    test_finalization_print_result
  },
  {
    "type-check",
    NULL,
    test_type_check_setup,
    test_type_check_init,
    test_type_check_run,
    test_type_check_finish,
    test_type_check_teardown,
    test_type_check_print_result
  },
  {
    "emit-unhandled",
    GUINT_TO_POINTER (COMPLEX_SIGNAL),
    test_emission_unhandled_setup,
    test_emission_unhandled_init,
    test_emission_run,
    test_emission_unhandled_finish,
    test_emission_unhandled_teardown,
    test_emission_unhandled_print_result
  },
  {
    "emit-unhandled-empty",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_EMPTY),
    test_emission_unhandled_setup,
    test_emission_unhandled_init,
    test_emission_run,
    test_emission_unhandled_finish,
    test_emission_unhandled_teardown,
    test_emission_unhandled_print_result
  },
  {
    "emit-unhandled-generic",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_GENERIC),
    test_emission_unhandled_setup,
    test_emission_unhandled_init,
    test_emission_run,
    test_emission_unhandled_finish,
    test_emission_unhandled_teardown,
    test_emission_unhandled_print_result
  },
  {
    "emit-unhandled-generic-empty",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_GENERIC_EMPTY),
    test_emission_unhandled_setup,
    test_emission_unhandled_init,
    test_emission_run,
    test_emission_unhandled_finish,
    test_emission_unhandled_teardown,
    test_emission_unhandled_print_result
  },
  {
    "emit-unhandled-args",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_ARGS),
    test_emission_unhandled_setup,
    test_emission_unhandled_init,
    test_emission_run_args,
    test_emission_unhandled_finish,
    test_emission_unhandled_teardown,
    test_emission_unhandled_print_result
  },
  {
    "emit-handled",
    GUINT_TO_POINTER (COMPLEX_SIGNAL),
    test_emission_handled_setup,
    test_emission_handled_init,
    test_emission_run,
    test_emission_handled_finish,
    test_emission_handled_teardown,
    test_emission_handled_print_result
  },
  {
    "emit-handled-empty",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_EMPTY),
    test_emission_handled_setup,
    test_emission_handled_init,
    test_emission_run,
    test_emission_handled_finish,
    test_emission_handled_teardown,
    test_emission_handled_print_result
  },
  {
    "emit-handled-generic",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_GENERIC),
    test_emission_handled_setup,
    test_emission_handled_init,
    test_emission_run,
    test_emission_handled_finish,
    test_emission_handled_teardown,
    test_emission_handled_print_result
  },
  {
    "emit-handled-generic-empty",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_GENERIC_EMPTY),
    test_emission_handled_setup,
    test_emission_handled_init,
    test_emission_run,
    test_emission_handled_finish,
    test_emission_handled_teardown,
    test_emission_handled_print_result
  },
  {
    "emit-handled-args",
    GUINT_TO_POINTER (COMPLEX_SIGNAL_ARGS),
    test_emission_handled_setup,
    test_emission_handled_init,
    test_emission_run_args,
    test_emission_handled_finish,
    test_emission_handled_teardown,
    test_emission_handled_print_result
  },
  {
    "notify-unhandled",
    complex_object_get_type,
    test_notify_unhandled_setup,
    test_notify_unhandled_init,
    test_notify_run,
    test_notify_unhandled_finish,
    test_notify_unhandled_teardown,
    test_notify_unhandled_print_result
  },
  {
    "notify-by-pspec-unhandled",
    complex_object_get_type,
    test_notify_unhandled_setup,
    test_notify_unhandled_init,
    test_notify_by_pspec_run,
    test_notify_unhandled_finish,
    test_notify_unhandled_teardown,
    test_notify_unhandled_print_result
  },
  {
    "notify-handled",
    complex_object_get_type,
    test_notify_handled_setup,
    test_notify_handled_init,
    test_notify_run,
    test_notify_handled_finish,
    test_notify_handled_teardown,
    test_notify_handled_print_result
  },
  {
    "notify-by-pspec-handled",
    complex_object_get_type,
    test_notify_handled_setup,
    test_notify_handled_init,
    test_notify_by_pspec_run,
    test_notify_handled_finish,
    test_notify_handled_teardown,
    test_notify_handled_print_result
  },
  {
    "property-set",
    complex_object_get_type,
    test_set_setup,
    test_set_init,
    test_set_run,
    test_set_finish,
    test_set_teardown,
    test_set_print_result
  },
  {
    "property-get",
    complex_object_get_type,
    test_get_setup,
    test_get_init,
    test_get_run,
    test_get_finish,
    test_get_teardown,
    test_get_print_result
  },
  {
    "refcount",
    NULL,
    test_refcount_setup,
    test_refcount_init,
    test_refcount_run,
    test_refcount_finish,
    test_refcount_teardown,
    test_refcount_print_result
  },
  {
    "refcount-1",
    NULL,
    test_refcount_setup,
    test_refcount_init,
    test_refcount_1_run,
    test_refcount_finish,
    test_refcount_teardown,
    test_refcount_print_result
  },
  {
    "refcount-toggle",
    NULL,
    test_refcount_setup,
    test_refcount_init,
    test_refcount_1_run,
    test_refcount_finish,
    test_refcount_teardown,
    test_refcount_print_result
  },
};
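
/* Each entry pairs a test name and optional extra data with the
 * setup/init/run/finish/teardown/print callbacks driven by run_test().
 * Passing test names on the command line runs just those tests; with no
 * arguments, every entry is run in order (see main() below). */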

static PerformanceTest *
find_test (const char *name)
{
  for (size_t i = 0; i < G_N_ELEMENTS (tests); i++)
    {
      if (strcmp (tests[i].name, name) == 0)
        return &tests[i];
    }
  return NULL;
}

int
main (int argc,
      char *argv[])
{
  PerformanceTest *test;
  GOptionContext *context;
  GError *error = NULL;
  const char *str;
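
  /* Allow pinning the per-round work factor instead of estimating it from
   * timings: a fixed factor (set here via the GLIB_PERFORMANCE_FACTOR
   * environment variable, or via the command-line option parsed below) keeps
   * the executed code paths identical between runs, e.g. under perf or
   * valgrind, where timing-based estimates would differ from a normal run. */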
  if ((str = g_getenv ("GLIB_PERFORMANCE_FACTOR")) && str[0])
    {
      test_factor = g_strtod (str, NULL);
    }

  context = g_option_context_new ("GObject performance tests");
  g_option_context_add_main_entries (context, cmd_entries, NULL);
  if (!g_option_context_parse (context, &argc, &argv, &error))
    {
      g_printerr ("%s: %s\n", argv[0], error->message);
      return 1;
    }

  if (test_factor < 0)
    {
      g_printerr ("%s: test factor must be positive\n", argv[0]);
      return 1;
    }
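
  /* This timer drives a one-time global warm-up phase: the harness keeps the
   * CPU busy for a couple of seconds before the first measurement, since
   * early rounds tend to run faster (scheduling, CPU frequency scaling) and
   * would otherwise skew the results. */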
  global_timer = g_timer_new ();

  if (argc > 1)
    {
      for (int i = 1; i < argc; i++)
        {
          test = find_test (argv[i]);
          if (test)
            run_test (test);
        }
    }
  else
    {
      for (size_t k = 0; k < G_N_ELEMENTS (tests); k++)
        run_test (&tests[k]);
    }

  g_option_context_free (context);
  g_clear_pointer (&global_timer, g_timer_destroy);

  return 0;
}
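With no arguments, the loop above runs every entry in the tests array;
with arguments, each name is looked up via find_test() and, in the code
as shown, unrecognized names are silently skipped.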