For those projects that cannot use `g_autoptr()`, GStrvBuilder's
end-plus-unref pattern is not really convenient.
We can crib the "unref to data type" model from GBytes and add an
additional unref function that also returns the just-built GStrv.
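A minimal usage sketch of the proposed pattern; the name
`g_strv_builder_unref_to_strv()` is an assumption based on the GBytes
naming, not something this message specifies:

  #include <glib.h>

  static GStrv
  build_greeting (void)
  {
    GStrvBuilder *builder = g_strv_builder_new ();

    g_strv_builder_add (builder, "hello");
    g_strv_builder_add (builder, "world");

    /* One call finishes the build and drops our reference, instead of
     * g_strv_builder_end() followed by g_strv_builder_unref(). */
    return g_strv_builder_unref_to_strv (builder);
  }

The caller then frees the result with g_strfreev() as usual.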
This was triggering errors on recent GCC because `struct heap_dict` is
smaller than the publicly provided size (`guintptr[16]`) in the header
for GVariantDict.
Port it to use `g_malloc()` directly, and use a static assertion to
ensure we're allocating the larger of the two struct sizes.
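An illustrative fragment of the idea (not the verbatim patch):

  /* Allocate the public size and assert at compile time that it is
   * at least as large as the private struct. */
  G_STATIC_ASSERT (sizeof (GVariantDict) >= sizeof (struct heap_dict));

  struct heap_dict *dict = g_malloc (sizeof (GVariantDict));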
These consistently fail on scheduled CI runs, which is not helping our
ability to catch Hurd regressions.
For example, https://gitlab.gnome.org/GNOME/glib/-/jobs/3709402
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
See: #3148
Bumping the reference count from 1 to 2 (and back) is more expensive,
due to the check for toggle notifications.
We already have a performance test that hits that code path. Avoid it
in the "property-{get,set}" tests, so we skip the known overhead and
test the more relevant parts.
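A hedged sketch of one way to do that (the actual test change may
differ): hold an extra reference around the measured loop, so the
internal ref/unref done by g_object_set()/g_object_get() moves the
count between 2 and 3 instead of 1 and 2 and never hits the
toggle-notification check.

  /* Names here (test_obj, "val", N_ROUNDS) are illustrative. */
  g_object_ref (test_obj);

  for (guint i = 0; i < N_ROUNDS; i++)
    g_object_set (test_obj, "val", i, NULL);

  g_object_unref (test_obj);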
Despite all the efforts, there still seems to be a lot of noise in the
performance measurements. In particular, the first iterations seem to
run faster. Maybe that is because the kernel hasn't yet determined that
the process is CPU bound and is less likely to schedule it out. Or
maybe it's because burning the cycles heats up the CPU and it gets
throttled after a while. It's unclear why, and it's even unclear
whether this really happens, but from my observations it seems to.
Hence, more warm-up:
- the first time we enter the test, ensure that we keep the CPU busy
  for at least 2 seconds (see the sketch after this list). This
  additional warm-up (WARM_UP_ALWAYS_SEC) is global, not per test.
- for each test, ignore the first 5% of the runs. It seems those tend to
run faster, thus skewing the results.
- if the user specifies a "--factor", the warm-up operations are the
  same and independent of external factors (such as time measurements).
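A minimal sketch of the global warm-up, assuming a simple busy-loop
(WARM_UP_ALWAYS_SEC is named in this message; the loop body is
illustrative):

  #include <glib.h>

  #define WARM_UP_ALWAYS_SEC 2

  static void
  global_warm_up (void)
  {
    gint64 start = g_get_monotonic_time ();

    /* Keep the CPU busy once, before the first test, so that frequency
     * scaling and scheduling settle down before we start measuring. */
    while (g_get_monotonic_time () - start <
           WARM_UP_ALWAYS_SEC * G_USEC_PER_SEC)
      ;
  }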
Note that this matters most when you want to run the executable twice
in a row and compare the results.
By default, the test estimates a run factor for each test. This means
that if you run the performance test under `perf`, the results are not
comparable, as the run time depends on the estimated factor. Add an
option to set a fixed factor.
Of course, there is only one factor argument for all tests. Quite
possibly, you would want to run each test individually with a factor
appropriate for the test. On the other hand, all tests should be tuned
so that the same factor gives a similar test duration. So this may not
be a concern, or the tests should be adjusted. In any case, the option
is most useful when running only one test explicitly.
You can get a suitable factor by running the test once with "--verbose".
Another use case is running the benchmark under valgrind. Valgrind
slows down the run so much that the estimated factor would be quite
off; as a result, the chosen code paths would differ from a real run.
By setting the factor explicitly, the timing measurements don't affect
the executed code.
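Purely illustrative sketch of how a fixed factor short-circuits the
estimation; all names here are assumptions, not the actual test code:

  static double fixed_factor = 0;  /* set from the new --factor option */

  static double
  get_run_factor (PerformanceTest *test)
  {
    if (fixed_factor > 0)
      return fixed_factor;          /* independent of timing measurements */

    return estimate_factor (test);  /* default: estimated from wall-clock time */
  }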
The default output is annoyingly verbose. You see
Running test simple-construction
simple-construction: Millions of constructed objects per second: 33.498
Running test simple-construction1
simple-construction1: Millions of constructed objects per second: 142.493
Running test complex-construction
complex-construction: Millions of constructed objects per second: 14.304
Running test complex-construction1
...
where the "Running test" lines just clutter the output. In fact so much
so, that my terminal fills up and I don't see the output of all tests in
one page. The "Running test" line is not so useful, because I mostly
care about the test result, and that line already contains the test
name.
Add an option to silence this.
Previously, the result lines were not unique. For example:
Running test simple-construction
Millions of constructed objects per second: 27.629
Running test simple-construction1
Millions of constructed objects per second: 151.879
...
That is undesirable, because we might want to parse the test results
with a script, and that's easier when each line is unique.
Change to:
Running test simple-construction
simple-construction: Millions of constructed objects per second: 27.629
Running test simple-construction1
simple-construction1: Millions of constructed objects per second: 151.879
...
It may not be obvious, but the moment unlock is called, the locker
instance may be destroyed.
See g_object_unref(), which calls toggle_refs_check_and_ref_or_deref().
It will check for toggle references while dropping the ref count from 2
to 1. It must decrement the ref count while holding the lock, but it
also must still unlock afterwards.
Note that the locker instance is on the object itself. Once we
decrement the ref count, we give up our reference and another thread
may race us to destroy the object. We thus must not touch the object
anymore. How can we then still unlock?
This works correctly because:
- unlock operations must not touch the locker instance after unlocking.
- assume that another thread races g_object_unref() to destroy the
  object while we are about to call object_bit_unlock() in
  toggle_refs_check_and_ref_or_deref(). That other thread will also
  need to acquire the same lock (during g_object_notify_queue_freeze()),
  so it is blocked from destroying the object until we have unlocked.
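A condensed sketch of that ordering (helper names are illustrative,
not the exact GObject internals):

  object_bit_lock (object);        /* same per-object lock taken by
                                    * g_object_notify_queue_freeze() */

  check_toggle_refs (object);      /* inspect toggle refs at ref count 2 */
  decrement_ref_count (object);    /* drop 2 -> 1 while still locked */

  /* Another thread may now race to destroy the object, but it must
   * first acquire the same lock, so the unlock below cannot touch
   * freed memory. */
  object_bit_unlock (object);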
Add code comments about that.
We can only assert that there is exactly one toggle reference after we
have confirmed (under lock) that the ref count is in the toggle case.
Otherwise, if another thread refs/unrefs the object, we can hit a
spurious g_critical() warning from
  if (tstackptr->n_toggle_refs != 1)
    {
      g_critical ("Unexpected number of toggle-refs. g_object_add_toggle_ref() must be paired with g_object_remove_toggle_ref()");
    }
Fixes: 9ae43169cf ('gobject: fix race in toggle ref during g_object_ref()')
We don't actually need to use the Meson-detected size macros here,
because the result of `sizeof()` is an integer constant expression.
No functional change.
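For illustration only (the affected code is not quoted in this
message): the result of `sizeof()` folds at compile time just like a
configure-time macro would, e.g.

  /* Instead of relying on a Meson-detected macro such as
   * GLIB_SIZEOF_LONG, the equivalent sizeof() expression still folds
   * at compile time: */
  guint width = (sizeof (long) == 8) ? 64 : 32;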
Signed-off-by: Simon McVittie <smcv@collabora.com>
g-ir-scanner currently maps these to lower-level types at scan time by
assuming that time_t is an alias for long, off_t is an alias for size_t
and so on. This is not always accurate: some ILP32 architectures have
64-bit time_t (for Y2038 compatibility) and 64-bit off_t (for large file
support), and that mismatch is tracked as GNOME/gobject-introspection#494.
One option for resolving this g-ir-scanner bug is to have it pass these
types through to the GIR XML, and teach g-ir-compiler and its replacement
gi-compile-repository to convert them to the corresponding concrete
type tag, as they already do for abstract types such as `long long` and
`size_t`.
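A hypothetical sketch of the compile-side conversion (not actual
gi-compile-repository code; the function name is invented):

  #include <string.h>
  #include <sys/types.h>
  #include <time.h>
  #include <girepository.h>

  /* Map a passed-through C type name to the concrete tag matching the
   * architecture the typelib is being compiled for. */
  static GITypeTag
  resolve_passthrough_type (const char *c_type)
  {
    if (strcmp (c_type, "time_t") == 0)
      return sizeof (time_t) == 8 ? GI_TYPE_TAG_INT64 : GI_TYPE_TAG_INT32;

    if (strcmp (c_type, "off_t") == 0)
      return sizeof (off_t) == 8 ? GI_TYPE_TAG_INT64 : GI_TYPE_TAG_INT32;

    return GI_TYPE_TAG_VOID;  /* fallback for this sketch */
  }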
Loosely based on GNOME/gobject-introspection!451 by Shuyu Liu.
Co-authored-by: Shuyu Liu <liushuyu011@gmail.com>
Signed-off-by: Simon McVittie <smcv@collabora.com>
We don't actually need to use the results of configure-time checks here:
sizeof is a perfectly reasonable integer constant expression, so we can
use that directly.
Helps: https://gitlab.gnome.org/GNOME/glib/-/issues/2842
Signed-off-by: Simon McVittie <smcv@collabora.com>
These scripts use $(readlink -f) to guess their own path if necessary,
but macOS readlink doesn't support the -f option, and POSIX doesn't
guarantee that readlink even exists.
Resolves: https://gitlab.gnome.org/GNOME/glib/-/issues/3289
Fixes: d7601f7e "Incorporate some lint checks into `meson test`"
Signed-off-by: Simon McVittie <smcv@collabora.com>
The `gi-docgen` tool is not designed to be used like that. In
particular, when nesting documentation directories, the generated
`*.devhelp2` files (needed by Devhelp to show the documentation) are
nested one directory level too deep for Devhelp to find them, and hence
are useless, and the documentation doesn’t show up in this common
documentation viewer.
So, change the installed documentation directory hierarchy:
* `${PREFIX}/share/doc/glib-2.0/gio` → `${PREFIX}/share/doc/gio-2.0`
* `${PREFIX}/share/doc/glib-2.0/glib-unix` →
`${PREFIX}/share/doc/glib-unix-2.0`
* `${PREFIX}/share/doc/glib-2.0/gobject` →
`${PREFIX}/share/doc/gobject-2.0`
* etc.
* `${PREFIX}/share/doc/glib-2.0/glib` → `${PREFIX}/share/doc/glib-2.0`
This is going to seem like pointless churn (the contents of the
documentation have not changed), and packagers may mourn the split of
content in `/usr/share/doc` from a single
`/usr/share/doc/${package_name}` directory into per-module
`/usr/share/doc/${pkg_config_id}` directories, but that seems to be
the best approach to fix this issue in GLib. gi-docgen's behaviour
does feel fairly consistent and correct, given the rest of how it
works (a single output directory).
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Fixes: #3287
These unfortunately have `gchar*` return types rather than `const
gchar*`. This is a historical artifact which we can’t change: while
adding `const` would only be an API break and not an ABI break, it would
cause all sorts of C++ code which uses GLib to emit new cast warnings
(similarly, C code compiled with const-correctness warnings enabled
would gain new warnings).
The incorrect return type causes the GIR scanner to (reasonably) assume
the return value is allocated, which is wrong.
Fix that by explicitly adding `(transfer none)`.
Also add an explicit `(nullable)`, because all three functions can
return NULL.
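For illustration, with a placeholder function since the three functions
are not named in this message, the resulting annotations look like
this:

  /**
   * g_example_get_label:
   *
   * Returns: (transfer none) (nullable): the label, which is owned by
   *   GLib and must not be modified or freed, or %NULL if unset
   */
  gchar *
  g_example_get_label (void);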
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Fixes: #3286
The gdbus-example-objectmanager visibility header was being re-created
on reconfigure, causing a needless rebuild of gdbus tests that were
using the visibility header.
All other invocations of gen_visibility_macros are via custom_target.