It’s deprecated, but I was modifying it anyway and it didn’t have any
coverage, so let’s add a simple test (as suggested by Michael
Catanzaro).
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
The latter only accepts a `gint` as the number of elements in the array,
which means that its use in `GArray` (and related array implementations)
truncates at least half the potential array size.
So, introduce a replacement for it which uses `size_t` for the number of
elements. This is in line with what `qsort()` (or `qsort_r()`) actually
does. Unfortunately we can’t directly use `qsort_r()` because it’s not
guaranteed to be a stable sort.
This fixes some `-Wsign-conversion` warnings (when building GLib with
that enabled).
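For illustration, the replacement has roughly the following shape (a sketch only; the name and exact prototype here are hypothetical, not the actual new API):

```c
#include <glib.h>
#include <stddef.h>

/* Hypothetical sketch: a stable sort whose element count is a size_t
 * (like qsort()), rather than a gint. Name and prototype are
 * illustrative only. */
void sort_array_sketch (void             *array,
                        size_t            n_elements,
                        size_t            element_size,
                        GCompareDataFunc  compare_func,
                        void             *user_data);
```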
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
The default values for construct properties always have to be set, even
if those properties are deprecated. The code to do that is in GLib, and
not under the control of the user (unless they completely override the
`constructor` vfunc, which is not recommended). So don’t emit a warning
for that if `G_ENABLE_DIAGNOSTICS` is enabled.
In particular, this fixes deprecation warnings being emitted for
properties of a parent class when chaining up with a custom constructor,
even when none of the child class code mentions the deprecated property.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Fixes: #3254
This just makes it a bit clearer that they’re atomic/for thread safety,
and not just NIHed bit operations with shouty names.
This introduces no functional changes.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
This avoids the need to ref/unref the closure while invalidating it in
the `closure->ref_count == 1` path in `g_closure_unref()`.
scan-build gets very confused about the ref count here, and ends up
assuming it’s possible for the `g_closure_unref()` call in
`g_closure_invalidate()` to finalise the closure when the latter is
called from `g_closure_unref()`. There was an existing assertion in
`g_closure_invalidate()` which hinted that this wasn’t possible, but
scan-build doesn’t seem to be able to propagate assumptions about
refcounts between function contexts.
So, introduce an internal variant of `g_closure_invalidate()` which can
skip modifying the closure’s refcount. It’s safe to invalidate the
closure without adding a ref when doing so from `g_closure_unref()` with
`closure->ref_count == 1` because at that point `g_closure_unref()`
holds the only remaining ref to the closure. So none of the invalidation
callbacks are allowed to unref it further.
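Roughly, the shape of the change is something like the following (a hypothetical sketch, not the actual GLib code):

```c
#include <glib-object.h>

/* Hypothetical sketch, not the actual implementation: an internal
 * invalidation helper that can skip the ref/unref pair when the caller
 * (g_closure_unref() on the ref_count == 1 path) already holds the only
 * remaining reference. */
static void
closure_invalidate_internal_sketch (GClosure *closure,
                                    gboolean  skip_ref)
{
  if (!skip_ref)
    g_closure_ref (closure);

  /* ... mark the closure invalid and run its invalidate notifiers ... */

  if (!skip_ref)
    g_closure_unref (closure);
}
```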
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #1767
scan-build is worried that `node->data->common.value_table->value_init`
will be a `NULL` pointer dereference in the assignment to
`node->mutatable_check_cache`.
There’s already an assertion immediately below to check against this, so
let’s move it up a line to help the static analyser out.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #1767
Avoid scan-build thinking that `new_wrdata` could be `NULL` on this
control path. It can’t be `NULL` if `new_object` is set.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #1767
Basically various trivial instances of the following MSVC compiler
warning:
```
../gio/gio-tool-set.c(50): warning C4267: '=': conversion from 'size_t' to 'int', possible loss of data
```
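For illustration, the fixes are of this general shape (a generic sketch with illustrative names, not the actual GLib diff):

```c
#include <string.h>

/* Generic sketch of the kind of trivial fix involved: keep a size_t-valued
 * length in a size_t variable instead of assigning it to an int, which is
 * what triggers C4267. Function and variable names are illustrative. */
static size_t
value_length (const char *value)
{
  size_t len = strlen (value);  /* previously: int len = strlen (value); */

  return len;
}
```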
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
I’m not sure exactly how this code is supposed to work, so this might
not be the right fix. But there’s definitely a problem here, and it was
spotted by scan-build.
If `param_value_array_validate()` is entered with
`value->data[0].v_pointer == NULL && aspec->fixed_n_elements`, that `NULL`
will be stored in `value_array` too. `value->data[0].v_pointer` will
then be set to a new non-`NULL` array.
A few lines down, `value_array_ensure_size()` is called on
`value_array` – which is still `NULL` – and this results in a `NULL`
pointer dereference.
It looks like `value->data[0].v_pointer` and `value_array` are used
interchangeably throughout the whole of the function, so assign the new
value of `value->data[0].v_pointer` to `value_array` too.
My guess is that `value_array` is just a convenience alias for
`value->data[0].v_pointer`, because the latter is a real mouthful to
type or read.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #1767
The python interpreter found by `/usr/bin/env python3` is not
necessarily the same installation as the one that's found by meson's
`pymod.find_installation('python')`. This means that even though meson
is checking that the python installation it found includes the
'packaging' module, the scripts might not have access to that module
when run.
For distribution packaging, it's usually desirable to have python script
interpreters be fully specified paths, rather than use `/usr/bin/env`,
to ensure the scripts run using the expected python installation (i.e.
the one where the python 'packaging' dependency is installed).
The easiest way to fix this is to set the script interpreter to the
`full_path()` of the python interpreter found by meson. The specific
python interpreter that will be used can be selected through the use of
a meson machine file by overriding the "python" program. Many
distributions already have this set up using meson packaging helpers.
These consistently fail on scheduled CI runs, which is not helping our
ability to catch Hurd regressions.
For example, https://gitlab.gnome.org/GNOME/glib/-/jobs/3709402
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
See: #3148
Bumping the reference count from 1 to 2 (and back) is more expensive,
due to the check for toggle notifications.
We have a performance test already that hits that code path. Avoid that
for he "property-{get,set}" tests, so we avoid the known overhead and
test more relevant parts.
Despite all the efforts, there still seems to be a lot of noise in the
performance measurement. In particular, the first iterations seem to run
faster. Maybe that is because the kernel didn't yet determine that the
process is CPU bound and is less likely to schedule it out. Or maybe it's
because burning the cycles heats up the CPU and it gets throttled after
a while. It's unclear why, and it's even unclear whether this really
happens, but from my observations it seems to.
Hence, more warm up.
- the first time we enter the test, ensure that we keep the CPU busy for
at least 2 seconds (see the sketch below). This additional warm up
(WARM_UP_ALWAYS_SEC) is global, and not per test.
- for each test, ignore the first 5% of the runs. It seems those tend to
run faster, thus skewing the results.
- if the user specifies a "--factor", the warm up operations are the
same and independent of external factors (such as time
measurements).
Note that this matters most when you want to run the executable
twice in a row and compare the results.
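A rough sketch of the global warm up mentioned in the first point above (hypothetical helper name; the real code may differ):

```c
#include <glib.h>

#define WARM_UP_ALWAYS_SEC 2

/* Rough sketch: keep the CPU busy for a fixed wall-clock time before the
 * first test runs. The work itself is irrelevant; only the elapsed time
 * matters. */
static void
global_warm_up (void)
{
  gint64 end_time = g_get_monotonic_time () + WARM_UP_ALWAYS_SEC * G_USEC_PER_SEC;

  while (g_get_monotonic_time () < end_time)
    {
      /* burn cycles */
    }
}
```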
By default, the test estimates a run factor for each test. This means
that if you run the performance test under `perf`, the results are not
comparable, as the run time depends on the estimated factor.
Add an option to set a fixed factor.
Of course, there is only one factor argument for all tests. Quite
possibly, you would want to run each test individually with a factor
appropriate for the test. On the other hand, all tests should be tuned
so that the same factor gives a similar test duration. So this may not
be a concern, or the tests should be adjusted. In any case, the option
is most useful when running only one test explicitly.
You can get a suitable factor by running the test once with "--verbose".
Another use case is if you run the benchmark under valgrind. Valgrind
slows down the run so much that the estimated factor would be quite
off. As a result, the chosen code paths are different from the real run.
By setting the factor, the timing measurements don't affect the executed
code.
The default output is annoyingly verbose. You see
    Running test simple-construction
    simple-construction: Millions of constructed objects per second: 33.498
    Running test simple-construction1
    simple-construction1: Millions of constructed objects per second: 142.493
    Running test complex-construction
    complex-construction: Millions of constructed objects per second: 14.304
    Running test complex-construction1
    ...
where the "Running test" lines just clutter the output. In fact so much
so, that my terminal fills up and I don't see the output of all tests in
one page. The "Running test" line is not so useful, because I mostly
care about the test result, and that line already contains the test
name.
Add an option to silence this.
Previously, the result lines were not unique, for example:
    Running test simple-construction
    Millions of constructed objects per second: 27.629
    Running test simple-construction1
    Millions of constructed objects per second: 151.879
    ...
That is undesirable, because we might want to parse the test results
with a script, and that's easier when the line is unique.
Change to:
    Running test simple-construction
    simple-construction: Millions of constructed objects per second: 27.629
    Running test simple-construction1
    simple-construction1: Millions of constructed objects per second: 151.879
    ...
It may not be obvious, but the moment unlock is called, the locker
instance may be destroyed.
See g_object_unref(), which calls toggle_refs_check_and_ref_or_deref().
It will check for toggle references while dropping the ref count from 2
to 1. It must decrement the ref count while holding the lock, but it
also must still unlock afterwards.
Note that the locker instance is on the object itself. Once we decrement
the ref count we give up our reference, and another thread may race to
destroy the object. We thus must not touch the object anymore.
How can we then still unlock?
This works correctly because:
- unlock operations must not touch the locker instance after unlocking.
- assume that another thread races g_object_unref() to destroy the
object, while we are about to call object_bit_unlock() in
toggle_refs_check_and_ref_or_deref(). Then that other thread will also
need to acquire the same lock (during g_object_notify_queue_freeze()).
It is thus blocked from destroying the object.
Add code comments about that.
We can only assert that there is exactly one toggle reference after we
have confirmed (under lock) that the ref count was in the toggle case.
Otherwise, if another thread refs/unrefs the object, we can hit a
spurious g_critical() assertion from:

    if (tstackptr->n_toggle_refs != 1)
      {
        g_critical ("Unexpected number of toggle-refs. g_object_add_toggle_ref() must be paired with g_object_remove_toggle_ref()");
Fixes: 9ae43169cf ('gobject: fix race in toggle ref during g_object_ref()')
The documentation previously implied that they could. That’s not really
true though: they can only fail if preconditions fail, i.e. they’re
passed invalid input. That’s a programmer error, which is not something
we want to encourage people to check for at runtime (e.g. by dynamically
checking for a 0 return value).
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
This is nowhere near a complete check-through and gi-docgenification of
the signals docs, just a few bits I was looking at anyway.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3250
On a fast laptop, this test currently takes about 7s to run, which is a
significant portion of the overall test suite time.
On a slower CI machine, especially running the test under valgrind, the
test can time out.
There’s no need to always run so many iterations: we run the tests under
CI so often that it’s likely a failure will eventually be hit (if there
is a bug) even with fewer iterations. We also now run the tests once a
week with `-m slow`, so the original iteration count will also still be
used then.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
It’s more helpful to always register the test, even if it’s normally
skipped, since then the skip is recorded in the test logs so people can
see what’s ‘missing’ from them.
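A minimal sketch of the pattern (the test name is illustrative):

```c
#include <glib.h>

/* Always register the test; skip at runtime when not in slow/thorough
 * mode, so the skip is recorded in the test logs. */
static void
test_expensive (void)
{
  if (!g_test_slow ())
    {
      g_test_skip ("run with -m slow to enable this test");
      return;
    }

  /* ... the actual (expensive) test body ... */
}

int
main (int argc, char **argv)
{
  g_test_init (&argc, &argv, NULL);

  /* Register unconditionally, rather than only when in slow mode. */
  g_test_add_func ("/example/expensive", test_expensive);

  return g_test_run ();
}
```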
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
GSList doesn't seem the best choice here. Its benefits are that it's
relatively convenient to use (albeit not very efficient) and that an
empty list requires only the pointer to the list's head.
But for a non-empty list, we need to allocate GSList elements. We can do
better, by writing more code.
I think it's worth optimizing GObject, at the expense of a bit(?) more
complicated code. The complicated code is still entirely self-contained,
so unless you review WeakRefData usage, it doesn't need to bother you.
Note that this can easily be measured to be a bit faster. But I think the
more important part is to save some allocations. Often objects are
long-lived, and the GWeakRef will be tracked for a long time. It is
interesting to optimize the memory usage of that.
- if the list only contains one weak reference, it's interned/embedded in
WeakRefData.list.one. Otherwise, an array is allocated and tracked
at WeakRefData.list.many (see the sketch after this list).
- when the buffer grows, we double the size. When the buffer shrinks,
we reallocate to 50% when 75% are empty. When the buffer shrinks to
length 1, we free it (so that "list.one" is always used with a length
of 1).
That means, in the worst case, we waste 75% of the allocated buffer,
which is a choice in the hope that future weak references will be
registered, and that this is a suitable strategy.
- on architectures like x86_64, this does not increase the size of
WeakRefData.
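As a rough sketch (field names hypothetical, not the actual GLib layout), the tracking structure looks something like this:

```c
#include <glib.h>

/* Hypothetical sketch of the "one vs. many" embedding described above.
 * With len <= 1 no extra allocation is needed; only longer lists get a
 * separately allocated array. */
typedef struct
{
  guint16 len;    /* number of tracked GWeakRef (limited to 65535) */
  guint16 alloc;  /* allocated size of list.many; 0 while embedded */
  union
  {
    gpointer  one;   /* used when len == 1: the single GWeakRef location */
    gpointer *many;  /* used when len > 1: a growable array of locations */
  } list;
} WeakRefDataSketch;
```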
Also, the number of weak-refs is now limited to 65535, and an assertion
fails when you try to register more than that. But note that
the internal tracking just uses a linear search, so you really don't
want to register thousands of weak references on an object. If you do
that, the current implementation is not suitable anyway and you must
rethink your approach. Nor does it make sense to optimize the
implementation for such a use case. Instead, the implementation is
optimized for a few (one!) weak reference per object.
We can safely combine this, and use bit 30 of the ref-count for locking.
This still leaves 2^30-1 for the ref-count, which is more than enough,
because these references are only taken for a short time in
g_weak_ref_get() and g_weak_ref_set(). Note that one thread can take at
most one reference at a time, so the ref-count will always be a small
number.
Also note that, obviously, we will only take a bit lock while also
holding a reference. That means that when weak_ref_data_unref() decreases
the ref-count to zero, the bit will be unlocked as well.
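A minimal sketch of the idea (hypothetical names, not the actual GLib code), using g_bit_lock() on the same integer that carries the ref-count:

```c
#include <glib.h>

#define SKETCH_LOCK_BIT 30

/* Ref-count in bits 0..29, lock in bit 30. All accesses to the field are
 * atomic, as required by g_bit_lock(). */
typedef struct
{
  gint atomic_field;
} RefCountAndLockSketch;

static void
sketch_ref (RefCountAndLockSketch *data)
{
  g_atomic_int_inc (&data->atomic_field);
}

static gboolean
sketch_unref (RefCountAndLockSketch *data)
{
  /* Returns TRUE when the ref-count drops to zero. A lock is only ever
   * held while also holding a reference, so at that point the lock bit
   * is known to be unset. */
  return g_atomic_int_dec_and_test (&data->atomic_field);
}

static void
sketch_lock (RefCountAndLockSketch *data)
{
  g_bit_lock (&data->atomic_field, SKETCH_LOCK_BIT);
}

static void
sketch_unlock (RefCountAndLockSketch *data)
{
  g_bit_unlock (&data->atomic_field, SKETCH_LOCK_BIT);
}
```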
The reason to do this is to free up some space in WeakRefData. Note that
(on x86_64) this doesn't actually make the struct smaller. It's
probably not reasonably possible to make WeakRefData smaller than it
already is (on x86_64). However, by combining the fields we have some
space for reuse without increasing the struct size. That space will be
used next.
The implementation of GWeakRef tracks weak references in a way that
requires linear search. That is probably best for an expected low
number of entries (e.g. compared to the overhead of having a hash
table). However, it means that if you create thousands of weak
references, performance starts to degrade.
Add a test that creates 64k weak references, just to see how it goes.
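A minimal sketch of what such a test could look like (test path and count illustrative, not the actual test code):

```c
#include <glib-object.h>

#define N_WEAK_REFS 64000  /* below the new 65535 limit */

/* Register many GWeakRefs on one object and check that they all read
 * back as NULL after the object is destroyed. */
static void
test_many_weak_refs (void)
{
  GObject *obj = g_object_new (G_TYPE_OBJECT, NULL);
  GWeakRef *refs = g_new0 (GWeakRef, N_WEAK_REFS);
  guint i;

  for (i = 0; i < N_WEAK_REFS; i++)
    g_weak_ref_init (&refs[i], obj);

  g_object_unref (obj);

  for (i = 0; i < N_WEAK_REFS; i++)
    {
      g_assert_null (g_weak_ref_get (&refs[i]));
      g_weak_ref_clear (&refs[i]);
    }

  g_free (refs);
}

int
main (int argc, char **argv)
{
  g_test_init (&argc, &argv, NULL);
  g_test_add_func ("/gobject/weak-ref/many", test_many_weak_refs);
  return g_test_run ();
}
```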
Replace the global RWLock with per-object locking. Note that there are
three places where we needed to take the global lock: g_weak_ref_get(),
g_weak_ref_set() and _object_unref_clear_weak_locations(), during
g_object_unref(). The calls during g_object_unref() seem the most
relevant here, where we would want to avoid a global lock. Luckily, that
global lock only had to be taken if the object ever had a GWeakRef
registered, so most objects wouldn't care. The global lock only affects
objects that are ever set via g_weak_ref_set(). Still, try to avoid that
global lock.
Related to GWeakRef, there are various moments when we don't hold a
strong reference to the object. So the per-object lock cannot be on the
object itself, because when we want to unlock we no longer have access
to the object. And we cannot take a strong reference on the GObject
either, because that triggers toggle notifications. And worse, when one
thread holds the last strong reference of an object and decides to
destroy it, then a `g_weak_ref_set(weak_ref, NULL)` on another thread
could acquire a temporary reference, and steal the destruction of the
object from the other thread.
Instead, we already had a "quark_weak_locations" GData and an allocated
structure for tracking the GSList with GWeakRef. Extend that to be
ref-counted and have a separate lifetime from the object. This
WeakRefData now contains the per-object mutex for locking. We can
request the WeakRefData from an object, take a reference to keep it
alive, and use it to hold the lock without having the object alive.
We also need a bitlock on GWeakRef itself. So to set or get a
GWeakRef we must take the per-object lock on the WeakRefData and the
lock on the GWeakRef (in this order). During g_weak_ref_set() there may
of course be two objects (and two WeakRefData) involved: the previous
and the new object.
Note that now once an object gets a WeakRefData allocated, it can no
longer be freed. It must stick until the object gets destroyed. This
allocation happens once an object is set via g_weak_ref_set(). In
other words, objects involved with GWeakRef will have extra data
allocated.
It may be possible to also release the WeakRefData once it's no longer
needed. However, that would be quite complicated and require additional
atomic operations, so it's not clear that it would be worth it. So it's
not done. Instead, the WeakRefData sticks with the object once it's set.
_object_unref_clear_weak_locations() is called twice during
g_object_unref(). In both cases, it is when we expect that the reference
count is 1 and we are either about to call dispose() or finalize().
At this point, we must check for GWeakRef to avoid a race where the ref
count gets increased just at that point.
However, we can do something better than always taking the global lock.
Whenever an object is set on a GWeakRef, set a flag
OPTIONAL_FLAG_EVER_HAD_WEAK_REF on the object. Most objects are not
involved with weak references and won't have this flag set.
If we reach _object_unref_clear_weak_locations() we just (atomically)
checked that the ref count is one. If the object at this point never had
a GWeakRef registered, we know that nobody else could have raced against
obtaining another reference. In this case, we can skip taking the lock
and checking for weak locations.
As most objects don't ever have a GWeakRef registered, this significantly
avoids unnecessary work during _object_unref_clear_weak_locations().
This even fixes a hard-to-hit race in the do_unref=FALSE case.
Previously, if do_unref=FALSE there were code paths where we avoided
taking the global lock; we did so when quark_weak_locations was unset.
However, that is not race free. If we enter
_object_unref_clear_weak_locations() with a ref-count of 1 and one
GWeakRef registered, another thread can take a strong reference and
unset the GWeakRef. Then quark_weak_locations will be unset, and
_object_unref_clear_weak_locations() misses the fact that the ref count
is now bumped to two. That is now fixed, because once
OPTIONAL_FLAG_EVER_HAD_WEAK_REF is set, it will stick.
Previously, there was an optimization to first take a read lock to check
whether there are weak locations to clear. It's not clear that this is
worth it, because we now already have a hint that there might be a weak
location. Unfortunately, GRWLock does not support an upgradable lock, so
we cannot take an (upgradable) read lock, and when necessary upgrade
that to a write lock.
It's not clear what this code comment tries to tell us. Yes, when we
make changes, we must take care that the changes are correct and update
the relevant places.
It seems long obsolete. Drop it.
This partly reverts commit d7dd9aefd8 ('placed a comment about not
changing CArray until we have').
g_object_weak_ref() documentation refers to GWeakRef as a thread-safe
replacement. However, it's not clear to me how GWeakRef is a
replacement for a callback. I think it means that you combine
g_object_weak_ref() with GWeakRef, to both hold a (thread-safe) weak
reference and get a notification on destruction.
Add a test that GWeakRef is already cleared inside the GWeakNotify
callback.
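A minimal sketch of such a test (not the actual test code):

```c
#include <glib-object.h>

static GWeakRef weak_ref;  /* global for brevity in this sketch */

/* The GWeakNotify callback: per the behaviour tested above, a GWeakRef
 * pointing at the same object already reads back as NULL here. */
static void
weak_notify_cb (gpointer data, GObject *where_the_object_was)
{
  g_assert_null (g_weak_ref_get (&weak_ref));
}

int
main (void)
{
  GObject *obj = g_object_new (G_TYPE_OBJECT, NULL);

  g_weak_ref_init (&weak_ref, obj);
  g_object_weak_ref (obj, weak_notify_cb, NULL);

  g_object_unref (obj);  /* runs weak_notify_cb() during destruction */

  g_weak_ref_clear (&weak_ref);
  return 0;
}
```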
This isn’t normally hit because it’s in a test which is disabled unless
run with `-m thorough`.
The data is owned by `g_test_add_data_func_full()` until the end of the
process.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
This isn’t normally hit because it’s in a test which is disabled unless
run with `-m thorough`.
The `GParamSpec` is initially floating, but its floating ref is sunk by
`g_object_interface_install_property()` (regardless of whether that call
succeeds or aborts). The behaviour of
`g_object_interface_install_property()` in this respect may have changed
more recently than the test was written.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Usually, after g_pointer_bit_lock() we want to read the pointer that we
have. In many cases, when we g_pointer_bit_lock() a pointer, we can
access it afterwards without atomics, as nobody is going to modify the
pointer then.
However, gdataset also supports g_datalist_set_flags(), so the pointer
may change at any time and we must always use atomics to read it. For
that reason, g_datalist_lock_and_get() does an atomic read right after
g_pointer_bit_lock().
g_pointer_bit_lock() can easily access the value that it just set. Add
g_pointer_bit_lock_and_get() which can return the value that gets set
afterwards.
Aside from saving the second atomic-get in certain scenarios, the
returned value is also atomically the one that we just set.
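A hedged usage sketch (the exact signature of the new call is assumed from the description above; check the GLib documentation):

```c
#include <glib.h>

static gpointer lockptr = NULL;

static void
example (void)
{
  guintptr ptr_and_lock_bit;

  /* Lock bit 0 and atomically learn the value that was just stored,
   * i.e. the pointer with the lock bit set. */
  g_pointer_bit_lock_and_get (&lockptr, 0, &ptr_and_lock_bit);

  /* ... */

  g_pointer_bit_unlock (&lockptr, 0);
}
```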
The existing g_pointer_bit_lock() and g_pointer_bit_unlock() API
requires the user to understand/reimplement how bits of the pointer get
mangled. Add helper functions for that.
The useful thing to do with the g_pointer_bit_lock() API is to get/set
pointers while having it locked. For example, to set the pointer a user
can do:
g_pointer_bit_lock (&lockptr, lock_bit);
ptr2 = set_bit_pointer_as_if_locked(ptr, lock_bit);
g_atomic_pointer_set (&lockptr, ptr2);
g_pointer_bit_unlock (&lockptr, lock_bit);
That has several problems:
- it requires one extra atomic operation (3 instead of 2, in the
non-contended case).
- the first g_atomic_pointer_set() already wakes blocked threads,
which find themselves still being locked and need to go back to
sleep.
- the user needs to re-implement how bit-locking mangles the pointer so
that it looks as if it were locked.
- while the user tries to re-implement what glib does to mangle the
pointer for bitlocking, there is no immediate guarantee that they get
it right.
Now we can do instead:
g_pointer_bit_lock(&lockptr, lock_bit);
g_pointer_bit_unlock_and_set(&lockptr, lock_bit, ptr, 0);
This will also emit a critical if @ptr has the locked bit set.
g_pointer_bit_lock() really only works with pointers that have a certain
alignment, and the lowest bits unset. Otherwise, there is no space to
encode both the locking and all pointer values. The new assertion helps
to catch such bugs.
Also, g_pointer_bit_lock_mask_ptr() is added, so we can do:
g_pointer_bit_lock(&lockptr, lock_bit);
/* set a pointer separately, when g_pointer_bit_unlock_and_set() is unsuitable. */
g_atomic_pointer_set(&lockptr, g_pointer_bit_lock_mask_ptr(ptr, lock_bit, TRUE, 0, NULL));
...
g_pointer_bit_unlock(&lockptr, lock_bit);
and:
g_pointer_bit_lock(&lockptr, lock_bit);
/* read the real pointer after getting the lock. */
ptr = g_pointer_bit_lock_mask_ptr(lockptr, lock_bit, FALSE, 0, NULL));
...
g_pointer_bit_unlock(&lockptr, lock_bit);