This works around a Meson bug
(https://github.com/mesonbuild/meson/issues/4668).
If we have a Python test which spawns a built native binary, that binary is
listed in the `depends` argument of the `test()`. On Linux, this results in
the directories containing the built libraries which the binary depends on
being added to the `LD_LIBRARY_PATH` of the test invocation. On Windows,
however, Meson currently doesn’t add those directories to `PATH` (which is
the equivalent of `LD_LIBRARY_PATH`), so we have to do it manually.
This takes the same approach as Christoph Reiter took in
gobject-introspection
(13e8c7ff80/tests/meson.build (L2)).
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
During g_datalist_id_update_atomic(), we get "data" and "destroy_notify"
pointers. The callback can then create, update or steal the data. It
steals it by setting `*data` to NULL. In that case, the
"destroy_notify" has no more significance, because the old data was
stolen (and won't be touched by g_datalist_id_update_atomic()) and the
new data that we return is NULL (which means there is no data at all,
and nothing to destroy).
Still, to be clearer about that, set "destroy_notify" to NULL in the
few places where we steal the data.
Note that there are other g_datalist_id_update_atomic() places that also
steal the data but still don't reset the "destroy_notify". Those places
are about the quark_closure_array (CArray) and quark_weak_notifies
(WeakRefStack). For those keys, we never set any "destroy_notify" in
the first place: we rely on the fact that we take care of the data
explicitly. We also don't double-check that the "destroy_notify" is
really NULL when we rightly expect it to be. At those places, it would
feel wrong to suddenly reset "destroy_notify" to NULL.
This is different from the places where this patch clears the
"destroy_notify". At those places, we (sometimes) do have a callback set,
and it might be clearer to explicitly clear it.
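As a minimal sketch of the pattern (assuming a callback signature along
the lines of the internal GDataListUpdateAtomicFunc; names are
illustrative):
```
static gpointer
steal_data_cb (gpointer *data,
               GDestroyNotify *destroy_notify,
               gpointer user_data)
{
  gpointer stolen = *data;

  /* Steal the data: g_datalist_id_update_atomic() must not touch it. */
  *data = NULL;

  /* The old notify no longer has any significance; clear it to be
   * explicit. */
  *destroy_notify = NULL;

  return stolen;
}
```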
When we set a property, we usually freeze the notification queue and
thaw it at the end. This currently always requires a per-object
allocation, which is needed to track the freeze count and the frozen
properties.
But there are cases where we freeze only a single time and never track
a frozen property. In such cases, we can avoid allocating a separate
GObjectNotifyQueue instance.
Optimize for that case by initially setting a global, immutable
sentinel pointer "notify_queue_empty". Only when we actually require a
per-object queue do we allocate one.
This can be useful before calling dispose(). While there are probably
dispose functions that still try to set properties on the object (which
is the main reason we freeze notifications), most probably don't. In
that case, we can avoid allocating the memory during g_object_unref().
Another such case is during object construction. If the object has no
construct properties and the user didn't specify any properties during
g_object_new(), we may well freeze the object but never add properties
to it. In that case too, we can get away without ever allocating the
GObjectNotifyQueue.
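As a rough sketch of the idea (the struct and helper are heavily
simplified; only the sentinel logic is the point here):
```
/* Illustrative only: the real struct has more fields. */
typedef struct { guint freeze_count; } GObjectNotifyQueue;

/* Global, immutable sentinel meaning "frozen once, nothing queued". */
static GObjectNotifyQueue notify_queue_empty;

static GObjectNotifyQueue *
notify_queue_freeze_sketch (GObjectNotifyQueue *nqueue)
{
  if (nqueue == NULL)
    {
      /* First freeze and nothing queued yet: no allocation. */
      return &notify_queue_empty;
    }

  if (nqueue == &notify_queue_empty)
    {
      /* Frozen a second time: now we really need a per-object queue. */
      nqueue = g_new0 (GObjectNotifyQueue, 1);
      nqueue->freeze_count = 2;
      return nqueue;
    }

  nqueue->freeze_count++;
  return nqueue;
}
```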
These functions are unused. The per-object bitlock OPTIONAL_FLAG_LOCK
was replaced by using GData's lock and g_datalist_id_update_atomic().
Note that object_bit_lock() was originally introduced to replace global
locks for GObject so that all locks were per-object. Now this lock is
also dropped, and we only use the (per-object) GData lock.
When we introduced object_bit_lock(), we also added the optional flags
on 32-bit (see HAVE_OPTIONAL_FLAGS_IN_GOBJECT). These flags are still
useful to have on 32-bit architectures such as x86, so we keep them
even though their original user is gone now.
It was replaced by the GData lock and g_datalist_id_update_atomic().
Note that we introduced object_bit_lock() to replace global mutexes.
Now, object_bit_lock() is also replaced by using the GData lock via
g_datalist_id_update_atomic().
This means that all mutex-like locks on a GObject now go through the
GData lock on the GObject's qdata.
For the moment, the object_bit_lock() API is still here and unused. It
will be dropped in a separate commit.
This is the first step towards dropping the OPTIONAL_BIT_LOCK_TOGGLE_REFS
lock. That will happen soon after.
Note that toggle_refs_check_and_ref_or_deref() is called from
g_object_ref()/g_object_unref() whenever the ref count toggles between
1 and 2, so it is called frequently. Also consider that most objects
have no toggle references and would rather avoid the overhead.
Note that we expect most objects to have no toggle references. So the
fast path first takes g_datalist_lock(), updates the ref count and
checks OBJECT_HAS_TOGGLE_REF(). If there is no toggle reference, we
avoid the call to g_datalist_id_update_atomic(), which would first have
to search the datalist for the quark_toggle_refs key.
Only if OBJECT_HAS_TOGGLE_REF() is set do we call
g_datalist_id_update_atomic(). At that point, we pass "already_locked"
to indicate that we already hold the lock, avoiding the overhead of
taking it a second time.
In this commit, the fast path actually gets worse: previously we
already had the OBJECT_HAS_TOGGLE_REF() optimization and only needed
the object_bit_lock(); now we additionally take the g_datalist_lock().
However, the object_bit_lock() will go away in the next commit, which
brings the fast path back to taking only one bit lock. You might think
that the fast path is then still worse, because previously we took a
distinct lock (object_bit_lock()) while now even more places go through
the GData lock, and taking different locks could theoretically allow
for higher parallelism. But note that in both cases these are
per-object locks, so it would be very hard to find a usage where the
distinct lock previously achieved higher parallelism (e.g. a concurrent
g_weak_ref_get() vs. toggling the last reference). And this only
concerns the fast path in toggle_refs_check_and_ref_or_deref(); at all
other places, we will soon take one lock less.
This also fixes a regression from commit abdb58007a ('gobject: drop
OPTIONAL_BIT_LOCK_NOTIFY lock'). Note the code comment in
toggle_refs_check_and_ref_or_deref() about how it relies on holding the
same lock that is also taken while destroying the object. That was no
longer the case once the OPTIONAL_BIT_LOCK_NOTIFY lock was replaced by
the GData lock. This commit fixes it, because the same lock is taken
once again.
Fixes: abdb58007a ('gobject: drop OPTIONAL_BIT_LOCK_NOTIFY lock')
This allows the caller to take the lock on the GData first and perform
some operations.
This is useful under the assumption that the caller can identify cases
where calling g_datalist_id_update_atomic() is unnecessary, but still
needs to hold the lock to make that decision atomically.
That avoids performance overhead whenever the call to
g_datalist_id_update_atomic() can be skipped. This matters for checking
the toggle notifications in g_object_ref()/g_object_unref().
Note that with "already_locked", g_datalist_id_update_atomic() will
still unlock the GData at the end. That is because
g_datalist_id_update_atomic() may re-allocate the buffer, and in that
case it can do a more efficient unlock itself instead of leaving it to
the caller: combining the pointer update with the unlock saves an
additional atomic operation compared to first setting the buffer and
then doing a separate g_datalist_unlock(). The usage and purpose of
this parameter is special anyway, so the few callers will be fine with
this asymmetry.
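A rough sketch of the intended usage, here for the toggle-reference
fast path (control flow simplified; toggle_refs_cb is a hypothetical
callback and the exact parameter placement is illustrative):
```
g_datalist_lock (&object->qdata);

/* ... update the ref count while holding the lock ... */

if (!OBJECT_HAS_TOGGLE_REF (object))
  {
    /* Fast path: no toggle refs. Skip the datalist search entirely
     * and unlock symmetrically ourselves. */
    g_datalist_unlock (&object->qdata);
    return;
  }

/* Slow path: pass already_locked. Note the asymmetry: we took the
 * lock, but g_datalist_id_update_atomic() will unlock internally. */
g_datalist_id_update_atomic (&object->qdata, quark_toggle_refs,
                             TRUE /* already_locked */,
                             toggle_refs_cb, object);
```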
dispose() would previously set the "handlers" pointer to NULL. But
dispose() also calls g_signal_group_gc_handlers(), which requires this
pointer to be non-NULL.
This means dispose() could not be called multiple times. Allowing
repeated dispose() is good practice, because g_object_run_dispose() and
object resurrection both require that dispose() can be called more than
once per object.
Fix that problem by keeping the array around until finalize().
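Schematically (MySignalGroup and its "handlers" GPtrArray are
stand-ins; the real code differs):
```
static void
my_signal_group_dispose (GObject *object)
{
  MySignalGroup *self = (MySignalGroup *) object;

  /* Empty the array, but keep the array itself alive: dispose() may
   * run again, and the gc pass requires a non-NULL pointer. */
  g_ptr_array_set_size (self->handlers, 0);

  G_OBJECT_CLASS (my_signal_group_parent_class)->dispose (object);
}

static void
my_signal_group_finalize (GObject *object)
{
  MySignalGroup *self = (MySignalGroup *) object;

  /* finalize() runs exactly once: now the array can go away. */
  g_clear_pointer (&self->handlers, g_ptr_array_unref);

  G_OBJECT_CLASS (my_signal_group_parent_class)->finalize (object);
}
```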
Fixes: dd43471f60 ('gobject: add GSignalGroup')
We call g_object_weak_release_all() in two places.
Once right before finalize(). At this point, the object is definitely
going to be destroyed, and the user must no longer resurrect it or
subscribe new weak notifications. In that case, we really want to
notify/release all weak notifications.
However, we also call it from g_object_real_dispose(). During dispose,
the API allows the user to resurrect an object. Granted, that is
probably not something anybody should do, but GObject makes a reasonable
attempt to support that.
A possible place to resurrect (and subscribe new weak notifications) is
when GObject calls g_object_real_dispose().
```
static void
g_object_real_dispose (GObject *object)
{
  g_signal_handlers_destroy (object);

  /* GWeakNotify and GClosure can call into user code */
  g_object_weak_release_all (object);
  closure_array_destroy_all (object);
}
```
But previously, g_object_weak_release_all() would continue iterating
until there are no more weak notifications left. So while the user can
take a strong reference and resurrect the object, their attempts to
register new weak notifications are thwarted.
Instead, when the loop in g_object_weak_release_all() starts, remember
the initial number of weak notifications, and don't release more than
that. Note that WeakRefStack preserves the order of entries, so by
maintaining the "remaining_to_notify" counter we know when to stop.
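In rough C, the bounded loop looks like this (the helpers are
hypothetical; the real code does this under the GData lock via
g_datalist_id_update_atomic()):
```
guint remaining_to_notify = weak_ref_stack_get_length (object);

while (remaining_to_notify > 0)
  {
    GWeakNotify notify;
    gpointer data;

    /* Pop the oldest entry; WeakRefStack preserves FIFO order. */
    weak_ref_stack_pop_head (object, &notify, &data);
    remaining_to_notify--;

    /* Anything the callback registers now is appended after the
     * initial entries and thus not counted: it survives this round. */
    notify (data, object);
  }
```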
Note that this also brings back an earlier behavior, where we would call
```
g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);
```
This would take out the entire WeakRefStack at once and notify the weak
notifications registered at that time. But subsequent registrations
would not be released/notified yet.
It seems bad style to use a naive realloc() with a +1 increment each
time a new element gets added. Instead, remember the allocation size
and double the buffer size when the buffer grows. This way, the total
cost of growing the buffer stays linear, i.e. amortized O(1) per
insertion.
Well, WeakRefStack uses a flat array for tracking the entries. We need
to search and memmove() the entries, so we are O(n) anyway. We do that
because it allows for relatively simple code while being memory
efficient. Also, we expect only a reasonably small number of weak
notifications in the first place.
I still think it makes sense to avoid the O(n) number of realloc() calls
on top of that. Note that we do this while holding the (per-object)
lock. It's one thing to do a linear search or a memmove(). It's another
to do a (more expensive) realloc().
Also, shrink the buffer during g_object_weak_unref() to get rid of
excess memory.
Also, note that the initial allocation only allocates space for the
first item. I think that makes sense, because I expect that many objects
will only get a single weak notification registered. So this allocation
should not yet have excess memory allocated.
Also, note that the "flexible" array member WeakRefStack.weak_refs is
declared with a length of 1. Maybe we should use a C99 flexible array
member ([]) or the pre-C99 workaround ([0]). Anyway. Previously, we
would always allocate space for that one extra tuple but never use it.
Fix that too.
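As a sketch, the growth policy combined with a C99 flexible array
member (type and field names are illustrative):
```
typedef struct
{
  GWeakNotify notify;
  gpointer data;
} WeakRefTuple;

typedef struct
{
  guint n_weak_refs;
  guint alloc;               /* remembered allocation size */
  WeakRefTuple weak_refs[];  /* C99 flexible array member */
} WeakRefStack;

static WeakRefStack *
weak_ref_stack_append (WeakRefStack *wstack, WeakRefTuple tuple)
{
  if (wstack == NULL)
    {
      /* Many objects only ever get one weak notification: start with
       * space for exactly one entry, no excess memory. */
      wstack = g_malloc (sizeof (WeakRefStack) + sizeof (WeakRefTuple));
      wstack->n_weak_refs = 0;
      wstack->alloc = 1;
    }
  else if (wstack->n_weak_refs == wstack->alloc)
    {
      /* Double on grow: amortized O(1) reallocations per append. */
      wstack->alloc *= 2;
      wstack = g_realloc (wstack,
                          sizeof (WeakRefStack)
                          + wstack->alloc * sizeof (WeakRefTuple));
    }

  wstack->weak_refs[wstack->n_weak_refs++] = tuple;
  return wstack;
}
```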
Previously, in two places (in g_object_real_dispose() and shortly
before finalize()), we would call
```
g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);
```
This clears @quark_weak_notifies at once and then invokes all
notifications.
This means that if you were inside a notification callback and called
g_object_weak_unref() on the object for *another* weak reference, you
would hit a fatal critical:
```
GLib-GObject-FATAL-CRITICAL: g_object_weak_unref_cb: couldn't find weak ref 0x401320(0x16b9fe0)
```
Granted, maybe inside a GWeakNotify you shouldn't call much of anything
on where_the_object_was. However, unregistering things (like calling
g_object_weak_unref()) should still reasonably work.
Instead, now remove each weak notification one by one and invoke it.
As we now invoke the callbacks in a loop, if a callee registers a new
callback, that one gets unregistered and notified right away too.
Previously, during g_object_real_dispose() we would only notify the
notifications that were present when the loop started. This is similar
to what happens in closure_array_destroy_all(). This is a change in
behavior, but it will be fixed in a separate follow-up commit.
https://gitlab.gnome.org/GNOME/glib/-/issues/1002
g_object_weak_unref() would have done a fast-removal of the entry, which
messes up the order of the weak notifications.
During destruction of the object we emit the weak notifications. They
are emitted in the order in which they were registered (FIFO), except
when a g_object_weak_unref() messed up the order. Avoid that and
preserve the order.
We now do a memmove(), which is O(n). But note that we already track
weak references in a flat array that requires an O(n) linear search.
Thus, g_object_weak_unref() was already O(n), and that doesn't change.
More importantly, users are well advised to limit themselves to a
reasonably small number of weak notifications, and for small n the
linear search and the memmove() are an efficient solution.
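The removal itself is short: instead of overwriting entry i with the
last entry (fast-removal), shift the tail down (names illustrative):
```
/* Remove entry i while preserving registration (FIFO) order. */
memmove (&wstack->weak_refs[i], &wstack->weak_refs[i + 1],
         sizeof (wstack->weak_refs[0]) * (wstack->n_weak_refs - i - 1));
wstack->n_weak_refs--;
```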
At the time this code was added ([1]), the code and the comment were
correct: g_object_run_dispose() did not clear GWeakRefs.
That was later adjusted so that they are cleared ([2]), but at various
times it was not ensured that the GWeakRef was cleared *before* the
weak notification was emitted.
This is now fixed, and the checks for "where_the_object_was" are no
longer necessary. Drop them.
I considered keeping the checks just to be extra safe. But they would
rely on the details of how g_object_run_dispose() works. By now there
is a test that checks that GWeakRefs are cleared before the
notifications are emitted, so we should not accidentally mess this up,
and the code is no longer needed.
[1] commit e82eb490fe ('Handle the case of g_object_run_dispose() in GBinding')
[2] commit a7262d6357 ('gobject: Cleanup weak locations data as part of dispose')
This changes the behavior from commit [1] back to something close to
what it was before.
The point of g_object_run_dispose() is to break reference cycles to
bring down an object. We don't expect the object to take new references
that keep it alive for longer. We probably also don't expect it to
register new weak references. We also don't expect the dispose()
callees to check g_weak_ref_get() for the object. In those cases, this
change makes no difference.
Note that during g_object_run_dispose() the ref count does not go to
zero, yet we still clear the GWeakRefs. As such, GWeakRef really tracks
when objects get disposed, rather than when the ref count actually
drops to zero. That is intentional (e.g. issue [2]).
But compare to g_object_unref(), where we also clear GWeakRef *before*
calling dispose. That makes more sense, because inside dispose() (and
for example during weak notifications), we probably want to see that
g_weak_ref_get() indicates the object is already disposed. For that
reason, it seems more correct to clear out the GWeakRef before calling
dispose().
Also, the dispose() callees (e.g. the weak notifications) might refuse
to let the object die by intentionally keeping strong references
around. Not sure why they would do that; it is similar to resurrecting
an object during dispose(). But if they do, they might also want to
register new GWeakRefs. In that case, we wouldn't want to
unconditionally unset those newly set GWeakRefs right afterwards.
In most cases, it shouldn't make a difference. In the case where it
does, this is the more sensible order of doing things.
[1] commit 2952cfd7a7 ('gobject: drop clearing quark_weak_locations from g_object_real_dispose()')
[2] https://gitlab.gnome.org/GNOME/glib/-/issues/2266
During object initialization, we may want to freeze the notifications,
but only do so once (and unfreeze once at the end).
Rework how that was done. We can avoid an additional GData lookup.
By now, the GObjectNotifyQueue gets reallocated, so if we kept the
queue pointer around, it might well be dangling by the time we use it.
That is error prone, and it's also unnecessary: all we need to know is
whether we bumped the freeze count and need to unfreeze. The queue
pointer itself was not useful, because we must take a lock anyway (via
g_datalist_id_update_atomic()) to do anything with it.
Instead, use an nqueue_is_frozen boolean variable.
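A sketch of the new shape (the freeze/thaw helpers are simplified
stand-ins for the real static functions):
```
gboolean nqueue_is_frozen = FALSE;

if (needs_freeze)  /* e.g. construct properties to set */
  {
    g_object_notify_queue_freeze (object);
    nqueue_is_frozen = TRUE;
  }

/* ... set properties; the GObjectNotifyQueue may get reallocated
 * meanwhile, so we deliberately keep no pointer to it ... */

if (nqueue_is_frozen)
  g_object_notify_queue_thaw (object);
```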
GSList is a bad choice in almost all use cases: it's bad for locality
and requires a heap allocation per entry.
Instead, use an array, and grow the buffer exponentially via realloc().
Now that we use g_datalist_id_update_atomic(), it is also easy to
update the pointer. Hence, the GObjectNotifyQueue struct no longer
points to a separate array of pspecs. Instead, the entire
GObjectNotifyQueue itself gets reallocated, saving one heap allocation
for the separate head structure.
We can tighten up the types which are being used, to prevent the
warnings. Not everything in the world has to be a `guint`.
These warnings only showed up on the macOS CI runner.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
These only show up on macOS. Apparently it’s more sensitive to assigning
`gboolean` (which is secretly `int`) to a `guint` bitfield. 🤷
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
These don’t show up for me on Linux, but are now causing CI failures on
macOS (https://gitlab.gnome.org/GNOME/glib/-/jobs/5006543):
```
../gobject/gclosure.c:923:40: error: implicit conversion changes signedness: 'gboolean' (aka 'int') to 'guint' (aka 'unsigned int') [-Werror,-Wsign-conversion]
ATOMIC_SET (closure, in_marshal, in_marshal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
```
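One plausible shape of the fix is to make the conversion explicit
(illustrative, not necessarily the exact change):
```
/* Normalize the gboolean, then convert explicitly. */
ATOMIC_SET (closure, in_marshal, (guint) !!in_marshal);
```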
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
This makes the code a little clearer. In most cases, it’s not a
functional change.
In a few cases, the values are different. I believe the original values
were incorrect (accidentally transposed, perhaps). This never caused an
issue because they were all immediately overwritten during construction
of a `GParamSpec`: these values were defaults in the `instance_init`
vfunc of the `GTypeInstance` for a `GParamSpec`, but the actual min/max
for the `GParamSpec` instance were immediately written over them in the
constructor (such as `g_param_spec_int()`).
Spotted in !4593.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
The "without first having or creating a strong reference" part is wrong.
While we invoke the dispose() method, we always still hold the last
reference. The calling thread called g_object_unref() with a strong
reference that we are about to give up, but at the point where we call
dispose(), we didn't yet decrement the ref count to zero. Doing so would
be a fatal bug.
As such, during dispose() the object is still healthy and there is
still a strong reference to it. You can call `g_weak_ref_set()` with
the object without taking an additional strong reference. Of course, if
you don't actually
take a strong reference (and thus don't resurrect the object), then
right afterwards, the last reference is dropped to zero, and the
GWeakRef gets reset again.
But there is no need to claim that you need to take another strong
reference to set a GWeakRef during dispose(). This was always the case.
Also, reword the previous paragraph. I think this is clearer.
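For example, something like this is valid in a dispose() implementation
even without taking an extra reference (sketch; global_weak_ref and
my_parent_class are assumed to be defined elsewhere):
```
static void
my_dispose (GObject *object)
{
  /* The thread that called g_object_unref() still holds the last
   * strong reference while dispose() runs. */
  g_weak_ref_set (&global_weak_ref, object);

  G_OBJECT_CLASS (my_parent_class)->dispose (object);

  /* Unless the object got resurrected, the ref count drops to zero
   * right after dispose() and the GWeakRef is reset again. */
}
```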
These are all fairly straightforward, but I didn’t get them locally;
they only showed up on CI.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Fixing #3405 is going to take a lot of work, so let’s split it up into
pieces and work on them separately. The `gobject/` and `gobject/tests/`
directories now compile cleanly with `-Wsign-conversion` (see the
previous commits), so let’s enable the warning for those directories to
prevent regressions while we continue to work on the other directories.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
There’s a painful inconsistency in the types of the
`g_{test_rand,random,rand}_int{,_range}()` functions, which vary
arbitrarily between `gint32` and `guint32`.
Unfortunately since those functions mention `int` explicitly in the name
(and then some of them return an `unsigned` integer), I don’t see a way
to make the APIs consistent without significant deprecations or
additions.
So, for the moment, to fix various `-Wsign-conversion` warnings, plaster
the tests with casts.
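For example (g_random_int() returns a guint32, while g_test_rand_int()
returns a gint32):
```
guint32 u = (guint32) g_test_rand_int ();  /* gint32 → guint32 */
gint32 i = (gint32) g_random_int ();       /* guint32 → gint32 */
```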
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
This fixes a load of -Wsign-conversion warnings. The dest type setter
function is being used (presumably by design?) so there’s sometimes a
type mismatch (signed/unsigned, or size) with the constant value being
used by the test. This just makes the existing implicit casts explicit.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Fix all the instances where `-Wsign-conversion` was pointing out that
`g_assert_cmpint()` had been used on unsigned inputs, or vice-versa.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Not sure why these constants were chosen the way they were, but that’s
not a problem I’m going to investigate right now. This just makes the
implicit cast explicit to shut the compiler warning up.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
This fixes `-Wsign-conversion` warnings, though I’m not sure why the
compiler is emitting them. The signed/unsigned status of flag enum
members is not particularly well defined in the C standard (and even
less well understood by me), so just do what seems necessary to shut the
compiler up.
The benefits of enabling `-Wsign-conversion` across the codebase
hopefully outweighs this noise.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
While we’re at it, rename the variables to make the intent a bit
clearer: these functions return a boolean indicating whether any of the
values were modified to make them valid. `n_changed` is a counter of the
number of modified values.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Unfortunately the signatures of our atomic functions alternate between
using signed and unsigned integers across different functions, so we
can’t just use one type as input. Add some explicit casts to fix
harmless `-Wsign-conversion` warnings.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Rather than reinventing it ourselves. The old version in `gboxed.c`
could lose the second half of very long strings, as it truncated the
`size_t` string length to the `ssize_t` accepted by
`g_string_new_len()`.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Now all accesses to quark_notify_queue are guarded by the GData lock.
Several non-trivial operations are implemented via
g_datalist_id_update_atomic().
The OPTIONAL_BIT_LOCK_NOTIFY lock is thus unnecessary and can be dropped.
Note that with the move to g_datalist_id_update_atomic(), we now
potentially do more work while holding the GData lock (e.g. some code
paths allocate additional memory). But note that
g_datalist_id_set_data() already has code paths where it must allocate
memory to track the GDataElt. Also, most objects are not used in
parallel, so holding the per-object (per-GData) lock longer does not
affect them. Moreover, many operations also require an
object_bit_lock(), so it seems very unlikely that you could really
achieve higher parallelism by taking more locks (and thereby minimizing
the time the GData lock is held). On the contrary, taking one lock less
and doing all the work under it is beneficial.