Commit Graph

479 Commits

Thomas Haller
73eb2d2be7 gobject: new thread-safe API for g_object_weak_ref()
The weak notification APIs g_object_weak_ref() and g_object_weak_unref()
are not thread-safe. This patch adds thread-safe alternatives:
g_object_weak_ref_full() and g_object_weak_unref_full().

The problem arises when other threads call g_object_run_dispose() or
g_object_unref(), making g_object_weak_unref() unsafe. The caller cannot
know whether the weak notification was successfully removed or might
still be invoked.

For example, g_object_weak_unref() asserts if no matching notification
is found, which is inherently racy. Beyond that, weak notifications
often involve user data that must be freed -- either by the callback or
after g_object_weak_unref(). Since the caller cannot know which path
executed, this can lead to races and double-free errors.

The new g_object_weak_unref_full() returns a boolean that indicates
whether the notification was removed or will still be invoked, allowing
safe cleanup. Acting on this return value is the core of the
thread-safety fix.
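
A usage sketch follows. The prototypes here are only illustrative
guesses based on the description above ("obj", "my_data" and
"my_data_free" are hypothetical):

  static void
  my_weak_notify (gpointer data, GObject *where_the_object_was)
  {
    /* ... */
  }

  /* Register without synchronization, with a GDestroyNotify for
   * the user data: */
  g_object_weak_ref_full (obj, my_weak_notify, my_data,
                          FALSE /* synchronize */, my_data_free);

  /* Unregister, possibly racing against g_object_unref() or
   * g_object_run_dispose() on another thread: */
  if (g_object_weak_unref_full (obj, my_weak_notify, my_data))
    {
      /* Removed: my_weak_notify() will never run. */
    }
  else
    {
      /* Not removed: my_weak_notify() already ran or still will.
       * Unlike g_object_weak_unref(), this does not assert, and we
       * know not to free the user data here ourselves. */
    }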

Note that g_object_unref() only starts disposing after ensuring there
are no more GWeakRefs and only the single caller's strong reference
remains. So you might think that no other thread could acquire a strong
reference and race by calling g_object_weak_unref(). While this makes
such a race less likely, it is not eliminated. If there are multiple
weak notifications or closures, one can pass a reference to another
thread that calls g_object_weak_unref() and enables the race. Also, with
g_object_run_dispose(), there is nothing preventing another thread from
racing against g_object_weak_unref().

g_object_weak_ref_full() and g_object_weak_unref_full() also support a
`synchronize=TRUE` flag. This makes the callback run while holding a
per-callback mutex, so that g_object_weak_unref_full() can block until
the callback either has already run or is guaranteed to never run.

Calling user callbacks while holding a lock can risk deadlocks, but the
risk is limited because the lock is specific to that notification.

Finally, GDestroyNotify callbacks are supported. While mostly a
convenience, they are also invoked outside the lock, which enables more
complex cleanup without the risk of deadlock.

Contrary to common wisdom, combining weak notifications with GWeakRef
does not solve this problem. It also forces the caller to acquire
strong references, which emits toggle notifications. When
g_object_weak_ref_full() is used carefully, the caller of
g_object_weak_unref_full() can safely use a pointer to the object
without ever increasing the reference count. A unit test shows how that
is done.

This improves correctness and safety for weak references in
multithreaded contexts.

The main overhead of this change is that WeakRefTuple grew from 2
pointer sizes to 4. Every weak notification has such an entry, so
tracking a registration now takes more memory. Otherwise, there is
no relevant overhead compared to before. Obviously, a "synchronized"
notification is more expensive, which is why it requires an opt-in
during g_object_weak_ref_full().
2025-08-08 20:48:11 +02:00
Thomas Haller
ba2b08cffa gobject: shrink WeakRefStack during release-all
It feels ugly to leave the buffer at the wrong size.

We call g_object_weak_release_all() during g_object_real_dispose() and
right before finalize. In most cases, we expect that the loop iterates
until there are no weak notifications left (in which case the entire
WeakRefStack is freed). In that case, there is no need to shrink the
buffer, because it's going to be released soon anyway.

Note that no new weak references can be registered after finalize (as
the ref count already dropped to zero). However, new weak references
can be registered during dispose (either during the last
g_object_unref() or during g_object_run_dispose()).

In that case, I feel it is nice to bring the buffer back to the right
size. We don't know how long the object will continue to live
afterwards, so let's trim the excess allocation.
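
Sketch of the trimming (member names illustrative):

  if (wstack->n_weak_refs > 0 && wstack->alloc_size > wstack->n_weak_refs)
    {
      wstack->alloc_size = wstack->n_weak_refs;
      wstack = g_realloc (wstack,
                          G_STRUCT_OFFSET (WeakRefStack, weak_refs)
                          + wstack->alloc_size * sizeof (WeakRefTuple));
    }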
2025-08-05 10:30:17 +02:00
Thomas Haller
735458ec19 gobject: clean up loop in g_object_weak_unref_cb()
Refactor the function to separate the search and removal logic. Instead
of nesting the removal inside the loop, first search for the matching
entry. If none is found, return early. Otherwise, goto the removal
logic.
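
Sketched (details illustrative, message taken from the existing
assertion):

  for (i = 0; i < wstack->n_weak_refs; i++)
    {
      if (wstack->weak_refs[i].notify == notify &&
          wstack->weak_refs[i].data == data)
        goto found;
    }
  g_critical ("%s: couldn't find weak ref %p(%p)",
              G_STRFUNC, notify, data);
  return;

found:
  /* un-nested removal logic ... */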

This reduces indentation, emphasizes the main path, and improves
readability and maintainability. The change uses the often unfairly
maligned goto for clarity.
2025-08-05 10:30:17 +02:00
Thomas Haller
4a3f273834 gobject: reword documentation for GWeakRef
Also delete

  * It is invalid to take a #GWeakRef on an object during #GObjectClass.dispose
  * without first having or creating a strong reference to the object.

This is wrong. During dispose() there is still a strong reference.
Likewise during a GWeakNotify callback (which is called during
dispose()). So yes, you probably should not register new weak (or
strong) references during dispose and should just let the object die.
But aside from that, you don't need to first obtain another strong
reference.
2025-08-05 10:30:17 +02:00
Philip Withnall
7b996a6ce5 Merge branch 'th/gobj-empty-notify-queue' into 'main'
[th/gobj-empty-notify-queue] gobject: optimize notify-queue handling for a single freeze

See merge request GNOME/glib!4642
2025-05-23 05:44:43 +00:00
Thomas Haller
3ade8a93f3 gobject: clear "destroy_notify" during g_datalist_id_update_atomic()
During g_datalist_id_update_atomic(), the callback receives "data" and
"destroy_notify" pointers. The callback can then create, update or
steal the data. It steals the data by setting `*data` to NULL. In that
case, the "destroy_notify" has no more significance, because the old
data was stolen (and won't be touched by g_datalist_id_update_atomic())
and the new data that we return is NULL (which means there is no data
at all, and nothing to destroy).

Still, to be clearer about that, clear the "destroy_notify" to NULL at
the few places where we steal the data.
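
Roughly the shape of such a stealing callback (the exact
GDataListUpdateAtomicFunc signature is paraphrased here):

  static gpointer
  steal_data_cb (gpointer *data, GDestroyNotify *destroy_notify,
                 gpointer user_data)
  {
    gpointer stolen = *data;

    *data = NULL;            /* steal: the old data won't be destroyed */
    *destroy_notify = NULL;  /* no longer significant; reset for clarity */
    return stolen;
  }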

Note that there are other g_datalist_id_update_atomic() callers that
also steal the data but still don't reset the "destroy_notify". Those
places are about the quark_closure_array (CArray) and
quark_weak_notifies (WeakRefStack). For those keys, we never set any
"destroy_notify" in the first place: we rely on the fact that we take
care of the data explicitly. We also don't double check that
"destroy_notify" is really NULL when we rightly expect it to be NULL
already. At those places, it would feel wrong to suddenly reset
"destroy_notify" to NULL.
This is different from the places where this patch clears the
"destroy_notify". At those places, we (sometimes) do have a callback
set, and it is clearer to explicitly clear it.
2025-05-22 21:24:47 +02:00
Thomas Haller
1818d53034 gobject: optimize notify-queue handling for a single freeze
When we set a property, we usually freeze property notifications and
thaw them at the end. This currently always requires a per-object
allocation, which is needed to track the freeze count and the frozen
properties.

But there are cases where we freeze only a single time and never track
a frozen property. In such cases, we can avoid allocating a separate
GObjectNotifyQueue instance.

Optimize for that case by initially setting a global, immutable
sentinel pointer "notify_queue_empty". Only allocate a per-object queue
once one is actually required.
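
Sketch of the sentinel pattern (simplified):

  /* global, immutable: "frozen once, nothing queued yet". */
  static GObjectNotifyQueue notify_queue_empty;

  /* freeze: */
  if (nqueue == NULL)
    nqueue = &notify_queue_empty;             /* no allocation */

  /* queueing the first property notification: */
  if (nqueue == &notify_queue_empty)
    nqueue = g_new0 (GObjectNotifyQueue, 1);  /* allocate for real */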

This can be useful before calling dispose(). While there are probably
dispose functions that still try to set properties on the object (which
is the main reason we freeze notifications there), most probably don't.
In this case, we can avoid allocating the memory during
g_object_unref().

Another such case is during object construction. If the object has no
construct properties and the user didn't specify any properties during
g_object_new(), we may well freeze the object but never queue a
property notification on it. In that case too, we can get away without
ever allocating the GObjectNotifyQueue.
2025-05-22 21:04:56 +02:00
Thomas Haller
51dd935202 gobject: drop object_bit_lock() functions
These functions are unused. The per-object bitlock OPTIONAL_FLAG_LOCK
was replaced by using GData's lock and g_datalist_id_update_atomic().

Note that object_bit_lock() was originally introduced to replace global
locks for GObject so that all locks were per-object. Now this lock is
also dropped, and we only use the (per-object) GData lock.

When we introduced object_bit_lock(), we also added the optional flags
on 32bit (see HAVE_OPTIONAL_FLAGS_IN_GOBJECT). These flags are still
useful to have, also on x86, so we keep them even though their original
user is gone now.
2025-05-20 18:29:08 +02:00
Thomas Haller
e6a1e78029 gobject: drop OPTIONAL_BIT_LOCK_TOGGLE_REFS lock
It was replaced by the GData lock and g_datalist_id_update_atomic().

Note that we introduced object_bit_lock() to replace global mutexes.
Now, object_bit_lock() is itself replaced by using the GData lock via
g_datalist_id_update_atomic().

This means that all mutex-like locks on the GObject now go through the GData
lock on the GObject's qdata.

For the moment, the object_bit_lock() API is still here and unused. It
will be dropped in a separate commit.
2025-05-20 16:40:49 +02:00
Thomas Haller
588dbc569d gobject: rework g_object_remove_toggle_ref() to use g_datalist_id_update_atomic() 2025-05-20 16:40:49 +02:00
Thomas Haller
f438ef6802 gobject: rework g_object_add_toggle_ref() to use g_datalist_id_update_atomic() 2025-05-20 16:40:49 +02:00
Thomas Haller
2fe2f2f9b7 gobject: rework toggle_refs_check_and_ref() to use g_datalist_id_update_atomic()
This is the first step toward dropping the OPTIONAL_BIT_LOCK_TOGGLE_REFS
lock. That will happen soon after.

Note that toggle_refs_check_and_ref_or_deref() is called from
g_object_ref()/g_object_unref() whenever the ref count toggles between
1 and 2, so it is called frequently. Also consider that most objects
have no toggle references and would rather avoid the overhead.

Note that we expect that the object has no toggle references. So the
fast path only takes a g_datalist_lock() first, updates the ref-count
and checks OBJECT_HAS_TOGGLE_REF(). If we have no toggle reference, we
avoid the call to g_datalist_id_update_atomic() -- which first needs to
search the datalist for the quark_toggle_refs key.

Only if OBJECT_HAS_TOGGLE_REF(), we call g_datalist_id_update_atomic().
At that point, we pass "already_locked" to indicate that we hold the
lock, and avoid the overhead of taking the lock a second time.
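
The fast path, sketched (simplified; the callback and exact parameters
are illustrative):

  g_datalist_lock (&object->qdata);

  /* ... toggle the ref count between 1 and 2 under the lock ... */

  if (!OBJECT_HAS_TOGGLE_REF (object))
    {
      /* fast path: no toggle refs, no datalist search needed. */
      g_datalist_unlock (&object->qdata);
      return;
    }

  /* slow path: "already_locked" avoids re-taking the lock. Note that
   * g_datalist_id_update_atomic() still unlocks at the end. */
  g_datalist_id_update_atomic (&object->qdata, quark_toggle_refs,
                               TRUE /* already_locked */,
                               toggle_refs_cb, NULL);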

In this commit, the fast path actually gets worse: previously we
already had the OBJECT_HAS_TOGGLE_REF() optimization and only needed
the object_bit_lock(); now we additionally take the g_datalist_lock().
But the object_bit_lock() will go away next, which brings the fast path
back to taking only one bit lock. You might think that the fast path is
then still worse, because previously we took a distinct lock
(object_bit_lock()), while now even more places go through the GData
lock; taking different locks could theoretically allow for higher
parallelism. However, in both cases these are per-object locks, so it
would be very hard to find a usage where the distinct lock achieved
higher parallelism (e.g. a concurrent g_weak_ref_get() vs. toggling the
last reference). And that is only the fast path in
toggle_refs_check_and_ref_or_deref(). At all other places, we will soon
take one lock less.

This also fixes a regression of commit abdb58007a ('gobject: drop
OPTIONAL_BIT_LOCK_NOTIFY lock'). Note the code comment in
toggle_refs_check_and_ref_or_deref() about how it relies on holding the
same lock that is also taken while destroying the object. This was no
longer the case after the OPTIONAL_BIT_LOCK_NOTIFY lock was replaced by
the GData lock. This commit fixes that, because the same lock is taken
once again.

Fixes: abdb58007a ('gobject: drop OPTIONAL_BIT_LOCK_NOTIFY lock')
2025-05-20 16:40:49 +02:00
Thomas Haller
61aa0c3ace gdataset: add "already_locked" argument to g_datalist_id_update_atomic()
This allows the caller to take the lock on the GData first, and perform
some operations.

This is useful under the assumption that the caller can find cases
where calling g_datalist_id_update_atomic() is unnecessary, but where
they still need to hold the lock to make that decision atomically.

That avoids performance overhead whenever the call to
g_datalist_id_update_atomic() can be skipped. This matters for checking
the toggle notifications in g_object_ref()/g_object_unref().

Note that with "already_locked", g_datalist_id_update_atomic() will
still unlock the GData at the end. That is because
g_datalist_id_update_atomic() might re-allocate the buffer, and in that
case it can unlock more efficiently itself instead of leaving it to the
caller: it saves an additional atomic operation over first setting the
buffer and then doing a separate g_datalist_unlock(). The usage and
purpose of this parameter is special anyway, so the few callers will be
fine with this asymmetry.
2025-05-20 16:40:47 +02:00
Thomas Haller
3cf6d22f76 Revert "Merge branch 'th/gobj-doc-weakref' into 'main'"
This change appears to cause crashes. Revert for now, to investigate why
exactly that happens.

This reverts commit 22f57fce78, reversing
changes made to 549a966b46.

Fixes: https://gitlab.gnome.org/GNOME/glib/-/issues/3684
See-also: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4584#note_2436512
See-also: https://gitlab.gnome.org/GNOME/gnome-builder/-/issues/2324
2025-05-09 20:34:18 +02:00
Thomas Haller
0be672e1e0 gobject: preserve weak notifications registered during dispose
We call g_object_weak_release_all() at two places.

Once right before finalize(). At this point, the object is definitely
going to be destroyed, and the user must no longer resurrect it or
subscribe new weak notifications. In that case, we really want to
notify/release all weak notifications.

However, we also call it from g_object_real_dispose(). During dispose,
the API allows the user to resurrect an object. Granted, that is
probably not something anybody should do, but GObject makes a reasonable
attempt to support that.

A possible place to resurrect (and subscribe new weak notifications) is
when GObject calls g_object_real_dispose().

  static void
  g_object_real_dispose (GObject *object)
  {
    g_signal_handlers_destroy (object);

    /* GWeakNotify and GClosure can call into user code */
    g_object_weak_release_all (object);
    closure_array_destroy_all (object);
  }

But previously, g_object_weak_release_all() would continue iterating
until no weak notifications were left. So while the user could take a
strong reference and resurrect the object, their attempts to register
new weak notifications were thwarted.

Instead, when the loop in g_object_weak_release_all() starts, remember
the initial number of weak notifications, and don't release more than
that. Note that WeakRefStack preserves the order of entries, so by
maintaining the "remaining_to_notify" counter we know when to stop.
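
Sketch of the loop (member names illustrative):

  guint remaining_to_notify = wstack->n_weak_refs;

  while (remaining_to_notify > 0)
    {
      /* Entries keep their registration order, so notifications
       * registered by a callee land at the end and are never
       * reached by this loop. */
      WeakRefTuple tuple = wstack->weak_refs[0];

      /* ... unlink the entry and invoke the notification ... */
      remaining_to_notify--;
    }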

Note that this also brings back an earlier behavior, where we would
call

  g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);

This took out the entire WeakRefStack at once and notified the weak
notifications registered at the time, while subsequent registrations
were not yet released/notified.
2025-05-07 21:29:37 +00:00
Thomas Haller
1312ec6d0b gobject: grow buffers for weak notifications exponentially
It seems bad style to grow the buffer with a naive realloc() of +1
each time a new element gets added. Instead, remember the allocation
size and double the buffer on growth. This gives amortized constant
reallocation cost per added element.
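
The growth step, sketched (member names illustrative):

  if (G_UNLIKELY (wstack->n_weak_refs == wstack->alloc_size))
    {
      wstack->alloc_size *= 2;   /* double instead of +1 */
      wstack = g_realloc (wstack,
                          G_STRUCT_OFFSET (WeakRefStack, weak_refs)
                          + wstack->alloc_size * sizeof (WeakRefTuple));
    }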

Well, WeakRefStack uses a flat array for tracking the entries. We need
to search and memmove() the entries, so we are O(n) anyway. We do that
because it allows for relatively simple code while being memory
efficient. Also, we expect only a reasonably small number of weak
notifications in the first place.

I still think it makes sense to avoid the O(n) number of realloc() calls
on top of that. Note that we do this while holding the (per-object)
lock. It's one thing to do a linear search or a memmove(). It's another
to do a (more expensive) realloc().

Also, shrink the buffer during g_object_weak_unref() to get rid of
excess memory.

Also, note that the initial allocation only allocates space for the
first item. I think that makes sense, because I expect that many objects
will only get a single weak notification registered. So this allocation
should not yet have excess memory allocated.

Also, note that the "flexible" array member WeakRefStack.weak_refs has
a length of 1. Maybe we should use a C99 flexible array member ([]) or
the pre-C99 workaround ([0]). Anyway, previously we would always
allocate space for that one extra tuple but never use it. Fix that too.
2025-05-07 21:29:37 +00:00
Thomas Haller
dadb759c65 gobject: invoke g_object_weak_ref() one-by-one during destruction
Previously, at two places (in g_object_real_dispose() and shortly before
finalize()), we would call

    g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);

This clears @quark_weak_notifies at once and then invokes all
notifications.

This means that if you were inside a notification callback and called
g_object_weak_unref() on the object for *another* weak reference, you
hit a fatal error:

  GLib-GObject-FATAL-CRITICAL: g_object_weak_unref_cb: couldn't find weak ref 0x401320(0x16b9fe0)

Granted, maybe inside a GWeakNotify you shouldn't call much of anything
on where_the_object_was. However, unregistering things (like calling
g_object_weak_unref()) should still reasonably work.

Instead, now remove each weak notification one by one and invoke it.
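
The loop, sketched (member names illustrative):

  WeakRefStack *wstack;

  while ((wstack = g_datalist_id_get_data (&object->qdata,
                                           quark_weak_notifies))
         && wstack->n_weak_refs > 0)
    {
      /* take out the oldest entry under the lock ... */
      WeakRefTuple tuple = wstack->weak_refs[0];

      /* ... unlink it, then invoke it without holding the lock. */
      tuple.notify (tuple.data, object);
    }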

As we now invoke the callbacks in a loop, if a callee registers a new
weak notification, that one gets released right away too. Previously,
during g_object_real_dispose() we would only notify the notifications
that were present when the loop started. The new behavior is similar to
what happens in closure_array_destroy_all(). This is a change in
behavior, but it will be fixed in a separate follow-up commit.

https://gitlab.gnome.org/GNOME/glib/-/issues/1002
2025-05-07 21:29:37 +00:00
Thomas Haller
af508f91b1 gobject: add internal WeakRefTuple helper structure
This is already useful and will be more useful later.
2025-05-07 21:29:37 +00:00
Thomas Haller
d2e08b7dfe gobject: preserve order of weak notifications in g_object_weak_unref()
g_object_weak_unref() would do a fast removal of the entry, which
messes up the order of the weak notifications.

During destruction of the object we emit the weak notifications. They
are emitted in the order in which they were registered (FIFO), except
when a g_object_weak_unref() messed up the order. Avoid that and
preserve the order.

Now, do a memmove(), which is O(n). But note that we already track
weak references in a flat array that requires an O(n) linear search.
Thus, g_object_weak_unref() was already O(n), and that didn't change.
More importantly, users are well advised to limit themselves to a
reasonably small number of weak notifications. For small n, the linear
search and the memmove() are an efficient solution.
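
The order-preserving removal, sketched (member names illustrative; the
old fast removal is assumed to be the usual swap-with-last idiom):

  /* previously: wstack->weak_refs[i] =
   *               wstack->weak_refs[wstack->n_weak_refs - 1]; */
  memmove (&wstack->weak_refs[i], &wstack->weak_refs[i + 1],
           sizeof (wstack->weak_refs[0]) * (wstack->n_weak_refs - i - 1));
  wstack->n_weak_refs--;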
2025-05-07 21:29:37 +00:00
Michael Catanzaro
3548c4ae53 Merge branch 'th/gobject-no-object-locks-pt1-notify' into 'main'
[th/gobject-no-object-locks-pt1-notify] use `g_datalist_id_update_atomic()` instead of OPTIONAL_BIT_LOCK_NOTIFY

See merge request GNOME/glib!4185
2025-05-06 21:24:32 +00:00
Michael Catanzaro
22f57fce78 Merge branch 'th/gobj-doc-weakref' into 'main'
[th/gobj-doc-weakref] clear #GWeakRef earlier in g_object_run_dispose() and reword docs about #GWeakRef

See merge request GNOME/glib!4586
2025-05-06 21:23:52 +00:00
Thomas Haller
d8f84a517e gobject: clear weak locations before calling dispose in g_object_run_dispose()
This changes the behavior from commit [1] back to be most similar to
what it was before.

The point of g_object_run_dispose() is to break reference cycles to
bring down an object. We don't expect the object to take new references
to keep it alive for longer. We probably also don't expect it to
register new weak references. We also don't expect the dispose() callees
to check g_weak_ref_get() for the object. In that case, this change
makes no difference.

Note that during g_object_run_dispose() the ref count does not drop to
zero, yet we still clear the GWeakRefs. As such, GWeakRef rather tracks
when objects get disposed, instead of when the ref count really goes to
zero. That is intentional (e.g. issue [2]).

But compare to g_object_unref(), where we also clear GWeakRef *before*
calling dispose. That makes more sense, because inside dispose() (and
for example during weak notifications), we probably want to see that
g_weak_ref_get() indicates the object is already disposed. For that
reason, it seems more correct to clear out the GWeakRef before calling
dispose().

Also, the dispose() callees (e.g. the weak notifications) might refuse to
let the object die by intentionally keeping strong references around.
Not sure why they would do that; it is similar to resurrecting an object
during dispose(). But if they do, they might also want to register new
GWeakRef. In that case, we wouldn't want to unset those newly set
GWeakRef unconditionally right after.

In most cases, it shouldn't make a difference. In the case where it
does, this is the more sensible order of doing things.

[1] commit 2952cfd7a7 ('gobject: drop clearing quark_weak_locations from g_object_real_dispose()')
[2] https://gitlab.gnome.org/GNOME/glib/-/issues/2266
2025-05-01 23:40:02 +02:00
Thomas Haller
42c0f9a7b1 gobject: rework freezing once during object initialization
During object initialization, we may want to freeze the notifications,
but only do so once (and unfreeze once at the end).

Rework how that was done. We can avoid an additional GData lookup.
2025-05-01 23:01:46 +02:00
Thomas Haller
18d5b34cfc gobject: don't pass around the GObjectNotifyQueue instance
By now, GObjectNotifyQueue gets reallocated. So if we keep a pointer
to the queue around, it quite possibly becomes dangling.

That is error prone, but it's also unnecessary. All we need to know is
whether we bumped the freeze count and need to unfreeze. The queue
pointer itself was not useful, because we must take a lock anyway (via
g_datalist_id_update_atomic()) to do anything with it.

Instead, use a nqueue_is_frozen boolean variable.
2025-05-01 23:01:46 +02:00
Thomas Haller
b8ff814d7d gobject: rework GObjectNotifyQueue to not use GSList
GSList is a bad choice in almost all use cases: it's bad for locality
and requires a heap allocation per entry.

Instead, use an array, and grow the buffer exponentially via realloc().

Now that we use g_datalist_id_update_atomic(), it is also easy to
update the pointer. Hence, the GObjectNotifyQueue struct no longer
points to a separate array of pspecs. Instead, the entire
GObjectNotifyQueue itself gets reallocated, saving one heap allocation
for the separate head structure.
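
Roughly the resulting layout (field names illustrative):

  typedef struct {
    guint16 freeze_count;
    guint16 n_pspecs;
    guint16 alloc;            /* grown exponentially via g_realloc() */
    GParamSpec *pspecs[1];    /* embedded; no separate allocation */
  } GObjectNotifyQueue;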
2025-05-01 23:01:46 +02:00
Thomas Haller
88f5db3e75 gobject/docs: remove wrong paragraph from GWeakRef docs
The "without first having or creating a strong reference" part is wrong.

While we invoke the dispose() method, we always still hold the last
reference. The calling thread called g_object_unref() with a strong
reference that we are about to give up, but at the point where we call
dispose(), we didn't yet decrement the ref count to zero. Doing so would
be a fatal bug.

As such, during dispose() the object is still healthy and still has a
strong pointer. You can call `g_weak_ref_set()` on that pointer without
taking an additional strong reference. Of course, if you don't actually
take a strong reference (and thus don't resurrect the object), then
right afterwards, the last reference is dropped to zero, and the
GWeakRef gets reset again.
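
For example (the dispose implementation and the "global_tracker"
GWeakRef are hypothetical):

  static void
  my_object_dispose (GObject *object)
  {
    /* valid: @object still holds its last strong reference here; no
     * g_object_ref() is needed just to set a GWeakRef. */
    g_weak_ref_set (&global_tracker, object);

    G_OBJECT_CLASS (my_object_parent_class)->dispose (object);

    /* if nothing took a strong reference, the ref count drops to
     * zero right after dispose() and the GWeakRef is reset again. */
  }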

But there is no need to claim that you need to take another strong
reference to set a GWeakRef during dispose(). This was always the case.

Also, reword the previous paragraph. I think this is clearer.
2025-04-17 18:07:27 +02:00
Philip Withnall
633e49c8d1 gobject: Cast various inverted bitfield constants to unsigned
This fixes `-Wsign-conversion` warnings, though I’m not sure why the
compiler is emitting them. The signed/unsigned status of flag enum
members is not particularly well defined in the C standard (and even
less well understood by me), so just do what seems necessary to shut the
compiler up.

The benefits of enabling `-Wsign-conversion` across the codebase
hopefully outweighs this noise.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Helps: #3405
2025-04-11 23:47:47 +01:00
Philip Withnall
615cd4c10c gobject: Fix a guint to gboolean conversion warning
Make the conversion explicit. Fixes some `-Wsign-conversion` warnings.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Helps: #3405
2025-04-11 23:47:38 +01:00
Philip Withnall
636bbd1d63 gobject: Fix several int/unsigned conversions with atomics
Unfortunately the signatures of our atomic functions alternate between
using signed and unsigned integers across different functions, so we
can’t just use one type as input. Add some explicit casts to fix
harmless `-Wsign-conversion` warnings.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Helps: #3405
2025-04-11 23:47:33 +01:00
Thomas Haller
abdb58007a gobject: drop OPTIONAL_BIT_LOCK_NOTIFY lock
Now all accesses to quark_notify_queue are guarded by the GData lock.
Several non-trivial operations are implemented via
g_datalist_id_update_atomic().

The OPTIONAL_BIT_LOCK_NOTIFY lock is thus unnecessary and can be dropped.

Note that with the move to g_datalist_id_update_atomic(), we now
potentially do more work while holding the GData lock (e.g. some code
paths allocate additional memory). But note that
g_datalist_id_set_data() already has code paths where it must allocate
memory to track the GDataElt. Also, most objects are not used in
parallel, so holding the per-object (per-GData) lock longer does not
affect them. Many operations also require an object_bit_lock(), so it
seems very unlikely that you could really achieve higher parallelism by
taking more locks (and minimizing the time the GData lock is held). On
the contrary, taking one lock less and doing all the work under it is
beneficial.
2025-04-09 18:17:16 +02:00
Thomas Haller
2c0a2b830e gobject: rework g_object_notify_queue_add() to use g_datalist_id_update_atomic()
The goal is to drop the OPTIONAL_BIT_LOCK_NOTIFY lock. This is one step
toward that. Move code inside g_datalist_id_update_atomic().
2025-04-09 18:17:16 +02:00
Thomas Haller
f92e9dd329 gobject: rework g_object_notify_queue_thaw() to use g_datalist_id_update_atomic()
The goal is to drop the OPTIONAL_BIT_LOCK_NOTIFY lock. This is one step
toward that. Move code inside g_datalist_id_update_atomic().
2025-04-09 18:13:24 +02:00
Thomas Haller
37717a123e gobject: rework g_object_notify_queue_freeze() to use g_datalist_id_update_atomic()
A common pattern is to check whether a GData entry exists and, if it
doesn't, add it.

For that, we currently always must take the OPTIONAL_BIT_LOCK_NOTIFY lock.

This can be avoided, because GData already uses an internal mutex. By
using g_datalist_id_update_atomic(), we can perform all relevant
operations while holding that mutex.
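
The pattern inside the callback, sketched (the field and helper names
are illustrative):

  static gpointer
  notify_queue_freeze_cb (gpointer *data, GDestroyNotify *destroy_notify,
                          gpointer user_data)
  {
    GObjectNotifyQueue *nqueue = *data;

    if (nqueue == NULL)
      *data = nqueue = g_new0 (GObjectNotifyQueue, 1);  /* add if missing */

    nqueue->freeze_count++;  /* updated atomically w.r.t. the GData lock */
    return nqueue;
  }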

Move functionality from g_object_notify_queue_freeze() inside
g_datalist_id_update_atomic().

The goal will be to drop the OPTIONAL_BIT_LOCK_NOTIFY lock in a later
commit.
2025-04-07 16:41:02 +02:00
Thomas Haller
93b5b8a051 gobject: drop OPTIONAL_BIT_LOCK_WEAK_REFS bit lock
With the previous changes, all accesses to the WeakRefStack go through
g_datalist_id_update_atomic() and are guarded by the GData's bit lock.
At this point, the OPTIONAL_BIT_LOCK_WEAK_REFS lock is unnecessary and
can be dropped.

A minor benefit is that g_object_weak_{ref,unref}() now needs one lock
less.

Also note that this rework fixes a potential race for weak refs. Note
that we have two calls

  g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);

that don't take the OPTIONAL_BIT_LOCK_WEAK_REFS lock. One of these
calls is right before finalize(). At that point, no other thread can
hold a reference to the object to race against, and we are good.
However, the other call is from g_object_real_dispose(). At that point,
theoretically the object could have been resurrected and a pointer
passed to another thread. If that thread then calls
g_object_weak_ref()/g_object_weak_unref(), it races. We would have
needed an OPTIONAL_BIT_LOCK_WEAK_REFS lock around those
g_datalist_id_set_data(,,NULL) calls.

Instead, this is now also fixed, because every update to the
WeakRefStack happens while holding the GData lock. So if you call
g_datalist_id_set_data(,,NULL), the WeakRefStack is removed from the
GData (and later freed by weak_refs_notify()) and can no longer be
concurrently updated by g_object_weak_{ref,unref}().
2025-04-07 12:43:08 +01:00
Thomas Haller
5da7ed2bc9 gobject: rework g_object_weak_ref() to use _g_datalist_id_update_atomic()
This is a step toward dropping the OPTIONAL_BIT_LOCK_WEAK_REFS lock.
See the next commits for why that is done.
2025-04-07 12:43:06 +01:00
Thomas Haller
4e1039ea76 gobject: rework g_object_weak_unref() to use _g_datalist_id_update_atomic()
This is a step toward dropping the OPTIONAL_BIT_LOCK_WEAK_REFS lock.
See the next commits for why that is done.

Also, free the WeakRefStack if there are no more weak references
registered. Previously, it was never freed.
2025-04-07 12:42:49 +01:00
Philip Withnall
792c4505e0 Merge branch 'th/gobj-closure-array-atomic' into 'main'
[th/gobj-closure-array-atomic] use g_datalist_id_update_atomic() for array of closure watches

See merge request GNOME/glib!4536
2025-04-03 15:09:06 +00:00
Thomas Haller
67fa02bdcf gobject: destroy closure watches one by one
Previously, we would call

  g_datalist_id_set_data (&object->qdata, quark_closure_array, NULL);

which called destroy_closure_array() on the CArray.

At that point, it would iterate over the CArray and invalidate all
closures. Note that this invokes external callbacks, which in turn can
destroy other closures, which calls object_remove_closure(). But that
closure can no longer be found, and an assertion fails.

Instead of removing the entire CArray at once, remove each closure one
by one in a loop.
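
Sketch of the loop (the CArray internals are illustrative):

  CArray *carray;

  while ((carray = g_datalist_id_get_data (&object->qdata,
                                           quark_closure_array))
         && carray->n_closures > 0)
    {
      GClosure *closure = carray->closures[0];

      /* invalidation runs the closure's invalidate notifiers (external
       * code may run here), including the one that removes this entry
       * from the CArray, so the loop makes progress. */
      g_closure_invalidate (closure);
    }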

This problem is similar to issue 1002, except here it's about closure
watches instead of GWeakNotify.

Note that now we destroy closures one by one in a loop, and we iterate
as long as closures remain. That makes a difference when a new closure
gets registered while we destroy them all: previously, newly registered
closures would survive. It would be possible to implement the previous
behavior, but I think the new behavior is better. It is a very remote
use case anyway.
2025-03-13 17:23:55 +01:00
Thomas Haller
dec3ba69e8 gobject: avoid potential race and use g_datalist_id_update_atomic() for closure array
There are two calls to

  g_datalist_id_set_data (&object->qdata, quark_closure_array, NULL);

that don't take the OPTIONAL_BIT_LOCK_CLOSURE_ARRAY lock. These are
inside g_object_real_dispose() and right before finalize(). The one
before finalize() is fine, because we are already in a situation where
nobody else holds a reference on the object.

However, not so with g_object_real_dispose(). That is called after we
checked that there is only one strong reference left, while we are
inside the call to dispose(). At that point (before chaining up to
g_object_real_dispose()), the callee can pass the reference to another
thread. That other thread could create a closure and destroy it again.
This calls object_remove_closure() (accessing the CArray), which now
races against g_object_real_dispose() (destroying the CArray).

Granted, this is very unlikely to happen. But let's try to avoid such
races in principle.

We can avoid this problem with less overhead by doing everything while
holding the GData lock, using g_datalist_id_update_atomic().  This is
probably even faster, as we don't need the additional
OPTIONAL_BIT_LOCK_CLOSURE_ARRAY lock.

Also free the empty closure data during object_remove_closure(). This
frees some unused memory.
2025-03-12 08:22:52 +01:00
Marco Trevisan (Treviño)
5b0ce18dcd gobject: Add single function to check G_ENABLE_DIAGNOSTIC
It was duplicated, and racy too
2025-03-11 01:07:20 +01:00
Marco Trevisan (Treviño)
fba031c41c gobject: Be consistent in using atomic logic to handle the GParamSpecPool
We initialize it atomically, but then don't consistently access it
atomically, which may lead to races at read/write time
2025-03-11 01:07:17 +01:00
Thomas Haller
482e078083 gobject: avoid GLIB_PRIVATE_CALL() for g_datalist_id_update_atomic()
Cache the function pointer for g_datalist_id_update_atomic() in a static
variable in "gobject.c" to avoid looking it up repeatedly.

g_datalist_id_update_atomic() is internal API anyway. Just as GData is
not a generally useful data structure, this function is only useful for
something specific inside GObject.

It can be easily seen that _local_g_datalist_id_update_atomic is never
read without having a GObject at hand (because we call it on
`&object->qdata`). Thus initializing the pointer in
g_object_do_class_init() (under lock) is sufficient to ensure
thread-safe initialization. Note that we still set the pointer via
g_atomic_pointer_set(). This is done in an attempt to pacify the
thread sanitizer.
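
The caching, sketched (the function-pointer type is abbreviated):

  /* in gobject.c: */
  static gpointer (*_local_g_datalist_id_update_atomic) (
      GData **datalist, GQuark key_id,
      GDataListUpdateAtomicFunc callback, gpointer user_data);

  static void
  g_object_do_class_init (GObjectClass *class)
  {
    /* ... runs once, while the type system lock is held ... */
    g_atomic_pointer_set (&_local_g_datalist_id_update_atomic,
                          GLIB_PRIVATE_CALL (g_datalist_id_update_atomic));
  }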

Note that even with LTO enabled, the GLIB_PRIVATE_CALL() call cannot
be inlined. Previously we got:

0000000000011300 <_weak_ref_set>:
   ...
   1131d:       e8 ee 03 ff ff          call   1710 <glib__private__@plt>
   11322:       8b 35 0c b2 05 00       mov    0x5b20c(%rip),%esi        # 6c534 <quark_weak_locations.lto_priv.0>
   11328:       4c 89 e1                mov    %r12,%rcx
   1132b:       49 8d 7c 24 10          lea    0x10(%r12),%rdi
   11330:       48 8d 15 b9 42 ff ff    lea    -0xbd47(%rip),%rdx        # 55f0 <weak_ref_data_get_or_create_cb.lto_priv.0>
   11337:       ff 90 80 00 00 00       call   *0x80(%rax)

afterwards:

0000000000011300 <_weak_ref_set>:
   ...
   1131d:       48 8d 7e 10             lea    0x10(%rsi),%rdi
   11321:       48 89 f1                mov    %rsi,%rcx
   11324:       48 8d 15 c5 42 ff ff    lea    -0xbd3b(%rip),%rdx        # 55f0 <weak_ref_data_get_or_create_cb.lto_priv.0>
   1132b:       8b 35 0b b2 05 00       mov    0x5b20b(%rip),%esi        # 6c53c <quark_weak_locations.lto_priv.0>
   11331:       ff 15 f9 b1 05 00       call   *0x5b1f9(%rip)        # 6c530 <_local_g_datalist_id_update_atomic.lto_priv.0>

Also note that the point here is not to optimize _weak_ref_set() (which
is not a hot path). There is work in progress that will use
g_datalist_id_update_atomic() for more purposes (and during more
relevant code paths of GObject).
2025-02-24 17:41:18 +01:00
Thomas Haller
6ce489bf83 gdataset: drop "key_id" argument from GDataListUpdateAtomicFunc
None of the users actually care about this parameter. And it's unlikely
that they ever will. Also, the passed "key_id" is the argument from
g_datalist_id_update_atomic(). If the caller really cared to know the
"key_id" in the callback, they could pass it as additional user data.
2025-02-21 15:24:51 +01:00
Michael Catanzaro
15edbef3a0 Remove incorrect (inout) annotations from GWeakRef
These are in parameters, not inout parameters.

Fixes #3558
2024-12-10 08:51:07 -06:00
Sid
0a68b172be gsignal: Add clarification on 'detailed_signal' validation
Fixes: https://gitlab.gnome.org/GNOME/glib/-/issues/3540
2024-12-02 14:28:59 +00:00
Philip Withnall
df5aa217e4 gobject: Don’t warn when setting deprecated construct property defaults
The default values for construct properties always have to be set, even
if those properties are deprecated. The code to do that is in GLib, and
not under the control of the user (unless they completely override the
`constructor` vfunc, which is not recommended). So don’t emit a warning
for that if `G_ENABLE_DIAGNOSTICS` is enabled.

In particular, this fixes deprecation warnings being emitted for
properties of a parent class when chaining up with a custom constructor,
even when none of the child class code mentions the deprecated property.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Fixes: #3254
2024-06-14 17:53:37 +01:00
gwillems
d6e0cf9884 gobject: fix broken links to parameters and signals naming rules 2024-05-21 22:32:20 +00:00
Philip Withnall
6a1beede60 gobject: Add an assertion to avoid a static analysis false positive
Avoid scan-build thinking that `new_wrdata` could be `NULL` on this
control path. It can’t be `NULL` if `new_object` is set.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Helps: #1767
2024-04-25 23:16:17 +01:00
Emmanuele Bassi
b37312f7e4 docs: Fix g_object_connect()'s docblock 2024-04-08 12:05:31 +00:00
Ville Skyttä
b20647c2e2 docs: spelling and grammar fixes
Signed-off-by: Ville Skyttä <ville.skytta@iki.fi>
2024-04-01 11:01:06 +00:00