Usually, after g_pointer_bit_lock() we want to read the pointer that we
have. In many cases, when we g_pointer_bit_lock() a pointer, we can
access it afterwards without atomics, as nobody is going to modify the
pointer while it is locked.
However, gdataset also supports g_datalist_set_flags(), so the pointer
may change at any time and we must always use atomics to read it. For
that reason, g_datalist_lock_and_get() does an atomic read right after
g_pointer_bit_lock().
g_pointer_bit_lock() can easily access the value that it just set. Add
g_pointer_bit_lock_and_get(), which returns the value that is set after
taking the lock.
Aside from saving the second atomic-get in certain scenarios, the
returned value is also atomically the one that we just set.
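
For example (a minimal sketch, assuming the new function takes the
address, the lock bit and an out-pointer; variable names are
illustrative):

  guintptr ptr;

  /* take the bit lock and atomically fetch the value that was just set */
  g_pointer_bit_lock_and_get (&lockptr, lock_bit, &ptr);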
The existing g_pointer_bit_lock() and g_pointer_bit_unlock() API
requires the user to understand/reimplement how bits of the pointer get
mangled. Add helper functions for that.
The useful thing to do with the g_pointer_bit_lock() API is to get/set
pointers while holding the lock. For example, to set the pointer a user
can do:
  g_pointer_bit_lock (&lockptr, lock_bit);
  ptr2 = set_bit_pointer_as_if_locked (ptr, lock_bit);
  g_atomic_pointer_set (&lockptr, ptr2);
  g_pointer_bit_unlock (&lockptr, lock_bit);
That has several problems:

- it requires one extra atomic operation (3 instead of 2, in the
  non-contended case).
- the first g_atomic_pointer_set() already wakes blocked threads,
  which find themselves still locked and need to go back to sleep.
- the user needs to re-implement how bit-locking mangles the pointer so
  that it looks as if it were locked.
- while the user tries to re-implement what glib does to mangle the
  pointer for bit-locking, there is no guarantee that they get it
  right.
Now we can do instead:
  g_pointer_bit_lock (&lockptr, lock_bit);
  g_pointer_bit_unlock_and_set (&lockptr, lock_bit, ptr, 0);
This will also emit a critical if @ptr has the locked bit set.
g_pointer_bit_lock() really only works with pointers that have a certain
alignment, and the lowest bits unset. Otherwise, there is no space to
encode both the locking and all pointer values. The new assertion helps
to catch such bugs.
Also add g_pointer_bit_lock_mask_ptr(), so we can do:

  g_pointer_bit_lock (&lockptr, lock_bit);

  /* set a pointer separately, when g_pointer_bit_unlock_and_set() is unsuitable. */
  g_atomic_pointer_set (&lockptr, g_pointer_bit_lock_mask_ptr (ptr, lock_bit, TRUE, 0, NULL));
  ...
  g_pointer_bit_unlock (&lockptr, lock_bit);
and:
  g_pointer_bit_lock (&lockptr, lock_bit);

  /* read the real pointer after getting the lock. */
  ptr = g_pointer_bit_lock_mask_ptr (lockptr, lock_bit, FALSE, 0, NULL);
  ...
  g_pointer_bit_unlock (&lockptr, lock_bit);
GType is either an integer or a pointer, so we have to use the _pointer
version here to support architectures such as Morello.
These two lines were missed in 5ecd3cbe52; fixing them allows the
gobject/enums test to pass on CheriBSD (Morello).
Helps: https://gitlab.gnome.org/GNOME/glib/-/issues/2842
Comparing reallocated pointers is UB, but this happens to work for now
on most compilers. However, on CHERI systems, if g_bsearch_array_insert()
reallocs in-place, the new `hlbsa` pointer may have larger bounds
than `o`, and using the old pointer with the smaller bounds can result
in a bounds error. I don't think this code is performance critical, so
removing the optimization and inserting unconditionally should be fine.
Currently, this realloc() UB rarely causes issues, but newer versions of
GCC with _FORTIFY_SOURCE=3 might also be able to observe the valid
memory range (assuming sufficient inlining).
See https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
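
The underlying pattern, reduced to a minimal plain-C illustration (not
the actual gsignal.c code):

  #include <stdlib.h>

  void
  example (void)
  {
    char *buf = malloc (16);
    char *buf2 = realloc (buf, 32);

    if (buf2 == buf)   /* UB: "buf" became indeterminate at the realloc() */
      buf[16] = '\0';  /* on CHERI, "buf" still carries the old 16-byte
                        * bounds, so this write traps; it is only valid
                        * when done through "buf2" */
    free (buf2);
  }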
In general, we must not call out to external, unknown code while holding
a lock. That is prone to deadlock.
g_object_ref() can emit a toggle notification. In g_weak_ref_set(), we
must not do that while holding the lock.
The optional flags should be used for bit locks. That means we must
only use atomic operations when updating the flags.

Having "_X" variants of the flag-update methods that skip the atomics
means we must take care not to take bit locks during construction.

That is hard to get right. There is so much happening during object
construction that it's unclear when it's really safe to access the
flags without atomics. Don't do this.
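
In sketch form, the rule means every flags update goes through an
atomic read-modify-write (helper names follow gobject.c, but the code
is illustrative):

  static void
  object_set_optional_flags (GObject *object, guint flags)
  {
    /* the same word can carry bit locks, so a plain read-modify-write
     * could corrupt a concurrently held lock: always use atomics */
    g_atomic_int_or (object_get_optional_flags_p (object), flags);
  }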
Add a GObjectPrivate struct and let GObject have private data.

On architectures where the GObject struct has an alignment gap (64
bit), we use the gap for the "optional_flags". On architectures where
we don't have the gap, use the private data for those flags.

For now, private data is only added for those optional flags (and not on
architectures where the flags fit inside GObject). In the future, we
may add additional fields there and add the private struct
unconditionally. The main purpose will be to replace all the global
locks with per-object locks, and to make "optional_flags" also
available on 32bit.
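
A sketch of the resulting layout (the macro and field names are
illustrative):

  typedef struct
  {
  #ifndef HAVE_OPTIONAL_FLAGS_IN_GOBJECT
    /* no alignment gap in GObject (32 bit): keep the flags here */
    guint optional_flags;
  #endif
    /* future fields (e.g. per-object locks) could be added here
     * unconditionally */
  } GObjectPrivate;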
GWeakRef calls g_object_ref() while holding a read lock.
g_object_ref() can emit a toggle notification.
If you then set up a GWeakRef (to the same or another object) inside
the toggle notification, the code will try to take a write lock.
Deadlock will happen.
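
A sketch of the problematic sequence (the callback shape is the real
GToggleNotify signature; the rest is illustrative):

  static GWeakRef other_weak_ref;

  static void
  toggle_notify (gpointer data, GObject *object, gboolean is_last_ref)
  {
    /* invoked from the g_object_ref() inside g_weak_ref_set(), i.e.
     * while the GWeakRef read lock is held; this needs the write
     * lock and deadlocks: */
    g_weak_ref_set (&other_weak_ref, object);
  }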
Add a test to "show" that (the breaking part is commented out).
Will be fixed next.
In g_object_unref(), we already call _object_unref_clear_weak_locations()
before and after dispose(). At those places it is necessary.
Calling it a third time during g_object_real_dispose() is not useful,
adds unnecessary overhead, and is actually undesirable.
In particular, because g_object_real_dispose() is the implementation of
the virtual function GObject.dispose(). For subclasses that override dispose(),
it is not well defined at which point they should chain up to the parent
implementation (for dispose(), I'd argue that they usually chain up at
the end of their own code). If they chain up at the end, this has no
effect.
This only really matters if you try to register a GWeakRef during dispose
and/or resurrect the object:
  static void
  dispose (GObject *object)
  {
    g_weak_ref_set (&global_weak_ref, object);
    global_ref = g_object_ref (object);
    G_OBJECT_CLASS (parent_class)->dispose (object);
  }
Here the object was resurrected, but g_object_real_dispose() would still
clear the weak ref. That is neither desirable nor sensible.

Instead, the virtual function dispose() is called from two places: from
g_object_unref() and from g_object_run_dispose(). In both cases, it is
ensured that weak locations are cleared *after* dispatching the virtual
function. Don't do it somewhere in the middle, from
g_object_real_dispose().
dispose() can resurrect an object and/or register a weak-ref. After
returning from dispose(), we must check again, and we must do so in a
race-free manner: check that we have no more weak locations and that
the ref count is one.

In fact, if _object_unref_clear_weak_locations() determines that the
ref count is 1, it must also decrement the ref count to zero while
holding the weak_locations_lock. This prevents g_weak_ref_set() from
registering a weak pointer after the reference count has dropped to zero.
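
In simplified form, the race-free final check could look like this (the
lock name follows gobject.c; the rest is a sketch):

  g_rw_lock_writer_lock (&weak_locations_lock);

  if (!g_atomic_int_compare_and_exchange ((int *) &object->ref_count, 1, 0))
    {
      /* a weak ref resurrected the object concurrently: do not destroy */
      g_rw_lock_writer_unlock (&weak_locations_lock);
      return FALSE;
    }

  /* the count is zero while we hold the lock, so g_weak_ref_set() can
   * no longer register a new weak location */
  clear_weak_locations (object);   /* illustrative helper */
  g_rw_lock_writer_unlock (&weak_locations_lock);
  return TRUE;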
Also add an assertion to g_weak_ref_set() that the object is still
alive. The assertion is useful for finding bugs, but the change really
makes it impossible for a wrongly used g_weak_ref_set() to resurrect
the object after finalization has started.

The final

  g_datalist_id_set_data (&object->qdata, quark_weak_locations, NULL);

during finalization is no longer necessary and is dropped.
We never set any data for quark_weak_refs. It's unused; drop it.
Also, raise a g_critical() if the GWeakRef is unexpectedly not
registered in the object. That really shouldn't happen.
Previously:

  1. old_val = atomic_add (&object->ref_count);
  2. if (old_val == 1 && OBJECT_HAS_TOGGLE_REF (object)) { toggle_notify (); }
As old_val was 1, you might think that no other thread can have a valid
reference to object. However, that's not the case. For one, GWeakRef can
be used to create another strong reference. More easily, the single
reference can be shared between multiple threads (as long as the code
takes care that the object lives long enough).
That means, another thread can easily add and drop references (including
toggle references) between steps 1 and 2.
A race here might be hard to hit, and the effect might not be obviously
bad. However, consider: old_val is 1 due to a normal reference, and
another thread adds a toggle ref between steps 1 and 2. Then we would
notify a toggle from 1->2, although a newly added toggle ref is expected
to always start with a normal and a toggle reference. The first toggle
notification is expected to notify about the loss of other references,
not about gaining a second one.
To handle this properly, when we increase the reference count from 1 to
2, we must do so under a lock and check for the toggle notification.
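
Sketched, the 1->2 transition becomes (the lock name is illustrative;
toggle_refs_get_notify_unlocked() is the helper mentioned below):

  GToggleNotify toggle_notify = NULL;
  gpointer toggle_data = NULL;

  G_LOCK (toggle_refs_mutex);
  if (g_atomic_int_compare_and_exchange ((int *) &object->ref_count, 1, 2))
    {
      if (OBJECT_HAS_TOGGLE_REF (object))
        toggle_notify = toggle_refs_get_notify_unlocked (object, &toggle_data);
    }
  /* else: the count changed concurrently; the caller retries (omitted) */
  G_UNLOCK (toggle_refs_mutex);

  if (toggle_notify)
    /* the object just gained a second reference */
    toggle_notify (toggle_data, object, FALSE);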
As we now correctly track the toggle behavior, we can also assert in
toggle_refs_get_notify_unlocked() that n_toggle_refs agrees with the
number of references, that is, that the user always matched
g_object_add_toggle_ref() with g_object_remove_toggle_ref().
The downside is that there is now a case (increasing the reference
count from 1 to 2) where we need to take the global lock. That
performance problem should be addressed by using per-object locks
instead of a global lock.
The previous g_object_unref() was racy. There were three places where we
decremented the ref count, but still accessed the object afterwards
(while assuming that somebody else might still hold a reference). For
example:
  if (!g_atomic_int_compare_and_exchange_full ((int *) &object->ref_count,
                                               old_ref, old_ref - 1,
                                               &old_ref))
    continue;

  TRACE (GOBJECT_OBJECT_UNREF (object, G_TYPE_FROM_INSTANCE (object), old_ref));

  /* if we went from 2->1 we need to notify toggle refs if any */
  if (old_ref == 2 && OBJECT_HAS_TOGGLE_REF (object))
    {
      /* The last ref being held in this case is owned by the toggle_ref */
      toggle_refs_notify (object, TRUE);
    }
After we decrement the reference count (and give up our reference), we
are only allowed to access the object if we know we have the only
possible reference to it. In particular, if old_ref is larger than 1,
then somebody else holds references and races against destroying the
object. The object might already be a dangling pointer.
This is slightly complicated due to toggle references and clearing of
weak-locations.
For toggle references, we must take a lock on the mutex. Luckily, that
is only necessary when the current reference count is exactly 2.
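
In hedged pseudo-C, the resulting flow (heavily simplified from the
actual gobject.c):

  old_ref = g_atomic_int_get ((int *) &object->ref_count);

  if (old_ref > 2)
    {
      /* decrement; on failure the caller retries from the top (omitted) */
      if (g_atomic_int_compare_and_exchange_full ((int *) &object->ref_count,
                                                  old_ref, old_ref - 1,
                                                  &old_ref))
        return;   /* we gave up our reference: "object" may now be
                   * dangling and must not be dereferenced again */
    }
  else if (old_ref == 2)
    {
      /* only the 2->1 transition takes the toggle-refs lock, so the
       * decrement and the toggle check happen atomically under it */
    }
  else
    {
      /* old_ref == 1: dispose(), then clear the weak locations and
       * drop the count to zero in a race-free manner (see above) */
    }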
Note that we emit the TRACE() after the ref count was already decreased.
If another thread unrefs the object, we might have a dangling pointer
inside the TRACE(). That would only be fixable by emitting the TRACE()
before the actual unref (which has its own problems). This problem
already existed previously.
The change to the test is necessary and correct. Before this patch,
g_object_unref() would call dispose() and decrement the reference count
right after.
In the test case at gobject/tests/reference.c:1108, the reference count
after dispose and decrement is 1. Then it thaws the notification queue,
which emits a property-changed signal. The test then proceeds to
reference the object again, which notifies the toggle reference.
Previously, the toggle reference was notified 3 times.
After this change, the property changed signal is emitted before
decreasing the reference count. Taking a reference then does not cause
an additional toggle on+off, so in total only one toggle happens.
That accounts for the change in the test. The new behavior is
correct.
The indentation level in g_object_unref() is wrong. Fix it by reformatting
the function with clang-format. That makes it easier for follow-up patches
to adhere to a consistent style.
No other changes.
Toggle refs are seldom used, and when they are, it mostly makes sense
that there is only one of them. Thus, when removing the last toggle ref,
also remove the associated data.
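
In sketch form (the quark name is as in gobject.c, the surrounding code
is illustrative):

  tstack->n_toggle_refs--;
  if (tstack->n_toggle_refs == 0)
    {
      /* the last toggle ref is gone: also drop the per-object data */
      g_datalist_id_remove_data (&object->qdata, quark_toggle_refs);
    }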
Previously:

- if the object is currently not frozen, we called
  g_object_notify_queue_freeze() once and then dispatched the event
  directly. This is probably the common case, and requires taking the
  notify_lock once.
- if the object is currently frozen, we called
  g_object_notify_queue_freeze(), g_object_notify_queue_add() and
  g_object_notify_queue_thaw(). This required taking the notify_lock
  three times.
- if the object is currently not frozen and in_init, we called
  g_object_notify_queue_freeze() twice and g_object_notify_queue_add().
  This also required taking the lock three times. There is another thaw
  at the end of object initialization.

That was because we first called g_object_notify_queue_freeze() to see
whether we are frozen, and depending on that, queued the event (and
thawed again).

Instead, g_object_notify_queue_add() can do the check and the queueing
in one step, as sketched below. There is no need to call freeze() just
to (conditionally) enqueue a notification. Now only one lock is taken
in all cases.
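
A sketch of that consolidated step (field and lock names follow
gobject.c, the details are illustrative):

  static gboolean
  g_object_notify_queue_add (GObject *object, GParamSpec *pspec, gboolean in_init)
  {
    GObjectNotifyQueue *nqueue;
    gboolean queued = FALSE;

    G_LOCK (notify_lock);
    nqueue = g_datalist_id_get_data (&object->qdata, quark_notify_queue);
    if (nqueue || in_init)
      {
        /* frozen (or in object init): check and enqueue under one lock */
        if (!nqueue)
          nqueue = notify_queue_create_frozen (object);   /* illustrative */
        if (!g_slist_find (nqueue->pspecs, pspec))
          {
            nqueue->pspecs = g_slist_prepend (nqueue->pspecs, pspec);
            nqueue->n_pspecs++;
          }
        queued = TRUE;
      }
    G_UNLOCK (notify_lock);

    /* FALSE: not frozen, the caller dispatches the notification directly */
    return queued;
  }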
Also, g_object_notify_queue_freeze() and g_object_notify_queue_thaw()
both call g_datalist_id_get_data() (which also takes a bit lock). As the
thaw is no longer necessary, that second lock is also saved.
Before dispatching signals (and calling out to user code), we want to
take a reference and ensure that the object stays alive.
However, a thaw may not decrease the freeze_count to zero, or there may
be no properties to notify. Avoid taking a reference in those cases.
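
Sketched, the thaw path then only takes a reference when it will really
dispatch (simplified, helper name illustrative):

  if (--nqueue->freeze_count > 0 || nqueue->n_pspecs == 0)
    return;   /* still frozen, or nothing queued: no user code will run,
               * so no extra reference is needed */

  /* we are about to call out to user code: keep the object alive */
  g_object_ref (object);
  dispatch_notify_signals (object, nqueue);   /* illustrative */
  g_object_unref (object);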
This was done since the beginning (commit e773d7dba6, 'fixed dealing
with collection/lcopy of NULL values.'). But it's not clear why we
would need to take a reference on the object here.
Freeze does not emit any signals/callbacks and does not call back into
user code. It just sets up some internal state.
This doesn't require taking a reference. The caller must hold a valid
reference to begin with, and if that's given, there is no need to
acquire another one.
g_atomic_int_get() returns a signed int. While we never expect this to be
negative, a negative value would also indicate a bug. Adjust the check to
also assert against a negative ref count.
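
Roughly, the adjusted check:

  old_ref = g_atomic_int_get ((int *) &object->ref_count);

  /* the count is conceptually unsigned, but the atomic getter returns a
   * signed int; a negative value indicates a bug, so assert on > 0 */
  g_return_if_fail (old_ref > 0);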
Generating the gir and typelib files involves inter-dependencies
between the libraries.
For example, glib requires gobject, and the gdump-generated files
require gmodule, so we have a cyclic dependency: gmodule requires
gobject, which requires glib.
To prevent this, let's just generate all the introspection files at
once in a separate meson file, so that we don't have to deal with
these cycles.
Given this, we could even revert commit fa37ab6d0, since gio is now
compiled before the gir files.
The GLib gir requires glib-types.h, which in turn includes
gobject-visibility.h; that header needs to be generated before the GLib
gir can be built, so explicitly add it to the sources.
Also, in order to link the typelib we need the definitions of the
*_get_type() functions that are defined in gboxed.c, so this file should
be part of the gir sources, or we'll get linking issues.
Aside from checking that we're accessing the global GParamSpecPool
without necessarily initializing GObjectClass, we should also verify
that we're doing so safely, without taking the class init lock.
Ensure that the fix in commit af024b6d7e7d3fbef23c1f7d1f7704fc90ac4fb1
works, by replicating what gobject-introspection does:
- initialise the default GTypeInterface from a GType without also
initialising GObjectClass
- install a property for the interface
- find the GParamSpec using g_object_interface_find_property()
Right now, we're assuming that GObjectClass will be initialised first
and under a lock, but that's not always the case: when traversing a list
of types, the first one might be a GTypeInterface, and if we initialise
an interface that installs a property, the whole thing comes crashing
down because the global GParamSpecPool is not initialised.
Instead of taking a lock everywhere, we can use an atomic compare and
swap; the first thread that installs a property wins the race, as any
other access to the GParamSpecPool is performed under a lock.
While GParamSpecPool should never be used by newly written code, having
the ability to free the associated memory is a good idea. The only
reason why this hasn't been necessary until now is that we assume base
classes are going to keep their GParamSpecPool around forever.
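
A minimal sketch of the compare-and-swap installation described above,
using the new g_param_spec_pool_free() (the global variable name is
illustrative):

  static GParamSpecPool *pspec_pool;   /* illustrative global */

  static GParamSpecPool *
  pspec_pool_ensure (void)
  {
    if (g_atomic_pointer_get (&pspec_pool) == NULL)
      {
        GParamSpecPool *pool = g_param_spec_pool_new (TRUE);

        /* the first thread to install the pool wins the race... */
        if (!g_atomic_pointer_compare_and_exchange (&pspec_pool, NULL, pool))
          g_param_spec_pool_free (pool);   /* ...losers free their copy */
      }
    return g_atomic_pointer_get (&pspec_pool);
  }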
Symptom:

  #20 0x00007ffff756337b in gtk_widget_size_allocate_with_baseline (widget=0xd7a0c0 [DirectWindow], allocation=, baseline=) at
  gdb.error: value has been optimized out