Commit Graph

2340 Commits

Philip Withnall
5f6b1516ae tests: Rework how slow param test is skipped
It’s more helpful to always register the test, even if it’s normally
skipped, since then the skip is recorded in the test logs so people can
see what’s ‘missing’ from them.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>
2024-02-06 11:01:46 +00:00
Philip Withnall
3f4e6ddcd8 Merge branch 'thorough-tests-in-ci' into 'main'
build: Add thorough test setup

See merge request GNOME/glib!3838
2024-02-02 14:33:22 +00:00
Thomas Haller
d8e4f39aa8 gobject: track GWeakRef in object's WeakRefData with an array
GSList doesn't seem the best choice here. Its benefits are that it's
relatively convenient to use (albeit not very efficient) and that an
empty list requires only the pointer to the list's head.

But for a non-empty list, we need to allocate GSList elements. We can do
better by writing more code.

I think it's worth optimizing GObject at the expense of somewhat more
complicated code. The complicated code is still entirely self-contained,
so unless you review the WeakRefData usage, it doesn't need to bother you.
Note that this can easily be measured to be a bit faster. But I think the
more important part is to save some allocations. Often objects are
long-lived, and the GWeakRef will be tracked for a long time. It is
interesting to optimize the memory usage of that.

- if the list contains only one weak reference, it's embedded inline in
  WeakRefData.list.one. Otherwise, an array is allocated and tracked
  at WeakRefData.list.many.

- when the buffer grows, we double the size. When the buffer shrinks,
  we reallocate to 50% once 75% is empty. When the buffer shrinks to
  length 1, we free it (so that "list.one" is always used with a length
  of 1).
  That means, in the worst case, we waste 75% of the allocated buffer.
  That is a deliberate choice, in the hope that future weak references
  will be registered and the space will be needed again.

- on architectures like x86_64, this does not increase the size of
  WeakRefData.

Also, the number of weak-refs is now limited to 65535, and an assertion
fails when you try to register more than that. But note that the
internal tracking just uses a linear search, so you really don't want to
register thousands of weak references on an object. If you do that, the
current implementation is not suitable anyway and you must rethink your
approach. Nor does it make sense to optimize the implementation for such
a use case. Instead, the implementation is optimized for a few (one!)
weak references per object.
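
A minimal C sketch of the one-vs-many layout described above (type and
field names are illustrative, not GLib's actual internals):

    typedef struct
    {
      guint16 len;   /* number of tracked GWeakRef, at most 65535 */
      guint16 alloc; /* allocated entries of .many; 0 while .one is in use */
      union
      {
        GWeakRef *one;   /* embedded storage while len is 1 */
        GWeakRef **many; /* heap array otherwise; grows by doubling,
                          * shrinks to 50% once 75% is empty */
      } list;
    } WeakRefDataSketch;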
2024-02-02 14:49:09 +01:00
Thomas Haller
637c2a08ce gobject: combine ref_count/lock_field in WeakRefData
We can safely combine the two, and use bit 30 of the ref-count for locking.

This still leaves 2^30-1 for the ref-count, which is more than enough,
because these references are only taken for a short time in
g_weak_ref_get() and g_weak_ref_set(). Note that one thread can take at
most one reference at a time, so the ref-count will always be a small
number.

Also note that, obviously, we will only take the bit lock while also
holding a reference. That means that when weak_ref_data_unref()
decreases the ref-count to zero, the bit is unlocked as well.

The reason to do this is to free up some space in WeakRefData. Note that
(on x86_64) this doesn't actually make the struct smaller. It's
probably not reasonably possible to make WeakRefData smaller than it
already is (on x86_64). However, by combining the fields we gain some
space for reuse without increasing the struct size. That space will be
used next.
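
A hedged sketch of the combined field, using GLib's real g_bit_lock() /
g_bit_unlock() on bit 30 (struct and helper names are assumed):

    typedef struct
    {
      /* bits 0..29: ref-count; bit 30: bit lock (layout is illustrative) */
      gint atomic_field;
    } WeakRefDataSketch;

    #define WEAK_REF_DATA_LOCK_BIT 30

    static void
    weak_ref_data_lock_sketch (WeakRefDataSketch *wrdata)
    {
      /* g_bit_lock() blocks on a single bit of the int, leaving the
       * remaining bits free to carry the ref-count. */
      g_bit_lock (&wrdata->atomic_field, WEAK_REF_DATA_LOCK_BIT);
    }

    static void
    weak_ref_data_unlock_sketch (WeakRefDataSketch *wrdata)
    {
      g_bit_unlock (&wrdata->atomic_field, WEAK_REF_DATA_LOCK_BIT);
    }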
2024-02-02 14:49:09 +01:00
Thomas Haller
824c4da44b gobject/tests: add test that creates a large number of weak references
The implementation of GWeakRef tracks weak references in a way that
requires linear search. That is probably best for the expected low
number of entries (e.g. compared to the overhead of having a hash
table). However, it means that if you create thousands of weak
references, performance starts to degrade.

Add a test that creates 64k weak references. Just to see how it goes.
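
A minimal sketch of such a test (assumed body, not necessarily the
committed one):

    #define N_WEAK_REFS 65535

    static void
    test_many_weak_refs_sketch (void)
    {
      GObject *obj = g_object_new (G_TYPE_OBJECT, NULL);
      GWeakRef *refs = g_new0 (GWeakRef, N_WEAK_REFS);
      guint i;

      for (i = 0; i < N_WEAK_REFS; i++)
        g_weak_ref_init (&refs[i], obj);

      g_object_unref (obj);

      for (i = 0; i < N_WEAK_REFS; i++)
        {
          /* after finalization, every GWeakRef must read back as NULL */
          g_assert_null (g_weak_ref_get (&refs[i]));
          g_weak_ref_clear (&refs[i]);
        }
      g_free (refs);
    }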
2024-02-02 14:49:09 +01:00
Philip Withnall
2638f97b3e Merge branch 'th/weak-ref-lock-2' into 'main'
[th/weak-ref-lock-2] gobject: use per-object bit-lock instead of global RWLock for GWeakRef

Closes #743

See merge request GNOME/glib!3834
2024-01-31 16:51:50 +00:00
Thomas Haller
7382cc4383 gobject: use per-object bit-lock instead of global RWLock for GWeakRef
Replace the global RWLock with per-object locking. Note that there are
three places where we needed to take the global lock: g_weak_ref_get(),
g_weak_ref_set() and _object_unref_clear_weak_locations() during
g_object_unref(). The calls during g_object_unref() seem the most
relevant here, where we would want to avoid a global lock. Luckily, that
global lock only had to be taken if the object ever had a GWeakRef
registered, so most objects wouldn't care. The global lock only affects
objects that are ever set via g_weak_ref_set(). Still, try to avoid that
global lock.

Related to GWeakRef, there are various moments when we don't hold a
strong reference to the object. So the per-object lock cannot be on the
object itself, because when we want to unlock we no longer have access
to the object. And we cannot take a strong reference on the GObject
either, because that triggers toggle notifications. And worse, when one
thread holds the last strong reference of an object and decides to
destroy it, then a `g_weak_ref_set(weak_ref, NULL)` on another thread
could acquire a temporary reference, and steal the destruction of the
object from the other thread.

Instead, we already had a "quark_weak_locations" GData and an allocated
structure for tracking the GSList of GWeakRefs. Extend that to be
ref-counted and to have a lifetime separate from the object. This
WeakRefData now contains the per-object mutex for locking. We can
request the WeakRefData from an object, take a reference to keep it
alive, and use it to hold the lock without keeping the object alive.

We also need a bit lock on the GWeakRef itself. So to set or get a
GWeakRef we must take the per-object lock on the WeakRefData and the
lock on the GWeakRef (in this order). During g_weak_ref_set() there may
of course be two objects (and two WeakRefData) involved: the previous
and the new object.

Note that once an object gets a WeakRefData allocated, it can no
longer be freed; it must stick around until the object gets destroyed.
This allocation happens once an object is set via g_weak_ref_set(). In
other words, objects involved with GWeakRef will have extra data
allocated.

It may be possible to also release the WeakRefData once it's no longer
needed. However, that would be quite complicated and require additional
atomic operations, so it's not clear that it would be worth it. So it's
not done. Instead, the WeakRefData sticks to the object once it's set.
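
A hedged sketch of that lock ordering, reusing the sketch helpers from
the commits above (GWeakRef.priv.p is its internal pointer storage):

    static void
    weak_ref_lock_both_sketch (WeakRefDataSketch *wrdata, GWeakRef *weak_ref)
    {
      /* order matters: per-object WeakRefData lock first, ... */
      weak_ref_data_lock_sketch (wrdata);
      /* ... then the bit lock on the GWeakRef itself. */
      g_pointer_bit_lock (&weak_ref->priv.p, 0);
    }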
2024-01-31 17:30:28 +01:00
Thomas Haller
092be080c5 gobject: avoid global GRWLock for weak locations in g_object_unref() in some cases
_object_unref_clear_weak_locations() is called twice during
g_object_unref(). In both cases, it is when we expect that the reference
count is 1 and we are either about to call dispose() or finalize().

At this point, we must check for GWeakRef to avoid a race where the ref
count gets increased just at that point.

However, we can do something better than to always take the global lock.

Whenever an object is set in a GWeakRef, set the flag
OPTIONAL_FLAG_EVER_HAD_WEAK_REF on it. Most objects are not involved
with weak references and won't have this flag set.

If we reach _object_unref_clear_weak_locations() we have just
(atomically) checked that the ref count is one. If the object at this
point never had a GWeakRef registered, we know that nobody else could
have raced against obtaining another reference. In this case, we can
skip taking the lock and checking for weak locations.

As most objects never have a GWeakRef registered, this avoids a
significant amount of unnecessary work during
_object_unref_clear_weak_locations().

This even fixes a hard-to-hit race in the do_unref=FALSE case.
Previously, if do_unref=FALSE there were code paths where we avoided
taking the global lock. We did so when quark_weak_locations was unset.
However, that is not race-free. If we enter
_object_unref_clear_weak_locations() with a ref-count of 1 and one
GWeakRef registered, another thread can take a strong reference and
unset the GWeakRef. Then quark_weak_locations will be unset, and
_object_unref_clear_weak_locations() misses the fact that the ref count
is now bumped to two. That is now fixed, because once
OPTIONAL_FLAG_EVER_HAD_WEAK_REF is set, it sticks.

Previously, there was an optimization to first take a read lock to check
whether there are weak locations to clear. It's not clear that this was
worth it, because at this point we already have a hint that there might
be a weak location. Unfortunately, GRWLock does not support upgrading a
lock, so we cannot take an (upgradable) read lock and, when necessary,
upgrade it to a write lock.
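
A sketch of the resulting fast path (the flag name is from this commit;
the accessor is assumed):

    /* inside _object_unref_clear_weak_locations(), after the ref count
     * was just (atomically) seen to be 1: */
    if (!(object_get_optional_flags (object) & OPTIONAL_FLAG_EVER_HAD_WEAK_REF))
      {
        /* the object was never set in a GWeakRef, so no other thread
         * can have raced via a weak location: skip the global lock. */
        return;
      }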
2024-01-31 17:30:28 +01:00
Philip Withnall
7b5bcf62b8 Merge branch 'th/gobject-carray-comment' into 'main'
[th/gobject-carray-comment] gobject: remove obsolete code comment about CArray

See merge request GNOME/glib!3866
2024-01-31 14:09:04 +00:00
Thomas Haller
81107da210 gobject: remove obsolete code comment about CArray
It's not clear what this code comment tries to tell us. Yes, when we
make changes, we must take care that the changes are correct and update
the relevant places.

It seems long obsolete. Drop it.

This partly reverts commit d7dd9aefd8 ('placed a comment about not
changing CArray until we have').
2024-01-31 13:34:37 +01:00
Thomas Haller
970325b92d gobject/tests: add test checking that GWeakRef is cleared in GWeakNotify
g_object_weak_ref() documentation refers to GWeakRef as a thread-safe
replacement. However, it's not clear to me how GWeakRef is a replacement
for a callback. I think it means that you combine g_object_weak_ref()
with GWeakRef, to both hold a (thread-safe) weak reference and get a
notification on destruction.

Add a test that GWeakRef is already cleared inside the GWeakNotify
callback.
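
A minimal sketch of the behaviour being tested (names assumed):

    static void
    on_weak_notify_sketch (gpointer data, GObject *where_the_object_was)
    {
      GWeakRef *weak_ref = data;

      /* by the time the GWeakNotify fires, the GWeakRef is already cleared */
      g_assert_null (g_weak_ref_get (weak_ref));
    }

    /* usage sketch:
     *   g_weak_ref_init (&weak_ref, obj);
     *   g_object_weak_ref (obj, on_weak_notify_sketch, &weak_ref);
     *   g_object_unref (obj);
     */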
2024-01-31 12:42:02 +01:00
Philip Withnall
a4b9b41afc tests: Fix a double-free in param test
This isn’t normally hit because it’s in a test which is disabled unless
run with `-m thorough`.

The data is owned by `g_test_add_data_func_full()` until the end of the
process.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>
2024-01-18 17:22:09 +00:00
Philip Withnall
d738dcc76f tests: Fix an expected message in param test
This isn’t normally hit because it’s in a test which is disabled unless
run with `-m thorough`.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>
2024-01-18 17:22:09 +00:00
Philip Withnall
5d38f3ebd4 tests: Fix a double-unref in params tests
This isn’t normally hit because it’s in a test which is disabled unless
run with `-m thorough`.

The `GParamSpec` is initially floating, but its floating ref is sunk by
`g_object_interface_install_property()` (regardless of whether that call
succeeds or aborts). The behaviour of
`g_object_interface_install_property()` in this respect may have changed
since the test was written.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>
2024-01-18 17:22:09 +00:00
Philip Withnall
f70b3a91d3 Merge branch 'th/g-pointer-bit-lock-ext' into 'main'
[th/g-pointer-bit-lock-ext] glib: add g_pointer_bit_unlock_and_set() and g_pointer_bit_lock_mask_ptr()

See merge request GNOME/glib!3802
2024-01-16 13:43:54 +00:00
Thomas Haller
5609370de9 gbitlock: add g_pointer_bit_lock_and_get()
Usually, after g_pointer_bit_lock() we want to read the pointer that we
have. In many cases, when we g_pointer_bit_lock() a pointer, we can
access it afterwards without atomics, as nobody is going to modify the
pointer then.

However, gdataset also supports g_datalist_set_flags(), so the pointer
may change at any time and we must always use atomics to read it. For
that reason, g_datalist_lock_and_get() does an atomic read right after
g_pointer_bit_lock().

g_pointer_bit_lock() internally already knows the value that it just
set. Add g_pointer_bit_lock_and_get(), which returns that value to the
caller.

Aside from saving a second atomic get in certain scenarios, the
returned value is also atomically the one that we just set.
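
A usage sketch, assuming the signature this commit adds (the locked
value is returned via an out parameter):

    guintptr ptr_value;

    /* lock bit 0 and atomically get back the value that is stored,
     * saving a separate g_atomic_pointer_get() */
    g_pointer_bit_lock_and_get (&lockptr, 0, &ptr_value);
    /* ... use ptr_value ... */
    g_pointer_bit_unlock (&lockptr, 0);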
2024-01-16 12:50:31 +01:00
Thomas Haller
c4c76d77cb gbitlock: add g_pointer_bit_unlock_and_set() and g_pointer_bit_lock_mask_ptr()
The existing g_pointer_bit_lock() and g_pointer_bit_unlock() API
requires the user to understand/reimplement how bits of the pointer get
mangled. Add helper functions for that.

The useful thing to do with the g_pointer_bit_lock() API is to get/set
pointers while holding the lock. For example, to set the pointer a user
can do:

    g_pointer_bit_lock (&lockptr, lock_bit);
    ptr2 = set_bit_pointer_as_if_locked(ptr, lock_bit);
    g_atomic_pointer_set (&lockptr, ptr2);
    g_pointer_bit_unlock (&lockptr, lock_bit);

That has several problems:

- it requires one extra atomic operation (3 instead of 2, in the
  non-contended case).

- the first g_atomic_pointer_set() already wakes blocked threads,
  which find themselves still locked and need to go back to
  sleep.

- the user needs to re-implement how bit-locking mangles the pointer so
  that it looks as if it were locked.

- while the user tries to re-implement what GLib does to mangle the
  pointer for bit-locking, there is no guarantee that they get
  it right.

Now we can do instead:

  g_pointer_bit_lock(&lockptr, lock_bit);
  g_pointer_bit_unlock_and_set(&lockptr, lock_bit, ptr, 0);

This will also emit a critical if @ptr has the locked bit set.
g_pointer_bit_lock() really only works with pointers that have a certain
alignment, and the lowest bits unset. Otherwise, there is no space to
encode both the locking and all pointer values. The new assertion helps
to catch such bugs.

Also, g_pointer_bit_lock_mask_ptr() is added, so we can do:

  g_pointer_bit_lock(&lockptr, lock_bit);
  /* set a pointer separately, when g_pointer_bit_unlock_and_set() is unsuitable. */
  g_atomic_pointer_set(&lockptr, g_pointer_bit_lock_mask_ptr(ptr, lock_bit, TRUE, 0, NULL));
  ...
  g_pointer_bit_unlock(&lockptr, lock_bit);

and:

  g_pointer_bit_lock(&lockptr, lock_bit);
  /* read the real pointer after getting the lock. */
  ptr = g_pointer_bit_lock_mask_ptr(lockptr, lock_bit, FALSE, 0, NULL);
  ...
  g_pointer_bit_unlock(&lockptr, lock_bit);
2024-01-16 12:16:42 +01:00
Philip Withnall
571636538b Merge branch 'optional-flags' into 'main'
gobject: define HAVE_OPTIONAL_FLAGS for sizeof(void*) > 8

See merge request GNOME/glib!3813
2024-01-16 09:02:22 +00:00
Alex Richardson
dfa0d60db2 genums: use g_once_init_enter_pointer for GType initializers
GType is either an integer or a pointer, so we have to use the _pointer
version here to support architectures such as Morello.

These two lines were missed in 5ecd3cbe52; fixing them allows the
gobject/enums test to pass on CheriBSD (Morello).

Helps: https://gitlab.gnome.org/GNOME/glib/-/issues/2842
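
A sketch of the fixed pattern (the enum and type name here are made up):

    static GType
    my_enum_get_type_sketch (void)
    {
      static GType gtype = 0;

      /* GType may be pointer-sized (a capability on Morello), so the
       * _pointer variants of the once-init API must be used. */
      if (g_once_init_enter_pointer (&gtype))
        {
          static const GEnumValue values[] = {
            { 0, "MY_ENUM_ZERO", "zero" },
            { 0, NULL, NULL },
          };

          g_once_init_leave_pointer (&gtype,
                                     g_enum_register_static ("MyEnumSketch", values));
        }
      return gtype;
    }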
2024-01-15 17:32:11 -08:00
Alex Richardson
6e51c2e150 gobject: define HAVE_OPTIONAL_FLAGS for sizeof(void*) > 8
The padding space for this extra field is also available if void* is
larger than 8 bytes.
2024-01-15 16:27:06 -08:00
Alex Richardson
9769cd0d24 gsignal.c: drop an optimization that is undefined behaviour
Comparing reallocated pointers is UB, but this happens to work for now
on most compilers. However, on CHERI systems, if g_bsearch_array_insert()
reallocs in-place, then the new `hlbsa` pointer may have larger bounds
than `o`, and using the old pointer with the smaller bounds can result
in a bounds error. I don't think this code is performance-critical, so
removing the optimization and inserting unconditionally should be fine.

Currently, this realloc() UB rarely causes issues, but newer versions of
GCC with _FORTIFY_SOURCE=3 might also be able to observe the valid
memory range (assuming sufficient inlining).
See https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
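
For illustration, the kind of pattern being removed (hypothetical, not
the exact gsignal.c code):

    /* after g_realloc(), the old pointer value is indeterminate; even
     * just comparing it is UB and breaks CHERI bounds checking */
    gpointer new_buf = g_realloc (buf, new_size);
    if (new_buf == buf) /* undefined: `buf` was invalidated by the realloc */
      {
        /* "reallocated in-place" fast path, dropped by this commit */
      }
    buf = new_buf;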
2024-01-05 16:46:29 -08:00
Thomas Haller
e05623001b gobject: fix deadlock with g_weak_ref_set()
In general, we must not call out to external, unknown code while holding
a lock. That is prone to deadlock.

g_object_ref() can emit a toggle notification. In g_weak_ref_set(), we
must not do that while holding the lock.
2024-01-04 16:42:59 +01:00
Thomas Haller
b397ef2122 gobject: use per-object bit-lock instead of global mutex for closure array 2024-01-04 16:32:09 +01:00
Thomas Haller
ab202a2c1c gobject: use per-object bit-lock instead of global mutex for toggle refs 2024-01-04 16:32:09 +01:00
Thomas Haller
cf044ba7ad gobject: use per-object bit-lock instead of global mutex for notify-queue 2024-01-04 16:32:09 +01:00
Thomas Haller
c66880e46f gobject: use per-object bit-lock instead of global mutex for weak-refs 2024-01-04 16:32:09 +01:00
Thomas Haller
bfb829231f gobject: never access optional flags without atomic
The optional flags should be used for bit locks. That means we must
only use atomic operations when updating the flags.

Having a variant of _X methods that update the flags without
locks (_X) means that we must take care not to take bit locks
during construction.

That is hard to get right. There is so much happening during object
construction that it's unclear when it's really safe to access the
flags without atomics. Don't do this.
2024-01-04 16:32:09 +01:00
Thomas Haller
777606d9c4 gobject: add private data for GObject and use it for optional flags
Add a GObjectPrivate struct and let GObject have private data.

On architectures where we have an alignment gap in the GObject struct
(64 bit), we use the gap for "optional_flags". On architectures where we
don't have that gap, use the private data for those optional flags.

For now, private data is only added for those optional flags (and not on
architectures where the flags fit inside GObject). In the future, we
may add additional fields there, and always add the private struct.

The main purpose will be to replace all the global locks with per-object
locks, and to make "optional_flags" also available on 32bit.
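
A hedged sketch of the idea (names assumed, not the committed code):

    /* private struct attached to GObject via the GType private-data
     * mechanism, holding the flags where no alignment gap exists */
    typedef struct
    {
      guint optional_flags; /* accessed atomically */
    } GObjectPrivateSketch;

    /* registered once during type initialization, roughly like: */
    gint private_offset =
        g_type_add_instance_private (G_TYPE_OBJECT,
                                     sizeof (GObjectPrivateSketch));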
2024-01-04 16:32:09 +01:00
Thomas Haller
6c389738d3 gobject/tests/reference: add tests for concurrent g_weak_ref_set()
GWeakRef methods are thread-safe. Add a test for calling g_weak_ref_set()
concurrently.
2024-01-04 16:32:09 +01:00
Thomas Haller
787861761d gobject/tests/reference: add test for deadlock related to GWeakRef and toggle reference
GWeakRef calls g_object_ref() while holding a read lock.
g_object_ref() can emit a toggle notification.

If you then set up a GWeakRef inside the toggle notification
(to the same or another object), the code will try to get
a write lock. Deadlock will happen.

Add a test to "show" that (the breaking part is commented out).
It will be fixed next.
2024-01-04 16:17:13 +01:00
Philip Withnall
dd2b6d7e3f Merge branch 'th/g-object-unref-ref' into 'main'
[th/g-object-unref-ref] fix race in g_object_unref()

Closes #3064

See merge request GNOME/glib!3769
2024-01-03 23:39:24 +00:00
Philip Withnall
bf7ccbde9e tests: Fix a minor leak in the new GParamSpecPool test
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
2024-01-03 12:04:16 +00:00
Tanmay
dbbd9c60ba Add boxed GType for GRand
Signed-off-by: Tanmay Patil <tanmaynpatil105@gmail.com>
2023-12-31 09:09:48 +00:00
Thomas Haller
35e7fec16c stp: add code comment to gobject.object_unref about object being dangling 2023-12-30 00:20:17 +01:00
Thomas Haller
f4aa119157 gobject: really fix race with GWeakRef during g_object_unref()
To do it really race-free, we must take a reader lock when
decrementing the ref count from 1 to 0.
2023-12-30 00:20:17 +01:00
Thomas Haller
2952cfd7a7 gobject: drop clearing quark_weak_locations from g_object_real_dispose()
In g_object_unref(), we already call _object_unref_clear_weak_locations()
before and after dispose(). At those places it is necessary.

Calling it a third time during g_object_real_dispose() seems not useful,
has unnecessary overhead, and is actually undesirable.

In particular, because g_object_real_dispose() is the implementation of
the virtual function GObject.dispose(). For subclasses that override dispose(),
it's not well defined at which point they should chain up to the parent
implementation (for dispose(), I'd argue that they usually chain up at
the end of their own code). If they chain up at the end, this has no
effect.

This only really matters if you try to register a GWeakRef during
dispose and/or resurrect the object, as in:

        static void dispose(GObject *object)
        {
            g_weak_ref_set(&global_weak_ref, object);
            global_ref = g_object_ref(object);
            G_OBJECT_CLASS (parent_class)->dispose (object);
        }

Here the object was resurrected, but g_object_real_dispose() would clear
the weak ref. That is not desirable, nor does it make sense.

Instead, the virtual function dispose() is called from two places:
g_object_unref() and g_object_run_dispose(). In both cases, it is
ensured that weak locations are cleared *after* dispatching the virtual
function. Don't do it somewhere in the middle from
g_object_real_dispose().
2023-12-30 00:20:17 +01:00
Thomas Haller
fe0347bc64 gobject: fix race clearing weak locations for resurrection during g_object_unref()
dispose() can resurrect an object and/or register a weak-ref. After
returning from dispose(), we must check again. And we must do so in a
race-free manner, where we check that we have no more weak locations
and that the ref-count is one.

In fact, if _object_unref_clear_weak_locations() determines that the
ref-count is 1, it must also decrement the ref-count to zero while
holding the weak_locations_lock. This prevents g_weak_ref_set() from
registering a weak pointer after the reference count dropped to zero.

Also add an assertion to g_weak_ref_set() that the object is still
alive. The assertion is useful for finding bugs, but the change really
makes it impossible for a wrongly used g_weak_ref_set() to still
resurrect the object after finalization starts.

The final

    g_datalist_id_set_data (&object->qdata, quark_weak_locations, NULL);

during finalization is no longer necessary and dropped.
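
A sketch of the race-free drop described above (the lock name is from
this commit; the surrounding logic is assumed):

    /* decide "last reference" and drop to zero under the same lock that
     * g_weak_ref_set() takes, so no new weak location can appear */
    g_rw_lock_writer_lock (&weak_locations_lock);
    if (!g_atomic_int_compare_and_exchange ((int *) &object->ref_count, 1, 0))
      {
        /* dispose() resurrected the object: don't finalize */
        g_rw_lock_writer_unlock (&weak_locations_lock);
        return FALSE;
      }
    /* ref-count is 0 while the lock is held: clear weak locations now */
    g_rw_lock_writer_unlock (&weak_locations_lock);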
2023-12-28 09:36:56 +01:00
Thomas Haller
aaf4ccb901 gobject: avoid calling g_weak_ref_set() when initializing GWeakRef to NULL
If object is NULL, g_weak_ref_set() really does nothing. However, it
will pointlessly take a lock before deciding to do nothing. Avoid that.
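
A sketch of the change, matching g_weak_ref_init()'s documented
behaviour (the exact committed code is assumed):

    static void
    g_weak_ref_init_sketch (GWeakRef *weak_ref, gpointer object)
    {
      weak_ref->priv.p = NULL;
      if (object != NULL) /* skip the lock entirely for the NULL case */
        g_weak_ref_set (weak_ref, object);
    }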
2023-12-28 09:36:56 +01:00
Thomas Haller
de5e26e2ed gobject: remove unused quark_weak_refs from gobject
We never set any data for quark_weak_refs. It's unused, drop it.

Also, raise a g_critical() if the GWeakRef is unexpectedly not
registered in the object. That really shouldn't happen.
2023-12-28 09:36:56 +01:00
Thomas Haller
9ae43169cf gobject: fix race in toggle ref during g_object_ref()
Previously:

   1. old_val = atomic_add(&object->ref_count);
   2. if (old_val == 1 && OBJECT_HAS_TOGGLE_REF (object)) { toggle_notify() }

As old_val was 1, you might think that no other thread can have a valid
reference to the object. However, that's not the case. For one, GWeakRef
can be used to create another strong reference. More simply, the single
reference can be shared between multiple threads (as long as the code
takes care that the object lives long enough).

That means another thread can easily add and drop references (including
toggle references), all between steps 1 and 2.

A race here might be hard to hit, and the effect might not be obviously
bad. However, consider that old_val is 1 due to a normal reference, and
another thread adds a toggle ref between steps 1 and 2. Then we would
notify a toggle from 1->2, although a newly added toggle ref is expected
to always start with a normal and a toggle reference. The first toggle
notification is expected to notify about the loss of other references,
not about gaining a second one.

To handle this properly, when we increase the reference count from 1 to
2, we must do so under a lock and check for the toggle notification.

As we now correctly track the toggle behavior, we can also assert in
toggle_refs_get_notify_unlocked() that n_toggle_refs agrees with the
number of references, that is, that the user always matched
g_object_add_toggle_ref() with g_object_remove_toggle_ref().

The downside is that there is now a case (when increasing the
reference count from 1 to 2) where we need to take the global lock.
That performance problem should be addressed by using per-object locks
instead of a global lock.
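
A hedged sketch of the guarded 1->2 transition (helper names assumed; at
this point the lock is still the global one):

    /* the 1->2 increment happens under the toggle-ref lock, so a
     * concurrently added toggle ref cannot be missed or double-notified */
    G_LOCK (toggle_refs_mutex);
    if (g_atomic_int_compare_and_exchange ((int *) &object->ref_count, 1, 2)
        && OBJECT_HAS_TOGGLE_REF (object))
      toggle_notify = toggle_refs_get_notify_unlocked (object, &toggle_data);
    G_UNLOCK (toggle_refs_mutex);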
2023-12-28 09:36:56 +01:00
Thomas Haller
408dc69186 gobject: fix race in g_object_unref() related to toggle references
The previous g_object_unref() was racy. There were three places where we
decremented the ref count but still accessed the object afterwards
(while assuming that somebody else might still hold a reference). For
example:

      if (!g_atomic_int_compare_and_exchange_full ((int *) &object->ref_count,
                                                   old_ref, old_ref - 1,
                                                   &old_ref))
        continue;

      TRACE (GOBJECT_OBJECT_UNREF (object, G_TYPE_FROM_INSTANCE (object), old_ref));

      /* if we went from 2->1 we need to notify toggle refs if any */
      if (old_ref == 2 && OBJECT_HAS_TOGGLE_REF (object))
        {
          /* The last ref being held in this case is owned by the toggle_ref */
          toggle_refs_notify (object, TRUE);
        }

After we decrement the reference count (and give up our reference), we
are only allowed to access the object if we know we held the only
possible reference to it. In particular, if old_ref is larger than 1,
then somebody else holds references and races against destroying the
object. The object might already be a dangling pointer.

This is slightly complicated due to toggle references and clearing of
weak-locations.

For toggle references, we must take a lock on the mutex. Luckily, that
is only necessary when the current reference count is exactly 2.

Note that we emit the TRACE() after the ref count was already decreased.
If another thread unrefs the object, inside the TRACE() we might have a
dangling pointer. That would only be fixable by emitting the TRACE()
before the actual unref (which has its own problems). This problem
already existed previously.

The change to the test is necessary and correct. Before this patch,
g_object_unref() would call dispose() and decrement the reference count
right after.
In the test case at gobject/tests/reference.c:1108, the reference count
after dispose and decrement is 1. Then it thaws the notification queue,
which emits a property-changed signal. The test then proceeds to
reference the object again and notify the toggle reference.
Previously, the toggle reference was notified 3 times.
After this change, the property-changed signal is emitted before
decreasing the reference count. Taking a reference then does not cause
an additional toggle on+off, so in total only one toggle happens.
That accounts for the change in the test. The new behavior is
correct.
2023-12-28 09:36:56 +01:00
Thomas Haller
7292726931 gobject: reformat g_object_ref()/g_object_unref() with clang-format
The indentation level in g_object_unref() is wrong. Fix it by
reformatting the function with clang-format. That makes follow-up
patches easier to keep in a consistent style.

No other changes.
2023-12-28 09:36:56 +01:00
Thomas Haller
b2a68d6ec4 gobject: factor out toggle_refs_get_notify_unlocked() function
This is the part of toggle_refs_notify() for which we usually hold the
lock. It will be used next.
2023-12-28 09:32:49 +01:00
Thomas Haller
2276099236 gobject: remove data when last toggle ref is removed
Toggle refs are seldom used, and when they are, it mostly makes sense
that there is only one of them. Thus, when removing the last toggle ref,
also remove the associated data.
2023-12-28 08:39:41 +01:00
Thomas Haller
4da4220c8b gobject: remove unnecessary "object" field from ToggleRefStack
We always have the object pointer that we care about at hand.
Tracking it in ToggleRefStack as well is redundant and unnecessary.

Drop the field.
2023-12-28 08:39:41 +01:00
Philip Withnall
ec3fb9a48c Merge branch 'glib-gir-sources' into 'main'
gobject: Make GLib-2.0 gir build depend on GObject dependency

See merge request GNOME/glib!3772
2023-12-22 14:45:37 +00:00
Gaël Bonithon
55e2d9f6a7 gtypemodule: Add assertions in finalize() 2023-12-21 18:25:36 +01:00
Thomas Haller
42bd9627a5 gobject: use g_malloc() allocator for GObjectNotifyQueue struct
The GSlice allocator is deprecated. Use g_new0() instead.
2023-12-21 07:26:50 +01:00
Thomas Haller
630d8da211 gobject: remove unused "conditional" argument from g_object_notify_queue_freeze() 2023-12-21 07:26:50 +01:00
Thomas Haller
94c735a2aa gobject: don't freeze to queue notify event in g_object_notify_by_spec_internal()
Previously:

- if the object is currently not frozen, we called
  g_object_notify_queue_freeze() once and afterwards dispatched the
  event directly. This is probably the common case, and required taking
  the notify_lock once.

- if the object is currently frozen, we called
  g_object_notify_queue_freeze(), g_object_notify_queue_add() and
  g_object_notify_queue_thaw().
  This required taking the notify_lock three times.

- if the object is currently not frozen and in_init, then we called
  g_object_notify_queue_freeze(), g_object_notify_queue_freeze() and
  g_object_notify_queue_add(). This also required taking
  the lock three times. There is another thaw at the end of
  object initialization.

That was because we first called g_object_notify_queue_freeze() to see
whether we are frozen. And depending on that, queued the event (and
thawed again).

Instead, g_object_notify_queue_add() can do the check and the queuing
in one step. There is no need to call freeze() to (conditionally)
enqueue a notification. Now only one lock is taken in all cases.

Also, g_object_notify_queue_freeze() and g_object_notify_queue_thaw()
both call g_datalist_id_get_data() (which also takes a bit lock). As the
thaw is no longer necessary, the second lock is also saved.
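
A sketch of the single-lock check-and-queue (simplified; names and the
queue layout are assumed):

    /* returns TRUE if the notification was queued (object frozen),
     * FALSE if the caller should dispatch it directly */
    static gboolean
    notify_queue_add_sketch (GObject *object, GParamSpec *pspec)
    {
      gboolean queued = FALSE;
      GObjectNotifyQueue *nqueue;

      G_LOCK (notify_lock);
      nqueue = g_datalist_id_get_data (&object->qdata, quark_notify_queue);
      if (nqueue != NULL)
        {
          /* frozen: remember the pspec for dispatch at thaw time */
          if (!g_slist_find (nqueue->pspecs, pspec))
            nqueue->pspecs = g_slist_prepend (nqueue->pspecs, pspec);
          queued = TRUE;
        }
      G_UNLOCK (notify_lock);

      return queued;
    }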
2023-12-21 07:26:50 +01:00