Commit Graph

29649 Commits

Simon McVittie
5e8f053d33 tests: Exercise gdbus-codegen --interface-info-header with empty input
Signed-off-by: Simon McVittie <smcv@collabora.com>
2024-02-06 13:55:35 +00:00
Simon McVittie
02a3417ac4 tests: Exercise gdbus-codegen --interface-info-body with empty input
Signed-off-by: Simon McVittie <smcv@collabora.com>
2024-02-06 13:55:33 +00:00
Simon McVittie
1ba8386886 codegen: Document --output -
Signed-off-by: Simon McVittie <smcv@collabora.com>
2024-02-06 11:53:06 +00:00
Simon McVittie
6a1fdb8145 codegen: Use - instead of stdout for output to stdout
In command-line tools, ordinary filenames normally do not have
special-cased meanings, so commit 3ef742eb "Don't skip dbus-codegen tests
on Win32" was a command-line API break: in the unlikely event that a
user wanted to write to a file named exactly `stdout`, this would have
been an incompatible change.

There is a conventional pseudo-filename to represent standard output,
which is `-` (for example `cat -` is a no-op filter). Adding support
for this is technically also a command-line API break (in the very
unlikely event that a user wants to write to a file named exactly `-`,
they would now have to write it as `./-`), but filenames starting with
a dash often require special treatment anyway, so this probably will not
come as a surprise to anyone.

When the output filename is `-` we don't want to use `#ifdef _____` as
a header guard, so special-case it as `__STDOUT__` as before.
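
For illustration, a minimal C sketch of the convention described above (gdbus-codegen itself is a Python program; the function names here are illustrative, not the tool's actual code):

```c
#include <stdio.h>
#include <string.h>

/* Treat "-" as the conventional pseudo-filename for standard output. */
static FILE *
open_output (const char *filename)
{
  if (strcmp (filename, "-") == 0)
    return stdout;
  return fopen (filename, "w");
}

/* A guard derived from "-" would not be a valid identifier, so keep
 * special-casing it as __STDOUT__. */
static const char *
header_guard (const char *filename)
{
  if (strcmp (filename, "-") == 0)
    return "__STDOUT__";
  return "__GENERATED_H__"; /* placeholder: normally derived from the filename */
}
```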

Signed-off-by: Simon McVittie <smcv@collabora.com>
2024-02-06 11:53:06 +00:00
Philip Withnall
25e68476fa Merge branch 'garray_maxuint' into 'main'
garray: improve boundary checks

Closes #3240

See merge request GNOME/glib!3882
2024-02-05 18:34:14 +00:00
Philip Withnall
59a818c28b Merge branch '3243-get-type-info-type-type-type' into 'main'
girepository: Rename gi_arg_info_load_type() to gi_arg_info_load_type_info()

Closes #3243

See merge request GNOME/glib!3878
2024-02-05 17:46:24 +00:00
Tobias Stoeckmann
766bc75917 garray: Missing precondition checks
The function arguments index_ and length could sum to a value larger than
G_MAXUINT, possibly leading to out-of-bounds accesses in the
array_remove_range() functions.
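
For illustration, a hedged sketch of this kind of precondition check (not the exact GLib code): the comparison is arranged so that `index_ + length` is never computed and therefore cannot wrap around G_MAXUINT.

```c
#include <glib.h>

/* Illustrative only: validate a removal range against the array length
 * without computing index_ + length, which could overflow a guint. */
static gboolean
remove_range_args_valid (guint array_len, guint index_, guint length)
{
  if (index_ > array_len)
    return FALSE;

  /* Equivalent to index_ + length <= array_len, but cannot overflow. */
  return length <= array_len - index_;
}
```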

Signed-off-by: Tobias Stoeckmann <tobias@stoeckmann.org>

Fixes: #3240
2024-02-05 18:09:59 +01:00
Philip Withnall
f2814e36ef Merge branch 'th/gdataset-comment' into 'main'
[th/gdataset-comment] gdataset: add code comment to g_datalist_get_data()

See merge request GNOME/glib!3879
2024-02-05 16:52:00 +00:00
Thomas Haller
10d4351ec9 gdataset: add code comment to g_datalist_get_data()
It's not obvious why we wouldn't use g_quark_try_string(). Add a code
comment noting that this is intentional, with a reference explaining how
to find out more.

Also, fix a typo in another code comment.
2024-02-05 16:49:10 +01:00
Philip Withnall
c8132fdf78 girepository: Rename gi_arg_info_load_type() to gi_arg_info_load_type_info()
So that it matches `gi_arg_info_get_type_info()`. We can’t use
`gi_arg_info_get_type()` because that collides with the `GType` getter
for the type.

Spotted by Philip Chimento.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Fixes: #3243
2024-02-05 15:13:46 +00:00
Philip Withnall
bcc22d48b0 Merge branch 'th/datalist-shrink' into 'main'
[th/datalist-shrink] shrink the internal buffer of `GData`

See merge request GNOME/glib!3873
2024-02-05 15:10:33 +00:00
Thomas Haller
29314690c7 gdatalist: shrink the buffer when it becomes 75% empty
The amount of used memory should stay in relation to the number of
entries we have. If we delete most (75%) of the entries, let's also
reallocate the buffer down to 50% of its size.

datalist_append() now starts with 2 elements. This works together with
the shrinking. If we only have one entry left, we will shrink the buffer
back to size 2. In general, d->alloc is always a power of two (unless it
overflows after G_MAXUINT32/2, which we assume will never happen).

The previous buffer growth strategy of never shrinking is not
necessarily bad. It has the advantage of not requiring any checks for
shrinking, and it works well in cases where the amount of data actually
does not shrink (as we'd often expect).

It's also questionable how much a realloc() to a smaller size really
gains; whether the allocator does something useful with it depends on
the implementation.

Anyway, this patch introduces shrinking. The check for whether to shrink
changes from `if (d->len == 0)` to `if (d->len <= d->alloc / 4u)`, which
is cheap even if most of the time we don't need to shrink. For
most cases, that's the only change that this patch brings. However, once
we find that 75% of the buffer is empty, calling realloc() seems a
sensible thing to do.
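
A rough sketch of the growth/shrink policy described above (type and field names are illustrative, not GLib's internal ones):

```c
#include <glib.h>

typedef struct
{
  guint32 len;     /* number of used entries */
  guint32 alloc;   /* allocated entries; a power of two, minimum 2 */
  gpointer *data;  /* illustrative payload */
} DatalistSketch;

static void
datalist_shrink_sketch (DatalistSketch *d)
{
  /* Shrink once 75% of the buffer is empty (len <= alloc / 4), reallocating
   * down to 50% of the current size, but never below 2 entries. */
  if (d->alloc > 2u && d->len <= d->alloc / 4u)
    {
      d->alloc /= 2u;
      d->data = g_realloc (d->data, d->alloc * sizeof (gpointer));
    }
}
```
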
2024-02-05 13:34:31 +01:00
Philip Withnall
1757365af3 Merge branch 'dbus-codegen-tests' into 'main'
Don't skip dbus-codegen tests on Win32

Evolved from https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3857

See merge request GNOME/glib!3874
2024-02-05 10:18:28 +00:00
Philip Withnall
8109e91236 Merge branch 'moskalets/fix-gresource-leak' into 'main'
gresources: fix memory leak from libelf

Closes #3242

See merge request GNOME/glib!3875
2024-02-05 10:07:10 +00:00
Philip Withnall
7ff207b1cf Merge branch 'girepository-fixes' into 'main'
Various girepository fixes

See merge request GNOME/glib!3877
2024-02-05 10:01:20 +00:00
Artur S0
378997a8f9 Update Russian translation 2024-02-05 07:21:24 +00:00
Philip Chimento
3a01629955 girepository: Fix copy-paste error in type check macro
GI_IS_REGISTERED_TYPE_INFO() wasn't working because it was actually
defined to be the same as GI_IS_OBJECT_INFO().

Add some desultory type-checking assertions to the repository tests.
2024-02-04 10:16:31 -08:00
Philip Chimento
f19115213a girepository: Add type check to instance parameter
gi_repository_enumerate_versions() was missing a type check of the
instance parameter. This helps catch mistakes when porting from
girepository 1.x where the parameter was allowed to be null.
2024-02-04 09:14:51 -08:00
Maxim Moskalets
aa8ed92fba gresources: fix memory leak from libelf
Memory allocated inside libelf was leaking: the pointer to it was held only in an automatic variable and was lost when the get_elf function returned NULL in some cases.
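
The fix follows the usual libelf cleanup pattern; a hedged sketch (function and variable names are illustrative, not the actual gresource code):

```c
#include <fcntl.h>
#include <unistd.h>
#include <libelf.h>

/* Illustrative only: release the Elf handle on every early-return path,
 * not just on success. */
static Elf *
get_elf_sketch (const char *path, int *out_fd)
{
  elf_version (EV_CURRENT);   /* library initialisation, normally done once */

  int fd = open (path, O_RDONLY);
  if (fd < 0)
    return NULL;

  Elf *elf = elf_begin (fd, ELF_C_READ, NULL);
  if (elf == NULL)
    {
      close (fd);
      return NULL;
    }

  if (elf_kind (elf) != ELF_K_ELF)
    {
      elf_end (elf);   /* previously the handle was simply lost here */
      close (fd);
      return NULL;
    }

  *out_fd = fd;
  return elf;
}
```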

Closes #3242

Signed-off-by: Maxim Moskalets <Maxim.Moskalets@kaspersky.com>
2024-02-03 15:23:15 +03:00
Thomas Haller
6c0d4c884f gdatalist: rework g_data_remove_internal() to use datalist_shrink()
The main point here is to reuse datalist_remove() and datalist_shrink().
In particular, datalist_shrink() will become more interesting next, when it
actually shrinks the buffer.

Also, the previous implementation with "data_end" was confusing.
Instead, only use the index "i_data" to iterate over the data.
2024-02-02 19:56:26 +01:00
Thomas Haller
927075277c gdatalist: extract helper function for removing element
Extract helper functions datalist_remove() and datalist_shrink(). This
reduces duplicate code, but also establishes a single default way of
doing this.

In particular, datalist_shrink() might later do more aggressive
shrinking. We need to have that code in one place.
2024-02-02 19:56:26 +01:00
Thomas Haller
dbae6b3484 gdatalist: rework g_datalist_clear() to return early
g_datalist_unlock() is probably faster than g_datalist_unlock_and_set().
Move the "if (data)" check (which we had anyway) earlier, so that we can
call g_datalist_unlock() and return early.
2024-02-02 19:56:26 +01:00
Thomas Haller
759ebf3663 gdatalist: remove restriction of number of keys in g_datalist_id_remove_multiple()
If too many keys are requested, the temporary buffer is allocated
on the heap. There is no problem, in principle, with removing more than
16 keys.

Well, the problem is that GData tracks entries in a linear list, so
performance will degrade when it grows too much. That is a problem,
and users should be careful to not add unreasonably many keys. But it's
not the task of g_datalist_id_remove_multiple() to decide what is
reasonable.

This limitation was present from the beginning, in commit 0415bf9412
('Add g_datalist_id_remove_multiple'). It's no longer necessary since
commit eada6be364 ('gdataset: cleanup g_data_remove_internal()').
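
For illustration, a sketch of the stack-buffer-with-heap-fallback pattern mentioned above (illustrative names, not the actual g_datalist_id_remove_multiple() code):

```c
#include <string.h>
#include <glib.h>

#define N_STACK_KEYS 16

/* Illustrative only: use a small on-stack buffer for the common case and
 * fall back to the heap for larger key counts instead of rejecting them. */
static void
process_keys_sketch (const GQuark *keys, gsize n_keys)
{
  GQuark stack_keys[N_STACK_KEYS];
  GQuark *buf = stack_keys;

  if (n_keys > N_STACK_KEYS)
    buf = g_new (GQuark, n_keys);

  memcpy (buf, keys, n_keys * sizeof (GQuark));
  /* ... remove the entries referenced by buf ... */

  if (buf != stack_keys)
    g_free (buf);
}
```
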
2024-02-02 19:56:26 +01:00
Thomas Haller
48a1d8c695 dataset/tests: add test that adds many data entries to a queue and removes them 2024-02-02 19:27:44 +01:00
Philip Withnall
3f4e6ddcd8 Merge branch 'thorough-tests-in-ci' into 'main'
build: Add thorough test setup

See merge request GNOME/glib!3838
2024-02-02 14:33:22 +00:00
Philip Withnall
35f42d0c8c Merge branch 'alatiera/python-test' into 'main'
gio: tests: Use slightly more explicit assert functions

See merge request GNOME/glib!3872
2024-02-02 14:29:26 +00:00
Jordan Petridis
9c65e9ba2d gio: tests: Use slightly more explicit assert functions
Found by using teyit [1] on the code

[1] https://github.com/isidentical/teyit
2024-02-02 16:15:35 +02:00
Philip Withnall
b65657f068 Merge branch 'th/optimize-weak-ref-list' into 'main'
[th/optimize-weak-ref-list] rework GObject's `WeakRefData` to track references in an array instead of GSList

See merge request GNOME/glib!3869
2024-02-02 14:01:10 +00:00
Thomas Haller
d8e4f39aa8 gobject: track GWeakRef in object's WeakRefData with an array
GSList doesn't seem the best choice here. Its benefits are that it's
relatively convenient to use (albeit not very efficient) and that an
empty list requires only the pointer to the list's head.

But for a non-empty list, we need to allocate GSList elements. We can do
better, at the cost of writing more code.

I think it's worth optimizing GObject, at the expense of somewhat more
complicated code. The complicated code is still entirely self-contained,
so unless you review WeakRefData usage, it doesn't need to bother you.
Note that this can easily be measured to be a bit faster, but I think the
more important part is to save some allocations. Objects are often
long-lived, and the GWeakRef will be tracked for a long time, so it is
worth optimizing the memory usage of that.

- if the list only contains one weak reference, it's interned/embedded in
  WeakRefData.list.one. Otherwise, an array is allocated and tracked
  at WeakRefData.list.many.

- when the buffer grows, we double the size. When the buffer shrinks,
  we reallocate to 50% once 75% is empty. When the buffer shrinks to
  length 1, we free it (so that "list.one" is always used with a length
  of 1).
  That means, in the worst case, we waste 75% of the allocated buffer.
  This is a deliberate choice, made in the hope that future weak
  references will be registered and that this is a suitable strategy.

- on architectures like x86_64, this does not increase the size of
  WeakRefData.

Also, the number of weak references is now limited to 65535, and an
assertion now fails when you try to register more than that. But note that
the internal tracking just uses a linear search, so you really don't
want to register thousands of weak references on an object. If you do
that, the current implementation is not suitable anyway and you must
rethink your approach. Nor does it make sense to optimize the
implementation for such a use case. Instead, the implementation is
optimized for a few (one!) weak reference per object.
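
A hedged sketch of the data layout described above (names and field semantics are illustrative, not the exact GObject internals):

```c
#include <glib.h>

/* A single weak reference is embedded directly in "one"; more than one
 * spills into a heap-allocated array that doubles when it grows and is
 * reallocated to 50% once 75% of it is empty. */
typedef struct
{
  guint16 len;        /* number of tracked GWeakRef, limited to 65535 */
  guint16 alloc;      /* 0 while the embedded "one" slot is in use */
  union
  {
    gpointer  one;    /* used when len <= 1 */
    gpointer *many;   /* used when len > 1 */
  } list;
} WeakRefListSketch;

static gpointer *
weak_ref_list_entries (WeakRefListSketch *l)
{
  /* Callers do a plain linear search over this small array. */
  return l->alloc == 0 ? &l->list.one : l->list.many;
}
```
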
2024-02-02 14:49:09 +01:00
Thomas Haller
637c2a08ce gobject: combine ref_count/lock_field in WeakRefData
We can safely combine this, and use bit 30 of the ref-count for locking.

This still leaves 2^30-1 for the ref-count, which is more than enough,
because these references are only taken for a short time in
g_weak_ref_get() and g_weak_ref_set(). Note that one thread can take at
most one reference at a time, so the ref-count will always stay a small
number.

Also note that we obviously only take the bit lock while also
holding a reference. That means that when weak_ref_data_unref() decreases
the ref-count to zero, the lock bit is unlocked as well.

The reason to do this is to free up some space in WeakRefData. Note that
(on x86_64) this doesn't actually make the struct smaller.  It's
probably not reasonably possible to make WeakRefData smaller than it
already is (on x86_64). However, by combining the fields we have some
space for reuse without increasing the struct size. That space will be
used next.
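
A hedged sketch of the combined field (illustrative, not GLib's exact code): one gint carries the ref-count in its low bits and a g_bit_lock() lock in bit 30.

```c
#include <glib.h>

typedef struct
{
  gint atomic_field;   /* ref-count in bits 0..29, lock in bit 30 */
} WeakRefDataSketch;

#define WRD_LOCK_BIT 30

static void
wrd_lock (WeakRefDataSketch *wrd)
{
  g_bit_lock (&wrd->atomic_field, WRD_LOCK_BIT);
}

static void
wrd_unlock (WeakRefDataSketch *wrd)
{
  g_bit_unlock (&wrd->atomic_field, WRD_LOCK_BIT);
}

static void
wrd_ref (WeakRefDataSketch *wrd)
{
  /* Adding 1 only touches the low bits as long as the count stays well
   * below 2^30 - 1, so it never disturbs the lock bit. */
  g_atomic_int_inc (&wrd->atomic_field);
}
```
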
2024-02-02 14:49:09 +01:00
Thomas Haller
824c4da44b gobject/tests: add test that creates a large number of weak references
The implementation of GWeakRef tracks weak references in a way that
requires linear search. That is probably best for an expected low
number of entries (e.g. compared to the overhead of having a hash
table). However, it means that if you create thousands of weak
references, performance starts to degrade.

Add a test that creates 64k weak references. Just to see how it goes.
2024-02-02 14:49:09 +01:00
Philip Withnall
1c6db4c8b4 Merge branch 'alatiera/docs-fix' into 'main'
docs: Fix include path for the build

See merge request GNOME/glib!3871
2024-02-02 11:52:51 +00:00
Jordan Petridis
e0b7ab81cf docs: Fix include path for the build
f75221c7ea moved the introspection
folder around, but we also need to adjust the relative path so that
the documentation keeps building.
2024-02-02 12:54:07 +02:00
Philip Withnall
53ded42fbb Merge branch 'revert-untested-codegen-changes' into 'main'
Revert "Don't skip dbus-codegen tests on Win32"

See merge request GNOME/glib!3870
2024-02-02 10:22:02 +00:00
Philip Withnall
5744f55c11 Revert "Don't skip dbus-codegen tests on Win32"
This reverts commit fbdc9a2d03.

It was not submitted through a merge request and broke CI. Reverting it
immediately to unbreak CI and hence the rest of the development
pipeline. The changes can be re-submitted as a merge request so they’re
properly tested in CI before being merged.

See https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3857#note_1994336
2024-02-02 10:01:24 +00:00
John Ralls
fbdc9a2d03 Don't skip dbus-codegen tests on Win32
And coincidentally on Darwin either.
2024-02-01 15:17:26 -08:00
Philip Withnall
8afefe963d Merge branch '3238-hurd-configure-warnings' into 'main'
ci: Temporarily disable --fatal-meson-warnings on Hurd CI

Closes #3238

See merge request GNOME/glib!3863
2024-02-01 10:53:37 +00:00
Philip Withnall
c6ae56a9d6 Merge branch '3217-gir-stack' into 'main'
girepository: Expose GITypeInfo and GIArgInfo as stack allocatable

Closes #3217

See merge request GNOME/glib!3867
2024-02-01 10:52:13 +00:00
Philip Withnall
bcfb896fed Merge branch 'mcatanzaro/main-context-tutorial' into 'main'
Link to the main context tutorial from the main loop docs

See merge request GNOME/glib!3868
2024-02-01 10:51:30 +00:00
Michael Catanzaro
8f34e90bc3 Link to the main context tutorial from the main loop docs
This might help increase visibility of Philip's useful GMainContext
tutorial. Although the GMainContext documentation is fairly good, it's
also pretty intimidating. The tutorial is very useful and provides
guidance that we can't fit directly into the documentation, so reference
it.
2024-01-31 11:56:56 -06:00
Philip Withnall
2638f97b3e Merge branch 'th/weak-ref-lock-2' into 'main'
[th/weak-ref-lock-2] gobject: use per-object bit-lock instead of global RWLock for GWeakRef

Closes #743

See merge request GNOME/glib!3834
2024-01-31 16:51:50 +00:00
Thomas Haller
7382cc4383 gobject: use per-object bit-lock instead of global RWLock for GWeakRef
Replace the global RWLock with per-object locking. Note that there are
three places where we needed to take the global lock: g_weak_ref_get(),
g_weak_ref_set(), and _object_unref_clear_weak_locations() during
g_object_unref(). The calls during g_object_unref() seem the most
relevant here, where we would want to avoid a global lock. Luckily, that
global lock only had to be taken if the object ever had a GWeakRef
registered, so most objects wouldn't care. The global lock only affects
objects that are ever set via g_weak_ref_set(). Still, try to avoid that
global lock.

Related to GWeakRef, there are various moments when we don't hold a
strong reference to the object. So the per-object lock cannot be on the
object itself, because when we want to unlock we no longer have access
to the object. And we cannot take a strong reference on the GObject
either, because that triggers toggle notifications. And worse, when one
thread holds the last strong reference of an object and decides to
destroy it, then a `g_weak_ref_set(weak_ref, NULL)` on another thread
could acquire a temporary reference, and steal the destruction of the
object from the other thread.

Instead, we already had a "quark_weak_locations" GData and an allocated
structure for tracking the GSList with GWeakRef. Extend that to be
ref-counted and have a separate lifetime from the object. This
WeakRefData now contains the per-object mutex for locking. We can
request the WeakRefData from an object, take a reference to keep it
alive, and use it to hold the lock without having the object alive.

We also need a bitlock on GWeakRef itself. So to set or get a
GWeakRef we must take the per-object lock on the WeakRefData and the
lock on the GWeakRef (in this order). During g_weak_ref_set() there may
of course be two objects (and two WeakRefData) involved: the previous
and the new object.

Note that once an object gets a WeakRefData allocated, it can no
longer be freed; it must stick around until the object gets destroyed.
This allocation happens once an object is set via g_weak_ref_set(). In
other words, objects involved with GWeakRef will have extra data
allocated.

It may be possible to also release the WeakRefData once it's no longer
needed. However, that would be quite complicated and require additional
atomic operations, so it's not clear whether it would be worth it, and it
is not done. Instead, the WeakRefData sticks to the object once it's set.
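
A condensed, self-contained sketch of the locking order described above (the stand-in types and bit positions are illustrative; the real implementation handles toggle references, lifetimes and many other details):

```c
#include <glib.h>

typedef struct { gint atomic_field; } WrdSketch;      /* per-object WeakRefData */
typedef struct { gint lock_and_data; } WeakRefSketch; /* stand-in for GWeakRef  */

#define WRD_LOCK_BIT  30
#define WREF_LOCK_BIT 0

static void
weak_ref_set_sketch (WeakRefSketch *wref, WrdSketch *wrd)
{
  /* Lock order: per-object WeakRefData lock first, then the bit lock on
   * the GWeakRef itself. */
  g_bit_lock (&wrd->atomic_field, WRD_LOCK_BIT);
  g_bit_lock (&wref->lock_and_data, WREF_LOCK_BIT);

  /* ... update which object the weak reference points to ... */

  g_bit_unlock (&wref->lock_and_data, WREF_LOCK_BIT);
  g_bit_unlock (&wrd->atomic_field, WRD_LOCK_BIT);
}
```
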
2024-01-31 17:30:28 +01:00
Thomas Haller
092be080c5 gobject: avoid global GRWLock for weak locations in g_object_unref() in some cases
_object_unref_clear_weak_locations() is called twice during
g_object_unref(). In both cases, it is when we expect that the reference
count is 1 and we are either about to call dispose() or finalize().

At this point, we must check for GWeakRef to avoid a race where the ref
count gets increased just at that moment.

However, we can do something better than to always take the global lock.

Whenever an object is set in a GWeakRef, set a flag
OPTIONAL_FLAG_EVER_HAD_WEAK_REF on the object. Most objects are not
involved with weak references and won't have this flag set.

If we reach _object_unref_clear_weak_locations() we just (atomically)
checked that the ref count is one. If the object at this point never had
a GWeakRef registered, we know that nobody else could have raced against
obtaining another reference. In this case, we can skip taking the lock
and checking for weak locations.

As most objects never have a GWeakRef registered, this avoids a
significant amount of unnecessary work during
_object_unref_clear_weak_locations().

This even fixes a hard to hit race in the do_unref=FALSE case.
Previously, if do_unref=FALSE there were code paths where we avoided
taking the global lock; we did so when quark_weak_locations was unset.
However, that is not race-free. If we enter
_object_unref_clear_weak_locations() with a ref-count of 1 and one
GWeakRef registered, another thread can take a strong reference and
unset the GWeakRef. Then quark_weak_locations will be unset, and
_object_unref_clear_weak_locations() misses the fact that the ref count
is now bumped to two. That is now fixed, because once
OPTIONAL_FLAG_EVER_HAD_WEAK_REF is set, it will stick.

Previously, there was an optimization to first take a read lock to check
whether there are weak locations to clear. It's not clear that this is
worth it, because we now already have a hint that there might be a weak
location. Unfortunately, GRWLock does not support an upgradable lock, so
we cannot take an (upgradable) read lock, and when necessary upgrade
that to a write lock.
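
A simplified sketch of the fast path described above (the flag name follows the commit message; the bit value and surrounding logic are illustrative):

```c
#include <glib.h>

#define OPTIONAL_FLAG_EVER_HAD_WEAK_REF (1 << 0)   /* illustrative bit value */

static gboolean
object_ever_had_weak_ref (gint *optional_flags)
{
  return (g_atomic_int_get (optional_flags) &
          OPTIONAL_FLAG_EVER_HAD_WEAK_REF) != 0;
}

static void
unref_clear_weak_locations_sketch (gint *optional_flags)
{
  if (!object_ever_had_weak_ref (optional_flags))
    {
      /* The ref count was just seen as 1 and no GWeakRef was ever
       * registered, so nothing can have raced us through a weak
       * reference: skip taking the lock entirely. */
      return;
    }

  /* ... take the per-object lock and clear the weak locations ... */
}
```
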
2024-01-31 17:30:28 +01:00
Thomas Haller
0c06a4b7a0 glib: add internal g_datalist_id_update_atomic() function
GDataSet is mainly used by GObject. Usually, when we access the private
data there, we already hold another lock around the GObject.

For example, before accessing quark_toggle_refs, we take a
OPTIONAL_BIT_LOCK_TOGGLE_REFS lock. That makes sense, because we anyway
need to protect access to the ToggleRefStack. By holding such an
external mutex around several GData operations, we achieve atomic
updates.

However, there is a (performance) use case for updating the qdata
atomically, without such an additional lock. The GData already holds a
lock while updating the data. Add a new g_datalist_id_update_atomic()
function that can invoke a callback while holding that lock.

This will be used by GObject. The benefit is that we can access the
GData atomically, without requiring another mutex around it.

For example, a common pattern is to request some GData entry and, if
it's not yet allocated, to allocate it. This requires taking the GData
bit lock twice. With this API, the callback can allocate the data if no
entry exists yet.
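
For illustration, the two-lock pattern described above written with the public GData API (in GObject this is normally guarded by an additional external lock); the new internal function lets the allocate-if-missing step happen while the single GData lock is already held:

```c
#include <glib.h>

/* Naive get-or-create: g_datalist_id_get_data() and
 * g_datalist_id_set_data_full() each take the GData bit lock once. */
static gint *
get_or_create_counter (GData **datalist, GQuark key)
{
  gint *counter = g_datalist_id_get_data (datalist, key);   /* lock #1 */

  if (counter == NULL)
    {
      counter = g_new0 (gint, 1);
      g_datalist_id_set_data_full (datalist, key, counter, g_free);   /* lock #2 */
    }

  return counter;
}
```
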
2024-01-31 17:30:28 +01:00
Philip Withnall
2a99d4b168 girepository: Expose GITypeInfo and GIArgInfo as stack allocatable
There are a handful of APIs in libgirepository which are used on
performance-sensitive code paths in language bindings (such as looking
at arguments when doing function calls). Historically libgirepository
has provided a stack-allocated variant for them, which avoids returning
a newly allocated `GIBaseInfo`. Since moving to glib.git and porting to
`GTypeInstance`, that stack allocated version has been broken.

This commit fixes it, by exposing obfuscated stack allocatable versions
of `GITypeInfo` and `GIArgInfo`, which are the two `GIBaseInfo`
subtypes which can be returned by the stack allocation functions.

The commit includes unit tests for them.

Signed-off-by: Philip Withnall <pwithnall@gnome.org>

Fixes: #3217
2024-01-31 15:49:38 +00:00
Philip Withnall
5f12851312 Merge branch 'wip/oholy/libmnt_monitor' into 'main'
gunixmounts: Use libmnt_monitor API for monitoring

See merge request GNOME/glib!3845
2024-01-31 14:30:09 +00:00
Philip Withnall
7b5bcf62b8 Merge branch 'th/gobject-carray-comment' into 'main'
[th/gobject-carray-comment] gobject: remove obsolete code comment about CArray

See merge request GNOME/glib!3866
2024-01-31 14:09:04 +00:00
Ondrej Holy
c7254fb3ad gunixmounts: Use mnt_monitor_veil_kernel option
The previous commit enabled monitoring of the `/run/mount/utab` file. The
problem is that the `mount-changed` signal can be emitted twice for one
mount: once for the `/proc/mounts` file change and again for the
`/run/mount/utab` file change. This is still not ideal because e.g. the
`GMount` objects for mounts with the `x-gvfs-hide` option are added and
immediately removed. Let's enable the `mnt_monitor_veil_kernel` option to
avoid this.

Related: https://github.com/util-linux/util-linux/pull/2725
2024-01-31 14:53:42 +01:00
Ondrej Holy
1abbbd761e gunixmounts: Use libmnt_monitor API for monitoring
The `GUnixMountMonitor` object implements monitoring on its own currently.
Only the `/proc/mounts` file changes are monitored. It is not aware of the
`/run/mount/utab` file changes. This file contains the userspace mount
options (e.g. `x-gvfs-notrash`, `x-gvfs-hide`) among others. There is a
problem when `/sbin/mount.<type>` (e.g. `mount.nfs`) helper programs are
used. In that case, the `/run/mount/utab` file is updated later than the
`/proc/mounts` file and thus the `GUnixMountMonitor` clients (e.g.
`gvfs-udisks2-volume-monitor`, `gvfsd-trash`) don't see the userspace
options until the next `mount-changed` signal. Let's use the `libmnt_monitor`
API for monitoring instead and emit the `mount-changed` signal also when the
`/run/mount/utab` file is changed.

Related: https://issues.redhat.com/browse/RHEL-14607
Related: https://github.com/util-linux/util-linux/pull/2607
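
A hedged sketch of setting up such a monitor with the libmount API (error handling and the GLib main-loop integration are omitted; `mnt_monitor_veil_kernel()` is assumed to be the setter behind the option named in the commit above):

```c
#include <libmount/libmount.h>

/* Illustrative only: watch both the kernel mount table and the userspace
 * /run/mount/utab, and let libmount hold back ("veil") kernel events until
 * the matching utab update arrives, so one mount yields one change event. */
static struct libmnt_monitor *
create_mount_monitor (void)
{
  struct libmnt_monitor *monitor = mnt_new_monitor ();

  mnt_monitor_enable_kernel (monitor, 1);
  mnt_monitor_enable_userspace (monitor, 1, NULL);
  mnt_monitor_veil_kernel (monitor, 1);

  /* mnt_monitor_get_fd() then yields a pollable fd that GUnixMountMonitor
   * can wrap in order to emit mount-changed. */
  return monitor;
}
```
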
2024-01-31 14:53:42 +01:00
Philip Withnall
5f855022a6 Merge branch 'th/test-weak-notify' into 'main'
[th/test-weak-notify] gobject/tests: add test checking that GWeakRef is cleared in GWeakNotify

See merge request GNOME/glib!3865
2024-01-31 12:35:52 +00:00