We are using various index types, but not always with the correct
sign or size, so let's adapt this to ensure we're consistent with the
values we're comparing against.
Even though we expose member access as size_t, a GI info blob can
typically only access a number of values that is never bigger than
uint16_t, as that's how the typelib is defined (cf. the typelib
internal header blob sizes and n_* elements).
So let's prevent that from happening by adding a check.
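As a minimal, hypothetical sketch of what such a check might look like
(the function and parameter names are illustrative, not the actual
girepository code):

```c
#include <stddef.h>
#include <glib.h>

/* Reject indices that cannot possibly be valid: the typelib stores its
 * n_* counters as 16-bit values, so a size_t index beyond uint16_t (or
 * beyond the blob's own counter) is always out of range. */
static gboolean
gi_info_index_is_valid (size_t index, guint16 n_values)
{
  g_return_val_if_fail (index <= G_MAXUINT16, FALSE);
  g_return_val_if_fail (index < n_values, FALSE);

  return TRUE;
}
```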
We used to use unsigned values; while those should be big enough to hold
the data we're handling here, it's cleaner and clearer to use size_t
as the type for such values, as it makes it straightforward to understand
what a value should contain. It also makes these values more future-proof.
We just do a safe s/gsize/size_t/ replacement here, without touching
places where the sizes of size_t and gsize may actually differ and
cause trouble.
Usually, after g_pointer_bit_lock() we want to read the pointer that we
have. In many cases, when we g_pointer_bit_lock() a pointer, we can
access it afterwards without atomics, as nobody is going to modify the
pointer then.
However, gdataset also supports g_datalist_set_flags(), so the pointer
may change at any time and we must always use atomics to read it. For
that reason, g_datalist_lock_and_get() does an atomic read right after
g_pointer_bit_lock().
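Roughly, the pattern reads like this (a sketch only: the lock bit and
flag mask values are gdataset-internal details and assumed here, not the
exact code):

```c
#include <glib.h>

#define DATALIST_LOCK_BIT    2u               /* assumed, for illustration */
#define DATALIST_FLAGS_MASK  ((guintptr) 0x7) /* assumed, for illustration */

static GData *
g_datalist_lock_and_get (GData **datalist)
{
  guintptr ptr;

  g_pointer_bit_lock (datalist, DATALIST_LOCK_BIT);

  /* g_datalist_set_flags() may modify the pointer at any time, so it
   * must be read atomically even while the bit lock is held. */
  ptr = (guintptr) g_atomic_pointer_get (datalist);

  return (GData *) (gpointer) (ptr & ~DATALIST_FLAGS_MASK);
}
```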
g_pointer_bit_lock() internally already has easy access to the value
that it just set. Add g_pointer_bit_lock_and_get(), which can return
that value to the caller.
Aside from saving the second atomic-get in certain scenarios, the
returned value is also atomically the one that we just set.
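A short usage sketch (assuming `lockptr` is a gpointer and using lock
bit 0, as in the examples further below):

```c
#include <glib.h>

static gpointer lockptr = NULL;

static void
example (void)
{
  guintptr ptr;

  /* Take the lock and, atomically in the same operation, get the value
   * that was just stored (i.e. the pointer with the lock bit set). */
  g_pointer_bit_lock_and_get (&lockptr, 0, &ptr);

  /* ... use ptr; mask out the lock bit to recover the real pointer ... */

  g_pointer_bit_unlock (&lockptr, 0);
}
```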
In all cases after taking the lock with g_datalist_lock(), we also
get the pointer. And it hardly makes sense otherwise.
Replace g_datalist_lock() by g_datalist_lock_and_get() which does
both steps in one.
- only use g_newa0() for up to 400 bytes (arbitrarily chosen as
something that is probably small enough to cover most small tasks
while fitting easily on the stack). Otherwise, fall back to g_new0()
(see the sketch after this list).
- don't do intermediate G_DATALIST_SET_POINTER(). Set to NULL
afterwards with g_datalist_unlock_and_set().
- move the g_free(d) after releasing the lock on datalist.
- when setting datalist to NULL, do it at the end with
g_datalist_unlock_and_set() to avoid the additional atomic
operations.
- when freeing the old "d", do that after releasing the bit
lock.
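A rough sketch of how these points combine (the gdataset-internal
helpers are only declared here with assumed signatures, and the buffer
handling is simplified):

```c
#include <glib.h>

/* gdataset-internal helpers named above; signatures assumed. */
GData *g_datalist_lock_and_get   (GData **datalist);
void   g_datalist_unlock_and_set (GData **datalist, gpointer ptr);

static void
datalist_clear_sketch (GData **datalist, gsize alloc_size)
{
  GData *d;
  guint8 *copy;
  gboolean copy_on_stack = (alloc_size <= 400);

  d = g_datalist_lock_and_get (datalist);

  /* Small temporary copies live on the stack; bigger ones fall back to
   * the heap (400 bytes is the arbitrary cutoff from the list above). */
  if (copy_on_stack)
    copy = g_newa0 (guint8, alloc_size);
  else
    copy = g_new0 (guint8, alloc_size);

  /* ... copy what is needed out of "d" while the lock is held ... */

  /* Set the datalist to NULL and release the bit lock in one step,
   * instead of an intermediate G_DATALIST_SET_POINTER() followed by a
   * separate unlock. */
  g_datalist_unlock_and_set (datalist, NULL);

  /* Free the old "d" (and any heap fallback) only after the lock is
   * released. */
  g_free (d);
  if (!copy_on_stack)
    g_free (copy);
}
```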
The previous code was in practice correct.
(((guintptr) ptr) | ((gsize) mask))
works even if gsize is smaller than guintptr. And while the C standard
only guarantees that casting between uintptr_t and void* works, on
reasonable compilers casting uintptr_t directly to GData* works too.
Still, no need for such uncertainty. Just be clear about the involved
types (use guintptr throughout) and cast first to (gpointer).
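In other words (a minimal sketch; the helper name is illustrative):

```c
#include <glib.h>

/* Do the bit arithmetic entirely in guintptr, then cast to gpointer
 * first, so no uncertain uintptr_t-to-GData* cast is needed. */
static GData *
mask_ptr (gpointer ptr, guintptr mask)
{
  return (GData *) (gpointer) (((guintptr) ptr) | mask);
}
```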
g_datalist_clear_i() had only one caller, although the code comment made
it sound as if in the past there were more.
A function that has only one caller, and then needs a code comment to
explain the context in which it is called, makes the code more complicated
than necessary.
Especially since the function expects to be called with a global lock
held, and then unlocks and relocks. Spreading such things to another
function (which is only used once) makes code harder to follow.
Inline the function, so that we can see all the (trivial) code in one place.
This saves an additional atomic set.
Also, changing the pointer before the unlock can wake a blocked thread,
but at that point the lock is still held (and the thread may need to
go back to sleep). This can be avoided.
The existing g_pointer_bit_lock() and g_pointer_bit_unlock() API
requires the user to understand/reimplement how bits of the pointer get
mangled. Add helper functions for that.
The useful thing to do with the g_pointer_bit_lock() API is to get/set
pointers while holding the lock. For example, to set the pointer a user
can do:
g_pointer_bit_lock (&lockptr, lock_bit);
ptr2 = set_bit_pointer_as_if_locked(ptr, lock_bit);
g_atomic_pointer_set (&lockptr, ptr2);
g_pointer_bit_unlock (&lockptr, lock_bit);
That has several problems:
- it requires one extra atomic operation (3 instead of 2, in the
non-contended case).
- the first g_atomic_pointer_set() already wakes blocked threads,
which find themselves still locked and need to go back to
sleep.
- the user needs to re-implement how bit-locking mangles the pointer so
that it looks as if it were locked.
- while the user tries to re-implement what glib does to mangle the
pointer for bitlocking, there is no immediate guarantee that they get
it right.
Now we can do instead:
g_pointer_bit_lock(&lockptr, lock_bit);
g_pointer_bit_unlock_and_set(&lockptr, lock_bit, ptr, 0);
This will also emit a critical if @ptr has the locked bit set.
g_pointer_bit_lock() really only works with pointers that have a certain
alignment, and the lowest bits unset. Otherwise, there is no space to
encode both the locking and all pointer values. The new assertion helps
to catch such bugs.
Also, g_pointer_bit_lock_mask_ptr() is here, so we can do:
g_pointer_bit_lock(&lockptr, lock_bit);
/* set a pointer separately, when g_pointer_bit_unlock_and_set() is unsuitable. */
g_atomic_pointer_set(&lockptr, g_pointer_bit_lock_mask_ptr(ptr, lock_bit, TRUE, 0, NULL));
...
g_pointer_bit_unlock(&lockptr, lock_bit);
and:
g_pointer_bit_lock(&lockptr, lock_bit);
/* read the real pointer after getting the lock. */
ptr = g_pointer_bit_lock_mask_ptr(lockptr, lock_bit, FALSE, 0, NULL);
...
g_pointer_bit_unlock(&lockptr, lock_bit);
While glibc is fine with it (and returns a `NULL` pointer), technically
it’s implementation-defined behaviour according to POSIX, so it’s best
avoided.
See
https://pubs.opengroup.org/onlinepubs/9699919799/functions/posix_memalign.html.
In particular, valgrind will warn about it, which is causing failures of
the gdbus-codegen tests when valgrind is enabled. For example,
https://gitlab.gnome.org/GNOME/glib/-/jobs/3460673 gives
```
==15276== posix_memalign() invalid size value: 0
==15276== at 0x484B7BC: posix_memalign (vg_replace_malloc.c:2099)
==15276== by 0x49320B2: g_variant_new_from_bytes (gvariant-core.c:629)
==15276== by 0x4931853: g_variant_new_from_data (gvariant.c:6226)
==15276== by 0x4B9A951: g_dbus_gvalue_to_gvariant (gdbusutils.c:708)
==15276== by 0x41BD15: _foo_igen_bat_skeleton_handle_get_property (gdbus-test-codegen-generated.c:13503)
==15276== by 0x41BFAF: foo_igen_bat_skeleton_dbus_interface_get_properties (gdbus-test-codegen-generated.c:13582)
…
```
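A sketch of the kind of guard that avoids the call (illustrative only,
not the actual gvariant-core.c code):

```c
#include <stdlib.h>
#include <glib.h>

static gpointer
aligned_alloc_or_null (gsize alignment, gsize size)
{
  void *mem = NULL;

  /* posix_memalign() with a zero size is implementation-defined per
   * POSIX (and makes valgrind complain), so short-circuit it here. */
  if (size == 0)
    return NULL;

  if (posix_memalign (&mem, alignment, size) != 0)
    return NULL;

  return mem;
}
```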
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3228
The "--code" option was removed years ago in d5b8d8d523c3bc26aa9fe6c364d3a17d325b55b6
so remove references to it from README and g-ir-compiler(1)
Remove the "--no-init" option from g-ir-compiler and g-ir-compiler(1)
as it was documented to "can only be used if --code is also specified",
so no reason to keep it around.
Return a non-zero result when opening the output file fails and
don't use g_error() for other failures when writing out the file,
since such errors should not produce a core dump.
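A hedged sketch of the intended behaviour (`output`, `data` and `len`
are assumed names, not the exact g-ir-compiler code):

```c
#include <errno.h>
#include <stdio.h>
#include <glib.h>

static int
write_typelib (const char *output, const guint8 *data, gsize len)
{
  FILE *file = fopen (output, "wb");

  if (file == NULL)
    {
      /* Report the error and return non-zero instead of aborting. */
      g_printerr ("failed to open '%s': %s\n", output, g_strerror (errno));
      return 1;
    }

  if (fwrite (data, 1, len, file) != len)
    {
      /* Likewise: no g_error(), so no core dump on a write failure. */
      g_printerr ("error writing '%s': %s\n", output, g_strerror (errno));
      fclose (file);
      return 1;
    }

  if (fclose (file) != 0)
    {
      g_printerr ("error closing '%s': %s\n", output, g_strerror (errno));
      return 1;
    }

  return 0;
}
```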
We never actually include multiple modules in the compiler,
so just nuke that. Also, rather than passing around GIrModule,
consistently pass around a GIrTypelibBuild structure, which
carries various pieces of state.
This lets us maintain a stack there which we can walk for
better error messages.
Also, fix up the node lookup in giroffsets.c; previously
it didn't really handle includes correctly. We really need to
switch to always using Foo.Bar (i.e. GIName) names internally...
It can't really work right now because we rely on dumping data at runtime,
which requires the library. If in the future we support static scanning,
we can reinvestigate embedded typelibs.