It's expected that an interface may have a NULL socket address, so just
skip it and don't crash if so. In practice, this occurs for VPN
interfaces.
Fixes: fe0139ee98
Thankfully this struct is also internal, so we can happily change the
types of any field except the first one (which is in the public `GTuples`
prefix).
This fixes the remaining `-Wsign-conversion` warnings for `grel.c`.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
`GTuples.len` is a `guint`.
This doesn’t seem to have caused any problems or warnings, but let’s get
it correct.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
We would otherwise have had to cast, and lose width and signedness on,
both the arguments to, and the return value from, `g_relation_count()`.
It's simpler to add an internal helper with the right types, and only do
the casts in the public API wrapper, where API stability means they're
unavoidable.
Fixes some more `-Wsign-conversion` warnings.
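The split can be sketched roughly like this; the names, types, and clamping behaviour are illustrative stand-ins, not the actual `grel.c` code:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Hypothetical internal helper: unsigned size_t types throughout, so no
 * casts (and no -Wsign-conversion warnings) at internal call sites. */
size_t
relation_count_internal (const size_t *counts, size_t n_fields, size_t field)
{
  assert (field < n_fields);
  return counts[field];
}

/* Public wrapper keeping a historical signed `int` signature for API
 * stability: the lossy casts happen here, and only here. */
int
relation_count_public (const size_t *counts, size_t n_fields, int field)
{
  size_t count;

  if (field < 0 || (size_t) field >= n_fields)
    return 0;

  count = relation_count_internal (counts, n_fields, (size_t) field);
  return (count > INT_MAX) ? INT_MAX : (int) count;
}
```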
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Previously this was passed straight into an array dereference, so it
could easily have indexed before the start or past the end of the array.
Add a precondition check to it. This doesn’t change the API contract,
because the code already emitted a critical warning if the index didn’t
find a table:
```
g_return_val_if_fail (table != NULL, NULL)
```
This fixes a load of `-Wsign-conversion` warnings and makes these
functions safer.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
This is what’s documented as supported (and the only value that is
supported), so we might as well check for it early.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
The internals of this struct are private, so we have some freedom to fix
the types used: widen them (to allow addressing all of memory) and make
them correctly unsigned.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
The code wasn’t written for it, and we don’t want to diverge from
upstream for this, so disable `-Wsign-conversion` in case it was set for
the overall GLib build.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3405
Wherever we use gssize to allow passing -1, we need to ensure we don't
overflow the value by assigning a gsize to it without checking if the
size exceeds the maximum gssize. The safest way to do this is to just
use normal gsize everywhere instead and use gssize only for the
parameter.
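The pattern can be sketched like this; the names are illustrative, with `long` standing in for `gssize` and `size_t` for `gsize`:

```c
#include <stddef.h>
#include <string.h>

/* The parameter stays signed so that -1 can mean "use strlen()", but it is
 * converted to an unsigned size_t immediately and kept unsigned everywhere
 * afterwards. The unsafe direction (assigning a gsize to a gssize, which
 * can overflow for sizes above the maximum gssize) never happens. */
size_t
normalize_len (const char *str, long len /* gssize stand-in */)
{
  if (len < 0)
    return strlen (str);
  return (size_t) len; /* non-negative, so the cast cannot overflow */
}
```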
Our computers don't have enough RAM to write tests for this. I tried
forcing string->len to high values for test purposes, but this isn't
valid and will just cause out of bounds reads/writes due to
string->allocated_len being unexpectedly small, so I don't think we can
test this easily.
Now all accesses to quark_notify_queue are guarded by the GData lock.
Several non-trivial operations are implemented via
g_datalist_id_update_atomic().
The OPTIONAL_BIT_LOCK_NOTIFY lock is thus unnecessary and can be dropped.
Note that with the move to g_datalist_id_update_atomic(), we now
potentially do more work while holding the GData lock (e.g. some code
paths allocate additional memory). But note that
g_datalist_id_set_data() already has code paths where it must allocate
memory to track the GDataElt. Also, most objects are not used in
parallel, so holding the per-object (per-GData) lock for longer does not
affect them. Many operations also require an object_bit_lock(), so it
seems very unlikely that you could really achieve higher parallelism by
taking more locks (and minimizing the time for which the GData lock is
held). On the contrary, taking one lock fewer and doing all the work
under the remaining one is beneficial.
Make it a bit clearer that a returned `GFileInfo` can be empty if
`stat()` fails (e.g. with `EACCES`), and that it’s the caller’s
responsibility to use `g_file_info_has_attribute()` before retrieving
attribute values.
This better documents what we implemented in, for example, !3336.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Helps: #3658
It would probably get inlined anyway, but add G_ALWAYS_INLINE to be more
sure of that. We don't want this mostly trivial function to be a
separate call.
These are similar to g_pointer_bit_lock_and_get() and
g_pointer_bit_unlock_and_set().
The bitlock API is pretty amazing, as it allows using a single bit of an
integer for locking while using the remaining 31 bits for other
purposes. But those other bits always need to be accessed atomically
too.
There is a use in being able to lock_and_get(), combining setting the
lock bit with fetching the new value in one go. For one, that can save
additional atomic operations to fetch the value afterwards. More
importantly, it performs the change in one atomic operation.
Likewise, unlock_and_set() allows atomically clearing the lock bit and
setting a new value (while also preserving unrelated bits, via the
@preserve_mask parameter).
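The pair can be sketched with C11 atomics; this is an illustrative analogue of the pattern (lock bit in bit 0, value in the remaining bits), not the GLib implementation:

```c
#include <stdatomic.h>
#include <stdint.h>

#define LOCK_BIT 0x1u

/* Set the lock bit and return the word's new value in the same atomic
 * update (cf. g_pointer_bit_lock_and_get()): no separate re-read, and no
 * window in which another thread could change the other bits unnoticed. */
uint32_t
bit_lock_and_get (_Atomic uint32_t *word)
{
  uint32_t old = atomic_load (word);

  while ((old & LOCK_BIT) != 0 ||
         !atomic_compare_exchange_weak (word, &old, old | LOCK_BIT))
    {
      if ((old & LOCK_BIT) != 0)
        old = atomic_load (word); /* spin until the lock bit is clear */
    }
  return old | LOCK_BIT;
}

/* Clear the lock bit and set a new value atomically, keeping the bits
 * selected by preserve_mask (cf. g_pointer_bit_unlock_and_set()). */
void
bit_unlock_and_set (_Atomic uint32_t *word, uint32_t new_value,
                    uint32_t preserve_mask)
{
  uint32_t old = atomic_load (word);
  uint32_t next;

  do
    next = ((new_value & ~preserve_mask) | (old & preserve_mask)) & ~LOCK_BIT;
  while (!atomic_compare_exchange_weak (word, &old, next));
}
```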
A common pattern is to check whether a GData entry exists and, if it
doesn't, add it.
For that, we currently always must take the OPTIONAL_BIT_LOCK_NOTIFY lock.
This can be avoided, because GData already uses an internal mutex. By
using g_datalist_id_update_atomic(), we can perform all relevant
operations while holding that mutex.
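The idea can be sketched like this; a plain mutex stands in for GData's internal lock, a single slot stands in for the datalist entry, and all names are illustrative rather than the GLib implementation:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct
{
  pthread_mutex_t lock;
  void *entry;
} Data;

/* Analogue of g_datalist_id_update_atomic(): run the update callback while
 * the lock is held, letting it inspect and replace the entry in one
 * critical section, so no second (outer) lock is needed around the
 * check-then-create sequence. */
void *
data_update_atomic (Data *data,
                    void *(*update) (void *entry, void *user_data),
                    void *user_data)
{
  void *result;

  pthread_mutex_lock (&data->lock);
  result = update (data->entry, user_data);
  data->entry = result;
  pthread_mutex_unlock (&data->lock);
  return result;
}

/* Update callback: keep an existing entry, or allocate one on first use. */
void *
ensure_entry (void *entry, void *user_data)
{
  (void) user_data;
  return (entry != NULL) ? entry : calloc (1, 16);
}
```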
Move functionality from g_object_notify_queue_freeze() inside
g_datalist_id_update_atomic().
The goal will be to drop the OPTIONAL_BIT_LOCK_NOTIFY lock in a later
commit.
With the previous changes, all accesses to the WeakRefStack go through
g_datalist_id_update_atomic() and are guarded by the GData's bit lock.
At this point, the OPTIONAL_BIT_LOCK_WEAK_REFS lock is unnecessary and
can be dropped.
A minor benefit is that g_object_weak_{ref,unref}() now needs one lock
fewer.
Also note that this rework fixes a potential race for weak refs. Note
that we have two calls

    g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);

that don't take the OPTIONAL_BIT_LOCK_WEAK_REFS lock. One of these calls
is right before finalize(). At that point, no other thread can hold a
reference to the object to race against us, so we are good. However, the
other call is from g_object_real_dispose(). At that point, the object
could theoretically have been resurrected and a pointer to it passed to
another thread; calling g_object_weak_ref()/g_object_weak_unref() would
then race. We would have needed an OPTIONAL_BIT_LOCK_WEAK_REFS lock
around those g_datalist_id_set_data(,,NULL) calls.
Instead, this is now also fixed, because every update to the
WeakRefStack happens while holding the GData lock. So if you call
g_datalist_id_set_data(,,NULL), the WeakRefStack is removed from the
GData (and later freed by weak_refs_notify()) and can no longer be
concurrently updated by g_object_weak_{ref,unref}().
This is a step towards dropping the OPTIONAL_BIT_LOCK_WEAK_REFS lock.
See the next commits for why that is done.
Also, free the WeakRefStack, if there are no more references.
Previously, it was never freed.
The usual macro for Linux is `__linux__`, not `__LINUX__`. Noticed this
after Ruby fixed a similar error (6fbc32b5d0da31535cccc0eca1853273313a0b52).
Fixes: 5b84636e62
Spotted while looking at CI output for the previous commit, and I
thought I might as well fix it while I was there.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
This should fix compile errors on macOS, which couldn’t find the
`write()`, `lseek()`, etc. functions added in commit
020e58bac5.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
It was always passed the same value by all users of the macro.
This introduces no functional changes.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Rather than atomically querying the flags, and then atomically reffing
the `GClosure`, do the atomic ref and return the new flags value as a
side-effect. This eliminates the need for an additional atomic read, and
eliminates a race window between the two accesses.
It does mean we need a new internal version of `g_closure_ref()` which
returns the new `GClosureFlags` though.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Remove the last few non-atomic reads of `GClosure.ref_count`. See
previous commits for an explanation of why this is important.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Currently, `GClosure` does atomic writes to the flags in its initial
bitfield, but it never does atomic reads from them. This is not safe —
even if the memory read for an int is atomic, the compiler might
duplicate or rearrange a non-atomic read and give an unexpected result.
Using an atomic fetch inserts the necessary compiler and hardware
barriers to ensure a single result is fetched.
(See !4575 for an example of where this kind of bug has caused
misbehaviour.)
So, introduce an atomic read helper for the `GClosure` bitfield. Rather
than reading a single member of the bitfield, it returns the entire
bitfield, as several `GClosure` methods need access to multiple members
of the bitfield, and atomically fetching them all separately would not
be atomic overall.
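The idea can be sketched with C11 atomics; the bitfield layout and names here are illustrative, not GClosure's actual ones:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative layout: flag bits and a refcount packed into one 32-bit
 * word that is always written atomically. */
#define FLAG_FLOATING   (1u << 0)
#define FLAG_INVALID    (1u << 1)
#define REF_COUNT_SHIFT 16
#define REF_COUNT_MASK  0xffff0000u

/* Atomic read helper: fetch the whole word once. All the members examined
 * afterwards come from a single consistent snapshot, whereas loading each
 * member with a separate atomic read would not be atomic overall. */
uint32_t
closure_flags_get (_Atomic uint32_t *vint)
{
  return atomic_load (vint);
}
```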
Fix various `GClosure` methods to use the new atomic read function.
There are more parts of the code still to fix — these were just the
straightforward ones.
Signed-off-by: Philip Withnall <pwithnall@gnome.org>
Depending on luck, g_closure_ref may access closure->vint and observe
different values between reads. This manifests as a test failure in
signals-refcount{2,4}, properties-refcount1, and closure-refcount depending
on timing and re-runs.
Jakub Jelinek analysed this on GCC bug PR119607 after I'd reported it
over there as a possible GCC regression.
The critical part is g_closure_ref -> ATOMIC_INC_ASSIGN -> ATOMIC_CHANGE_FIELD,
where closure->vint gets re-read repeatedly, both outside and inside the retry
loop. To fix that:
1. Atomically fetch it the first time;
2. Use the cached read, not a fresh read, of vint in the loop;
3. Use g_atomic_int_compare_and_exchange_full in the loop so we get a freshly
cached vint if it changed in another thread.
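The fixed loop can be sketched with C11 atomics; the field layout is illustrative, and `atomic_compare_exchange_weak()` plays the role of `g_atomic_int_compare_and_exchange_full()` in that, on failure, it stores the freshly observed value back into the cached variable:

```c
#include <stdatomic.h>
#include <stdint.h>

#define REF_COUNT_MASK 0x7fffu /* illustrative: low bits hold the refcount */

uint32_t
closure_ref_vint (_Atomic uint32_t *vint)
{
  /* 1. Atomically fetch the value once, up front. */
  uint32_t old = atomic_load (vint);
  uint32_t new_value;

  do
    {
      /* 2. Compute the update from the cached `old`, never from a fresh
       * (possibly inconsistent) re-read of *vint. */
      new_value = (old & ~REF_COUNT_MASK) | ((old + 1) & REF_COUNT_MASK);
    }
  /* 3. On failure, the compare-and-exchange refreshes `old` itself, so the
   * next iteration uses the value another thread just wrote. */
  while (!atomic_compare_exchange_weak (vint, &old, new_value));

  return new_value;
}
```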
Bug: https://gcc.gnu.org/PR119607
Fixes: 834ddd19 ('turned all modifications to the first 32 integer bits in a closure into')
Co-authored-by: Jakub Jelinek <jakub@redhat.com>