These functions are unused. The per-object bitlock OPTIONAL_FLAG_LOCK
was replaced by using GData's lock and g_datalist_id_update_atomic().
Note that object_bit_lock() was originally introduced to replace global
locks for GObject so that all locks were per-object. Now this lock is
also dropped, and we only use the (per-object) GData lock.
When we introduced object_bit_lock(), we also added optional flags on
32bit (see HAVE_OPTIONAL_FLAGS_IN_GOBJECT). These flags are still
useful to have, also on x86, so we keep them even though their original
user is now gone.
It was replaced by the GData lock and g_datalist_id_update_atomic().
Note that we introduced object_bit_lock() to replace global mutexes.
Now, object_bit_lock() is also replaced by using the GData lock via
g_datalist_id_update_atomic().
This means that all mutex-like locks on the GObject now go through the
GData lock on the GObject's qdata.
For the moment, the object_bit_lock() API is still here and unused. It
will be dropped in a separate commit.
This is the first step to drop OPTIONAL_BIT_LOCK_TOGGLE_REFS lock. That
will happen soon after.
Note that toggle_refs_check_and_ref_or_deref() is called from
g_object_ref()/g_object_unref() whenever the ref count toggles between
1 and 2, so it is called frequently. Also consider that most objects
have no toggle references and would rather avoid the overhead.
Note that we expect that the object has no toggle references. So the
fast path only takes g_datalist_lock() first, updates the ref count and
checks OBJECT_HAS_TOGGLE_REF(). If we have no toggle reference, we
avoid the call to g_datalist_id_update_atomic(), which would first need
to search the datalist for the quark_toggle_refs key.
Only if OBJECT_HAS_TOGGLE_REF() is set do we call
g_datalist_id_update_atomic(). At that point, we pass "already_locked"
to indicate that we hold the lock, and avoid the overhead of taking the
lock a second time.
In this commit, the fast path actually gets worse, because previously
we already had the OBJECT_HAS_TOGGLE_REF() optimization and only needed
the object_bit_lock(). Now we additionally take the g_datalist_lock().
Note that the object_bit_lock() will go away next, which brings the
fast path back to taking only one bit lock. You might think that the
fast path is then still worse, because previously we took a distinct
lock (object_bit_lock()), while now even more places go through the
GData lock; taking different locks could theoretically allow for higher
parallelism. However, note that in both cases these are per-object
locks, so it would be very hard to find a usage where the previous
scheme achieved higher parallelism due to that (e.g. a concurrent
g_weak_ref_get() vs. toggling the last reference). And that only
concerns the fast path in toggle_refs_check_and_ref_or_deref(). At all
other places, we will soon take one lock less.
This also fixes a regression of commit abdb58007a ('gobject: drop
OPTIONAL_BIT_LOCK_NOTIFY lock'). Note the code comment in
toggle_refs_check_and_ref_or_deref() about how it relies on holding the
same lock that is also taken while destroying the object. This was no
longer the case after the OPTIONAL_BIT_LOCK_NOTIFY lock was replaced by
the GData lock. This commit fixes that, because the same lock is taken
again.
Fixes: abdb58007a ('gobject: drop OPTIONAL_BIT_LOCK_NOTIFY lock')
This allows the caller to take the lock on the GData first, and perform
some operations.
This is useful under the assumption that the caller can find cases
where calling g_datalist_id_update_atomic() is unnecessary, but where
they still need to hold the lock to make that decision atomically.
That avoids performance overhead when we can skip the call to
g_datalist_id_update_atomic(). That matters for checking the
toggle-notify in g_object_ref()/g_object_unref().
Note that with "already_locked", g_datalist_id_update_atomic() will
still unlock the GData at the end. That is because
g_datalist_id_update_atomic() may need to re-allocate the buffer, and
in that case it can do a more efficient unlock itself instead of
leaving it to the caller: it can save an additional atomic operation by
not first setting the buffer and then doing a separate
g_datalist_unlock(). The usage and purpose of this parameter is special
anyway, so the few callers will be fine with this asymmetry.
g_datalist_id_update_atomic() allows using GData's internal lock to
perform complex operations on the data while holding the lock.
Now, also expose the g_datalist_{lock,unlock}() functions as private
API. That will be useful, because g_datalist_id_update_atomic() and
GData's lock are the main synchronization point for GObject. By being
able to take those locks directly, we can perform operations without
also having to look up the GData key.
All code that adds an entry to GData calls down to datalist_append(),
and no caller ever passes a zero key_id. So a GData will never contain
the zero GQuark.
g_datalist_get_data() allows looking up keys by string. Looking up by
string only makes sense if the string is a valid GQuark, and the NULL
string is not a valid GQuark (or, in another interpretation, the GQuark
of NULL is zero). In any case, the GData does not contain the zero
quark, so it should never find a NULL string there.
Note however that you can add invalid (non-zero) GQuarks to GData. That
is because we avoid the overhead of checking that a non-zero key_id is
in fact a valid GQuark. Thus, if you called
g_datalist_get_data(&data, NULL)
the check
if (g_strcmp0 (g_quark_to_string (data_elt->key), key) == 0)
would have found entries where the key is not a valid GQuark.
Fix that. Looking up a NULL string should never find an entry.
Fixes: f6a9d04796 ('GDataSet: silently accept NULL/0 as keys')
In g_dataset_id_remove_no_notify() and g_dataset_id_get_data(), first
check the argument before taking the lock.
Note that this doesn't matter much for g_dataset_id_get_data(). We
could leave the check as it was or drop it altogether, because
g_dataset_id_get_data() calls g_datalist_id_get_data(), which is fine
with a zero key_id. Also, passing a zero key is not something that
somebody would normally do, so the check is not necessary at all. I
leave the check for consistency with g_dataset_id_remove_no_notify()
and do the change for the same reason as there.
However, the check for key matters for g_dataset_id_remove_no_notify().
That function calls g_data_set_internal(), which must not be called with
a zero key.
At this point, it seems like a code smell to perform the check after
taking the lock. It is something you notice every time you look at it,
and while technically not a problem, it is ugly. Hence the change.
This is internal API, and is only used via GLIB_PRIVATE_CALL(). Such
private API is not stable. That is, you must use the same version of
e.g. libgobject and libglib.
In that case, the "Since" annotation makes no sense.
Fixes the following issues:
1. g_getenv didn't properly differentiate between non-existing variables
and variables with empty values. This happened because we didn't call
SetLastError (0) before GetEnvironmentVariable (as with most APIs,
GetEnvironmentVariable doesn't set the error code to 0 on success).
2. g_getenv called GetEnvironmentVariable, g_free, and GetLastError in sequence;
the call to g_free could change the last error code.
3. We can use the looping pattern to call APIs that return the required buffer
size. The looping pattern makes the two phases (get size and retrieve value)
appear as just one call. It's also important for values that can change at any
time, like environment variables [1].
[1] https://devblogs.microsoft.com/oldnewthing/20220302-00/?p=106303
Some app names or keywords contain multiple words. For example,
'LibreOffice Calc' contains the word 'Calc'. This is rightfully
detected as a prefix match, however it is generally expected that
searching for 'calc' would consistently return 'Calculator' in first
position, instead of ranking the two equally.
We now prioritise tokens that would otherwise rank equally based on
where they occur in the string, giving earlier occurrences precedence.
Definitions in definition lists in most markdown implementations,
including the GitLab one, support laziness for the definition text
(https://spec.commonmark.org/0.31.2/#lazy-continuation-line).
As a result, each defined term would be collapsed into the preceding
definition. To fix this, definitions need to be separated by a blank
line.
The use case for exposing this field is GTK wanting reproducible
encoding output across different OSes.
I decided to expose the OS as an integer because zlib uses an int
in its header and does not make its OS codes available as a custom
type in its API.
I also decided to limit it to values between 0 and 255 because zlib
encodes the OS as a single byte.
Test included.
Fixes: #3663
Like the GWeakRefs in GObject, there is a global lock that is consulted
whenever g_main_context_unref(), g_source_destroy() or
g_source_unref() is called to retrieve a reference to the associated
GMainContext.
There are a number of actual races that are fixed by this change.
1. Racing GSource destruction with g_main_context_unref() is solved
by holding the global source_weak_locations lock while setting
source->context = NULL and while g_source_destroy() attempts to
retrieve source->context;
2. Same race as 1. but inside g_source_unref()
3. Theoretical race of double-freeing the contents of
context->pending_dispatches if g_source_destroy() and
g_main_context_unref() both free resources inside
g_main_context_unref().
A couple of implementation notes:
1. Unlocking source_weak_locations too early in g_main_context_unref()
(before g_source_destroy_internal() is called) may race on the
G_HOOK_FLAG_ACTIVE state of the source and cause a leak of the source.
This is why source_weak_locations is also held over the calls to
g_source_destroy_internal() in g_main_context_unref(), so that either
g_main_context_unref() or g_source_destroy() (but only one) has the
chance to free the resources associated with the GMainContext.
2. g_main_context_unref() now needs to be more of a dispose()
implementation, as it can be called multiple times without losing the
last ref.
Fixes: https://gitlab.gnome.org/GNOME/glib/-/issues/803
These show up in various places in docgen-based docs, including in
results for fairly common searches (such as ‘alloc’). Whilst people
should have been scared off by these being undocumented, the red
deprecated tags make it clear these shouldn't be touched.