Relax the assertion about the opened registry key, as the key may have
been removed between enumeration and opening, or (more likely) we may
not have the required permissions to open some of the enumerated keys
(i.e. RegOpenKeyExW() fails and returns ERROR_ACCESS_DENIED).
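A minimal sketch of the relaxed handling (illustrative only, not the
actual backend code; try_open_subkey() and its parameters are
hypothetical names):

#include <windows.h>
#include <glib.h>

/* A subkey reported by enumeration may fail to open afterwards, so
 * treat that as a non-fatal condition instead of asserting on it. */
static gboolean
try_open_subkey (HKEY parent, const wchar_t *name, HKEY *out_key)
{
  LONG result = RegOpenKeyExW (parent, name, 0, KEY_READ, out_key);

  if (result != ERROR_SUCCESS)
    {
      /* The key may have been removed since enumeration, or we may
       * lack the permissions to open it (ERROR_ACCESS_DENIED). */
      return FALSE;
    }

  return TRUE;
}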
Fixes https://gitlab.com/inkscape/inbox/-/issues/5669
It fixes a race. g_source_attach() had the following check to ensure a
loop blocked on poll() would wakeup.
if (do_wakeup && context->owner && context->owner != G_THREAD_SELF)
g_wakeup_signal (context->wakeup);
However, it doesn't account for an implementation where poll()ing is a
non-blocking operation that is scheduled while the thread is released
to perform other tasks. This scenario opens up several different
possibilities where the condition fails to hold true. I experienced two
such races.
The first race pertains to a single-threaded application. Do keep in
mind that integrating GLib into a foreign event loop makes GLib act as
a slave of the new event loop. When you post a new work unit to execute
in the thread managed by the foreign event loop, you don't use
g_main_context_invoke(). In fact, the only reason to integrate
GMainContext into a foreign event loop is to make the two of them
communicate. So, from time to time, the foreign event loop will execute
callbacks that manipulate the GMainContext loop. An illustration
follows.
// in this callback we translate an event from the foreign event loop
// to an event in the GMainContext event loop (that runs in the same
// thread)
static void my_event_loop_callback(void* data)
{
    GMainContext* ctx = /* ... */;
    // ...
    g_source_attach(source, ctx);
}

int main()
{
    // ...
    my_event_loop_invoke(my_event_loop_callback, data);
    // ...
    // this function has all mechanisms in place to run the foreign
    // event loop and the hooks to call
    // g_main_context_{prepare,query,check,dispatch}
    my_event_loop_run();
}
In this case, you would have the following series of calls:
1. g_main_context_prepare()
2. g_main_context_query()
3. A callback is registered with my_event_loop to run when any fd in
the set becomes ready or the timeout is reached.
4. The thread is released to perform other tasks.
5. One of the tasks executed wishes to communicate with my_event_loop
and enters my_event_loop_callback.
6. g_source_attach() is called.
7. g_source_attach() detects do_wakeup=TRUE, context->owner != NULL, and
context->owner == G_THREAD_SELF, so g_wakeup_signal() is skipped.
8. None of the fds in the GLib poll() set becomes ready, nor does the
GLib timeout expire. The my_event_loop callback that would call
g_main_context_check() is never executed. Deadlock.
A shallow analysis will fail to detect the race here. The explanation
seems to showcase a scenario that will deterministically fail with a
deadlock every time. However, do keep in mind that my_event_loop_callback
could be invoked before or after g_main_context_prepare(). There is an
_event_ race here. Furthermore, some GLib libraries such as GDBus will
initialize objects from extra threads (GAsyncInitable interface) and
invoke the result on the original thread when ready (g_source_attach()
will eventually be called). Now you have scenarios closer to standard
race examples.
The other scenario where a race manifests is a multi-threaded
application with a concurrency design similar to the actor model. No
actor executes in two threads simultaneously, but it's not guaranteed
that it'll always wake up in the same thread. It'd perform steps 1-4
just as in the previous example, but before thread control is returned
to the pool, it'd call g_main_context_release(). Now g_source_attach()
would skip g_wakeup_signal() for a different reason:
7. g_source_attach() detects do_wakeup=TRUE, context->owner == NULL so
g_wakeup_signal() is skipped.
8. Same as before.
Certainly there are other concurrency designs where this optimization
would cause a deadlock, but all of them have their origin in the same
place: the optimization assumes the poll() implementation is a blocking
operation and that the thread will never be released to perform other
tasks (possibly involving GLib calls) while the result is not ready.
They share not only the same problem, but also the same solution: do
not make assumptions and just call g_wakeup_signal().
This patch implements this solution by introducing the
G_MAIN_CONTEXT_FLAGS_OWNERLESS_POLLING flag. This flag will force a call
to g_wakeup_signal() and fix the race on foreign event loops. The
reason for preventing this option from being changed after creation is
to avoid other races that would lead to event loss. Construction is the
only proper time to
set this option.
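A minimal usage sketch, assuming the flag is passed at construction via
g_main_context_new_with_flags(); my_event_loop_create_context() is a
hypothetical integration helper:

#include <glib.h>

/* Creating the context with the flag makes g_source_attach() always
 * signal the wakeup, which is what a foreign event loop that polls
 * without owning the context needs. */
static GMainContext *
my_event_loop_create_context (void)
{
  return g_main_context_new_with_flags (G_MAIN_CONTEXT_FLAGS_OWNERLESS_POLLING);
}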
The implementation design means we do not change **any** semantics for
current working code. If you don't set the new flag, the code won't
enter different branches and current behavior won't be affected. The
patch is small and easy to follow too.
When an object with a toggle reference is notifying a change, we just
assume that it still has one because of previous checks.
However, while waiting for the lock, another thread may have removed
the toggle reference, causing the waiting thread to abort (as no
handler is set at that point).
To avoid this, once we've got the toggle references mutex lock, check
again whether the object still has a toggle reference, and if it
doesn't anymore, just ignore the request.
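A minimal sketch of the pattern (the data structures and helper names
here are illustrative, not the real GObject internals):

#include <glib-object.h>

typedef struct {
  GToggleNotify notify;
  gpointer data;
} ToggleRefSketch;

static GMutex toggle_refs_mutex;
static GHashTable *toggle_refs;  /* GObject* -> ToggleRefSketch*, set up elsewhere */

static void
toggle_refs_notify_sketch (GObject *object, gboolean is_last_ref)
{
  ToggleRefSketch copy;
  ToggleRefSketch *entry;

  g_mutex_lock (&toggle_refs_mutex);
  entry = toggle_refs ? g_hash_table_lookup (toggle_refs, object) : NULL;
  if (entry == NULL)
    {
      /* The toggle reference was removed while we waited for the
       * lock: ignore the notification instead of aborting. */
      g_mutex_unlock (&toggle_refs_mutex);
      return;
    }
  copy = *entry;
  g_mutex_unlock (&toggle_refs_mutex);

  /* Call the handler outside the lock. */
  copy.notify (copy.data, object, is_last_ref);
}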
Add a test that triggers this; it doesn't reproduce 100% of the time
because this is of course timing related, but it gets very close.
Fixes: #2394
The previous wording was not clear about what happens if a new weak ref
is taken during disposal (shortly after resurrecting the object with a
new strong ref; otherwise, taking the weak ref is invalid).
See: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2064/diffs#note_1270092
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
Helps: #2390
No need to call memset in the loop; we can just initialize all the
values in one go.
GtkBuilder is now using g_object_setv, so this
may improve application start times a bit.
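A small sketch of the idea (names are hypothetical, not the actual
GObject code): clear the whole GValue array once up front instead of
zeroing each element inside the loop.

#include <string.h>
#include <glib-object.h>

static void
init_values (GValue *values, guint n_values)
{
  guint i;

  /* One memset for the whole array instead of one per iteration. */
  memset (values, 0, n_values * sizeof (GValue));

  for (i = 0; i < n_values; i++)
    g_value_init (&values[i], G_TYPE_INT);
}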
As per the previous change, an object that had weak locations set may
need to lock the weak locations mutex again during qdata cleanup, but
we can avoid this when we know we're removing the last location, by
removing the qdata entry and freeing the data.
In case a new location is needed for the same object, new data will be
added.
However, by doing this the weak locations seen during dispose may have
been invalidated by the time the weak locations lock is taken again, so
check whether this is still the case while removing them.
It can happen that a GWeakRef is added to an object while it's being
disposed (or even finalized). This may happen in a thread that (weak)
references the object while the disposal isn't completed yet, or when
using toggle references and switching to GWeakRef on notification (as
the API suggests).
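A minimal sketch of the toggle-notification pattern mentioned above
(cached_ref is a hypothetical weak location owned by the caller):

#include <glib-object.h>

static GWeakRef cached_ref;  /* assumed initialized elsewhere with g_weak_ref_init() */

/* When ours becomes the last strong reference, keep only a weak one.
 * If the object starts disposing in another thread at this point, the
 * weak location set here must still be cleaned up on finalization. */
static void
toggle_notify (gpointer data, GObject *object, gboolean is_last_ref)
{
  if (is_last_ref)
    g_weak_ref_set (&cached_ref, object);
  else
    g_weak_ref_set (&cached_ref, NULL);
}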
In such a scenario the weak locations are not cleaned up when the
object is finalized, and will point to a freed area.
So, during finalization, when we're sure that the object will be
destroyed, check again whether there are new weak locations and unset
any that are found as part of the qdata destruction.
Do this by adding a new utility function so that we can avoid
duplicating the code that frees the weak locations.
Added various tests simulating this case.
Fixes: #2390
The documentation sort of already said this, but it’s better to make it
explicit.
This avoids the situation where some of the weak notify callbacks for an
object have been called, and then a subsequent one resurrects the
object. Without some way of undoing the weak notifications already sent,
that would leave external state which is coupled to the object’s
lifecycle out of sync.
This arose from discussion on !2064.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
GTK currently checks whether a GtkWidget is finalized while still
having a floating reference, i.e. a widget was disposed without any
parent container owning it.
This warning can be useful to identify and trace ownership transfer
issues in libraries using initially unowned floating object types.
To avoid introducing constraints ex post, we can gate this check behind
both the G_ENABLE_DEBUG compile-time flag for GLib and the
G_ENABLE_DIAGNOSTIC environment variable run-time check.
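A minimal sketch of the pattern the diagnostic is meant to catch
(MY_TYPE_THING stands in for a hypothetical GInitiallyUnowned-derived
type); run with G_ENABLE_DIAGNOSTIC=1 against a GLib built with
G_ENABLE_DEBUG:

#include <glib-object.h>

static void
leak_floating_ref (void)
{
  /* g_object_new() on an initially-unowned type returns a floating
   * reference. */
  GObject *obj = g_object_new (MY_TYPE_THING, NULL);

  /* Nothing ever calls g_object_ref_sink() on it (no parent takes
   * ownership), so the object is finalized while still floating and
   * the new diagnostic warns about it. */
  g_object_unref (obj);
}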
Fixes: #2489
Previously, priority was not randomly generated and was instead derived
from the `GSequenceNode*` pointer value.
As a result, when a `GSequence` was freed and another was created, the
nodes were returned to the memory allocator in such an order that
allocating them again caused various performance problems in the treap.
To my understanding, the problem develops like this:
1) Initially, memory allocator makes some nodes
2) For each node, priority is derived from pointer alone.
Due to the hash function, initially the priorities are reasonably
randomly distributed.
3) `GSequence` moves inserted nodes around to satisfy treap property.
The priority of a node must be >= the priorities of its children.
4) When `GSequence` is freed, it frees nodes in a new order.
It finds root node and then recursively frees left/right children.
Due to (3), hashes of freed nodes become partially ordered.
Note that this doesn't depend on choice of hash function.
5) The memory allocator will typically add freed chunks to a free list.
This means that it will reallocate nodes in the same or inverse order.
6) This results in order of hashes being more and more non-random.
7) This order happens to be increasingly anti-optimal.
That is, `GSequence` needs more `node_rotate` to maintain treap.
This also causes the tree to become more and more unbalanced.
The problem becomes worse with each iteration.
The solution is to use additional noise to maintain reasonable
randomness. This prevents "poisoning" the memory allocator.
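A rough sketch of the approach (illustrative only, not the actual
GSequence code): store per-node random noise at allocation time and mix
it into the priority, so that pointer reuse by the allocator cannot
make priorities correlated.

#include <glib.h>

typedef struct
{
  guint32 noise;  /* assumed new field, filled once at node creation */
  /* ... left/right/parent pointers, data, etc. ... */
} NodeSketch;

static void
node_init_noise (NodeSketch *node)
{
  node->noise = g_random_int ();
}

static guint32
node_priority (const NodeSketch *node)
{
  /* Mix address-derived bits with the stored noise; the exact hash
   * below is illustrative only. */
  guint32 key = (guint32) (gsize) node ^ node->noise;

  key = (key ^ 61) ^ (key >> 16);
  key += key << 3;
  key ^= key >> 4;
  key *= 0x27d4eb2du;
  key ^= key >> 15;

  return key;
}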
On top of that, this patch somehow decreases the average tree height,
which is good because it speeds up various operations. I can't quite
explain why the height decreases with the new code; probably the
properties of the old hash function didn't quite match the needs of the
treap?
My averaged results for tree height with different sequence lengths:
  Items | Before | After | Change  |
--------+--------+-------+---------+
      2 |   2,69 |  2,67 | -00,74% |
      4 |   3,71 |  3,80 | +02,43% |
      8 |   5,30 |  5,34 | +00,75% |
     16 |   7,45 |  7,22 | -03,09% |
     32 |  10,05 |  9,38 | -06,67% |
     64 |  12,97 | 11,72 | -09,64% |
    128 |  16,01 | 14,20 | -11,31% |
    256 |  19,11 | 16,77 | -12,24% |
    512 |  22,03 | 19,39 | -11,98% |
   1024 |  25,29 | 22,03 | -12,89% |
   2048 |  28,43 | 24,82 | -12,70% |
   4096 |  31,11 | 27,52 | -11,54% |
   8192 |  34,31 | 30,30 | -11,69% |
  16384 |  37,40 | 32,81 | -12,27% |
  32768 |  40,40 | 35,84 | -11,29% |
  65536 |  43,00 | 38,24 | -11,07% |
 131072 |  45,50 | 40,83 | -10,26% |
 262144 |  48,40 | 43,00 | -11,16% |
 524288 |  52,40 | 46,80 | -10,69% |
The memory cost of the patch is zero on 64-bit, because the new field
uses the alignment hole between two other fields.
Note: priorities can sometimes collide. This is fine, because the treap
allows equal priorities, but collisions gradually decrease performance.
The hash function that was used previously has just one collision, on
0xbfff7fff, in 32-bit space, but such a pointer will not occur because
`g_slice_alloc()` always aligns to sizeof(void*).
However, in 64-bit space the old hash function had collisions anyway,
because it only uses the lower 32 bits of the pointer.
Closes #2468
Instead of calling xterm when it clearly does not exist, which causes a
silent error, inform the user that the launch failed so they can take
the right action.