With the previous changes, all accesses to the WeakRefStack go through
g_datalist_id_update_atomic() and are guarded by the GData's bit lock.
At this point, the OPTIONAL_BIT_LOCK_WEAK_REFS lock is unnecessary and
can be dropped.
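
As a rough illustration of that pattern (a toy model only, none of the
names below exist in GObject; a static GMutex stands in for the GData
bit lock and a GPtrArray for the WeakRefStack):

  #include <glib.h>

  static GMutex     demo_lock;           /* "the GData bit lock" */
  static GPtrArray *demo_stack = NULL;   /* "the WeakRefStack" */

  typedef void (*DemoUpdateFunc) (GPtrArray **stack, gpointer user_data);

  /* Every access to the stack funnels through here while the lock is
   * held, analogous to g_datalist_id_update_atomic(). */
  static void
  demo_update_atomic (DemoUpdateFunc func, gpointer user_data)
  {
    g_mutex_lock (&demo_lock);
    func (&demo_stack, user_data);
    g_mutex_unlock (&demo_lock);
  }
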
A minor benefit is that g_object_weak_{ref,unref}() now needs to take
one lock fewer.
Also note that this rework fixes a potential race for weak refs. There
are two calls

  g_datalist_id_set_data (&object->qdata, quark_weak_notifies, NULL);

that don't take the OPTIONAL_BIT_LOCK_WEAK_REFS lock. One of them
happens right before finalize(). At that point, no other thread can
hold a reference to the object, so there is nothing to race against and
we are fine. The other call, however, is in g_object_real_dispose(). At
that point, the object could theoretically have been resurrected and a
pointer passed to another thread. A concurrent call to
g_object_weak_ref()/g_object_weak_unref() would then race with the
clearing. Fixing that would have required taking the
OPTIONAL_BIT_LOCK_WEAK_REFS lock around those
g_datalist_id_set_data(,,NULL) calls.
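
As a simplified sketch of how that interleaving could be provoked
(using g_object_run_dispose() instead of an actual resurrection during
the last unref, so that dispose runs while another thread legitimately
holds a pointer to the object; callback and thread names are made up,
and this is not a deterministic reproducer):

  #include <glib-object.h>

  static void
  on_object_gone (gpointer data, GObject *where_the_object_was)
  {
  }

  static gpointer
  add_weak_ref_thread (gpointer user_data)
  {
    GObject *object = user_data;

    /* Races with the clearing of quark_weak_notifies inside
     * g_object_real_dispose() running in the main thread below. */
    g_object_weak_ref (object, on_object_gone, NULL);
    return NULL;
  }

  int
  main (void)
  {
    GObject *object = g_object_new (G_TYPE_OBJECT, NULL);
    GThread *thread = g_thread_new ("weak-ref", add_weak_ref_thread, object);

    /* Invokes g_object_real_dispose() (and with it the clearing of the
     * weak-notify list) while our reference keeps the object alive. */
    g_object_run_dispose (object);

    g_thread_join (thread);
    g_object_unref (object);
    return 0;
  }
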
Instead, this is now also fixed, because every update to the
WeakRefStack happens while holding the GData lock. So when
g_datalist_id_set_data(,,NULL) is called, the WeakRefStack is removed
from the GData (and later freed by weak_refs_notify()) and can no
longer be concurrently updated by g_object_weak_{ref,unref}().
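
Continuing the toy model from above (names still made up), the clearing
path then looks like this: detaching happens under the same lock as
every append, so a concurrent registration either still finds the old
stack before it is detached or starts a fresh one afterwards, but never
touches a stack that is about to be freed:

  /* Appends one entry; only ever runs via demo_update_atomic(), i.e.
   * with the lock held. */
  static void
  demo_append (GPtrArray **stack, gpointer notify)
  {
    if (*stack == NULL)
      *stack = g_ptr_array_new ();
    g_ptr_array_add (*stack, notify);
  }

  /* Models g_datalist_id_set_data(,,NULL) for quark_weak_notifies: the
   * stack is detached while the lock is held; invoking the notifies and
   * freeing (weak_refs_notify() in the real code) happen afterwards. */
  static void
  demo_detach (GPtrArray **stack, gpointer user_data)
  {
    GPtrArray **out = user_data;

    *out = *stack;   /* detach under the lock... */
    *stack = NULL;
  }

  static void
  demo_clear (void)
  {
    GPtrArray *old = NULL;

    demo_update_atomic (demo_detach, &old);
    if (old)
      g_ptr_array_unref (old);   /* ..."later freed", outside the lock */
  }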