We can get a "-Wcast-align", if the target type that we cast to ("ct") has a
larger alignment than GTypeInstance.
That can happen on i686 architecture, if the GObject type has larger
alignment than the parent struct (or GObject). Since on i686, embeding
a "long long" or a "long double" in a struct still does not increase
the alignment beyond 4 bytes, this usually only happens when using the
__attribute__() to increase the alignment (or to have a field that has
the alignment increased).
It can happen on x86_64 when having a "long double" field.
The compiler warning is hard to avoid but not very useful, because it purely
operates on the pointer types at compile time. G_TYPE_CHECK_INSTANCE_CAST()
instead asserts (in non-optimized mode) that the pointer really points
to the expected GTypeInstance (and if that's the case, then the alignment
should be suitable already).
This is similar to commit ed553e8e30 ('gtype: Eliminate -Wcast-align warnings
with G_TYPE_CHECK_INSTANCE_CAST'), but it also fixes the optimized code path.
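For illustration only (the type here is hypothetical, not from the GLib tree): a GObject subtype whose alignment exceeds that of GTypeInstance provokes the diagnostic on a direct cast, and casting through an intermediate `void *` is a common way to silence it:

```c
#include <glib-object.h>

/* Hypothetical instance struct whose alignment exceeds that of
 * GTypeInstance (e.g. 16 bytes via "long double" on x86_64). */
typedef struct {
  GObject parent_instance;
  long double value;
} GTest;

static void
example (GTypeInstance *instance)
{
  /* Direct cast: -Wcast-align=strict warns that the required
   * alignment of the target type increases. */
  GTest *direct = (GTest *) instance;

  /* Cast through void *: no diagnostic; the runtime check in
   * G_TYPE_CHECK_INSTANCE_CAST() is what actually guarantees the
   * pointer is valid. */
  GTest *via_void = (GTest *) (void *) instance;

  (void) direct;
  (void) via_void;
}
```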
With the unpatched G_TYPE_CHECK_INSTANCE_CAST() macro, the unit test would
now show the problem (with gcc-9.3.1-2.fc30.i686 or
gcc-12.2.1-4.fc37.x86_64):
$ export G_DISABLE_CAST_CHECKS=1
$ export CFLAGS='-Wcast-align=strict'
$ meson build
$ ninja -C build
...
In file included from ../gobject/gobject.h:26,
from ../gobject/gbinding.h:31,
from ../glib/glib-object.h:24,
from ../gobject/tests/objects-refcount1.c:2:
../gobject/tests/objects-refcount1.c: In function ‘my_test_dispose’:
../gobject/gtype.h:2523:42: warning: cast increases required alignment of target type [-Wcast-align]
2523 | # define _G_TYPE_CIC(ip, gt, ct) ((ct*) ip)
| ^
../gobject/gtype.h:517:66: note: in expansion of macro ‘_G_TYPE_CIC’
517 | #define G_TYPE_CHECK_INSTANCE_CAST(instance, g_type, c_type) (_G_TYPE_CIC ((instance), (g_type), c_type))
| ^~~~~~~~~~~
../gobject/tests/objects-refcount1.c:9:37: note: in expansion of macro ‘G_TYPE_CHECK_INSTANCE_CAST’
9 | #define MY_TEST(test) (G_TYPE_CHECK_INSTANCE_CAST ((test), G_TYPE_TEST, GTest))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
../gobject/tests/objects-refcount1.c:96:10: note: in expansion of macro ‘MY_TEST’
96 | test = MY_TEST (object);
| ^~~~~~~
Instead of a plain reference count check failure that is really hard to
understand, let's be explicit, and warn that manipulating an object's
notification queue during its finalization is not allowed.
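A hypothetical sketch of the situation the new warning addresses (the type and property names are made up): a property notification emitted from within finalize():

```c
#include <glib-object.h>

/* Minimal made-up type to illustrate the problem case. */
typedef struct { GObject parent_instance; } MyTest;
typedef struct { GObjectClass parent_class; } MyTestClass;

G_DEFINE_TYPE (MyTest, my_test, G_TYPE_OBJECT)

static void
my_test_finalize (GObject *object)
{
  /* Emitting a property notification while the object is being
   * finalized is what the new explicit warning reports. */
  g_object_notify (object, "some-property");

  G_OBJECT_CLASS (my_test_parent_class)->finalize (object);
}

static void
my_test_class_init (MyTestClass *klass)
{
  G_OBJECT_CLASS (klass)->finalize = my_test_finalize;
}

static void
my_test_init (MyTest *self)
{
}
```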
When an object is revitalized and a notify callback increases the reference
count of the object, we call the toggle notifier twice, while this should
only happen if the actual reference count value is also 1 (after having
been decremented from 2).
If an object gets revitalized during the dispose vfunc, we need to call the
toggle ref notifiers only if we had 2 references and the object has
toggle references enabled.
This state may change if a notify handler changes it, so perform the check
only after we've called the notifiers, so that we still call the toggle
handlers if toggle notifications get enabled in the meantime.
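For context, this is the public toggle-reference API these reference-counting changes revolve around; a minimal sketch (illustrative names, not the internal gobject.c code being patched):

```c
#include <glib-object.h>

/* Called with is_last_ref = TRUE when the toggle reference becomes the
 * last remaining reference (count drops to 1), and with FALSE when
 * another strong reference appears again (count rises above 1). */
static void
toggle_notify (gpointer data, GObject *object, gboolean is_last_ref)
{
  g_message ("toggle notify: is_last_ref=%d", is_last_ref);
}

static void
watch_object (GObject *object)
{
  /* Registers the toggle notifier; the caller must already hold a
   * strong reference on the object. */
  g_object_add_toggle_ref (object, toggle_notify, NULL);
}
```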
We were reading whether an object has toggle references even when this was
not really relevant for the current object state: we only need to notify
when going from 2 to 1 references. So first ensure that this is the case,
and only then check whether the object has toggle references enabled.
This is a micro-optimization, given the way the flags are defined, but still
an operation we can avoid in most cases.
Even though the check is likely to be relevant if the object is finalized,
it may still give some indication if called while an instance has just lost
the last reference.
So use `g_return_if_fail` for consistency with the rest of the code.
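A small illustrative example of the precondition style referred to here (the function is hypothetical):

```c
#include <glib-object.h>

/* g_return_if_fail() logs a critical warning and returns early when the
 * precondition does not hold, instead of asserting or failing silently. */
static void
do_something (GObject *object)
{
  g_return_if_fail (G_IS_OBJECT (object));

  /* ... normal code path ... */
}
```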
They return floating references, so that should be reflected in the
introspection annotations.
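A constructor documented that way would look roughly like this (hypothetical function, assuming the `(transfer floating)` annotation is the appropriate one here):

```c
/**
 * my_thing_new:
 *
 * Creates a new #MyThing with a floating reference.
 *
 * Returns: (transfer floating): the newly created #MyThing
 */
MyThing *
my_thing_new (void);
```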
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
In case the g_atomic_int_compare_and_exchange() check fails, we ended up doing
another atomic get to figure out what the old reference count was.
We can avoid this by using the full version of the function, which
returns the value before the exchange as an out parameter.
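A minimal sketch of the pattern (illustrative names, not the actual gobject.c code), assuming g_atomic_int_compare_and_exchange_full() as available since GLib 2.74:

```c
#include <glib.h>

/* Try to drop the count from 1 to 0; on failure the previously observed
 * value is already available in old_ref, so no second atomic read is
 * needed. */
static gboolean
try_final_unref (gint *ref_count)
{
  gint old_ref;

  if (g_atomic_int_compare_and_exchange_full (ref_count, 1, 0, &old_ref))
    return TRUE;

  /* old_ref holds the value observed before the failed exchange. */
  g_assert_cmpint (old_ref, >, 1);
  return FALSE;
}
```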
This reverts commit 756b424cce.
The freedesktop SDK, which is used by gnome-build-meta, only has Meson
0.63. Bumping GLib’s Meson dependency to 0.64 means that, at the moment,
GLib is not buildable in gnome-build-meta and hence can’t be tested in
nightly pipelines against other projects, etc.
That’s bad for testing GLib.
It’s arguably bad that we’re restricted to using an older version of
Meson than shipped by Debian Testing, but that’s a separate discussion
to be had.
Revert the Meson 0.64 dependency until the freedesktop SDK ships Meson ≥
0.64. This also means reverting the simplifications to use of
`gnome.mkenum_simple()`.
See https://gitlab.gnome.org/GNOME/glib/-/merge_requests/3077#note_1601064
Meson now uses find_program() to get glib-mkenum from GLib itself instead of
from the system. That was already fixed at least in Meson >= 0.60, which is
our current minimum requirement.
Otherwise this test will succeed at build-time, but will fail when run
as an as-installed test via ginsttest-runner.
Signed-off-by: Simon McVittie <smcv@collabora.com>
This can be used to mark entire types as deprecated,
and trigger a warning when they are instantiated
and `G_ENABLE_DIAGNOSTIC=1` is set in the environment.
There are currently no convenient macros for defining
types with the new flag, but you can do:
```c
_G_DEFINE_TYPE_EXTENDED_BEGIN (GtkAppChooserWidget,
                               gtk_app_chooser_widget,
                               GTK_TYPE_WIDGET,
                               G_TYPE_FLAG_DEPRECATED)
...
_G_DEFINE_TYPE_EXTENDED_END ()
```
Includes a unit test by Philip Withnall.
This reverts commit 476e33c3f3.
We’ve decided to remove `G_OS_DARWIN` in favour of recommending people
use `__APPLE__` instead. As per the discussion on #2802 and linked
issues,
* Adding a new define shifts the complexity from “which of these
platform-provided defines do I use” to “which platform-provided
defines does G_OS_DARWIN use”
* There should ideally be no cases where a user of GLib has to use
their own platform-specific code, since GLib should be providing
appropriate abstractions
* Providing a single `G_OS_DARWIN` to cover all Apple products (macOS
and iOS) hides the complexity of what the user is actually testing:
are they testing for the Mach kernel, the Carbon and/or Cocoa user
space toolkits, macOS vs iOS vs tvOS, etc.?
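For reference, the recommended pattern looks like this (sketch only): test the platform-provided define directly, and refine with Apple's own headers when the exact platform matters:

```c
#ifdef __APPLE__
/* Any Apple platform (macOS, iOS, tvOS, ...); include TargetConditionals.h
 * and check its TARGET_OS_* macros if the exact platform matters. */
#include <TargetConditionals.h>
#endif
```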
Helps: #2802
The SPDX-License-Identifier said LGPL-2.1-or-later, but the license
grant was a permissive license, which we now identify as
LicenseRef-old-glib-tests.
Signed-off-by: Simon McVittie <smcv@collabora.com>
Some of GLib's unit tests are under an apparently GLib-specific
permissive license, vaguely similar to the BSD/MIT family but with the
GPL's lack-of-warranty wording. This is not on SPDX's list of
well-known licenses, so we need to use a custom license name prefixed
with LicenseRef if we want to represent this in SPDX/REUSE syntax.
Most of the newer tests seem to be licensed under LGPL-2.1-or-later
instead.
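In SPDX/REUSE terms that means a file header roughly like the following (illustrative, not a specific file from the tree), with the full license text stored under a matching name in the LICENSES/ directory:

```c
/* SPDX-License-Identifier: LicenseRef-old-glib-tests */
```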
Signed-off-by: Simon McVittie <smcv@collabora.com>
This allows them to be referenced in other symbols' value computations.
In addition, this fixes the automatically assigned value of a public
symbol that is preceded by a private one:
typedef enum {
  /*< private >*/
  ENUM_VALUE_PRIVATE,

  /*< public >*/
  ENUM_VALUE_PUBLIC,  /* <--- value is 1, not 0 */
} SomeExampleEnum;
Enum symbols can be defined with a value computed from previously
defined enum symbols. The current evaluator does not support this and
requires a literal integer expression.
This commit introduces a C symbol namespace that is filled during
code generation and provided as a local namespace for the evaluation
of new symbols, effectively allowing definitions such as:
typedef enum {
  a = 4,
  b = a + 2,
} myenum;
to be successfully processed.
Given that it can be computed using error-prone string comparisons, it
is better to provide a variable everywhere, so that we don't run the
risk of comparisons that always evaluate to false.
We have tests that are failing in some environments, but it's
difficult to handle them because:
- for some environments we just allow all the tests to fail: DANGEROUS
- when we don't allow failures we have flaky tests: a CI pain
So, to avoid this and ensure that:
- New failing tests are tracked on all platforms
- GitLab integration of test reports keeps working
- coverage is reported also for failing tests
Add support for a `can_fail` keyword on tests, which marks the test as
part of the `failing` test suite.
We don't add the suite directly when defining the tests, as this is
definitely simpler and allows conditions to be defined more clearly
(see next commits).
Now, add a default test setup that does not run the failing and flaky tests
by default (so as not to bother distributors with testing well-known issues)
and still run all the tests in CI:
- Non-flaky tests cannot fail on any platform
- Failing and flaky tests can fail
In both cases we save the test reports so that GitLab integration is
preserved.
In principle we could script this so that each max-version.c is compiled
26 times, once per possible MAX_VERSION, but I haven't implemented
that here: just pinning to the oldest possible version is sufficient to
reproduce #2796.
These aren't included in the installed-tests, since they don't really
do anything at runtime (the important thing is that they compile
without warnings).
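The pinning in question looks roughly like this (a sketch, assuming GLIB_VERSION_2_26 is the oldest version the version macros accept):

```c
/* Pin both ends of the allowed GLib API range before including glib.h,
 * so that using newer-than-allowed or deprecated API produces warnings
 * at compile time. */
#define GLIB_VERSION_MIN_REQUIRED GLIB_VERSION_2_26
#define GLIB_VERSION_MAX_ALLOWED  GLIB_VERSION_2_26

#include <glib.h>
```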
Reproduces: #2796
Signed-off-by: Simon McVittie <smcv@collabora.com>
The function adjusting the private struct size to the private struct offset
should be `g_type_class_adjust_private_offset`, instead of the incorrectly
named `g_type_class_add_instance_private` previously used in the comment.
Fixes #2791