Setting the installed_tests option forces various test files to be
installed; this causes Meson to build tools that might not have been built
otherwise, but that are still required for testing.
Also, disabling installed tests leads to slightly different code paths
when it comes to using test files.
So, disable it for Debian, so that we can ensure that at test time we
have set up all the dependencies between test programs and the resources
they use (which can be libraries, external programs or modules).
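A minimal sketch of the kind of code paths involved; the boolean
installed_tests option and the installed_tests_execdir variable are
assumptions used for illustration, not the actual GLib build definitions:

```meson
# Hypothetical: test data is only installed, and a helper tool is only
# built, when the installed_tests option is enabled.
installed_tests_enabled = get_option('installed_tests')

if installed_tests_enabled
  install_data('test-data.txt', install_dir : installed_tests_execdir)
  executable('test-helper', 'test-helper.c',
    install : true,
    install_dir : installed_tests_execdir)
endif
```

With the option disabled, nothing in this branch gets built, which is why
the test-time dependencies have to be declared explicitly (see the
extra_programs changes below).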
Various GLib tests (such as the spawn ones) depend on local binaries
being built; this may not happen (especially when not using installed
tests), so ensure such dependencies via the newly added extra_programs
key.
We need to make sure that such binaries are built and available at test time,
or we may fail some tests that require them (directly or through a desktop
file).
Given this, and because the generated desktop files are now available both
at build and install time, don't skip some tests as we used to, but
actually ensure that they run.
We have some test programs that other tests depend on; for example,
appinfo-test is a tool used by the desktop-app-info tests.
So a test can now have an 'extra_programs' key listing the names of the
extra programs it needs.
This could have been handled manually via 'depends', but this approach
avoids repeating code and ensures that everything is defined when the
extra programs' targets are checked.
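A minimal sketch of how such a key could be consumed; the dictionary
layout and file names below are assumptions for illustration, not the
actual GLib test definitions:

```meson
gio_tests = {
  'desktop-app-info' : {
    # Helper binaries this test launches at runtime.
    'extra_programs' : ['appinfo-test'],
  },
}

foreach test_name, props : gio_tests
  # Build the helper programs and make the test depend on them, so they
  # are guaranteed to exist by the time the test runs.
  extra_deps = []
  foreach program : props.get('extra_programs', [])
    extra_deps += executable(program, program + '.c')
  endforeach

  exe = executable(test_name, test_name + '.c')
  test(test_name, exe, depends : extra_deps)
endforeach
```

Declaring the helpers through `depends` means the build system brings them
up to date before running the test, whether or not installed tests are
enabled.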
`g_app_info_launch_default_for_uri_async()` has already returned by this
point, so waiting a long time is not really going to help.
Wait for 3× as long as the successful case took, which should be long
enough to catch true negatives while allowing for a bit of variance.
On my system, this means waiting for about 14ms, rather than the 100ms
this previously slept for. This speeds the test up by about 5%.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
We were generating .desktop files with different content when installed
tests were enabled, making it impossible to test some cases because there
was no built file until installation.
To avoid this, always generate both versions of the desktop files, while
installing only the one containing the install path prefix, if needed.
Given that it could otherwise only be computed through error-prone string
comparisons, it is better to provide a variable everywhere, so that we
don't risk comparisons that always evaluate to false.
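A minimal sketch of the approach; the template name, the
installed_tests_enabled boolean and the installed_tests_execdir directory
variable are assumptions for illustration, not the actual build files:

```meson
# Version usable straight from the build tree, always generated.
build_conf = configuration_data()
build_conf.set('installed_tests_dir', meson.current_build_dir())
configure_file(input : 'appinfo-test.desktop.in',
  output : 'appinfo-test.desktop',
  configuration : build_conf)

# Version pointing at the installation prefix; also always generated, but
# only installed when installed tests are enabled.
install_conf = configuration_data()
install_conf.set('installed_tests_dir', installed_tests_execdir)
configure_file(input : 'appinfo-test.desktop.in',
  output : 'appinfo-test-installed.desktop',
  configuration : install_conf,
  install : installed_tests_enabled,
  install_dir : installed_tests_execdir)
```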
We have tests that are failing in some environments, but it's
difficult to handle them because:
- for some environments we just allow all the tests to fail: DANGEROUS
- when we don't allow failures we have flaky tests: a CI pain
So, to avoid this and ensure that:
- new failing tests are tracked on all platforms
- GitLab integration for test reports keeps working
- coverage is also reported for failing tests
add support for a `can_fail` keyword on tests, which marks the test as
part of the `failing` test suite.
Using the keyword, rather than adding the suite directly when defining the
tests, is definitely simpler and allows conditions to be expressed more
clearly (see next commits).
Now, add a default test setup that does not run the failing and flaky
tests by default (so as not to bother distributors with well-known
issues), while still running all the tests in CI:
- non-flaky tests are not allowed to fail on any platform
- failing and flaky tests are allowed to fail
In both cases we save the test reports so that GitLab integration is
preserved.
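A minimal sketch of the mechanism; the dictionary layout, suite names and
test names below are assumptions for illustration, not the actual GLib
build definitions:

```meson
# Hypothetical per-test description; 'can_fail' routes the test into the
# 'failing' suite instead of letting it break CI or be skipped entirely.
tests = {
  'some-known-failing-test' : { 'can_fail' : true },
  'some-reliable-test' : {},
}

foreach test_name, props : tests
  suites = ['gio']
  if props.get('can_fail', false)
    suites += 'failing'
  endif
  test(test_name, executable(test_name, test_name + '.c'), suite : suites)
endforeach

# Default setup: skip the failing and flaky suites, so distributors are not
# bothered by well-known issues; CI can still run them explicitly, e.g.
#   meson test --suite failing --suite flaky
add_test_setup('default',
  is_default : true,
  exclude_suites : ['failing', 'flaky'])
```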
We have gnulib warnings on Windows under clang:
  ../glib/gnulib/vasnprintf.c:2429:21: warning: variable 'flags' set but not used [-Wunused-but-set-variable]
      int flags = dp->flags;
          ^
  ../glib/gnulib/vasnprintf.c:4853:19: warning: unannotated fall-through between switch labels [-Wimplicit-fallthrough]
      case TYPE_LONGINT:
      ^
See: https://gitlab.gnome.org/3v1n0/glib/-/jobs/2361750