This typically indicates a bug in the program: a GTask has been
created, but a flaw in the control flow has caused it never to return a
value.
There is one situation where it might be legitimate to finalise a GTask
without returning: if an error happens in your *_async() start function
after you’ve created a GTask, but before the async operation returns to
the main loop; and you report the error using g_task_report_*error()
rather than reporting it using the newly constructed GTask.
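For illustration, here is a minimal sketch of that first case; the
function and parameter names are hypothetical, only g_task_new() and
g_task_report_new_error() are real GLib API:

    #include <gio/gio.h>

    /* Hypothetical *_async() start function: the error is reported via
     * g_task_report_new_error(), which constructs its own GTask
     * internally, so the @task created below is finalised without ever
     * returning a value. */
    static void
    my_frobnicate_async (GObject             *source_object,
                         const char          *uri,
                         GCancellable        *cancellable,
                         GAsyncReadyCallback  callback,
                         gpointer             user_data)
    {
      GTask *task = g_task_new (source_object, cancellable, callback,
                                user_data);

      if (uri == NULL || *uri == '\0')
        {
          g_task_report_new_error (source_object, callback, user_data,
                                   my_frobnicate_async,
                                   G_IO_ERROR, G_IO_ERROR_INVALID_ARGUMENT,
                                   "Invalid URI");
          g_object_unref (task);
          return;
        }

      /* … hand @task over to the real asynchronous operation here … */
      g_object_unref (task);
    }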
Another situation is where you are just using GTask as a convenient way
to move some work to another thread, without the complexity of creating
and running your own thread pool. GDBus does this with
g_dbus_interface_method_dispatch_helper(), for example.
In most other cases, it’s a bug. Emit a debug message about it, but not
a full-blown warning, as that would create noise in the legitimate
cases.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
The gio dbus codegen test has 10 test cases in it.
Each test case is given 100 seconds to complete.
That is far longer than they should need.
Furthermore, the entire test is only given 60s
to complete.
This commit makes the internal timeout more consistent
with the external timeout, by giving each of the 10
test cases 6 seconds instead of 100s.
`HAVE_COCOA` should be used only in the places where we’re actually
depending on the Cocoa toolkit. It should not be used as a general way
of detecting building on a Darwin-based OS such as macOS.
Conversely, there are a few places in the code where we do want to
specifically detect the Cocoa toolkit (and others where we specifically
want to detect Carbon), so keep `HAVE_COCOA` and `HAVE_CARBON` around.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
Helps: #2802
This reverts commit 476e33c3f3.
We’ve decided to remove `G_OS_DARWIN` in favour of recommending people
use `__APPLE__` instead. As per the discussion on #2802 and linked
issues,
* Adding a new define shifts the complexity from “which of these
platform-provided defines do I use” to “which platform-provided
defines does G_OS_DARWIN use”
* There should ideally be no cases where a user of GLib has to use
their own platform-specific code, since GLib should be providing
appropriate abstractions
* Providing a single `G_OS_DARWIN` to cover all Apple products (macOS
and iOS) hides the complexity of what the user is actually testing:
are they testing for the Mach kernel, the Carbon and/or Cocoa user
space toolkits, macOS vs iOS vs tvOS, etc
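As a rough illustration of the distinction drawn in these two commits
(the surrounding code is hypothetical):

    /* OS-family check: use the compiler-provided define rather than a
     * GLib-provided one. */
    #ifdef __APPLE__
      /* Darwin-specific code path (covers macOS, iOS, tvOS, …) */
    #endif

    /* Toolkit check: this is what the config-time HAVE_COCOA and
     * HAVE_CARBON defines remain for inside GLib. */
    #ifdef HAVE_COCOA
      /* Code that genuinely depends on the Cocoa toolkit */
    #endif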
Helps: #2802
Some of GLib's unit tests are under an apparently GLib-specific
permissive license, vaguely similar to the BSD/MIT family but with the
GPL's lack-of-warranty wording. This is not on SPDX's list of
well-known licenses, so we need to use a custom license name prefixed
with LicenseRef if we want to represent this in SPDX/REUSE syntax.
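For example, an affected file could then carry a header along these
lines (the LicenseRef name below is only illustrative, not necessarily
the identifier that was chosen):

    /*
     * SPDX-License-Identifier: LicenseRef-old-glib-tests
     */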
Most of the newer tests seem to be licensed under LGPL-2.1-or-later
instead.
Signed-off-by: Simon McVittie <smcv@collabora.com>
In g_proxy_resolver_lookup_async() we have some error validation that
detects invalid URIs and directly returns an error, bypassing the
interface's lookup_async() function. This is great, but when the
interface's lookup_finish() function gets called later, it may assert
that the source tag of the GTask matches the interface's lookup_async()
function, which will not be the case.
As suggested by Philip, we need to check for this situation in
g_proxy_resolver_lookup_finish() and avoid calling into the interface
here if we did the same in g_proxy_resolver_lookup_async(). This can be
done by checking the source tag.
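A simplified sketch of the kind of check described above (not the exact
upstream code; the wrapper name is made up, but
g_async_result_is_tagged() and g_task_propagate_pointer() are real GLib
API):

    #include <gio/gio.h>

    static gchar **
    proxy_resolver_lookup_finish_sketch (GProxyResolver  *resolver,
                                         GAsyncResult    *result,
                                         GError         **error)
    {
      GProxyResolverInterface *iface;

      /* If the result was produced by the early-error path in
       * g_proxy_resolver_lookup_async(), its source tag is that function
       * rather than the interface's lookup_async(), so propagate the
       * error directly instead of calling into the interface. */
      if (g_async_result_is_tagged (result, g_proxy_resolver_lookup_async))
        return g_task_propagate_pointer (G_TASK (result), error);

      iface = G_PROXY_RESOLVER_GET_IFACE (resolver);
      return iface->lookup_finish (resolver, result, error);
    }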
I added a few new tests to check the invalid URI "asdf" used in the
issue report. The final case, using async GProxyResolver directly,
checks for this bug.
Fixes: #2799
Similar to g_source_set_static_name, this avoids
strdup overhead for debug-only information in
possibly hot code paths.
We also add a macro wrapper for g_task_set_name that
uses __builtin_constant_p to decide whether to use
g_task_set_name or g_task_set_static_name.
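A rough sketch of how such a wrapper can look (simplified; the guard and
exact expansion may differ from the macro that was actually added):

    #include <gio/gio.h>

    /* Only meaningful on compilers that support __builtin_constant_p
     * (e.g. GCC and Clang). The parenthesised call in the else branch
     * refers to the real function rather than the macro. */
    #if defined(__GNUC__)
    #define g_task_set_name(task, name) \
      G_STMT_START { \
        if (__builtin_constant_p (name)) \
          g_task_set_static_name ((task), (name)); \
        else \
          (g_task_set_name) ((task), (name)); \
      } G_STMT_END
    #endif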
We already set names on most sources; this one was just forgotten.
This lets us set a static name, and prevents g_task_attach_source
from setting a non-static one.
We need to make sure that such binaries are built and available at test
time, or we may fail some tests that require them (directly or through a
desktop file).
Because of this, and because the generated desktop files are now
available at both build and install time, don't skip the tests we used
to skip, but actually enforce that they run.
We have some test programs that other tests depend on; for example,
appinfo-test is a tool used by the desktop-app-info tests.
So a test can now have an 'extra_programs' key listing the names of the
extra programs it needs.
This could have been handled manually via 'depends', but this approach
avoids repeating code and ensures that everything is defined when the
extra program targets are checked.
`g_app_info_launch_default_for_uri_async()` has already returned by this
point, so waiting a long time is not really going to help.
Wait for 3× as long as the successful case took, which should be long
enough to catch true negatives, while allowing for a bit of variance.
On my system, this means waiting for about 14ms, rather than the 100ms
which the test previously slept for. This speeds the test up by about 5%.
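Schematically (the variable names are hypothetical, the GLib calls are
real):

    /* Measure how long the successful launch took … */
    gint64 start = g_get_monotonic_time ();
    /* … run the successful case here, iterating the main context until
     * its callback has fired … */
    gint64 successful_duration = g_get_monotonic_time () - start;

    /* The failing case has already returned, so waiting 3× as long as
     * the success took is enough to catch a spurious callback. */
    g_usleep (3 * successful_duration);
    while (g_main_context_iteration (NULL, FALSE));
    g_assert_false (failure_callback_was_invoked);  /* hypothetical flag */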
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
We were generating .desktop files with different content when installed
tests were enabled, thus making it impossible to test some cases because
there was no built file until installation.
To avoid this, always generate both versions of the desktop files, but
only install the one containing the install path prefix, if needed.
Given that computing it would require error-prone string comparisons, it
is better to provide a variable everywhere, so that we don't run the
risk of comparing values that are always false.
We have tests that are failing in some environments, but it's
difficult to handle them because:
- for some environments we just allow all the tests to fail: DANGEROUS
- when we don't allow failures, we have flaky tests: a CI pain
So, to avoid this and ensure that:
- new failing tests are tracked on all platforms
- the GitLab integration for test reports keeps working
- coverage is also reported for failing tests
Add support for a `can_fail` keyword on tests, which marks the test as
part of the `failing` test suite.
We do not add the suite directly when defining the tests, as using the
keyword is simpler and allows the conditions to be defined more clearly
(see the next commits).
Now, add a default test setup that does not run the failing and flaky
tests by default (so as not to bother distributors with well-known
issues), while still running all the tests in CI:
- non-flaky tests are not allowed to fail on any platform
- failing and flaky tests are allowed to fail
In both cases we save the test reports so that the GitLab integration is
preserved.