The g_has_typeof macro is wrongly in the public g_ namespace; internal
symbols are usually in the glib_ namespace. This will also make it
possible to define glib_typeof differently on non-GNUC compilers (e.g.
using C++11 decltype).
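As a rough illustration only (not necessarily the exact definition that
ends up in the headers), such a private macro could be defined per
compiler along these lines:

    /* Sketch: use __typeof__ on GNUC, decltype on C++11 compilers. */
    #if defined (__cplusplus) && __cplusplus >= 201103L
    #include <type_traits>
    #define glib_typeof(expr) \
      typename std::remove_reference<decltype (expr)>::type
    #elif defined (__GNUC__)
    #define glib_typeof(expr) __typeof__ (expr)
    #endif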
This introduces no functional changes, but makes the refcount handling a
little easier to follow by no longer splitting a ref/unref pair across
three callbacks. Now, the ref/unref pairs are all within function-local
scopes.
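Purely as an illustrative sketch of the pattern (the names here are
hypothetical, not the actual callbacks involved):

    /* Take and release the reference within one function-local scope,
     * instead of pairing a ref in one callback with an unref in another. */
    static void
    process_locally (GObject *object)
    {
      g_object_ref (object);
      /* ... do the work ... */
      g_object_unref (object);
    }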
Coverity CID: #1430783
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
By default, when using g_subprocess_launcher_take_fd() to pass an
FD to a child, the GSubprocessLauncher object also takes ownership
of the FD in the parent, and closes it during finalize(). This is
a reasonable default in the majority of cases, but sometimes it isn't
a good idea.
An example is when creating a GSubprocessLauncher in JavaScript:
here, the destruction process is managed by the garbage collector,
which means that those sockets will remain open for some time after
all the references to the object have been dropped. This means that
it may not be possible to detect when the child has closed that same
FD, because for that to work, both FD instances (the one in the
parent and the one in the child) must be closed. This can be a
problem in, for example, a process that launches a child that
communicates with Wayland over a specific socket (like when using the
new MetaWaylandClient API).
Of course, it isn't a valid solution to manually call close() in the
parent process just after the call to spawn(), because the FD number
could be reused between the moment it is manually closed and the
moment the object is destroyed and closes it again. If that happens,
an unrelated FD would be closed.
One solution could be to call run_dispose() from JavaScript on the
GSubprocessLauncher object, to force freeing the resources.
Unfortunately, the current code frees them in the finalize() method,
not in dispose() (this is fixed in the now-merged !1670), and even
then it isn't a very elegant solution.
This proposal adds a new method, g_subprocess_launcher_close(),
which closes the FDs passed to the child. To avoid problems, no
further spawns are allowed after closing an FD with this method.
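A rough usage sketch (error handling omitted; parent_fd and argv are
assumed to exist, and the target FD number 3 is only an example):

    GSubprocessLauncher *launcher =
        g_subprocess_launcher_new (G_SUBPROCESS_FLAGS_NONE);
    GError *error = NULL;
    GSubprocess *child;

    /* Give the child a copy of parent_fd as FD 3 in its FD table. */
    g_subprocess_launcher_take_fd (launcher, parent_fd, 3);

    child = g_subprocess_launcher_spawnv (launcher, argv, &error);

    /* Close the parent-side FDs now instead of waiting for finalize();
     * no further spawns are allowed after this call. */
    g_subprocess_launcher_close (launcher);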
Fixes: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/1677
Originally, GSocketClient returned whatever error occurred last. It
turns out this doesn't work well in practice. Consider the following
case:
DNS returns an IPv4 and IPv6 address. First we'll connect() to the
IPv4 address, and say that succeeds, but TLS is enabled and the TLS
handshake fails. Then we try the IPv6 address and receive ENETUNREACH
because IPv6 isn't supported. We wind up returning NETWORK_UNREACHABLE
even though the address can be pinged and a TLS error would be more
appropriate. So instead, we now try to return the error corresponding
to the latest attempted GSocketClientEvent in the connection process.
TLS errors take precedence over proxy errors, which take precedence
over connect() errors, which take precedence over DNS errors.
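A simplified sketch of that precedence rule (the enum and helper are
illustrative, not the actual implementation):

    /* Keep the error from whichever attempt progressed furthest through
     * the event sequence: DNS < connect() < proxy < TLS. */
    typedef enum { STAGE_RESOLVE, STAGE_CONNECT, STAGE_PROXY, STAGE_TLS } Stage;

    static void
    consider_error (Stage *best_stage, GError **best_error,
                    Stage stage, const GError *error)
    {
      if (*best_error == NULL || stage >= *best_stage)
        {
          g_clear_error (best_error);
          *best_error = g_error_copy (error);
          *best_stage = stage;
        }
    }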
In writing this commit, I made several mistakes that were caught by
proxy-test.c, which tests using GSocketClient to make a proxy
connection. So although adding a new test to ensure we get the
best-possible error would be awkward, at least we have some test
coverage for this code, which helped avoid introducing bugs.
Fixes #2211
We should never return unknown errors to the application. This would be
a glib bug.
I don't think it's currently possible to hit these cases, so asserts
should be OK. For this to happen, either (a) a GSocketAddressEnumerator
would have to return NULL on its first enumeration, without returning an
error, or (b) there would have to be a bug in our GSocketClient logic.
Either way, if such a bug were to exist, it would be better to surface
it rather than hide it.
These changes are actually going to be effectively undone in a
subsequent commit, as I'm refactoring the error handling, but the commit
history is a bit nicer with two separate commits, so let's go with two.
GSocketAddressEnumerator encapsulates the details of how DNS happens, so
we don't have to think about it. But we may have taken encapsulation a
bit too far, here. Usually, we resolve a domain name to a list of IPv4
and IPv6 addresses. Then we go through each address in the list and try
to connect to it. Name resolution happens exactly once, at the start.
It doesn't happen each time we enumerate the enumerator. In theory, it
*could*, because we've designed these APIs to be agnostic of underlying
implementation details like DNS and network protocols. But in practice,
we know that's not really what's happening. It's weird to say that we
are RESOLVING what we know to be the same name multiple times. Behind
the scenes, we're not doing that.
This also fixes #1994, where enumeration can end with a RESOLVING event,
even though this is supposed to be the first event rather than the last.
I thought this would be hard to fix, even requiring new public API in
GSocketAddressEnumerator to peek ahead to see if the next enumeration is
going to return NULL. Then I decided we should just fake it: always emit
both RESOLVING and RESOLVED at the same time right after each
enumeration. Finally, I realized we can emit them at the correct time if
we simply assume resolving only happens the first time. This seems like
the most elegant of the possible solutions.
Now, this is a behavior change, and arguably an API break, but it should
align better with reasonable expectations of how GSocketClientEvent
ought to work. I don't expect it to break anything besides tests that
check which order GSocketClientEvent events are emitted in. (Currently,
libsoup has such tests, which will need to be updated.) Ideally we would
have GLib-level tests as well, but in a concession to pragmatism, it's a
lot easier to keep network tests in libsoup.
This isn't an API guarantee, but it's a potentially-surprising
behavior difference between the sync and async functions that is good
to know about, especially because our sync and async functions are
normally identical.
The Linux kernel does not know whether the socket will be used for
connect() or listen(), so if you bind() to a local address it must
reserve a random port (if port == 0) at bind() time, making it very
easy to exhaust the ~32k ephemeral port range. Setting
IP_BIND_ADDRESS_NO_PORT tells the kernel to choose a random port at
connect() time instead, when the full 4-tuple is known.
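Roughly, the option is set like this (a sketch; the option is
Linux-specific, hence the guard):

    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifdef IP_BIND_ADDRESS_NO_PORT
      {
        int one = 1;
        /* Defer port allocation from bind() time to connect() time. */
        setsockopt (fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT,
                    &one, sizeof (one));
      }
    #endif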
`g_local_file_fstatat()` needs to fall back to returning an error if
`fstatat()` isn’t defined, which is the case on older versions of macOS
(as well as Windows, which was already handled). Callers shouldn’t call
`g_local_file_fstatat()` in these cases. (That’s already the case.)
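A minimal sketch of the fallback shape, assuming a HAVE_FSTATAT define
and a much-simplified signature (the real function takes more
arguments):

    #ifndef HAVE_FSTATAT
    static gboolean
    fstatat_fallback (GError **error)
    {
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_NOT_SUPPORTED,
                           "fstatat() is not supported on this platform");
      return FALSE;
    }
    #endif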
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
Fixes: #2203
Expose a function that prepares an attribute query string to be passed
to g_file_query_info() to get a list of attributes normally copied with
the file. This function is used by the implementation of
g_file_copy_attributes, and it's useful if one needs to split
g_file_copy_attributes into two stages, for example, when nautilus does
a recursive move of a directory. When files are moved from the source
directory, its modification time changes. To preserve the mtime on the
destination directory, it has to be queried before moving the files
and set afterwards, hence these two stages.
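A hedged sketch of the two-stage approach for the mtime case, using
the standard GIO attribute names (src_dir and dst_dir are assumed,
error handling omitted):

    /* Stage 1: query the attributes to preserve before moving the files. */
    GFileInfo *info =
        g_file_query_info (src_dir,
                           G_FILE_ATTRIBUTE_TIME_MODIFIED ","
                           G_FILE_ATTRIBUTE_TIME_MODIFIED_USEC,
                           G_FILE_QUERY_INFO_NONE, NULL, NULL);

    /* ... move the files, which bumps the directory's mtime ... */

    /* Stage 2: apply the saved attributes to the destination directory. */
    g_file_set_attributes_from_info (dst_dir, info,
                                     G_FILE_QUERY_INFO_NONE, NULL, NULL);
    g_object_unref (info);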
Signed-off-by: Maxim Mikityanskiy <maxtram95@gmail.com>
The GSubprocessLauncher class lacks a dispose() method, and frees
all of its resources in the finalize() method.
This is a problem with JavaScript because the sockets passed to a
child process using g_subprocess_launcher_take_fd() aren't closed
in the parent space until the object is fully freed. This means
that if the child closes a socket, it won't be detected until the
GSubprocessLauncher object has been freed by the garbage
collector.
Just closing the socket externally is not a valid solution,
because the finalize() method will close it again, and since
another file/pipe/socket could have been opened in the meantime
and use the same FD number, the finalize() method would close
an incorrect FD.
An example is launching a child process that uses its own
socket for Wayland: the parent creates two sockets with
socketpair(), passes one to the Wayland API (wl_client_create()),
and passes the other to the child process using
g_subprocess_launcher_take_fd(). But now there are two instances
of that second socket: one in the parent, and another one in the
child process. That means that, if the child closes its socket (or
dies), the Wayland server will not detect that until the
GSubprocessLauncher object is fully destroyed. As a result, a
GSubprocessLauncher created in JavaScript can last for several
seconds after the child dies, and every window or graphical element
will remain on the screen until the garbage collector destroys the
GSubprocessLauncher object.
This patch fixes this by moving the resource-freeing code into a
dispose() method, which can be called from JavaScript. This ensures
that any socket passed to the child with
g_subprocess_launcher_take_fd() can be closed even from JavaScript,
just by calling run_dispose().
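In C terms, the equivalent call is simply:

    /* Release the parent-side FDs now, without waiting for the garbage
     * collector to drop the last reference and trigger finalize(). */
    g_object_run_dispose (G_OBJECT (launcher));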
Fixes: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/1670
This combines a massive code re-folding with a functionality
expansion that allows us to track multiple verbs per handler or per
application.
Also fixes a few issues and removes a function that made no sense.
Like G_SOURCE_REMOVE and G_SOURCE_CONTINUE, these make it clearer what
it means to return TRUE or FALSE.
In particular, in GDBus methods that fail, the failure case still needs
to return TRUE (unlike the typical GError pattern), leading to comments
like this:
    g_dbus_method_invocation_return_error (invocation, ...);
    return TRUE; /* handled */
which can now be replaced by:
    g_dbus_method_invocation_return_error (invocation, ...);
    return G_DBUS_METHOD_INVOCATION_HANDLED;
G_DBUS_METHOD_INVOCATION_UNHANDLED is added for symmetry, but is very
rarely (perhaps never?) useful in practice.
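For illustration, a handler on a gdbus-codegen skeleton might look
like this (the Foo names and check_permissions() are hypothetical):

    static gboolean
    handle_frobnicate (FooSkeleton           *object,
                       GDBusMethodInvocation *invocation,
                       gpointer               user_data)
    {
      if (!check_permissions (user_data))
        {
          g_dbus_method_invocation_return_error (invocation, G_DBUS_ERROR,
                                                 G_DBUS_ERROR_ACCESS_DENIED,
                                                 "Not allowed");
          return G_DBUS_METHOD_INVOCATION_HANDLED;
        }

      foo_complete_frobnicate (object, invocation);
      return G_DBUS_METHOD_INVOCATION_HANDLED;
    }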
Signed-off-by: Simon McVittie <smcv@collabora.com>
The previous parsing code could read off the end of a URI if it had
an incorrect %-escaped character in it.
Fix that, and more closely implement parsing for the syntax defined
in RFC 6874, the amendment to RFC 3986 which specifies zone ID
syntax.
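For reference, RFC 6874 requires the '%' separating the address from
the zone ID to itself be %-encoded, so a URI with a link-local IPv6
address and zone ID looks like this (sketch):

    GError *error = NULL;
    GSocketConnectable *addr =
        g_network_address_parse_uri ("http://[fe80::1%25eth0]/", 80, &error);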
This requires reworking some network-address tests, which were
previously treating zone IDs incorrectly.
oss-fuzz#23816
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
Emptying the trash over `gio trash` is a bit slow in comparison to a
plain `rm -r`. On my system, it took about 3 min to empty the trash
with a folder containing 600 000 files, which is not ideal given that
an `rm -r` call took just a few seconds. I found that `g_file_delete`
is implemented differently for locations provided by the trash
backend. The trash backend prevents modifications of trashed content,
thus the delete operation is allowed only for the top-level files and
folders. So it is not necessary to recursively delete all files, as a
permission-denied error is returned anyway. Let's call `g_file_delete`
only for top-level items, which reduces the time needed to empty the
trash from minutes to seconds...
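Roughly, the idea is (a sketch, with error handling omitted; not the
exact code used):

    GFile *trash = g_file_new_for_uri ("trash:///");
    GFileEnumerator *enumerator =
        g_file_enumerate_children (trash, G_FILE_ATTRIBUTE_STANDARD_NAME,
                                   G_FILE_QUERY_INFO_NOFOLLOW_SYMLINKS,
                                   NULL, NULL);
    GFileInfo *info;

    /* Delete only the top-level trashed items; their contents go away
     * with them, so no recursion is needed. */
    while ((info = g_file_enumerator_next_file (enumerator, NULL, NULL)))
      {
        GFile *child = g_file_enumerator_get_child (enumerator, info);
        g_file_delete (child, NULL, NULL);
        g_object_unref (child);
        g_object_unref (info);
      }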
See: https://gitlab.gnome.org/GNOME/nautilus/-/issues/1589
Some filesystems don't have meaningful access times under at least some
circumstances (see #2189, #2205). In this situation the traditional stat()
and related kernel interfaces have to put something meaningless in the
st_atime field, and have no way to signal that it is meaningless.
However, statx() does have a way to signal that the atime is meaningless:
if the filesystem doesn't provide a useful access time, it will unset
the STATX_ATIME bit (as well as filling in the same meaningless value
for the stx_atime field that stat() would have used, for compatibility).
We don't actually *need* the atime, so never include it in the required
mask. This was already done for one code path in commit 6fc143bb
"gio: Allow no atime from statx" to fix#2189, but other callers were
left unchanged in that commit, and receive the same change here.
It is not actually guaranteed that *any* of the flags in the
returned stx_mask will be set (the only guarantee is that items in
STATX_BASIC_STATS have at least a harmless compatibility value, even if
their corresponding flag is cleared), so it might be better to follow
this up by removing the concept of the required mask entirely. However,
as of Linux 5.8 it looks as though STATX_ATIME is the only flag in
STATX_BASIC_STATS that might be cleared in practice, so this simpler
change fixes the immediate regression.
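The shape of the call, as a sketch (Linux-specific, glibc ≥ 2.28;
path and use_access_time() are illustrative):

    #define _GNU_SOURCE
    #include <fcntl.h>     /* AT_FDCWD, AT_SYMLINK_NOFOLLOW */
    #include <sys/stat.h>  /* statx(), struct statx, STATX_* */

    struct statx stx;
    /* Request the basic stats but never *require* the access time. */
    unsigned int mask = STATX_BASIC_STATS & ~STATX_ATIME;

    if (statx (AT_FDCWD, path, AT_SYMLINK_NOFOLLOW, mask, &stx) == 0)
      {
        /* Only trust stx_atime if the filesystem actually provided it. */
        if (stx.stx_mask & STATX_ATIME)
          use_access_time (&stx.stx_atime);
      }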
Resolves: https://gitlab.gnome.org/GNOME/glib/-/issues/2205
Signed-off-by: Simon McVittie <smcv@collabora.com>
The display name is not allowed to be `NULL` or unset if requested by
the file attribute matcher. Derive it from the basename. This doesn’t
handle the
situation of a failed UTF-16 to UTF-8 conversion very well, but will at
least return something.
Note that the `g_filename_display_basename()` function can’t be used as
`GWinHttpFile` provides its URI in UTF-16 rather than in the file system
encoding.
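A hedged sketch of the idea (wbasename, the UTF-16 basename, and the
fallback string are illustrative):

    /* Convert the UTF-16 basename to UTF-8 for the display name; if the
     * conversion fails, still return something non-NULL. */
    char *display_name = g_utf16_to_utf8 (wbasename, -1, NULL, NULL, NULL);
    if (display_name == NULL)
      display_name = g_strdup ("(invalid filename)");
    g_file_info_set_display_name (info, display_name);
    g_free (display_name);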
This fixes a crash when using GIMP on Windows. Thanks to lillolollo for
in-depth debugging assistance.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Fixes: #2194
For interoperability with libdbus, we want to use compatible timeouts.
In particular, this fixes a spurious failure of the `gdbus-server-auth`
test caused by libdbus and gdbus choosing to expire the key (cookie) at
different times, as diagnosed by Thiago Macieira. Previously, the libdbus
client would decline to use keys older than 7 minutes, but the GDBus
server would not generate a new key until the old key was 10 minutes old.
For completeness, also adjust the other arbitrary timeouts in the
DBUS_COOKIE_SHA1 mechanism to be the same as in libdbus. To make it
easier to align with libdbus, create internal macros with the same names
and values used in dbus-keyring.c.
* maximum time a key can be in the future due to clock skew between
systems sharing a home directory
- spec says "a reasonable time in the future"
- was 1 day
- now 5 minutes
- MAX_TIME_TRAVEL_SECONDS
* time to generate a new key if the newest one is older than this
- spec says "If no recent keys remain, the server may generate a new
key", but that isn't practical, because in reality we need a grace
period during which an old key will not be used for new authentication
attempts but old authentication attempts can continue (in practice both
libdbus and GDBus implemented this logic)
- was 10 minutes
- now 5 minutes
- NEW_KEY_TIMEOUT_SECONDS
* time to discard old keys
- spec says "the timeout can be fairly short"
- was 15 minutes
- now 7 minutes
- EXPIRE_KEYS_TIMEOUT_SECONDS
* time allowed for a client using an old key to authenticate, before
that key gets deleted
- was at least 5 minutes
- now at least 2 minutes
- at least (EXPIRE_KEYS_TIMEOUT_SECONDS - NEW_KEY_TIMEOUT_SECONDS)
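The resulting internal macros would look roughly like this (values as
listed above):

    #define MAX_TIME_TRAVEL_SECONDS     (60 * 5)  /* 5 minutes */
    #define NEW_KEY_TIMEOUT_SECONDS     (60 * 5)  /* 5 minutes */
    #define EXPIRE_KEYS_TIMEOUT_SECONDS (60 * 7)  /* 7 minutes */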
Based on a merge request by Philip Withnall.
Fixes: #2164
Thanks: Philip Withnall
Thanks: Thiago Macieira
Signed-off-by: Simon McVittie <smcv@collabora.com>
This doesn't trigger the cancellation assertion issue when run locally
(the task didn't return yet, so the error is simply overwritten), but
perhaps it does in CI. Anyhow, it's good to have a cancellation
test.
After a splice operation is finished, it attempts to 1) close input/output
streams, as per the given flags, and 2) return the operation result (maybe
an error, too).
However, if the operation gets cancelled early and the streams are
closed indirectly, the splice operation will still try to close both
streams and return on the task once both are closed. The catch is
that the streams can be closed from under its feet, so the completion
callback may find both streams closed right after the first close
operation returns and report the error, and then the second close
operation may trigger a second error, which would be returned as
well.
What happens then is subject to further race conditions: if the task
hasn't returned yet, the returned error is simply replaced (but the
old one is not freed...); if it has already returned, the result is:
GLib-GIO-FATAL-CRITICAL: g_task_return_error: assertion '!task->ever_returned' failed
Fix this by flagging the close_async() callbacks and checking that
both close operations have returned, instead of checking whether both
streams are already closed (which something else may have done).
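A simplified sketch of that idea (the struct and helper are
illustrative, not the real gio code):

    typedef struct {
      GTask    *task;
      gboolean  istream_close_returned;
      gboolean  ostream_close_returned;
      GError   *error;
    } SpliceClosure;

    /* Only return on the task once *both* close_async() callbacks have
     * fired, regardless of what state the streams appear to be in. */
    static void
    maybe_return (SpliceClosure *c)
    {
      if (!c->istream_close_returned || !c->ostream_close_returned)
        return;

      if (c->error != NULL)
        g_task_return_error (c->task, g_steal_pointer (&c->error));
      else
        g_task_return_boolean (c->task, TRUE);
    }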
This error triggers a semi-frequent CI failure in tracker; see the summary at
https://gitlab.gnome.org/GNOME/tracker/-/issues/240
statx does not provide stx_atime when querying a file on a read-only
mounted file system, so calls to statx should not expect it to be in
the mask. Otherwise we would fail with ERANGE when querying any file
on a read-only file system.
Fixes #2189.
The `make_pollfd()` call can’t fail because it only does so if
`cancellable == NULL`, and we’ve already checked that. Assert that’s the
case, to shut Coverity up and to catch behavioural changes in future.
Coverity CID: #1159433
Signed-off-by: Philip Withnall <withnall@endlessm.com>
`g_strsplit()` never returns `NULL`, although it can return an empty
strv (i.e. with its first element being `NULL`).
Drop a redundant `NULL` check.
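For reference, even an empty input yields a non-NULL (empty) vector:

    gchar **parts = g_strsplit ("", ",", -1);
    g_assert_nonnull (parts);   /* never NULL */
    g_assert_null (parts[0]);   /* empty strv: first element is NULL */
    g_strfreev (parts);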
Coverity CID: #1430976
Signed-off-by: Philip Withnall <withnall@endlessm.com>