GSocket: fix GIOCondition on timed-out socket operation

The docs for g_socket_set_timeout() claimed that if an async operation
timed out, the GIOCondition passed to the source callback would be
G_IO_IN or G_IO_OUT (thus prompting the caller to call
g_socket_receive/send and get a G_IO_ERROR_TIMED_OUT), but in fact it
ended up being 0, and gio/tests/socket.c was erroneously testing for
that instead of the correct value. Fix this.
Author: Dan Winship
Date:   2011-08-27 09:59:02 -04:00
Parent: 60f23ecbbc
Commit: cef679d004

2 changed files with 4 additions and 3 deletions

gio/tests/socket.c

@@ -178,7 +178,7 @@ test_ip_async_timed_out (GSocket *client,
   if (data->family == G_SOCKET_FAMILY_IPV4)
     {
-      g_assert_cmpint (cond, ==, 0);
+      g_assert_cmpint (cond, ==, G_IO_IN);
       len = g_socket_receive (client, buf, sizeof (buf), NULL, &error);
       g_assert_cmpint (len, ==, -1);
       g_assert_error (error, G_IO_ERROR, G_IO_ERROR_TIMED_OUT);
@@ -554,7 +554,7 @@ main (int argc,
   g_test_add_func ("/socket/ipv4_sync", test_ipv4_sync);
   g_test_add_func ("/socket/ipv4_async", test_ipv4_async);
   g_test_add_func ("/socket/ipv6_sync", test_ipv6_sync);
-  g_test_add_func ("/socket/ipv6_sync", test_ipv6_async);
+  g_test_add_func ("/socket/ipv6_async", test_ipv6_async);
 #ifdef G_OS_UNIX
   g_test_add_func ("/socket/unix-from-fd", test_unix_from_fd);
   g_test_add_func ("/socket/unix-connection", test_unix_connection);