os.sysconf is not available on all platforms (such as Windows), but it
is used to retrieve the number of online processors. If it is missing,
assume one processor (building on such a platform will most likely
not work, though).
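A minimal sketch of the fallback (the helper name is made up, not the
actual osc code):

  import os

  def _online_cpu_count():
      try:
          # os.sysconf may be missing (e.g. on Windows) or may not know
          # the SC_NPROCESSORS_ONLN name
          return os.sysconf("SC_NPROCESSORS_ONLN")
      except (AttributeError, ValueError):
          return 1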
Fixes: #948 ("Windows compatibility") (at least it improves the
Windows support a bit)
A workflow token can be created via "osc token --create --operation
workflow --scm-token <SCM_TOKEN>".
Triggering a workflow token via osc is probably an unlikely use case -
that's why it is not yet implemented (it would also make the UI a bit
awkward because one has to specify a concrete HTTP header).
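For the record, a manual trigger would look roughly like the sketch
below; the /trigger/workflow route, the Token authorization scheme and
the SCM event header are assumptions about the OBS side here, not
something this commit adds:

  import urllib.request

  req = urllib.request.Request(
      "https://api.opensuse.org/trigger/workflow",
      data=b"{}",  # the SCM webhook payload would go here
      method="POST")
  req.add_header("Authorization", "Token <WORKFLOW_TOKEN>")
  # the "concrete http header" mentioned above, e.g. for GitHub:
  req.add_header("X-GitHub-Event", "pull_request")
  urllib.request.urlopen(req)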
Fixes: #943 ("implement osc token --operation=workflow")
The use of makeurl makes the code more readable/maintainable (IMHO)
and it also does proper percent encoding of the query string (not
that the osc codebase cares much about it, though :/).
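A small usage sketch (apiurl, path components and query parameters are
made up):

  from osc.core import makeurl

  url = makeurl('https://api.opensuse.org',
                ['source', 'some:project', 'pkg'],
                query={'cmd': 'diff', 'rev': '42'})
  # a query dict is run through urlencode, so the values end up
  # properly percent-encoded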
The newer rexml Ruby gem used on the OBS server side does stricter
XPath parsing. This change fixes an incorrect XPath expression that was
accepted by the older rexml but is rejected by the newer one.
Signed-off-by: Oleg Girko <ol@infoserver.lv>
Offer a force ("f") choice if, for instance, "osc meta prj foobar -e"
fails due to an HTTPError in metafile.edit. If the force choice is
selected, a new url is constructed by invoking the metafile._URLFactory
instance with a "force='1'" argument (this adds a "force=1" to the
original url's query string (*)) and the corresponding file is PUTed
to the new url. If this PUT fails again and now the "y" choice is
selected, the file is PUTed to the original url (*).
(*): Strictly speaking, from metafile.edit's POV, the concrete url
depends on the passed-in metafile._URLFactory instance, though.
Note: the metafile._URLFactory class and its is_force_supported method
are a gross hack. That's why this class is marked as private (that is,
we can remove it at any point in time again without breaking the
API/3rd party applications). An alternative to the metafile._URLFactory
approach would be manual URL parsing and manual URL construction
(adding "force=1" to the query string)... but this is also pretty
awkward (if done properly).
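For comparison, the manual alternative would look roughly like this
(hypothetical helper, not part of this change):

  from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

  def add_force(url):
      scheme, netloc, path, query, frag = urlsplit(url)
      query_l = parse_qsl(query)
      query_l.append(('force', '1'))
      return urlunsplit((scheme, netloc, path, urlencode(query_l), frag))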
Fixes: #916 ("for osc meta edit change y/n to y/n/f")
Fixes: #942 ("Offer -f when prjmeta change leads to repo_dependency")
The order is now:
- ~/.osc_cookiejar, if it exists
- $XDG_STATE_HOME/osc/cookiejar, if XDG_STATE_HOME is neither null nor empty
- ~/.local/state/osc/cookiejar
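Roughly, the lookup boils down to this (a sketch, not the literal conf
code):

  import os

  def _cookiejar_path():
      legacy = os.path.expanduser('~/.osc_cookiejar')
      if os.path.exists(legacy):
          return legacy
      state_home = os.environ.get('XDG_STATE_HOME')
      if not state_home:
          # not set or empty -> fall back to the spec default
          state_home = os.path.expanduser('~/.local/state')
      return os.path.join(state_home, 'osc', 'cookiejar')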
Do not try to run source services when building in a non-package wc.
This is the behavior we had prior to commit c39c3b8cae
("Cleanup the source services execution code in do_build").
There is no "sane" way to execute the source services in case of a
non package wc build because we cannot export the OBS_SERVICE_PACKAGE
env variable with a meaningful value.
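A sketch of the guard (names approximate):

  import os
  from osc.core import is_package_dir

  # only run the source services inside a package wc; outside one there
  # is no core.Package and no meaningful value for OBS_SERVICE_PACKAGE
  run_services = is_package_dir(os.curdir)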
Fixes: #936 ("'osc build --local-package ...' fails with 'not an osc
package working copy'")
Be a bit more compliant with the XDG base directory spec: "If
$XDG_CONFIG_HOME is either not set or empty, a default equal to
$HOME/.config should be used." [1].
Now, if the $XDG_CONFIG_HOME env variable is empty, we use the
default.
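That is, roughly (a sketch, not the literal conf code):

  import os

  config_home = os.environ.get('XDG_CONFIG_HOME')
  if not config_home:
      # "not set or empty" -> use the spec's default
      config_home = os.path.expanduser('~/.config')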
[1] https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
When building a package from a directory that is not a checked-out
OBS working copy, the error message:
"Error: "<directory>" is not an osc package working copy."
is generated.
This occurs when build.main() attempts to run source services, which
is probably not a good idea, as these are part of the core.Package
infrastructure, which cannot be initialized for such packages.
It is probably best to disable the source services in this case.
See issue #936.
Suggested-by: Marcus Huewe <suse-tux@gmx.de>
Signed-off-by: Egbert Eich <eich@suse.com>
The old code does not support the --binary option in combination
with the --verbose option. Specifying --binary and --verbose at
the same time results in a crash (because the binary listing
contains no <title>...</title> element).
In order to fix this, do not try to access a <title>...</title>
element when --binary and --verbose are both specified. Instead,
in this case, include information about the repo, arch, version,
and release of the corresponding binary element.
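Roughly, the new formatting looks like this (the attribute names on the
<binary .../> element are assumptions here):

  def format_binary_hit(node):
      # node is the ElementTree element of a <binary .../> search hit
      return '%s %s %s-%s' % (node.get('repository'), node.get('arch'),
                              node.get('version'), node.get('release'))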
Fixes: #933 ("osc se -v -B crash")
The old code uses a variable .rXYZ suffix (where XYZ is the revision
of the package wc during the merge operation). Now, if Package.mergefile
is invoked during an update, XYZ represents the "old" revision. That
is, if a merge conflict happens, then a subsequent "osc resolved <file>"
will not unlink the <file>.rXYZ file (because
Package.clear_from_conflictlist only takes the current rev into account).
In order to fix this, use a fixed ".new" suffix. This way,
Package.clear_from_conflictlist can properly unlink the corresponding
*.new file. This naming scheme for the "upfilename" is in line with
"osc pull" and "osc repairlink".
Note: if a working copy was updated with an "old" osc version (without
this commit) and a "new" osc version (with this commit) is used to run
"osc resolved <file>", then the <file>.rXYZ file is _NOT_ removed (it
is not worth the effort to add compat code for this).
A password can be deleted via "osc config -d <apiurl> pass". Actually,
if we really want to support password deletion, we should introduce
a --delete-password option because the "pass" config option can be
considered an implementation detail, which we should not expose
to our users.
The password store can be changed (without entering the password
again) via "osc config <apiurl> --select-password-store". This
command deletes the password from the current password store and
stores it in the selected password store.
Previously, the --select-password-store option had no meaningful
semantics. In order to use it, one always had to provide a password
and explicitly pass "pass" as the config option (the same could be
achieved by using --change-password). Hence, in a strict sense,
this change breaks the UI.
Without the slash splitting, "osc browse prj/pkg" interprets the
argument as a project, which is wrong. Hence, perform the slash
splitting (as most commands do).
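The splitting follows the usual pattern (a sketch of what
core.slash_split does):

  def slash_split(args):
      # turn ["prj/pkg"] into ["prj", "pkg"]; plain arguments are kept
      result = []
      for arg in args:
          result.extend(arg.strip('/').split('/'))
      return result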
Always send the sha256sums of all tracked files in case of a
frozen package wc. For instance, this is needed if the package is
a plain link (no branch) because in this case the backend might
request a sha256sum for a tracked but unmodified file (this can
happen because the backend cannot expand the link).
The new behavior is in line with a pulled/linkrepair package wc.
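Computing such a sum is straightforward (a generic sketch, not the
actual wc code):

  import hashlib

  def sha256_of(path):
      h = hashlib.sha256()
      with open(path, 'rb') as f:
          for chunk in iter(lambda: f.read(8192), b''):
              h.update(chunk)
      return h.hexdigest()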
Fixes: #924 ("Transmitting file data There is no sha256 sum for
file")
When trying to commit a non-existent package via Project.commit, it
is treated as an external package (because a non-existent package
has no "state" inside the project). That is, Project.commitExtPackage
is called, which fails with a FileNotFoundError in case of a
non-existent package (and the traceback is printed to the user).
In order to fix this, treat a non-existent package as a broken package.
That is, simply print an info message and do not error out with a
traceback (note: the commit is _not_ aborted).
Fixes: #920 ("osc commit should fail gracefully in case of
nonexistent filename")
Sccache is an alternative build caching system to ccache/icecream. It
supports C, C++ and Rust. It can optionally have distributed or remote
caches via redis, s3 object stores, memcached, azure storage or
google cloud storage.
This can help to significantly improve the performance of Rust rebuilds.
For example, Kanidm changes from 400s to 122s on a rebuild, and rust-lang
rebuilds improve from 7200s to 4770s. With some changes, especially to
the Rust packages, it will be possible to speed up builds across version
changes as well.
See also: obs-build PR https://github.com/openSUSE/obs-build/pull/680
Do not use a preinstallimage if the local build is executed as a non-root user
(the preinstallimage contains device nodes which usually cannot be created
by a non-root user - this is not a problem in the non-preinstallimage
codepath (see [1])).
[1] https://github.com/openSUSE/osc/pull/908#issuecomment-806903856
Use the _signkey route for retrieving the signkey. Use the "old" way
as a fallback when talking with an old API. We should probably also
use this route in the fetch module.
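A rough sketch of the retrieval with fallback (the exact route path and
the fallback helper are assumptions here):

  from urllib.error import HTTPError
  from osc.core import http_GET, makeurl

  def get_signkey(apiurl, project):
      try:
          url = makeurl(apiurl, ['source', project, '_signkey'])
          return http_GET(url).read()
      except HTTPError:
          # old API without the _signkey route -> use the "old" way
          return get_signkey_old(apiurl, project)  # hypothetical fallback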
The old code only supports a file whose size is less than or equal
to INT_MAX (due to a reasonable(!) limit in M2Crypto). The actual
issue is in core.http_request which mmap(...)s the file, wraps it
into a memoryview/buffer and then passes the memoryview/buffer to
urlopen. Eventually, the whole memoryview/buffer is read into memory
(see m2_PyObject_GetBufferInt). If the file is too large (> INT_MAX),
m2_PyObject_GetBufferInt raises a ValueError (which is perfectly
fine!).
Reading a whole file into memory is completely insane. In order to
avoid this, we now simply pass a file-like object to urlopen (more
precisely, the file-like object is associated with the Request
instance that is passed to urlopen). The advantage is that the
file-like object is processed in chunks of 8192 bytes (see
http.client.HTTPConnection) (that is, only 8192 bytes are read into
memory (instead of the whole file)).
There are two pitfalls when passing a file-like object to urlopen:
* By default, a chunked Transfer-Encoding is applied. It seems that
some servers (like api.o.o) do not like this (PUTing a file with
a chunked Transfer-Encoding to api.o.o results in status 400). In
order to avoid a chunked Transfer-Encoding, we explicitly set a
Content-Length header (we also do this in the non-file case (just
for the sake of completeness)).
* If the request fails with status 401, it is retried with an
appropriate Authorization header. When retrying the request, the
file's offset has to be repositioned to the beginning of the file
(otherwise, a 0-length body is sent which most likely does not
match the Content-Length header).
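A condensed sketch of both points (made-up names, not the actual
core.http_request code):

  import os
  import urllib.request
  from urllib.error import HTTPError

  def put_file(url, path):
      with open(path, 'rb') as f:
          req = urllib.request.Request(url, data=f, method='PUT')
          # an explicit Content-Length avoids the chunked Transfer-Encoding
          req.add_header('Content-Length', str(os.path.getsize(path)))
          try:
              return urllib.request.urlopen(req)
          except HTTPError as e:
              if e.code != 401:
                  raise
              # a retry with an Authorization header must rewind the file,
              # otherwise a 0-length body is sent that does not match the
              # Content-Length header
              f.seek(0)
              req.add_header('Authorization', '<credentials>')  # placeholder
              return urllib.request.urlopen(req)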
Note: core.http_request's "data" and "file" parameters are now mutually
exclusive because specifying both makes no sense (only one of them
is considered) and it simplifies the implementation a bit.
Fixes: #202 ("osc user authentification seems to be broken with last
commit")
Fixes: #304 ("osc ci - cannot handle more than 2 GB file uploads")