Previously, MockModuleBuilder checked the module state to decide whether
it should run a final createrepo, but since eafa93037f, finalize() is
called before the module state is changed; add an explicit boolean to
GenericBuilder.finalize() so the builder does not depend on that ordering.
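A minimal sketch of the changed interface, assuming the flag is named
`succeeded` (the name and docstring are illustrative, not the exact code):

```python
class GenericBuilder(object):
    def finalize(self, succeeded=True):
        """Finish the module build.

        `succeeded` is passed in explicitly by the caller, so builders such
        as MockModuleBuilder can decide whether to run a final createrepo
        without inspecting the module state, whose value now depends on
        when finalize() is called.
        """
        pass
```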
This is needed for offline local builds to build a component that is
stored in a local git repository.
This PR also adds the OfflineLocalBuildConfiguration configuration class,
which sets the RESOLVER for offline local builds.
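A minimal sketch of what the new configuration class could look like; the
parent class stub and the "local" resolver key are assumptions made for
illustration:

```python
class LocalBuildConfiguration(object):
    # Stand-in for the existing local-build configuration class.
    RESOLVER = "db"

class OfflineLocalBuildConfiguration(LocalBuildConfiguration):
    # Point offline local builds at the resolver that works purely from
    # the local MBS database ("local" is an assumed registry key).
    RESOLVER = "local"
```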
This commit introduces the following changes:
- The `koji_tag` of module builds imported from local repositories is now
  in the `repofile:///etc/yum.repos.d/some.repo` format, so the repository
  from which the module was imported is stored in the local MBS DB (see
  the sketch below).
- The `koji_tag` of the fake base module is set to the empty `repofile://`,
  and `MockModuleBuilder` uses the `conf.base_module_repofiles` list as the
  source of the repositories defining the platform. We can't simply use a
  single repository, because there might be fedora.repo, fedora-update.repo
  and so on.
- The list of default .repo files for the platform is passed using the
  `-r` switch of the `build_module_locally` `mbs-manager` command.
- The LocalResolver (a subclass of DBResolver) is added; it is used to
  resolve the build dependencies when building modules offline locally.
- `MockModuleBuilder` enables the buildrequired modules, and the
  repositories they come from, in the mock config.
With this commit, it is possible to build the testmodule locally
without any external infrastructure.
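As an illustration of the `repofile://` convention described above, a
minimal sketch of how such a `koji_tag` could be turned back into a list
of .repo files; the helper name is hypothetical:

```python
def repo_files_from_koji_tag(koji_tag, conf):
    """Return the .repo files represented by a repofile:// koji_tag."""
    prefix = "repofile://"
    if not koji_tag.startswith(prefix):
        return []
    path = koji_tag[len(prefix):]
    if not path:
        # The fake base module uses the bare "repofile://" koji_tag, so
        # fall back to the configured platform repositories
        # (e.g. fedora.repo, fedora-update.repo, ...).
        return list(conf.base_module_repofiles)
    return [path]
```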
This is the first of many PRs for offline local builds. This PR:
- Adds the --offline flag to the build_module_locally mbs-manager command
  to enable offline local builds.
- If this flag is used, the new `import_builds_from_local_dnf_repos` method
  is called; it uses the DNF API to get all the available installable
  modulemd files and imports each module into the local MBS SQLite database.
- It also adds a fake "platform:stream" module based on /etc/os-release,
  so the buildrequirements of the imported modules are satisfied
  (sketched below).
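A minimal sketch of how the fake platform stream could be derived from
/etc/os-release; the function name and the "fedora" -> "f" mapping are
illustrative, not the exact implementation:

```python
def platform_stream_from_os_release(path="/etc/os-release"):
    """Derive a fake platform stream such as "f29" from os-release."""
    fields = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    distro = fields.get("ID", "")
    version = fields.get("VERSION_ID", "")
    # Illustrative mapping: "fedora" 29 -> "f29"; other distros keep their ID.
    prefix = "f" if distro == "fedora" else distro
    return "%s%s" % (prefix, version.split(".")[0])
```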
The idea here is that in upcoming commits, I will create a LocalResolver
similar to DBResolver, with some extra rules to resolve local module
builds. This new LocalResolver will still be based on the
models.ModuleBuild methods, and therefore we need the modules imported
into the database.
This fixes problems from #1184 that occur when a new platform stream is
added and the input module depends on another module that is not built
against this new platform, while at the same time the input module wants
to be built against this new platform stream. See the unit test for more info.
The problem is that the `MMDResolver.solve()` method sets `alternative`
to an empty list and stores data in it only if the alternative is found
to be valid. If the alternative is not valid, it stays set to an empty
list, but the code that uses `alternative` later expects it to contain
only valid alternatives.
This commit fixes that by setting `alternative` only when it has been
found to be a valid alternative.
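Not the actual solver code, but a minimal self-contained sketch of the
pattern being fixed; `is_valid_alternative` and `build_alternative` are
stand-ins for the real validation and result-building steps:

```python
def is_valid_alternative(candidate):
    # Stand-in for the real validity check performed by the solver.
    return candidate % 2 == 0

def build_alternative(candidate):
    # Stand-in for the data the solver stores for a valid alternative.
    return [candidate]

candidates = [1, 2, 3, 4]

# Before: `alternative` is initialized up front and stored even when the
# candidate never passes validation, so callers later see empty entries
# they do not expect.
alternatives = []
for candidate in candidates:
    alternative = []
    if is_valid_alternative(candidate):
        alternative = build_alternative(candidate)
    alternatives.append(alternative)  # may append []

# After: only valid alternatives are stored at all.
alternatives = [
    build_alternative(c) for c in candidates if is_valid_alternative(c)
]
```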
Before this commit, the compatible base modules for Module Stream Expansion
were found without any limitation, just based on the stream version.
It was therefore possible for `platform:lp29.0.0` to be found as a
compatible module for `platform:f29.1.0`, even though those platform
streams are not compatible at all.
With this commit, a module can be treated as compatible only if it has
the same virtual stream as the input module. The idea behind this
is that both `platform:f29.0.0` and `platform:f29.1.0` should include
`virtual_streams: [f29]` in their XMD section, which tells MBS
that they are actually compatible. The `lp29` stream will not
have the same virtual stream (most likely it won't have any virtual
stream at all).
The `virtual_streams` list is already used for this use case in
`MMDResolver`, but it was not used to limit the inputs to `MMDResolver`,
which is what this commit does.
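A minimal sketch of the compatibility rule described above; the plain
dicts and the numeric stream_version encoding are illustrative, not the
real resolver inputs:

```python
def is_compatible_base_module(candidate, requested):
    """A base module is compatible only if it shares a virtual stream with
    the requested one, on top of the existing stream-version comparison."""
    shared = set(candidate["virtual_streams"]) & set(requested["virtual_streams"])
    return bool(shared) and candidate["stream_version"] <= requested["stream_version"]

# Illustrative data; the stream_version encoding is not the exact MBS one.
f29_1_0 = {"stream_version": 2910, "virtual_streams": ["f29"]}
f29_0_0 = {"stream_version": 2900, "virtual_streams": ["f29"]}
lp29_0_0 = {"stream_version": 2900, "virtual_streams": ["lp29"]}

assert is_compatible_base_module(f29_0_0, f29_1_0)       # shared "f29"
assert not is_compatible_base_module(lp29_0_0, f29_1_0)  # no shared virtual stream
```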
This commit also fixes an issue in `get_last_builds_in_stream_version_lte`,
which was simply broken if multiple stream_versions of a single base module
existed and their builds had different versions. In that case, only
builds with a single (randomly chosen) version were returned.
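The intended semantics, sketched in plain Python rather than the real
SQLAlchemy query; the `Build` namedtuple stands in for models.ModuleBuild:

```python
from collections import defaultdict, namedtuple

# Stand-in for models.ModuleBuild rows; only the relevant fields are kept.
Build = namedtuple("Build", "stream_version version")

def last_builds_in_stream_version_lte(builds, stream_version):
    """For every stream_version at or below the limit, keep the builds with
    the highest version within that stream_version, instead of a single
    version picked across all of them."""
    by_stream_version = defaultdict(list)
    for build in builds:
        if build.stream_version <= stream_version:
            by_stream_version[build.stream_version].append(build)
    result = []
    for group in by_stream_version.values():
        max_version = max(b.version for b in group)
        result.extend(b for b in group if b.version == max_version)
    return result
```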
This will help us determine why a modulemd couldn't be parsed when the
cause is something other than a formatting issue. For example, if the
database session becomes stale, the exception is not due to the modulemd
being invalid.
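A minimal sketch of the idea, with the existing modulemd loader passed in
as a parameter and the error wording purely illustrative:

```python
import logging

log = logging.getLogger(__name__)

def parse_modulemd(load_mmd, yaml_text):
    """Wrap an existing modulemd loader so the underlying error text ends
    up in the reported failure."""
    try:
        return load_mmd(yaml_text)
    except Exception as e:
        # A stale database session, for example, raises an error that has
        # nothing to do with the modulemd formatting; surface that detail.
        log.exception("The modulemd could not be parsed")
        raise ValueError("The modulemd could not be parsed: %s" % e)
```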
Since the required parameters vary based on whether the modulemd
comes from SCM or a direct submission, the concept of optional
parameters doesn't really apply.
The MBS submission API endpoint should not accept every parameter
that is also a column on the ModuleBuild table. There are two
reasons for this. The first is that a user should be notified when
a supplied parameter is invalid, rather than having it silently
ignored. The second is that a nefarious user could pass
in specially crafted API parameters, causing MBS to do something
unexpected or undesired.
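A minimal sketch of the idea with a hypothetical whitelist; the real set
of accepted parameters differs between SCM and direct-modulemd submissions:

```python
# Hypothetical whitelist; not the actual MBS parameter names.
ALLOWED_SCM_PARAMS = {"scmurl", "branch", "buildrequire_overrides", "owner"}

def validate_submit_params(params):
    unknown = set(params) - ALLOWED_SCM_PARAMS
    if unknown:
        # Reject instead of silently ignoring, and never feed arbitrary
        # keys into the ModuleBuild columns.
        raise ValueError("Unknown parameters: %s" % ", ".join(sorted(unknown)))
```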
The Koji tag extra options used to be hard-coded, so changing them
required releasing a new MBS version.
We do not change them often, but right now fedora-infra is requesting
the new `mock.new_chroot` option and we would need to release a new MBS
version because of that.
This commit makes such changes easier in the future.
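A minimal sketch of what moving the options into configuration could look
like; only `mock.new_chroot` comes from the text above, the configuration
key, the other option and the values are assumptions:

```python
# Hypothetical configuration entry.
KOJI_TAG_EXTRA_OPTS = {
    "mock.package_manager": "dnf",  # illustrative default
    "mock.new_chroot": 1,           # the option fedora-infra wants to tune
}

def tag_extra_opts(conf_value=KOJI_TAG_EXTRA_OPTS):
    """Return the extra options applied to module Koji tags, read from the
    configuration instead of a hard-coded dict in the builder."""
    return dict(conf_value)
```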
When importing a base module, we must ensure the value that will be
used in the RPM disttags doesn't contain a dash since a dash isn't
allowed in the release field of the NVR.
MBS uses the stream of the base module that was buildrequired by a module
in the RPM disttag for that module build. The stream name may not be
ideal for all situations, so now this is customizable by setting
xmd['mbs']['disttag_marking'] in the base module's modulemd.
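A minimal sketch combining both rules above: prefer
xmd['mbs']['disttag_marking'] when the base module defines it, and reject
values containing a dash, since the release field of the NVR cannot
contain one. The function name is illustrative:

```python
def disttag_marking(base_module_xmd, base_module_stream):
    """Return the value MBS should put into the RPM disttag."""
    marking = base_module_xmd.get("mbs", {}).get(
        "disttag_marking", base_module_stream)
    if "-" in marking:
        raise ValueError(
            "The disttag marking %r contains a dash, which is not allowed "
            "in the release field of an NVR" % marking)
    return marking
```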
Currently, the Koji tags properly use the scrmod- prefix, but the
build target still uses the module- prefix even for scratch builds.
With this commit, the build target gets the scrmod- prefix too.
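A minimal sketch of the prefix selection; the prefixes come from the text
above, while the function name is illustrative:

```python
def koji_target_prefix(is_scratch):
    """The build target now uses the same prefix as the Koji tags."""
    return "scrmod-" if is_scratch else "module-"

assert koji_target_prefix(True) == "scrmod-"
assert koji_target_prefix(False) == "module-"
```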
The current code reads the data, converts it to a unicode string and
then uses the `len()` of that string as the filesize. This is wrong,
because Koji expects the filesize to be the number of bytes, not the
number of characters.
Therefore, in this commit, the filesize is computed from the raw data (bytes).
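A minimal sketch of the difference; any multi-byte UTF-8 character makes
the two lengths diverge:

```python
data = u"módulemd".encode("utf-8")  # raw bytes as read from the file

filesize = len(data)               # 9 bytes -- what Koji expects
wrong = len(data.decode("utf-8"))  # 8 characters -- what the old code sent

assert filesize == 9 and wrong == 8
```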