Verbose output from mock is not useful for someone trying to figure out
why their module build failed, and in fact makes it harder by adding
quite a bit of extraneous noise.
Mock doesn't normally log anything to stdout - so it's confusing to mention
separate logs in the messages. Combine the two output streams together.
(This is what koji does as well.)
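One way to merge the streams when invoking mock as a subprocess is sketched below (the configuration name, SRPM and log path are placeholders, not the exact ones MBS uses):
    import subprocess

    with open("mock-output.log", "ab") as log:
        # Send mock's stderr into the same stream as its stdout so there is
        # only one combined log to point the user at.
        subprocess.check_call(
            ["mock", "-r", "mbs-buildroot.cfg", "--rebuild", "module.src.rpm"],
            stdout=log, stderr=subprocess.STDOUT)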
The different build threads are all using the same basic build root
contents, so there's no reason to use separate caches - point mock's
root cache plugin to a single location. (Mock takes a lock internally
when updating the root cache.)
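Mock configuration files are Python, so pointing every per-thread configuration at one shared cache can look roughly like this (a sketch; the directory is an assumed path, not the one MBS actually uses):
    config_opts['plugin_conf']['root_cache_enable'] = True
    # All per-thread configurations share the same cache directory; mock
    # locks the cache while updating it, so this is safe.
    config_opts['plugin_conf']['root_cache_opts']['dir'] = "/var/tmp/mbs/root_cache/"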
The mock configuration file's modification time is used to determine
whether the root cache is out of date, so we want to avoid changing
the configuration timestamps when the content hasn't actually changed -
for example, when a per-thread mock configuration file is rewritten
with no substantive changes.
We do this by only updating the master mock.cfg file when we're
actually adding content, and propagating its mod time to the
per-thread configuration files.
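A sketch of the timestamp handling, using hypothetical helpers (the real MBS code differs): the master file is only rewritten when its content actually changes, and its mtime is then copied onto each per-thread copy.
    import os
    import shutil

    def write_if_changed(path, content):
        # Leave the file - and therefore its mtime - alone when nothing changed.
        if os.path.exists(path):
            with open(path) as f:
                if f.read() == content:
                    return
        with open(path, "w") as f:
            f.write(content)

    def write_thread_config(master_path, thread_path, thread_snippet):
        with open(master_path) as f:
            content = f.read() + thread_snippet
        with open(thread_path, "w") as f:
            f.write(content)
        # Propagate the master config's timestamps so mock only treats the
        # root cache as stale when the master content really changed.
        shutil.copystat(master_path, thread_path)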
When we want to pass in SCM options particular to a specific module build,
do that on the mock command line rather than by modifying mock.cfg - this
avoids invalidating the root cache.
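Sketched as a mock invocation (the values passed to --scm-option are illustrative, not the exact options MBS uses):
    # Per-build SCM settings go on the mock command line rather than into
    # mock.cfg, so the configuration file's mtime - and with it the root
    # cache - stays untouched.
    cmd = [
        "mock", "-r", "mbs-buildroot-thread0.cfg",
        "--scm-enable",
        "--scm-option", "method=git",
        "--scm-option", "package=foo",
        "--scm-option", "branch=f36",
    ]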
For local builds, required modules are not necessarily in the local
database, so the method of looking up the build to find the koji tag
doesn't work reliably. However, the caller has the koji_tag - so just
pass it in.
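A sketch of the interface change, with hypothetical names (the real MBS function differs):
    # Before: look the required module up in the local database and derive
    # its Koji tag, which fails when the module was never built locally.
    # After: the caller already knows the tag, so accept it directly.
    def create_local_repo_from_koji_tag(koji_session, koji_tag, repo_dir):
        """Download the RPMs tagged into koji_tag and build a local repo."""
        ...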
When downloading files from Koji to make a local repository, display
a temporary status line on the console showing which files are being
downloaded, appended after any log messages. Updates are done by
erasing the status that was written, adding a log message, then
writing the status again.
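A minimal sketch of that erase/log/redraw cycle (the real MBS helper differs):
    import sys

    class ConsoleStatus:
        """Keep a one-line download status below the normal log output."""

        def __init__(self):
            self.status = ""

        def _erase(self):
            if self.status:
                # Return to the start of the line and clear it.
                sys.stdout.write("\r\033[K")

        def set_status(self, text):
            self._erase()
            self.status = text
            sys.stdout.write(self.status)
            sys.stdout.flush()

        def log(self, message):
            # Erase the status, print the log line, then redraw the status
            # underneath it.
            self._erase()
            sys.stdout.write(message + "\n")
            sys.stdout.write(self.status)
            sys.stdout.flush()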
When running in local mode, add a special handler that filters log
messages to the console to produce attractive output; a rough sketch
follows the list below. Implemented behaviors include:
- INFO level messages are only displayed if done through
MBSLogger.console() rather than MBSLogger.info().
- Timestamps and thread names are omitted unless the local build
is started with the -d option
- Warning/error messages have the level highlighted in red
- Special handling can be added to log messages, initially:
- Long running operations can be displayed to the console as
"Doing foo ... <pause>done"
all_reused_in_prev_batch should only check components in the previous
batch, not components in all batches including future batches. This
was accidentally regressed by some code refactoring in c36bd7ebac.
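The intended check, sketched with hypothetical attribute names (the real MBS model differs):
    def all_reused_in_prev_batch(module_build):
        # Only look at components from the batch that just finished, not at
        # components scheduled for the current or later batches.
        prev_batch = module_build.batch - 1
        prev_components = [
            c for c in module_build.component_builds if c.batch == prev_batch
        ]
        return bool(prev_components) and all(
            c.reused_component_id is not None for c in prev_components
        )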
Since the build log is already within a build-specific directory, we
don't need to put the build ID (which always ends up being "2") into
the build log filename.
Log messages emitted before the build directory is created used to go
to the console but were not retained in the build log file, which made
referring to the build log file confusing. Solve this by buffering
logs in memory until the log file is created and then replaying them.
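One way to buffer and replay with the standard logging module (a sketch; the real MBS implementation differs):
    import logging

    class ReplayBufferHandler(logging.Handler):
        """Hold log records in memory until the real log file exists."""

        def __init__(self):
            super().__init__()
            self.buffered = []
            self.target = None

        def emit(self, record):
            if self.target is not None:
                self.target.handle(record)
            else:
                self.buffered.append(record)

        def set_log_file(self, path):
            # Called once the per-build directory exists: attach the file
            # handler and replay everything captured so far.
            self.target = logging.FileHandler(path)
            for record in self.buffered:
                self.target.handle(record)
            self.buffered = []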
A little bit of hackery is needed to avoid saving dangling references to
libsolv objects.
* Allow MBS_CONFIG_FILE="" to entirely suppress loading any configuration
file (useful for running tests and avoiding loading a system-wide
configuration file.)
* When loading the configuration file:
* If the default configuration file path doesn't exist, silently fall back
to the default configuration
* For any other OSError, print the exact error
* Let any other exception propagate, to allow people to debug their
configuration file
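A sketch of that loading logic, with hypothetical names and an assumed default path (the real MBS code differs):
    import os

    DEFAULT_PATH = "/etc/module-build-service/config.py"  # assumed default

    def load_config_file(config_dict):
        path = os.environ.get("MBS_CONFIG_FILE", DEFAULT_PATH)
        if path == "":
            # MBS_CONFIG_FILE="" suppresses loading any configuration file.
            return
        try:
            with open(path) as f:
                source = f.read()
        except FileNotFoundError as e:
            if path == DEFAULT_PATH:
                # Missing default config: silently use built-in defaults.
                return
            print("Could not read configuration file: %s" % e)
            return
        except OSError as e:
            print("Could not read configuration file: %s" % e)
            return
        # Anything the configuration file itself raises is left to propagate
        # so the user can debug it.
        exec(compile(source, path, "exec"), config_dict)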
We use module builds as an intermediate step when building Flatpak
applications on Fedora. As Flatpaks in Fedora are officially supported
only on aarch64 and x86_64, we wanted to limit the builds to just these
architectures to save Fedora resources. We were able to do this with
commit https://src.fedoraproject.org/modules/flatpak-common/c/65a01f,
which works perfectly in koji/mbs, but doesn't work when run locally -
the build fails with the following errors and exceptions:
info: Getting tag for flatpak-common:f36:3620220516070452
info: Start to handle flatpak-common:f36:3620220516070452:cab77b58 which is in init state.
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/handlers/modules.py", line 182, in init
record_module_build_arches(mmd, build)
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/submit.py", line 150, in record_module_build_arches
arches = get_build_arches(mmd, conf)
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/submit.py", line 95, in get_build_arches
new_arches = _check_buildopts_arches(mmd, arches)
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/submit.py", line 131, in _check_buildopts_arches
print(arches not in unsupported_arches, file=sys.stderr)
TypeError: unhashable type: 'list'
info: State transition: 'init' -> 'failed', <ModuleBuild flatpak-common, id=2, stream=f36, version=3620220516070452, scratch=False, state 'failed', batch 0, state_reason 'An unknown error occurred while validating the modulemd'>
warning: Note that retrieved module state 4 doesn't match message module state 'failed'
Traceback (most recent call last):
File "/usr/bin/mbs-manager", line 33, in <module>
sys.exit(load_entry_point('module-build-service==3.6.1', 'console_scripts', 'mbs-manager')())
File "/usr/lib/python3.10/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/flask/cli.py", line 596, in main
return super().main(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/usr/lib/python3.10/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3.10/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/lib/python3.10/site-packages/flask/cli.py", line 440, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/usr/lib/python3.10/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/module_build_service/manage.py", line 171, in build_module_locally
module_build_service.scheduler.local.main(module_build_ids)
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/local.py", line 57, in main
raise_for_failed_build(module_build_ids)
File "/usr/lib/python3.10/site-packages/module_build_service/scheduler/local.py", line 39, in raise_for_failed_build
raise ValueError("Local module build failed.")
ValueError: Local module build failed.
The problem is that the code as it stands fails to proceed if it
detects any unsupported architecture - in this case aarch64 - even
though the local x86_64 is supported. The check should be redone: when
an unsupported architecture is detected, check whether it is the local
one; if it isn't, proceed with the build, otherwise fail (the local
hardware doesn't support any of the specified architectures).
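Sketched, the corrected check would look roughly like this (helper and variable names are illustrative, not the real _check_buildopts_arches code):
    import platform

    def filter_buildopts_arches_for_local_build(buildopts_arches):
        """buildopts_arches: e.g. ["aarch64", "x86_64"] from the modulemd."""
        local_arch = platform.machine()
        if local_arch in buildopts_arches:
            # Other listed architectures may be unsupported on this machine,
            # but that's fine as long as the one we actually build for is.
            return [local_arch]
        raise RuntimeError(
            "None of the module's allowed architectures %s match the local "
            "architecture %s" % (buildopts_arches, local_arch))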
Use dogpile.cache to avoid repeatedly making the same queries to the MBS.
We frequently query a list of modules and then, later in the build, go
back and ask for details of those modules again by NSVC, so special-case
querying a single module by NSVC and prime that cache from lists of
results.
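A sketch of the caching approach with dogpile.cache (the backend choice and the query helpers are illustrative, not MBS's actual code):
    from dogpile.cache import make_region

    region = make_region().configure("dogpile.cache.memory")

    @region.cache_on_arguments()
    def get_module(nsvc):
        # Only hits the MBS on a cache miss.
        return _query_mbs_for_module(nsvc)  # placeholder query helper

    def get_modules(**params):
        modules = _query_mbs_for_modules(**params)  # placeholder query helper
        for module in modules:
            # Assumes each result is a dict with name/stream/version/context.
            nsvc = "%(name)s:%(stream)s:%(version)s:%(context)s" % module
            # Prime the single-module cache from list results so a later
            # lookup by NSVC doesn't query the MBS again.
            get_module.set(module, nsvc)
        return modules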