Mock doesn't normally log anything to stdout - so it's confusing to mention
separate logs in the messages. Combine the two output streams together.
(This is what koji does as well.)
The different build threads all use the same basic build root
contents, so there's no reason to use separate caches - point mock's
root cache plugin at a single location. (Mock does its own locking
when updating the root cache.)
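As a rough illustration of what this implies for the generated config
(the cache path here is made up), each per-thread mock configuration
points the root_cache plugin at one shared directory:

    # Fragment of a generated mock configuration; config_opts is provided by
    # mock itself when it evaluates the config file.
    config_opts['plugin_conf']['root_cache_enable'] = True
    # Shared location (path illustrative) used by every per-thread config,
    # so the root cache is built once; mock locks it while updating.
    config_opts['plugin_conf']['root_cache_opts']['dir'] = '/var/tmp/mbs/cache/root_cache/'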
The mod time of the mock configuration file is used to determine
whether the root cache is out-of-date, so we want to avoid changing
the configuration timestamps when we're just rewriting a per-thread
mock configuration file with no substantive changes.
We do this by only updating the master mock.cfg file when we're
actually adding content, and propagating its mod time to the
per-thread configuration files.
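A minimal sketch of that idea, with hypothetical file names and helper,
using shutil.copystat to carry the master config's timestamps over:

    import shutil

    def write_thread_config(master_cfg_path, thread_cfg_path, content):
        """Write a per-thread mock config, then copy the master mock.cfg's
        timestamps onto it so a rewrite with unchanged content does not
        invalidate mock's root cache (names here are illustrative)."""
        with open(thread_cfg_path, "w") as f:
            f.write(content)
        # copystat() copies atime/mtime from the master config to the new file.
        shutil.copystat(master_cfg_path, thread_cfg_path)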
When we want to pass in SCM options particular to a specific module build,
do that on the mock command line rather than by modifying mock.cfg - this
avoids invalidating the root cache.
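Sketch of what such a command line could look like when assembled in
Python; the specific SCM options shown are illustrative, not the exact
set MBS passes:

    def mock_scm_command(cfg_path, package, scm_url):
        """Build a mock command line that carries the per-build SCM settings
        as --scm-option arguments instead of editing mock.cfg."""
        return [
            "mock", "-r", cfg_path,
            "--scm-enable",
            "--scm-option", "method=git",
            "--scm-option", "package=" + package,
            "--scm-option", "git_get=git clone " + scm_url,
        ]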
For local builds, required modules are not necessarily in the local
database, so the method of looking up the build to find the koji tag
doesn't work reliably. However, the caller has the koji_tag - so just
pass it in.
In MBS, there are two cases where a message is sent when a module build
moves to a new state. One is creating a new module build, specifically
with ModuleBuild.create, when a user submits a module build. The other
is transitioning a module build to a new state with
ModuleBuild.transition. This commit handles these two cases in slightly
different ways.
For the former, existing code is refactored by moving the publish call
outside ModuleBuild.create.
For the latter, the message is sent from a SQLAlchemy ORM after_commit
event hook rather than immediately inside ModuleBuild.transition.
Both of these changes ensure the message is sent only after the changes
have been committed to the database successfully, so the backend can be
confident that the database already contains the module build data when
it receives a message.
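A minimal sketch of the after_commit approach, assuming a hypothetical
per-session queue that ModuleBuild.transition would append to and a
stand-in publish() call:

    from sqlalchemy import event

    def publish(message):
        """Stand-in for the real message-bus publish call."""
        print("publishing:", message)

    def install_after_commit_hook(db_session):
        """Publish queued state-change messages only after the transaction
        that recorded them has actually been committed."""

        @event.listens_for(db_session, "after_commit")
        def _publish_queued_messages(session):
            for message in getattr(session, "messages_to_send", []):
                publish(message)
            session.messages_to_send = []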
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This also includes `from __future__ import absolute_import`
in every file so that the imports are consistent in Python 2 and 3.
The Python 2 tests fail without this.
This moves the code used by the backend and API to common/submit.py,
the code used just by the API to web/submit.py, and the code used
just by the backend to scheduler/submit.py.
This puts backend-specific code in either the builder or scheduler
subpackage, and API-specific code in the new web subpackage.
Lastly, any code shared between the API and backend is placed in the
common subpackage.
The following handler arguments are not used at all:
1. `build_id` in handlers/components.py:build_task_finalize
2. `build_name` in handlers/tags.py:tagged
To support multiple backends, we need to get rid of the `further_work`
concept, which is used in multiple places in the MBS code. Before this
commit, if one handler wanted to execute another handler, it planned
this work by constructing a fake message and returning it. MBSConsumer
then planned its execution by adding it to the event loop.
In this commit, the new `events.scheduler` instance of the new Scheduler
class is used to accomplish this. If a handler wants to execute another
handler, it simply schedules it using the `events.scheduler.add` method.
At the end of each handler, the `events.scheduler.run` method is
executed, which calls all the scheduled handlers.
The idea is that when Celery is enabled, we can change the
`Scheduler.run` method to execute the handlers using Celery, while
during local builds we can execute them directly without Celery.
Using the Scheduler also fixes an ordering issue with such calls. If
we called the handlers directly, they could execute in the middle of
another handler, leading to behavior incompatible with the current
`further_work` concept. With the Scheduler, these calls always run at
the end of the handler, no matter when they were scheduled.
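A minimal sketch of the Scheduler idea (not the actual MBS
implementation):

    class Scheduler(object):
        """Handlers queue follow-up handlers instead of returning fake
        messages; everything queued runs only at the end of the handler."""

        def __init__(self):
            self.queue = []

        def add(self, handler, args):
            # Record the call; it is not executed immediately, so it cannot
            # run in the middle of the current handler.
            self.queue.append((handler, args))

        def run(self):
            # Called at the end of each handler; with Celery enabled this
            # method could dispatch tasks instead of calling directly.
            while self.queue:
                handler, args = self.queue.pop(0)
                handler(*args)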
This patch drops the message objects, defined by the BaseMessage class
and its subclasses, and passes event info arguments to the event
handlers directly. Different event handlers require different arguments
to handle their specific kind of event. The event info is parsed from
the raw message received from the message bus.
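Roughly, the new style looks like this (handler and field names are
illustrative):

    def tagged(msg_id, tag_name, build_nvr):
        """Illustrative handler that takes plain event-info arguments."""
        print(msg_id, tag_name, build_nvr)

    def handle_raw_message(msg):
        """Parse the event info out of the raw bus message and call the
        handler with ordinary arguments, with no BaseMessage subclass
        in between."""
        body = msg["msg"]
        tagged(msg_id=msg["msg_id"], tag_name=body["tag"], build_nvr=body["nvr"])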
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Please note that this patch does not change how the database session is
used in MBS. In the frontend, the database session is still managed by
Flask-SQLAlchemy, that is, db.session. The backend, which runs the event
handlers, has its own database session created directly from the
SQLAlchemy Session API.
This patch aims to reduce the number of scoped_session objects created
when calling the original make_db_session function. For the technical
details, please refer to the SQLAlchemy documentation on
contextual/thread-local sessions.
As a result, a global scoped_session is accessible from the
code running inside the backend, both the event handlers and the
functions called from handlers. The library code shared by frontend and
backend, like the resolvers, is unchanged.
Similarly, db.session is only used to recreate the database for every test.
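A rough sketch of the pattern, with an illustrative database URL:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    # One module-level scoped_session shared by all backend code instead of
    # a fresh session per make_db_session call; every caller in the same
    # thread gets the same underlying session back.
    engine = create_engine("sqlite:///mbs.db")
    db_session = scoped_session(sessionmaker(bind=engine))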
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This also removes the outdated comments around authorship of each
file. If there is still interest in this information, one can just
look at the git history.
This patch separates the use of database sessions in the different MBS
components and does not mix them together.
In general, MBS can be separated into the REST API (implemented with
Flask) and the non-REST API parts, namely the backend build workflow
(implemented as a fedmsg consumer on top of fedmsg-hub, running
independently) and the library code shared by both. As a result, there are
two kinds of database sessions used in MBS: one created and managed by
Flask-SQLAlchemy, and another created directly from the SQLAlchemy Session
API. The goal of this patch is to ensure the right session object is used
in the right place.
All the changes follow these rules:
* REST API related code uses the session object db.session created and
managed by Flask-SQLAlchemy.
* Non-REST API related code uses a session object created with the
SQLAlchemy Session API. The function make_db_session does that.
* Shared code avoids creating a new session object as much as possible.
Instead, it accepts a db_session argument.
The first two rules apply to tests as well.
Major changes:
* Switch tests back to running with a file-based SQLite database.
* make_session is renamed to make_db_session, and SQLAlchemy connection pool
options are applied for the PostgreSQL backend.
* Frontend Flask-related code uses db.session.
* Code shared by the REST API and the backend build workflow accepts a
SQLAlchemy session object as an argument. For example, resolver classes are
constructed with a database session, and some functions accept a database
session argument.
* Build workflow related code uses the session object returned from
make_db_session and ensures db.session is not used.
* Only the tests for views use db.session; other tests use the db_session
fixture to access the database.
* All arguments named session that are used for database access are renamed
to db_session.
* The functions model_tests_init_data, reuse_component_init_data and
reuse_shared_userspace_init_data, which create fixture data for tests,
are converted from functions called inside setup_method or a test method
into pytest fixtures. The reason for this conversion is to reuse the
``db_session`` fixture rather than creating a new session, which also
benefits the whole test suite by reducing the number of SQLAlchemy
session objects.
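As a sketch of that conversion (the import path and field values are
illustrative, not a complete ModuleBuild):

    import pytest

    from module_build_service.models import ModuleBuild

    @pytest.fixture
    def reuse_component_init_data(db_session):
        """Create the fixture data through the shared db_session fixture
        instead of a session opened inside setup_method."""
        build = ModuleBuild(name="testmodule", stream="master", version="1")
        db_session.add(build)
        db_session.commit()
        return build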
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Currently, we use just `conf.arches` and `conf.base_module_arches`
to define the list of arches for which the RPMs in a submitted module are
built. This is not enough, because RCM needs to generate modules based
on base modules which should use different arches.
This commit changes MBS to take the list of arches from the buildrequired
module build. It checks the buildrequires for a "privileged" module or a
base module and, if it finds such a module, it queries Koji to find out
the list of arches to set for the module.
The "privileged" module is a module which can override base module arches
or disttag. Previously, these modules have been defined by
`allowed_disttag_marking_module_names` config option. In this commit,
this has been renamed to `allowed_privileged_module_names`.
The list of arches are stored per module build in new table represented
by ModuleArch class and are m:n mapped to ModuleBuild.
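A hypothetical sketch of such a mapping; the real MBS models differ in
detail:

    from sqlalchemy import Column, ForeignKey, Integer, String, Table
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    # Association table giving the m:n relation between builds and arches.
    module_arches = Table(
        "module_arches", Base.metadata,
        Column("module_build_id", Integer, ForeignKey("module_builds.id")),
        Column("arch_id", Integer, ForeignKey("arches.id")),
    )

    class ModuleArch(Base):
        __tablename__ = "arches"
        id = Column(Integer, primary_key=True)
        name = Column(String, unique=True)

    class ModuleBuild(Base):
        __tablename__ = "module_builds"
        id = Column(Integer, primary_key=True)
        arches = relationship(ModuleArch, secondary=module_arches)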
This also moves the methods load_mmd and load_mmd_file to
module_build_service.utils.general.
This also removes some MSE unit tests with a mix of positive and
negative streams since this is not supported in libmodulemd v2. The
user will be presented with a syntax error if they try to submit
such a modulemd file.
Previously MockModuleBuilder was checking the module state to see if
it should run a final createrepo, but since eafa93037f, finalize() is
called before changing the module state; add an explicit boolean to
GenericBuilder.finalize() to avoid worrying about ordering.
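Sketched out (the parameter and method names are illustrative, not the
exact MBS signature):

    class GenericBuilder(object):
        def finalize(self, succeeded=True):
            """The caller states explicitly whether the module build
            succeeded, instead of finalize() inferring it from the
            module state."""
            pass

    class MockModuleBuilder(GenericBuilder):
        def finalize(self, succeeded=True):
            if succeeded:
                self._run_final_createrepo()

        def _run_final_createrepo(self):
            pass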
This is needed for offline local builds to build a component which is
stored in a local git repository.
This PR also adds OfflineLocalBuildConfiguration configuration class
for offline local builds to set the RESOLVER.
The following changes are introduced in this commit:
- The `koji_tag` of module builds imported from local repositories is now
in the `repofile:///etc/yum.repos.d/some.repo` format, recording in the
local MBS DB the repository from which the module was imported.
- The `koji_tag` of the fake base module is set to an empty `repofile://`,
and in `MockModuleBuilder` the `conf.base_module_repofiles` list is used
as the source of the repositories defining platform. We can't simply use
a single repository, because there might be fedora.repo,
fedora-updates.repo and so on.
- The list of default .repo files for platform is passed using the `-r`
switch of the `build_module_locally` `mbs-manager` command.
- A LocalResolver (subclass of DBResolver) is added, which is used to
resolve the build dependencies when building modules offline locally.
- `MockModuleBuilder` enables the buildrequired modules and the
repositories they come from in the mock config.
With this commit, it is possible to build testmodule locally
without any external infra.
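For illustration, a hypothetical helper showing how the `repofile://`
koji_tag convention could be read back:

    def repo_file_from_koji_tag(koji_tag):
        """Return the .repo file path recorded in a repofile:// koji_tag,
        e.g. 'repofile:///etc/yum.repos.d/some.repo'. The empty
        'repofile://' used for the fake base module yields None."""
        prefix = "repofile://"
        if not koji_tag.startswith(prefix):
            return None
        return koji_tag[len(prefix):] or None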
- Keep scratch module builds in the 'done' state.
- Make koji tagging for scratch modules unique so the same
commit can be resubmitted.
- Use alternate prefix for scratch module build components so they can
be identified later.
- Prevent scratch build components from being reused.
- Assorted code and comment cleanup.
Signed-off-by: Merlin Mathesius <mmathesi@redhat.com>
MBS calls some read-only Koji APIs which do not require a logged-in
session. This patch makes logging in optional and uses an anonymous
session where appropriate to call those read-only APIs.
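A rough sketch of the optional-login idea (the server argument and login
method are illustrative):

    import koji

    def get_koji_session(koji_server, login=True):
        """Read-only calls such as getTag or listTagged work on an anonymous
        ClientSession, so only log in when the caller actually needs an
        authenticated session."""
        session = koji.ClientSession(koji_server)
        if login:
            session.gssapi_login()
        return session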
Signed-off-by: Chenxiong Qi <cqi@redhat.com>