Two problems occurred with the moksha/twisted handling of SIGINT:
* While KeyboardInterrupt caused the moksha loop to exit, it just
left the test in a confused state, instead of triggering standard
pytest behavior and aborting the entire test run.
* The handler was left over for the remaining tests, preventing Control-C
from working at all.
Fix that by using mock.patch to override moksha's signal handler with
our own signal handler that stores the KeyboardInterrupt in the
current EventTrap, and restores the default signal handler after
the loop ends.
Note that since the KeyboardInterrupt is always handled in the main thread,
we don't get a useful backtrace from the child thread.
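A minimal sketch of the approach (the patch target, the `run_hub()` helper and
the `EventTrap.current` attribute are illustrative names, not the exact MBS code):

```python
import signal

import mock  # the standalone mock package


def _store_interrupt(signum, frame):
    # Instead of letting moksha's handler swallow the interrupt, record it
    # on the active trap so the test can re-raise it later.
    EventTrap.current.set_exception(KeyboardInterrupt())


# Prevent moksha from installing its own SIGINT handler (the patch target is
# an assumption), install ours, and always restore Python's default handler
# afterwards so Control-C keeps working for the remaining tests.
with mock.patch("moksha.hub.hub.signal.signal"):
    signal.signal(signal.SIGINT, _store_interrupt)
    try:
        run_hub()  # hypothetical helper that runs the moksha loop
    finally:
        signal.signal(signal.SIGINT, signal.default_int_handler)
```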
Event handlers decorated with @celery_app_task don't raise exceptions -
they just log them, leaving the Moksha hub running. This meant that any
failures in test_build would result in the test suite hanging rather than
failing usefully.
We solve this by adding a new class, EventTrap, which acts as a context
manager. Any exceptions that occur in event handlers are set on the
current EventTrap, and the test_build tests re-raise them.
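A rough sketch of what such a trap could look like (a simplification; the
method and attribute names are illustrative, not necessarily the real
implementation):

```python
class EventTrap(object):
    """Collects exceptions from event handlers so tests can re-raise them."""

    current = None  # the trap active for the currently running test

    def __init__(self):
        self.exception = None

    def set_exception(self, exc):
        self.exception = exc

    def __enter__(self):
        EventTrap.current = self
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        EventTrap.current = None
        # Re-raise anything a handler only logged, so the test fails
        # instead of hanging with the Moksha hub still running.
        if exc_value is None and self.exception is not None:
            raise self.exception
        return False
```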
No implementations of MBS are using Greenwave, and there are no current plans
to do so. Koji Resolver will be sufficient for any use case dependent on gating.
This also includes `from __future__ import absolute_import`
in every file so that the imports are consistent in Python 2 and 3.
The Python 2 tests fail without this.
This moves the code used by the backend and API to common/submit.py,
the code used just by the API to web/submit.py, and the code used
just by the backend to scheduler/submit.py.
This puts backend-specific code in either the builder or scheduler
subpackage, and API-specific code in the new web subpackage.
Lastly, any code shared between the API and backend is placed in the
common subpackage.
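For example, call sites would import from the new locations roughly like this
(the specific function names and their final homes are assumptions):

```python
# Shared by the API and the backend.
from module_build_service.common.submit import fetch_mmd

# Used only by the API (the web subpackage).
from module_build_service.web.submit import submit_module_build

# Used only by the backend scheduler.
from module_build_service.scheduler.submit import format_mmd
```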
This merges the configuration from conf/config.py into
module_build_service/config.py. This also greatly simplifies the logic
in `init_config`. Additionally, `init_config` is no longer aware of
Flask. This will allow us to eventually break up the configuration
between the API and the backend.
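A hedged sketch of a Flask-unaware `init_config` (the config classes and the
environment variable are illustrative, not the real option names):

```python
import os


class BaseConfiguration(object):
    debug = False


class TestConfiguration(BaseConfiguration):
    debug = True


def init_config():
    """Pick the configuration class from the environment instead of a Flask app."""
    configs = {
        "BaseConfiguration": BaseConfiguration,
        "TestConfiguration": TestConfiguration,
    }
    section = os.environ.get("MBS_CONFIG_SECTION", "BaseConfiguration")
    return configs[section]()
```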
The following handler arguments are not used at all:
1. `build_id` in handlers/components.py:build_task_finalize
2. `build_name` in handlers/tags.py:tagged
We started using `events.scheduler` to plan fake events instead of the internal
queue implemented by fedmsg-hub. Thanks to that, we can stop using fedmsg-hub
completely for local builds.
In this commit, `scheduler.local.main` is rewritten to not use fedmsg-hub.
The original `scheduler.local.main` is still needed by the test_build tests and
is therefore moved there.
To support multiple backends, we need to get rid of the `further_work` concept
which is used in multiple places in the MBS code. Before this commit, if
one handler wanted to execute another handler, it planned this work by
constructing a fake message and returning it. MBSConsumer then planned
its execution by adding it to the event loop.
In this commit, the new `events.scheduler` instance of the new Scheduler
class is used to accomplish this. If a handler wants to execute another
handler, it simply schedules it using the `events.scheduler.add` method.
At the end of each handler, the `events.scheduler.run` method is
executed, which calls all the scheduled handlers.
The idea is that when Celery is enabled, we can change the
`Scheduler.run` method to execute the handlers using Celery, while
during local builds we can execute them directly without Celery.
Use of the Scheduler also fixes the issue with the ordering of such calls.
If we called the handlers directly, they could be executed
in the middle of another handler, leading to behavior incompatible
with the current `further_work` concept. With the Scheduler, these
calls are always executed at the end of the handler, no matter when
they were scheduled.
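A minimal sketch of such a scheduler (a simplification of the idea, not the
exact MBS implementation):

```python
import collections


class Scheduler(object):
    """Queues handler calls so they run at the end of the current handler."""

    def __init__(self):
        self.queue = collections.deque()

    def add(self, handler, args=(), kwargs=None):
        # Replaces the old `further_work` pattern of returning fake messages.
        self.queue.append((handler, args, kwargs or {}))

    def run(self):
        # Called at the end of each handler. With Celery enabled, this could
        # submit the scheduled handlers as Celery tasks instead of calling
        # them directly.
        while self.queue:
            handler, args, kwargs = self.queue.popleft()
            handler(*args, **kwargs)


# Module-level instance exposed as events.scheduler.
scheduler = Scheduler()
```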
This patch drops message objects, defined by the class BaseMessage and its
subclasses, and passes event info arguments to event handlers directly.
Different event handlers require different arguments to handle a specific
kind of event. The event info is parsed from the raw message received
from the message bus.
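Illustrative example of the change for one handler (the exact argument names
are assumptions):

```python
# Before: the handler received a parsed message object.
def tagged(msg):
    tag = msg.tag
    nvr = msg.artifact_nvr
    ...

# After: the consumer extracts the fields from the raw message and passes
# them to the handler directly.
def tagged(msg_id, tag_name, build_nvr):
    ...
```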
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Message classes and FedmsgMessageParser are moved into a dedicated Python module
under the scheduler/ directory.
FedmsgMessageParser is decoupled from messaging.py by initializing a parser
object with the known fedmsg services. This decoupling avoids a circular import
between parser.py and messaging.py.
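A sketch of the decoupling (constructor and attribute names are assumptions):

```python
# module_build_service/scheduler/parser.py -- no import of messaging.py here.
class FedmsgMessageParser(object):
    def __init__(self, known_fedmsg_services):
        # The list of services is injected by the caller instead of being
        # imported from messaging.py, which breaks the import cycle.
        self.known_fedmsg_services = known_fedmsg_services

    def parse(self, topic, raw_msg):
        if not any(service in topic for service in self.known_fedmsg_services):
            return None
        return raw_msg.get("msg", {})


# messaging.py can then simply do:
#   from module_build_service.scheduler.parser import FedmsgMessageParser
#   parser = FedmsgMessageParser(known_fedmsg_services=["buildsys", "mbs"])
```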
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Please note that this patch does not change the use of the database session
in MBS. In the frontend, the database session is still managed by
Flask-SQLAlchemy, that is, db.session. The backend, which runs the event
handlers, has its own database session created directly from the SQLAlchemy
session API.
This patch aims to reduce the number of scoped_session objects created when
calling the original make_db_session function. For detailed technical
information, please refer to the SQLAlchemy documentation on
Contextual/Thread-local Sessions.
As a result, a global scoped_session is accessible from the
code running inside the backend, both the event handlers and the functions
called from handlers. The library code shared by the frontend and backend,
like the resolvers, is unchanged.
Similarly, db.session is only used to recreate the database for every test.
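A rough sketch of the scoped_session setup (the engine URL and module layout
are illustrative):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# Created once at import time; every use of db_session in the backend then
# refers to the same thread-local session instead of building a new
# scoped_session on each make_db_session call.
engine = create_engine("sqlite:///module_build_service.db")
db_session = scoped_session(sessionmaker(bind=engine))
```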
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This also removes the outdated comments around authorship of each
file. If there is still interest in this information, one can just
look at the git history.
This patch separates the use of database sessions in different MBS components
and does not mix them together.
In general, MBS components can be separated into the REST API (implemented
with Flask), the non-REST API parts including the backend build workflow
(implemented as a fedmsg consumer on top of fedmsg-hub and running
independently), and the library shared by them. As a result, there are two
kinds of database sessions used in MBS: one is created and managed by
Flask-SQLAlchemy, and another one is created from the SQLAlchemy Session API
directly. The goal of this patch is to ensure the session object is used
properly in the right place.
All the changes follow these rules:
* REST API related code uses the session object db.session created and
managed by Flask-SQLAlchemy.
* Non-REST API related code uses the session object created with the SQLAlchemy
Session API. The function make_db_session does that.
* Shared code avoids creating a new session object as much as possible.
Instead, it accepts a db_session argument.
The first two rules are applicable to tests as well.
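For example, a shared helper following these rules could look like this
(the helper itself and the way make_db_session is entered are illustrative):

```python
from module_build_service import models


def get_module_build(db_session, name, stream):
    # Shared code: it never creates its own session, it uses whatever
    # session the caller passes in.
    return db_session.query(models.ModuleBuild).filter_by(
        name=name, stream=stream).first()


# Frontend (Flask view):
#   get_module_build(db.session, "testmodule", "master")
# Backend (event handler):
#   with make_db_session(conf) as db_session:
#       get_module_build(db_session, "testmodule", "master")
```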
Major changes:
* Switch tests back to run with a file-based SQLite database.
* make_session is renamed to make_db_session, and SQLAlchemy connection pool
options are applied for the PostgreSQL backend.
* Frontend Flask related code uses db.session.
* Code shared by the REST API and the backend build workflow accepts a SQLAlchemy
session object as an argument. For example, a resolver class is constructed with a
database session, and some functions accept an argument for the database session.
* Build workflow related code uses the session object returned from make_db_session
and ensures db.session is not used.
* Only tests for views use db.session, and other tests use db_session fixture
to access database.
* All arguments named session that are used for database access are renamed to
db_session.
* Functions model_tests_init_data, reuse_component_init_data and
reuse_shared_userspace_init_data, which create fixture data for
tests, are converted into pytest fixtures from functions originally
called inside setup_method or a test method. The reason for this
conversion is to use the ``db_session`` fixture rather than create a
new session. That also benefits the whole test suite by reducing the
number of SQLAlchemy session objects.
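A hedged sketch of what one converted fixture might look like (the fields
populated are simplified, not the complete real data):

```python
import pytest

from module_build_service import models


@pytest.fixture()
def model_tests_init_data(db_session):
    # Reuses the shared db_session fixture instead of creating another
    # SQLAlchemy session inside setup_method.
    build = models.ModuleBuild(name="testmodule", stream="master", version="1")
    db_session.add(build)
    db_session.commit()
```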
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Most of the issues are caused by the use of the SQLAlchemy database session.
Some inline comments describe the issues in detail.
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Currently, we are using just `conf.arches` and `conf.base_module_arches`
to define the list of arches for which the RPMs in a submitted module are
built. This is not enough, because RCM needs to generate modules based
on the base modules which should use different arches.
This commit changes MBS to take the list of arches from the buildrequired
module build. It checks the buildrequires for a "privileged" module or a base
module and, if it finds such a module, queries Koji to find out the list
of arches to set for the module.
The "privileged" module is a module which can override base module arches
or disttag. Previously, these modules have been defined by
`allowed_disttag_marking_module_names` config option. In this commit,
this has been renamed to `allowed_privileged_module_names`.
The list of arches is stored per module build in a new table represented
by the ModuleArch class and is m:n mapped to ModuleBuild.
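A sketch of the m:n mapping (the table and column names are assumptions):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

module_builds_to_arches = Table(
    "module_builds_to_arches", Base.metadata,
    Column("module_build_id", Integer, ForeignKey("module_builds.id")),
    Column("module_arch_id", Integer, ForeignKey("arches.id")),
)


class ModuleArch(Base):
    __tablename__ = "arches"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False, unique=True)


class ModuleBuild(Base):
    __tablename__ = "module_builds"
    id = Column(Integer, primary_key=True)
    # m:n: one build has many arches, one arch can be used by many builds.
    arches = relationship("ModuleArch", secondary=module_builds_to_arches)
```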
GenericResolver.extract_modulemd is not removed, but deprecated. Calling it
will result in a deprecation message being printed. Any new code should call
load_mmd.
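A sketch of the deprecation shim (the warning wording and the load_mmd import
path are assumptions):

```python
import logging

from module_build_service.utils import load_mmd  # import path is an assumption

log = logging.getLogger(__name__)


class GenericResolver(object):
    @staticmethod
    def extract_modulemd(yaml):
        # Deprecated: kept only so existing callers keep working.
        log.warning(
            "GenericResolver.extract_modulemd is deprecated, use load_mmd instead")
        return load_mmd(yaml)
```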
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
These module builds will basically act as metadata-only module builds.
This will be more useful as additional features stem from these types
of builds.