* Add a script make-dist-tarball.py that runs 'setup.py sdist' and compares
what gets distributed against what git tracks, taking into account a set of
patterns for extra files added during distribution and for git-tracked files
that we don't want to distribute. (A rough sketch of that comparison appears
after this list.)
* Extend MANIFEST.in or add __init__.py files to distribute:
run-unittests.sh
tox.in
conf/client_secrets.json
tests/test_builder
tests/integration
* Don't include all of module_build_service - that is too prone to picking up
backup files and other development leftovers; instead add just the non-*.py
files we need.
* Move the global-exclude rules to the end - they filter the list of files
collected so far - and add *.db, because .mbs_local_build.db files were
somehow ending up in the sdist.
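A rough sketch of the kind of comparison such a script could do; the pattern
lists and the exact reporting below are illustrative, not the real ones:
```python
import fnmatch
import glob
import subprocess
import tarfile

# Both pattern lists are illustrative, not the ones used by the real script.
EXTRA_OK = ["PKG-INFO", "setup.cfg", "*.egg-info/*"]   # files sdist may add
DONT_DIST = [".gitignore", "*.db"]                     # git files we never ship


def matches(name, patterns):
    return any(fnmatch.fnmatch(name, p) for p in patterns)


subprocess.check_call(["python", "setup.py", "sdist"])
git_files = set(subprocess.check_output(["git", "ls-files"]).decode().split())

tarball = sorted(glob.glob("dist/*.tar.gz"))[-1]
with tarfile.open(tarball) as tar:
    # Drop the leading "<name>-<version>/" directory from every member path.
    dist_files = {m.name.split("/", 1)[1] for m in tar.getmembers() if "/" in m.name}

unexpected = sorted(f for f in dist_files - git_files if not matches(f, EXTRA_OK))
missing = sorted(f for f in git_files - dist_files if not matches(f, DONT_DIST))
if unexpected or missing:
    raise SystemExit("unexpected in sdist: %s\nmissing from sdist: %s"
                     % (unexpected, missing))
```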
To clean things up and facilitate future changes to test cases, split
the method for creating a builder apart from the fixture so that it can
be used independently.
For local builds, required modules are not necessarily in the local
database, so the method of looking up the build to find the koji tag
doesn't work reliably. However, the caller has the koji_tag - so just
pass it in.
This will enable building modules from the new libmodulemd v3 format.
This format fixes a major issue in Modularity and enables clear
upgrade paths for multi-context modules.
Signed-off-by: Martin Curlej <mcurlej@redhat.com>
This also includes `from __future__ import absolute_import`
in every file so that the imports are consistent in Python 2 and 3.
The Python 2 tests fail without this.
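For context, a minimal illustration of why the future import matters; the
module names here are only illustrative:
```python
# Without absolute_import, a bare "import errors" inside a package module
# resolves to the sibling errors.py on Python 2 but not on Python 3. With the
# future import, both interpreters behave the same way.
from __future__ import absolute_import

from module_build_service import errors   # absolute import, same on 2 and 3
# or the explicit relative form:
# from . import errors
```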
This moves the code used by the backend and API to common/submit.py,
the code used just by the API to web/submit.py, and the code used
just by the backend to scheduler/submit.py.
This puts backend-specific code in either the builder or the scheduler
subpackage and API-specific code in the new web subpackage.
Lastly, any code shared between the API and the backend is placed in the
common subpackage.
This merges the configuration from conf/config.py into
module_build_service/config.py. This also greatly simplifies the logic
in `init_config`. Additionally, `init_config` is no longer aware of
Flask. This will allow us to eventually break up the configuration
between the API and the backend.
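As an illustration only (this is a sketch, not the actual MBS code), an
init_config that knows nothing about Flask could look like this:
```python
import os


class BaseConfiguration(object):
    DEBUG = False
    SQLALCHEMY_DATABASE_URI = "sqlite:///module_build_service.db"


class TestConfiguration(BaseConfiguration):
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = "sqlite://"


def init_config():
    """Pick a configuration object from the environment; no Flask involved."""
    # MBS_CONFIG_PROFILE is a made-up variable name for this sketch.
    profile = os.environ.get("MBS_CONFIG_PROFILE", "production")
    return TestConfiguration() if profile == "test" else BaseConfiguration()


# The web layer then applies it to Flask itself:
#     app.config.from_object(init_config())
# while the backend uses the returned object directly.
```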
The following handler arguments are not used at all:
1. `build_id` in handlers/components.py:build_task_finalize
2. `build_name` in handlers/tags.py:tagged
To support multiple backends, we need to get rid of the `further_work`
concept, which is used in multiple places in the MBS code. Before this
commit, if one handler wanted to execute another handler, it planned that
work by constructing a fake message and returning it. MBSConsumer then
scheduled its execution by adding it to the event loop.
In this commit, the new `events.scheduler` instance of the new Scheduler
class is used to accomplish this. If a handler wants to execute another
handler, it simply schedules it using the `events.scheduler.add` method.
At the end of each handler, the `events.scheduler.run` method is
executed, which calls all the scheduled handlers.
The idea is that when Celery is enabled, we can change the
`Scheduler.run` method to execute the handlers using Celery, while
during local builds we can execute them directly without Celery.
Use of the Scheduler also fixes an ordering issue with such calls. If
we called the handlers directly, they could be executed in the middle
of another handler, leading to behavior incompatible with the current
`further_work` concept. With the Scheduler, these calls are always
executed at the end of the handler, no matter when they were scheduled.
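A minimal sketch of the scheduling idea, based on the `add`/`run` methods
described above (not the exact MBS implementation):
```python
from collections import deque


class Scheduler(object):
    def __init__(self):
        self.queue = deque()

    def add(self, handler, args=(), kwargs=None):
        """Remember a handler call to be executed later."""
        self.queue.append((handler, args, kwargs or {}))

    def run(self):
        """Execute everything scheduled so far, in the order it was added.

        With Celery enabled, this could dispatch tasks instead of calling
        the handlers directly; for local builds it just calls them.
        """
        while self.queue:
            handler, args, kwargs = self.queue.popleft()
            handler(*args, **kwargs)


scheduler = Scheduler()  # e.g. exposed as events.scheduler
```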
This patch drops the message objects, defined by the BaseMessage class and
its subclasses, and passes event info arguments to the event handlers
directly. Different event handlers require different arguments to handle
their specific kind of event. The event info is parsed from the raw message
received from the message bus.
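A hypothetical before/after of a handler signature, just to illustrate the
shape of the change; the real argument lists differ per handler:
```python
# Before: the consumer constructed a BaseMessage subclass and the handler
# unpacked the fields it needed from the message object.
def build_task_finalize_old(config, db_session, msg):
    task_id = msg.task_id
    build_new_state = msg.build_new_state
    return task_id, build_new_state


# After: the parser extracts those fields from the raw bus message and the
# consumer passes them to the handler directly as plain arguments.
def build_task_finalize(config, db_session, task_id, build_new_state):
    return task_id, build_new_state
```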
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Message classes and FedmsgMessageParser are moved into a dedicated Python
module under the scheduler/ directory.
FedmsgMessageParser is decoupled from messaging.py by initializing the parser
object with the known fedmsg services. This decoupling avoids a circular
import between parser.py and messaging.py.
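A simplified sketch of that decoupling; the service names and the topic
matching below are illustrative:
```python
class FedmsgMessageParser(object):
    def __init__(self, services):
        # The services are injected, so parser.py never imports messaging.py.
        self.services = tuple(services)

    def parse(self, topic, body):
        """Return event info for topics of known services, otherwise None."""
        if not any(".%s." % service in topic for service in self.services):
            return None
        return {"topic": topic, "body": body}


# messaging.py, which knows the configured services, builds the parser:
parser = FedmsgMessageParser(["buildsys", "mbs", "greenwave"])
```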
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
MBS will iterate through all the builds in buildrequires to determine
the expected list of arches on the associated Koji tag. In some cases,
these builds do not have a Koji tag. They should be skipped for this
operation.
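A small illustrative sketch of the skip; the function and attribute names are
hypothetical, not the exact MBS code:
```python
def expected_arches(buildrequired_builds, get_tag_arches):
    """Collect the expected arches from the Koji tags of the buildrequired
    builds, skipping any build that has no Koji tag."""
    arches = set()
    for build in buildrequired_builds:
        if not build.koji_tag:
            # e.g. a locally imported module has no Koji tag; skip it
            continue
        arches.update(get_tag_arches(build.koji_tag))
    return arches
```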
Signed-off-by: Luiz Carvalho <lucarval@redhat.com>
Please note that this patch does not change how the database session is
used in MBS. In the frontend, the database session is still managed by
Flask-SQLAlchemy, that is, db.session. The backend, which runs the event
handlers, has its own database session created directly from the SQLAlchemy
session API.
This patch aims to reduce the number of scoped_session objects created when
the original make_db_session function is called. For the technical details,
please refer to the SQLAlchemy documentation on Contextual/Thread-local
Sessions.
As a result, a global scoped_session is accessible from the code running
inside the backend, both the event handlers and the functions called from
handlers. The library code shared by the frontend and backend, like the
resolvers, is unchanged.
Similarly, db.session is only used to recreate the database for every test.
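For reference, the Contextual/Thread-local Sessions pattern referred to above
boils down to something like the following sketch (variable names are
illustrative):
```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite:///module_build_service.db")

# Created once at module import time; every use returns the thread-local
# session instead of building a brand new scoped_session per call.
db_session = scoped_session(sessionmaker(bind=engine))

# Backend code (event handlers and their helpers) then imports this global
# session, while the frontend keeps using Flask-SQLAlchemy's db.session.
```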
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This also removes the outdated comments around authorship of each
file. If there is still interest in this information, one can just
look at the git history.
PR #1331 made the assumption that the Ursa Major ursine RPMs were at
xmd["mbs"]["ursine_rpms"], but they are actually at
xmd["mbs"]["buildrequires"]["platform"]["ursine_rpms"]. This commit
handles the ursine RPMs generated by handle_collisions_with_base_module_rpms
separately, since the base module the RPMs came from is not tracked in
that function.
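For clarity, the two locations look roughly like this (the xmd content below
is made up):
```python
xmd = {
    "mbs": {
        # PR #1331 assumed the RPMs were here ...
        # "ursine_rpms": [...],
        "buildrequires": {
            "platform": {
                # ... but they actually live here.
                "ursine_rpms": ["some-ursine-rpm-1.0-1.el8.x86_64"],
            }
        },
    }
}

ursine_rpms = xmd["mbs"]["buildrequires"]["platform"].get("ursine_rpms", [])
```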
The issue is that users don't get feedback from MBS about why a
component was not reused. This adds logic to store such log messages
in the database so that they can be viewed through the MBS REST API.
Ticket-ID: #1284
Signed-off-by: Martin Curlej <mcurlej@redhat.com>
When recover_orphaned_artifact is called for module-build-macros
and module-build-macros is not tagged into the -build Koji tag,
then tag_artifacts() is called to tag it there.
This is correct, but the issue is that module-build-macros also needs
to be added to the "build" and "srpm-build" Koji tag groups, otherwise
it is not installed in the buildroot by default.
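A sketch of the extra step, assuming the standard Koji group-package call
groupPackageListAdd is used (this is not the exact MBS code, and the hub URL
and tag name are made up):
```python
import koji

session = koji.ClientSession("https://koji.example.com/kojihub")
build_tag = "module-foo-bar-20200101-abcd1234-build"   # illustrative tag name

# Tagging module-build-macros alone is not enough; it also has to be a member
# of the "build" and "srpm-build" groups to end up in the buildroot.
for group in ("build", "srpm-build"):
    session.groupPackageListAdd(build_tag, group, "module-build-macros")
```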
The original motivation for this refactor is to reuse make_module and
drop TestMMDResolver._make_mmd. Some tests only require a modulemd to be
created, and some tests also require that modulemd to be stored in the
database as a module build. The problem is that db_session has to be passed
to make_module even when nothing needs to be stored in the database.
Major changes in this patch:
* Argument db_session is optional.
* Arguments requires_list and build_requires_list are replaced by a
single argument dependencies, which is a list of groups of requires and
buildrequires.
* A new make_module_in_db is created for conveniently creating a new
modulemd and storing it in the database.
* Tests are updated to the new make_module and make_module_in_db; a rough
usage sketch follows below.
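The sketch below is only indicative; the import path, the NSVC and the exact
shape of the dependencies argument are illustrative rather than the real
helper signatures:
```python
from tests import make_module, make_module_in_db

deps = [{"requires": {"platform": ["f30"]},
         "buildrequires": {"platform": ["f30"]}}]

# Only an in-memory modulemd is needed; no db_session required.
mmd = make_module("testmodule:master:20190101:c1", dependencies=deps)

# Also creates and stores a module build in the database.
build = make_module_in_db("testmodule:master:20190101:c1", dependencies=deps)
```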
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This patch separates the use of the database session in the different MBS
components and does not mix them together.
In general, MBS components can be separated into the REST API (implemented
with Flask) and the non-REST-API parts, including the backend build workflow
(implemented as a fedmsg consumer on top of fedmsg-hub, running
independently) and the library code shared by them. As a result, there are
two kinds of database session used in MBS: one is created and managed by
Flask-SQLAlchemy, and the other is created from the SQLAlchemy Session API
directly. The goal of this patch is to ensure that the right session object
is used in the right place.
All the changes follow these rules:
* REST API related code uses the session object db.session created and
managed by Flask-SQLAlchemy.
* Non-REST-API related code uses a session object created with the SQLAlchemy
Session API. The make_db_session function does that.
* Shared code avoids creating a new session object as much as possible.
Instead, it accepts a db_session argument.
The first two rules are applicable to tests as well.
Major changes:
* Switch tests back to run with a file-based SQLite database.
* make_session is renamed to make_db_session, and SQLAlchemy connection pool
options are applied for the PostgreSQL backend (a sketch follows this list).
* Frontend Flask-related code uses db.session.
* Code shared by the REST API and the backend build workflow accepts a
SQLAlchemy session object as an argument. For example, resolver classes are
constructed with a database session, and some functions accept a database
session argument.
* Build workflow related code uses the session object returned from
make_db_session and ensures db.session is not used.
* Only the tests for views use db.session; other tests use the db_session
fixture to access the database.
* All arguments named session that are used for database access are renamed
to db_session.
* Functions model_tests_init_data, reuse_component_init_data and
reuse_shared_userspace_init_data, which create fixture data for
tests, are converted from plain functions called inside setup_method
or a test method into pytest fixtures. The reason for this conversion
is to reuse the ``db_session`` fixture rather than create a new
session, which also benefits the whole test suite by reducing the
number of SQLAlchemy session objects.
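A minimal sketch of what make_db_session could look like under these rules;
the pool option values are illustrative, not the real MBS defaults:
```python
from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


@contextmanager
def make_db_session(database_uri):
    """Create a plain SQLAlchemy session for non-REST-API code paths."""
    engine_kwargs = {}
    if database_uri.startswith("postgresql"):
        # Connection pool options only apply to the PostgreSQL backend.
        engine_kwargs.update(pool_size=10, max_overflow=20, pool_recycle=3600)
    engine = create_engine(database_uri, **engine_kwargs)
    session = sessionmaker(bind=engine)()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
        engine.dispose()


# Backend code then uses it like:
#     with make_db_session("sqlite:///mbs.db") as db_session:
#         handler(conf, db_session, ...)
# while Flask views keep using db.session from Flask-SQLAlchemy.
```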
Signed-off-by: Chenxiong Qi <cqi@redhat.com>