Use dogpile.cache to avoid repeatedly making the same queries to the MBS.
We frequently query a list of modules and then, later in the build, go
back and ask for details of those modules again by nsvc, so special-case
querying a single module by nsvc and prime the cache for it from lists
of results.
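A minimal sketch of the idea, assuming an in-memory dogpile.cache region and
a hypothetical mbs_get_json() helper standing in for the real MBS HTTP query;
the actual MBSResolver caching may be wired differently:

```python
from dogpile.cache import make_region

# Illustrative cache region; backend and expiration are configuration details.
region = make_region().configure("dogpile.cache.memory", expiration_time=600)


def get_module_by_nsvc(nsvc):
    # Single-module lookups by nsvc are served from the cache when possible;
    # mbs_get_json() is a hypothetical stand-in for the real MBS HTTP query.
    return region.get_or_create(
        "module:" + nsvc, lambda: mbs_get_json("module-builds", nsvc=nsvc))


def get_modules(name, stream):
    modules = mbs_get_json("module-builds", name=name, stream=stream)
    # Prime the per-nsvc cache from the list result so that later lookups
    # by nsvc do not hit the MBS again.
    for module in modules:
        nsvc = "%(name)s:%(stream)s:%(version)s:%(context)s" % module
        region.set("module:" + nsvc, module)
    return modules
```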
Add new tests and modify existing tests so that test_mbs.py entirely
covers MBSResolver.py. To make the tests simpler, replace the explicit
expectations about the MBS API request => response sequence with a stub
local implementation of the MBS API.
Fix some oddities in the DBResolver implementation of
get_compatible_base_module_modulemds() and make the MBSResolver version -
which was previously just buggy - match that. (Tests for the MBSResolver
version are added in a subsequent commit.)
* If an empty virtual_streams argument was passed in, *all* streams
were considered compatible. Throw an exception in this case - it
should be considered an error.
* If stream_version_lte=True, but the stream of the base module
is not in the FOOx.y.z form, then throw an exception. This was
previously treated like stream_version_lte=False, which is just
a recipe for confusion and mistakes. (Both checks are sketched below.)
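A minimal sketch of the two checks, assuming ValueError is an acceptable
exception type and using an illustrative helper name:

```python
import re


def check_base_module_stream(virtual_streams, stream, stream_version_lte):
    # Empty virtual_streams would otherwise make *all* streams compatible.
    if not virtual_streams:
        raise ValueError("virtual_streams must not be empty")
    # stream_version_lte only makes sense for streams in the FOOx.y.z form.
    if stream_version_lte and not re.match(r"^.*\d+\.\d+\.\d+$", stream):
        raise ValueError(
            "stream_version_lte requires a stream in the FOOx.y.z form, "
            "got %r" % stream)
```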
test_get_reusable_module_use_latest_build() is rewritten to
comprehensively test all possibilities, including the case that changed
above.
test_add_default_modules_compatible_platforms() is changed to run
under allow_only_compatible_base_modules=False, since it expected
Fedora-style virtual streams (versions not in FOOx.y.z form, all
sharing the same stream), which doesn't make sense with
allow_only_compatible_base_modules=True.
This also includes `from __future__ import absolute_import`
in every file so that the imports are consistent in Python 2 and 3.
The Python 2 tests fail without this.
This moves the code used by the backend and API to common/submit.py,
the code used just by the API to web/submit.py, and the code used
just by the backend to scheduler/submit.py.
This puts backend-specific code in either the builder or scheduler
subpackage. This puts API-specific code in the new web subpackage.
Lastly, any code shared between the API and the backend is placed in the
common subpackage.
This merges the configuration from conf/config.py to
module_build_service/config.py. This also greatly simplifies the logic
in `init_config`. Additionally, `init_config` is no longer aware of
Flask. This will allow us to eventually break up the configuration
between the API and the backend.
Please note that this patch does not change how the database session is
used in MBS. In the frontend, the database session is still managed by
Flask-SQLAlchemy, that is, db.session. The backend, which runs the event
handlers, has its own database session created directly from the
SQLAlchemy Session API.
This patch aims to reduce the number of scoped_session objects created
when the original make_db_session function is called. For the technical
details, please refer to the SQLAlchemy documentation on
Contextual/Thread-local Sessions.
As a result, a global scoped_session is accessible from the
code running inside the backend, both the event handlers and the
functions called from them. The library code shared by the frontend and
backend, like the resolvers, is unchanged.
Similarly, db.session is only used to recreate the database for every test.
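A rough sketch of the global scoped_session approach, with an illustrative
database URI and helper name (not the exact MBS code):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker


def make_global_session(database_uri):
    """Create the one thread-local session registry shared by the backend."""
    engine = create_engine(database_uri)
    return scoped_session(sessionmaker(bind=engine))


# Created once at import time; every handler and helper then reuses it
# instead of building a new scoped_session per make_db_session call.
db_session = make_global_session("sqlite:///mbs.db")  # URI is illustrative
```

Because scoped_session is a thread-local registry, the event handlers and the
functions they call all see the same Session object within a given thread.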
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Query Koji for the real stream name of each module and keep only those matching
the requested `stream`.
This needs to be done because MBS stores the stream name in the "version" field in Koji,
but the "version" field cannot contain the "-" character, so MBS replaces every "-"
with "_". This makes it impossible to reconstruct the original stream name from the
"version" field.
We therefore need to ask Koji for the real, original stream name here and filter the
modules based on it.
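For example, the streams "rhel-8.0.0" and "rhel_8.0.0" both end up as the Koji
version "rhel_8.0.0", so the version field alone cannot distinguish them. A
sketch of the filtering, assuming the original stream name is recorded in the
Koji build's typeinfo metadata (the exact key path is an assumption):

```python
def filter_by_real_stream(koji_session, builds, stream):
    """Keep only the builds whose real stream name matches the requested one."""
    filtered = []
    for build in builds:
        binfo = koji_session.getBuild(build["build_id"])
        # For module builds the original stream name is recorded in the
        # typeinfo metadata; the key path below is an assumption.
        real_stream = binfo["extra"]["typeinfo"]["module"]["stream"]
        if real_stream == stream:
            filtered.append(build)
    return filtered
```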
If KojiResolver is enabled for buildrequired base module and
MBSResolver is used, then `MBSResolver.get_buildrequired_modulemds`
will use KojiResolver to get the list of buildrequired module builds.
Otherwise it uses the current behavior.
To implement this, the `KojiResolver.get_buildrequired_modules` was
split into two methods (sketched below):
- `get_buildrequired_koji_builds` returns buildrequired Koji builds.
- `get_buildrequired_modules` calls `get_buildrequired_koji_builds`
and finds the corresponding ModuleBuilds in MBS DB.
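Roughly, the split looks like this; the method bodies are simplified, and the
import paths, the Koji session attribute, the ModuleBuild lookup and the Koji
release format ("<version>.<context>") are all assumptions:

```python
from module_build_service import models
from module_build_service.resolver.base import GenericResolver


class KojiResolver(GenericResolver):
    def get_buildrequired_koji_builds(self, name, stream, base_module_mmd):
        """Return the Koji builds buildrequired through the base module's tag."""
        tag = self._buildrequired_koji_tag(base_module_mmd)  # hypothetical helper
        return self.session.listTagged(tag, inherit=True, type="module", package=name)

    def get_buildrequired_modules(self, name, stream, base_module_mmd):
        """Map the Koji builds back to ModuleBuild rows in the MBS database."""
        modules = []
        for build in self.get_buildrequired_koji_builds(name, stream, base_module_mmd):
            # For module builds, Koji's "release" is assumed to hold "<version>.<context>".
            version, context = build["release"].split(".", 1)
            module = models.ModuleBuild.get_build_from_nsvc(
                self.db_session, name, stream, version, context)
            if module:
                modules.append(module)
        return modules
```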
In this commit, when the component reuse code finds out that the base module
uses KojiResolver, it uses the `KojiResolver.get_buildrequired_modules` method
to find the possible modules to reuse and limits the original query to just
the IDs of these modules (see the sketch below).
In order to do that, this commit splits the original
`KojiResolver.get_buildrequired_modulemds` into two methods:
- `get_buildrequired_modules`, which returns the ModuleBuilds.
- `get_buildrequired_modulemds`, which calls `get_buildrequired_modules`
and returns the modulemd metadata.
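A sketch of how the reuse query gets constrained; the import paths and the
shape of the GenericResolver.create call are assumptions:

```python
from module_build_service import models
from module_build_service.resolver import GenericResolver


def limit_reuse_query_by_koji_builds(db_session, conf, query, name, stream,
                                     base_module_mmd):
    """Constrain an existing reuse query to modules buildrequired via the Koji tag."""
    koji_resolver = GenericResolver.create(db_session, conf, backend="koji")
    buildrequired = koji_resolver.get_buildrequired_modules(
        name, stream, base_module_mmd)
    module_ids = [module.id for module in buildrequired]
    # The original reuse query stays as it was; we only add an IN filter
    # on the IDs of the buildrequired modules.
    return query.filter(models.ModuleBuild.id.in_(module_ids))
```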
This also removes the outdated comments around authorship of each
file. If there is still interest in this information, one can just
look at the git history.
For KojiResolver, this method always returns an empty list. The compatible modules are
defined by the Koji tag inheritance, so there is no need to find the compatible
base modules on the MBS side.
This makes `mse.get_base_module_mmds` ignore virtual streams and just use
the input base module as the only module, without finding the compatible
base modules.
This commit:
- Adds the KojiResolver class and KojiResolver tests.
- Changes the GenericResolver and its subclasses to pass base_module_mmds
instead of base_module_nsvc to get_buildrequired_modulemds. This is needed
because KojiResolver needs to access the XMD section of the base module
(see the sketch below).
- Implements KojiResolver.get_buildrequired_modulemds to ask Koji for the list
of modules tagged in the Koji tag and return their modulemds.
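The XMD access that motivates passing the whole mmd, sketched with assumed key
names and assuming get_xmd() returns a plain dict:

```python
def _koji_tag_with_modules(base_module_mmd):
    """Return the Koji tag carrying the tagged module builds, or None."""
    xmd = base_module_mmd.get_xmd()
    # Both the "mbs" section and the key name below are assumptions here.
    return xmd.get("mbs", {}).get("koji_tag_with_modules")
```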
The `MBSResolver.get_buildrequired_modulemds` did not try to load
local module builds and always just queried the remote MBS instance.
This commit fixes it by using the local module builds when available.
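A sketch of the fix inside MBSResolver, assuming ModuleBuild.local_modules()
is the lookup for locally built modules; the remote-query helper is
hypothetical:

```python
def get_buildrequired_modulemds(self, name, stream, base_module_mmd):
    # Prefer module builds imported or built locally ...
    local_builds = models.ModuleBuild.local_modules(self.db_session, name, stream)
    if local_builds:
        return [build.mmd() for build in local_builds]
    # ... and only fall back to querying the remote MBS instance otherwise.
    return self._query_remote_modulemds(name, stream, base_module_mmd)
```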
The original motivation for this refactor is to reuse make_module and
drop TestMMDResolver._make_mmd. Some tests only need a modulemd created,
and some tests also need that modulemd stored into the database as a
module build. The problem is that db_session had to be passed to
make_module even when there was no need to store anything into the database.
Major changes in this patch:
* Argument db_session is optional.
* Arguments requires_list and build_requires_list are replaced by a
single argument, dependencies, which is a list of groups of requires and
buildrequires.
* A new make_module_in_db is created for conveniently creating the new
modulemd and storing it in the database.
* Tests are updated to use the new make_module and make_module_in_db
(usage sketched below).
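Hypothetical usage of the two helpers; the import location and the exact
keyword names are assumptions based on the description above:

```python
from tests import make_module, make_module_in_db  # assumed helper location

DEPS = [{
    "requires": {"platform": ["f28"]},
    "buildrequires": {"platform": ["f28"]},
}]


def test_something(db_session):
    # Just the modulemd object, no database involved.
    mmd = make_module("testmodule:master:20180205135154:9c690d0e",
                      dependencies=DEPS)

    # The same module, but also stored as a ModuleBuild row through the
    # shared db_session fixture.
    build = make_module_in_db("testmodule:master:20180205135154:9c690d0e",
                              dependencies=DEPS, db_session=db_session)
```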
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This patch separates the use of the database session in the different MBS
components and does not mix them together.
In general, MBS components can be separated into the REST API (implemented
with Flask), the non-REST API parts including the backend build workflow
(implemented as a fedmsg consumer on top of fedmsg-hub and running
independently), and the library code shared by them. As a result, there are
two kinds of database session used in MBS: one is created and managed by
Flask-SQLAlchemy, and the other is created from the SQLAlchemy Session API
directly. The goal of this patch is to ensure the session objects are used
properly in the right places.
All the changes follow these rules:
* REST API related code uses the session object db.session created and
managed by Flask-SQLAlchemy.
* Non-REST API related code uses a session object created with the SQLAlchemy
Session API. The make_db_session function does that (see the sketch below).
* Shared code avoids creating a new session object as much as possible.
Instead, it accepts a db_session argument.
The first two rules are applicable to tests as well.
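A minimal sketch of make_db_session under these rules; the pool option values
and the config attribute name are illustrative:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


def make_db_session(conf):
    """Create a database session outside of Flask, via the SQLAlchemy Session API."""
    pool_options = {}
    if conf.sqlalchemy_database_uri.startswith("postgresql"):
        # Connection pool options only apply to the PostgreSQL backend.
        pool_options = {"pool_size": 10, "max_overflow": 20}
    engine = create_engine(conf.sqlalchemy_database_uri, **pool_options)
    return sessionmaker(bind=engine)()
```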
Major changes:
* Switch tests back to run with a file-based SQLite database.
* make_session is renamed to make_db_session, and SQLAlchemy connection pool
options are applied for the PostgreSQL backend.
* Frontend Flask-related code uses db.session.
* Code shared by the REST API and the backend build workflow accepts an
SQLAlchemy session object as an argument. For example, a resolver class is
constructed with a database session, and some functions accept a database
session argument.
* Build workflow related code uses the session object returned from
make_db_session and ensures db.session is not used.
* Only tests for views use db.session; other tests use the db_session fixture
to access the database.
* All arguments named session that are used for database access are renamed
to db_session.
* The functions model_tests_init_data, reuse_component_init_data and
reuse_shared_userspace_init_data, which create fixture data for
tests, are converted from functions called inside setup_method or a
test method into pytest fixtures. The reason for this conversion is
to use the ``db_session`` fixture rather than create a new session,
which also benefits the whole test suite by reducing the number of
SQLAlchemy session objects.
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
Most of the issues are caused by the use of SQLAlchemy database session. Some
inline comments describe the issues in detail.
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
This also moves the methods load_mmd and load_mmd_file to
module_build_service.utils.general.
This also removes some MSE unit tests with a mix of positive and
negative streams since this is not supported in libmodulemd v2. The
user will be presented with a syntax error if they try to submit
such a modulemd file.
GenericResolver.extract_modulemd is not removed, but deprecated. Calling it
results in a deprecation message being printed. Any new code should call
load_mmd instead.
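Roughly, the deprecation shim looks like this; the log message wording is
illustrative:

```python
import logging

from module_build_service.utils.general import load_mmd

log = logging.getLogger(__name__)


class GenericResolver(object):
    @staticmethod
    def extract_modulemd(yaml):
        # Deprecated: kept for backwards compatibility only.
        log.warning(
            "GenericResolver.extract_modulemd is deprecated, call load_mmd instead")
        return load_mmd(yaml)
```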
Signed-off-by: Chenxiong Qi <cqi@redhat.com>
There are following changes introduced in this commit:
- The `koji_tag` of module builds imported from the local repositories
is now in the `repofile:///etc/yum.repos.d/some.repo` format, recording
the repository from which the module was imported into the local MBS DB.
- The `koji_tag` of the fake base module is set to an empty `repofile://`,
and in `MockModuleBuilder` the `conf.base_module_repofiles` list
is used as the source of the repositories defining the platform. We can't
simply use a single repository, because there might be fedora.repo,
fedora-update.repo and so on.
- The list of default .repo files for the platform is passed using the
`-r` switch of the `mbs-manager build_module_locally` command.
- The LocalResolver (a subclass of DBResolver) is added, which is used
to resolve the build dependencies when building modules locally offline.
- The `MockModuleBuilder` enables the buildrequired modules and the
repositories they come from in the mock config.
With this commit, it is possible to build testmodule locally
without any external infra.
This commit introduces a new to_text_type helper and calls it
on the return value of every mmd.dumps() call. That way, we always
end up with a proper unicode string representation on both Python
major versions.
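A minimal sketch of such a helper, assuming UTF-8 encoded bytes on Python 2
(the real implementation may differ):

```python
import six


def to_text_type(s):
    """Return `s` as a text (unicode) string on both Python 2 and 3."""
    if isinstance(s, six.binary_type):
        return s.decode("utf-8")
    return s
```

Call sites then become to_text_type(mmd.dumps()).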
This commit also adds a unicode character to the description of all
the yaml files we use in the tests, so we can be sure MBS handles
unicode characters properly.
This might be a temporary fix, depending on the result of the discussion
at https://github.com/fedora-modularity/libmodulemd/issues/184.
Imagine we have the "platform:f29.0.0" and "platform:f29.1.0" base modules.
We also have a "DBI" module that we want to build against "platform:f29.1.0".
This "DBI" module depends on a "perl" module which is only built against
"platform:f29.0.0".
Currently, the DBI build would fail to resolve its dependencies, because
it wouldn't find the "perl" module, which is built against a different
platform stream.
This PR changes the MSE code to include buildrequired module builds built
against all the compatible platform streams.
It does so by introducing the following changes:
- The MSE code uses the new get_base_module_mmds() method to find all the
compatible platform modules. This needed new methods in DBResolver
and MBSResolver.
- For each buildrequired module defined by name:stream, the MSE code then
finds the particular NSVC built against each compatible platform module.
A side effect of these changes is that every module now must buildrequire
some base module.
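Sketched, the resolution now loops over all compatible base module streams
instead of just the requested one; the function names follow the description
above, but the exact signatures and the example stream values are assumptions:

```python
def expand_buildrequires(resolver, base_mmds, buildrequires):
    """Collect buildrequired modulemds across all compatible platform streams.

    base_mmds comes from the new get_base_module_mmds() method; in the
    scenario above it would contain platform:f29.1.0 and platform:f29.0.0,
    so the perl module built against f29.0.0 is still found.
    """
    mmds = {}
    for base_mmd in base_mmds:
        for name, streams in buildrequires.items():  # e.g. {"perl": ["5.26"]}
            for stream in streams:
                for mmd in resolver.get_buildrequired_modulemds(name, stream, base_mmd):
                    mmds.setdefault((name, stream), []).append(mmd)
    return mmds
```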