Experimental Tests¶
Note
Experimental tests may be moved to the default test section at some point. At that time their names would change, and the old names would become deprecated (though for a time, only with a deprecation warning).
test_all_models_register_on_metadata¶
Diffs the set of tables registered by alembic's env.py versus the full set of tables found throughout your models package/module.
Enabling all_models_register_on_metadata (TL;DR)¶
You can enable this test with no configuration, in which case it will attempt to identify the source module from which the env.py is loading its MetaData and automatically search that module/package:
# pyproject.toml
[tool.pytest.ini_options]
pytest_alembic_include_experimental = 'all_models_register_on_metadata'

# or setup.cfg/pytest.ini
[pytest]
pytest_alembic_include_experimental = all_models_register_on_metadata
Or you can manually import and execute the test somewhere in your own tests. Using this mechanism, you can circumvent the automatic detection and provide the module/package directly:
from pytest_alembic import tests

def test_all_models_register_on_metadata(alembic_runner):
    tests.experimental.test_all_models_register_on_metadata(alembic_runner, 'package.models')
How all_models_register_on_metadata works¶
The problem this test attempts to solve is best described with an example. Consider the following package structure:
package/
    models/
        __init__.py
        foo.py
        bar.py
        baz.py
    other_packages/
        other_modules.py
migrations/
    env.py
Next, a typical package containing a MetaData or declarative_base and models or tables. Yours may look superficially different from ours, but you will almost certainly define your base, and either define or import any models or tables after its definition (here, in package/models/__init__.py):
import sqlalchemy
from sqlalchemy import Column, types
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

from package.models import (
    foo,
    bar,
)
The specifics of the table definitions are not particularly important, so we'll omit bar.py and baz.py (imagine they're essentially identical!), but here's foo.py:
from sqlalchemy import Column, types

from package.models import Base

class Foo(Base):
    __tablename__ = "foo"

    id = Column(types.Integer(), autoincrement=True, primary_key=True)
Finally, an excerpt from what is commonly autogenerated by running alembic init:
...
from package.models import Base

target_metadata = Base.metadata

...
with connectable.connect() as connection:
    context.configure(connection=connection, target_metadata=target_metadata)
...
And now we get to the crux of the problem.

A keen eye may have noticed that baz is not being imported above, and that's not a mistake! Elsewhere in your code (other_packages/other_modules, for example) you will likely import all of your models at some point. So when you go to actually use the models, you may not even notice that anything is wrong. However, as far as alembic is concerned:
1. It will load the env.py.
2. env.py only imports package.models (which notably omits package.models.baz!)
3. Base/Base.metadata will therefore only have the foo and bar tables registered on it.
So when you go to run alembic revision --autogenerate, it will be unaware of the "baz" table and either omit its creation or suggest it be dropped if you had already created it.
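The registration mechanics can be seen directly in the interpreter: a declaratively mapped class only lands on the shared MetaData as a side effect of its module actually being executed. A minimal self-contained sketch (condensing the package above into one file, and using the sqlalchemy.orm.declarative_base import path available in SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, types
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# Defining (or importing a module that defines) a model registers its
# table on Base.metadata as a side effect of class creation.
class Foo(Base):
    __tablename__ = "foo"

    id = Column(types.Integer(), autoincrement=True, primary_key=True)

class Bar(Base):
    __tablename__ = "bar"

    id = Column(types.Integer(), autoincrement=True, primary_key=True)

# A "baz" module that was never imported contributes nothing, so alembic
# (which only ever sees Base.metadata) cannot know its table exists.
print(sorted(Base.metadata.tables))  # ['bar', 'foo']
```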
This test is meant to be a lint against such scenarios, and will fail in any case where there is no direct import of any tables defined on a MetaData during the course of executing the env.py through alembic.
Note
The original inspiration for this test was actually a refactor which changed some pre-existing imports around. This led to an already created table no longer being incidentally imported (somewhere else in the codebase!) during the normal course of importing our equivalent of package.models.

This immediately resulted in an --autogenerate run suggesting that the table be dropped, since alembic assumes you've deleted the model entirely!
test_downgrade_leaves_no_trace¶
Attempts to ensure that the downgrade for every migration precisely undoes the changes performed in the upgrade.
Enabling downgrade_leaves_no_trace (TL;DR)¶
# pyproject.toml
[tool.pytest.ini_options]
pytest_alembic_include_experimental = 'downgrade_leaves_no_trace'

# or setup.cfg/pytest.ini
[pytest]
pytest_alembic_include_experimental = downgrade_leaves_no_trace
Or you can manually import and execute the test somewhere in your own tests:
from pytest_alembic import tests

def test_downgrade_leaves_no_trace(alembic_runner):
    tests.experimental.test_downgrade_leaves_no_trace(alembic_runner)
How downgrade_leaves_no_trace works¶
This test works by attempting to produce two autogenerated migrations.
The first is the comparison between the original state of the database before the given migration’s upgrade occurs, and the MetaData produced by having performed the upgrade.
This should approximate the autogenerated migration that alembic would have generated to produce your upgraded database state itself.
The second is the comparison between the state of the database after having performed the upgrade -> downgrade cycle for this revision, and the same MetaData used in the first comparison.
This should approximate what alembic would have autogenerated if you had actually performed the downgrade on your database.
In the event these two autogenerations do not match, it implies that your upgrade -> downgrade cycle produces a database state which is different (enough for alembic to detect) from the state of the database without having performed the migration at all.
Note
This isn't perfect! Alembic autogeneration will not detect many kinds of changes! If you encounter some scenario in which this does not detect a change you'd expect it to, alembic already has extensive abilities to customize and extend its autogeneration capabilities.