
Provide "downgrade" functionality to migrate and make dandi-cli use migrate! #343

@yarikoptic

While trying to release a new dandi-schema and reviewing dandi-cli test failures in

I decided to review what we have ATM:

  • a way to "upgrade" in dandischema::migrate, where we provide the necessary transformations on a metadata dict to go from a prior schema version to the current (higher only!) one
    • we refuse to "downgrade" (see the sketch just after this list)
  • the client refuses to upload if the installed dandischema's schema version is not identical to the server's:
ERROR tests/test_upload.py::test_upload_sync[False-False] - dandi.exceptions.SchemaVersionError: Server requires schema version 0.6.10; client only supports 0.7.0.  You may need to upgrade dandi and/or dandischema.
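
For reference, this is roughly what the refusal looks like today (a sketch from memory; the exact check and message in dandischema may differ):

```python
from dandischema.consts import DANDI_SCHEMA_VERSION
from dandischema.metadata import migrate

# Metadata already stamped with the current (newer) schema version ...
meta = {"schemaVersion": DANDI_SCHEMA_VERSION}

# ... cannot be taken to an older one: migrate() only goes "up" and
# (from memory) raises a plain ValueError when asked to go "down".
try:
    migrate(meta, to_version="0.6.10", skip_validation=True)
except ValueError as exc:
    print(f"refused: {exc}")
```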
```
❯ git grep -p 'migrate('
dandiapi/api/management/commands/migrate_published_version_metadata.py=def migrate_published_version_metadata(*, dandiset: str, published_version: str, to_version: str):
dandiapi/api/management/commands/migrate_published_version_metadata.py:        metanew = migrate(metadata, to_version=to_version, skip_validation=False)
dandiapi/api/management/commands/migrate_version_metadata.py=def migrate_version_metadata(*, to_version: str):
dandiapi/api/management/commands/migrate_version_metadata.py:            metanew = migrate(metadata, to_version=to_version, skip_validation=True)
web/src/migrate.ts=const __dirname = path.dirname(__filename);
web/src/migrate.ts:async function migrate(version: string) {
web/src/migrate.ts=if (process.argv.length !== 3) {
web/src/migrate.ts:migrate(process.argv[2]);
```

@dandi/dandi-archive - when are those migrations triggered? (IIRC we might have done some migrations in the early days, but not recently.)

But then a question also comes to @dandi/dandi-archive -- do we even need to downgrade for upload to dandi-archive, or would it swallow anything and just keep failing to validate it?

I was assuming migration would be done by the server upon the client providing metadata with a prior schema version.

I think we can actually very easily implement downgrade functionality (it should be easier than upgrade, really!): drop added fields, or mutate them back where feasible. We could even refuse to downgrade if it would lead to a loss of metadata, such as "releaseNotes" here, and otherwise silently downgrade through the range of "compatible transformations". We could also add a dedicated exception such as SchemaDowngradeError (with SchemaMigrateError(ValueError) as the base class) to raise whenever we encounter a value which we cannot downgrade.
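
Something along these lines, say (all names and the field handling below are hypothetical, just to illustrate the shape):

```python
class SchemaMigrateError(ValueError):
    """Base for any metadata migration problem, up or down (proposed)."""


class SchemaDowngradeError(SchemaMigrateError):
    """Raised when a downgrade would lose metadata we cannot drop silently (proposed)."""


def _downgrade_0_7_0_to_0_6_10(meta: dict) -> dict:
    """Hypothetical single downgrade step: undo what 0.7.0 added."""
    meta = dict(meta)
    # A populated field whose removal would lose information -> refuse.
    if meta.get("releaseNotes"):
        raise SchemaDowngradeError(
            "Cannot downgrade to 0.6.10 without losing 'releaseNotes'"
        )
    # Fields added with defaults (or left empty) can simply be dropped.
    meta.pop("releaseNotes", None)
    meta["schemaVersion"] = "0.6.10"
    return meta
```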

Then we would ensure that the client downgrades the metadata to the version expected by the server before upload; only if it can't would we raise an error.
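
On the dandi-cli side that could look roughly like this (prepare_for_upload is a made-up name; SchemaVersionError is the existing dandi exception from the failure above):

```python
from dandi.exceptions import SchemaVersionError
from dandischema.consts import DANDI_SCHEMA_VERSION
from dandischema.metadata import migrate


def prepare_for_upload(meta: dict, server_schema_version: str) -> dict:
    """Hypothetical pre-upload hook: hand the server metadata in the version it expects."""
    if server_schema_version == DANDI_SCHEMA_VERSION:
        return meta
    try:
        # Assumes migrate() has learned to go "down" as proposed above;
        # today it refuses any older target outright.
        return migrate(meta, to_version=server_schema_version, skip_validation=True)
    except ValueError as exc:  # would be SchemaDowngradeError with the proposal
        raise SchemaVersionError(
            f"Server requires schema version {server_schema_version}; "
            f"cannot downgrade metadata from {DANDI_SCHEMA_VERSION} ({exc})"
        ) from exc
```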

I think it should even be feasible to review our schema changes and add a number of such downgrade steps to "retroactively" cover going back through past versions.
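
i.e. keep a small registry of per-version downgrade steps and chain them, e.g. (hypothetical, building on the sketch above):

```python
# Hypothetical registry of downgrade steps, keyed by the version they undo;
# entries for older releases could be added retroactively one by one.
DOWNGRADERS = {
    "0.7.0": _downgrade_0_7_0_to_0_6_10,
    # "0.6.10": _downgrade_0_6_10_to_0_6_9,  # and so on, filled in over time
}


def downgrade(meta: dict, to_version: str) -> dict:
    """Chain downgrade steps from meta's schemaVersion down to to_version."""
    while meta["schemaVersion"] != to_version:
        step = DOWNGRADERS.get(meta["schemaVersion"])
        if step is None:
            raise SchemaDowngradeError(
                f"No downgrade path from {meta['schemaVersion']} toward {to_version}"
            )
        meta = step(meta)
    return meta
```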

WDYT @satra @candleindark @waxlamp? Maybe I do not see some gotcha which would disallow us from doing so.
