Development Overview: The Basics
The table below describes files and folders used by the Migration Utility along with a description and purpose for each resource.
||Overview Item||Needed by Whom?||Brief Description & Purpose||
(console application created via dotnet publish)
The table below outlines some important conventions in the as-shipped Migration Utility code.
Database upgrades are performed in place rather than creating a new database copy
As a secondary concern, this upgrade method was chosen to ease the upgrade process for a Cloud-based ODS (e.g., on Azure).
Sequence of events that occur during upgrade
Specifics differ for each version, but in general the upgrade sequence executes as follows:
Minimize the number of scripts with complex dependencies on other scripts in the same directory/upgrade step.
One script per table in each directory, where possible
Scripts are named in the format:
#### TableName [optional_tags].sql
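As an illustration of the convention, a single upgrade-step directory might contain files like the following (table names and numeric prefixes here are hypothetical examples, not actual shipped scripts; the numeric prefix controls execution order):

```
0010 School.sql
0020 Student.sql
0030 StudentSchoolAssociation [data-migration].sql
```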
This convention does not apply to operations that are performed dynamically.
Troubleshooting and Timeout Prevention
Custom, unknown extensions on the ODS are common. As part of the process of upgrading a highly customized ODS, an installer is likely to run into a SQL exception somewhere in the middle of the upgrade (usually caused by a foreign key dependency, a schema-bound view, etc.).
In general, we do not want to modify an unknown/custom extension on an installer's behalf to try to prevent this from happening. It is important that an installer be aware of each and every change applied to their custom tables. Migration of custom extensions will be handled by the installer.
Considering the above, in the event an exception does occur during upgrade, we want to make the troubleshooting process as easy as possible. If an exception is thrown, an installer should immediately be able to tell:
Many issues may be fixable from the above information alone. If more detail is needed, the installer can view the code in the referenced script file. By separating script changes by table, we make an effort to ensure that there are only a few lines to look through (rather than hundreds).
In addition, each script will be executed in a separate transaction. Operations such as index creation can take a long time on some tables in a large ODS. Splitting the code into separate transactions helps prevent unexpected timeout events.
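The transaction-per-script behavior maps directly onto DbUp's builder API. A minimal sketch follows; the connection string and script folder are placeholders, not the Migration Utility's actual configuration:

```csharp
using DbUp;

class UpgradeRunner
{
    static int Main()
    {
        // Placeholder connection string -- not the utility's real configuration.
        var connectionString = "Server=.;Database=EdFi_Ods;Integrated Security=true";

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsFromFileSystem(@"Scripts\Upgrade") // hypothetical folder
            .WithTransactionPerScript() // each script commits (or rolls back) on its own
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        return result.Successful ? 0 : 1;
    }
}
```

`WithTransactionPerScript()` is what isolates a long-running script (such as an index rebuild) from the rest of the upgrade: a failure or timeout rolls back only that script's transaction.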
The major downside of this approach is the large number of files it can produce. For example, the
With updates becoming more frequent in the future, future versions should not be impacted as heavily.
Most change logic is held in SQL scripts (as of V3)
Given this advantage, effort was made to ensure that each part of the migration tool (console utility, library, integration tests) could be replaced individually as needed
The current upgrade utility contains a library that uses DbUp to drive the upgrade process. In the future, if/when this tool no longer suits our needs, we should be able to take the existing scripting and port it over to an alternative upgrade tool (such as RoundhousE), or even a custom-built tool if the need arises.
This convention could (and should) change in the future if upgrade requirements become too complex to execute from SQL scripting alone.
Two types of data validation/testing options
Prevent data loss
The first type of validation (dynamic, SQL-based) is executed on data that we know should never change during the upgrade.
The second type of data validation, integration-test based, is used to test the logic and transformations where we know the data should change:
Together, the two validation types (validation of data that changes, and validation of data that does not change) can be used to create test coverage wherever it is needed for a given ODS upgrade.
The dynamic validation is performed via reusable stored procedures that are already created and available during upgrade.
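To illustrate the shape of such a procedure, here is a hedged sketch of a reusable row-count check; the procedure name, schema, and signature are hypothetical and may not match the as-shipped validation procedures:

```sql
-- Hypothetical sketch only; the shipped procedure names and signatures may differ.
CREATE PROCEDURE dbo.CompareTableRowCount
    @TableName sysname,
    @ExpectedRowCount bigint
AS
BEGIN
    DECLARE @ActualRowCount bigint;
    DECLARE @Sql nvarchar(max) =
        N'SELECT @cnt = COUNT_BIG(*) FROM ' + QUOTENAME(@TableName) + N';';

    -- Capture the current row count for the named table.
    EXEC sp_executesql @Sql, N'@cnt bigint OUTPUT', @cnt = @ActualRowCount OUTPUT;

    IF @ActualRowCount <> @ExpectedRowCount
        RAISERROR(N'Row count mismatch on %s: expected %I64d, found %I64d.',
                  16, 1, @TableName, @ExpectedRowCount, @ActualRowCount);
END
```

A caller would record the expected count before the upgrade begins and invoke the procedure afterward, failing the upgrade run if the counts diverge.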
See scripts in the "*