There are several kinds of database migration:
- Moving a database.
- Upgrading a database.
- Integrating two or more databases.
Moving a database means relocating it to a more powerful server running the same version. If the database structure can be kept unchanged, duplicating the database as a physical standby could be the fastest and almost risk-free solution. The best part is that a physical standby keeps the data synchronized at all times. When D-Day comes, just perform a switchover and retire the source database.
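If you take the physical standby route, the cut-over itself can be a Data Guard switchover. The following is only a minimal sketch of the manual (non-Broker) commands; in real life you would verify switchover readiness and redo transport first:
SQL> -- on the source (current primary)
SQL> alter database commit to switchover to physical standby with session shutdown;
SQL> -- on the duplicated database (current standby)
SQL> alter database commit to switchover to primary;
SQL> alter database open;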
Upgrading a database means moving it from one server to another running a higher version. Here we are talking about, for example, moving a 10g database to a server already prepared with 11g software, so the move accomplishes the upgrade. This kind of migration happens between two machines.
Upgrading in place on the same machine involves many uncertainties, especially when upgrading to a major version. Most lead architects would not approve that approach if spare servers are still available.
Integrating several databases into one is a huge piece of work. There could be many name collisions to resolve, tablespaces may need to be reorganized, schemas may be changed or merged, and some users may disappear or be created during the integration. This is the most complicated situation for migrating a database, and it needs to be planned completely in scope and down to the last detail.
Here I am going to introduce a systematic sequence of steps for a hybrid situation that mixes migration, upgrade and integration. Therefore, I will not go into RMAN or Data Pump usage in this post, only the arrangement of the steps.
There are three major phases involved to achieve our goal.
- Dictionary migration phase
- Entity migration phase
- Post migration phase
First, you should prepare all the metadata and make sure all the objects are valid and fit the new database. It's like building the house at the new location first when you move to a new place. I call this the dictionary migration phase.
After all objects are ready, you can furnish the new house by importing data and creating indexes. That's right, we create indexes after importing the data. I call this the entity migration phase.
After all entities are imported, we can check the new database to ensure everything has been completely migrated, for example by comparing the parameters and tables of the two databases to confirm the migration is successful.
A. Dictionary Migration Phase
- Create all TABLESPACE.
- Create all PROFILE.
- Create all ROLE.
- Create all USER (including granting SYSTEM and ROLE privileges as well).
- Assign tablespace quota (TABLESPACE_QUOTA) to users.
- Create DATABASE LINK, SEQUENCE, DIRECTORY and TYPE.
- Create all TABLE (including TABLE, COMMENT and SEQUENCE).
- Create all PROCEDURE.
- Create all SYNONYM.
- Create all other objects.
- Grant all OBJECT PRIVILEGE.
- Recompile all PROCEDURE.
- Request the AP team to verify all the program units.
- Compare the objects between the source and target databases.
- Disable referential constraints and triggers.
The target database may adopt a new storage policy better suited to the new machine. Therefore, the migration plan may rearrange all the segments into several new tablespaces. In that case, you must create the new tablespaces yourself.
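For example, a hedged sketch of creating one of the new tablespaces; the name, file path and sizes below are placeholders that should follow your own storage policy:
SQL> create tablespace app_data datafile '/u02/oradata/ORCL/app_data01.dbf' size 10g autoextend on next 1g maxsize 32767m;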
Profiles confine users' resource consumption and password policies. So, before creating the users, you should create the profiles first; if no profile is assigned when a user is created, the user falls into the DEFAULT profile, which is the one you should pay attention to.
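A minimal sketch of a profile; the name and limits are made-up values, not recommendations:
SQL> create profile app_profile limit sessions_per_user 50 idle_time 60 failed_login_attempts 10 password_life_time 90;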
The concept of a role is very similar to an OS group: you assign system and object privileges to a role according to predefined functions, then grant the role to the users who play that role. This saves time and management work compared with granting privileges to users individually.
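A sketch with hypothetical names:
SQL> create role app_rw;
SQL> grant create session, create table to app_rw;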
Profiles, roles and users are the most basic objects in database user management; most security policies are defined through these three kinds of objects. In the two-tier era, users were usually individual people who could be versatile, untrained and error-prone, hence we should confine their system privileges and prevent worse things from happening.
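Putting these pieces together, a hedged sketch of creating a user with the profile and role above (all names and the password are placeholders):
SQL> create user app_user identified by "Str0ngPassw0rd" default tablespace app_data temporary tablespace temp profile app_profile;
SQL> grant app_rw to app_user;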
Users may not have been assigned quotas on the necessary tablespaces, so you might have to do it manually. For example:
SQL> alter user sh quota 10g on example;
These four kinds of objects are relatively independent before tables are created, so you may leverage Data Pump to import them by listing only these four object types in the INCLUDE clause. However, the DDL of database links generated by database tools contains either no password or an encoded one, so it might not be executable in the target database. You should create database links one by one manually or via Data Pump.
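If you create database links by hand, the statement is simple; the link name, credentials and TNS alias below are placeholders:
SQL> create database link src_link connect to app_user identified by "TheRealPassword" using 'SRCDB';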
Since primary key and unique constraints trigger index building during data import, we should avoid defining those constraints at this moment. Only TABLE, COMMENT and SEQUENCE are defined in this step.
Please note that the main purpose of creating SEQUENCE in this step is only to allow the stored procedures to compile.
PROCEDURE here represents all kinds of program units: procedures, functions, triggers, types and packages. They are the core components that serve the business logic. You can try to compile all the program units in this step, but most of them will still be invalid at this moment.
Some triggers may affect the data loading and increase your migration time, so you can defer their creation to a more appropriate step.
A SYNONYM has no function by itself, but it can stand in for any core object in any SQL statement, so we create synonyms after all core objects are created.
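For example (hypothetical names):
SQL> create or replace synonym app_user.orders for app_owner.orders;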
Elements used for resource management and consolidation, such as Consumer Groups, Windows and Resource Plans, are not core objects of a database, so you can create them any time after the users are created. The creation order, however, is better kept as Consumer Groups, then Windows, then Resource Plans.
OBJECT PRIVILEGE grants build the relationships between users and objects and must be issued after all objects are created; they make some program units functional again, as they used to be.
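One hedged way to reproduce them is to generate the GRANT statements from the source data dictionary; the link name and owner list are assumptions and should match your application schemas:
SQL> select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||';' from dba_tab_privs@src_link where owner in ('APP_OWNER');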
After recompiling all the program units, some of them could still be INVALID due to the absence of external tables. You can record them and recompile them after all the data is imported.
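UTL_RECOMP can recompile everything in dependency order, and a quick query shows what remains invalid:
SQL> exec utl_recomp.recomp_serial;
SQL> select owner, object_type, object_name from dba_objects where status = 'INVALID' order by owner, object_type, object_name;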
The AP team can insert some dummy data to test their program units and make sure the environment on the target database is ready.
Several tools can do the comparison, such as SQL Developer and TOAD for Oracle. You can select the key object types to compare in those tools.
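If you prefer plain SQL, a rough sketch over the database link compares object counts per schema and type (src_link and the owner filter are assumptions):
SQL> select owner, object_type, count(*) from dba_objects@src_link where owner in ('APP_OWNER') group by owner, object_type
     minus
     select owner, object_type, count(*) from dba_objects where owner in ('APP_OWNER') group by owner, object_type;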
If you have already imported constraints and triggers, you have to disable them before entering the next phase; otherwise, you might get errors or much slower throughput while importing table data.
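A hedged way to produce the disable statements is to generate them from the dictionary (adjust the owner list, then run the spooled output):
SQL> select 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||';' from dba_constraints where constraint_type = 'R' and owner in ('APP_OWNER');
SQL> select 'alter trigger '||owner||'.'||trigger_name||' disable;' from dba_triggers where owner in ('APP_OWNER');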
B. Entity Migration Phase
- Import all table data.
- Add all CONSTRAINT.
- Recreate all SEQUENCE objects.
You can use the original exp/imp or Data Pump to import all table data from the source. Since we want the AP team to verify the data as soon as the database is opened read-only, we choose not to import index data in this step.
For small to medium sized databases, we can use a database link to migrate data from the source database to the target database.
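A minimal sketch of pulling one table through the link with a direct-path insert (table and link names are placeholders):
SQL> insert /*+ append */ into app_owner.orders select * from app_owner.orders@src_link;
SQL> commit;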
You might have noticed that we did not add any CONSTRAINT before this phase, because the data import runs faster without constraints. After all the data has been imported, you can add the constraints on the tables; primary key and unique key constraints will build their indexes at that point.
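For example, adding a primary key after the load; the names are hypothetical, and the USING INDEX clause lets you place the index where you want:
SQL> alter table app_owner.orders add constraint orders_pk primary key (order_id) using index tablespace app_idx;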
Since the SEQUENCE values may have been changed or reset for some reason, we should recreate the sequences at the specific values we want before the target database goes public. From then on, no testing data is allowed, only real incoming data; otherwise, the new transactions could fail to comply with the business logic.
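A sketch of recreating a sequence so that it restarts above the source's current value (names and numbers are placeholders):
SQL> drop sequence app_owner.orders_seq;
SQL> create sequence app_owner.orders_seq start with 1000001 increment by 1 cache 100;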
C. Post Migration Phase
- Allow all involved DBAs, developers and testers to verify their own schema.
- Create all required INDEX.
- Gather all statistics.
- Compare the data between the source and target databases.
- Open database to public.
The goal of the whole design in this approach is to shorten the data import and get the AP team involved as soon as possible to verify data completeness and ensure data integrity. After the database is verified, we can unblock the network to the public.
Please notice that, although we open the database for validation, no one should actually change the data in this step.
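One way to enforce this, if you can afford a short bounce, is to open the database read-only for the validation window:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;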
If you care about serviceability more than performance, it would be better not to create indexes before opening the target database; however, users might notice response times longer than expected while the indexes are being built.
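When you do build them, a hedged sketch that parallelizes the build and then restores the default degree (names and degree are placeholders):
SQL> create index app_owner.orders_cust_idx on app_owner.orders (customer_id) parallel 8 nologging;
SQL> alter index app_owner.orders_cust_idx noparallel;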
To support business queries and speed up the comparison between the source and target databases, we can gather statistics at the database level like this:
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS;
To make sure all the data has been transported to the target database without loss, the DBA can compare the initialization parameters and the data between the two databases to ensure data integrity.
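A rough sketch of comparing row counts through the database link, assuming statistics are reasonably fresh on both sides (src_link and the owner are assumptions; checksums or sampling may suit very large tables better):
SQL> select table_name, num_rows from dba_tables@src_link where owner = 'APP_OWNER'
     minus
     select table_name, num_rows from dba_tables where owner = 'APP_OWNER';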
When everything is ready, you can allow all users to access the database.