Migrating an Oracle Database from AWS EC2 to AWS RDS, Part 4

  • May 25, 2018
  • By Deepak Vohra

We are exploring the migration of an Oracle database from an EC2 instance to the managed RDS service. In the first of four articles, "Migrating an Oracle Database from AWS EC2 to AWS RDS, Part 1," we created database instances on EC2 and RDS. In the second article, "Migrating an Oracle Database from AWS EC2 to AWS RDS, Part 2," we created an IAM user for database migration, created a database table to migrate, and created a replication instance and replication endpoints. In the third article, "Migrating an Oracle Database from AWS EC2 to AWS RDS, Part 3," we created a migration task to migrate existing data. In this continuation article, we shall migrate ongoing changes to data. This article has the following sections:

  • Creating and Running a Replication Task to Migrate Ongoing Changes
  • Adding Supplemental Logging
  • Adding a Table to an Oracle Database Instance on EC2
  • Adding Table Data
  • Exploring the Replicated Database Table
  • Dropping and Reloading Data
  • Stopping and Starting a Task
  • Deleting Databases
  • Conclusion

Creating and Running a Replication Task to Migrate Ongoing Changes

In the following sub-sections, we shall create a task to replicate ongoing changes. To demonstrate ongoing replication, we shall first start the task and subsequently create a table and add data. Drop the table DVOHRA.WLSLOG, as shown in Figure 1; we shall be creating the same table to demonstrate ongoing replication.

Dropping Table DVOHRA.WLSLOG
Figure 1: Dropping Table DVOHRA.WLSLOG

Adding Supplemental Logging

Database Migration Service requires supplemental logging to be enabled for the change data capture (CDC) used to replicate ongoing changes. Supplemental logging is the process of storing information about which rows of data in a table have changed. It adds supplemental, or extra, column data to the redo log files whenever an update is performed on a table. The columns that have changed are recorded as supplemental data in the redo log files along with an identifying key, which could be the primary key or a unique index. If a table has neither a primary key nor a unique index, all scalar columns are recorded in the redo log files to uniquely identify a row of data, which could make the redo log files large. Oracle Database supports the following kinds of supplemental logging:

  • Minimal Supplemental Logging: Only the minimal amount of data that is required by LogMiner for the DML changes is recorded in redo log files.
  • Database-Level Identification Key Logging: Different kinds of database-level identification key logging are supported—ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY. With the ALL level, all columns (except LOBs, Longs, and ADTs) are recorded in redo log files. For PRIMARY KEY, only primary key columns are stored in redo log files when a row containing a primary key is updated; the primary key column itself does not need to be updated. The FOREIGN KEY kind stores only the foreign key columns of a row in redo log files when any column belonging to a foreign key is updated. The UNIQUE kind stores only the columns in a unique composite key or bitmap index when any column in the unique composite key or bitmap index has changed.
  • Table-Level Supplemental Logging: Specifies at the table level which columns are stored in redo log files. Table-level identification key logging supports the same levels as database-level identification key logging: ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY. At the table level, user-defined supplemental log groups are also supported, which let a user define which columns are to be supplementally logged. User-defined supplemental log groups may be conditional or unconditional.

For ongoing replication, we need to set minimal supplemental logging and table-level supplemental logging for ALL columns.

In SQL*Plus, run the following statement to set minimal supplemental logging:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

The output is as follows:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

To find the status of minimal supplemental logging, run the following query. If the SUPPLEMENTAL_LOG_DATA_MIN column (displayed as SUPPLEME) has a value of YES, minimal supplemental logging is enabled.

SQL> SELECT supplemental_log_data_min FROM v$database;

SUPPLEME
--------
YES

Setting minimal supplemental logging and verifying its status is shown in Figure 2.

Setting and Verifying Minimal Supplemental Logging
Figure 2: Setting and Verifying Minimal Supplemental Logging

We shall also set table-level identification key logging when we add the table and table data, to demonstrate ongoing replication after the task has started. If we were to add the table and table data before creating and starting the task, we would not be able to demonstrate ongoing replication.

To create a task for ongoing replication, click Create task, as shown in Figure 3.

Tasks>Create task
Figure 3: Tasks>Create task

In the Create task wizard, specify a task name and description, and select the replication instance, source endpoint, and target endpoint, as shown in Figure 4. Select Migration type as Migrate existing data and replicate ongoing changes.

Selecting Migration Type for Ongoing Replication
Figure 4: Selecting Migration Type for Ongoing Replication

The message shown in Figure 5 is only a reminder that supplemental logging must be enabled for ongoing replication; it does not indicate that supplemental logging is disabled, and we have already enabled it. Select the checkbox Start task on create.

Message about supplemental logging requirement for replicating ongoing changes
Figure 5: Message about supplemental logging requirement for replicating ongoing changes

The Task Settings are the same as for migrating existing data only (see Figure 6).

Task Settings
Figure 6: Task Settings

For Table mappings, at least one selection rule is required. Add a selection rule to include all tables in the DVOHRA schema, as shown in Figure 7.

Adding a selection rule
Figure 7: Adding a selection rule

The selection rule added is shown in Figure 8.

Selection rule
Figure 8: Selection rule
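
Behind the console wizard, a selection rule is a JSON table-mapping document. The following sketch builds the equivalent of the rule shown in Figure 8; the rule-id and rule-name values are arbitrary labels chosen here for illustration.

```python
import json

# Table-mapping document equivalent to the console selection rule:
# include every table ("%" wildcard) in the DVOHRA schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "DVOHRA",
                "table-name": "%"
            },
            "rule-action": "include"
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```

The same document can be pasted into the JSON tab of the Table mappings section instead of using the guided form.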

Click Create task to create the task, as shown in Figure 9.

Create task
Figure 9: Create task
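
For readers who prefer the API, the console steps above can also be scripted. This is a minimal sketch of the parameters for the DMS CreateReplicationTask call; all ARNs and the task identifier are hypothetical placeholders to substitute with your own. The MigrationType value "full-load-and-cdc" corresponds to "Migrate existing data and replicate ongoing changes".

```python
import json

# Hypothetical ARNs; replace with the ARNs from your own endpoints,
# replication instance, and desired task identifier.
task_kwargs = {
    "ReplicationTaskIdentifier": "wlslog-ongoing-replication",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    # "full-load-and-cdc" = migrate existing data and replicate ongoing changes
    "MigrationType": "full-load-and-cdc",
    # TableMappings is passed as a JSON string
    "TableMappings": json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "1",
        "object-locator": {"schema-name": "DVOHRA", "table-name": "%"},
        "rule-action": "include"}]}),
}

# With AWS credentials configured, the call itself would be:
#   import boto3
#   boto3.client("dms").create_replication_task(**task_kwargs)
```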

A new task gets added with status as Creating, as shown in Figure 10.

Task added with status Creating
Figure 10: Task added with status Creating

When the selection and transformation rules for all existing data have been applied and data migrated, the task status becomes Load complete, replication ongoing (see Figure 11).

Load complete, replication ongoing
Figure 11: Load complete, replication ongoing

Initially, the Table statistics tab lists no tables as migrated or replicated, as shown in Figure 12.

Table statistics
Figure 12: Table statistics

To explore the CloudWatch logs, click the Logs tab and click the link, as shown in Figure 13.

Logs
Figure 13: Logs

The CloudWatch logs get displayed, as shown in Figure 14. The last entry in the logs is about starting replication. The replication ongoing task does not terminate after loading existing data, if any, but continues to run.

CloudWatch logs
Figure 14: CloudWatch logs
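
The same logs can be read programmatically. The sketch below only builds the parameters for a CloudWatch Logs filter call; it assumes the DMS convention of a log group named dms-tasks-<replication-instance-name>, so adjust the names to your own setup.

```python
def build_log_query(instance_name, pattern="replication"):
    """Build parameters for logs.filter_log_events against a DMS task
    log group (assumed name format: dms-tasks-<instance-name>)."""
    return {
        "logGroupName": f"dms-tasks-{instance_name}",
        "filterPattern": pattern,
    }

params = build_log_query("dms-replication-instance")

# With AWS credentials configured:
#   import boto3
#   events = boto3.client("logs").filter_log_events(**params)["events"]
#   for e in events:
#       print(e["message"])
```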

Adding a Table to an Oracle Database Instance on EC2

Next, create a table and add table data to demonstrate ongoing replication. Run the following two statements together so that table-level supplemental logging gets set as soon as the table is created. Modify the script if your schema name is different.

CREATE TABLE DVOHRA.wlslog(time_stamp VARCHAR2(255) PRIMARY KEY,
   category VARCHAR2(255),type VARCHAR2(255),servername
   VARCHAR2(255),code VARCHAR2(255),msg VARCHAR2(255));
alter table DVOHRA.WLSLOG add supplemental log data (ALL) columns;

The table level supplemental logging gets set when the table is created.

SQL> CREATE TABLE DVOHRA.wlslog(time_stamp VARCHAR2(255)
   PRIMARY KEY,category VARCHAR2(255),type VARCHAR2(255),servername
   VARCHAR2(255),code VARCHAR2(255),msg VARCHAR2(255));

Table created.

SQL> alter table DVOHRA.WLSLOG add supplemental log data (ALL) columns;

Table altered.

The output is shown in SQL*Plus in Figure 15.

Creating Table and setting Supplemental Logging
Figure 15: Creating Table and setting Supplemental Logging

As yet, we have only created the table, and not added any table data. The DDL for the table gets migrated, as indicated by the Table statistics in Figure 16.

DDLs for Table Migrated
Figure 16: DDLs for Table Migrated

Adding Table Data

Next, run the following SQL script to add data to the table. Modify the script if your schema name is different.

SQL> INSERT INTO DVOHRA.wlslog(time_stamp,category,type,
   servername,code,msg) VALUES('Apr-8-2014-7:06:16-PM-PDT',
   'Notice','WebLogicServer','AdminServer','BEA-000365','Server
   state changed to STANDBY');
INSERT INTO DVOHRA.wlslog(time_stamp,category,type,servername,
   code,msg) VALUES('Apr-8-2014-7:06:17-PM-PDT','Notice',
   'WebLogicServer','AdminServer','BEA-000365','Server state
   changed to STARTING');
INSERT INTO DVOHRA.wlslog(time_stamp,category,type,servername,
   code,msg) VALUES('Apr-8-2014-7:06:18-PM-PDT','Notice',
   'WebLogicServer','AdminServer','BEA-000365','Server state
   changed to ADMIN');
INSERT INTO DVOHRA.wlslog(time_stamp,category,type,servername,code,
   msg) VALUES('Apr-8-2014-7:06:19-PM-PDT','Notice',
   'WebLogicServer','AdminServer','BEA-000365','Server state
   changed to RESUMING');
INSERT INTO DVOHRA.wlslog(time_stamp,category,type,servername,code,
   msg) VALUES('Apr-8-2014-7:06:20-PM-PDT','Notice',
   'WebLogicServer','AdminServer','BEA-000361','Started WebLogic
   AdminServer');
INSERT INTO DVOHRA.wlslog(time_stamp,category,type,servername,code,
   msg) VALUES('Apr-8-2014-7:06:21-PM-PDT','Notice',
   'WebLogicServer','AdminServer','BEA-000365','Server state
   changed to RUNNING');

1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>
1 row created.

Subsequently, run the COMMIT statement:

SQL> COMMIT;
Commit complete.

Exploring the Replicated Database Table

The Table statistics tab lists Inserts as the number of rows of data added, as shown in Figure 17.

Table statistics list 6 Inserts
Figure 17: Table statistics list 6 Inserts

The task continues to run after replicating the ongoing changes. Add another row of data:

SQL> INSERT INTO DVOHRA.wlslog(time_stamp,category,type,
   servername,code,msg) VALUES('Apr-8-2014-7:06:22-PM-PDT',
   'Notice','WebLogicServer','AdminServer','BEA-000360','Server
   started in RUNNING mode');

1 row created.
SQL> COMMIT;

Commit complete.

SQL>

Click Refresh data from server, as shown in Figure 18.

Refresh data from server
Figure 18: Refresh data from server

The total number of Inserts in Table statistics becomes 7, as shown in Figure 19.

Table statistics with Inserts as 7
Figure 19: Table statistics with Inserts as 7

Dropping and Reloading Data

To drop and reload table data, click Drop and reload table data, as shown in Figure 20.

Drop and Reload Table Data
Figure 20: Drop and Reload Table Data
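
The console's "Drop and reload table data" action corresponds to the DMS ReloadTables API. The sketch below builds the parameters for that call for the table used in this article; the task ARN is a hypothetical placeholder.

```python
# Parameters for reloading DVOHRA.WLSLOG on the target; the task ARN
# is hypothetical and should be replaced with your task's ARN.
reload_kwargs = {
    "ReplicationTaskArn": "arn:aws:dms:us-east-1:123456789012:task:TASK",
    "TablesToReload": [
        {"SchemaName": "DVOHRA", "TableName": "WLSLOG"},
    ],
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("dms").reload_tables(**reload_kwargs)
```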

Click Refresh data from server (see Figure 21).

Refresh data from server
Figure 21: Refresh data from server

The icon and the State column for the table indicate that the table is being reloaded, as shown in Figure 22.

Table is being reloaded
Figure 22: Table is being reloaded

When the table reload has completed, the table's State column becomes Table completed, as shown in Figure 23. After reloading table data, Full Load Rows displays a value of 7 and Inserts is 0, because a reload is a full load, not ongoing replication.

Table Reload completed
Figure 23: Table Reload completed

Because the table data is dropped and reloaded while the source table data has not changed, the CloudWatch logs include the message "Some changes from the source database had no impact when applied to the target database.", as shown in Figure 24.

Some changes from the source database had no impact when applied to the target database
Figure 24: Some changes from the source database had no impact when applied to the target database

When the reload of the DVOHRA.wlslog table has completed, the message "Load finished for table DVOHRA.wlslog. 7 rows received" gets displayed, as shown in Figure 25.

CloudWatch log Message for Load having completed
Figure 25: CloudWatch log Message for Load having completed

Stopping and Starting a Task

A task that includes ongoing replication does not stop by itself unless an error occurs. To stop the task, click Stop (see Figure 26).

Stopping a Task
Figure 26: Stopping a Task

In the Stop Task dialog, click Stop, as shown in Figure 27.

Confirmation Dialog to stop a task
Figure 27: Confirmation Dialog to stop a task

The task status becomes Stopping, as shown in Figure 28.

Stopping a task
Figure 28: Stopping a task

When a task stops, the status becomes Stopped, as shown in Figure 29.

Task Stopped
Figure 29: Task Stopped

To start a stopped task, click Start/Resume, as shown in Figure 30.

Starting or Resuming a Task
Figure 30: Starting or Resuming a Task

In the Start task dialog, click Start to start the task from the point at which it was stopped (see Figure 31). The other option is to restart the task from the beginning.

Starting Task after stopping
Figure 31: Starting Task after stopping
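
The two options in the dialog map to different StartReplicationTaskType values in the DMS StartReplicationTask API: "resume-processing" continues from the stopped point, whereas "reload-target" restarts the task from the beginning. A minimal sketch, with a hypothetical task ARN:

```python
def start_task_params(task_arn, restart=False):
    """Build parameters for dms.start_replication_task: resume from the
    stopped point by default, or restart the task from the beginning."""
    return {
        "ReplicationTaskArn": task_arn,
        "StartReplicationTaskType":
            "reload-target" if restart else "resume-processing",
    }

# Hypothetical ARN; with AWS credentials configured:
#   import boto3
#   params = start_task_params(
#       "arn:aws:dms:us-east-1:123456789012:task:TASK")
#   boto3.client("dms").start_replication_task(**params)
```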

The task status becomes Starting, as shown in Figure 32.

Starting a task
Figure 32: Starting a task

When the migration of existing data has completed, the task continues to run with the Status as Load complete, replication ongoing, as shown in Figure 33.

Load complete, replication ongoing
Figure 33: Load complete, replication ongoing

Deleting Databases

The RDS DB instance may be deleted with the Instance Actions>Delete command. The Oracle database on the EC2 instance may be stopped with Actions>Instance State>Stop, as shown in Figure 34.

Stopping EC2 Instance
Figure 34: Stopping EC2 Instance
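
The same cleanup can be scripted through the RDS and EC2 APIs. This is a sketch only; the DB instance identifier and EC2 instance ID are hypothetical placeholders, and SkipFinalSnapshot=True deletes the RDS instance without taking a final snapshot, so use it only when the data is no longer needed.

```python
# Hypothetical identifiers; substitute your own.
delete_rds = {
    "DBInstanceIdentifier": "orcl",
    "SkipFinalSnapshot": True,  # no final snapshot; data is discarded
}
stop_ec2 = {
    "InstanceIds": ["i-0123456789abcdef0"],
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("rds").delete_db_instance(**delete_rds)
#   boto3.client("ec2").stop_instances(**stop_ec2)
```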

Conclusion

In this four-article series, we discussed migrating an Oracle database from an EC2 instance to the managed AWS RDS service by using the AWS Database Migration Service, covering both the migration of existing data and the replication of ongoing changes.
