
Using Docker Swarm (Legacy) with Oracle


Docker Swarm (the legacy standalone version) is clustering software for Docker, with which a cluster (or pool) of Docker hosts may be presented to external clients as a single host interface. Swarm mode is integrated into Docker Engine 1.12, but Docker 1.12 has compatibility issues with some of the other software that makes use of Docker Engine: Docker Compose is not supported in Docker 1.12 Swarm mode, Kubernetes 1.2 and 1.3 are not supported with Docker 1.12, and OpenShift Origin has some issues with Docker 1.12. Although the integrated Swarm mode of Docker 1.12 is preferred, the legacy standalone Docker Swarm may still be used to avoid the aforementioned issues.

With Docker Swarm, a cluster of Docker hosts is exposed as a single “virtual” host. Docker Swarm provides distributed Docker. Swarm may be installed using one of two available mechanisms: the Docker image for Swarm or the Swarm executable binaries. Using the Docker image is the recommended method; its benefits include not having to install the Swarm binaries, a single docker run command to install Swarm, and an isolated Docker container running Swarm without the need to set environment variables. Docker Swarm has the following components:

  • Docker Swarm Cluster
  • Docker Swarm Manager
  • Nodes in the Swarm Cluster

The Swarm manager manages the nodes in the Swarm cluster and provides a single interface or virtual host for the Swarm cluster to external clients. In this tutorial, we shall create a Docker Swarm cluster of three nodes and subsequently run Oracle Database on the cluster.

Setting the Environment

We have used Ubuntu instances created from AMI Ubuntu Server 14.04 LTS (HVM), SSD Volume Type – ami-fce3c696. We have created four Amazon EC2 instances: one for the Swarm master and the Swarm manager, which is also the Docker client machine, and three for the three nodes in the Swarm cluster. Add the “default” security group to each of the Ubuntu instances; the default security group allows all inbound/outbound traffic on an Amazon EC2 host. The following software is required for this tutorial:

  • Docker
  • Docker Image for Docker Swarm
  • Docker Image for Oracle Database

Obtain the Public IP Address of each of the Amazon EC2 instances and log in to each of the Ubuntu instances with SSH. For example, the following command logs in to the Ubuntu instance with Public IP 54.174.46.34, which we have used as the Swarm master node.

ssh -i "docker.pem" ubuntu@54.174.46.34

Install Docker on each Ubuntu host. Start Docker and verify status on each Ubuntu instance with the following commands:

sudo service docker start
sudo service docker status

Similarly, log in to the other Ubuntu instances and install Docker.

ssh -i "docker.pem" ubuntu@54.209.22.205
ssh -i "docker.pem" ubuntu@52.87.161.230
ssh -i "docker.pem" ubuntu@54.209.79.188

The four Ubuntu instances are shown in an Amazon EC2 panel.

Figure 1: Four Ubuntu instances, shown in an Amazon EC2 panel

Creating a Docker Swarm Cluster

Create a Docker Swarm cluster using the Docker image “swarm” with the following command on the Swarm master node. First, log in to the node if not already logged in.

ssh -i "docker.pem" ubuntu@54.174.46.34
sudo docker run --rm swarm create

A Swarm cluster gets created and a Cluster ID gets output. Copy the Cluster ID; we shall need it when configuring the individual nodes in the cluster.

Figure 2: Creating a Swarm cluster

The Swarm cluster exposes port 2375.
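The cluster ID returned by swarm create, like the one generated in this tutorial, is a 32-character hexadecimal token issued by the Docker Hub discovery service. As a sketch, its shape can be sanity-checked locally before using it in the commands that follow:

```shell
# The cluster ID from `swarm create` (the one generated in this
# tutorial) is a 32-character hexadecimal token. A quick local check:
TOKEN=b678bc935d3e8abd52f44318370e4f82
if echo -n "$TOKEN" | grep -Eq '^[0-9a-f]{32}$'; then
  echo "token looks well formed"
fi
```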

Starting the Docker Swarm Manager

A Docker Swarm manager manages the Swarm cluster. Install the Swarm manager on any of the nodes in the cluster. We have installed the Swarm manager on the Swarm master node, which is the machine on which we installed Swarm using the Docker image. The command to start the Swarm manager has the following syntax:

docker run -t -p <swarm_port>:2375
   swarm manage token://<cluster_id>

The swarm_port may be set to any available port. We have used port 2376. The <cluster_id> is the ID generated with the previous command to create a Swarm cluster. Using swarm port 2376 and the cluster ID for the Swarm cluster, run the following command to start the Swarm manager.

sudo docker run -t -p 2376:2375
   swarm manage
   token://b678bc935d3e8abd52f44318370e4f82

The Swarm manager gets started and listens for HTTP on address :2375 inside the container, which is mapped to port 2376 on the host.

Figure 3: Swarm manager listening for HTTP

Starting Docker Swarm Agents

In this section, we shall start a Swarm agent on each of the three nodes to join the nodes with the Swarm cluster. The Swarm manager is able to access each of the nodes added to the Swarm cluster. The command to add a node to the Swarm cluster has the following syntax:

docker run -d swarm join --addr=<node_ip:2375>
   token://<cluster_id>

The node_ip is a variable: the IP Address (Public or Private) of the node (machine) to be added to the Swarm cluster. The cluster_id is the token returned when the Swarm cluster was created using the Docker image for Swarm.

For example, for node0, first log in to the Ubuntu instance with SSH using the Public IP address, if not already logged in.

ssh -i "docker.pem" ubuntu@54.209.22.205

Run the following command to join the node to the Swarm cluster:

sudo docker run -d swarm join --addr=54.209.22.205:2375
   token://b678bc935d3e8abd52f44318370e4f82

The node gets added to the Swarm cluster.

Figure 4: Adding the node to the Swarm cluster

The node gets registered with the Swarm manager as indicated by the “Registered Engine …” message on the Swarm manager.

Figure 5: Registering the node

Similarly, for node1, first log in to the Ubuntu instance with SSH if not already logged in.

ssh -i "docker.pem" ubuntu@52.87.161.230

Using the Public IP Address and the cluster ID, join the node with the cluster:

sudo docker run -d swarm join --addr=52.87.161.230:2375
   token://b678bc935d3e8abd52f44318370e4f82

Node1 also gets joined with the cluster.

Figure 6: Joining the node1 with the cluster

The node1 IP Address gets registered with the Swarm manager, as indicated by the second “Registered Engine …” message on the Swarm manager.

Figure 7: The node1 IP Address gets registered with the Swarm manager

Similarly, for node2, first log in to the Ubuntu instance with SSH if not already logged in, and subsequently join the node with the cluster.

ssh -i "docker.pem" ubuntu@54.209.79.188
sudo docker run -d swarm join
   --addr=54.209.79.188:2375
   token://b678bc935d3e8abd52f44318370e4f82

Node2, the third node, also joins the Swarm cluster.

Figure 8: The third node joins the Swarm cluster

The Swarm manager registers the third node, as indicated by the third “Registered Engine …” message.

Figure 9: Registering the third node
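The three join commands above differ only in the node IP Address. As a convenience, they could be generated with a short shell loop using the IPs and token from this tutorial; the loop below only echoes the commands for review rather than executing them:

```shell
# Echo (not run) the per-node `swarm join` command for each agent IP
# used in this tutorial; review the output, then run each command on
# its node (or pipe it over ssh).
TOKEN=b678bc935d3e8abd52f44318370e4f82
for ip in 54.209.22.205 52.87.161.230 54.209.79.188; do
  echo "sudo docker run -d swarm join --addr=${ip}:2375 token://${TOKEN}"
done
```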

Listing the Nodes in the Swarm Cluster

The nodes in the Swarm cluster may be listed with the following command:

docker run --rm swarm list token://<cluster_id>

Substituting the cluster_id, run the following command in a new terminal on the same Ubuntu instance on which the Swarm manager is running:

sudo docker run --rm swarm list
   token://b678bc935d3e8abd52f44318370e4f82

The three nodes get listed in the host:port format. The host is the Public IP Address of each of the nodes and the port is the port exposed by the Swarm cluster Docker container, port 2375.

Figure 10: Listing the three nodes
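The list output is one host:port entry per line, so the node IPs alone can be extracted with standard text tools. A small sketch, using sample lines copied from this tutorial's cluster (in practice, the real swarm list output would be piped in):

```shell
# Extract just the node IPs from `swarm list` output. The sample data
# below is from this tutorial's cluster.
NODE_LIST='54.209.22.205:2375
52.87.161.230:2375
54.209.79.188:2375'
echo "$NODE_LIST" | cut -d: -f1
```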

These nodes are not to be accessed directly. The Swarm cluster is accessed at the <swarm_ip:swarm_port> host:port combination, in which swarm_ip is the IP Address (here, the Private IP) of the Ubuntu instance running the Swarm manager, and swarm_port is 2376. The <swarm_ip:swarm_port> for the Swarm configuration used in this tutorial is 172.30.1.69:2376. All commands to access the Swarm cluster are run in a new terminal on the same Ubuntu instance as the one running the Swarm manager.
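Rather than passing -H tcp://172.30.1.69:2376 to every command, the Docker client can be pointed at the Swarm manager once per shell session via the standard DOCKER_HOST environment variable (the IP and port are the ones used in this tutorial):

```shell
# Point the Docker client at the Swarm manager for this shell session.
# Subsequent commands such as `docker info` and `docker ps` then target
# the Swarm cluster without needing the -H flag.
export DOCKER_HOST=tcp://172.30.1.69:2376
echo "$DOCKER_HOST"
```

Note that sudo may not preserve environment variables; use sudo -E (or run the client as root) so DOCKER_HOST is visible to the docker command.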

Getting Information About the Swarm Cluster

To get information about the Swarm cluster, run a command with the following syntax from the Swarm master node, which is used as the Docker client node.

docker -H tcp://<swarm_ip:swarm_port> info

The actual command run is the following:

sudo docker -H tcp://172.30.1.69:2376 info

Information about the Swarm cluster gets output.

Figure 11: Output information about the Swarm cluster

Having created a Swarm cluster, next we shall run Oracle Database on the cluster.

Running an Oracle Database on the Swarm Cluster

To run the Docker image for Oracle Database, run a Docker command with the following syntax:

docker -H tcp://<swarm_ip:swarm_port> run ...

The Docker image for Oracle Database is sath89/oracle-xe-11g, and the ports to be exposed are 8080, for the admin console, and 1521, for the Oracle Database listener. Run the following command from the Swarm master node, which is also the Docker client node.

sudo docker -H tcp://172.30.1.69:2376
   run --name orcldb -d -p 8080:8080
   -p 1521:1521 sath89/oracle-xe-11g

A Docker container for Oracle Database gets created on the Swarm cluster.

Figure 12: Creating a Docker container

Listing the Docker Containers in the Swarm Cluster

The Docker container(s) created for Oracle Database may be listed with a command of the following syntax:

docker -H tcp://<swarm_ip:swarm_port> ps

Substituting 172.30.1.69:2376 for <swarm_ip:swarm_port>, run the following command from the Swarm master node to list the Docker containers:

sudo docker -H tcp://172.30.1.69:2376 ps

The single Docker container for the Docker image sath89/oracle-xe-11g gets listed. Although it is not important to know on which node in the Swarm cluster the Docker container is running, the output shows it is running on the node with IP Address 52.87.161.230.

Figure 13: Listing the single Docker container
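In legacy standalone Swarm, the NAMES column of the ps output prefixes the container name with the name of the node running it, in node/name form. Splitting such a value is a one-liner; the node name below is hypothetical, for illustration only:

```shell
# Split a legacy-Swarm container name of the form <node>/<container>.
# "ip-172-30-1-16" is a hypothetical node name used for illustration.
NAME='ip-172-30-1-16/orcldb'
echo "$NAME" | awk -F/ '{ print "node: " $1 ", container: " $2 }'
```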

The Docker container ID is used to access the Docker container. Details about the Docker container may be listed with the following command:

sudo docker inspect fc229d90a5f7

The details about the Docker container get listed.

Figure 14: Listing details about the Docker container

The IP Address of the Docker container, though not required for this tutorial, also gets listed.

Figure 15: Listing the Docker container’s IP Address

The Docker images installed on the Swarm cluster may be listed with the following command:

sudo docker -H tcp://172.30.1.69:2376 images

The Docker images include “swarm”, which is the Docker Swarm image, and the Docker image for Oracle Database. Other Docker images that may have been installed earlier on the Swarm cluster also get listed.

Figure 16: Listing other Docker images that were installed earlier

Creating an Oracle Database Table

Next, we shall start an interactive bash shell to connect to the Docker container running Oracle Database and create a database table. The command to start a bash shell has the following syntax, in which <container_name> is the name specified with the --name parameter when running the docker -H tcp://<swarm_ip:swarm_port> run… command for the Docker image for Oracle Database.

sudo docker -H tcp://<swarm_ip:swarm_port>
   exec -it <container_name> bash

Substituting 172.30.1.69:2376 for <swarm_ip:swarm_port>, run the following command to start an interactive bash shell:

sudo docker -H tcp://172.30.1.69:2376
   exec -it orcldb bash

In the bash shell, start SQL*Plus with the following command.

sqlplus

When prompted, specify the user-name “system” and the password “oracle”. SQL*Plus gets started.

Figure 17: Specifying a user-name and password

Next, create a user called “OE” with the password “OE”, and grant the CONNECT and RESOURCE roles to the user.

CREATE USER OE IDENTIFIED BY OE
   QUOTA UNLIMITED ON SYSTEM;
GRANT CONNECT, RESOURCE TO OE;

The “OE” user gets created and the roles get granted.

Figure 18: Creating the “OE” user and granting the roles

Next, create a database table called OE.WLSLOG with the following SQL script:

CREATE TABLE OE.WLSLOG(time_stamp VARCHAR2(45)
   PRIMARY KEY, category VARCHAR2(25), type VARCHAR2(25),
   servername VARCHAR2(25), code VARCHAR2(25), msg VARCHAR2(45));

The database table gets created.

Figure 19: Creating the database table

Add data to the OE.WLSLOG table with the following SQL script:

INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:16-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000365',
   'Server state changed to STANDBY');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:17-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000365',
   'Server state changed to STARTING');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:18-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000365',
   'Server state changed to ADMIN');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:19-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000365',
   'Server state changed to RESUMING');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:20-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000331',
   'Started WebLogic AdminServer');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:21-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000365',
   'Server state changed to RUNNING');
INSERT INTO OE.WLSLOG VALUES('Apr-8-2014-7:06:22-PM-PDT',
   'Notice', 'WebLogicServer','AdminServer','BEA-000360',
   'Server started in RUNNING mode');

The data gets added; seven rows get created.

Figure 20: Adding data and creating seven rows of data

Run a SQL query using the SELECT statement on the OE.WLSLOG table:

SELECT * FROM OE.WLSLOG;

The output from the SQL query is shown in SQL*Plus.

Figure 21: The SQL query’s output

The seven rows of added data get listed.

Figure 22: Listing the added rows of data

Exit SQL*Plus with the following command:

exit

Exit the bash shell with the same command:

exit

The command prompt returns to the Ubuntu user on the Swarm master machine.

Figure 23: Returning the command prompt

Listing Logs for the Swarm Cluster

The logs generated by the Docker container may be listed with the following command syntax:

docker -H tcp://<swarm_ip:swarm_port> logs ...

For the orcldb Docker container, run the following command to list the logs:

sudo docker -H tcp://172.30.1.69:2376 logs orcldb

The Docker container logs get listed.

Figure 24: Listing the Docker container logs

Conclusion

In this tutorial, we discussed using Docker Swarm, the legacy standalone version. We installed Swarm using a Docker image. A Swarm cluster consists of a Swarm manager and one or more Swarm agent nodes. First, the Swarm manager is started, and subsequently the Swarm agents are joined to the Swarm manager to form a Swarm cluster. The Swarm cluster is accessed over a TCP connection at <swarm_ip:swarm_port>. We created a Swarm cluster consisting of a Swarm manager and three Swarm agent nodes, and subsequently ran Oracle Database on the cluster and created a database table.
