Code: Using Oracle PDBs in Continuous Integration pipelines


TL;DR - Take Me to the Code

When developing against an Oracle Database, developers often use a local installation or a container image to run their database CI tests. This widely used practice gives the developer an isolated database testing environment, which they can control, with minimal fuss. However, it is highly unlikely that this Development environment will be equivalent to the Testing (UAT) or Production environment come deployment time.

Differences in the database initialisation parameters, versions and patches (not to mention the data itself) between a local or containerised Development database and UAT/Production databases could lead to unforeseen issues during the software delivery lifecycle. This is not only a headache for the developer, but for the DBA who has to help identify the discrepancies when things don’t work as expected (the ole “but it works on my machine…” dilemma).

Enter Oracle Multitenant with Pluggable Databases (PDBs)…

PDBs in CI Pipelines

Including PDBs in your CI pipeline couldn’t be simpler and requires no interaction with the DBA once the Container Database (CDB) has been provisioned:

CI PDB

  1. Developer Pushes code
  2. Pipeline Creates/Clones a uniquely identified PDB
  3. Pipeline Runs CI Tests against the isolated PDB
  4. Pipeline Drops PDB
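Concretely, the create/test/drop cycle boils down to a handful of SQL statements driven from the CI runner. The sketch below is illustrative, not the repository's actual scripts: the PDB name convention, the `pdbadmin` user, and the `PDB_ADMIN_PASS` variable are all assumptions.

```shell
# Derive a unique PDB name from the pipeline run ID (CI_PIPELINE_ID is a
# GitLab-style variable; substitute your platform's equivalent).
PDB_NAME="CI_${CI_PIPELINE_ID:-LOCAL}"

# SQL to create and open a brand-new PDB; OMF decides the file placement.
CREATE_SQL="CREATE PLUGGABLE DATABASE ${PDB_NAME} ADMIN USER pdbadmin IDENTIFIED BY \"${PDB_ADMIN_PASS:-ChangeMe_1}\";
ALTER PLUGGABLE DATABASE ${PDB_NAME} OPEN;"

# SQL to tear the PDB down again once the tests have run.
DROP_SQL="ALTER PLUGGABLE DATABASE ${PDB_NAME} CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE ${PDB_NAME} INCLUDING DATAFILES;"

echo "$CREATE_SQL"
echo "$DROP_SQL"
# In the real pipeline each snippet is piped to sqlplus, e.g.:
#   echo "$CREATE_SQL" | sqlplus -s "sys/${CDB_PASS}@//${CDB_HOST}:${CDB_PORT}/${CDB_NAME}${DB_DOMAIN}" AS SYSDBA
```

Because the PDB name embeds the pipeline run ID, concurrent pipeline runs each get their own isolated database without clashing.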

Depending on your requirements, there are a few options for the Continuous Integration database architecture; what matters most is that the testing CDB is representative of the Production environment. For the PDB, you can create a new PDB for full deployment testing, a metadata-only clone for schema testing, or a full local/remote clone for existing-deployment testing with data.
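Each of those options maps to a different form of the CREATE PLUGGABLE DATABASE statement. A minimal sketch of selecting between them (the PDB_MODE variable and the SRC_PDB source name are illustrative assumptions, not part of the example repository):

```shell
# PDB_MODE picks the flavour of PDB the pipeline builds (names illustrative).
PDB_NAME="CI_42"
SRC_PDB="UATPDB"   # source for the clone variants; assumed to exist in the CDB
case "${PDB_MODE:-new}" in
  new)      # empty PDB: full deployment testing
    SQL="CREATE PLUGGABLE DATABASE ${PDB_NAME} ADMIN USER pdbadmin IDENTIFIED BY \"ChangeMe_1\";" ;;
  metadata) # structure only, no rows: schema testing
    SQL="CREATE PLUGGABLE DATABASE ${PDB_NAME} FROM ${SRC_PDB} NO DATA;" ;;
  full)     # full clone with data: existing-deployment testing
    SQL="CREATE PLUGGABLE DATABASE ${PDB_NAME} FROM ${SRC_PDB};" ;;
esac
echo "$SQL"
```

A remote clone uses the same FROM clause with a database link (FROM src@dblink), which is useful when the source PDB lives in a different CDB.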

For large databases, consider snapshot techniques such as Snapshot Standbys or Exadata Snapshots.

If you are in the Oracle Cloud, the Autonomous Database is perfect for CI testing (spin up, test, and destroy), as is the Oracle Base Database Service.

What You’ll Need

  • Your own GitHub or GitLab account. While other forges exist (e.g. Bitbucket, Azure DevOps), the provided example code has .gitlab-ci.yml and .github/workflows pipelines for those two. Alternatively, in conjunction with a GitHub/GitLab account, if you have your own installation of Jenkins, there is an example Jenkinsfile.

  • An Oracle Container Database (CDB), preferably 19c or later, where you can have up to three PDBs without additional multitenant licensing. For simplicity, the database should be using Oracle Managed Files (OMF).

  • The hostname (or scan name) and port for the listener the CDB is registered on.

  • The domain name, if applicable, of the CDB.

  • The username/password of a common user in the CDB which has the CREATE PLUGGABLE DATABASE system privilege. The example uses the SYS account to avoid additional setup, but it is strongly advised to create and use a least privileged database user.
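For reference, the host, port, CDB name, and domain from the list above combine into a single EZConnect string. A quick sketch with placeholder values (these are not real endpoints):

```shell
# Placeholder values; in a real pipeline these come from CI secrets/variables.
CDB_HOST="db01.example.com"
CDB_PORT="1521"
CDB_NAME="ORCLCDB"
DB_DOMAIN=".example.com"   # note the leading dot, or empty if no domain

# EZConnect descriptor understood by sqlplus and most Oracle drivers
CONNECT_STRING="//${CDB_HOST}:${CDB_PORT}/${CDB_NAME}${DB_DOMAIN}"
echo "$CONNECT_STRING"   # //db01.example.com:1521/ORCLCDB.example.com
```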

Each automation platform (GitHub, GitLab, Jenkins, etc.) has its own way of defining pipeline variables or secrets. Refer to its documentation for the specifics, but the following variables/secrets need to be defined:

  • CDB_PASS: The password for the CDB SYS user
  • CDB_HOST: The host or scan name for the CDB listener
  • CDB_PORT: The port for the CDB listener
  • CDB_NAME: The name for the CDB
  • DB_DOMAIN: The domain, if applicable, for the CDB starting with a “.”
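To illustrate how those variables plug into a pipeline, here is a hedged sketch of a GitLab CI stage. This is not the repository's actual .gitlab-ci.yml: the container image, the pdbadmin user, the PDB_ADMIN_PASS variable, and the run_ci_tests.sh script are all placeholders.

```yaml
# Sketch only: one stage that creates a throwaway PDB, tests, then drops it.
stages: [test]

ci-pdb-test:
  stage: test
  image: my-registry/sqlcl:latest   # placeholder image providing the SQLcl client
  variables:
    PDB_NAME: "CI_${CI_PIPELINE_ID}"
  script:
    # Create and open a uniquely named PDB for this pipeline run
    - echo "CREATE PLUGGABLE DATABASE ${PDB_NAME} ADMIN USER pdbadmin IDENTIFIED BY \"${PDB_ADMIN_PASS}\"; ALTER PLUGGABLE DATABASE ${PDB_NAME} OPEN;" | sql -s "sys/${CDB_PASS}@//${CDB_HOST}:${CDB_PORT}/${CDB_NAME}${DB_DOMAIN}" as sysdba
    # Run the test suite against the isolated PDB (placeholder script)
    - ./run_ci_tests.sh "${PDB_NAME}"
  after_script:
    # Always drop the PDB, even if the tests failed
    - echo "ALTER PLUGGABLE DATABASE ${PDB_NAME} CLOSE IMMEDIATE; DROP PLUGGABLE DATABASE ${PDB_NAME} INCLUDING DATAFILES;" | sql -s "sys/${CDB_PASS}@//${CDB_HOST}:${CDB_PORT}/${CDB_NAME}${DB_DOMAIN}" as sysdba
```

Putting the drop in after_script means the cleanup runs whether the tests pass or fail, so abandoned PDBs don't accumulate in the CDB.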

Once the above is in place, use the example pipeline templates found in my repository to try the pipeline out.

Enjoy!
