Friday 3 November 2017

Oracle Cloud Infrastructure and Terraform first steps on Windows

Oracle has released the 'Terraform provider for Oracle Cloud Infrastructure'. This tutorial covers the first steps with Terraform and OCI on Windows.
In contrast to procedural tools like Oracle's oci CLI, Terraform implements a declarative approach: the user declares what the resulting architecture should look like, not how to get there. To do the footwork, Terraform needs to know how to talk to the target environment, and that is what the Terraform provider for OCI is for.
To start with Terraform on OCI, first install Terraform from releases.hashicorp.com/terraform and put it on your Path.


Check it with terraform -version. Next get the Terraform provider for OCI from github.com/oracle/terraform-provider-oci. According to Oracle's documentation on GitHub, the windows_amd64 folder found in the windows.zip file must be placed in %APPDATA%/terraform.d/plugins/. A %APPDATA%/terraform.rc would only be needed for compatibility with Terraform 0.9, so it can be skipped here.
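On the command line, that placement could look roughly like this (a sketch, assuming the prompt is in the directory where windows.zip was extracted):

mkdir "%APPDATA%\terraform.d\plugins"
xcopy windows_amd64 "%APPDATA%\terraform.d\plugins\windows_amd64" /E /I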

Now for the first steps with Terraform and OCI, we need at least the Tenancy OCID and the User OCID from our OCI environment, as well as a private/public API key pair, the fingerprint of the public key and the region name. For creating a VCN, we also need the Compartment OCID.


The name of the region is at the top, and the Tenancy OCID can be copied from the bottom of the OCI welcome page.


The User OCID is found on the user's settings page.


The Compartment OCID is found under Identity/Compartments. Make sure to pick the right compartment.

The API key we need is best created with openssl. On Windows, install Git for Windows from git-for-windows.github.io to get Git Bash. Open Git Bash and run:

$ mkdir ~/.oraclebmc   # or any other directory you like
$ openssl genrsa -out ~/.oraclebmc/bmcs_api_key.pem 2048   # private key
$ chmod 0700 ~/.oraclebmc
$ chmod 0600 ~/.oraclebmc/bmcs_api_key.pem
$ openssl rsa -pubout -in ~/.oraclebmc/bmcs_api_key.pem -out ~/.oraclebmc/bmcs_api_key_public.pem   # public key
$ cat ~/.oraclebmc/bmcs_api_key_public.pem   # print the public key for the OCI console
$ openssl rsa -pubout -outform DER -in ~/.oraclebmc/bmcs_api_key.pem | openssl md5   # fingerprint
$ openssl rsa -pubout -outform DER -in ~/.oraclebmc/bmcs_api_key.pem | openssl md5 > ~/.oraclebmc/bmcs_api_key_fingerprint
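For reference, an ls of the key directory in Git Bash now shows the private key, the public key and the fingerprint file:

$ ls ~/.oraclebmc
bmcs_api_key.pem  bmcs_api_key_fingerprint  bmcs_api_key_public.pem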

That should result in the following files:


The public key needs to be copied to the OCI UI, so copy the output from the cat command above ...


... click on Add Public Key and paste your key.


After adding the key, the OCI console displays the key's fingerprint information. That should be the same fingerprint as saved in bmcs_api_key_fingerprint. If not, repeat the steps from the first openssl command.

We need a few environment variables to pass these parameters to Terraform. Create a working directory (e.g. D:\Terraform-OCI), create a batch file such as terraform-env.cmd there, and set the following:

setx TF_VAR_tenancy_ocid "<your tenancy ocid>"
setx TF_VAR_user_ocid "<your user ocid>"
setx TF_VAR_fingerprint "<your api key fingerprint from bmcs_api_key_fingerprint>"
setx TF_VAR_private_key_path "d:\Arnes\oraclebmc\bmcs_api_key.pem"
setx TF_VAR_compartment_ocid "<your compartment ocid>"
setx TF_VAR_region "us-phoenix-1"

setx requires closing the shell and opening a new one before the variables become visible. Now we need to pass these variables to Terraform, so create a variables.tf with the following:

variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

variable "compartment_ocid" {}

variable "VPC-CIDR" {
  default = "10.0.0.0/16"
}

The first six variables let Terraform pick up the corresponding environment variables: every environment variable beginning with TF_VAR_ is mapped automatically. We don't need the CIDR variable right now, but we will later.
Next we need to pass the mandatory variables to the OCI provider. Create a provider.tf with the following:

provider "oci" {
  tenancy_ocid     = "${var.tenancy_ocid}"
  user_ocid        = "${var.user_ocid}"
  fingerprint      = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region           = "${var.region}"

Now we are ready for a first run to see if everything is set up correctly. In the working directory run a terraform init.


The OCI provider has been correctly initialized. With terraform plan we can check what Terraform is planning to do: 


Nothing, of course, as we haven't given it any instructions on what to build. So let's create something. First we need a Virtual Cloud Network (VCN), so create a network.tf with the following:

resource "oci_core_virtual_network" "Arne-VCN" {
  cidr_block     = "${var.VPC-CIDR}"
  compartment_id = "${var.compartment_ocid}"
  display_name   = "Arne-VCN"
  dns_label      = "arnevcn"
}

That will define a simple VCN using the CIDR block defined in VPC-CIDR. To see what Terraform will do with that, re-run terraform plan.


This time, Terraform shows that it wants to create the VCN we defined. Looks good, so let Terraform execute the plan with terraform apply.


That should be done in seconds. Double-check that with the OCI web UI.


If everything worked, the newly created VCN should be shown here. Re-running terraform plan should show no changes. 
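As an optional extra (not part of the original setup), Terraform can also print the OCID of the new VCN. A minimal sketch of an outputs.tf, with an arbitrarily chosen output name:

output "vcn_ocid" {
  value = "${oci_core_virtual_network.Arne-VCN.id}"
}

The value is then shown at the end of the next terraform apply and can be queried any time with terraform output.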
For a slightly more complex example, add the following to network.tf to add an internet gateway and a route rule to the VCN.

resource "oci_core_internet_gateway" "Arne-IGW" {
  compartment_id = "${var.compartment_ocid}"
  display_name   = "Arne-IGW"
  vcn_id         = "${oci_core_virtual_network.Arne-VCN.id}"
}

resource "oci_core_route_table" "Arne-RT" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id         = "${oci_core_virtual_network.Arne-VCN.id}"
  display_name   = "Arne-RT"

  route_rules {
    cidr_block        = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.Arne-IGW.id}"
  }
}

terraform plan should show that the VCN won't be touched, as it already exists, but that the internet gateway and the route table with its rule will be created.
This is a very simple example, but the basic setup for using Terraform with OCI is done.
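When you are done experimenting, the whole setup can be removed again with a single command. Terraform asks for confirmation before deleting anything:

terraform destroy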

Monday 29 May 2017

Oracle Cloud Compute: Workaround for Docker / XFS issue with Oracle Linux



With the latest Oracle Linux 7.2 UEK4 images on the Oracle Cloud, the following error occurs when running Docker containers.

docker: Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/66c96d19f86d1262f2472cda6639dacfde8f13d0bebff9e434b74b9cebdcc32f-init/merged: invalid argument.

This is related to the ftype parameter of the XFS file system, which is set at formatting time. It can be verified by running

xfs_info /

and the output should look like this:


This is a screenshot from a fresh OL 7.2 UEK4 instance after a full yum update. As you can see, ftype is 0. Unfortunately, that setting is not compatible with the overlay storage driver used by Docker and cannot be changed after formatting.
For more information on this issue see https://github.com/moby/moby/issues/23930. To solve it, /var/lib/docker must be moved to a volume formatted either with XFS and ftype=1 or with a different file system.
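For reference, if you want to stay with XFS on a new volume, ftype can be enabled at formatting time (the device name is only a placeholder, see the device detection below):

sudo mkfs.xfs -n ftype=1 /dev/xvdc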

This example shows how to solve it with an ext4 volume instead.


On the Compute dashboard, go to the Storage Tab and click on Create Storage Volume.


Adjust the settings to your liking and create the volume.


Click on the hamburger menu next to your newly created volume and choose Attach to instance.


Choose your Compute instance and attach the volume.

To mount the volume, just follow the steps from the Oracle documentation: Mounting a Storage Volume on a Linux Instance.

First find out the device name via

ls /dev/xvd*

The output should look like this


The first device, the boot volume, is always /dev/xvdb, the next one is /dev/xvdc etc. In my example, the new volume is disk #2, so it is /dev/xvdc.
So the next step is a

fdisk /dev/xvdc

and go through the following steps:


In short, select n for a new partition, p for primary, leave 1 as the default partition number and accept the defaults for both sector prompts (they depend on the size of your volume). Then write the partition table with w.
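If you prefer to script this instead of answering the prompts, the same answers can be fed to fdisk with a here document (a sketch; n = new, p = primary, 1 = partition number, the two empty lines accept the default sectors, w = write):

sudo fdisk /dev/xvdc <<EOF
n
p
1


w
EOF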

Next create a file system, e.g. (the command below formats the whole device; if you prefer to use the partition created above, substitute /dev/xvdc1 here and in /etc/fstab below)

sudo mkfs -t ext4 /dev/xvdc

and the result should look like this


Now we need a mount point, which according to Oracle's style guides would be /u01. By default, the OL image already comes with a /u01 directory, so we can use that as the mount point.
As we want this to be a permanent mount, add it to /etc/fstab via

sudo vi /etc/fstab

and add the following line

/dev/xvdc /u01 ext4 defaults 0 0

so it should look like this:


Save and restart the instance to check that everything works. After the reboot, check whether the file system is mounted, e.g. with

mount | grep /u01; ls /u01

and the output should look like this:


Now create a location to move the directory /var/lib/docker to, for example /u01/var/lib. Use any method you like to move the Docker directory to the new location. I prefer tar, so I get an additional backup along the way, e.g.

sudo systemctl stop docker                  # don't move the directory while the daemon is running
sudo mkdir -p /u01/var/lib                  # the new location
cd /var/lib
sudo tar cvfz docker.tgz docker             # the archive doubles as a backup
cd /u01/var/lib/
sudo tar xvfz /var/lib/docker.tgz           # unpack to the new location
sudo mv /var/lib/docker /var/lib/docker.bak
cd /var/lib
sudo ln -s /u01/var/lib/docker/ docker      # symlink so Docker finds its data again

Then restart the Docker daemon and try it with any image.
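For example (hello-world is just the usual tiny test image, any image will do):

sudo systemctl start docker
sudo docker run --rm hello-world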


Should run now, that's it.

Friday 31 March 2017

Oracle Cloud: create Docker container with Java via CLI

This tutorial demonstrates how to set up an instance of Oracle Application Container Cloud Service (ACCS, running Docker under the hood) with a pre-configured Java environment via the PaaS Service Manager (PSM) command line interface (CLI).
If you have followed my former postings on this, you should be ready to go on a Windows client. Whatever OS and editor you are using, the tutorial expects that you have set up psm so it talks to your Oracle Cloud identity domain.
The general idea of the Cloud Stack Manager is that you first define the desired environment in a template. Such an environment may consist of several instances and is called a stack. Once you have your template, upload it to the Oracle Cloud Stack Manager and create as many instances from it as you like.
I have always favored starting with the simplest possible example and not moving on until it is understood and running. So this tutorial shows step by step how to set up a stack with a single ACCS Java instance.

1. Create a template

Pick your favorite editor and write your first template. Ideally, the editor supports running shell commands in the background and pipes the output to an editor window. 
Here is my template:

---
    template:
        templateName: Simple-Java-Template
        templateVersion: 42.0
        templateDescription: "Hello Template World"
        parameters:
            containerName:
                label: Container Name
                description: Please enter the name of the Java Container
                type: string
                mandatory: true
                sensitive: false
                minLength: 2
                allowedPattern: "[a-zA-Z]*"
            parameter1:
                label: Parameter1
                description: Some Parameter 1
                type: string
                mandatory: false
                sensitive: false
            parameter2:
                label: Parameter2
                description: Some Parameter 2
                type: string
                mandatory: false
                sensitive: false        
        parameterGroups:
          - label: Mandatory Parameters
            parameters: 
              - containerName
          - label: Some Other Parameters
            parameters: [ parameter1, parameter2 ]
        resources:
            abContainer: 
                type: apaas
                parameters:
                    name: { "Fn::GetParam":containerName }
                    subscription: MONTHLY
                    runtime: Java

The first general section is quite obvious: a name, version and description.
Then comes the parameters section. In fact, I only need one parameter to ask for the name of the container. But to demonstrate the Parameter Groups, I added two unused parameters.
The parameterGroups section covers the groups Mandatory Parameters and Some Other Parameters. These groups are for the UI to visually group the parameters.
The resources section is where the desired resources are listed with their parameters. Here we only have one ACCS instance with the Java runtime. For the name parameter, we insert the value of the containerName parameter using Fn::GetParam. The only other mandatory parameter is the chosen subscription.

2. Import the template

Now that we have a template, we are ready to go. It is a good idea to validate the template first by using PSM with the following line:

psm stack validate-template -f Simple-Java-Template.yaml

The output should look like this (from Notepad++)


As everything is OK with the template, it can be imported by using


D:\programme\python\Scripts\psm.cmd stack import-template -f "D:\Project\PSM\Simple-Java-Template.yaml"

and the output should look like this:


Switch to the UI to double check that the template is available in the Cloud Stack Manager


Of course this can also be done using the PSM command


psm stack list-templates

but the output gets rather long. Here are just the relevant first lines:


3. Create an instance from the template

Now that we have a valid template imported into Cloud Stack Manager, we can start creating instances based on that.

When creating an instance via the UI, the following dialog comes up

That shows what the parameter groups are good for: they group the parameters.

But this tutorial is about using the command line. Create an instance using the PSM via

psm stack create -n ArneStack -t Simple-Java-Template -f ROLLBACK -p containerName:ArneJava

Copy the returned Job ID and check the status via

psm stack operation-status -j 4761735

and the output should look like


The same can be found in the UI when clicking on the status text.

Wait until psm stack operation-status returns something like


{
    "activityDate":"2017-03-31T13:37:31.896+0000",
    "message":"Creation of stack ArneStack successful."
}

Double check by executing


psm stack list

to see your newly created stack listed:


4. Test and delete the instance

Go to the UI and click on the name of the stack. This will bring you to the Stack Overview.

Click on the Application Web URL


That should bring up the test page. With the instance up and running, this tutorial is finished. Delete the stack (and with it the instance) with


psm stack delete -n ArneStack

5. More information

Two more complex examples are delivered with your Oracle Cloud subscription; you will find them in the templates section of the Cloud Stack Manager UI: one with a MySQL DB and a PHP container (Oracle-LMP, a LAMP stack without Apache) and one with an Oracle DB and a WebLogic Server (Oracle-JCS-DBCS-Template).

Some Documentation:

Tuesday 28 March 2017

Running Oracle Cloud Stack Manager CLI from Notepad++

When creating template files for the Oracle Cloud Stack Manager, I find it useful to run the PSM Command Line Interface (PSM CLI) directly from my favorite editor, Notepad++. Here is a short how-to.

1. Install 32-bit Notepad++

The required plugins won't run with the 64-bit version. Not an issue for me, as I have never pushed the 32-bit version to its limits. Get it from the author's site:
https://notepad-plus-plus.org/download/v7.3.3.html

2. Install the Notepad++ Plugin Manager

Not sure if newer distributions already come with it. If you don't have the menu item Plugins|Plugin Manager, get it from SourceForge and install it: https://sourceforge.net/projects/npppluginmgr/

3. Install NppExec

Open the Plugin Manager (Plugins|Plugin Manager|Show Plugin Manager) and pick the NppExec plugin from the list of available plugins. After installing it and restarting Notepad++, it should be available.


4. Save a command for PSM CLI and run it

With a PSM template open, hit F6 in Notepad++. Enter the following command, e.g. for PSM template validation (adjust the path to your environment), and save it.

D:\programme\python\Scripts\psm.cmd stack validate-template -f "$(FULL_CURRENT_PATH)" 
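Other PSM commands can be saved the same way; for example, importing the currently open template would look like this (again, adjust the psm.cmd path to your environment):

D:\programme\python\Scripts\psm.cmd stack import-template -f "$(FULL_CURRENT_PATH)"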


5. Re-run PSM validate while editing

Via Ctrl-F6, the last command can be re-run, so just write your template and hit Ctrl-F6 to validate it.


Tuesday 14 March 2017

VMWare Cloud Onboarding and Scaling with Ravello

This tutorial is about importing a VM from an on-premise VMware infrastructure into the cloud with all its settings (storage, networking, VLANs etc.). Maybe you are running out of resources but need to grow a VM, or you just need to free some resources in your VMware environment without deleting any of the VMs.



Here is the VMware client tooling, showing a running Linux VM. This 2012-Test VM is to be migrated to a cloud environment.


Switch to Ravello and navigate to Library/VMs.


Select Import VM.


If you haven't installed the Import Tool yet, you get a download link. Install it if necessary, then log into the Import Tool and select Upload.


Choose Extract and Upload VM's from vCenter, vSphere or ESX (recommended).


Log in with your VMware credentials.


Browse your VMs.


Use the search if you have too many. Then choose the desired VM and click Upload.


Wait for the upload to finish. A good time to get some fresh coffee...


Finally done. That one took about an hour.


Switch to your library to find the new VM. Ravello also imported the VM sizing, which is 1 CPU, 4 GB RAM and 36 GB disk. Click on Edit and Verify VM and finish editing. The point here: after uploading, the VM is available within seconds, as it runs in its original format. No conversion is necessary; the VM can immediately be used in an application.


To use that newly uploaded VM in an application, go to Applications, click on Create Application and on Create to finish that dialog.


Drag & Drop the VM into the application.


Switching to the NICs tab shows that the VLAN tag has also been imported.


The Network tab shows the networking. Click on Publish to publish the application to a cloud service provider.


Make your choice, then click on Publish. In this case, we chose the Performance option; only with that option can you select a location, which you might want to do e.g. for latency reasons. Here, we selected the location 'Europe West 2'.


Here you are: your on-premise VM is now running in the cloud.
But what if you want some more horsepower, now that you have access to cloud resources? Why not add more RAM and CPU?


As we have already published this application to a cloud service provider, we need to stop the application first. Hint: if you already know that you want more capacity, do this before publishing.


Change the settings to your liking and click on Save.



This step is important, which is probably why the developers highlighted it and show a warning: click the Update button, or your changes won't be applied.


Restart the VM; here are your 4 cores.

So this tutorial demonstrated how fast and easily a VM can be copied from an on-premise VMware environment to a cloud service provider using Oracle Ravello. Relevant attributes such as networking, including VLAN tagging, have been imported. The VM has simply been moved to the cloud, but thanks to Ravello it does not need to be converted (as with other vendors' solutions); it just runs. Once running in the cloud, it is easy to scale the VM up (and down again) to your needs.