Free ebook: DevOps with OpenShift

When getting started with OpenShift, the concepts of Docker, Kubernetes, and the additional OpenShift toolset can be overwhelming, whether you come from a development or an infrastructure background. Luckily, three OpenShift experts at Red Hat explain in the book “DevOps with OpenShift” how to configure Docker application containers and the Kubernetes cluster manager with OpenShift’s tools.

The book covers (and I quote):

Discover how this infrastructure-agnostic container management platform can help companies navigate the murky area where infrastructure-as-code ends and application automation begins.

  • Get an application-centric view of automation—and understand why it’s important
  • Learn patterns and practical examples for managing continuous deployments such as rolling, A/B, blue-green, and canary
  • Implement continuous integration pipelines with OpenShift’s Jenkins capability
  • Explore mechanisms for separating and managing configuration from static runtime software
  • Learn how to use and customize OpenShift’s source-to-image capability
  • Delve into management and operational considerations when working with OpenShift-based application workloads
  • Install a self-contained local version of the OpenShift environment on your computer

Red Hat offers the eBook as a free download on their website as a promotion. It’s a great tutorial and something of a must-read for everyone starting with OpenShift.






Posted by on 23-08-2017 in Uncategorized



How to install and run OpenShift Origin on Mac OS X?

Installing OpenShift Origin on OS X

The easiest way to install most software on your OS X machine is through Homebrew. So let’s try that! :)

brew update
brew install openshift-cli

Check the installation

jvzoggel$ oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth


Next, try to bring up an OpenShift cluster on your Mac OS X host:

jvzoggel$ oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ...
WARNING: Cannot verify Docker version
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... OK
-- Checking Docker daemon configuration ... FAIL

Error: did not detect an --insecure-registry argument on the Docker daemon
Ensure that the Docker daemon is running with the following argument: 
You can run this command with --create-machine to create a machine with the right argument.

We have to add the registry to our Docker daemon configuration via Preferences -> Daemon, and then select Apply & Restart.
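On Docker for Mac this comes down to adding the service network that oc cluster up uses to the daemon’s insecure registries. A sketch of the relevant daemon configuration, assuming the default 172.30.0.0/16 range (verify against the exact range reported in the error message of your oc version):

```json
{
  "insecure-registries": ["172.30.0.0/16"]
}
```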

When I next tried to bring up the cluster, I got this error:

jvzoggel$ oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... 
   WARNING: Cannot verify Docker version
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using as the server IP
-- Starting OpenShift container ... FAIL
   Error: exec: "socat": executable file not found in $PATH

The error is somewhat misleading, because socat is a declared dependency (check here) of the Homebrew openshift-cli formula and should therefore have been installed automatically by the brew command. The real cause seems to be that Docker Toolbox on OS X requires an additional parameter when starting or stopping the OpenShift cluster.

oc cluster up --docker-machine=openshift
oc cluster down --docker-machine=openshift

Voila! Hope it helps



Posted by on 21-08-2017 in Uncategorized



How to restore your NEM NanoWallet?


If you lose your NEM NanoWallet configuration in your client for any reason (system crash, accidental purge, etc.), there is still an easy way to restore your client wallet and regain access to your account. For this blogpost I will use a wallet on the NEM testnet, but the same steps work for NEM mainnet wallets. Reading this might also reinforce how important it is to securely back up your password and private key after creating your wallet.

When creating your wallet, you received four important pieces of information:

  1. your password entered during the creation
  2. the .wlt file
  3. a text-string containing the raw wallet text string
  4. your secret private key

With a combination of these, or as a last resort just the secret private key, you can restore your wallet in different ways.

Option 1 – import the .wlt file

The easiest solution is to re-import the .wlt NanoWallet file. By default, your browser automatically downloads the .wlt file to its default download location when the wallet is created. You should always keep a secure backup of this file (as instructed during creation) in a safe place for moments like this. Click the Import Wallet button and browse to the folder where the file is located. You should see a notification in the top-right corner when the process succeeds, and the wallet should be available again in the “Wallet name:” pulldown menu.

Option 2 – restore the .wlt file manually

If you don’t have a backup of your .wlt file but made a copy of the raw wallet text-string during the creation process, you can restore the .wlt file manually. The .wlt file is nothing more than a text file containing your raw wallet string. So use a text editor (Notepad or Notepad++ on Windows, nano or vi on OS X / Linux) to create a new .wlt file, paste the raw wallet text-string into the document, and save it. Make sure your file has the .wlt extension, otherwise the NanoWallet won’t accept it when you perform the steps from Option 1 above.
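From the command line this is a one-liner. A minimal sketch, where the raw string and the file path are placeholders:

```shell
# Recreate the wallet file from the raw wallet text-string saved at creation.
# RAW_WALLET is a placeholder here, not a real wallet string.
RAW_WALLET='paste-your-raw-wallet-text-string-here'
printf '%s' "$RAW_WALLET" > /tmp/mywallet.wlt   # the .wlt extension is required
ls -l /tmp/mywallet.wlt
```

After that, the file can be imported exactly as in Option 1.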

Option 3 – use your private key

With the private key you can always restore your wallet. In the NanoWallet menu, click SIGN UP and select the Private Key wallet option. You can pick any new wallet name and/or new password; the secret(!) private key is what identifies your account as unique and allows its recovery. When you enter the private key in the text-box, the NanoWallet will automatically detect and show the account address. In the example, TC7WGJ-E5MX4D-ZTESBY-DVHLOH-ZVXEOD-J2XR3T-EUFU is my personal testnet wallet, which contains some free testnet NEM from the Testnet Faucet. Donations are welcome by the way; I still need some testnet NEM for my testnet Supernode. ;-)

After creating your private key wallet, you can select the restored wallet in the LOGIN section. Pick the restored wallet in the pulldown, use the new password, and log in to your account. The account shows my (test) NEM and we are back on track.


With any one of these combinations you can always restore your NanoWallet account and regain access to your NEM account:

  • the .wlt file + password
  • the raw wallet key string + password
  • the private key

That’s also the main reason you should securely store this information away when you create your wallet.

Hope it helps.


Posted by on 14-06-2017 in Uncategorized



What is the NEM blockchain?

Origin and concept

The idea of NEM started on a forum, where the initial plan was to improve Nxt. Nxt was more than a currency and contained features such as assets, smart contracts, a messaging system, and a marketplace. In January 2014, a group of developers decided against forking Nxt and instead wrote a new codebase from scratch. At launch in early 2015, the NEM team was reportedly large: over 15 developers and almost 30 marketers and translators. The project was originally named “New Economy Movement”, but that term was quickly deprecated (in 2015) and the short name NEM has been used ever since. From day one NEM was designed to scale and to be secure, fast, and easy to build on. The support of a standard API based on open standards simplifies adoption for developers.

In late 2015, some core NEM developers began working with Tech Bureau Corp on Mijin. Mijin is the first private blockchain project based on NEM technology. In 2016, several Japanese banks ran successful experiments on Mijin, which achieved 1,500 transactions per second with 2.5 million virtual bank accounts.


The NEM code is written entirely in Java, in a client-server architecture that allows light clients to operate without running a local full copy of the NEM blockchain. Much of the code is open source and can be found on GitHub. However, the NEM server component (NIS) is closed source. The NEM project is currently working to replace the server-side Java code with a new C++ rewrite. This project is called “Catapult”, and the developers state that, when finished, it will be released as open source. There is no formal indication of when Catapult will be released, beyond the statement “it’s done when it’s done”.


In 2016, around the time of the Ethereum DAO crisis, several Japanese banks were engaged in testing the Mijin network. The successful results of using the NEM blockchain as a new payment infrastructure, probably combined with the increasing interest in altcoins, resulted in NEM jumping towards the top of the largest cryptocurrencies by market capitalization. At the moment of writing, NEM is the number 4 coin out there and quickly gaining popularity.

Why NEM ?

NEM has some unique features, and it is the combination of those features that makes NEM such a promising blockchain technology.

NEM introduced a new consensus mechanism called Proof of Importance (PoI). It functions similarly to proof-of-stake, but takes more variables into account than one’s NEM holdings alone. The mechanism is designed to reward users’ contributions to the NEM community. This is a totally different approach from the better-known proof-of-work (PoW), where clients perform mining to handle transactions. No energy-hungry mining is needed anyway, because all 8,999,999,999 XEM (the currency of NEM) were premined.

NEM has an easy-to-use, open-standards-based (JSON and REST) API that works directly with NIS. Any transaction can be queried on any open node for free. This is a huge advantage, since developers don’t have to learn a new blockchain-specific language.
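As an illustration, a NIS node listens on port 7890 and answers plain REST calls such as GET /chain/height. The sketch below shows how the kind of JSON such a call returns can be handled with standard tools (the node URL and the height value are example data):

```shell
# Example query (not executed here): curl http://localhost:7890/chain/height
# A typical JSON reply, and extracting the height field:
RESPONSE='{"height":1149973}'
echo "$RESPONSE" | sed 's/.*"height":\([0-9]*\).*/\1/'
```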

NEM supports a domain naming system called namespaces. Like Internet domain names, namespaces consist of higher-level domains and subdomains. This allows one person with one domain to create many different subdomains for their different projects or outside business accounts. It also helps to build and maintain a reputation system for mosaics (see below).

NEM supports blockchain-based assets called mosaics. Mosaics are customizable in quantity, either fixed (capped) or mutable. They can be designed to be transferable or not, divisible or not, can have personalized descriptions, and can be sent alongside encrypted messages in one transaction. A mosaic can carry a levy, so that any transfer of that mosaic requires a special fee on top of the normal transaction fees. Each mosaic gets a name under a unique domain in the namespace system. New features are planned for the near future.

NEM includes native multisig transactions as part of the platform. While apps for other blockchains do support multisig transactions, the feature is often specific to an app, service, or wallet. NEM’s native functionality provides a standard which is applicable to all services.

NEM implements a node reputation system to help guard against malicious node attacks. Each node maintains a constantly updated trust value for every other node, based largely on how successful its synchronization attempts have been and how accurate its feedback is about other nodes. The goal is to allow successful synchronization of the blockchain even when a large portion of the node network is colluding.


No one can predict what the future of NEM will look like. We all know there is a lot of speculation about distributed ledger technology in general. But the popularity of NEM seems to be rising within the blockchain community, and with the release of Catapult the NEM community expects a huge leap for NEM in adoption and market capitalization. So as always, time will tell.


Posted by on 13-06-2017 in Uncategorized



How to setup SSH access to Oracle Compute Cloud Service Instances

After playing around with the CLI, it’s time to run an instance on the Oracle Compute Cloud Service. Oracle offers a broad range of images, divided into three categories: Oracle images, Private images, and Marketplace. The Marketplace holds almost 400 turn-key solutions (from PeopleSoft to WordPress), while the Oracle images category consists mostly of Oracle Enterprise Linux distributions.

For this blog I will start an Oracle Linux 7.2 machine on the Oracle Compute Cloud and connect through SSH from my own machine.

Setting up security (SSH)

First we need to create a private/public keypair to authenticate against the Linux instance. The private key is safely stored on my desktop; the public key will be uploaded to the Oracle Compute Cloud. Run the following command:

jvzoggel$ ssh-keygen -b 2048 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/jvzoggel/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): verySecret1
Enter same passphrase again: verySecret1
Your identification has been saved in /Users/jvzoggel/.ssh/id_rsa.
Your public key has been saved in /Users/jvzoggel/.ssh/

In the Oracle Compute Cloud Service console, select Network -> SSH Public Keys and upload the generated .pub file (which holds your public key and is safe to share).

Now that the Oracle cloud knows our public key, it can allow secure authentication to its instances. However, we still need some security configuration to make sure SSH traffic can pass through. This can be done during instance creation, but I think it’s better to do it upfront.

Creating a secure ip list (source)

Under Network -> Shared Network -> Security IP-Lists we add a new entry. An entry can hold multiple IP ranges, but in our case we just add one IP address: our public IP address on the internet. If you don’t know your public IP address, google “what is my IP address” and many sites will help you out. Enter your address as shown below and select Create.

Creating a secure list (target)

The next step is to create a security list. A security list is a bundle of one or more instances that you can use as source or destination in security rules. Before we create our security rule, and even before the instance itself, we create the list that will hold that one instance as the security rule’s destination.

Creating a secure rule (bring it all together)

You can use security rules to control network access between your instances and the Internet. In this case we create a rule that allows only SSH traffic from our own machine to the soon-to-be-created instance in our (still empty) security list. Oracle Compute recognises a lot of default security applications, among them SSH. Make sure to select the IP list as source and the security list as destination.

Security should be all set, let’s start our first instance.

Creating a secure Instance on Oracle Compute Cloud

Under Instances -> Instance we select Oracle Images and pick the latest version of Oracle Enterprise Linux. Make sure not to select Review and Create, but use the “>” button to the right of it. In my opinion the UX is not very clear here; it would be better to label it “Configure and Create” or something similar.

Go through the wizard, and during the Instance step make sure to add the public SSH key we uploaded earlier. This allows access to our instance over SSH without the need for a password.

In the Network step of the wizard we add the new instance to our freshly created security list. With this, the instance will inherit all the security rule configurations we made earlier.

Finish the wizard and wait for the Compute Cloud orchestration to complete. After that, your instance should be running.

Proof of the pudding

Check the public IP of your Oracle Compute Cloud instance and use it in your shell to connect with the SSH command.

And voila…

jvzoggel$ ssh -i /Users/jvzoggel/.ssh/id_rsa opc@ 
[opc@bd8ee6 /]
[opc@bd8ee6 /]$ whoami
[opc@bd8ee6 /]$
[opc@bd8ee6 /]$ cat /etc/oracle-release
Oracle Linux Server release 7.2



Posted by on 26-04-2017 in Uncategorized



Using the Oracle Public Cloud Command Line Interface (CLI)

The Oracle Public Cloud Command-Line Interface is a utility for managing your cloud environment from the command line. The current release (1.1.0) only supports the Compute service, but Oracle states that support for additional services is coming in future releases.

I like command-line interfaces, and being familiar with the implementations of Oracle’s cloud competitors, I was curious. So I downloaded the CLI tool here, and since I already had Python installed on my OS X machine, the startup time as a newcomer is relatively short.

The initial setup

We need three variables to connect to the Oracle Cloud:

  • The REST API endpoint
  • domain/username
  • password

You can get the REST endpoint by logging in to the Oracle Cloud and checking the service details under Oracle Compute Cloud Service.

So we use that REST endpoint for our OPC_API variable. The OPC_USER is a combination of the prefix “/Compute-”, your domain, and your Cloud username. Run the next two commands in your shell (using your own values, of course):

export OPC_API=""
export OPC_USER=/Compute-gse00000001/cloud.admin

We need to put the password in a text file, because otherwise the oracle-compute CLI will tell us:
ValidationError: Secure argument “password” can only be read from a file or terminal, but the argument “xxxxx” is not a regular file

So create a pwd.txt file, store the password there, and restrict its permissions:

chmod 600 /full/path/to/password/file
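Put together, and using example values for the path and the password:

```shell
# Store the Compute Cloud password in a file that only you can read
printf '%s' 'mySecretPassword' > /tmp/pwd.txt   # example password and path
chmod 600 /tmp/pwd.txt
ls -l /tmp/pwd.txt
```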


Next step is getting authenticated against the Oracle Compute Cloud.

oracle-compute auth /Compute-gse00000001/cloud.admin pwd.txt

This command returns an authentication token and sets the OPC_COOKIE environment variable. Because the CLI tool handles authentication by managing the cookies file, you don’t need to run an export command yourself. The authentication token expires 30 minutes after you run the auth command. The refresh_token command extends the current token’s expiry by another 30 minutes, but not beyond the session expiry time of 3 hours.

oracle-compute refresh_token

You can now use all the CLI commands, such as list, delete, add, create, discover, and get. At least for 30 minutes :)



Posted by on 25-04-2017 in Uncategorized



How to upload large files to Oracle Support calls

I usually update Oracle Support calls with screenshots, text snippets, or infamous one-liners. However, yesterday I needed to update a call with a dump of half the SOAINFRA table of our ACM/BPM environment. Poking around on the net, I found a cool script from André Karlsson that did the trick.



USER=''                       # Add your Oracle Support ID
HOST='transport.oracle.com'   # My Oracle Support upload host
FILE=$1
SR=$2
FILEname=$(basename "$FILE")

transport () {
    set -x
    curl -T "${FILE}" -o "${FILEname}" -u "${USER}" "https://${HOST}/upload/issue/${SR}/"
}

if [[ -z $2 ]] ; then
    cat << EOF
Usage: ${0} [file] [SR]
EOF
    exit 1
fi

transport

Call it with the file and the service request number as arguments:

jvzoggel$ ./ [FILENAME] 3-1234567890


All credits to André Karlsson and his blogpost here:


Posted by on 31-03-2017 in Uncategorized


Tags: ,