
How to set up SSH access to Oracle Compute Cloud Service Instances

After playing around with the CLI it's time to run an instance on the Oracle Compute Cloud Service. Oracle offers a broad range of images, divided into three categories: Oracle images, Private images and Marketplace. The Marketplace holds almost 400 turn-key solutions (from PeopleSoft to WordPress), while the Oracle images category mostly contains Oracle Enterprise Linux distributions.

For this blog I will start an Oracle Linux 7.2 machine on the Oracle Compute Cloud and connect to it through SSH from my own machine.

Setting up security (SSH)

First we need to create a private/public keypair to authenticate against the Linux instance. The private key stays safely stored on my desktop, while the public key will be uploaded to the Oracle Compute Cloud. Run the following command:

jvzoggel$ ssh-keygen -b 2048 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/jvzoggel/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): verySecret1
Enter same passphrase again: verySecret1
Your identification has been saved in /Users/jvzoggel/.ssh/id_rsa.
Your public key has been saved in /Users/jvzoggel/.ssh/id_rsa.pub.

In the Oracle Compute Cloud Service console we select Network -> SSH Public Keys and upload the generated .pub file (which holds your public key and is safe to share).
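To double-check the public key you are about to upload, simply print the .pub file:

jvzoggel$ cat /Users/jvzoggel/.ssh/id_rsa.pub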

Now that the Oracle cloud knows our public key it can allow secure authentication to its instances. However, we still need some security configuration to make sure the SSH traffic is allowed to pass through. This can be done during instance creation, but I think it's better to do it upfront.

Creating a security IP list (source)

Under Network -> Shared Network -> Security IP Lists we add a new entry. An entry can hold multiple IP ranges, but in our case we will just add a single IP address: our public IP address on the internet. If you don't know which public IP address you use to reach the internet, google "what is my IP address" and many sites will help you out. Enter your address and select Create.
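You can also get it straight from the shell, for example via the third-party service ifconfig.me (just a quick check, assuming your machine has outbound internet access):

jvzoggel$ curl https://ifconfig.me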

Creating a security list (target)

The next step is to create a security list. A security list is a bundle of one or more instances that you can use as source or destination in security rules. Before we create our security rule, and even before we create the instance itself, we create the (for now empty) list upfront; it will later hold that single instance and act as the destination of our security rule.

Creating a security rule (bringing it all together)

You can use security rules to control network access between your instances and the internet. In this case we will create a rule that allows only SSH traffic, from our own machine to the soon-to-be-created instance in our (still empty) security list. Oracle Compute comes with a lot of predefined security applications, SSH among them. Make sure to select the security IP list as source and the security list as destination.

Security should be all set, let’s start our first instance.

Creating a secure instance on the Oracle Compute Cloud

Under Instances -> Instance we select Oracle Images and pick the latest version of Oracle Enterprise Linux. Make sure not to select Review and Create but to use the ">" button to the right of it. In my opinion the UX is not really self-explanatory here; it would be better to label the button "Configure and Create" or something similar.

Go through the wizard and, during the Instance step, make sure to add the public SSH key we uploaded earlier. This will allow SSH access to our instance without the need for a password.

In the Network step of the wizard we add the new instance to our freshly created security list. With this, the instance will inherit all the security rule configurations we made earlier.

Finish the wizard and wait for the Compute Cloud Orchestration to complete. After that your instance should be running.

Proof of the pudding

Check the public IP of your Oracle Compute Cloud instance and use it in your shell to connect with the SSH command.

And voila…

jvzoggel$ ssh -i /Users/jvzoggel/.ssh/id_rsa opc@120.140.10.50 
[opc@bd8ee6 /]
[opc@bd8ee6 /]$ whoami
opc
[opc@bd8ee6 /]$
[opc@bd8ee6 /]$ cat /etc/oracle-release
Oracle Linux Server release 7.2


Using the Oracle Public Cloud Command Line Interface (CLI)

The Oracle Public Cloud Command-Line Interface is a utility that enables management of your cloud environment from the command line. The current release (1.1.0) only supports the Compute service, but Oracle states that support for additional services is coming in future releases.

I like command line interfaces and, being familiar with the implementations of Oracle's cloud competitors, I was curious. So I downloaded the CLI tool and, since I already had Python installed on my OS X machine, getting started as a newcomer was relatively quick.

The initial setup

We need 3 variables to connect to the Oracle Cloud:

  • The REST API endpoint
  • domain/username
  • password

You can get the REST endpoint by logging in to the Oracle Cloud and checking the service details under Oracle Compute Cloud Service.

The REST endpoint goes into OPC_API, and OPC_USER is a combination of the prefix "/Compute-", your identity domain and your cloud username. So run the next two commands in your shell (using your own values of course):

export OPC_API="https://api-z00.compute.us1.oraclecloud.com"
export OPC_USER=/Compute-gse00000001/cloud.admin

We need to put the password in a text file, because otherwise the oracle-compute CLI will tell us:
ValidationError: Secure argument “password” can only be read from a file or terminal, but the argument “xxxxx” is not a regular file

So create a pwd.txt file, store the password in it, and restrict its permissions:

chmod 600 /full/path/to/password/file
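Put together, something like this (just a sketch; use your own path and obviously your own password):

# write the password on a single line, then lock the file down to your user only
printf '%s\n' 'your-cloud-password' > /full/path/to/password/file
chmod 600 /full/path/to/password/file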

Authentication

The next step is authenticating against the Oracle Compute Cloud.

oracle-compute auth /Compute-gse00000001/cloud.admin pwd.txt

This command returns an authentication token and sets the OPC_COOKIE environment variable. The token expires after 30 minutes. As the CLI tool handles authentication by managing the cookies file, you don’t need to run the export command yourself.

The authentication token expires 30 minutes from the time you run the auth command. The refresh_token command extends the expiry of the current authentication token by another 30 minutes, but not beyond the session expiry time, which is 3 hours.

oracle-compute refresh_token

You can now use all the CLI commands like list, delete, add, create, discover, get and more. At least for 30 minutes :)
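For example, listing the instances in your container could look something like this (a sketch only; I am assuming the same <action> <object-type> <container> pattern used by the auth command above):

oracle-compute list instance /Compute-gse00000001/cloud.admin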


How to upload large files to Oracle Support calls

I usually update Oracle Support calls with screenshots, text snippets or infamous one-liners. However, yesterday I needed to update a call with a dump of half the SOAINFRA tables of our ACM/BPM environment. Poking around on the net I found a cool script from André Karlsson that did the trick.

Script

#!/bin/bash
# Upload a (large) file to an Oracle Support SR via the Oracle transport service

HOST='transport.oracle.com'
USER='your.oracle.account@yourprovider.org'     # Add your Oracle Support ID here
FILE=$1
SR=$2
FILEname=`basename $1`

transport () {
set -x
        # -T uploads the file, -u makes curl prompt for your Oracle Support password
        curl -T ${FILE} -o ${FILEname} -u ${USER} "https://${HOST}/upload/issue/${SR}/"
}

# Only upload when both a file and an SR number were given, otherwise print the usage
if [[ -n $2 ]] ; then
        transport
else
cat << EOF
${0} [file] [SR]
EOF

fi

Runtime

jvzoggel$ ./upload.sh [FILENAME] 3-1234567890

References

All credits to André Karlsson and his blogpost here:
https://www.protractus.com/2014/05/upload-attachment-to-oracle-support-from-command-line/

 

How to push to AWS CodeCommit from Mac OS X

When trying to push to an AWS CodeCommit Git repository I receive the following error:

jvzoggel$ git push
 fatal: unable to access 'https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/myProject/': The requested URL returned error: 403

The Amazon website states:

If you are using macOS, use HTTPS to connect to an AWS CodeCommit repository. After you connect to an AWS CodeCommit repository with HTTPS for the first time, subsequent access will fail after about fifteen minutes. The default Git version on macOS uses the Keychain Access utility to store credentials. For security measures, the password generated for access to your AWS CodeCommit repository is temporary, so the credentials stored in the keychain will stop working after about 15 minutes. To prevent these expired credentials from being used, you must either:

  • Install a version of Git that does not use the keychain by default.
  • Configure the Keychain Access utility to not provide credentials for AWS CodeCommit repositories.
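Before choosing, you can check which credential helper your Git installation uses; on a default macOS Git this typically returns osxkeychain:

jvzoggel$ git config --get credential.helper
osxkeychain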

I used the second option to fix it, so:

  1. Open the Keychain Access utility (use Finder to locate it)
  2. Search for git-codecommit
  3. Select the row, right-click and then choose Get Info.
  4. Choose the Access Control tab.
  5. In Confirm before allowing access, choose git-credential-osxkeychain, and then choose the minus sign to remove it from the list.


After removing git-credential-osxkeychain from the list, you will see a pop-up dialog whenever you run a Git command. Choose Deny to continue. The pop-up is really annoying so I will probably switch over to SSH soon.


How to install AWS CLI on Mac OS X

For the second time in a month I had to do this myself / figure it out / explain it, so I decided to note it down.

Install the AWS CLI on OS X

jvzoggel$ brew install awscli
jvzoggel$ echo 'complete -C aws_completer aws' >> ~/.bashrc
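To enable the command completion in your current shell right away, reload the file:

jvzoggel$ source ~/.bashrc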

AWS Identity and Access Management

From the IAM console create your personal access key ID and secret access key. Make sure to note them both down in a safe place!


Configure the aws-cli

Use the generated AWS IAM credentials to configure your AWS CLI connection.

jvzoggel$ aws configure
 AWS Access Key ID [None]: xxxxxxxxx
 AWS Secret Access Key [None]: xxxxx
 Default region name [None]: eu-west-1
 Default output format [None]:

jvzoggel$ aws --version
 aws-cli/1.11.48 Python/2.7.10 Darwin/16.4.0 botocore/1.5.11
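To quickly verify that the credentials work end-to-end, you can ask AWS who you are (assuming your IAM user is allowed to call STS):

jvzoggel$ aws sts get-caller-identity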

 


Publishing Apache Avro messages on a Apache Kafka topic

In earlier posts I played around with both Apache Avro and Apache Kafka. The next goal was naturally to combine both and start publishing binary Apache Avro data on an Apache Kafka topic.


Generating Java from the Avro schema

I use the Avro schema "location.avsc" from my earlier post.

$ java -jar avro-tools-1.8.1.jar compile schema location.avsc .

This results in the file Location.java for our project.

/**
* Autogenerated by Avro
*
* DO NOT EDIT DIRECTLY
*/
package nl.rubix.avro;

import org.apache.avro.specific.SpecificData;
// ... and more stuff

Make sure we have the Maven dependencies right in our pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.10.0.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.avro</groupId>
        <artifactId>avro</artifactId>
        <version>1.8.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.avro</groupId>
        <artifactId>avro-maven-plugin</artifactId>
        <version>1.8.1</version>
    </dependency>
</dependencies>

We can now use the Location object in Java to build our binary Avro message:

public ByteArrayOutputStream GenerateAvroStream() throws IOException
{
    // Schema
    String schemaDescription = Location.getClassSchema().toString();
    Schema s = Schema.parse(schemaDescription);
    System.out.println("Schema parsed: " + s);

    // Create a data file writer that binary-encodes the records and embeds the schema as metadata along with the data
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    DatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(s);
    DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<GenericRecord>(writer);
    dataFileWriter.create(s, outputStream);

    // Build AVRO message
    Location location = new Location();
    location.setVehicleId(new org.apache.avro.util.Utf8("VHC-001"));
    location.setTimestamp(System.currentTimeMillis() / 1000L);
    location.setLatitude(51.687402);
    location.setLongtitude(5.307759);
    System.out.println("Message location " + location.toString());

    dataFileWriter.append(location);
    dataFileWriter.close();
    System.out.println("Encode outputStream: " + outputStream);

    return outputStream;
}

When we have our ByteArrayOutputStream we can start publishing it on an Apache Kafka topic.

public void ProduceKafkaByte()
{
    try
    {
        // Get the Apache AVRO message
        ByteArrayOutputStream data = GenerateAvroStream();
        System.out.println("Here comes the data: " + data);

        // Start KAFKA publishing
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<String, byte[]> messageProducer = new KafkaProducer<String, byte[]>(props);
        ProducerRecord<String, byte[]> producerRecord = null;
        producerRecord = new ProducerRecord<String, byte[]>("test","1",data.toByteArray());
        messageProducer.send(producerRecord);
        messageProducer.close();
    }
    catch(IOException ex)
    {
        System.out.println ("Well this error happened: " + ex.toString());
    }
}

When we subscribe to our topic we can see the byte stream cruising by:

INFO Processed session termination for sessionid: 0x157d8bec7530002 (org.apache.zookeeper.server.PrepRequestProcessor)
Objavro.schema#####ype":"record","name":"Location","namespace":"nl.rubix.avro","fields":[{"name":"vehicle_id","type":"string","doc":"id of the vehicle"},{"name":"timestamp","type":"long","doc":"time in seconds"},{"name":"latitude","type":"double"},{"name":"longtitude","type":"double"}],"doc:":"A schema for vehicle movement events"}##<##O#P#######HC-001#ڲ#
=######@#####;@##<##O#P#######016-10-18 19:06:24,005] INFO Expiring session 0x157d8bec7530005, timeout of 30000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2016-10-18 19:06:24,005] INFO Processed session termination for sessionid: 0x157d8bec7530005 (org.apache.zookeeper.server.PrepRequestProcessor)
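For reference, an easy way to watch the topic is the console consumer that ships with Kafka (just a sketch; the topic name "test" matches the producer code above, and the script location depends on your Kafka installation):

$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning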

All the code is available on GitHub here.

 

Playing around with Apache Avro

When entering the world of Apache Kafka, Apache Spark and data streams, sooner or later you will come across another Apache project: Apache Avro. So …

What is Apache Avro ?


Avro is a remote procedure call and data serialization framework developed within Apache's Hadoop project (source: Wikipedia). It is much the same as Apache Thrift and Google Protocol Buffers. Probably the main reason Avro has gained popularity is that Hadoop-based big data platforms natively support serialization and deserialization of data in Avro format. Avro uses JSON-based schemas, and messages can be sent in both JSON and binary format. If the binary format is used, the schema is sent together with the actual data.

Playing with Apache Avro from the command line

So first let's create an Avro schema "location.avsc" for the data records:

{"namespace": "nl.rubix.avro",
  "type": "record",
  "name": "Location",
  "fields": [
    {"name": "vehicle_id", "type": "string", "doc" : "id of the vehicle"},
    {"name": "timestamp", "type": "long", "doc" : "time in seconds"},
    {"name": "latitude", "type": "double"},
    {"name": "longtitude",  "type": "double"}
  ],
  "doc:" : "A schema for vehicle movement events"
}

And we have this example file "location1.json" with a valid data record:

{"vehicle_id": "1", "timestamp": 1476005672, "latitude": 51.687402, "longtitude": 5.307759}

Working with the Avro tools

Download the latest version of the Avro tools (currently 1.8.1) from the Avro Releases page.

$ java -jar avro-tools-1.8.1.jar
Version 1.8.1 of Apache Avro
Copyright 2010-2015 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
----------------
Available tools:
          cat extracts samples from files
      compile Generates Java code for the given schema.
       concat Concatenates avro files without re-compressing.
   fragtojson Renders a binary-encoded Avro datum as JSON.
     fromjson Reads JSON records and writes an Avro data file.
     fromtext Imports a text file into an avro data file.
      getmeta Prints out the metadata of an Avro data file.
    getschema Prints out schema of an Avro data file.
          idl Generates a JSON schema from an Avro IDL file
 idl2schemata Extract JSON schemata of the types from an Avro IDL file
       induce Induce schema/protocol from Java class/interface via reflection.
   jsontofrag Renders a JSON-encoded Avro datum as binary.
       random Creates a file with randomly generated instances of a schema.
      recodec Alters the codec of a data file.
       repair Recovers data from a corrupt Avro Data file
  rpcprotocol Output the protocol of a RPC service
   rpcreceive Opens an RPC Server and listens for one message.
      rpcsend Sends a single RPC message.
       tether Run a tethered mapreduce job.
       tojson Dumps an Avro data file as JSON, record per line or pretty.
       totext Converts an Avro data file to a text file.
     totrevni Converts an Avro data file to a Trevni file.
  trevni_meta Dumps a Trevni file's metadata as JSON.
trevni_random Create a Trevni file filled with random instances of a schema.
trevni_tojson Dumps a Trevni file as JSON.

Generating an Avro data record from JSON

$ java -jar avro-tools-1.8.1.jar fromjson --schema-file location.avsc location1.json > location.avro

The result is an output file "location.avro" containing the Avro binary. The interesting thing about Avro is that it encapsulates both the schema and the content in its binary message.

Objavro.schema#####ype":"record","name":"Location","namespace":"nl.rubix.avro","fields":[{"name":"vehicle_id","type":"string","doc":"id of the vehicle"},{"name":"timestamp","type":"long","doc":"time in seconds"},{"name":"latitude","type":"double"},{"name":"longtitude","type":"double"}],"doc:":"A schema for vehicle movement events"}avro.codenull~##5############.1м##

Retrieving the JSON message from Avro data

$ java -jar avro-tools-1.8.1.jar tojson location.avro > location_output.json
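The output file should contain the same record we started with, one JSON record per line:

$ cat location_output.json
{"vehicle_id":"1","timestamp":1476005672,"latitude":51.687402,"longtitude":5.307759}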

Retrieving the Avro schema from Avro data

And because the schema is present in the data we can retrieve the schema as well.

$ java -jar avro-tools-1.8.1.jar getschema location.avro > location_output.avsc
