
How to push to AWS CodeCommit from Mac OS X

When trying to push to an AWS CodeCommit Git repository I receive the following error:

jvzoggel$ git push
 fatal: unable to access 'https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/myProject/': The requested URL returned error: 403

The Amazon website states:

If you are using macOS, use HTTPS to connect to an AWS CodeCommit repository. After you connect to an AWS CodeCommit repository with HTTPS for the first time, subsequent access will fail after about fifteen minutes. The default Git version on macOS uses the Keychain Access utility to store credentials. For security measures, the password generated for access to your AWS CodeCommit repository is temporary, so the credentials stored in the keychain will stop working after about 15 minutes. To prevent these expired credentials from being used, you must either:

  • Install a version of Git that does not use the keychain by default.
  • Configure the Keychain Access utility to not provide credentials for AWS CodeCommit repositories.

I used the second option to fix it, so:

  1. Open the Keychain Access utility (use Finder to locate it).
  2. Search for git-codecommit.
  3. Select the row, right-click and then choose Get Info.
  4. Choose the Access Control tab.
  5. In "Confirm before allowing access", choose git-credential-osxkeychain, and then choose the minus sign to remove it from the list.

[screenshot: Keychain Access, git-codecommit entry, Access Control tab]

After removing git-credential-osxkeychain from the list, you will see a pop-up dialog whenever you run a Git command. Choose Deny to continue. The pop-up is really annoying, so I will probably switch over to SSH soon.


Posted on 18-02-2017

 


How to install AWS CLI on Mac OS X

This was the 2nd time last month that I had to do it myself / figure it out / explain it, so I decided to note it down.

Install the AWS CLI on OS X

jvzoggel$ brew install awscli
jvzoggel$ echo 'complete -C aws_completer aws' >> ~/.bashrc

Note: the macOS Terminal starts login shells, which read ~/.bash_profile rather than ~/.bashrc, so make sure ~/.bashrc is sourced from there (or append the completer line to ~/.bash_profile instead).

AWS Identity and Access Management

From the IAM console, create your personal access key ID and secret access key. Make sure to note them both down in a safe place!

[screenshot: IAM console, creating an access key]

Configure the aws-cli

Use the generated AWS IAM credentials to configure the AWS CLI connection.

jvzoggel$ aws configure
 AWS Access Key ID [None]: xxxxxxxxx
 AWS Secret Access Key [None]: xxxxx
 Default region name [None]: eu-west-1
 Default output format [None]:

jvzoggel$ aws --version
 aws-cli/1.11.48 Python/2.7.10 Darwin/16.4.0 botocore/1.5.11
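For reference: aws configure stores these values in plain-text files under ~/.aws, roughly like this (keys shortened, default profile assumed):

~/.aws/credentials:
[default]
aws_access_key_id = xxxxxxxxx
aws_secret_access_key = xxxxx

~/.aws/config:
[default]
region = eu-west-1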

 


Posted on 16-02-2017

 


Publishing Apache Avro messages on a Apache Kafka topic

In earlier posts I played around with both Apache Avro and Apache Kafka. The next goal was naturally to combine both and start publishing binary Apache Avro data on an Apache Kafka topic.


Generating Java from the Avro schema

I use the Avro schema “location.avsc” from my earlier post.

$ java -jar avro-tools-1.8.1.jar compile schema location.avsc .

This results in a Location.java file for our project.

/**
* Autogenerated by Avro
*
* DO NOT EDIT DIRECTLY
*/
package nl.rubix.avro;

import org.apache.avro.specific.SpecificData;
// ... and more stuff

Make sure we have the Maven dependencies right in our pom.xml:

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.10.0.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.avro</groupId>
        <artifactId>avro</artifactId>
        <version>1.8.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.avro</groupId>
        <artifactId>avro-maven-plugin</artifactId>
        <version>1.8.1</version>
    </dependency>
</dependencies>

We can now use the Location object in Java to build our binary Avro message:

public ByteArrayOutputStream GenerateAvroStream() throws IOException
{
    // Schema: the generated Location class already carries it
    String schemaDescription = Location.getClassSchema().toString();
    Schema s = new Schema.Parser().parse(schemaDescription);
    System.out.println("Schema parsed: " + s);

    // Write an Avro data file (container): the schema is embedded as metadata along with the data
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    DatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(s);
    DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<GenericRecord>(writer);
    dataFileWriter.create(s, outputStream);

    // Build AVRO message
    Location location = new Location();
    location.setVehicleId(new org.apache.avro.util.Utf8("VHC-001"));
    location.setTimestamp(System.currentTimeMillis() / 1000L);
    location.setLatitude(51.687402);
    location.setLongtitude(5.307759);
    System.out.println("Message location " + location.toString());

    dataFileWriter.append(location);
    dataFileWriter.close();
    System.out.println("Encode outputStream: " + outputStream);

    return outputStream;
}
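Since Location is generated from the schema, the detour via the schema string and the generic writer is not strictly necessary. A shorter variant of the same stream-building code could look like this (a sketch, producing the same container output):

    // uses org.apache.avro.specific.SpecificDatumWriter
    DatumWriter<Location> writer = new SpecificDatumWriter<Location>(Location.class);
    DataFileWriter<Location> dataFileWriter = new DataFileWriter<Location>(writer);
    dataFileWriter.create(Location.getClassSchema(), outputStream);
    dataFileWriter.append(location);
    dataFileWriter.close();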

Once we have our ByteArrayOutputStream we can start publishing it on an Apache Kafka topic.

public void ProduceKafkaByte()
{
    try
    {
        // Get the Apache AVRO message
        ByteArrayOutputStream data = GenerateAvroStream();
        System.out.println("Here comes the data: " + data);

        // Start KAFKA publishing
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<String, byte[]> messageProducer = new KafkaProducer<String, byte[]>(props);
        ProducerRecord<String, byte[]> producerRecord = new ProducerRecord<String, byte[]>("test", "1", data.toByteArray());
        messageProducer.send(producerRecord);
        messageProducer.close();
    }
    catch(IOException ex)
    {
        System.out.println ("Well this error happened: " + ex.toString());
    }
}

When we subscribe to our topic we can see the byte stream cruising by:

INFO Processed session termination for sessionid: 0x157d8bec7530002 (org.apache.zookeeper.server.PrepRequestProcessor)
Objavro.schema#####ype":"record","name":"Location","namespace":"nl.rubix.avro","fields":[{"name":"vehicle_id","type":"string","doc":"id of the vehicle"},{"name":"timestamp","type":"long","doc":"time in seconds"},{"name":"latitude","type":"double"},{"name":"longtitude","type":"double"}],"doc:":"A schema for vehicle movement events"}##<##O#P#######HC-001#ڲ#
=######@#####;@##<##O#P#######016-10-18 19:06:24,005] INFO Expiring session 0x157d8bec7530005, timeout of 30000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2016-10-18 19:06:24,005] INFO Processed session termination for sessionid: 0x157d8bec7530005 (org.apache.zookeeper.server.PrepRequestProcessor)
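The console consumer only shows the raw container bytes. To get the Location records back, a consumer can decode the byte[] value with the schema that is embedded in the container header. A minimal sketch (assuming the same local broker, the "test" topic and the 0.10 consumer API):

// uses org.apache.kafka.clients.consumer.*, org.apache.avro.file.DataFileStream and org.apache.avro.generic.*
public void ConsumeKafkaByte() throws IOException
{
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "avro-demo");
    props.put("auto.offset.reset", "earliest");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

    KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<String, byte[]>(props);
    consumer.subscribe(Arrays.asList("test"));

    // Poll once for the demo; real code would poll in a loop
    ConsumerRecords<String, byte[]> records = consumer.poll(10000);
    for (ConsumerRecord<String, byte[]> record : records)
    {
        // Each value is a complete Avro container, so the reader takes the schema from its header
        DataFileStream<GenericRecord> reader = new DataFileStream<GenericRecord>(
                new ByteArrayInputStream(record.value()), new GenericDatumReader<GenericRecord>());
        for (GenericRecord location : reader)
        {
            System.out.println("Received location: " + location);
        }
        reader.close();
    }
    consumer.close();
}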

All code is available on GitHub.

 

Posted on 19-10-2016

 


Playing around with Apache Avro

When entering the world of Apache Kafka, Apache Spark and data streams, sooner or later you will find mention of another Apache project: Apache Avro. So…

What is Apache Avro ?


Avro is a remote procedure call and data serialization framework developed within Apache’s Hadoop project (source: Wikipedia). It is similar to Apache Thrift and Google Protocol Buffers. Probably the main reason Avro has gained popularity is that Hadoop-based big data platforms natively support serialization and deserialization of data in Avro format. Avro schemas are defined in JSON, and messages can be sent in both JSON and binary encoding. When the binary data file format is used, the schema is stored together with the actual data.

Playing with Apache Avro from the command line

So first let’s create an Avro schema “location.avsc” for the data records:

{"namespace": "nl.rubix.avro",
  "type": "record",
  "name": "Location",
  "fields": [
    {"name": "vehicle_id", "type": "string", "doc" : "id of the vehicle"},
    {"name": "timestamp", "type": "long", "doc" : "time in seconds"},
    {"name": "latitude", "type": "double"},
    {"name": "longtitude",  "type": "double"}
  ],
  "doc:" : "A schema for vehicle movement events"
}

And we have this example file “location1.json” with a valid data record:

{"vehicle_id": "1", "timestamp": 1476005672, "latitude": 51.687402, "longtitude": 5.307759}

Working with the Avro tools

Download the latest version of the Avro tools (currently 1.8.1) from the Avro Releases page.

$ java -jar avro-tools-1.8.1.jar
Version 1.8.1 of Apache Avro
Copyright 2010-2015 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
----------------
Available tools:
          cat extracts samples from files
      compile Generates Java code for the given schema.
       concat Concatenates avro files without re-compressing.
   fragtojson Renders a binary-encoded Avro datum as JSON.
     fromjson Reads JSON records and writes an Avro data file.
     fromtext Imports a text file into an avro data file.
      getmeta Prints out the metadata of an Avro data file.
    getschema Prints out schema of an Avro data file.
          idl Generates a JSON schema from an Avro IDL file
 idl2schemata Extract JSON schemata of the types from an Avro IDL file
       induce Induce schema/protocol from Java class/interface via reflection.
   jsontofrag Renders a JSON-encoded Avro datum as binary.
       random Creates a file with randomly generated instances of a schema.
      recodec Alters the codec of a data file.
       repair Recovers data from a corrupt Avro Data file
  rpcprotocol Output the protocol of a RPC service
   rpcreceive Opens an RPC Server and listens for one message.
      rpcsend Sends a single RPC message.
       tether Run a tethered mapreduce job.
       tojson Dumps an Avro data file as JSON, record per line or pretty.
       totext Converts an Avro data file to a text file.
     totrevni Converts an Avro data file to a Trevni file.
  trevni_meta Dumps a Trevni file's metadata as JSON.
trevni_random Create a Trevni file filled with random instances of a schema.
trevni_tojson Dumps a Trevni file as JSON.

Generate data record from JSON to Avro

$ java -jar avro-tools-1.8.1.jar fromjson --schema-file location.avsc location1.json > location.avro

The result is an output file “location.avro” containing the Avro binary. The interesting thing about Avro is that it encapsulates both the schema and the content in its binary data file.

Objavro.schema#####ype":"record","name":"Location","namespace":"nl.rubix.avro","fields":[{"name":"vehicle_id","type":"string","doc":"id of the vehicle"},{"name":"timestamp","type":"long","doc":"time in seconds"},{"name":"latitude","type":"double"},{"name":"longtitude","type":"double"}],"doc:":"A schema for vehicle movement events"}avro.codenull~##5############.1м##

Retrieving the JSON message from Avro data

$ java -jar avro-tools-1.8.1.jar tojson location.avro > location_output.json

Retrieving the Avro schema from Avro data

And because the schema is present in the data we can retrieve the schema as well.

$ java -jar avro-tools-1.8.1.jar getschema location.avro > location_output.avsc
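The same round trip also works from Java; a small sketch (assuming location.avro in the working directory and avro 1.8.1 on the classpath):

// uses org.apache.avro.file.DataFileReader and org.apache.avro.generic.*
DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
        new File("location.avro"), new GenericDatumReader<GenericRecord>());
System.out.println("Embedded schema: " + reader.getSchema());
while (reader.hasNext())
{
    System.out.println("Record: " + reader.next());
}
reader.close();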


Posted on 18-10-2016

 


Using the Oracle Database to store and present XML data

We investigated the possibilities to store our (old) BPM human task data outside the SOAINFRA database (for archiving, metrics and search queries on recent history), so we looked into a few options in a spike/PoC. Because the task data is structured XML of which we do not yet know what future needs will require, the safest solution was to store the complete XML document in a datastore.

Luckily the Oracle database has the option to store XML data and to use views to present the data “the old-fashioned way”. So the high-level design looks like this:

[diagram: high-level design, XML stored in the database and exposed through relational views]

First we create a table:

CREATE TABLE TEST_TAAK
( "ID"      NUMBER,
  "TAAK_ID" VARCHAR2(36 BYTE),
  "VERSIE"  VARCHAR2(4 BYTE),
  "PAYLOAD" "XMLTYPE"
);

Then insert a HumanTask (task) XML element into the table.
To make sure we don’t get any errors like:

  • “ORA-31011: XML parsing failed”
  • “SQL Error: ORA-01704: string literal too long; Cause: The string literal is longer than 4000 characters.”

we declare a variable to hold the XML string before we update/insert it.

DECLARE
  vXmlStr XMLTYPE := XMLTYPE('<task><title>My Task</title><payload><CaseNumber>Case-1</CaseNumber><DocumentUrl>http://mydocument</DocumentUrl><DocumentNaam>myDocument</DocumentNaam></payload><taskDefinitionURI>default/Process_1.0!1600.93239/htMyTask</taskDefinitionURI><ownerRole>MyCasus_1.0.Users</ownerRole><priority>3</priority><identityContext>jazn.com</identityContext><systemAttributes><xmlstuff>much stuff</xmlstuff><taskDefinitionName>htMyTask</taskDefinitionName><xmlstuff>more stuff</xmlstuff></systemAttributes><systemMessageAttributes><numberAttribute1>0.0</numberAttribute1></systemMessageAttributes><sca><applicationName>default</applicationName><xmlstuff>more stuff</xmlstuff></sca></task>');
BEGIN
  UPDATE TEST_TAAK SET PAYLOAD = vXmlStr WHERE ID = 1;
END;
/

[screenshot: TEST_TAAK row with the XML payload stored in the PAYLOAD column]

Next we create a view that uses XMLTABLE to expose elements from the XML payload as relational columns:

CREATE OR REPLACE FORCE VIEW TEST_TAAK_VW ("TAAK_ID", "VERSIE", "XML_TITLE", "XML_TASKDEFINITIONNAME", "XML_PAYLOAD")
AS
SELECT TT.TAAK_ID
     , TT.VERSIE
     , XMLRGL.TITLE
     , XMLRGL.TASKDEFINITIONNAME
     , XMLRGL.PAYLOAD
FROM TEST_TAAK TT
   , XMLTABLE( '/task' PASSING TT.PAYLOAD
       COLUMNS TITLE              VARCHAR2(40) PATH 'title'
             , TASKDEFINITIONNAME VARCHAR2(40) PATH 'systemAttributes/taskDefinitionName'
             , PAYLOAD            XMLTYPE      PATH 'payload'
     ) AS XMLRGL;
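Querying the task data is now plain SQL against the view, for example:

SELECT TAAK_ID, VERSIE, XML_TITLE, XML_TASKDEFINITIONNAME
FROM TEST_TAAK_VW
WHERE XML_TASKDEFINITIONNAME = 'htMyTask';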

And the result, voila:

 

[screenshot: result of querying TEST_TAAK_VW]

 

 

Posted on 22-09-2016 in Oracle

 


Getting started with Apache Kafka

Apache Kafka is a publish-subscribe messaging solution rethought as a distributed commit log.


The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

Some use cases for Kafka are stream processing, event sourcing, metrics and all other (large sets of) data that go from publisher to 1-n subscriber(s). A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients, making it a very efficient (and also easy to scale) high-volume messaging solution.

So Kafka is actually a good alternative to any more traditional (JMS/MQ) message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc.). In comparison to most messaging systems, Kafka has better throughput, built-in partitioning, replication and fault tolerance, which makes it a good solution for large-scale message processing applications. And all of this is free.

Getting Started

The Kafka website has an excellent quickstart tutorial. Download the latest version and work through the tutorial to send and receive your first messages from the console.

Playing around with Java

First we create a test topic.

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testIteration
Created topic "testIteration".

Earlier versions of Kafka came with a default serializer, but that created a lot of confusion. Since 0.8.2 you need to pick a serializer yourself, either the StringSerializer or ByteArraySerializer that come with the API, or build your own. Since both the key and the value in our example are strings, we use the StringSerializer.

Use the following Apache Kafka library as a Maven dependency (pom.xml).

<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.11</artifactId>
        <version>0.10.0.1</version>
    </dependency>
</dependencies>

The following lines of code produce/publish 10 messages on the Kafka topic.


public void ProduceIteration()
{
    int amountMessages = 10; // 10 is enough for the demo

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    // The new producer API only needs the key and value serializer classes
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    org.apache.kafka.clients.producer.Producer<String, String> producer = new KafkaProducer<String, String>(props);

    for (int i = 1; i <= amountMessages; i++)
    {
        ProducerRecord<String, String> data = new ProducerRecord<String, String>("testIteration", Integer.toString(i), Integer.toString(i));
        System.out.println("Publish message " + Integer.toString(i) + " - " + data);
        producer.send(data);
    }

    producer.close();
}

The messages can then be received from the topic with the console consumer:

jvzoggel$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic testIteration --property print.key=true --property print.timestamp=true

CreateTime:1474354960268        1       1
CreateTime:1474354960284        2       2
CreateTime:1474354960284        3       3
CreateTime:1474354960285        4       4
CreateTime:1474354960285        5       5
CreateTime:1474354960285        6       6
CreateTime:1474354960285        7       7
CreateTime:1474354960285        8       8
CreateTime:1474354960285        9       9
CreateTime:1474354960285        10      10
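For completeness, the same topic can also be consumed from Java instead of the console; a sketch against the 0.10 consumer API:

public void ConsumeIteration()
{
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "testIterationGroup");
    props.put("auto.offset.reset", "earliest");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
    consumer.subscribe(Arrays.asList("testIteration"));

    // Poll once for the demo; real code would poll in a loop
    ConsumerRecords<String, String> records = consumer.poll(10000);
    for (ConsumerRecord<String, String> record : records)
    {
        System.out.println("Received message " + record.key() + " - " + record.value());
    }
    consumer.close();
}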

 

 

Posted on 20-09-2016

 


What about CMMN (Case Management Model and Notation) ?

When we started our Case Management adventure, the Case Management Model and Notation (CMMN) standard was still relatively new and unknown (the official 1.0 version was released in May 2014). Neither our design tool (Enterprise Architect) nor our implementation software (Oracle Adaptive Case Management) supported the CMMN notation, so we created our own “way of modelling”.

In our teams this means that the process analysts/designers use powerful tools like SparxSystem Enterprise Architect (EA) with its integrated BPM(N) and SOA support, but have to manually add text documents describing the relation a specific process or task has within the whole case. You would think that the CMMN notation could easily be integrated into one of the leading architecture and design tools out there. There are some niche products that see a market for CMMN modelling, like Trisotech, but SparxSystem seems to have no plans at all.

What about the CMMN adoption by software vendors

It seems that only Camunda and IBM have adopted CMMN in their business process management offerings. Camunda even supports the beta CMMN 1.1 definition. The other (major) BPM vendors seem to hold back. If a small company like Camunda can do it, you would expect the other large vendors to be able to adopt the standard fairly easily as well.

I assume that the move to the cloud and trends like big data and mobile (including new challenges like better API management) have been the primary focus for most software vendors, and a big distraction from extending the core functionality of their current BPM/CM offerings. A shame, especially since Gartner states that case management software is currently an 8 billion dollar untapped market. Adopting the standard, and especially gaining market share, should sound interesting.

So these questions pop up and make me wonder:

  • What is the future of CMMN ?
  • Will companies like Oracle, Appian, TIBCO and PEGA eventually adopt CMMN in their on-premise BPM offering?
  • And with the Gartner prediction in mind and the unstoppable move to cloud, can we expect any Case Management cloud based solution in the near future ? And if so, will it support CMMN ?

I guess only time will tell.


 

Posted on 08-09-2016

 
