Tag Archives: API

How does communication between the 3Scale API Gateway and API Management Portal work ?

API Gateway communication

The 3Scale API Gateway is a lightweight, high-performance API Management Gateway. Since it can scale and recover easily with tools like vanilla Docker/Kubernetes or OpenShift, new pods can be there in seconds to handle more APIs than you can imagine. After the API Gateway starts, and during runtime, it needs to communicate with the central API Management Portal to retrieve its configuration. For this it uses these 2 APIs:

  • The Service Management API to ask for authorization and report usage
  • The Account Management API (read-only) to get the list of available APIs and their configuration
The Service Management API is at (port 443), whereas the Account Management API is at (port 443). The connection is initiated by the API Gateway itself, which from a security perspective is great: only outbound port 443 is needed.
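As an illustration (not from the original post), the authorize-and-report ("authrep") call the gateway performs against the Service Management API looks roughly like this. The user_key value is a placeholder, and su1.3scale.net is the SaaS Service Management host at the time of writing:

```shell
# Sketch only: USER_KEY is a placeholder for a real application key.
USER_KEY="my_user_key"
SM_HOST="https://su1.3scale.net"

# authrep authorizes the key and reports 1 hit in a single round trip
AUTHREP_URL="${SM_HOST}/transactions/authrep.xml?user_key=${USER_KEY}&usage%5Bhits%5D=1"
echo "${AUTHREP_URL}"
# curl -s "${AUTHREP_URL}"   # the gateway makes a call like this over outbound 443
```

Because the gateway always dials out, no inbound firewall rules towards the gateway are needed for this traffic.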

Auto updating

The gateway is able to check the configuration from time to time and self-update when necessary. You can change the default behaviour by adjusting APICAST_CONFIGURATION_CACHE (its value is in seconds). This parameter controls how often the Gateway polls the Admin Portal for configuration.

“Cache the configuration for N seconds. Empty means forever and 0 means don’t cache at all.”
This means the value should be 60 or greater, 0, or unset:
  • 0: don't cache the configuration at all. Safe to use on staging together with APICAST_CONFIGURATION_LOADER=lazy, which loads the configuration on demand for each incoming request (to guarantee a complete refresh on each request).
  • 60: caches for 60 seconds; this is the minimal value if set.
  • Empty: cache forever; you probably do not want to use that in production.
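As a sketch (the image name and portal URL below are placeholders, not from the original post), a gateway that re-reads its configuration every 5 minutes could be started like this:

```shell
# 300 respects the documented minimum of 60; 0 would disable caching entirely.
APICAST_CONFIGURATION_CACHE=300

# Requires Docker; ACCESS_TOKEN and ACCOUNT are placeholders for your own values:
# docker run --rm -p 8080:8080 \
#   -e THREESCALE_PORTAL_ENDPOINT="https://ACCESS_TOKEN@ACCOUNT-admin.3scale.net" \
#   -e APICAST_CONFIGURATION_CACHE="${APICAST_CONFIGURATION_CACHE}" \
#   quay.io/3scale/apicast:latest
```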



Posted by on 15-01-2018 in Uncategorized



How to run the 3Scale APICast Gateway in OpenShift ?

In my last blogpost I’ve shown how to run the 3Scale APICast Gateway in a Docker container. The next step would be to run APICast in OpenShift (both are Red Hat family members), and this was part of a showcase for a client. This blogpost will show you which steps to take.

High Level Overview

So this is what our setup looks like, in very bright, shiny, pretty colours:

The preparation

What we need:

  • Endpoint for the API to hit, like apicast.rubix.local
  • Endpoint of the 3Scale SaaS environment (like
  • 3Scale SaaS Access Token which is something like 93b21fc40335f58ee3a93d5a5c343…..
  • user key which is shown at the bottom of the SaaS API Configuration screen in the curl example or can be found in Application -> your app

First make sure the API Endpoint is set in the 3Scale SaaS environment and copy the curl example at the bottom for your own convenience.

API endpoint to hit

You probably already have an Access Token; if not, you can generate one from Personal Settings -> Tokens -> Access Tokens

Make sure to always note down access tokens, since you’re not able to retrieve them again.

The commands

We make sure OpenShift is running (I use my local OS X machine, so adjust the commands where needed if you run against a remote OpenShift cluster):

jvzoggel$ oc cluster up --docker-machine=openshift
Starting OpenShift using openshift/origin:v3.7.0 ...
OpenShift server started.

The server is accessible via web console at:

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

Note the OpenShift URL; we will need this later.

Next we will use the oc command to create a new project:

jvzoggel$ oc new-project "3scalegateway" --display-name="gateway" --description="Rubix 3scale gateway on OpenShift demo"
Now using project "3scalegateway" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~

to build a new example application in Ruby.

The next step is to set a configuration variable which the APICast image will use at runtime to connect to the 3Scale SaaS environment and download its configuration. The string needs the 3Scale SaaS Access Token and your

jvzoggel$ oc secret new-basicauth apicast-configuration-url-secret --password=

We are now going to create a new application in our project, which uses a template to retrieve its 3Scale image and configuration settings. You can check out the template by opening the URL in your browser.

jvzoggel$ oc new-app -f
--> Deploying template "3scalegateway/3scale-gateway" for "" to project 3scalegateway

     3scale API Gateway

     * With parameters:
        * CONFIGURATION_URL_SECRET=apicast-configuration-url-secret
        * DEPLOYMENT_ENVIRONMENT=production
        * APICAST_NAME=apicast
        * RESOLVER=
        * SERVICES_LIST=
        * BACKEND_CACHE_HANDLER=strict
        * LOG_LEVEL=
        * PATH_ROUTING=false
        * RESPONSE_CODES=false
        * REDIS_URL=
        * OAUTH_TOKENS_TTL=604800
        * MANAGEMENT_API=status
        * OPENSSL_VERIFY=false

--> Creating resources ...
    deploymentconfig "apicast" created
    service "apicast" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/apicast' 
    Run 'oc status' to view your app.

Your 2 pods should spin up in OpenShift; make sure to check the logs to verify there are no errors.

We need to expose our pods to incoming traffic by adding a route, either through the OpenShift console or through the oc command. I assume you know how to add a route through the console, so here is the command:

jvzoggel$ oc expose svc/apicast --name=apicast --hostname=apicast.rubix.local
route "apicast" exposed

Now we can hit our API endpoint. Since I need to hit the API endpoint with apicast.rubix.local, which is not known on my host machine, I could edit the /etc/hosts file. But because I don’t like to fill up my hosts file, I add an HTTP Host header to my request containing the correct endpoint.
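For reference, the /etc/hosts alternative would be a single extra line mapping the route hostname to the address the OpenShift router listens on (127.0.0.1 here is an assumption for a local cluster):

```
127.0.0.1   apicast.rubix.local
```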

jvzoggel$ curl "" -H "Host: apicast.rubix.local"
{
  "method": "GET",
  "path": "/echo",
  "args": "user_key=my_key_was_here",
  "body": "",
  "headers": {
    "HTTP_VERSION": "HTTP/1.1",
    "HTTP_HOST": "",
    "HTTP_ACCEPT": "*/*",
    "HTTP_USER_AGENT": "curl/7.54.0",
    "HTTP_X_3SCALE_PROXY_SECRET_TOKEN": "Shared_secret_sent_from_proxy_to_API_backend",
    "HTTP_X_REAL_IP": "",
    "HTTP_X_FORWARDED_PROTO": "https",
    "HTTP_FORWARDED": "for=;;proto=https"
  },
  "uuid": "711e9799-1234-1234-b8b6-4287541238"
}
jvzoggel:~ jvzoggel$

And to prove the statistics work, the 3Scale SaaS dashboards show us the metrics:

So setting up 3Scale APICast in OpenShift is relatively easy. Further configuring APICast, setting up OpenShift routes between your consumers and APIs, and adding (Redis) caching adds more complexity, but still, hope this helps!



Posted by on 11-12-2017 in Uncategorized



How to run the 3Scale APICast Gateway in a Docker container ?

With the rise of API popularity and the necessity for decent API Management, some relatively new players emerged, and by now almost all have been acquired by bigger fish in the software pond. One of these leading API Management software companies was 3Scale, and in 2016 it was acquired by Red Hat. 3Scale was then a SaaS-only API Management offering. Soon after the acquisition, however, Mike Piech, vice president and general manager of middleware at Red Hat, wrote in a blog post:

“3scale is today offered in an as-a-Service model and joins Red Hat OpenShift Online and Red Hat Mobile Application Platform in that format. We plan to quickly create an on-prem version and open source the code in the Red Hat way.”

The cloud-hosted API gateway service APICast was published to GitHub. And in Q2 2017 the official announcement came that the Red Hat 3Scale API Management platform was available as a Docker container and as a stand-alone installation for deployment on-premise. So now the implementation choice roughly consists of full SaaS and 3 on-premise variations: vanilla Docker, Red Hat OpenShift and stand-alone.

This blogpost will show the steps and commands to run the 3Scale APICast Gateway in “vanilla” Docker containers.

High Level Overview

The commands

What we need:

  • Endpoint for the local API to hit, like api.rubix.local
  • Endpoint of the 3Scale SaaS environment, like
  • provider api key which is something like 93b21fc40335f58ee3a93d5a5c343 and can be found under your 3Scale SaaS account
  • user key which is shown at the bottom of the SaaS API Configuration screen in the curl example or can be found in Application -> your app

API endpoint to hit


First we will create a rubix.local Docker network for our APICast and API client nodes to run in:

jvzoggel$ docker network create rubix.local

We will start the 3Scale APICast Gateway docker image and configure it to forward port 8080, connect to the rubix.local network, and use the environment property THREESCALE_PORTAL_ENDPOINT to connect to our 3Scale SaaS environment and retrieve the API configuration.

jvzoggel$ docker run --name apicast --net=rubix.local --rm -p 8080:8080 -h apicast.rubix.local -e THREESCALE_PORTAL_ENDPOINT=
dnsmasq[8]: started, version 2.76 cachesize 1000
dnsmasq[8]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
dnsmasq[8]: using nameserver 8 #0 dnsmasq[8]: reading /etc/resolv.conf
dnsmasq[8]: using nameserver options#0
dnsmasq[8]: using nameserver
dnsmasq[8]: cleared cache

The API Gateway is running as apicast.rubix.local, the same endpoint URI as configured in the SaaS environment. So it’s time to hit it with our first request. We are going to use a CentOS docker image as an API client and run it in interactive mode.

jvzoggel$ docker run --name centos --net=rubix.local --rm -it -h centos.rubix.local centos:latest bash
[root@centos /]# hostname

We can first check that both docker instances can reach each other with a ping. Once that is proven, we execute the following command, which connects to the APICast docker image on our rubix.local network. The APICast instance will download the configuration from the 3Scale SaaS solution and route our API call to the configured echo service back-end.

[root@centos /]# curl "http://apicast.rubix.local:8080/echo?user_key=USERKEY"
{
  "method": "GET",
  "path": "/echo",
  "args": "user_key=XXXXXX",
  "body": "",
  "headers": {
    "HTTP_HOST": "",
    "HTTP_ACCEPT": "*/*",
    "HTTP_USER_AGENT": "curl/7.29.0",
    "HTTP_X_3SCALE_PROXY_SECRET_TOKEN": "Shared_secret_sent_from_proxy_to_API_backend",
    "HTTP_FORWARDED": "for=;;proto=https"
  },
  "uuid": "24e5737c-c3ff-4333-bbbb-6030aaea6521"
}

Hope it helps !


Posted by on 28-11-2017 in Uncategorized



What is the NEM blockchain ?

Origin and concept

The idea of NEM started on a forum, where the initial plan was to improve Nxt. The concept of Nxt was more than currency and contained features such as assets, smart contracts, a message system and a marketplace. In January 2014 a group of developers decided against forking Nxt and instead wrote a new codebase from scratch. At launch in early 2015, the team of NEM was reportedly large: over 15 developers and almost 30 marketers and translators. The project was originally named “New Economy Movement”, but this term was quickly deprecated (2015) and just NEM in short has been used since then. From day one NEM was designed to scale and to be secure, fast and easy to build on. The support of a standard API based on open standards simplifies adoption for developers.

In late 2015, some core NEM developers began working with Tech Bureau Corp on Mijin. Mijin is the first private blockchain project based on NEM technology. In 2016, several Japanese banks ran successful experiments on Mijin, which achieved 1,500 transactions per second with 2.5 million virtual bank accounts.


The NEM code is written entirely in Java in a client-server architecture, which allows light clients to operate without running a local full copy of the NEM blockchain. Much of the code is open source and can be found on GitHub. However, the NEM server-based component (NIS) is closed source. Currently the NEM project is working to replace the server-side Java code with a new C++ rewrite. This project is called “Catapult”, and the developers state that, when finished, it will be released as open source. There is no formal indication of when Catapult will be released, beyond the statement “it’s done when it’s done”.


In 2016, around the time of the Ethereum DAO crisis, several Japanese banks were engaged in the Mijin network testing. The successful results using the NEM blockchain as a new payment infrastructure, probably combined with the increasing interest in altcoins, resulted in NEM jumping into the top of the largest cryptocurrencies by market capitalization. At the moment of writing, NEM is the number 4 coin out there and quickly gaining popularity.

Why NEM ?

NEM contains some unique features, and the combination of those is actually what makes NEM such a promising blockchain technology.

NEM introduced a new consensus mechanism called Proof of Importance (PoI). It functions similarly to proof-of-stake, but it includes more variables than one’s NEM account holdings. The mechanism is designed to reward users’ contributions to the NEM community. This is a totally different approach than the better-known PoW (proof-of-work), where clients perform mining to handle transactions. No energy-craving mining is needed anyway, because all 8,999,999,999 XEM (the currency of NEM) are premined.

NEM has an easy-to-use, open-standards (JSON and REST) API that works directly with NIS. Any transaction can be requested on any open node for free. This is a huge advantage, since developers don’t have to learn a new, specific language.
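To illustrate how simple that API is (the node hostname below is a placeholder; NIS listens on port 7890 by default): a plain HTTP GET against any open NIS node returns JSON, for example the current chain height:

```shell
# Sketch only: the hostname is a placeholder for a real open NIS node.
NIS_NODE="http://some-open-node.example:7890"
echo "${NIS_NODE}/chain/height"
# curl -s "${NIS_NODE}/chain/height"   # returns JSON like {"height": ...}
```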

Support of a domain naming system on NEM, called namespaces. Like Internet domain names, these contain higher-level domains and subdomains. This allows one person with one domain to create many different subdomains for their different projects or outside business accounts. It also helps to build and maintain a reputation system for mosaics (see below).

Support of blockchain-based assets, which are called mosaics. These mosaics are customizable in amount, with either a fixed, capped quantity or a mutable one. They can be designed to be transferable or not, divisible or not, have personalized descriptions, and can be sent alongside encrypted messages in one transaction. Mosaics can carry a levy, so that any such mosaic being sent on the network requires a special fee to be paid on top of the normal transaction fees. Each mosaic gets a name that is under a unique domain in the namespace system. New features are planned for the near future.

NEM includes native multisig transactions as part of the platform. While apps for other blockchains do support multisig transactions, the feature is often specific to an app, service, or wallet. NEM’s native functionality provides a standard which is applicable to all services.

NEM implements a node reputation system to help guard against malicious node attacks. Each node maintains a constantly updated trust value for every other node, based largely on how successful its attempts to synchronize have been and how accurate its feedback about other nodes is. The goal is to allow successful synchronization of the blockchain even when a large portion of the node network is colluding.


No one can predict what the future of NEM will look like. We all know there is much speculation about distributed ledger technology in general. But the popularity of NEM seems to be rising within the blockchain community, and with the release of Catapult the NEM community expects a huge leap for NEM regarding adoption and market capitalization. So as always, time will tell.


Posted by on 13-06-2017 in Uncategorized

