
Author Archives: jvzoggel

About jvzoggel

Consultant / the Netherlands / 's-Hertogenbosch / RUBIX.nl

AWS CloudFormation error “The parameter groupName cannot be used with the parameter subnet”

When trying to start an EC2 instance through CloudFormation I kept getting the error “The parameter groupName cannot be used with the parameter subnet”.

The (YAML) AWS CloudFormation looks something like this:

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-20ee5e5d
      InstanceType: t2.micro
      KeyName: t10_kubernetes
      PrivateIpAddress: 10.0.11.11
      SubnetId:
        Fn::ImportValue:
          !Sub "t10-vpc-k8s-subnet1-id"
      SecurityGroupIds:
        - !Ref KubernetesControllerSecurityGroup
      Tags:
        - Key: Name
          Value: t10-k8s-controller1

So the error sent me on a Google search with many hits, many questions, many suggestions, but very few real answers.

Until I saw this answer from johnhunsley:
I believe you have created a Security Group without specifying a VPC ID. You have then attempted to create a launch config which launches instances into a subnet within a VPC. Therefore, when it attempts to assign the security group to those instances it fails, because it expects the security group ID rather than the name.

So I think the response from AWS is in the running for the “Worst Error Message Ever” award, but the solution is very simple: don’t make the mistake of omitting your custom VPC ID when creating a new security group.

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      VpcId: !ImportValue t10-vpc-id
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller
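If you want to double-check an existing security group, the AWS CLI can show whether it is tied to a VPC at all. A quick sketch, assuming the AWS CLI is configured and using the group name from the example above:

```shell
# List the GroupId and VpcId for the security group from the template;
# a group created without a VPC would show no VpcId, which triggers this error
aws ec2 describe-security-groups \
  --filters Name=group-name,Values=t10-sg-k8s-controller \
  --query 'SecurityGroups[*].[GroupId,VpcId]' \
  --output table
```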

References

johnhunsley @ https://github.com/boto/boto/issues/350


Posted by on 15-08-2018 in AWS, CloudFormation

 


How to connect to the CEPH Object Gateway S3 API with Java

We use a CEPH storage solution and specifically want to use the Ceph Object Gateway with its S3 API through a Java client. The API is based on the AWS S3 standard, but requires some special tweaking to work. It took me some effort to get a working connection, so I'm sharing it here:

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.11.325</version>
</dependency>
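As a side note: the full aws-java-sdk bundle pulls in every AWS service. If you only talk to S3 you can (assuming the same version) depend on just the S3 module instead, which keeps the classpath much smaller:

```xml
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-s3</artifactId>
  <version>1.11.325</version>
</dependency>
```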

We can use either the new AmazonS3ClientBuilder

package nl.rubix.s3;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.SDKGlobalConfiguration;

public class AmazonS3ClientBuilder
{
  public static void main(String[] args)
  {
    String accessKey = "XXXXX";
    String secretKey = "XXXXX";

    // Our firewall on DEV does some weird stuff so we disable SSL cert check
    System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY,"true");
    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
      System.out.println("Cert checking is disabled");
    }
		
    // S3 Client configuration
    ClientConfiguration config = new ClientConfiguration();
    // Not the standard "AWS3SignerType", but explicitly the v2 signer type
    config.setSignerOverride("S3SignerType");
    config.setProtocol(Protocol.HTTPS);
    config.setProxyHost("proxy.rubix.nl");

    config.setProxyPort(8080);
    // S3 Credentials
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey,secretKey);
    // S3 Endpoint
    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new
      AwsClientBuilder.EndpointConfiguration("objects.dc1.rubix.nl", "");
    AmazonS3 s3 = com.amazonaws.services.s3.AmazonS3ClientBuilder.standard()
      .withClientConfiguration(config)
      .withCredentials(new AWSStaticCredentialsProvider(credentials))
      .withEndpointConfiguration(endpointConfiguration)
      .build();
    
    System.out.println("===========================================");
    System.out.println(" Connection to the Rubix S3 ");
    System.out.println("===========================================\n");
    try { 
       /*
       * List of buckets and objects in our account
       */
       System.out.println("Listing buckets and objects");
       for (Bucket bucket : s3.listBuckets())
       {
         System.out.println(" - " + bucket.getName() +" "
           + "(owner = " + bucket.getOwner()
           + " "
           + "(creationDate = " + bucket.getCreationDate());
         ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
           .withBucketName(bucket.getName()));
         for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) 
         {
           System.out.println(" --- " + objectSummary.getKey() +" "
           + "(size = " + objectSummary.getSize() + ")" +" "
           + "(eTag = " + objectSummary.getETag() + ")");
           System.out.println();
         }
       }
     }
     catch (AmazonServiceException ase)
     {
       System.out.println("Caught an AmazonServiceException, which means your request made it to S3, but was rejected with an error response for some reason.");
       System.out.println("Error Message:    " + ase.getMessage());
       System.out.println("HTTP Status Code: " + ase.getStatusCode());
       System.out.println("AWS Error Code: " + ase.getErrorCode());
       System.out.println("Error Type: " + ase.getErrorType());
       System.out.println("Request ID: " + ase.getRequestId());
     }
     catch (AmazonClientException ace)
     {
       System.out.println("Caught an AmazonClientException, which means the client encountered "
         + "a serious internal problem while trying to communicate with S3, "
         + "such as not being able to access the network.");
       System.out.println("Error Message: " + ace.getMessage());
     }
  }
}

or make it work with the older and deprecated AmazonS3Client:

package nl.rubix.s3;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class BasicAWSCredentials
{
    public static void main(String[] args)
    {
        String accessKey = "XXXXXXX";
        String secretKey = "XXXXXXX";
        System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");

    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
        System.out.println("Cert checking is disabled");
    }
    AWSCredentials credentials = new com.amazonaws.auth.BasicAWSCredentials(accessKey,secretKey);

    ClientConfiguration clientConfig = new ClientConfiguration();
    clientConfig.setSignerOverride("S3SignerType");
    clientConfig.setProxyHost("proxy.rubix.nl");
    clientConfig.setProxyPort(8080);

    AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
    conn.setEndpoint("objects.gn3.rubix.nl");

    for (Bucket bucket : conn.listBuckets())
    {
      System.out.println(" - " + bucket.getName() 
        + " "
        + "(owner = " + bucket.getOwner()
        + " "
        + "(creationDate = " + bucket.getCreationDate());
    }
  }
}

Hope it helps!

 

Posted by on 03-07-2018 in Uncategorized

 


How to dynamically generate XML Request in ReadyAPI / SOAPui ?

We have a functional test where we use a SOAP Request to start the processing of a couple of files based on a URL in the request. For a negative test (all corrupt files) we got a batch of 500 files. So to prevent a lot of copy/paste work in my SOAP Request I wanted to generate the request dynamically. I had done this before but couldn’t find/remember my own example, so when I eventually got it working I decided to share and store it here.

First some housekeeping

I always do some housekeeping in the init Groovy step of my tests to generate a unique id (for correlation, etc.) and more.

//Generate unique id - sequence
def v_sequence = new Date().time.toString()
 
// testRunner.testCase.setPropertyValue("sequence", v_sequence)
testRunner.testCase.testSuite.project.setPropertyValue("v_sequence", v_sequence)
 
// empty some variables
testRunner.testCase.testSuite.project.setPropertyValue("XML", "")
testRunner.testCase.testSuite.project.setPropertyValue("teller", "1")

Then the basic dataloop

Using an external datasource is the way to go if we want to load data. In the example I only use 1 field (url) in a text file with 500 lines:

The DataSource Loop function makes it possible to go back to the Groovy script “Generate XML Request” to build the request line by line.

The Groovy magic

Here is the Groovy script that holds the logic. For each url in the data loop we create a Document XML complex element, which we append to the list of documents.

import groovy.xml.StreamingMarkupBuilder
import groovy.xml.XmlUtil
import groovy.util.XmlSlurper
 
def sequence = context.expand( '${#Project#v_sequence}' )
def url = context.expand( '${DataSource#url}' )
def datumtijd = new Date().format("yyyy-MM-dd'T'HH:mm:ss")
def teller = context.expand( '${#Project#teller}' )
 
log.info('sequence = ' + sequence)
log.info('url = ' + url)
 
def filename = url.split('/').last()
log.info('filename = ' + filename)          
           
// Define all your namespaces here
// def nameSpacesMap = [soapenv: 'http://schemas.xmlsoap.org/soap/envelope/',ns: 'nl.rubix.ohmy',]
def builder = new StreamingMarkupBuilder()
builder.encoding ='utf-8'
def xmlDocument = builder.bind
{
    // namespaces << nameSpacesMap
    // use it like ns.element
    Document
    {
        Id('DOC.' + sequence + '.' + teller)
        Name(filename)
        Date(datumtijd)
        Url(url)
    }
}
 
// XML buildup to get rid of irritating XML version string which probably can be done much nicer
def v_document = XmlUtil.serialize(xmlDocument);
v_document = v_document.substring(39)
log.info("XML = " + v_document);
def origineelXML = context.expand( '${#Project#XML}' )
origineelXML = origineelXML + v_document
testRunner.testCase.testSuite.project.setPropertyValue("XML", origineelXML)
 
// increase counter
teller = teller.toInteger() + 1
testRunner.testCase.testSuite.project.setPropertyValue("teller", teller.toString())

The SOAP Request

Now we have a variable on project level which stores the complete list of documents, which we can just use as normal in the request like this:

  <soap:Body>
      <ns1:Request>
         <Documents>${#Project#XML}</Documents>
      </ns1:Request>
   </soap:Body>
 

Posted by on 21-03-2018 in Uncategorized

 


Problem with Spring Boot Starter Web and FasterXML Jackson dependency

While working with Spring Boot and developing a combined REST/JSON & SOAP/XML (not sexy, I know) API, I was able to build & compile, but at runtime I got this error:


Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
ERROR 2145 --- [ main] o.s.boot.SpringApplication : Application startup failed
..........
org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:137) ...........
...........
...........
Caused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/exc/InvalidDefinitionException

So Spring uses Jackson, and the Jackson library is composed of three components: Jackson Databind, Core, and Annotations. I did not add anything specific to my Maven pom.xml for Jackson, so the dependency got inherited somewhere. After some googling I figured out that the spring-boot-starter-parent uses some older FasterXML/Jackson libraries which seem to screw things up.


jvzoggel$ mvn dependency:tree -Dincludes=com.fasterxml.jackson.*

[INFO] --- maven-dependency-plugin:2.10:tree (default-cli) @ springboot ---
[INFO] nl.rubix.api:springboot:jar:0.0.1-SNAPSHOT
[INFO] \- org.springframework.boot:spring-boot-starter-web:jar:1.5.10.RELEASE:compile
[INFO] \- com.fasterxml.jackson.core:jackson-databind:jar:2.8.10:compile
[INFO] +- com.fasterxml.jackson.core:jackson-annotations:jar:2.8.0:compile
[INFO] \- com.fasterxml.jackson.core:jackson-core:jar:2.8.10:compile

So by overriding the dependency in my pom.xml I could make sure a newer version of Jackson was used:

<!-- Jackson due to SpringBootStarterParent dependency problems -->
<dependency>
   <groupId>com.fasterxml.jackson.core</groupId>
   <artifactId>jackson-databind</artifactId>
   <version>2.9.4</version>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.core</groupId>
   <artifactId>jackson-core</artifactId>
   <version>2.9.4</version>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.core</groupId>
   <artifactId>jackson-annotations</artifactId>
   <version>2.9.4</version>
</dependency>
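After adding the overrides you can rerun the dependency tree to confirm that Maven now resolves the newer Jackson version:

```shell
# Re-check which Jackson artifacts end up on the classpath
mvn dependency:tree -Dincludes=com.fasterxml.jackson.core
```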

Problem solved.


Posted by on 04-02-2018 in Uncategorized

 


How to set the default Java version in IntelliJ IDE projects

When working in a Java environment with multiple developers using their own IDE preference, I often get errors like this when opening a project with IntelliJ: “@Override is not allowed when implementing interface method”. Since I’m not a fulltime Java dev I tend to forget this stuff, so basically this is a reminder for myself …

Basically IntelliJ thinks the project is Java 1.5 by default, so while my console / mvn build works perfectly due to the Java 8 setting in the Maven compiler plugin, the IDE thinks differently:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.8</source>
    <target>1.8</target>
  </configuration>
</plugin>
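To verify what your build actually produced (independent of what the IDE thinks), you can inspect the class-file version with javap. A sketch with a hypothetical class path — major version 52 corresponds to Java 8, 49 to Java 5:

```shell
# Print the bytecode version of a compiled class (path is an example)
javap -verbose target/classes/nl/rubix/Example.class | grep 'major version'
```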

Project Language

File > Project Structure > Project > Language level

Module Language

If your project has multiple modules, check that they inherit the project language or, when needed, set the correct non-default level on the module:

File > Project Structure > Module > Language level

or

Right Click module > Open Module Settings 


Java Compiler for Default Projects

While you’re at it, set this value as the default for new projects as well:

File > Other Settings > Default Settings > Build, Execution, Deployment > Compiler > Java Compiler > Project bytecode version

 

 

Posted by on 16-01-2018 in Uncategorized

 


How does communication between the 3Scale API Gateway and API Management Portal work ?

API Gateway communication

The 3Scale API Gateway is a lightweight, high-performance API management gateway. Since it can scale and recover easily with tools like vanilla Docker/Kubernetes or OpenShift, new pods can be there in seconds to handle more APIs than you can imagine. After the API Gateway starts, and during runtime, it needs to communicate with the central API Management Portal to retrieve its configuration. For this it uses these 2 APIs:

  • The Service Management API to ask for authorization and report usage
  • The Account Management API (read-only) to get the list of available APIs and their configuration
The Service Management API is at su1.3scale.net (port 443), whereas the Account Management API is at MY_ACCOUNT-admin.3scale.net (port 443). The connection is initiated by the API Gateway itself, which from a security perspective is great: only outbound port 443 is needed.

Auto updating

The gateway is able to check the configuration from time to time and self-update when necessary. You can change the default by adjusting APICAST_CONFIGURATION_CACHE (its value is in seconds). This parameter controls the polling frequency with which the gateway connects to the Admin Portal.

-e APICAST_CONFIGURATION_CACHE=300
“Cache the configuration for N seconds. Empty means forever and 0 means don’t cache at all.”
Which means that the value should be 60 or greater, 0, or unset:
  • 0: don’t cache the configuration at all. Safe to use on staging together with APICAST_CONFIGURATION_LOADER=lazy, which loads the configuration on demand for each incoming request (to guarantee a complete refresh on each request).
  • 60: caches for 60 seconds; this is the minimum value if set.
  • Empty: caches forever; you probably do not want to use that in production.
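Putting this together, a minimal sketch of running APICast in plain Docker with an explicit cache setting could look like this (MY_ACCESS_TOKEN and MY_ACCOUNT are placeholders for your own values):

```shell
# Sketch: run APICast with a 300-second configuration cache
docker run -d -p 8080:8080 \
  -e THREESCALE_PORTAL_ENDPOINT=https://MY_ACCESS_TOKEN@MY_ACCOUNT-admin.3scale.net \
  -e APICAST_CONFIGURATION_CACHE=300 \
  -e APICAST_CONFIGURATION_LOADER=boot \
  quay.io/3scale/apicast:master
```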


Posted by on 15-01-2018 in Uncategorized

 


How to run the 3Scale APICast Gateway in OpenShift ?

In my last blogpost I showed how to run the 3Scale APICast Gateway in a Docker container. The next step was to run APICast in OpenShift (both are Red Hat family members), which was part of a showcase for a client. This blogpost shows you which steps to take.

High Level Overview

So this is how our setup looks, with very bright shiny pretty colours:

The preparation

What we need:

  • Endpoint for the API to hit, like apicast.rubix.local
  • Endpoint of the 3Scale SaaS environment (like jvzoggel-admin.3scale.net)
  • 3Scale SaaS Access Token which is something like 93b21fc40335f58ee3a93d5a5c343…..
  • The user key, which is shown at the bottom of the SaaS API Configuration screen in the curl example, or can be found under Application -> your app

First make sure the API endpoint is set in the 3Scale SaaS environment and copy the curl example at the bottom for your own convenience.

API endpoint to hit

You probably already have an Access Token; if not, you can generate one from Personal Settings -> Tokens -> Access Tokens

Make sure to always note down access tokens since you’re not able to retrieve them again.

The commands

We make sure OpenShift is running (I use my local OS X machine to run so adjust commands where needed when a remote OpenShift cluster is needed)

jvzoggel$ oc cluster up --docker-machine=openshift
Starting OpenShift using openshift/origin:v3.7.0 ...
OpenShift server started.

The server is accessible via web console at:
    https://192.168.99.100:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

Note the OpenShift URL (in this case 192.168.99.100) presented here; we will need it later.

Next step, we use the oc command to create a new project:

 
jvzoggel$ oc new-project "3scalegateway" --display-name="gateway" --description="Rubix 3scale gateway on OpenShift demo"
Now using project "3scalegateway" on server "https://192.168.99.100:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

The next step is to create a secret which the APICast image will use at runtime to connect to the 3Scale SaaS environment and download its configuration. The string needs the 3Scale SaaS Access Token and your MY_ACCOUNT-admin.3scale.net endpoint.

 
jvzoggel$ oc secret new-basicauth apicast-configuration-url-secret --password=https://MY_ACCESS_TOKEN@jvzoggel-admin.3scale.net
secret/apicast-configuration-url-secret

We are now going to create a new application in our project which uses a template to retrieve its 3Scale image and configuration settings. You can inspect the template by opening the URL in your browser.

jvzoggel$ oc new-app -f https://raw.githubusercontent.com/3scale/apicast/master/openshift/apicast-template.yml
--> Deploying template "3scalegateway/3scale-gateway" for "https://raw.githubusercontent.com/3scale/apicast/master/openshift/apicast-template.yml" to project 3scalegateway

     3scale-gateway
     ---------
     3scale API Gateway

     * With parameters:
        * CONFIGURATION_URL_SECRET=apicast-configuration-url-secret
        * CONFIGURATION_FILE_PATH=
        * IMAGE_NAME=quay.io/3scale/apicast:master
        * DEPLOYMENT_ENVIRONMENT=production
        * APICAST_NAME=apicast
        * RESOLVER=
        * SERVICES_LIST=
        * CONFIGURATION_LOADER=boot
        * BACKEND_CACHE_HANDLER=strict
        * LOG_LEVEL=
        * PATH_ROUTING=false
        * RESPONSE_CODES=false
        * CONFIGURATION_CACHE=
        * REDIS_URL=
        * OAUTH_TOKENS_TTL=604800
        * MANAGEMENT_API=status
        * OPENSSL_VERIFY=false
        * REPORTING_THREADS=0

--> Creating resources ...
    deploymentconfig "apicast" created
    service "apicast" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/apicast' 
    Run 'oc status' to view your app.

Your 2 pods should spin up in OpenShift; make sure to check the logs for errors.

We need to expose our pods to incoming traffic by adding a route, either through the OpenShift console or through the oc command. I assume you know how to add a route through the console, so here is the command:

 
jvzoggel$ oc expose svc/apicast --name=apicast --hostname=apicast.rubix.local
route "apicast" exposed

Now we can hit our API endpoint. Since apicast.rubix.local is not known on my host machine, I could edit the /etc/hosts file. But because I don’t like to fill up my hosts file, I instead add an HTTP Host header to the request containing the correct endpoint.

 
jvzoggel$ curl "http://192.168.99.100/echo?user_key=MY_KEY" -H "Host: apicast.rubix.local"
{
  "method": "GET",
  "path": "/echo",
  "args": "user_key=my_key_was_here",
  "body": "",
  "headers": {
    "HTTP_VERSION": "HTTP/1.1",
    "HTTP_HOST": "echo-api.3scale.net",
    "HTTP_ACCEPT": "*/*",
    "HTTP_USER_AGENT": "curl/7.54.0",
    "HTTP_X_3SCALE_PROXY_SECRET_TOKEN": "Shared_secret_sent_from_proxy_to_API_backend",
    "HTTP_X_REAL_IP": "172.17.0.1",
    "HTTP_X_FORWARDED_FOR": "192.168.99.1, 89.200.44.122, 10.0.101.13",
    "HTTP_X_FORWARDED_HOST": "echo-api.3scale.net",
    "HTTP_X_FORWARDED_PORT": "443",
    "HTTP_X_FORWARDED_PROTO": "https",
    "HTTP_FORWARDED": "for=10.0.101.13;host=echo-api.3scale.net;proto=https"
  },
  "uuid": "711e9799-1234-1234-b8b6-4287541238"
}jvzoggel:~ jvzoggel$ 

And as proof, the 3Scale SaaS dashboards show us the metrics:

So setting up 3Scale APICast in OpenShift is relatively easy. Further configuring APICast, setting up OpenShift routes between your consumers and APIs, and adding (Redis) caching adds more complexity, but still, hope this helps!


Posted by on 11-12-2017 in Uncategorized

 
