
Author Archives: jvzoggel

About jvzoggel

Consultant / the Netherlands / 's-Hertogenbosch / RUBIX.nl

Example AWS CloudFormation template for network load balancer


We needed a public network load balancer with SSL (through AWS Certificate Manager). It took me some retries to get it right, since most examples out there are based on the classic or application load balancer, so sharing it here:

  Terra10NetworkLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: t10-networkloadbalancer
      Scheme: internet-facing
      Subnets:
        - !Ref Terra10Subnet
      Type: network
      Tags:
        - Key: Name
          Value: t10-networklb
  Terra10NetworkLoadBalancerTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: t10-networklb-target
      Port: 443
      Protocol: TCP
      VpcId: !ImportValue t10-vpc-id
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 60
      Targets:
      - Id: !Ref Terra10EC2Instance1
        Port: 443
      - Id: !Ref Terra10EC2Instance2
        Port: 443  
      Tags:
        - Key: Name
          Value: t10-networklb-target
  Terra10NetworkLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref Terra10NetworkLoadBalancerTargetGroup
      LoadBalancerArn: !Ref Terra10NetworkLoadBalancer
      Port: '443'
      Protocol: TCP
  Terra10NetworkLoadBalancerListenerCert:
    Type: AWS::ElasticLoadBalancingV2::ListenerCertificate
    Properties:
      Certificates:
        - CertificateArn: arn:aws:acm:eu-west-1:xxxaccountxxx:certificate/123456....
      ListenerArn: !Ref Terra10NetworkLoadBalancerListener
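As an alternative, if your region and account support TLS listeners on network load balancers, the load balancer can terminate TLS itself; the certificate then goes on the listener directly instead of in a separate ListenerCertificate resource. A sketch of that variant (an assumption on my part, not part of the original working setup):

```yaml
  # Hypothetical variant: let the NLB terminate TLS itself
  Terra10NetworkLoadBalancerTlsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref Terra10NetworkLoadBalancer
      Port: 443
      Protocol: TLS
      Certificates:
        - CertificateArn: arn:aws:acm:eu-west-1:xxxaccountxxx:certificate/123456....
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref Terra10NetworkLoadBalancerTargetGroup
```

With TLS termination on the listener the targets can then listen on plain TCP, at the cost of the load balancer holding the certificate.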

 


Posted by on 29-08-2018 in AWS, CloudFormation

 


How to share values between separate AWS CloudFormation stacks

In AWS CloudFormation templates you often need to reference a component created earlier, for instance the unique ID of a VPC, subnet, security group or instance. You have two choices: continue with separate stacks or combine them into a nested stack. In our case we wanted to keep separate stacks, so we needed a way to export/import settings from an earlier network-related CloudFormation stack holding our VPC and subnet identifiers, which the new stack uses. To share information between stacks we can export output values from the first stack and import these values in new stacks. Other stacks in the same AWS account and region can import the exported values.

Example of a VPC ID which we export in our network stack (named t10-cf-vpc):

######################
## OUTPUT
######################
Outputs:
    VPC:
        Description: A reference to the created VPC
        Value: !Ref VPC
        Export:
          Name: t10-vpc-id

You can easily check whether the export succeeded by using the AWS CloudFormation console, or by using the AWS CLI to get a list:

 
jvzoggel$ aws cloudformation list-exports
{
    "Exports": [
        {
            "ExportingStackId": "arn:aws:cloudformation:xxxxx:xxxxxxx:stack/t10-cf-vpc/xxxxxxxx",
            "Name": "t10-vpc-id",
            "Value": "vpc-xxxxxxx"
        },
        .....

Importing the values in new stacks

The new CloudFormation stack can use the exported value simply by using the !ImportValue function:

Parameters:
  NetworkStackNameParameter:
    Description: Reference to the vpc-10 stack
    Type: String
    Default: 't10-cf-vpc'

Resources:
  MyTerra10SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: MyTerra10SecurityGroup
      GroupDescription: MyTerra10SecurityGroup
      VpcId: !ImportValue t10-vpc-id

Note: After another stack imports an output value, you can’t delete the stack that is exporting the output value or modify the exported output value.
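Note that the NetworkStackNameParameter above is not actually used by the !ImportValue call, which hardcodes the export name. If you name your exports after the exporting stack, you could derive the export name from the parameter instead; a sketch, assuming a naming convention of `<stackname>-vpc-id` (which differs from the export name used above):

```yaml
      # Hypothetical variant: build the export name from the stack name parameter
      VpcId:
        Fn::ImportValue: !Sub '${NetworkStackNameParameter}-vpc-id'
```

This keeps the coupling between stacks in a single parameter rather than scattered over hardcoded export names.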


 

Posted by on 28-08-2018 in AWS

 


Using CloudFormation to bootstrap EC2 instances with scripts from CodeCommit

While spinning up EC2 instances you can bootstrap them with packages, files, etc. in different ways. For our stack we wanted to pull scripts from an AWS CodeCommit repository to make life easier.

The (bash) scripts are stored in our CodeCommit repository, so first we need to make sure the EC2 instances are allowed to access the repository while spinning up. So we created an IAM policy with sufficient rights and attached the policy to an IAM role which we can attach to our EC2 instances.

AWS IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:GitPull"
            ],
            "Resource": "arn:aws:codecommit:*:*:terra10-scripts"
        },
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:Get*",
                "codecommit:BatchGetRepositories",
                "codecommit:List*"
            ],
            "Resource": "*"
        }
    ]
}
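The policy above has to end up on the t10-ec2-role that the template below refers to. A sketch of how the role and its instance profile could be declared in CloudFormation (the resource names and the inline-policy wiring are my assumptions, not part of the original template):

```yaml
  # Hypothetical role + instance profile behind IamInstanceProfile: t10-ec2-role
  T10EC2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: t10-ec2-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
  T10EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: t10-ec2-role
      Roles:
        - !Ref T10EC2Role
```

Note that the IamInstanceProfile property on an EC2 instance references the instance profile name, not the role name, which is why both carry the same name here.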

We make sure the EC2 instances use the new IAM role by defining IamInstanceProfile with our example IAM role t10-ec2-role in the CloudFormation template. Further on, by using the UserData segment we can execute scripts during bootstrap of the server. Installing the AWS CLI is required for the credential helper.

T10Controller1:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref HostAMI
    InstanceType: t2.micro
    IamInstanceProfile: t10-ec2-role
    PrivateIpAddress: 10.0.11.11   
    Tags:
      - Key: Name
        Value: t10-k8s-controller1
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        apt-get update
        apt-get -y install awscli
        cd /tmp
        echo "######## git pull AWS CodeCommit files"
        sudo git config --global credential.helper '!aws codecommit credential-helper $@'
        sudo git config --global credential.UseHttpPath true
        sudo git clone https://git-codecommit.xxxxxx.amazonaws.com/v1/repos/terra10-scripts /tmp/terra10-scripts

 

 

Posted by on 26-08-2018 in AWS

 


AWS CloudFormation error “The parameter groupName cannot be used with the parameter subnet”

When trying to start an EC2 instance through CloudFormation I kept getting the error “The parameter groupName cannot be used with the parameter subnet”.

The (YAML) AWS CloudFormation template looks something like this:

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-20ee5e5d
      InstanceType: t2.micro
      KeyName: t10_kubernetes
      PrivateIpAddress: 10.0.11.11
      SubnetId:
        Fn::ImportValue:
          !Sub "t10-vpc-k8s-subnet1-id"
      SecurityGroupIds:
        - !Ref KubernetesControllerSecurityGroup
      Tags:
        - Key: Name
          Value: t10-k8s-controller1

So the error led to a Google search with many hits, many questions, many suggestions, but very few real answers.

Until I saw this answer from johnhunsley:
I believe you have created a Security Group without specifying a VPC ID. You have then attempted to create a launch config which launches instances into a subnet within a VPC. Therefore, when it attempts to assign the security group to those instances it fails because it expects the security group ID rather than the name.

So I think the response from AWS is in the running for “Worst Error Message Ever”, but the solution is very simple: don’t make the mistake of omitting your custom VPC ID when creating a new security group.

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      VpcId: !ImportValue t10-vpc-id
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller

References

johnhunsley @ https://github.com/boto/boto/issues/350

 

Posted by on 15-08-2018 in AWS, CloudFormation

 


How to install Azure CLI on MacOS

The Azure CLI 2.0 is Microsoft’s command-line interface for managing Azure resources. First install the CLI through brew:

brew update && brew install azure-cli

Then run the login command, which will launch a browser session for your login credentials:

jvzoggel$ az login
Fail to load or parse file /Users/jvzoggel/.azure/azureProfile.json. It is overridden by default settings.
Fail to load or parse file /Users/jvzoggel/.azure/az.json. It is overridden by default settings.
Fail to load or parse file /Users/jvzoggel/.azure/az.sess. It is overridden by default settings.
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
You have logged in. Now let us find all the subscriptions to which you have access...
{
  "cloudName": "AzureCloud",
  "id": "xxxxxx",
  "isDefault": true,
.....
},

On the first run, the three Azure configuration files in your home/.azure directory (azureProfile.json, az.json and az.sess) are created automatically, which explains the warnings above.

 


 

Posted by on 06-08-2018 in Azure

 


How to connect to the CEPH Object Gateway S3 API with Java

We use a CEPH storage solution and specifically want to use the Ceph Object Gateway with the S3 API through a Java client. The API is based on the AWS S3 standard, but it requires some special tweaking to work. It took me some effort to get a working connection, so sharing it here:

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.11.325</version>
</dependency>

We can use either the new AmazonS3ClientBuilder

package nl.rubix.s3;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.SDKGlobalConfiguration;

public class AmazonS3ClientBuilder
{
  public static void main(String[] args)
  {
    String accessKey = "XXXXX";
    String secretKey = "XXXXX";

    // Our firewall on DEV does some weird stuff so we disable SSL cert check
    System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY,"true");
    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
      System.out.println("Cert checking is disabled");
    }
		
    // S3 Client configuration
    ClientConfiguration config = new ClientConfiguration();
    // Not the standard "AWS3SignerType", but explicitly the v2 signer type
    config.setSignerOverride("S3SignerType");
    config.setProtocol(Protocol.HTTPS);
    config.setProxyHost("proxy.rubix.nl");

    config.setProxyPort(8080);
    // S3 Credentials
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey,secretKey);
    // S3 Endpoint
    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new
      AwsClientBuilder.EndpointConfiguration("objects.dc1.rubix.nl", "");
    AmazonS3 s3 = com.amazonaws.services.s3.AmazonS3ClientBuilder.standard()
      .withClientConfiguration(config)
      .withCredentials(new AWSStaticCredentialsProvider(credentials))
      .withEndpointConfiguration(endpointConfiguration)
      .build();
    
    System.out.println("===========================================");
    System.out.println(" Connection to the Rubix S3 ");
    System.out.println("===========================================\n");
    try { 
       /*
       * List of buckets and objects in our account
       */
       System.out.println("Listing buckets and objects");
       for (Bucket bucket : s3.listBuckets())
       {
         System.out.println(" - " + bucket.getName() +" "
           + "(owner = " + bucket.getOwner()
           + " "
           + "(creationDate = " + bucket.getCreationDate());
         ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
           .withBucketName(bucket.getName()));
         for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) 
         {
           System.out.println(" --- " + objectSummary.getKey() +" "
           + "(size = " + objectSummary.getSize() + ")" +" "
           + "(eTag = " + objectSummary.getETag() + ")");
           System.out.println();
         }
       }
     }
     catch (AmazonServiceException ase)
     {
       System.out.println("Caught an AmazonServiceException, which means your request made it to S3, but was rejected with an error response for some reason.");
       System.out.println("Error Message:    " + ase.getMessage());
       System.out.println("HTTP Status Code: " + ase.getStatusCode());
       System.out.println("AWS Error Code: " + ase.getErrorCode());
       System.out.println("Error Type: " + ase.getErrorType());
       System.out.println("Request ID: " + ase.getRequestId());
     }
     catch (AmazonClientException ace)
     {
       System.out.println("Caught an AmazonClientException, which means the client encountered "
         + "a serious internal problem while trying to communicate with S3, "
         + "such as not being able to access the network.");
       System.out.println("Error Message: " + ace.getMessage());
     }
  }
}

or make it work with the older and deprecated AmazonS3Client:

package nl.rubix.s3;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class BasicAWSCredentials
{
    public static void main(String[] args)
    {
        String accessKey = "XXXXXXX";
        String secretKey = "XXXXXXX";
        System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");

    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
        System.out.println("Cert checking is disabled");
    }
    AWSCredentials credentials = new com.amazonaws.auth.BasicAWSCredentials(accessKey,secretKey);

    ClientConfiguration clientConfig = new ClientConfiguration();
    clientConfig.setSignerOverride("S3SignerType");
    clientConfig.setProxyHost("proxy.rubix.nl");
    clientConfig.setProxyPort(8080);

    AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
    conn.setEndpoint("objects.gn3.rubix.nl");

    for (Bucket bucket : conn.listBuckets())
    {
      System.out.println(" - " + bucket.getName() 
        + " "
        + "(owner = " + bucket.getOwner()
        + " "
        + "(creationDate = " + bucket.getCreationDate());
    }
  }
}

Hope it helps!

 

Posted by on 03-07-2018 in Uncategorized

 


How to dynamically generate XML Request in ReadyAPI / SOAPui ?

We have a functional test where we use a SOAP request to start the processing of a couple of files based on a URL in the request. For a negative test (all corrupt files) we got a batch of 500 files. So to prevent a lot of copy/paste work in my SOAP request I wanted to generate the request dynamically. I had done this before, but couldn’t find/remember my own example, so when I eventually got it working I decided to share and store it here.

First some housekeeping

I always do some housekeeping in the init Groovy step of my tests to generate a unique id (for correlation, etc.) and more.

//Generate unique id - sequence
def v_sequence = new Date().time.toString()
 
// testRunner.testCase.setPropertyValue("sequence", v_sequence)
testRunner.testCase.testSuite.project.setPropertyValue("v_sequence", v_sequence)
 
// empty some variables
testRunner.testCase.testSuite.project.setPropertyValue("XML", "")
testRunner.testCase.testSuite.project.setPropertyValue("teller", "1")

Then the basic dataloop

Using an external datasource is the way to go if we want to load data. In this example I use only one field (url) in a text file with 500 lines.

The DataSource Loop function makes it possible to go back to the Groovy script “Generate XML Request” to build the request line by line.

The Groovy magic

Here is the Groovy script that holds the logic. For each url in the data loop we create a Document XML complex element, which we append to the list kept in the XML project property.

import groovy.xml.StreamingMarkupBuilder
import groovy.xml.XmlUtil
import groovy.util.XmlSlurper
 
def sequence = context.expand( '${#Project#v_sequence}' )
def url = context.expand( '${DataSource#url}' )
def datumtijd = new Date().format("yyyy-MM-dd'T'HH:mm:ss")
def teller = context.expand( '${#Project#teller}' )
 
log.info('sequence = ' + sequence)
log.info('url = ' + url)
 
def filename = url.split('/').last()
log.info('filename = ' + filename)          
           
// Define all your namespaces here
// def nameSpacesMap = [soapenv: 'http://schemas.xmlsoap.org/soap/envelope/',ns: 'nl.rubix.ohmy',]
def builder = new StreamingMarkupBuilder()
builder.encoding ='utf-8'
def xmlDocument = builder.bind
{
//          namespaces << nameSpacesMap
            // use it like ns.element            
            Document
            {
                        Id('DOC.' + sequence + '.' + teller);
                        Name(filename);                        
                        Date(datumtijd);
                        Url(url);
            }
}
 
// Serialize the XML and strip the irritating XML version declaration (first 39 chars), which probably can be done much nicer
def v_document = XmlUtil.serialize(xmlDocument);
v_document = v_document.substring(39)
log.info("XML = " + v_document);
def origineelXML = context.expand( '${#Project#XML}' )
origineelXML = origineelXML + v_document
testRunner.testCase.testSuite.project.setPropertyValue("XML", origineelXML)
 
// increase counter
teller = teller.toInteger() + 1
testRunner.testCase.testSuite.project.setPropertyValue("teller", teller.toString())
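Per data-source row, the script above appends a fragment like the following to the XML project property (the values here are illustrative, not from a real run):

```xml
<Document>
  <Id>DOC.1521620000000.1</Id>
  <Name>corrupt-file-001.pdf</Name>
  <Date>2018-03-21T10:15:30</Date>
  <Url>http://example.local/files/corrupt-file-001.pdf</Url>
</Document>
```

The sequence in the Id stays the same across the whole run, while the teller counter distinguishes the individual documents.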

The SOAP Request

Now we have a variable on project level which stores the complete list of documents, which we can just use as normal in the request like this:

<soap:Body>
   <ns1:Request>
      <Documents>${#Project#XML}</Documents>
   </ns1:Request>
</soap:Body>
 

Posted by on 21-03-2018 in Uncategorized

 
