
Tag Archives: AWS

How to use multiple resources definitions in the Serverless Framework

In earlier Serverless Framework projects we used a single resource definition to include AWS CloudFormation scripts in our build/deploy, where the resource file contained component definitions like DDBTable: at the root.

resources:
  Resources: ${file(resources.yml)}

However, a new project contains a lot of AWS configuration, so I decided to split it up. You can do that with the following syntax, but make sure each file starts with the Resources: key.

resources:
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}
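
For example, a minimal resources/resources-db.yml could look like this (the DynamoDB table definition is just an illustration, in the spirit of the DDBTable example mentioned above):

Resources:
  DDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1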

You can even mix and match inline and external resources:

resources:
  - Resources:
      ApiGatewayRestApi:
        Type: AWS::ApiGateway::RestApi
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}

Posted on 07-09-2018 in Serverless

Example AWS CloudFormation template for network load balancer


We needed a public network load balancer with SSL (through AWS Certificate Manager), and it took me a few retries to get it right since most examples are based on the classic or application load balancer, so sharing it here:

  Terra10NetworkLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: t10-networkloadbalancer
      Scheme: internet-facing
      Subnets:
        - !Ref Terra10Subnet
      Type: network
      Tags:
        - Key: Name
          Value: t10-networklb
  Terra10NetworkLoadBalancerTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: t10-networklb-target
      Port: 443
      Protocol: TCP
      VpcId: !ImportValue t10-vpc-id
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 60
      Targets:
      - Id: !Ref Terra10EC2Instance1
        Port: 443
      - Id: !Ref Terra10EC2Instance2
        Port: 443  
      Tags:
        - Key: Name
          Value: t10-networklb-target
  Terra10NetworkLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref Terra10NetworkLoadBalancerTargetGroup
      LoadBalancerArn: !Ref Terra10NetworkLoadBalancer
      Port: 443
      Protocol: TCP
  Terra10NetworkLoadBalancerListenerCert:
    Type: AWS::ElasticLoadBalancingV2::ListenerCertificate
    Properties:
      Certificates:
        - CertificateArn: arn:aws:acm:eu-west-1:xxxaccountxxx:certificate/123456....
      ListenerArn: !Ref Terra10NetworkLoadBalancerListener
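
Optionally you can export the load balancer DNS name for other stacks, following the same export pattern we use elsewhere (the output and export names below are my own choice):

Outputs:
  NetworkLoadBalancerDNS:
    Description: DNS name of the t10 network load balancer
    Value: !GetAtt Terra10NetworkLoadBalancer.DNSName
    Export:
      Name: t10-networklb-dns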

 

Posted on 29-08-2018 in AWS, CloudFormation

How to share values between separated AWS CloudFormation stacks

In AWS CloudFormation templates you often need to reference an earlier created component, for instance the unique ID of a VPC, subnet, security group or instance. You have two choices: continue with separate stacks or combine them into a nested stack. In our case we wanted to keep separate stacks, so we needed a way to export/import settings from an earlier network-related CloudFormation stack holding our VPC and subnet identifiers, which the new stack uses. To share information between stacks we can export output values from the first stack and import these values in new stacks. Other stacks in the same AWS account and region can import the exported values.

Example of a VPC ID which we export in our network stack (named t10-cf-vpc):

######################
## OUTPUT
######################
Outputs:
    VPC:
        Description: A reference to the created VPC
        Value: !Ref VPC
        Export:
          Name: t10-vpc-id

You can easily check whether the export succeeded in the AWS CloudFormation console, or use the AWS CLI to get a list:

 
jvzoggel$ aws cloudformation list-exports
{
    "Exports": [
        {
            "ExportingStackId": "arn:aws:cloudformation:xxxxx:xxxxxxx:stack/t10-cf-vpc/xxxxxxxx",
            "Name": "t10-vpc-id",
            "Value": "vpc-xxxxxxx"
        },
        .....

Importing the values in new stacks

The new CloudFormation stack can make use of the exported value simply by using the !ImportValue function:

Parameters:
  NetworkStackNameParameter:
    Description: Reference to the vpc-10 stack
    Type: String
    Default: 't10-cf-vpc'

Resources:
  MyTerra10SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: MyTerra10SecurityGroup
      GroupDescription: MyTerra10SecurityGroup
      VpcId: !ImportValue t10-vpc-id

Note: After another stack imports an output value, you can’t delete the stack that is exporting the output value or modify the exported output value.
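
To see which stacks would block such a delete, you can list the stacks importing a given export with the AWS CLI (the stack name in the output below is a placeholder):

jvzoggel$ aws cloudformation list-imports --export-name t10-vpc-id
{
    "Imports": [
        "my-importing-stack"
    ]
}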

Posted on 28-08-2018 in AWS

Using CloudFormation to bootstrap EC2 instances with scripts from CodeCommit

While spinning up EC2 instances you can bootstrap them with packages, files, etc. in different ways. For our stack we wanted to pull scripts from AWS CodeCommit to make life easier.

The (bash) scripts are stored in our CodeCommit repository, so first we need to make sure the EC2 instances are allowed to access the repository while spinning up. We created an IAM policy with sufficient rights and attached the policy to an IAM role which we can attach to our EC2 instances.

AWS IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:GitPull"
            ],
            "Resource": "arn:aws:codecommit:*:*:terra10-scripts"
        },
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:Get*",
                "codecommit:BatchGetRepositories",
                "codecommit:List*"
            ],
            "Resource": "*"
        }
    ]
}
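
For completeness, a minimal sketch of how that role could be defined in CloudFormation, together with the instance profile that EC2 actually references (the resource wiring below is an assumption; the post only shows the policy itself). Note that IamInstanceProfile on the instance refers to the instance profile name, so it is named t10-ec2-role here to match:

  T10EC2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: t10-ec2-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: t10-codecommit-gitpull
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: codecommit:GitPull
                Resource: arn:aws:codecommit:*:*:terra10-scripts
  T10EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: t10-ec2-role
      Roles:
        - !Ref T10EC2Role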

We make sure the EC2 instances use the new IAM role by setting IamInstanceProfile to our example instance profile t10-ec2-role in the CloudFormation template. Further on, by using the UserData segment we can execute scripts during bootstrap of the server. Installing the AWS CLI is required for the credential helper:

T10Controller1:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref HostAMI
    InstanceType: t2.micro
    IamInstanceProfile: t10-ec2-role
    PrivateIpAddress: 10.0.11.11   
    Tags:
      - Key: Name
        Value: t10-k8s-controller1
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        apt-get update
        apt-get -y install awscli
        cd /tmp
        echo "######## git pull AWS CodeCommit files"
        sudo git config --global credential.helper '!aws codecommit credential-helper $@'
        sudo git config --global credential.UseHttpPath true
        sudo git clone https://git-codecommit.xxxxxx.amazonaws.com/v1/repos/terra10-scripts /tmp/terra10-scripts

 

 
Posted on 26-08-2018 in AWS

AWS CloudFormation error “The parameter groupName cannot be used with the parameter subnet”

When trying to start some EC2 instances through CloudFormation I kept getting the error “The parameter groupName cannot be used with the parameter subnet”.

The (YAML) AWS CloudFormation looks something like this:

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-20ee5e5d
      InstanceType: t2.micro
      KeyName: t10_kubernetes
      PrivateIpAddress: 10.0.11.11
      SubnetId:
        Fn::ImportValue:
          !Sub "t10-vpc-k8s-subnet1-id"
      SecurityGroupIds:
        - !Ref KubernetesControllerSecurityGroup
      Tags:
        - Key: Name
          Value: t10-k8s-controller1

The error led to a Google search with many hits, many questions, many suggestions, but very few real answers.

Until I saw this answer from johnhunsley:
I believe you have created a Security Group without specifying a VPC ID. You have then attempted to create a launch config which launches instances into a subnet within a VPC. Therefore, when It attempts to assign the security group to those instances it fails because it expects the security group ID rather than the name.

So I think the response from AWS is in the running for “Worst Error Message Ever”, but the solution is very simple: don’t make the mistake of omitting your custom VPC ID when creating a new security group.

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      VpcId: !ImportValue t10-vpc-id
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller

References

johnhunsley @ https://github.com/boto/boto/issues/350

 
Posted on 15-08-2018 in AWS, CloudFormation

How to connect to the CEPH Object Gateway S3 API with Java

We use a CEPH storage solution and specifically want to use the Ceph Object Gateway with the S3 API through a Java client. The API is based on the AWS S3 standard but requires some special tweaking to work; in our setup the gateway expects the older V2 signatures, hence the signer override below. It took me some effort to get a working connection, so sharing it here:

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.11.325</version>
</dependency>

We can use either the new AmazonS3ClientBuilder:

package nl.rubix.s3;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.SDKGlobalConfiguration;

public class AmazonS3ClientBuilder
{
  public static void main(String[] args)
  {
    String accessKey = "XXXXX";
    String secretKey = "XXXXX";

    // Our firewall on DEV does some weird stuff so we disable SSL cert check
    System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY,"true");
    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
      System.out.println("Cert checking is disabled");
    }
		
    // S3 Client configuration
    ClientConfiguration config = new ClientConfiguration();
    // Not the standard "AWS3SignerType", but explicitly the V2 signer ("S3SignerType")
    config.setSignerOverride("S3SignerType");
    config.setProtocol(Protocol.HTTPS);
    config.setProxyHost("proxy.rubix.nl");

    config.setProxyPort(8080);
    // S3 Credentials
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey,secretKey);
    // S3 Endpoint
    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new
      AwsClientBuilder.EndpointConfiguration("objects.dc1.rubix.nl", "");
    AmazonS3 s3 = com.amazonaws.services.s3.AmazonS3ClientBuilder.standard()
      .withClientConfiguration(config)
      .withCredentials(new AWSStaticCredentialsProvider(credentials))
      .withEndpointConfiguration(endpointConfiguration)
      .build();
    
    System.out.println("===========================================");
    System.out.println(" Connection to the Rubix S3 ");
    System.out.println("===========================================\n");
    try { 
       /*
       * List of buckets and objects in our account
       */
       System.out.println("Listing buckets and objects");
       for (Bucket bucket : s3.listBuckets())
       {
         System.out.println(" - " + bucket.getName() +" "
           + "(owner = " + bucket.getOwner()
           + " "
           + "(creationDate = " + bucket.getCreationDate());
         ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
           .withBucketName(bucket.getName()));
         for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) 
         {
           System.out.println(" --- " + objectSummary.getKey() +" "
           + "(size = " + objectSummary.getSize() + ")" +" "
           + "(eTag = " + objectSummary.getETag() + ")");
           System.out.println();
         }
       }
     }
     catch (AmazonServiceException ase)
     {
       System.out.println("Caught an AmazonServiceException, which means your request made it to S3, but was rejected with an error response for some reason.");
       System.out.println("Error Message:    " + ase.getMessage());
       System.out.println("HTTP Status Code: " + ase.getStatusCode());
       System.out.println("AWS Error Code: " + ase.getErrorCode());
       System.out.println("Error Type: " + ase.getErrorType());
       System.out.println("Request ID: " + ase.getRequestId());
     }
     catch (AmazonClientException ace)
     {
       System.out.println("Caught an AmazonClientException, which means the client encountered "
       + "a serious internal problem while trying to communicate with S3,
       + "such as not being able to access the network.");
       System.out.println("Error Message: " + ace.getMessage());
     }

or make it work with the older and deprecated AmazonS3Client:

package nl.rubix.s3;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class BasicAWSCredentials
{
    public static void main(String[] args)
    {
        String accessKey = "XXXXXXX";
        String secretKey = "XXXXXXX";

        // Our firewall on DEV does some weird stuff so we disable SSL cert check
        System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");
        if (SDKGlobalConfiguration.isCertCheckingDisabled())
        {
            System.out.println("Cert checking is disabled");
        }

        AWSCredentials credentials = new com.amazonaws.auth.BasicAWSCredentials(accessKey, secretKey);

        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setSignerOverride("S3SignerType");
        clientConfig.setProxyHost("proxy.rubix.nl");
        clientConfig.setProxyPort(8080);

        AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
        conn.setEndpoint("objects.gn3.rubix.nl");

        for (Bucket bucket : conn.listBuckets())
        {
            System.out.println(" - " + bucket.getName()
                + " "
                + "(owner = " + bucket.getOwner()
                + " "
                + "(creationDate = " + bucket.getCreationDate());
        }
    }
}

Hope it helps!

 
Posted on 03-07-2018 in Uncategorized

How to push to AWS CodeCommit from Mac OS X

When trying to commit to an AWS CodeCommit Git repository I received the following error:

jvzoggel$ git push
 fatal: unable to access 'https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/myProject/': The requested URL returned error: 403

The Amazon website states:

If you are using macOS, use HTTPS to connect to an AWS CodeCommit repository. After you connect to an AWS CodeCommit repository with HTTPS for the first time, subsequent access will fail after about fifteen minutes. The default Git version on macOS uses the Keychain Access utility to store credentials. For security measures, the password generated for access to your AWS CodeCommit repository is temporary, so the credentials stored in the keychain will stop working after about 15 minutes. To prevent these expired credentials from being used, you must either:

  • Install a version of Git that does not use the keychain by default.
  • Configure the Keychain Access utility to not provide credentials for AWS CodeCommit repositories.
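
For reference, the first option could look something like this (a sketch, assuming Homebrew; the credential helper configuration is the one AWS documents for CodeCommit over HTTPS):

# Install a Git that does not default to the osxkeychain helper
brew install git

# Let Git obtain temporary CodeCommit credentials through the AWS CLI
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true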

I used the second option to fix it, so:

  1. Open the Keychain Access utility (use Finder to locate it)
  2. Search for git-codecommit
  3. Select the row, right-click and then choose Get Info.
  4. Choose the Access Control tab.
  5. In Confirm before allowing access, choose git-credential-osxkeychain, and then choose the minus sign to remove it from the list.

[Screenshot: the Keychain Access entry for git-codecommit]

After removing git-credential-osxkeychain from the list, you will see a pop-up dialog whenever you run a Git command. Choose Deny to continue. The pop-up is really annoying so I will probably switch over to SSH soon.

Posted on 18-02-2017 in Uncategorized