
Tag Archives: Cloud

How to configure AWS Lambda functions to use an outbound fixed IP

For our Serverless project running on AWS infrastructure we needed an outbound API call from a Lambda function to a SaaS platform that demands a whitelist of source IP addresses. That is pretty hard, since by default outbound Lambda traffic can originate from anywhere in AWS's large IP range. Luckily there is a trick: place your Lambda function in a VPC, which can be configured to use an elastic IP for outbound communication.

High Level Design

Steps to configure the VPC / Network

Step 1 – create/use a VPC

Our AWS network configuration always starts with an AWS VPC (Virtual Private Cloud). You can use an existing VPC and configure your subnets there, or create a new one. Network configuration on AWS can look simple with the GUI wizard, but the setup of your VPC has a major impact on all your resources, so for real production workloads it deserves some thought. In the example I use CIDR block 10.0.0.0/16, which gives me (way too many) addresses.

Step 2 – create one public and 1-n private subnets

We need one public subnet to attach to a NAT Gateway and route our traffic to the Internet. The example works with a single private subnet hosting your Lambda function, but for availability purposes you will want multiple private subnets in different availability zones for your Lambda to run in. In the example I use CIDR block 10.0.11.0/24 for the public subnet and 10.0.21.0/24, 10.0.22.0/24 and 10.0.23.0/24 for the three private subnets, one per availability zone (the template below uses eu-west-2a/b/c).

Step 3 – Configure the Internet Gateway and the public subnet configuration

We need an Internet Gateway attached to our VPC, and a public route table associated with the public subnet that routes all (0.0.0.0/0) outbound traffic to the Internet Gateway.

Step 4 – Configure the NAT Gateway and the private subnet configuration

We need a NAT Gateway which uses an elastic IP address. By adding a private route table and associating it with our private subnet(s), we make sure that all functions in the VPC use our elastic IP for outbound communication. At least, as long as the private route table contains a (0.0.0.0/0) route targeting the NAT Gateway.

Configure your Lambda function

Make sure your Lambda function(s) use the configured VPC by selecting it in the VPC pull-down and then selecting all the private subnet(s) and the Lambda security group.
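
If you define the function in CloudFormation as well (instead of clicking through the console), the equivalent is a VpcConfig block that points at the private subnets and the Lambda security group from the template below. A minimal sketch; the function name, runtime, handler and inline code are purely illustrative:

  OutboundLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: t10-fn-outbound-dev          # illustrative name
      Runtime: nodejs8.10                        # pick whatever runtime your project uses
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn      # execution role, sketched in the note below
      Code:
        ZipFile: |
          exports.handler = async () => 'hello from a fixed IP';
      VpcConfig:
        SecurityGroupIds:
        - !Ref LambdaSecurityGroup               # security group from the template below
        SubnetIds:                               # only the private subnets, never the public one
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
        - !Ref PrivateSubnet3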

Notes:

  • Make sure your Lambda's execution role includes the AWS managed policy AWSLambdaVPCAccessExecutionRole, which grants the network-interface permissions needed to run inside a VPC (see the role sketch below)
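
A minimal sketch of such an execution role in CloudFormation; the logical name LambdaExecutionRole is illustrative and matches the function sketch above:

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
      # grants CloudWatch Logs plus the EC2 ENI permissions needed for VPC networking
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole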

CloudFormation source code

The template can also be found in my aws-cloudformation Git repository:

Resources:
  ######################
  ## VPC basics
  ######################
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
      - Key: Name
        Value: t10-fn-vpc-dev
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
      - Key: Name
        Value: t10-fn-internetgateway-dev
  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId:
        Ref: InternetGateway
      VpcId:
        Ref: VPC

  ######################
  ## Subnet Public
  ######################
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2a
      CidrBlock: 10.0.11.0/24
      MapPublicIpOnLaunch: true
      Tags:
      - Key: Name
        Value: t10-fn-public-subnet-az1-dev
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-public-rt-dev
  PublicRouteTableRoute1:
    Type: AWS::EC2::Route
    # a route to an Internet Gateway needs the gateway-to-VPC attachment to exist first
    DependsOn: InternetGatewayAttachment
    Properties:
      RouteTableId:
        Ref: PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId:
        Ref: InternetGateway
  PublicRouteTableAssociation1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PublicSubnet1
      RouteTableId:
        Ref: PublicRouteTable
  PublicElasticIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId:
        Fn::GetAtt: [PublicElasticIP, AllocationId]
      SubnetId:
        Ref: PublicSubnet1
      Tags:
      - Key: Name
        Value: t10-fn-natgateway-dev

  ######################
  ## Subnet Private
  ######################
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2a
      CidrBlock: 10.0.21.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az1-dev
  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2b
      CidrBlock: 10.0.22.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az2-dev
  PrivateSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2c
      CidrBlock: 10.0.23.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az3-dev
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-private-rt-dev
  PrivateRouteTableRoute1:
    Type: AWS::EC2::Route
    DependsOn: NatGateway
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NatGateway
  PrivateRouteTableAssociation1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet1
      RouteTableId:
        Ref: PrivateRouteTable
  PrivateRouteTableAssociation2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet2
      RouteTableId:
        Ref: PrivateRouteTable
  PrivateRouteTableAssociation3:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet3
      RouteTableId:
        Ref: PrivateRouteTable

  ######################
  ## Security NACL
  ######################
  NetworkAcl:
    Type: AWS::EC2::NetworkAcl
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-nacl-dev
  NetworkAclEntryfn100:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      CidrBlock: 0.0.0.0/0
      Egress: 'false'
      NetworkAclId:
        Ref: NetworkAcl
      Protocol: "-1"
      RuleAction: allow
      RuleNumber: "100"
  NetworkAclEntryOutbound100:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      CidrBlock: 0.0.0.0/0
      Egress: 'true'
      NetworkAclId:
        Ref: NetworkAcl
      Protocol: "-1"
      RuleAction: allow
      RuleNumber: "100"
  PrivateSubnetNetworkAclAssociation1:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet1
      NetworkAclId:
        Ref: NetworkAcl
  PrivateSubnetNetworkAclAssociation2:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet2
      NetworkAclId:
        Ref: NetworkAcl
  PrivateSubnetNetworkAclAssociation3:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet3
      NetworkAclId:
        Ref: NetworkAcl

  ######################
  ## Security Group(s)
  ######################
  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-fn-lambda-sg
      GroupDescription: t10-fn-lambda-sg
      SecurityGroupIngress:
      - IpProtocol: -1
        CidrIp: 0.0.0.0/0
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-lambda-sg-dev

  ######################
  ## OUTPUT
  ######################

#Outputs:
#  VPC:
#    Description: A reference to the created VPC
#    Value:
#      Ref: VPC
#    Export:
#      Name: t10-fn-vpc-id

Posted on 01-10-2018 in AWS

How to use multiple resources definitions in the Serverless Framework

In other Serverless Framework projects we use a single resource definition to include AWS CloudFormation scripts in our build/deploy, where the resource file contains component definitions like DDBTable: at the root.

resources:
  Resources: ${file(resources.yml)}

However, a new project contains a lot of AWS configuration, so I decided to split it up. You can do that using the following syntax; just make sure each file starts with the Resources: key (an example follows the snippet below).

resources:
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}
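
Each referenced file is then a plain CloudFormation fragment that starts at the Resources: key, for example (the DynamoDB table is just an illustration):

# resources/resources-db.yml
Resources:
  DDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
      KeySchema:
      - AttributeName: id
        KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1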

You can even mix and match inline and external resources:

resources:
  - Resources:
      ApiGatewayRestApi:
        Type: AWS::ApiGateway::RestApi
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}

Posted on 07-09-2018 in Serverless

How to share values between separated AWS CloudFormation stacks

In AWS CloudFormation templates you often need to reference an earlier created component, for instance the unique ID of a VPC, subnet, security group or instance. You have two choices: continue with separated stacks or combine them into a nested stack. In our case we wanted to keep separated stacks, so we needed a way to export/import settings from an earlier network-related CloudFormation stack holding our VPC and subnet identifiers, which the new stack uses. To share information between stacks we can export output values from the first stack and import them in new stacks. Other stacks in the same AWS account and region can import the exported values.

Example of a VPC ID which we export in our network stack (named t10-cf-vpc):

######################
## OUTPUT
######################
Outputs:
    VPC:
        Description: A reference to the created VPC
        Value: !Ref VPC
        Export:
          Name: t10-vpc-id

You can easily check whether the export succeeded in the AWS CloudFormation console, or by using the AWS CLI to get a list:

 
jvzoggel$ aws cloudformation list-exports
{
    "Exports": [
        {
            "ExportingStackId": "arn:aws:cloudformation:xxxxx:xxxxxxx:stack/t10-cf-vpc/xxxxxxxx",
            "Name": "t10-vpc-id",
            "Value": "vpc-xxxxxxx"
        },
        .....

Importing the values in new stacks

The new CloudFormation stack can make use of the exported value simply by using the !ImportValue function:

Parameters:
  NetworkStackNameParameter:
    Description: Reference to the vpc-10 stack
    Type: String
    Default: 't10-cf-vpc'

Resources:
  MyTerra10SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: MyTerra10SecurityGroup
      GroupDescription: MyTerra10SecurityGroup
      VpcId: !ImportValue t10-vpc-id

Note: After another stack imports an output value, you can’t delete the stack that is exporting the output value or modify the exported output value.
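
Side note: the NetworkStackNameParameter above only becomes useful when the export name is derived from it. Fn::ImportValue can be combined with Fn::Sub for that; a sketch, assuming the network stack exported its VPC ID under the name '<stackname>-vpc-id' (which differs from the hard-coded t10-vpc-id used above):

  MyTerra10SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: MyTerra10SecurityGroup
      GroupDescription: MyTerra10SecurityGroup
      VpcId:
        # the short form !ImportValue cannot contain a nested !Sub, so use the full function name
        Fn::ImportValue: !Sub '${NetworkStackNameParameter}-vpc-id'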

Posted on 28-08-2018 in AWS

Using CloudFormation to bootstrap EC2 instances with scripts from CodeCommit

While spinning up EC2 instances you can bootstrap them with packages, files, etc. in different ways. For our stack we wanted to pull scripts from an AWS CodeCommit repository to make life easier.

The (bash) scripts are stored in our CodeCommit repository, so first we need to make sure the EC2 instances are allowed to access the repository while spinning up. So we created an IAM policy with sufficient rights and attached it to an IAM role which we can attach to our EC2 instances (a sketch of the role and instance profile follows the policy below).

AWS IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:GitPull"
            ],
            "Resource": "arn:aws:codecommit:*:*:terra10-scripts"
        },
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:Get*",
                "codecommit:BatchGetRepositories",
                "codecommit:List*"
            ],
            "Resource": "*"
        }
    ]
}
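
A CloudFormation sketch of the role and instance profile; the logical names are illustrative, and T10CodeCommitPolicy is assumed to be the policy above created as an AWS::IAM::ManagedPolicy:

  T10Ec2Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: t10-ec2-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
      - !Ref T10CodeCommitPolicy        # Ref on an AWS::IAM::ManagedPolicy returns its ARN
  T10Ec2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: t10-ec2-role # this is the name used by IamInstanceProfile below
      Roles:
      - !Ref T10Ec2Role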

We make sure the EC2 instances use the new IAM role by defining IamInstanceProfile with our example instance profile t10-ec2-role in the CloudFormation template. Further on, by using the UserData section we can execute scripts during bootstrap of the server. Installing the AWS CLI is required for the CodeCommit credential helper.

T10Controller1:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref HostAMI
    InstanceType: t2.micro
    IamInstanceProfile: t10-ec2-role
    PrivateIpAddress: 10.0.11.11   
    Tags:
      - Key: Name
        Value: t10-k8s-controller1
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        apt-get update
        apt-get -y install awscli
        cd /tmp
        echo "######## git pull AWS CodeCommit files"
        sudo git config --global credential.helper '!aws codecommit credential-helper $@'
        sudo git config --global credential.UseHttpPath true
        sudo git clone https://git-codecommit.xxxxxx.amazonaws.com/v1/repos/terra10-scripts /tmp/terra10-scripts

 

 
Posted on 26-08-2018 in AWS

AWS CloudFormation error “The parameter groupName cannot be used with the parameter subnet”

When trying to start an EC2 instance through CloudFormation I kept getting the error “The parameter groupName cannot be used with the parameter subnet”.

The (YAML) AWS CloudFormation looks something like this:

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-20ee5e5d
      InstanceType: t2.micro
      KeyName: t10_kubernetes
      PrivateIpAddress: 10.0.11.11
      SubnetId:
        Fn::ImportValue:
          !Sub "t10-vpc-k8s-subnet1-id"
      SecurityGroupIds:
        - !Ref KubernetesControllerSecurityGroup
      Tags:
        - Key: Name
          Value: t10-k8s-controller1

The error led to a Google search with many hits, many questions, many suggestions, but very few real answers.

Until I saw this answer from johnhunsley:
I believe you have created a Security Group without specifying a VPC ID. You have then attempted to create a launch config which launches instances into a subnet within a VPC. Therefore, when It attempts to assign the security group to those instances it fails because it expects the security group ID rather than the name.

I think the response from AWS is in the running for “Worst Error Message Ever”, but the solution is very simple: don't make the mistake of forgetting to specify your custom VPC ID when creating a new security group.

Resources:
  KubernetesControllerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-sg-k8s-controller
      GroupDescription: t10-sg-k8s-controller
      ......
      VpcId: !ImportValue t10-vpc-id
      Tags:
        - Key: Name
          Value: t10-sg-k8s-controller

References

johnhunsley @ https://github.com/boto/boto/issues/350

 
Posted on 15-08-2018 in AWS, CloudFormation

How to install Azure CLI on MacOS

The Azure CLI 2.0 is Microsoft's command line interface for managing Azure resources. First install the CLI through Homebrew:

brew update && brew install azure-cli

Then run the login command, which will launch a browser session for your login credentials:

jvzoggel$ az login
Fail to load or parse file /Users/jvzoggel/.azure/azureProfile.json. It is overridden by default settings.
Fail to load or parse file /Users/jvzoggel/.azure/az.json. It is overridden by default settings.
Fail to load or parse file /Users/jvzoggel/.azure/az.sess. It is overridden by default settings.
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
You have logged in. Now let us find all the subscriptions to which you have access...
{
  "cloudName": "AzureCloud",
  "id": "xxxxxx",
  "isDefault": true,
.....
},

The first time you run the CLI, the three Azure configuration files in your ~/.azure directory are created automatically.

 

Posted on 06-08-2018 in Azure

How to connect to the CEPH Object Gateway S3 API with Java

We use a CEPH storage solution and specifically want to use the Ceph Object Gateway with its S3-compatible API through a Java client. The API is based on the AWS S3 standard but requires some special tweaking to work. It took me some effort to get a working connection, so I'm sharing it here:

<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <version>1.11.325</version>
</dependency>

We can use either the new AmazonS3ClientBuilder

package nl.rubix.s3;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.SDKGlobalConfiguration;

public class AmazonS3ClientBuilder
{
  public static void main(String[] args)
  {
    String accessKey = "XXXXX";
    String secretKey = "XXXXX";

    // Our firewall on DEV does some weird stuff so we disable SSL cert check
    System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY,"true");
    if (SDKGlobalConfiguration.isCertCheckingDisabled())
    {
      System.out.println("Cert checking is disabled");
    }
		
    // S3 Client configuration
    ClientConfiguration config = new ClientConfiguration();
    // Not the default "AWS3SignerType", but explicitly the v2 signer ("S3SignerType")
    config.setSignerOverride("S3SignerType");
    config.setProtocol(Protocol.HTTPS);
    config.setProxyHost("proxy.rubix.nl");

    config.setProxyPort(8080);
    // S3 Credentials
    BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey,secretKey);
    // S3 Endpoint
    AwsClientBuilder.EndpointConfiguration endpointConfiguration = new
      AwsClientBuilder.EndpointConfiguration("objects.dc1.rubix.nl", "");
    AmazonS3 s3 = com.amazonaws.services.s3.AmazonS3ClientBuilder.standard()
      .withClientConfiguration(config)
      .withCredentials(new AWSStaticCredentialsProvider(credentials))
      .withEndpointConfiguration(endpointConfiguration)
      .build();
    
    System.out.println("===========================================");
    System.out.println(" Connection to the Rubix S3 ");
    System.out.println("===========================================\n");
    try { 
       /*
       * List of buckets and objects in our account
       */
       System.out.println("Listing buckets and objects");
       for (Bucket bucket : s3.listBuckets())
       {
         System.out.println(" - " + bucket.getName() +" "
           + "(owner = " + bucket.getOwner()
           + " "
           + "(creationDate = " + bucket.getCreationDate());
         ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
           .withBucketName(bucket.getName()));
         for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) 
         {
           System.out.println(" --- " + objectSummary.getKey() +" "
           + "(size = " + objectSummary.getSize() + ")" +" "
           + "(eTag = " + objectSummary.getETag() + ")");
           System.out.println();
         }
       }
     }
     catch (AmazonServiceException ase)
     {
       System.out.println("Caught an AmazonServiceException, which means your request made it to S3, but was rejected with an error response for some reason.");
       System.out.println("Error Message:    " + ase.getMessage());
       System.out.println("HTTP Status Code: " + ase.getStatusCode());
       System.out.println("AWS Error Code: " + ase.getErrorCode());
       System.out.println("Error Type: " + ase.getErrorType());
       System.out.println("Request ID: " + ase.getRequestId());
     }
     catch (AmazonClientException ace)
     {
       System.out.println("Caught an AmazonClientException, which means the client encountered "
         + "a serious internal problem while trying to communicate with S3, "
         + "such as not being able to access the network.");
       System.out.println("Error Message: " + ace.getMessage());
     }
  }
}

or make it work with the older and deprecated AmazonS3Client:

package nl.rubix.s3;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class BasicAWSCredentials
{
    public static void main(String[] args)
    {
        String accessKey = "XXXXXXX";
        String secretKey = "XXXXXXX";
        System.setProperty(SDKGlobalConfiguration.DISABLE_CERT_CHECKING_SYSTEM_PROPERTY, "true");

        if (SDKGlobalConfiguration.isCertCheckingDisabled())
        {
            System.out.println("Cert checking is disabled");
        }
        AWSCredentials credentials = new com.amazonaws.auth.BasicAWSCredentials(accessKey, secretKey);

        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setSignerOverride("S3SignerType");
        clientConfig.setProxyHost("proxy.rubix.nl");
        clientConfig.setProxyPort(8080);

        AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
        conn.setEndpoint("objects.gn3.rubix.nl");

        for (Bucket bucket : conn.listBuckets())
        {
            System.out.println(" - " + bucket.getName()
                + " "
                + "(owner = " + bucket.getOwner()
                + " "
                + "(creationDate = " + bucket.getCreationDate());
        }
    }
}

Hope it helps!

 
Posted on 03-07-2018 in Uncategorized