
Tag Archives: AWS

How to use AWS Simple Email Service (SES) from TypeScript on NodeJS example

In our application flows we use AWS Simple Email Service (SES) to send emails to our users. Since the documentation and examples of AWS SES are not that clear, it can take some trial and error to figure out which of the parameters are mandatory and, more importantly, which are obsolete. The official AWS examples crash if you actually pass those parameters as empty strings or null, as suggested there.

The TypeScript code below initiates an AWS SES connection and uses an SES email template generated earlier through AWS CloudFormation (see the example further down). I found it quite surprising that you can actually refer to the template by name and not only by ARN.

import * as AWS from 'aws-sdk';
import * as https from 'https';

const ses = new AWS.SES({
    httpOptions: {
        agent: new https.Agent({
            keepAlive: true
        })
    }
});

/**
 * Send email through AWS SES Templates
 */
export async function sendMail(email: string, name: string): Promise<string> {

    try {
        // Create SES sendTemplatedEmail templateData content
        const templatedata = {
            parameter_name: name
        };
        // console.debug(`sendMail templatedata: ${JSON.stringify(templatedata)}`);

        // Create SES sendTemplatedEmail full message
        const params = {
            Destination: {
                ToAddresses: [ email ]
            },
            Source: 'noreply@terra10.io',
            Template: 'myFanceTerra10EmailTemplate',
            TemplateData: JSON.stringify(templatedata)
        };
        // console.debug(`sendMail param: ${JSON.stringify(params)}`);

        const sesResponse = await ses.sendTemplatedEmail(params).promise();
        console.log(`sendMail requestId: ${sesResponse.$response.requestId} and messageId: ${sesResponse.MessageId}`);
        return 'OK';
    } catch (e) {
        console.error(`sendMail unexpected: ${e.message}`);
        return 'something fancy error handling';
    }
}
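
Calling it from elsewhere in the code (for example a Lambda handler) is then straightforward; a quick sketch with made-up values:

// Hypothetical usage inside another async function
const result = await sendMail('jane.doe@example.com', 'Jane');
console.log(`sendMail result: ${result}`);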

Here is an example AWS CloudFormation resource for the AWS SES email template. You can use the always handy Fn::Sub function to prevent complex character escaping or unreadable one-line HTML. I love it for EC2 UserData and for stuff like this:

Resources:
  SesTemplateTerra10:
    Type: AWS::SES::Template
    Properties:
      Template:
        TemplateName: myFanceTerra10EmailTemplate
        SubjectPart: My Subject
#       TextPart: "Nobody uses this anymore right ???"
        HtmlPart:
          Fn::Sub: |
            <img src="http://terra10.nl/img/logo.png">
            <h1>Sir/Madam {{parameter_name}},</h1>
            <p>Ho ya doin ?</p>
            <p>cheerio,</p>
            <strong>the T10 crew</strong>

If you use the Serverless Framework (you should) for your serverless deployments, the following IAM statement is necessary in your serverless.yml. It allows your Lambda function to use the email template at runtime. In our case the domain is hosted on AWS Route 53 as well, which saves you some problems.

- Effect: Allow
  Action:
  - ses:SendTemplatedEmail
  Resource:
  - "arn:aws:ses:eu-west-1:*:identity/terra10.io"

Hope it helps!

References

  • Original article @ https://jvzoggel.com
  • Sending Email Using Amazon SES

Posted on 26-11-2018 in Uncategorized

Using AWS Key Management (KMS) to encrypt and decrypt in AWS Lambda (NodeJS)

AWS Key Management Service (KMS) is a fully managed service that makes it easy to create and control encryption keys on AWS, which can then be used to encrypt and decrypt data in a safe manner. The service leverages Hardware Security Modules (HSM) under the hood, which in turn guarantees the security and integrity of the generated keys.

You can enable AWS KMS to encrypt your data at rest on many AWS storage solutions (like DynamoDB and EBS). However, in our Lambda functions we want to encrypt (and decrypt) certain values at runtime, so we needed some code to do that ourselves. It took me a while to figure out, so here it is.

Here is the function to encrypt any string with an AWS KMS key. You can create a KMS key through the IAM section of the console or (much better) through AWS CloudFormation.
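
If you go the CloudFormation route, a minimal sketch of such a key could look like this (the description, key policy and alias name are illustrative assumptions):

Resources:
  LambdaRuntimeKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key for encrypting runtime values in Lambda
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
        - Sid: AllowAccountAdministration
          Effect: Allow
          Principal:
            AWS:
              Fn::Sub: 'arn:aws:iam::${AWS::AccountId}:root'
          Action: kms:*
          Resource: '*'
  LambdaRuntimeKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/lambda-runtime-key
      TargetKeyId:
        Ref: LambdaRuntimeKey

The resulting key ARN is what goes into the KeyId parameter of the encrypt call below.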

import * as AWS from 'aws-sdk';

const kmsClient = new AWS.KMS({region: 'eu-west-1'});

/**
 * Encrypt
 */
async function encryptString(text: string): Promise<string> {

    const paramsEncrypt = {
        KeyId: 'arn:aws:kms:eu-west-1:........',
        Plaintext: Buffer.from(text)
    };

    const encryptResult = await kmsClient.encrypt(paramsEncrypt).promise();
    // The encrypted plaintext. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.
    if (Buffer.isBuffer(encryptResult.CiphertextBlob)) {
        return Buffer.from(encryptResult.CiphertextBlob).toString('base64');
    } else {
        throw new Error('Mayday Mayday');
    }
}

The result (a base64-encoded encrypted string) can be stored in DynamoDB or Aurora, or sent wherever we want. Eventually we want to decrypt it again as well, so here is the counterpart:

/**
 * Decrypt
 */
async function decryptEncodedstring(encoded: string): Promise<string> {

    const paramsDecrypt: AWS.KMS.DecryptRequest = {
        CiphertextBlob: Buffer.from(encoded, 'base64')
    };

    const decryptResult = await kmsClient.decrypt(paramsDecrypt).promise();
    if (Buffer.isBuffer(decryptResult.Plaintext)) {
        return Buffer.from(decryptResult.Plaintext).toString();
    } else {
        throw new Error('We have a problem');
    }
}
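
A quick round trip (inside an async function) shows how the two fit together; the secret value is just an example:

// Encrypt a value, keep the base64 string around, decrypt it again later
const encrypted = await encryptString('my-super-secret-value');
const decrypted = await decryptEncodedstring(encrypted);
console.log(decrypted === 'my-super-secret-value'); // true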

Hope it helps!

References

  • Example code, including a test function, is available on my GitHub as well

 

 
Posted on 05-11-2018 in AWS

How to set up unit testing for AWS Lambda serverless functions (on NodeJS)?

We use AWS Lambda serverless functions combined with TypeScript and NodeJS, which results in an extremely powerful developer toolset. Because functions contain isolated logic, they are ideal for automated unit testing in CI/CD pipelines. After looking at our options we decided to combine the features of mocha, chai and nock, which resulted in a very easy and powerful solution for unit testing.

I’m sharing this after a chat at a meetup where the use of EKS instead of Lambda, even for really simple functions, was advocated because serverless was supposedly hard to isolate (run locally) and hard to set up unit testing for. I beg to differ.

So let’s go …

Our example function

Our example is a simple function that retrieves a single record from an AWS DynamoDB table:

import { APIGatewayProxyEvent, Callback, Context } from 'aws-lambda';

// Handler for serverless framework
export async function handler(event: APIGatewayProxyEvent, _context: Context, callback: Callback) {
    try {
        callback(undefined, await getRecord(event));
    } catch (err) {
        callback(err);
    }
}

// Main logic
export async function getRecord(event: APIGatewayProxyEvent) { 
  .... 
  const id = '1'; // pointless, but good enough for this example
  const queryParams = { TableName: process.env.dynamotable, Key: { id } }; 
  const result = await documentClient.get(queryParams).promise(); 
  if (result.Item) { 
    return { statusCode: 200, headers, body: JSON.stringify(result.Item) }; 
  } else {
    return { statusCode: 404, headers, body: undefined };
  }
};

So what happens here?

  • We deliberately split the logic between handler (for the serverless framework) and the main logic
  • We need to export the main logic for use in our unit tests (and local development)
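
For reference, the parts elided with .... above boil down to a DocumentClient and a headers object, roughly like this (the exact header values are an assumption):

import * as AWS from 'aws-sdk';

const documentClient = new AWS.DynamoDB.DocumentClient();
const headers = { 'Content-Type': 'application/json' };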

Using Mocha & Nock

Since we are running Node we can use both Mocha and nock for our unit testing. We set up the specification file (.spec) for our simple function and first run the test with the nock recorder enabled:

import { APIGatewayProxyEvent } from 'aws-lambda';
import { expect } from 'chai';
import * as nock from 'nock';
import { getRecord } from './getRecord';

process.env.dynamotable = 'myTable';

describe('getRecord', () => {

    it('UT001 - getRecord with valid response', async() => {
        
        nock.recorder.rec();
        
        const event: APIGatewayProxyEvent = {
            body: '',
            headers: {},
            httpMethod: 'GET',
            isBase64Encoded: false,
            path: '',
            pathParameters: {},
            queryStringParameters: undefined,
            stageVariables: {},
            requestContext: {},
            resource: '' };

        const response = await getRecord(event);
        expect(response.statusCode).to.equal(200);
    });

});

So what happened?

  • We set the environment variables (like the DynamoDB table), which is normally done by AWS Lambda
  • We configure nock.recorder to audit the upcoming execution
  • We define a dummy APIGatewayProxyEvent that has some mandatory elements, which we leave mostly empty or undefined
  • We call our AWS Lambda function, and since we isolated the main logic we can call it directly
  • If the AWS profile on your dev machine has enough IAM grants, the code can execute against AWS DynamoDB (we use a dedicated user for this to keep it clean)

By running Mocha with the nock recorder enabled we can see the actual call to AWS DynamoDB from our developer machine:

<-- cut here -->

nock('https://dynamodb.eu-west-1.amazonaws.com:443', {"encodedQueryParams":true})
.get('/', {"TableName":"myTable","Key":{"id":{"S":"1"}}})
.reply(200, {Item: {name: {S: 'myName'}, id: {S: '1'}}});
......... (much stuff)

So with nock we actually recorded the HTTPS call to DynamoDB, which we can now easily use to mock the response during unit testing. Next, change the code in our spec file using the info from the nock recorder:

// nock.recorder.rec();
nock('https://dynamodb.eu-west-1.amazonaws.com:443')
    .get('/' )
    .reply(200, {Item: {name: {S: 'myName'}, id: {S: '1'}}});

So what happened?

  • Disabled the recorder, we don’t need it anymore
  • Set up nock to catch the HTTPS GET call to the DynamoDB endpoint
  • Configured nock to reply with a 200 and the specified Item record (you can also reply with the contents of a file)

Using Chai

With this basic setup we can execute unit tests in our pipeline with Mocha, where nock handles the mocking of the endpoints. With a little Chai magic we can define expectations in our specification file to make sure all the message logic of our function behaves properly and the HTTP reply is as expected.

expect(response.statusCode).to.equal(200);
expect(response.headers).to.deep.include({ 'Content-Type': 'application/json' });
expect(JSON.parse(response.body)).to.deep.equal({ name: 'myName', id: '1'});

And there is more

With this it’s easy to catch all outbound HTTPS requests and mock different responses (0 records, multiple records, and so on) for extensive unit testing. The possibilities are endless, so hope it helps …
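
For example, an extra test case in the same spec file could mock an empty DynamoDB reply to exercise the 404 branch (a sketch, reusing a dummy event like the one above):

it('UT002 - getRecord with no matching record', async() => {
    // nock answers the DynamoDB call with an empty body, so result.Item is undefined
    nock('https://dynamodb.eu-west-1.amazonaws.com:443')
        .get('/')
        .reply(200, {});

    const response = await getRecord(event);
    expect(response.statusCode).to.equal(404);
});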

 
Posted on 26-10-2018 in Uncategorized

How to determine your AWS Lambda@Edge regions and find your CloudWatch logs

You must review AWS CloudWatch log files in the correct region to see the logs created when CloudFront executed your Lambda@Edge function. I found this very useful AWS CLI based bash command, which determines the list of regions where your Lambda@Edge function has received traffic, so I am storing it here for future (personal) reference.

FUNCTION_NAME=function_name_without_qualifiers
for region in $(aws --output text  ec2 describe-regions | cut -f 3) 
do
    for loggroup in $(aws --output text  logs describe-log-groups --log-group-name "/aws/lambda/us-east-1.$FUNCTION_NAME" --region $region --query 'logGroups[].logGroupName')
    do
        echo $region $loggroup
    done
done

You can just leave FUNCTION_NAME empty to get a list of all functions.

Posted on 25-10-2018 in Uncategorized

How to configure AWS Lambda functions to use a fixed outbound IP

For our Serverless project running on AWS infrastructure we needed an outbound API call from Lambda to a SaaS platform that demands a whitelist of source IP addresses. That is pretty hard, since AWS uses a whole range of addresses. Luckily there is a trick: place your Lambda function in a VPC, which can be configured to use an Elastic IP for outbound communication.

High Level Design

Steps to configure the VPC / Network

Step 1 – create/use a VPC

Our AWS network configuration always starts with an AWS VPC (Virtual Private Cloud). You can use an existing VPC and configure your subnets there, or create a new one. Network configuration on AWS can look simple with the GUI wizard, but the setup of your VPC has a major impact on all your resources, so for real production environments this is something to get right. In the example I use CIDR block 10.0.0.0/16, which gives me (way too many) options.

Step 2 – create a public and 1-n private subnets

We need one public subnet to host the NAT Gateway and route our traffic to the Internet. The example can work with one private subnet hosting your Lambda function, but for availability purposes you want multiple private subnets in different availability zones for your Lambda to run in. In the example I use CIDR block 10.0.11.0/24 for the public subnet and 10.0.21.0/24, 10.0.22.0/24 and 10.0.23.0/24 for the three private subnets, one per availability zone.

Step 3 – Configure the Internet Gateway and the public subnet configuration

We need an Internet Gateway attached to our VPC, plus a public route table associated with the public subnet that routes all (0.0.0.0/0) outbound traffic from the public subnet to the Internet Gateway.

Step 4 – Configure the NAT Gateway and the private subnet configuration

We need a NAT Gateway in the public subnet which uses an Elastic IP address. By adding a private route table and attaching it to our private subnet(s), we make sure that all functions in the VPC use our Elastic IP for outbound communication. At least, as long as the private route table routes all (0.0.0.0/0) traffic to the NAT Gateway.

Configure your Lambda function

Make sure your Lambda function(s) use the configured VPC by selecting it in the VPC pulldown and then selecting all the private subnet(s) and a security group.

Notes:

  • Make sure your Lambda execution role includes the AWSLambdaVPCAccessExecutionRole managed policy
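
In a Serverless Framework project the same VPC attachment can be configured in serverless.yml; a sketch (the security group and subnet IDs are placeholders for the resources created by the template below):

provider:
  name: aws
  # Attach all functions to the private subnets and the Lambda security group
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0
    subnetIds:
      - subnet-0aaaaaaaaaaaaaaaa
      - subnet-0bbbbbbbbbbbbbbbb
      - subnet-0cccccccccccccccc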

CloudFormation source code

The template can be found in my aws-cloudformation Git repository as well:

Resources:
  ######################
  ## VPC basics
  ######################
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
      - Key: Name
        Value: t10-fn-vpc-dev
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
      - Key: Name
        Value: t10-fn-internetgateway-dev
  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId:
        Ref: InternetGateway
      VpcId:
        Ref: VPC

  ######################
  ## Subnet Public
  ######################
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2a
      CidrBlock: 10.0.11.0/24
      MapPublicIpOnLaunch: true
      Tags:
      - Key: Name
        Value: t10-fn-public-subnet-az1-dev
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-public-rt-dev
  PublicRouteTableRoute1:
    Type: AWS::EC2::Route
    DependsOn: InternetGateway
    Properties:
      RouteTableId:
        Ref: PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId:
        Ref: InternetGateway
  PublicRouteTableAssociation1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PublicSubnet1
      RouteTableId:
        Ref: PublicRouteTable
  PublicElasticIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId:
        Fn::GetAtt:
        - PublicElasticIP
        - AllocationId
      SubnetId:
        Ref: PublicSubnet1
      Tags:
      - Key: Name
        Value: t10-fn-natgateway-dev

  ######################
  ## Subnet Private
  ######################
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2a
      CidrBlock: 10.0.21.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az1-dev
  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2b
      CidrBlock: 10.0.22.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az2-dev
  PrivateSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: VPC
      AvailabilityZone: eu-west-2c
      CidrBlock: 10.0.23.0/24
      MapPublicIpOnLaunch: false
      Tags:
      - Key: Name
        Value: t10-fn-private-subnet-az3-dev
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-private-rt-dev
  PrivateRouteTableRoute1:
    Type: AWS::EC2::Route
    DependsOn: NatGateway
    Properties:
      RouteTableId:
        Ref: PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NatGateway
  PrivateRouteTableAssociation1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet1
      RouteTableId:
        Ref: PrivateRouteTable
  PrivateRouteTableAssociation2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet2
      RouteTableId:
        Ref: PrivateRouteTable
  PrivateRouteTableAssociation3:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet3
      RouteTableId:
        Ref: PrivateRouteTable

  ######################
  ## Security NACL
  ######################
  NetworkAcl:
    Type: AWS::EC2::NetworkAcl
    Properties:
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-nacl-dev
  NetworkAclEntryfn100:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      CidrBlock: 0.0.0.0/0
      Egress: 'false'
      NetworkAclId:
        Ref: NetworkAcl
      Protocol: "-1"
      RuleAction: allow
      RuleNumber: "100"
  NetworkAclEntryOutbound100:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      CidrBlock: 0.0.0.0/0
      Egress: 'true'
      NetworkAclId:
        Ref: NetworkAcl
      Protocol: "-1"
      RuleAction: allow
      RuleNumber: "100"
  PrivateSubnetNetworkAclAssociation1:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet1
      NetworkAclId:
        Ref: NetworkAcl
  PrivateSubnetNetworkAclAssociation2:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet2
      NetworkAclId:
        Ref: NetworkAcl
  PrivateSubnetNetworkAclAssociation3:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
      SubnetId:
        Ref: PrivateSubnet3
      NetworkAclId:
        Ref: NetworkAcl

  ######################
  ## Security Group(s)
  ######################
  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: t10-fn-lambda-sg
      GroupDescription: t10-fn-lambda-sg
      SecurityGroupIngress:
      - IpProtocol: -1
        CidrIp: 0.0.0.0/0
      VpcId:
        Ref: VPC
      Tags:
      - Key: Name
        Value: t10-fn-lambda-sg-dev

  ######################
  ## OUTPUT
  ######################

  #Outputs:
  #  VPC:
  #    Description: A reference to the created VPC
  #    Value:
  #      Ref: VPC
  #    Export:
  #      Name: t10-fn-vpc-id$

Posted on 01-10-2018 in AWS

How to use multiple resource definitions in the Serverless Framework

In other Serverless projects we use a single resources definition to include AWS CloudFormation scripts in our build/deploy, where the resource file contains component definitions like DDBTable: at the root.

resources:
  Resources: ${file(resources.yml)}

However, a new project contains a lot of AWS configuration, so I decided to split it up. You can do that using the following syntax, but make sure each file starts with the Resources: key.

resources:
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}
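
Each referenced file then starts with the Resources: key itself; a trimmed resources/resources-vpc.yml could look like this (the resource is just an illustration):

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16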

You can even mix and match inline and external resources

resources:
  - Resources:
      ApiGatewayRestApi:
        Type: AWS::ApiGateway::RestApi
  - ${file(resources/resources-vpc.yml)}
  - ${file(resources/resources-db.yml)}

Posted on 07-09-2018 in Serverless

Example AWS CloudFormation template for network load balancer


We needed a public network load balancer with SSL (through AWS Certificate Manager) and it took me a few retries to get it right, since most examples are based on the classic or application load balancer, so sharing it here:

  Terra10NetworkLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: t10-networkloadbalancer
      Scheme: internet-facing
      Subnets: !Ref Terra10Subnet
      Type: network
      Tags:
        - Key: Name
          Value: t10-networklb
  Terra10NetworkLoadBalancerTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: t10-networklb-target
      Port: 443
      Protocol: TCP
      VpcId: !ImportValue t10-vpc-id
      TargetGroupAttributes:
        - Key: deregistration_delay.timeout_seconds
          Value: 60
      Targets:
      - Id: !Ref Terra10EC2Instance1
        Port: 443
      - Id: !Ref Terra10EC2Instance2
        Port: 443  
      Tags:
        - Key: Name
          Value: t10-networklb-target
  Terra10NetworkLoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref Terra10NetworkLoadBalancerTargetGroup
      LoadBalancerArn: !Ref Terra10NetworkLoadBalancer
      Port: '443'
      Protocol: TCP
  Terra10NetworkLoadBalancerListenerCert:
    Type: AWS::ElasticLoadBalancingV2::ListenerCertificate
    Properties:
      Certificates:
        - CertificateArn: arn:aws:acm:eu-west-1:xxxaccountxxx:certificate/123456....
      ListenerArn: !Ref Terra10NetworkLoadBalancerListener

 

Posted on 29-08-2018 in AWS, CloudFormation