Sunday, April 30, 2017

Storing User Sessions in Redis Using Amazon ElastiCache


In this series of posts, I am writing about various AWS services. In my last post, I showed how to use S3 direct uploads to upload files to S3 from the browser.
The application I developed for that post used HTTP sessions for session management. In this post, I will show how to use Redis to store user sessions when running multiple EC2 instances.
The Problem
By default, Spring Boot applications use HTTP sessions that are valid only in the JVM in which they are created. If we use only one EC2 instance for our application, the application works as expected. But if we use multiple EC2 instances behind an Elastic Load Balancer, an HTTP session created on one EC2 instance will not be valid when a subsequent request is handled by another instance.
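
To see the problem concretely, consider a handler that counts requests in the session. The sketch below is illustrative and not part of the application; with in-JVM sessions, this counter resets whenever the load balancer routes the user to a different instance.

import javax.servlet.http.HttpSession;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VisitCounterController {

    @GetMapping("/visits")
    public String visits(HttpSession session) {
        // Stored in the local JVM unless an external session store is configured.
        Integer count = (Integer) session.getAttribute("visits");
        count = (count == null) ? 1 : count + 1;
        session.setAttribute("visits", count);
        return "Visits in this session: " + count;
    }
}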
The Solution
In a load-balanced, multi-instance scenario, we should store the session information outside of the EC2 instances. There are different solutions, such as a database or an in-memory store.
Nowadays, the best practice for storing session information is to use an in-memory store like Memcached or Redis.
Amazon provides a managed in-memory data store called Amazon ElastiCache that is compatible with both Memcached and Redis.

For this post, I will use an ElastiCache Redis cluster with 1 node. The picture below shows the structure of the session management solution that I will use.




I will use the application in my previous post as a starting point. The code can be found here.

The steps for adding Redis as a session store are below.

1. Create the ElastiCache Redis cluster

2. Add dependencies

3. Configure Redis session store

4. Deploy the application

Let's start.


1. Create the ElastiCache Redis cluster

We can use the AWS CLI command below to create a single-node Redis cluster.

aws elasticache create-cache-cluster --cache-cluster-id CardStoreRedis --cache-node-type cache.t2.micro --engine redis --engine-version 3.2.4 --num-cache-nodes 1

By default, the cluster will use the default security group in the default VPC of the AWS region. To allow the EC2 instances to access Redis, we can enable inbound traffic to the default security group on port 6379 from the security group of the EC2 instances, specifying that security group as the source with the command below.

aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 6379 --source-group CardStoreSG

It will take some time to create the Redis cluster. After it is created, we can get the endpoint address of the cache node with the command below.
aws elasticache describe-cache-clusters --cache-cluster-id CardStoreRedis --show-cache-node-info | grep Address

It should look like the address below.
cardstoreredis.XXXX.001.euc1.cache.amazonaws.com  


We will use this address to access Redis cache node from the application.

2. Add dependencies

Add the Maven dependencies below for Spring Session and Spring Data Redis.

        <dependency>
              <groupId>org.springframework.session</groupId>
              <artifactId>spring-session</artifactId>
        </dependency>

        <dependency>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>

3. Configure Redis session store

To enable Redis as the session store, we use the @EnableRedisHttpSession annotation. By default, Spring Session will try to configure Redis keyspace notifications, which it uses for session deletion and expiration events, but configuration commands from Redis clients are disabled in ElastiCache Redis clusters. We can stop the application from attempting this configuration by exposing a ConfigureRedisAction.NO_OP bean. For more information, see here.

If you want your Spring Boot application to use these session lifecycle events, you can set the notify-keyspace-events parameter by using a custom parameter group when creating your ElastiCache Redis cluster. For more information, see here.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.ConfigureRedisAction;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@Configuration
@EnableRedisHttpSession
public class RedisSessionConfig {

    // Return a no-op action so that Spring Session does not send the
    // CONFIG command, which ElastiCache disables.
    @Bean
    public static ConfigureRedisAction configureRedisAction() {
        return ConfigureRedisAction.NO_OP;
    }
}

4. Deploy the application

After this preparation, we can deploy the application to AWS. We will create an ELB and an Auto Scaling group to use multiple EC2 instances.
We package the application and upload the WAR to S3 with the commands below.
mvn package
aws s3 cp target/cardstore-0.0.1-SNAPSHOT.war s3://cardstoredeploy/

The Spring Boot application will read the Redis cache node address from the spring.redis.host property. We can specify this value as a JVM system property when launching the JVM in the EC2 init script, as shown below.

#!/bin/bash
yum update -y
yum install java-1.8.0 -y
yum remove java-1.7.0-openjdk -y

mkdir /app

aws s3 cp --region eu-central-1 s3://cardstoredeploy/cardstore-0.0.1-SNAPSHOT.war /app/

java -Dspring.redis.host=cardstoreredis.XXX.0001.euc1.cache.amazonaws.com -Duser.activation.queue.name=XXX -Dmail.from.address=XXX -Duser.card.upload.s3.bucket.name=XXX -Duser.card.upload.s3.bucket.region=XXX -Duser.card.upload.s3.bucket.awsId=XXX -Duser.card.upload.s3.bucket.awsSecret=XXX -jar /app/cardstore-0.0.1-SNAPSHOT.war

We can create and configure the ELB with the commands below.

aws elb create-load-balancer --load-balancer-name CardStoreLB --listeners "Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080" --security-groups sg-8567a2ee --availability-zones eu-central-1a
aws elb configure-health-check --load-balancer-name CardStoreLB --health-check Target=TCP:8080,Interval=5,UnhealthyThreshold=2,HealthyThreshold=2,Timeout=2

Then, we can create the launch configuration and the auto scaling group using the commands below.

aws autoscaling create-launch-configuration --launch-configuration-name CardStoreLC --key-name CardStoreKP --image-id ami-af0fc0c0 --instance-type t2.micro --user-data file://cardstore_ec2_init_script.txt --security-groups sg-8567a2ee  --iam-instance-profile CardStoreRole

aws autoscaling create-auto-scaling-group --auto-scaling-group-name CardStoreASG --launch-configuration-name CardStoreLC --load-balancer-names CardStoreLB --min-size 2 --max-size 2 --termination-policies "OldestInstance"  --availability-zones eu-central-1a


With these commands, we have created two load-balanced EC2 instances. After you log in to the application at the load balancer address on port 8080, you can refresh the dashboard page to make sure that requests are distributed to both instances. If you are not redirected to the login page and still see the dashboard page, both instances can read the session information from the Redis cache node. You can tail the cloud-init-output.log file on each instance with the command below to verify that both instances are receiving requests.
tail -200f /var/log/cloud-init-output.log
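
Alternatively, a small debug endpoint like the sketch below (illustrative, not part of the repository) returns the hostname of the instance that served the request, which makes the round-robin behavior easy to observe from the browser.

import java.net.InetAddress;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WhoAmIController {

    // Returns the hostname of the EC2 instance that handled this request.
    @GetMapping("/whoami")
    public String whoAmI() throws Exception {
        return InetAddress.getLocalHost().getHostName();
    }
}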


Summary

When using multiple EC2 instances, we should store the session information outside of the EC2 instances. In this post, I have shown how to create a Redis cache cluster with Amazon ElastiCache and use it to store user session information. The code can be found here.


Thursday, April 27, 2017

Uploading Images to Amazon S3 Directly from the Browser Using S3 Direct Uploads



In this series of posts, I am writing about various AWS services. In my previous posts, I have written about AWS EC2, Elastic Load Balancing, Auto Scaling, DynamoDB, Amazon Simple Queue Service and Amazon Simple Email Service.

In my last post, I added user activation functionality to my digital card store application. The application is used for managing digital cards. So far, I have added user registration, user session management, selling and buying cards, and user activation. Currently, the user can add a new card by specifying only a name.

In this post, I will add an upload function that allows the user to attach an image to a digital card. I will use Amazon S3 to store the uploaded image files.

Upload Functionality

When we think about uploading a file, the first option that comes to mind is to upload the file from the browser to an EC2 instance and then send it to Amazon S3 from the EC2 instance.
While this method accomplishes the image upload requirement, there is a better one. In 2012, Amazon announced CORS support for Amazon S3, which allows any web application to upload files to S3 directly. This makes uploads quick and efficient and eliminates proxying the upload requests through our servers.

The picture below shows the upload process.



To use direct uploads to S3, we should follow the steps below.

1. Enable CORS support for the bucket
2. Configure access permissions
3. Develop the signing part on the server
4. Prepare the web front end

In this post, I will start with the code from my last post, which can be found here. In the post I wrote about DynamoDB, I generated the Card entity class with an imageUrl field. In this post, I will use this field to hold the URL of the uploaded image file. There will be no changes in the entity class or the CardController class. After the image is uploaded to S3, its URL will be passed as the imageUrl of the add card request. The card will be persisted to DynamoDB with this URL, and the URL will be used to show the card image in the card listing table. The final code for this post can be found here.
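
As a reminder, the Card entity from the DynamoDB post looks roughly like the sketch below (table and field names are assumed from this series; see the repository for the exact class). Only the imageUrl field matters for this post.

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

// Rough sketch of the Card entity; imageUrl will hold the S3 URL of the uploaded image.
@DynamoDBTable(tableName = "Card")
public class Card {

    private String id;
    private String name;
    private String imageUrl;

    @DynamoDBHashKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @DynamoDBAttribute
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @DynamoDBAttribute
    public String getImageUrl() { return imageUrl; }
    public void setImageUrl(String imageUrl) { this.imageUrl = imageUrl; }
}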

Let's start.

1. Enable CORS support for the bucket

To be able to use direct uploads from any web application, the target S3 bucket should be configured to allow requests from a different domain. For more information, see the S3 CORS documentation.

To enable CORS support using the Amazon Console, use the steps below. To use the AWS CLI, see here.

  • Log in to the Amazon Console and select S3
  • Select your bucket and click Properties
  • Click Permissions and then click Edit CORS Configuration
  • Paste the configuration below, click Save, and then click Close


<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
       <CORSRule>
             <AllowedOrigin>*</AllowedOrigin>
             <AllowedMethod>GET</AllowedMethod>
             <AllowedMethod>POST</AllowedMethod>
             <AllowedMethod>PUT</AllowedMethod>
             <AllowedHeader>*</AllowedHeader>
       </CORSRule>
</CORSConfiguration>

Please note that I allowed any origin to post to the bucket for easy development. For robust security in production, please restrict the allowed origins to your own domains.
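
For example, a production CORS rule restricted to a single origin could look like the snippet below (the domain is a placeholder).

<CORSRule>
      <AllowedOrigin>https://www.example.com</AllowedOrigin>
      <AllowedMethod>POST</AllowedMethod>
      <AllowedHeader>*</AllowedHeader>
</CORSRule>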

2. Configure access permissions

To allow uploads to the S3 bucket, the bucket should be writable. There are a few options for this. The first option is to make the bucket public, but then everybody can write to your bucket and you take the risk of uncontrolled uploads. The second option is to make the bucket writable only by a specific IAM user and to sign the upload requests with this user's credentials. This option is more secure than the first, and it is the one I will use in this post. Please consider the security options carefully before using the upload function in production.

When using signed requests, S3 expects your upload requests to include an upload policy and a signature. The signature is prepared using your IAM credentials. You can use any IAM credentials, but for stricter security, please use a dedicated IAM user that has write access only to the target S3 bucket. This way, you can be sure that in case of a credential disclosure, only a specific S3 bucket is affected, not any other AWS resources. For even stricter security, you can use Temporary Security Credentials.
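
If you grant access on the user side instead of (or in addition to) the bucket policy shown in the next step, a minimal identity policy for this dedicated IAM user could look like the sketch below (the bucket name is a placeholder matching the bucket policy).

{
      "Version": "2012-10-17",
      "Statement": [
            {
                  "Effect": "Allow",
                  "Action": ["s3:PutObject", "s3:PutObjectAcl"],
                  "Resource": "arn:aws:s3:::mys3bucket/*"
            }
      ]
}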

To configure access permissions using the Amazon Console, use the steps below. To use the AWS CLI, see here.

  • Log in to the Amazon Console and select S3
  • Select your bucket and click Properties
  • Click Permissions and then click Edit bucket policy
  • Paste the policy below and click Save


{
       "Version": "2012-10-17",
       "Statement": [
             {
                    "Effect": "Allow",
                    "Principal": {
                           "AWS": "arn:aws:iam::XXXXX:user/mys3user"
                    },
                    "Action": "s3:PutObject",
                    "Resource": "arn:aws:s3:::mys3bucket/*"
             },
             {
                    "Effect": "Allow",
                    "Principal": {
                           "AWS": "arn:aws:iam::XXXXX:user/mys3user"
                    },
                    "Action": "s3:PutObjectAcl",
                    "Resource": "arn:aws:s3:::mys3bucket/*"
             }
       ]
}

This policy allows our IAM user to upload a file and set its ACL to public-read, which makes it readable by everyone. We make the uploaded files publicly readable so the browser can show the card images directly from S3 when listing cards. If you don't want to make the images public, you can generate temporary signed image URLs to show the card images in the browser. You can find more information here.
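
With the AWS SDK for Java, a temporary signed GET URL could be generated like the sketch below (bucket name, key, and region are placeholders; this helper is not part of the application).

import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SignedImageUrlSketch {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("eu-central-1").build();

        // The URL stays valid for 10 minutes; after that, S3 rejects the request.
        Date expiration = new Date(System.currentTimeMillis() + 10 * 60 * 1000);
        URL imageUrl = s3.generatePresignedUrl("mys3bucket", "upload_49921.jpg", expiration, HttpMethod.GET);

        System.out.println(imageUrl);
    }
}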


3. Develop the signing part on the server

First, we create a Spring controller to generate the signature. The class uses the bucket name, the bucket region, the AWS credential ID used for signing the uploads, and the corresponding secret as configuration values. After generating the signature, the controller returns the signed upload data that will be used in the upload request.

import javax.servlet.http.HttpSession;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CardUploadController {

    @Value("${user.card.upload.s3.bucket.name}")
    String s3BucketName;

    @Value("${user.card.upload.s3.bucket.region}")
    String s3BucketRegion;

    @Value("${user.card.upload.s3.bucket.awsId}")
    String s3BucketAwsId;

    @Value("${user.card.upload.s3.bucket.awsSecret}")
    String s3BucketAwsSecret;

    @RequestMapping(value = "/presign", method = RequestMethod.POST)
    @ResponseBody
    public PreSignedS3UploadData presignS3Upload(@RequestParam("contentType") String contentType,
                                                 @RequestParam("fileName") String fileName, HttpSession session) {
        PreSignedS3UploadData res;
        try {
            // Keep the original file extension but generate a random name for the S3 object.
            String extension = fileName.lastIndexOf('.') == -1 ? "" : fileName.substring(fileName.lastIndexOf('.'));
            String s3FileName = "upload_" + (int) (100000 * Math.random()) + extension;

            res = S3SignUtil.generatePreSignedUploadData(s3BucketName, s3BucketRegion, s3BucketAwsId,
                    s3BucketAwsSecret, contentType, s3FileName);
        }
        catch (Exception e) {
            res = new PreSignedS3UploadData("Can't generate signature for upload: " + e.toString());
        }
        return res;
    }
}
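
PreSignedS3UploadData is a simple data holder. Judging from how the front end consumes the response in step 4, it carries roughly the following fields (a sketch; see the repository for the exact class).

public class PreSignedS3UploadData {

    public String fileName;     // the S3 object key generated for the upload
    public String contentType;  // content type the policy was signed for
    public String credential;   // X-Amz-Credential value
    public String date;         // X-Amz-Date value
    public String policy;       // Base64-encoded policy document
    public String signature;    // SigV4 signature of the policy, hex encoded
    public String bucketUrl;    // URL of the target S3 bucket
    public String errorMessage; // set only when signing fails

    public PreSignedS3UploadData() {
    }

    public PreSignedS3UploadData(String errorMessage) {
        this.errorMessage = errorMessage;
    }
}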

The signature generation algorithm first generates a security policy, then derives a signing key from the AWS credentials, and finally signs the policy with the signing key. The security policy specifies the expiration date and time, the ACL for the file being uploaded, and some other options. This application generates a policy with a 3-minute expiration time, a public-read ACL to allow public access, and a 1 MB maximum upload size. A sample policy looks like the one below.

{
       "expiration": "2017-04-26T23:09:59.638Z",
       "conditions": [
             { "acl": "public-read" },
             { "bucket": "XXXXX" },
             { "key": "upload_49921.jpg" },
             { "Content-Type": "image/jpeg" },
             ["content-length-range", 0, 1048576],
             { "x-amz-credential": "XXXXXXXX/20170426/eu-central-1/s3/aws4_request" },
             { "x-amz-algorithm": "AWS4-HMAC-SHA256" },
             { "x-amz-date": "20170426T000000Z" }
       ]
}

The policy and signature generation code is in the S3SignUtil class.

For more information on generating the policy and signature, see here.
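
For reference, the Signature Version 4 key derivation and policy signing that S3SignUtil performs can be sketched as follows (the method names here are illustrative, not the actual contents of the class).

import java.nio.charset.StandardCharsets;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SigV4Sketch {

    static byte[] hmacSha256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // dateStamp is like "20170426", region is like "eu-central-1".
    static byte[] signingKey(String awsSecret, String dateStamp, String region) throws Exception {
        byte[] kDate = hmacSha256(("AWS4" + awsSecret).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kRegion = hmacSha256(kDate, region);
        byte[] kService = hmacSha256(kRegion, "s3");
        return hmacSha256(kService, "aws4_request");
    }

    // The signature is the hex-encoded HMAC of the Base64-encoded policy document.
    static String signPolicy(String base64Policy, String awsSecret, String dateStamp, String region) throws Exception {
        byte[] signature = hmacSha256(signingKey(awsSecret, dateStamp, region), base64Policy);
        StringBuilder hex = new StringBuilder();
        for (byte b : signature) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }
}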


4. Prepare the web front end.

After we complete the code that generates the signed upload data, we can prepare the web front end. We will change the dashboard.jsp file and add a file upload input to the Add Card form. When the selection in the file upload input changes, we request a signature from the controller we created in step 3. Then we build a form dynamically and post the file together with the signature to the S3 bucket URL.

The script is below.

function cardImageFileUpdated(){ 
       var file = document.getElementById('cardImageInput').files[0];
      
       if (file != null)
             startCardImageFileUpload(file);
}

function startCardImageFileUpload(file) {
       $.ajax({
             type: "POST",
             url: "presign",
             data: 'contentType=' + encodeURIComponent(file.type) + '&fileName=' + encodeURIComponent(file.name),
             success: function(data) {
                   if (data.errorMessage)
                         alert(data.errorMessage);
                   else
                         doCardImageFileUpload(file, data);
             }
       });
}


function doCardImageFileUpload(file, data){

       var formData = new FormData();
      
       formData.append('key', data.fileName);
       formData.append('acl', 'public-read');
       formData.append('Content-Type', data.contentType);
       formData.append('X-Amz-Credential', data.credential);
       formData.append('X-Amz-Algorithm', "AWS4-HMAC-SHA256");
       formData.append('X-Amz-Date', data.date);
       formData.append('Policy', data.policy);
       formData.append('X-Amz-Signature', data.signature);
       formData.append('file', $('input[type=file]')[0].files[0]);
      
       $.ajax({
           url: data.bucketUrl,
           data: formData,
           type: 'POST',
           contentType: false,
           processData: false,
           success: function () {
              var imageUrl = data.bucketUrl + "/" + data.fileName;
          
              document.getElementById('cardImagePreview').src = imageUrl;
              document.getElementById('cardImageUrl').value = imageUrl;
           },
           error: function () {
              alert("Upload error.");
           }
       });
}
  
And we change the Add Card form from

<form id="add-card-form" onsubmit="return false;">
       <input type="text" name="name" placeholder="name" />
       <button onclick="addCard()">Add</button>
</form>

to

<form id="add-card-form" onsubmit="return false;">
       <span>Card Name</span>
       <input type="text" name="name" placeholder="name" /><br/>
      
       <span>Card Image File</span>
       <input type="file" id="cardImageInput" accept="image/*" onchange="cardImageFileUpdated()"/>
       <img style="border:1px solid gray;height:160px;width:120px;" id="cardImagePreview" src="/images/default-card.png"/>
       <input type="hidden" id="cardImageUrl" name="imageUrl" value="/images/default-card.png"/> <br/>
                          
       <button onclick="addCard()">Add</button>
</form>

First, we add a file input to select the image file. We use an image tag to preview the image after the upload completes. Then we add a hidden imageUrl field to the Add Card form, as mentioned at the beginning of this post.

At this point, we have finished uploading the card image to S3 and saving the card with the URL of the uploaded file. Next, we will show the card images in the card listing tables. We change the buildHtmlTable JavaScript function in dashboard.jsp from

if (cellValue == null) cellValue = "";
row$.append($('<td/>').html(cellValue));

to

if (cellValue == null) cellValue = "";
if (columnList[colIndex] == 'imageUrl')
      cellValue = '<img style="border:1px solid gray;height:160px;width:120px;" src="' + cellValue + '"/>';
row$.append($('<td/>').html(cellValue));

to use the imageUrl field as the card image.

After showing the card images in the card listings, we have completed the changes. If you run the application with this command,

$ mvn spring-boot:run -Drun.jvmArguments="-Duser.activation.queue.name=XXX -Dmail.from.address=XXX -Duser.card.upload.s3.bucket.name=XXX -Duser.card.upload.s3.bucket.region=XXX -Duser.card.upload.s3.bucket.awsId=XXX -Duser.card.upload.s3.bucket.awsSecret=XXX"

you can use the application as shown in the screenshots below.






Summary

In this post, I have shown how to add file upload functionality for card images. I used direct S3 uploads from the browser, without uploading the file to an EC2 instance first. The code can be found at my GitHub repository.

In my next posts, I will continue to use various AWS services to add functionality to my digital card store application.