A hands-on approach to AWS Lambda using Java

Breaking down AWS Lambda

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda can be described as a function as a service (FaaS) platform: developers create functions using supported languages, and AWS executes the code on demand in response to events. Most AWS services can trigger a Lambda execution, such as the API Gateway service via HTTP(S) requests. You pay only for the compute time you consume; there is no charge when your code is not running.

The platform offers:

  • close to zero administration,

  • automatic scaling,

  • paying for compute time only,

  • and integration with most AWS services.

Note: Deep integration with these services creates a high degree of vendor lock-in.

In this article, I’ll provide an overview of AWS Lambda, its serverless architecture, pricing, and runtime. I’ll also cover the platform’s limitations at a high level and offer guidelines on how to overcome its constraints, including real-world examples and some performance measurements.

Using a serverless architecture

A serverless architecture is a way to build and run applications and services without having to manage the infrastructure; the cloud provider handles infrastructure management. Developers no longer have to provision, scale, and maintain servers to run applications, databases, and storage systems. Serverless applications consist of several components and layers, depending on the problem the application is trying to solve, such as:

  • Compute

  • API

  • Storage / Datastore

  • Messaging

  • Orchestration

  • Analytics

Cloud providers offer managed services for each of the components that make up a serverless architecture.

Pricing

Let’s calculate the approximate price for a basic serverless application with basic usage, given the following parameters:

  • Web application

  • Users: 1000 (500 of them are active users)

  • Active user makes 20 requests per day to API

  • Average request processing time: 2s

  • Storage per user: 5 MB files

  • Memory: 512 MB
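As a rough sketch of how these parameters translate into a monthly bill, the calculation below uses Lambda's pay-per-use model. The rate and free-tier constants are assumptions based on AWS's published per-request and per-GB-second pricing; verify them against the current pricing page.

```java
// Rough monthly cost estimate for the scenario above.
// Pricing constants are assumptions; check the AWS Lambda pricing page.
public class LambdaCostEstimator {
    static final double PRICE_PER_REQUEST = 0.20 / 1_000_000;  // assumed $0.20 per 1M requests
    static final double PRICE_PER_GB_SECOND = 0.0000166667;    // assumed per-GB-second rate
    static final long FREE_REQUESTS = 1_000_000;               // assumed monthly free tier
    static final long FREE_GB_SECONDS = 400_000;               // assumed monthly free tier

    static long monthlyRequests(int activeUsers, int requestsPerDay) {
        return (long) activeUsers * requestsPerDay * 30;       // ~30 days per month
    }

    static double monthlyGbSeconds(long requests, double secondsPerRequest, int memoryMb) {
        return requests * secondsPerRequest * (memoryMb / 1024.0);
    }

    static double monthlyCost(long requests, double gbSeconds) {
        double requestCost = Math.max(0, requests - FREE_REQUESTS) * PRICE_PER_REQUEST;
        double computeCost = Math.max(0, gbSeconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND;
        return requestCost + computeCost;
    }

    public static void main(String[] args) {
        long requests = monthlyRequests(500, 20);                // 500 active users, 20 req/day
        double gbSeconds = monthlyGbSeconds(requests, 2.0, 512); // 2 s at 512 MB
        System.out.printf("Requests: %d, GB-seconds: %.0f, cost: $%.2f%n",
                requests, gbSeconds, monthlyCost(requests, gbSeconds));
    }
}
```

With these numbers the workload (300,000 requests and 300,000 GB-seconds per month) fits inside the assumed free tier, so the Lambda compute bill is effectively zero; file storage (1,000 users × 5 MB ≈ 5 GB) would be billed separately by the storage service.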

Runtime

AWS Lambda supports Node.js, Python, Ruby, Go, C# through .NET Core, and Java (more details are available here). The Java runtime comes in two versions, 8 and 11, both production-ready OpenJDK distributions available via Amazon Corretto. While some Java developers dislike vendor-specific OpenJDK distributions, it is possible to supply a custom runtime. If you want to run the latest Java version, expect additional effort to achieve that.

Calling out AWS Lambda limits

AWS Lambda has several runtime limitations, such as:

  • Memory: 128 MB - 3,008 MB, in 64 MB increments

  • Max execution duration: 900 seconds (15 minutes)

  • Function package size: 50 MB (compressed), 250 MB (uncompressed)

  • Invocation payload (request and response): 6 MB

  • Concurrency: 500 - 3000 (varies per region)

There are additional limitations from the AWS API Gateway service, including a maximum request time of 30 seconds.

Cold start

The Lambda function (i.e., the code you run on AWS Lambda) needs a container with your code before it can start processing events or requests. The startup time before processing is known as the cold start. The container stays alive for some time after processing, so if a new event or request arrives, the same container can process it without another cold start.

Let's assume that you want to use Amazon Relational Database Service (RDS) or another resource that isn't exposed to the internet. That means you have to set up a Virtual Private Cloud (VPC). While security is essential, VPC comes with a trade-off: an additional delay in start time of anywhere from two to thirteen seconds.

A single HTTP request sometimes needs to hit more than one Lambda function, such as a security authorizer first and then the business logic in another Lambda. Each function's cold start and processing time add up in the overall response time. If you want to deliver a low-latency system, you should find a way to minimize these delays.

6 tactics for overcoming Lambda constraints

There are several ways to address the platform’s constraints. The list below covers six tactics to mitigate Lambda's limitations.

1. Keep the package size small.

Package size matters. Ask yourself, “Do I really need Spring Boot? Spring? Hibernate?” Consider using lighter, faster libraries instead. Developers are often fine using singleton services without dependency injection. Fewer dependencies and a smaller package generally bring faster initialization and execution times, along with lower latency and cost.

2. Improve performance and speed.

Use the latest AWS Software Development Kit (SDK). It starts faster compared to older versions and includes performance improvements.

3. Take advantage of execution context.

In Java, the execution context is the static code section and the handler's constructor. This section is executed with higher CPU performance. To take advantage of it, move long-running initial operations there (e.g., opening a connection to the database). Warning: all of these operations have to fit in 10 seconds or less; otherwise, the Lambda will be killed and restarted, with 20 seconds allowed for initialization.
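A minimal sketch of this idea is shown below. The expensive resource here is simulated with a string; in a real function it would be something like a database connection pool, and the handler class and field names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: expensive initialization placed in the execution context
// (the static initializer), so it runs once per container, not per invocation.
public class TimestampHandler {
    static final AtomicInteger INIT_COUNT = new AtomicInteger();

    // Simulated expensive resource (e.g., a database connection).
    // Initialized once, when the container starts, with the boosted CPU share.
    static final String EXPENSIVE_RESOURCE;

    static {
        EXPENSIVE_RESOURCE = "connection-" + INIT_COUNT.incrementAndGet();
    }

    // Invoked per request; reuses the already-initialized resource.
    public Map<String, Object> handleRequest(Map<String, Object> input) {
        return Map.of("resource", EXPENSIVE_RESOURCE,
                      "timestamp", System.currentTimeMillis());
    }
}
```

However many times the handler runs inside the same container, the static section executes exactly once.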

4. Avoid VPC.

The cold start runs much faster without a VPC, so use VPC only where it's truly required. With VPC, less is more.

5. Warm up.

How can you warm up and keep Lambda functions alive? Two approaches are:

  • AWS's built-in Provisioned Concurrency. It's easy to set up and use. You can use application auto-scaling to configure schedules for concurrency, or have auto-scaling add or remove concurrency in real time as demand changes. Pricing is defined here.

  • Without Provisioned Concurrency, there are still several ways to keep a Lambda warm, typically by invoking it on a schedule. An example can be found here.
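One common warm-up pattern is a scheduled "ping" event that invokes the function periodically so a container stays alive. A sketch in Serverless Framework syntax follows; the five-minute rate is an arbitrary choice, and the handler should detect the scheduled event and return early without running business logic.

```yaml
functions:
  getTimestamp:
    handler: com.serverless.Handler
    events:
      - http:
          path: /
          method: get
      # Warm-up ping; the rate is an arbitrary choice.
      - schedule: rate(5 minutes)
```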

6. Optimize Lambda memory size.

If you choose 1,792 MB, you get one full vCPU (virtual CPU). Allocating more memory grants additional vCPU share. Conversely, going beyond 1,792 MB won't increase performance if your code uses only a single thread.
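The relationship can be sketched as a simple linear model; the assumption that CPU share scales proportionally with memory, reaching one full vCPU at 1,792 MB, is taken from AWS's documentation.

```java
public class LambdaCpu {
    // Assumption: CPU share scales linearly with memory,
    // with one full vCPU at 1,792 MB (per AWS documentation).
    static double approxVcpus(int memoryMb) {
        return memoryMb / 1792.0;
    }

    public static void main(String[] args) {
        System.out.printf("512 MB  -> %.2f vCPU%n", approxVcpus(512));
        System.out.printf("1792 MB -> %.2f vCPU%n", approxVcpus(1792));
        System.out.printf("3008 MB -> %.2f vCPU%n", approxVcpus(3008));
    }
}
```

A single-threaded function gains nothing beyond 1,792 MB, while multi-threaded code can use the extra fraction of a second vCPU.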

More information is available here.

Building the first function

With a clear understanding of how to face limitations, you’re ready to build. There are several ways to create Lambda functions, including:

  • AWS CLI

  • AWS Web Console

  • Terraform Scripts

  • IDE plugin

  • Serverless Framework

One of the most popular approaches is using the Serverless Framework. This framework is infrastructure as code by definition, meaning you can spin up a serverless application with all required resources using a few commands in the terminal. You can easily create and deploy a simple AWS Lambda function that returns a timestamp.

Prerequisites include:

  • Node and npm

  • Serverless CLI (npm install -g serverless)

  • AWS Credentials (This item has disadvantages because it requires granting a wide range of access rights in the AWS account.)

  • Java 11

  • Maven

Start by creating a project.

Use the following command to get started: serverless create --template aws-java-maven --name timestamp --path timestamp

Update pom.xml using:

<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>

Then, update serverless.yml with:

service: timestamp

provider:
  name: aws
  runtime: java11
  memorySize: 1024
  timeout: 30

package:
  artifact: target/hello-dev.jar

functions:
  getTimestamp:
    handler: com.serverless.Handler
    events:
      - http:
          path: /
          method: get

Next, update the handleRequest method in Handler.java.

@Override
public ApiGatewayResponse handleRequest(Map<String, Object> input, Context context) {
  LOG.info("received: {}", input);
  return ApiGatewayResponse.builder()
    .setStatusCode(200)
    .setObjectBody(Collections.singletonMap("timestamp", System.currentTimeMillis()))
    .setHeaders(Collections.singletonMap("X-Powered-By", "Devbridge"))
    .build();
}

Now, package and deploy using:

  • mvn clean package

  • serverless deploy -v

You can see the function endpoint URL after deployment succeeds and execute curl with: curl https://<api_id>.execute-api.us-east-1.amazonaws.com/dev

The response should appear as: {"timestamp":1594030185799}.

Adding Spring Boot

One of the use cases for Lambda is a REST API for a web application. To achieve that, you need to set up a proxy that connects the Lambda handler with the Spring framework. There is a Serverless Java container for that case.

Below is sample code for a Spring Boot application with TimestampController.

@RestController
@RequestMapping("/timestamp")
public class TimestampController {
    @GetMapping
    public Map<String, Object> get() {
        return Collections.singletonMap("Timestamp", System.currentTimeMillis());
    }

    @GetMapping("/formatted")
    public Map<String, Object> getFormatted() {
        return Collections.singletonMap("Timestamp", new Timestamp(System.currentTimeMillis()).toString());
    }
}

There are manual configuration instructions that explain how to adjust an existing Spring Boot project for AWS Lambda. Note that the Serverless Java container itself doesn’t use the Serverless Framework.
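The handler class referenced in the serverless.yml below can be sketched along the lines of the Serverless Java container's documented pattern. This is a sketch, not a complete implementation: it assumes the aws-serverless-java-container-springboot2 dependency is on the classpath, and TimestampApplication is a hypothetical name for the Spring Boot entry-point class.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

// Proxies API Gateway requests into the Spring Boot application.
// Initialized in the execution context, so Spring starts once per container.
public class StreamLambdaHandler implements RequestStreamHandler {
    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // TimestampApplication is the Spring Boot entry point (assumed name).
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(TimestampApplication.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context)
            throws IOException {
        handler.proxyStream(input, output, context);
    }
}
```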

To start using it, you need serverless.yml shown here.

service: timestamp

provider:
  name: aws
  runtime: java11
  memorySize: 1024
  timeout: 30

package:
  artifact: target/timestamp-Lambda-package.zip

functions:
  getTimestamp:
    handler: com.devbridge.timestamp.StreamLambdaHandler
    events:
      - http:
          path: /{proxy+}
          method: any

Now, you are ready to package and deploy using the same commands as before:

  • mvn clean package

  • serverless deploy -v

Measuring performance

Speed matters, especially for cold start timing. Spring Boot auto-configuration takes some time, so I measured its cost and how the start duration can be improved. I executed a performance test for several variations of the timestamp Lambda built from the previous examples:

  • A - Plain Java 11 (without framework)

  • B - Spring Boot 2

  • C - Spring Boot 2 with enhancements

  • D - Spring Boot 2 with enhancements and without auto-configuration

  • E - Spring Framework

JMeter test case:

  • 15 threads

  • 1 second ramp-up period

Balancing the benefits and drawbacks of AWS Lambda

Pros:

  • Serverless pricing gives a considerable advantage.

  • Dynamic scaling helps prevent outages under unexpected load.

  • The learning curve for AWS Lambda with the Serverless Framework is lower compared to microservices platforms like Kubernetes.

  • Less management work is needed for DevOps.

Cons:

  • There is a very high dependency on a single provider, AWS.

  • There are limitations (cold start, execution time, memory size, etc.)

  • Overcoming the limitations requires additional effort.

When to choose AWS Lambdas

I prefer Lambdas when:

  • Pricing is a priority.

  • Unstable traffic might appear.

  • Auto-scaling is needed.

Consider alternatives when:

  • You don't want to depend on one specific vendor.

  • Performance is crucial.

  • The workload involves high complexity or long-running calculations.

  • There is a high level of traffic.
