This article was originally published on the Dashbird blog: https://dashbird.io/blog/aws-serverless-updates-2021/

In this article, we’re covering all the latest updates from AWS in 2021 that all serverless builders should be aware of.

Before we start, let’s recall a few significant updates in serverless announced at re:Invent 2020. One thing we keep seeing is that agility is one of the primary drivers for moving workloads to the cloud, and serverless is a good example of this.

But the discussion often starts with cost.

At re:Invent 2020, AWS announced that Lambda’s billing granularity has been reduced to one millisecond. What that means for you is that if your Lambda function runs for 30 milliseconds, you now pay for 30 milliseconds rather than having the duration rounded up to 100 milliseconds, as was the case before. That can translate to something like 70% savings from function duration alone. This required no action from you to be enabled: it automatically applied to all of your Lambda functions, including Lambda@Edge.
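
As a rough illustration, here is the arithmetic behind that 70% figure. The per-GB-second price below is only an assumption for the sake of the math, so check current AWS pricing before relying on it:

```python
# Hypothetical duration-cost sketch: 1 ms vs. 100 ms billing granularity.
PRICE_PER_GB_SECOND = 0.0000166667   # illustrative price, not an official figure
memory_gb = 0.128                    # a 128 MB function
invocations = 10_000_000

def duration_cost(billed_ms: float) -> float:
    """Duration charge for the given billed duration per invocation."""
    return (billed_ms / 1000) * memory_gb * PRICE_PER_GB_SECOND * invocations

old_cost = duration_cost(100)  # previously rounded up to the next 100 ms
new_cost = duration_cost(30)   # now billed per millisecond of actual runtime
print(f"old: ${old_cost:.2f}, new: ${new_cost:.2f}, "
      f"savings: {1 - new_cost / old_cost:.0%}")   # -> savings: 70%
```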

Learn more about how to cut AWS Lambda costs in this article about 6 AWS Lambda cost optimization strategies that work.

We’re not done with Lambda just yet. 

To simplify packaging and deployment of Lambda functions, you can now ship them as container images, which makes it much easier to package dependencies with your function and to use the container tools you’re already familiar with.

This does not change how Lambdas work. They’re still event-driven; it’s just that in addition to the zip archives you used to work with, you can now package Lambda functions as Docker v2 or OCI container images, and these can be up to 10 gigabytes in size.
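
As a minimal sketch of what registering a container-image function could look like with boto3 (the image URI, role ARN, and function name are placeholders, and the image is assumed to have been built and pushed to ECR already):

```python
import boto3

lambda_client = boto3.client("lambda")

# Register a Lambda function from a container image instead of a zip archive.
lambda_client.create_function(
    FunctionName="my-container-function",   # hypothetical name
    PackageType="Image",                    # tells Lambda the code is a container image
    Code={"ImageUri": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest"},
    Role="arn:aws:iam::123456789012:role/my-lambda-execution-role",  # placeholder role
    MemorySize=1024,
    Timeout=30,
)
```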


One of our personal favorites from late last year, a little bit before re:Invent, is the ability to synchronously execute Step Functions Express Workflows.

Serverless builders often find themselves in a situation where they have multiple microservices that need some form of orchestration. For example, you might be taking data from one microservice, passing it on to another, maybe making a decision based on its output, and executing some more functions along the way. You can orchestrate all of this really easily with Step Functions, but if you wanted to hide all of that processing behind a single RESTful API call, you either had to set up a Lambda function to wait for the completion of the workflow, or you had to build a more complex polling client to handle the asynchronous nature of workflow executions.

Now with synchronous execution support, you can trigger a Step Functions Express workflow directly from API Gateway and let API Gateway wait until the workflow is complete. All of this is possible without having to build another layer of indirection and then poll for something to happen.
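
Behind that API Gateway integration sits the new synchronous execution API. Here is a minimal sketch of calling it directly with boto3 (the state machine ARN and input are placeholders):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Run an Express workflow synchronously and get its result back in the same call.
response = sfn.start_sync_execution(
    stateMachineArn="arn:aws:states:eu-west-1:123456789012:stateMachine:order-pipeline",  # placeholder
    input=json.dumps({"orderId": "1234"}),
)

if response["status"] == "SUCCEEDED":
    result = json.loads(response["output"])   # the workflow's final output
else:
    print(response.get("error"), response.get("cause"))
```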

One more service that AWS released in preview was Aurora Serverless v2. This version aims to make relational databases even more scalable, first of all by being able to scale instantly from hundreds to hundreds of thousands of transactions in a fraction of a second, and by scaling in fine-grained increments. You no longer have to coarsely specify how many ACUs (Aurora Capacity Units) you need; instead, Aurora will find the right capacity for you.

You still get the full breadth of Aurora capabilities in Serverless v2, including Multi-AZ deployments and Global Database, and overall it’s estimated that you can save up to 90% in costs compared to the traditional approach of provisioning for peak load.
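
For reference, here is a sketch of what configuring that capacity range looks like with boto3. This reflects the API as it later became generally available, and the identifiers and credentials are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora cluster whose capacity scales in fine-grained ACU increments
# between the configured minimum and maximum, instead of coarse capacity steps.
rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-v2-cluster",   # placeholder
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",                         # placeholder
    MasterUserPassword="change-me-please",            # placeholder
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # Aurora Capacity Units (ACUs)
        "MaxCapacity": 16,
    },
)
# A db.serverless instance is then added to the cluster separately so it can
# actually serve traffic; that step is omitted here for brevity.
```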

Some honorary mentions from re:Invent include larger Lambda functions: you can now use up to 10 gigabytes of memory, with a maximum of 6 vCPUs. Also, AVX2 (Advanced Vector Extensions 2) is now supported on Lambda functions.

Recently, AWS also released AWS Proton, a fully managed application deployment service targeted mainly at container and serverless applications. With Proton, you can connect and coordinate all the different tools you need for infrastructure provisioning, code deployment, monitoring, and updates, and you can do that through reusable environment and service templates.

Additionally, AWS also announced EventBridge event replay and archiving support. This allows you to replay a specific set of events that have occurred in your system so that you can, for example, debug what happened when a problem struck.
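
A minimal sketch with boto3 of archiving events and then replaying a window of them; the event bus name, archive name, and times are placeholders:

```python
from datetime import datetime, timezone
import boto3

events = boto3.client("events")
bus_arn = "arn:aws:events:eu-west-1:123456789012:event-bus/default"  # placeholder

# Keep a 30-day archive of everything flowing through the bus.
events.create_archive(
    ArchiveName="orders-archive",
    EventSourceArn=bus_arn,
    RetentionDays=30,
)

# Later, replay the events from the time window in which the problem occurred.
events.start_replay(
    ReplayName="orders-replay-incident-42",
    EventSourceArn="arn:aws:events:eu-west-1:123456789012:archive/orders-archive",  # the archive ARN
    EventStartTime=datetime(2021, 5, 20, 10, 0, tzinfo=timezone.utc),
    EventEndTime=datetime(2021, 5, 20, 12, 0, tzinfo=timezone.utc),
    Destination={"Arn": bus_arn},
)
```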

And last but most certainly not least, S3 is now strongly consistent, which means that a read immediately after a write already returns the object as it should be after that write. This is a game-changer for all sorts of data-driven workloads that make heavy use of object storage, as you no longer need to write your code around eventual consistency.
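
A tiny sketch of what this means in practice (bucket and key are placeholders): a GET immediately after a PUT now returns the object you just wrote, every time.

```python
import boto3

s3 = boto3.client("s3")

# Write a new version of the object...
s3.put_object(Bucket="my-data-bucket", Key="reports/latest.json", Body=b'{"total": 42}')

# ...and read it straight back. With strong consistency this is guaranteed to
# return the bytes written above, with no need to retry or poll for the update.
body = s3.get_object(Bucket="my-data-bucket", Key="reports/latest.json")["Body"].read()
assert body == b'{"total": 42}'
```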

Now let’s move on to post-re:Invent AWS news and updates.



AWS New Features in 2021



S3 Object Lambda and S3 Object Lambda Access Point

The first one is S3 Object Lambda. With S3 Object Lambda, you can add code to S3 GET requests to process and even modify data as it’s returned to an application. The way it works is that you first create the Lambda function that you want to use to modify or analyze the data. You then attach it to a supporting S3 Access Point.

And you do that through what’s called an S3 Object Lambda Access Point.

So you will have an S3 Access Point, a Lambda function, and an Object Lambda Access Point that executes the Lambda function on top of the data retrieved from the supporting S3 Access Point. Every time someone gets an object through the new Object Lambda Access Point, your Lambda function is called and provided with the request information, the user’s identity, and the object content to work on. Your Lambda can access the original object content through a presigned URL, write a new version of that object using a new API called WriteGetObjectResponse, and the client then receives that modified content.

The maximum duration of an S3 Object Lambda function is 60 seconds, so keep your objects small enough to be processed within that timeframe. Potential use cases for Object Lambda include, for example, redacting personally identifiable information on the fly, augmenting data, converting data formats, compressing or decompressing files, implementing custom authorization, and resizing or watermarking images. And remember that AVX2 (Advanced Vector Extensions 2) is now supported on Lambda, so Lambdas can be made much more efficient at tasks like image manipulation.
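
To make that flow more concrete, here is a minimal sketch of an Object Lambda handler that redacts email addresses on the fly. The redaction regex and return shape are assumptions, while the event fields and the WriteGetObjectResponse call follow the S3 Object Lambda integration described above:

```python
import re
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # S3 Object Lambda passes a presigned URL to the original object, plus a
    # route/token pair used to send the transformed object back to the caller.
    ctx = event["getObjectContext"]
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Hypothetical transformation: redact anything that looks like an email address.
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", original)

    # Return the modified content; this is what the GET request actually receives.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redacted.encode("utf-8"),
    )
    return {"statusCode": 200}
```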



CloudFront

With Amazon CloudFront, you can securely deliver content like videos, images, data, and applications to your customers with low latency and high transfer speeds. For a while now, you’ve been able to use Lambda@Edge to run your code close to the customer to customize their content experience. But this has mostly been targeted at somewhat heavier compute use cases, especially when the objects you’re working with are not cached regionally.

But sometimes you don’t need all of that; you just need a quick way to manipulate the request with as low latency as possible. You perhaps don’t even care what’s in the response body, only about the headers and the request parameters. For this, AWS introduced CloudFront Functions. With CloudFront Functions, you can write lightweight JavaScript functions for high-scale, latency-sensitive customizations, and these run really, really close to the customer. You can customize requests and responses there to a certain extent, perform simpler authentication tasks, and generate HTTP responses.

If you look at the scale, for example, they’re designed to handle 10 million or more requests per second. But that comes at the cost of how much time a single function is allowed to take to execute.

So your function duration has to stay under one millisecond. A typical use case for a CloudFront Function could be cache key normalization: say cache keys come in with requests in different formats, but your backend or origin expects one specific format, then you can do that kind of formatting. You can do header manipulation and verification, for example checking that your security headers are present. You can do redirects and rewrites, or simple request authorization.
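
CloudFront Functions themselves are written in JavaScript; the sketch below uses boto3 to deploy a tiny function illustrating the cache key normalization idea (the function name and query parameter are assumptions):

```python
import boto3

cloudfront = boto3.client("cloudfront")

# The function body runs in CloudFront's lightweight JavaScript runtime. This
# example lower-cases a query parameter so differently-cased variants of the
# same request map to a single cache key.
function_code = b"""
function handler(event) {
    var request = event.request;
    if (request.querystring.category) {
        request.querystring.category.value = request.querystring.category.value.toLowerCase();
    }
    return request;
}
"""

cloudfront.create_function(
    Name="normalize-cache-key",   # placeholder
    FunctionConfig={"Comment": "Cache key normalization", "Runtime": "cloudfront-js-1.0"},
    FunctionCode=function_code,
)
```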

So keep in mind that you can’t access the network or the file system from a CloudFront Function, and you really only have one millisecond to work with, so stay lightweight when running at the edge.



EventBridge

AWS EventBridge is something that allows you to integrate different event-driven systems in a simple, loosely coupled manner. It can now also propagate X-Ray trace context to make your event pipelines easier to understand and debug.

For example, if you need to trace transactions through a very distributed microservices architecture, or understand latency in different parts of your architecture, then X-Ray traces can prove to be really valuable. 

EventBridge has been growing a lot in features. Previously we mentioned that events can now be archived and replayed, and event pipelines are now more observable through X-Ray traces. But there is now also support for sending events directly to HTTP APIs, whether your own internal APIs or external ones like Slack, Zendesk, or PagerDuty, and to do that without writing any code. It really takes away some of the typical heavy lifting, for example authentication, retrying, and working with downstream rate limits, things that aren’t particularly interesting to reinvent every time.
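
A sketch of wiring events to an external HTTP API with boto3; the endpoint, API key, and names are placeholders. The resulting API destination can then be attached as the target of an ordinary EventBridge rule:

```python
import boto3

events = boto3.client("events")

# A connection stores the authentication details EventBridge uses downstream.
connection = events.create_connection(
    Name="pagerduty-connection",   # placeholder
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {"ApiKeyName": "X-Api-Key", "ApiKeyValue": "secret-value"}
    },
)

# The API destination is the HTTP endpoint events are delivered to; EventBridge
# handles retries and respects the configured rate limit for you.
events.create_api_destination(
    Name="pagerduty-events",   # placeholder
    ConnectionArn=connection["ConnectionArn"],
    InvocationEndpoint="https://events.pagerduty.com/v2/enqueue",
    HttpMethod="POST",
    InvocationRateLimitPerSecond=10,
)
```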



Other honorable mentions

EventBridge: Cross-Region event bus targets are also supported now, so you can collect all of your events in a central region.

Lambda management actions are now visible through last accessed information in IAM. You can go to the console, have a look at which Lambda management actions were executed by which roles, and then tighten your IAM policies and permissions.
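
A quick sketch of pulling that same action-level last accessed data with boto3 (the role ARN is a placeholder; the report is generated asynchronously, hence the polling loop):

```python
import time
import boto3

iam = boto3.client("iam")

# Ask IAM to generate an action-level "last accessed" report for a role.
job = iam.generate_service_last_accessed_details(
    Arn="arn:aws:iam::123456789012:role/deployment-role",   # placeholder
    Granularity="ACTION_LEVEL",
)

# The report is built asynchronously, so poll until it is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(1)

# Print when each tracked Lambda management action was last used by this role.
for service in report["ServicesLastAccessed"]:
    if service["ServiceNamespace"] == "lambda":
        for action in service.get("TrackedActionsLastAccessed", []):
            print(action["ActionName"], action.get("LastAccessedTime"))
```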

Databases: integration from PostgreSQL to Lambda. Previously, you could use Lambda to access PostgreSQL, but now you can also do it the other way around: you can invoke Lambda functions from stored procedures or user-defined functions and see what you can come up with.



Education in AWS

Finally, there is a new three-day intermediate instructor-led course available on AWS training on developing serverless solutions on AWS. And there are, of course, many other courses available in AWS training.

This article was written based on our co-hosted webinar with AWS on 25 May 2021.