This post covers technical details such as configuration options and limitations, and discusses how to apply that knowledge when designing serverless systems around AWS Lambda. By the end, you should have a better understanding of the main considerations when designing with Lambda.
Jump to:
AWS Lambda Function memory
AWS Lambda Invocation
Failure of AWS Lambda and retry behavior
AWS Lambda versions & aliases
AWS Lambda VPC
AWS Lambda security
Scaling and concurrency with AWS Lambda
AWS Lambda cold starts
Types of AWS Lambda errors
Handling errors in AWS Lambda
AWS Lambda coupling
Batching AWS Lambda
Monitoring and observability of AWS Lambda
Additional AWS Lambda tips
Summary
AWS Lambda is the first thing that comes to mind when you hear the term “serverless”, and it’s no surprise: the technology revolutionized our industry and brought with it a whole new set of solutions.
AWS Lambda was my first Function as a Service (FaaS), and I was skeptical at first. A service that requires no servers to manage, auto-scales, is fault tolerant, and is billed per use? It sounded like a dream.
With great power, however, comes great responsibility. Serverless design requires knowledge of many different services and how they interact, and there are plenty of pitfalls to navigate. Still, they are far outweighed by what serverless offers. Here are some things to keep in mind when designing with AWS Lambda to keep the dream from turning into a nightmare.
Technical considerations
Function memory
The memory setting of your Lambda function determines both the compute power it receives and the unit of billing. There are 46 options available, in 64 MB increments, ranging from the smallest at 128 MB to the largest at 3,008 MB.
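As a quick sanity check, the number of settings follows directly from the range and the 64 MB increment AWS documents:

```python
# Enumerate the Lambda memory settings, assuming the documented range
# of 128 MB to 3,008 MB in 64 MB increments.
MIN_MB, MAX_MB, STEP_MB = 128, 3008, 64

options = list(range(MIN_MB, MAX_MB + STEP_MB, STEP_MB))
print(len(options))              # 46 distinct memory settings
print(options[0], options[-1])   # 128 3008
```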
If you don’t allocate enough memory, your function may take longer to run, and could even exceed the 15-minute execution limit. If you allocate too much, your function may never use the extra power and simply cost you more.
It is crucial to find the sweet spot for your function. AWS states that at 1,792 MB a function receives the equivalent of one vCPU: one thread of an Intel Xeon or AMD EPYC core. That is about all AWS documents about the relationship between the memory setting and CPU power.
Experiments by the community suggest that a second CPU core becomes available above 1,792 MB, although there is no way to control how those cores are used.
Cheaper is not always better. Choosing a higher memory option upfront can reduce execution time, letting the function do the same amount of work in a shorter time frame. By fine-tuning the memory setting and finding the optimal point, you can sometimes make a function run faster and cost less than it would at a lower setting; other times, you will simply end up paying more than you would with the lower option.
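The trade-off is easy to see in the billing math. The sketch below compares two hypothetical memory settings; the per-GB-second rate is an illustrative figure (check current AWS pricing), and the durations are assumed, not measured:

```python
# Sketch: compare per-invocation cost at two memory settings.
# The price is illustrative; verify against current AWS Lambda pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: int) -> float:
    """Cost of one invocation in USD: GB allocated x seconds x rate."""
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND

# A CPU-bound function that takes 1,000 ms at 1,024 MB but only
# 400 ms at 2,048 MB is cheaper at the *higher* memory setting:
low_mem = invocation_cost(1024, 1000)    # 1.0 GB-second
high_mem = invocation_cost(2048, 400)    # 0.8 GB-seconds
print(high_mem < low_mem)  # True
```

If doubling memory had only shaved the duration to, say, 600 ms, the higher setting would cost more, which is why measuring is the only way to find the sweet spot.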
The bottom line is that memory and CPU should not be high on your list of design considerations. AWS Lambda, like other serverless technologies, is meant to scale horizontally: it is easier to break a problem into smaller pieces and process them in parallel than it is with many vertically scaled applications. Design the function first, then fine-tune the memory setting as necessary.
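The fan-out idea can be sketched as follows. Locally, a thread pool stands in for what would be one Lambda invocation per chunk (for example, fed via an SQS queue or asynchronous invokes); the `chunk` and `process` helpers are hypothetical placeholders for your own partitioning and per-invocation work:

```python
# Sketch of horizontal scaling: split a job into chunks and process
# each chunk in parallel. A thread pool stands in for per-chunk
# Lambda invocations.
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    """Split items into lists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def process(batch):
    # Placeholder for the work one invocation would do.
    return sum(batch)

records = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process, chunk(records, 25)))

print(sum(partials))  # 4950, the same result as processing serially
```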
Invocation
AWS Lambda offers three invocation types and two invocation models, which means there are three ways to send data to the function.
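The invocation type is chosen per call. A minimal sketch, using a hypothetical helper to build the arguments for boto3's `invoke` call; the three `InvocationType` values are the ones the Lambda API accepts:

```python
# Hypothetical helper that builds arguments for boto3's
# lambda_client.invoke(). The Lambda API accepts three invocation
# types: "RequestResponse" (synchronous), "Event" (asynchronous),
# and "DryRun" (validates permissions without running the function).
import json

def build_invoke_args(function_name: str, payload: dict,
                      invocation_type: str = "RequestResponse") -> dict:
    if invocation_type not in ("RequestResponse", "Event", "DryRun"):
        raise ValueError(f"unknown invocation type: {invocation_type}")
    return {
        "FunctionName": function_name,
        "InvocationType": invocation_type,
        "Payload": json.dumps(payload).encode(),
    }

# Usage with boto3 (not executed here):
#   client = boto3.client("lambda")
#   client.invoke(**build_invoke_args("my-func", {"id": 1}, "Event"))
```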