Introduction

Cloud-native architectures are built on principles of agility, elasticity, and automation. Within this paradigm, event-driven systems are essential, enabling infrastructure and applications to react automatically to changes in state, such as a file upload, an API call, or a database update. In the AWS ecosystem, these patterns are enabled by services like Amazon S3, Amazon EventBridge, Amazon DynamoDB Streams, and AWS Lambda.

AWS Lambda, the core of AWS’s serverless platform, lets developers run code without provisioning or managing servers. It integrates seamlessly with many AWS services and scales automatically with demand. While invoking one Lambda from another is relatively common, dynamically creating a new Lambda function from within another is far less typical, and for good reason: it adds complexity. Used in the right context, however, it unlocks powerful patterns for self-provisioning infrastructure and event-responsive automation.

This post explores how to dynamically create Lambda functions from within another Lambda function in response to real-time events. You’ll learn how this pattern can support tenant provisioning, on-demand developer environments, or custom event workflows, along with practical implementation tips, IAM permissions, logging strategies, and deployment guidance.

We’ll explore:

  • How dynamic Lambda creation works in practice
  • Real-world use cases where it delivers unique advantages
  • The business benefits of automating serverless infrastructure in response to events

What we will build


This approach isn’t for every workload, but it can be incredibly powerful in the right context. Here’s when it works well:

When it could work:

  • Functions are short-lived or ephemeral: Ideal for tasks that spin up, execute quickly, and disappear.
  • You need rapid, isolated provisioning: Useful in multi-tenant environments, per-user workflows, or processing individual data streams.
  • You want minimal operational overhead: No need to manage infrastructure or long-running services.

When to avoid:

  • You need full CI/CD with versioning: Managing deployments, rollbacks, and testing becomes tricky with dynamically created code.
  • Codebases are complex or large: These are better handled in pre-built, managed environments.
  • When audit controls can’t be met: Dynamically created resources can complicate governance, logging, and compliance.

Reference Implementation: You can find the complete working example of this pattern in the GitHub repository here.

Prerequisites

To implement dynamic Lambda creation, you’ll need:

  • An AWS account with appropriate permissions
  • A pre-created IAM role for the source Lambda function
  • A pre-created IAM role for the target Lambda function (covered in the IAM section below)

IAM Role and Permission Requirements

These permissions are defined using AWS Identity and Access Management (IAM). Assigning them to the source Lambda’s execution role ensures it can fully provision, configure, and validate the target Lambda at runtime. Below is a permission summary:

  • lambda:CreateFunction – arn:aws:lambda:<aws-region>:<aws-account-id>:function:*
  • iam:PassRole – arn:aws:iam::<aws-account-id>:role/service-role/<your-execution-role-name>
  • logs:CreateLogGroup – arn:aws:logs:<aws-region>:<aws-account-id>:*
  • logs:CreateLogStream – arn:aws:logs:<aws-region>:<aws-account-id>:*
  • logs:PutLogEvents – arn:aws:logs:<aws-region>:<aws-account-id>:*
  • lambda:InvokeFunction – arn:aws:lambda:<aws-region>:<aws-account-id>:function:*
  • lambda:AddPermission – arn:aws:lambda:<aws-region>:<aws-account-id>:function:*
  • lambda:GetFunction – *

Here is a sample policy with the permissions above; you can also download it here.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaActionsAndLogging",
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:InvokeFunction",
        "lambda:AddPermission",
        "lambda:GetFunction",
        "iam:PassRole",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:lambda:<aws-region>:<aws-account-id>:function:*",
        "arn:aws:iam::<aws-account-id>:role/service-role/lambda-creates-lambda-role-tyqb7j6i",
        "arn:aws:logs:us-east-1:<aws-account-id>:*"
      ]
    },
    {
      "Sid": "AllowGetFunctionOnWildcard",
      "Effect": "Allow",
      "Action": "lambda:GetFunction",
      "Resource": "*"
    }
  ]
}

In addition, you must pre-define the IAM role that the new (target) Lambda function will use. This role must have the correct permissions to support whatever tasks the function is intended to perform after deployment. That permission set will, of course, depend on your use case and on the actions your target Lambda takes when it executes after creation.
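Whatever task-specific permissions you attach, the target role needs a trust policy that lets the Lambda service assume it. Here is one way to pre-create it with boto3 (a minimal sketch; the role name is a placeholder, and you could equally create the role in the console or with infrastructure as code):

import json

import boto3

iam_client = boto3.client('iam')

# Trust policy so the Lambda service can assume the target role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam_client.create_role(
    RoleName='<your-target-lambda-execution-role-name>',  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Basic CloudWatch Logs access so the target function can write its own logs;
# add further policies for whatever the target function actually does.
iam_client.attach_role_policy(
    RoleName='<your-target-lambda-execution-role-name>',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
)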

Region Awareness

The region variable is set explicitly in the source Lambda and used throughout the code. Hardcoding this variable may seem redundant, but it ensures consistency across Lambda and S3 clients, especially in multi-region architectures. Omitting a region setting can cause boto3 to use implicit values from your environment or credentials, resulting in unpredictable outcomes.

region = "<your-desired-aws-region>" (Line9)
lambda_client = boto3.client('lambda', region_name=region) (Line15)

Choosing Inline f-Strings Over External S3 Templates

In this implementation, the source code for the target Lambda function is defined as a multi-line Python f-string, wrapped with textwrap.dedent to remove any unintended indentation. This string is embedded directly within the parent Lambda function and includes runtime variables like region and template_name:

lambda_code = textwrap.dedent(f"""\
import json
import boto3

region = '{region}'
template_name = '{template_name}'

lambda_client = boto3.client('lambda', region_name=region)

def lambda_handler(event, context):
    print(json.dumps(event))
    parsed_event = json.loads(json.dumps(event))
    user_id = parsed_event.get('user_id', 'unknown')
    print(f"Processed user ID: {{user_id}}")
    return {{
        "status": "ok",
        "user_id": user_id
    }}
""")

Why use this approach?

  • Speed and Simplicity
    • This eliminates the need to host code templates in S3 or maintain a separate repo to define short-lived, self-deploying functions. This is especially useful in ephemeral, automation-heavy systems where agility trumps maintainability.
  • Dynamic Variable Injection
    • f-strings allow easy injection of runtime variables into the target function, such as region, template name, or user data, without relying on templating engines or pre-processing.
  • No Dependency on S3 Availability or Permissions
    • By packaging the code on-the-fly using Python’s zipfile module and writing it to Lambda’s /tmp directory, the solution avoids additional network calls or IAM permissions that would be required to pull from S3.
  • Controlled and Secure
    • Since all code creation is confined within the Lambda runtime environment, there’s no concern about public S3 buckets or code tampering between upload and function creation.
  • Use Case Fit
    • While this method is not ideal for large or complex functions, it’s highly effective for lightweight, event-driven workflows where target functions are purpose-built, short-lived, or intended to be overwritten dynamically.

This pattern works well for DevOps engineers, platform teams, and automation architects building responsive infrastructure that provisions itself based on live input, without relying on persistent artifacts or manual CI/CD steps.

Use Case Examples

Event-driven Lambda creation unlocks powerful automation capabilities across a range of practical scenarios. Here are three real-world use cases where dynamically generating Lambda functions with another Lambda proves invaluable:

  • Tenant Provisioning in Multi-Tenant SaaS Applications
    • Description: In a multi-tenant SaaS architecture, each customer (tenant) often requires isolated resources to ensure security, compliance, and performance. Using an event-triggered Lambda to create new functions for each tenant, the system can automatically provision isolated compute environments as new customers are onboarded.
    • Business benefit: This pattern allows for rapid, zero-touch tenant onboarding while maintaining strict isolation. It also supports scalable growth without manual intervention or deployment bottlenecks.
  • Dynamic Data Stream Processing
    • Description: Organizations ingesting large volumes of event or telemetry data may need to transform or enrich that data differently depending on context, such as data type, source, or client. A parent Lambda function can respond to events on Amazon Kinesis or Amazon EventBridge by creating purpose-built transformation Lambdas tailored to each scenario.
    • Business benefit: This enables fine-grained, event-specific processing pipelines that are both responsive and cost-efficient, since functions are only created when needed and can be destroyed after use.
  • On-Demand Developer Sandbox Environments
    • Description: When developers need to test specific logic changes without impacting shared environments, a controlling Lambda function can generate isolated, temporary clones of production Lambdas. These clones are uniquely named, assigned test IAM roles, and spun down after a time-to-live (TTL) period.
    • Business benefit: This approach accelerates development cycles by enabling parallel, conflict-free testing environments without the overhead of managing dedicated dev stacks or CI/CD pipelines for ephemeral testing needs.
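To make the first use case concrete, here is a minimal sketch of how a parent Lambda might derive an isolated, per-tenant function name from an onboarding event before running the creation logic shown later in this post (the event fields and naming scheme are hypothetical):

def handle_onboarding_event(event):
    # Hypothetical onboarding event shape: {"tenant_id": "acme", "plan": "standard"}
    tenant_id = event.get("tenant_id", "unknown")

    # Each tenant gets its own uniquely named, isolated function.
    target_function_name = f"tenant-{tenant_id}-processor"

    # The actual creation call follows the create_lambda_function()
    # pattern walked through later in this post.
    return target_function_name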

Error Handling + Logging

Right now, the demo assumes everything goes smoothly, but that’s rarely the case in the real world. When creating infrastructure on the fly, especially with Lambda, it’s vital to build for the moments when things don’t work.

Handle Errors Gracefully

When calling create_function, wrap it in a try/except block to catch issues like missing permissions, invalid parameters, or naming conflicts. These errors happen, and failing gracefully can mean the difference between a smooth recovery and a broken workflow.

Here’s a simple example:

import logging
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

try:
    response = lambda_client.create_function(**params)
    logger.info(f"Created function: {params['FunctionName']}")
except ClientError as e:
    logger.error(f"Failed to create function {params['FunctionName']}: {e}")

Log What Matters

Instead of print(), use Python’s logging module, which is built for this. These logs become your breadcrumbs when debugging issues in CloudWatch. At a minimum, log the following (a trimming sketch follows the list):

  • The event input (trimmed if it’s too big)
  • The function name created
  • Any exception messages or stack traces
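Here is a minimal sketch of trimming a large event before logging it (the 2,000-character cap is an arbitrary choice for illustration, not a Lambda limit):

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

MAX_EVENT_CHARS = 2000  # arbitrary cap to keep CloudWatch log entries readable

def log_event(event):
    # Serialize the incoming event and truncate it if it is very large.
    serialized = json.dumps(event, default=str)
    if len(serialized) > MAX_EVENT_CHARS:
        serialized = serialized[:MAX_EVENT_CHARS] + "...(truncated)"
    logger.info("Incoming event: %s", serialized)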

Plan for Failures

If the function creation fails, don’t just drop the error. Route the failed event to a dead-letter queue (DLQ), such as an Amazon SQS queue, for later inspection or retries. If the source Lambda is triggered by EventBridge, you can also configure a retry policy and DLQ on the rule’s target to automate the retry logic. This adds resilience and allows your automation to recover gracefully, without requiring manual intervention.
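As a minimal sketch (the queue URL is a hypothetical placeholder, and sqs:SendMessage would need to be added to the source role’s policy), the source Lambda could park a failed creation request in SQS like this:

import json

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client('lambda')
sqs_client = boto3.client('sqs')

FAILED_EVENTS_QUEUE_URL = "<your-failed-events-queue-url>"  # hypothetical placeholder

def create_with_fallback(params, original_event):
    try:
        return lambda_client.create_function(**params)
    except ClientError as error:
        # Park the failed request so it can be inspected or retried later.
        sqs_client.send_message(
            QueueUrl=FAILED_EVENTS_QUEUE_URL,
            MessageBody=json.dumps({
                "error": str(error),
                "function_name": params.get("FunctionName"),
                "original_event": original_event,
            }),
        )
        raise  # surface the failure so the invocation is still recorded as an error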

In short: Don’t just hope it works. Expect failure, log intelligently, and build with recovery in mind.

Function Code Explained

For this example, we will use Python 3.11. The same approach works with whatever language you choose to use with Lambda, and the IAM permission set shown in the prerequisites is required regardless of the application language. For this code to run, you will need the following packages imported into your source function.

  • boto3 – The official AWS SDK for Python, pre-installed in all Python Lambda runtimes. It lets you interact with AWS services like S3, DynamoDB, and Lambda.
  • json – A standard Python library, automatically included in all Python runtimes.
  • zipfile – Part of Python’s standard library and available by default in Lambda’s Python runtimes.
  • textwrap – Also part of Python’s standard library; used here to dedent the multi-line code string.

Setup Lambda Variables

  • region = "<your-desired-aws-region>"
  • lambdas_iam_role = "arn:aws:iam::<aws-account-id>:role/service-role/<your-target-lambda-execution-role-name>"
    • This is the IAM role that you’d like to attach to the target Lambda that gets created. It can be the same as or different from the source execution role, but will likely differ based on your needs.
  • myvar = "myvariable"
    • This can be any variable you need to pass to the target Lambda function that will be created in response to the event. You can add more as needed and name them whatever you like.
  • template_name = "my-target-template"
    • This is the name you want the target Lambda created with, and it is the name that will show in the Lambda console.
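Putting those together, the top of the source function might look like this (all values shown are placeholders to replace with your own):

import json
import textwrap
import zipfile

import boto3

region = "<your-desired-aws-region>"
lambdas_iam_role = "arn:aws:iam::<aws-account-id>:role/service-role/<your-target-lambda-execution-role-name>"
myvar = "myvariable"
template_name = "my-target-template"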

Setup boto3 Clients

You’ll need two clients from the boto3 SDK for this to execute correctly:

  • lambda_client = boto3.client('lambda', region_name=region)
    • Notice that it initializes the boto3 client for Lambda using the region variable set above.
  • s3_client = boto3.client('s3', region_name=region)
    • Notice that it initializes the boto3 client for S3 using the region variable set above.

The Lambda Function

This is the function that will be called from the Lambda handler to do the work we want done. Specifically, it will:

  1. Define the target Lambda code, including the Lambda handler.
  2. Zip the Lambda function code and save it locally to the source Lambda environment in the /tmp directory.
  3. Call the Lambda client we created and create the Lambda function from the zipped code.
  4. Attach the target execution role when the target Lambda is created.
  5. Define the necessary Lambda configuration for the target function.

Let’s review each section:

def create_lambda_function():
The function definition
lambda_code = textwrap.dedent(f"""\
import json
import boto3

region = '{region}'
template_name = '{template_name}'
lambda_client = boto3.client('lambda', region_name=region)

def lambda_handler(event, context):
    print(json.dumps(event))
    parsed_event = json.loads(json.dumps(event))
    user_id = parsed_event.get('user_id', 'unknown')
    print(f"Processed user ID: {{user_id}}")
    return {{
        "status": "ok",
        "user_id": user_id
    }}
""")
This variable, named lambda_code, contains an f-string that holds the code we want deployed to our target Lambda function in response to the event that triggers its creation. It is wrapped with textwrap.dedent() to remove leading whitespace from the multi-line string, ensuring the code compiles without indentation errors.

Note that variables defined in the source Lambda function can be passed into the target function code dynamically by surrounding them with braces in the f-string. As with any Lambda function, ensure your f-string defines a lambda handler; otherwise the target function will be created but will fail when invoked.
with zipfile.ZipFile('/tmp/lambda_code.zip', 'w') as zfs:
    zip_info = zipfile.ZipInfo('lambda_function.py')
    # Write the Lambda code into the ZIP archive under the file name above
    zfs.writestr(zip_info, lambda_code)
This uses zipfile to package the code defined in the f-string into a named ZIP file in the /tmp directory of the source Lambda function. The /tmp directory is the only writable directory in the Lambda execution environment.
lambda_client.create_function(
       FunctionName=template_name,
       Runtime='python3.11',
       Role=lambdas_iam_role,
       Handler='lambda_function.lambda_handler',
       Code={
            'ZipFile': open('/tmp/lambda_code.zip', 'rb').read()
       },
       Environment={
           'Variables': {
               'mylambdavariable': myvar,
           }
       },
       Architectures=[
           'arm64',
       ],
       Description=f'Lambda function',
       Timeout=30,
       MemorySize=256
   )
This calls the boto3 create_function API using the Lambda client we created, passing the ZIP file as the function code. The boto3 documentation, which goes deeper on every parameter in this call, can be found here.

 

  • FunctionName
    • The name for the target function, taken from the template_name variable we defined at the top of the source function.
  • Runtime
    • The Python runtime you want the target function to use in Lambda.
  • Role
    • The target function execution role you created in the prerequisites and loaded into the lambdas_iam_role variable.
  • Handler
    • Defines which function in your target source code contains the Lambda handler required by the Lambda service. Note that the name after the dot in the lambda_function.lambda_handler value matches the handler function name in our target code f-string, and the name before the dot matches the file name we gave the ZIP entry (lambda_function.py).
  • Code
    • Opens the ZIP file we created in the /tmp directory as a binary file and reads its contents as the target function’s deployment package.
  • Environment
    • This is where we can define environment variables for the target Lambda. Note that we’re using the myvar variable we created at the top of the source Lambda.
  • Architectures
    • Defines the instruction set architecture for the target function; here we use arm64.
  • Description
    • This description shows in the Lambda console next to your target Lambda’s name. It doesn’t have to be an f-string if you aren’t passing variables from the source environment dynamically.
  • Timeout
    • Sets the target Lambda function’s timeout, in seconds.
  • MemorySize
    • Sets the target Lambda function’s memory allocation, in MB.
def lambda_handler(event, context):
   parsed_event = json.loads(json.dumps(event))
   print(parsed_event)
   create_lambda_function()
If you aren’t familiar with the Lambda service, every function requires a handler like the one shown. This is the handler for the source function; it simply prints the incoming event from the test we will run and calls the create_lambda_function() function above.

Deploy the Target Function to the Lambda Service

For simplicity, we will deploy the function from the CLI. You can log in to the Lambda console to see the resulting deployment. Prerequisites for this stage are having AWS credentials configured locally for the account and the default region to which you’d like to deploy the Lambda. For details on configuring AWS credentials locally (if you haven’t done so), visit this link.

  1. Download the function file (lambda_from_lambda.py) here.
  2. Create the source execution role based on the policy above.
  3. Be sure to replace <account-id>, <your role name>, and any placeholders in the zip path with actual values from your environment.
  4. Run this command from your CLI. It will create a file called function.zip in the directory where you run it; for simplicity, run it in the same directory as lambda_from_lambda.py.
    zip -j ./function.zip /<path to lambda .py>/lambda_from_lambda.py
  5. Then you can run this CLI command to deploy. Make sure your reference to the zip file created above is correct.
    aws lambda create-function \
        --function-name create-lambdas-lambda \
        --runtime python3.11 \
        --role arn:aws:iam::<account-id>:role/service-role/<your role name> \
        --handler lambda_from_lambda.lambda_handler \
        --zip-file fileb://function.zip \
        --timeout 30 \
        --memory-size 256 \
        --architectures arm64
    

This will deploy the Lambda function to the Lambda service in AWS. Log in to the Lambda console and you should see a new function in the region you set as the default in your AWS credentials configuration. The Lambda function will be called create-lambdas-lambda if you didn’t change it in the example code.
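If you later change lambda_from_lambda.py, you don’t need to delete and recreate the function. A quick sketch of redeploying the same code with the standard update-function-code command:

zip -j ./function.zip /<path to lambda .py>/lambda_from_lambda.py
aws lambda update-function-code \
    --function-name create-lambdas-lambda \
    --zip-file fileb://function.zip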

Let’s Test the Secondary Lambda Creation

This code is deliberately simple and doesn’t have much utility beyond creating the secondary Lambda in response to an event. But you can begin to see how you could extend the code in either Lambda based on your use case, with this simple pattern of creating a Lambda from a Lambda in response to an event as the foundation. To test, we’ll run a test in the Lambda console and confirm that the secondary Lambda gets created in response to our test event.

  6. In the Lambda console, click the name of the create-lambdas-lambda function to open it.
  7. Click the Test tab from the menu at the top of the editor.
    You can accept all the defaults on this page; the only change is to give the test event a name in the “Event name” field. In this example, the Lambda does not process any specific content from the event payload; it simply triggers the creation of the secondary Lambda.
    However, this pattern is highly extensible. You can imagine passing additional fields through the event, such as configuration settings or user-specific data, which the primary Lambda could use to dynamically customize the behavior, logic, or environment variables of the generated function. This makes the approach well-suited for building personalized automation or per-tenant infrastructure at runtime.
  8. Once you’ve named the test event, click the orange Test button in the top right. This fires the test and, on success, displays a green banner above the editor. A red banner will display error information in the detail view that you can troubleshoot. If the IAM permissions above were configured correctly, the Lambda should fire successfully.
  9. Return to the main Lambda console, and you should now see a second Lambda function called “dynamic-function” created by the firing of the test event in the first function.
  10. If you click the “dynamic-function-xxxx” function name, you should see the code from the f-string of the primary Lambda in the code editor. This secondary Lambda code doesn’t have much utility, since it merely prints the payload that was passed in from the test event, but it shows that creating a Lambda from a Lambda in response to an event worked!
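If you prefer the CLI to the console, you can trigger the same flow by invoking the source function directly. A sketch (with AWS CLI v2, the --cli-binary-format flag is needed so the inline JSON payload is read as-is):

aws lambda invoke \
    --function-name create-lambdas-lambda \
    --cli-binary-format raw-in-base64-out \
    --payload '{"user_id": "123"}' \
    response.json

The payload contents don’t matter for this demo; the source function simply prints the event and creates the secondary function.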

Security Best Practices

Security should be foundational, especially when dealing with dynamically created functions. Here are a few key practices to follow:

  • Use least privilege IAM roles: Assign the minimum permissions necessary to the parent and child Lambda functions. This reduces the blast radius if credentials are compromised.
  • Avoid injecting unvalidated content into f-strings: If you’re building code or commands dynamically, never insert untrusted input directly; it opens the door to remote code execution vulnerabilities (see the sketch after this list).
  • Encrypt environment variables: If your functions rely on secrets or sensitive configuration, store them securely, using AWS KMS or a secrets manager to encrypt environment variables.
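Expanding on the second point, here is a minimal validation sketch (the allowed pattern is an arbitrary example, not a complete defense) for any event-supplied value before it is interpolated into generated code or a function name:

import re

# Only allow short alphanumeric/hyphen identifiers from untrusted input.
SAFE_IDENTIFIER = re.compile(r"^[a-zA-Z0-9-]{1,40}$")

def safe_identifier(value: str) -> str:
    # Reject event input that could alter generated code or resource names.
    if not SAFE_IDENTIFIER.fullmatch(value):
        raise ValueError(f"Unsafe identifier rejected: {value!r}")
    return value

# Example usage: safe_identifier(event.get('user_id', ''))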

Threat Matrix: Risks of Dynamic Lambda Creation

While dynamic function creation enables agility and flexibility, it also introduces specific security and operational risks. Below is a simplified threat matrix to help assess potential vulnerabilities and plan mitigations:

 

  • Overprivileged IAM Roles
    • Description: Excessive permissions on Lambda execution roles can be exploited to access unauthorized resources.
    • Mitigation: Apply the principle of least privilege; regularly audit IAM policies.
  • Code Injection
    • Description: Dynamically generated code or environment variables may be vulnerable to injection attacks if input is not sanitized.
    • Mitigation: Validate all input rigorously; avoid unsanitized string interpolation (e.g., f-strings).
  • Untracked Code Deployment
    • Description: Dynamically created functions may bypass version control or CI/CD processes.
    • Mitigation: Log function creation events via CloudTrail; enforce tagging and naming standards.
  • Secrets Leakage
    • Description: Environment variables or code may inadvertently expose credentials or sensitive data.
    • Mitigation: Encrypt secrets using AWS KMS or a secrets manager; avoid hardcoding sensitive data.
  • Resource Sprawl
    • Description: Uncontrolled creation of functions can lead to cost overruns or hitting service limits.
    • Mitigation: Set Lambda quotas, monitor usage, and automate cleanup of unused functions.
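For the resource sprawl row in particular, here is a minimal cleanup sketch (the dynamic-function name prefix and 24-hour cutoff are arbitrary illustrations, and it assumes lambda:ListFunctions and lambda:DeleteFunction permissions beyond the policy shown earlier) that could run on a schedule to delete aged, dynamically created functions:

from datetime import datetime, timedelta, timezone

import boto3

lambda_client = boto3.client('lambda', region_name='<your-desired-aws-region>')

PREFIX = 'dynamic-function'    # naming convention used for generated functions
MAX_AGE = timedelta(hours=24)  # arbitrary retention window

def cleanup_dynamic_functions():
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    paginator = lambda_client.get_paginator('list_functions')
    for page in paginator.paginate():
        for fn in page['Functions']:
            # LastModified is an ISO 8601 string such as '2024-01-01T00:00:00.000+0000'.
            last_modified = datetime.strptime(fn['LastModified'], '%Y-%m-%dT%H:%M:%S.%f%z')
            if fn['FunctionName'].startswith(PREFIX) and last_modified < cutoff:
                lambda_client.delete_function(FunctionName=fn['FunctionName'])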

Conclusion

Dynamically creating AWS Lambda functions isn’t a widely adopted pattern, but when applied in the right context, it unlocks unique capabilities. Whether you’re enabling per-tenant compute, dynamically transforming event data, or deploying ephemeral developer environments, this approach provides on-demand scalability, automation-driven flexibility, and reduced operational overhead.

Next Steps

To explore this pattern further:

  • Adapt the sample code to your use case, whether tenant provisioning, dynamic event handling, or sandbox creation.
  • Use AWS Step Functions to orchestrate more complex workflows involving multiple dynamic Lambdas.
  • Apply input validation and secure coding practices to mitigate code injection or configuration risks.
  • Log and monitor function creation and execution to maintain observability and governance.

Have questions, feedback, or a use case to share? Leave a comment; we’d love to hear how you’re using dynamic serverless automation in your architecture.

 

Authors: David Ernst, Principal Architect; Jeff Carson, CTO at ClearScale