Lambda Practical: Deploying & Triggering
Learn by doing. In this walkthrough, you will create a serverless Python function that automatically triggers every time a file is uploaded to an S3 bucket.
🏗️ Phase 1: Creating Your First Function
- Log in to the AWS Management Console.
- Search for Lambda and click Create function.
- Select Author from scratch.
Step 1: Basic Information
- Function name: `process-s3-upload`
- Runtime: Python 3.12 (or latest)
- Architecture: x86_64
Step 2: Permissions
- Expand Change default execution role.
- Select Create a new role with basic Lambda permissions.
- This allows the function to upload logs to CloudWatch.
- Click Create function.
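After creation, the console opens the code editor with a default Python handler. The template AWS generates may vary slightly by runtime version, but it is similar to this sketch:

```python
import json

def lambda_handler(event, context):
    # Default console-generated handler: returns a simple
    # success response regardless of the incoming event.
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```

Leave this default code in place for now; we will verify it works in Phase 2.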
🧪 Phase 2: Testing the Logic
Before connecting triggers, verify that your code works using a mock event.
- In the Code tab, click the Test button.
- Select Create new event.
- Event name: `test-upload`
- Keep the default JSON and click Save.
- Click Test again.
- Verify the Execution results:
  - Status: Succeeded
  - Output: "Hello from Lambda!"
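For reference, the default "hello-world" test event template is just a placeholder JSON object along these lines (the exact keys the console offers may differ):

```json
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}
```

The default handler ignores this payload entirely, which is why any event works for this smoke test.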
🚀 Phase 3: Adding an S3 Trigger
Now, let's make the function reactive.
- In the Function overview section, click + Add trigger.
- Select S3 from the dropdown.
- Bucket: Select an existing bucket (or create one in a new tab).
- Event type: All object create events
- Check the recursive invocation acknowledgment and click Add.
The Function overview diagram should now show the S3 bucket connected as a trigger to your Lambda function.
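To make the function actually react to uploads, you can replace the default handler with one that reads the bucket and object key from the incoming S3 event. This is a minimal sketch based on the standard S3 event notification structure (`Records[].s3.bucket.name` and `Records[].s3.object.key`):

```python
import json
import urllib.parse

def lambda_handler(event, context):
    # S3 delivers one or more records per invocation; each record
    # names the bucket and the URL-encoded object key.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # print() output appears in the CloudWatch log stream (Phase 4)
        print(f"New upload: s3://{bucket}/{key}")
    return {'statusCode': 200, 'body': json.dumps('Processed')}
```

Note the `unquote_plus` call: S3 URL-encodes keys in event notifications, so a file named `my report.csv` arrives as `my+report.csv`.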
🕵️ Phase 4: Monitoring in CloudWatch
- Go to the Monitor tab of your function.
- Click View CloudWatch logs.
- Click on the latest Log stream.
- You will see every execution of your function, including the output of print() statements and any errors.
🧹 Phase 5: Cleanup
To avoid any potential charges:
- Delete the Lambda function.
- Delete the IAM role created for the function (found in the IAM dashboard).
- If you created a test S3 bucket, delete it.
> [!IMPORTANT]
> Why do we need a Role? A Lambda function cannot "do" anything outside itself unless it has a role attached. By default, it can't even write logs. Always ensure your function's role has the necessary policies (e.g., AmazonS3ReadOnlyAccess) before trying to interact with other AWS services.
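For example, a minimal inline policy granting read access to objects in a single bucket could look like this (the bucket name `my-upload-bucket` is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}
```

Scoping the `Resource` to one bucket like this is tighter than attaching a broad managed read-only policy, and is generally the better habit.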