https://github.com/ajardin/lambda-logs-archiving
:zap: Logs archiving from CloudWatch to S3 through a Lambda function.
- Host: GitHub
- URL: https://github.com/ajardin/lambda-logs-archiving
- Owner: ajardin
- License: mit
- Created: 2018-02-05T18:50:23.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-07-20T09:36:20.000Z (about 7 years ago)
- Last Synced: 2025-02-24T07:06:03.690Z (7 months ago)
- Topics: aws-lambda, cloudwatch-logs, golang, s3-bucket
- Language: Go
- Homepage:
- Size: 3.88 MB
- Stars: 3
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Logs archiving with a Lambda function
[Travis CI](https://travis-ci.org/ajardin/lambda-logs-archiving)
[Codacy](https://www.codacy.com/app/ajardin/lambda-logs-archiving?utm_source=github.com&utm_medium=referral&utm_content=ajardin/lambda-logs-archiving&utm_campaign=badger)
[MIT License](https://opensource.org/licenses/MIT)

## Overview
The idea behind this project is to make it easy to archive logs from CloudWatch into an S3 bucket by relying on AWS features.
It's designed to be used with a scheduled task running every day in order to retrieve the previous day's logs.

## Behavior
1. Retrieve flag values from either the command line or environment variables.
2. Identify which CloudWatch log streams must be downloaded.
3. Download all logs concurrently with multiple [goroutines](https://gobyexample.com/goroutines) (see the sketch after this list).
4. Create a ZIP archive with all these logs.
5. Upload the archive to an S3 bucket... That's all!
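As an illustration of step 3 (a minimal sketch, not the project's actual code), one goroutine can be started per log stream and collected with a `sync.WaitGroup`; the `downloadStream` helper and the stream names are placeholders:
```
package main

import (
	"fmt"
	"sync"
)

// downloadStream is a hypothetical stand-in for the real CloudWatch Logs call;
// here it only returns a placeholder payload.
func downloadStream(name string) string {
	return "events from " + name
}

func main() {
	streams := []string{"app/2018-02-04/instance-1", "app/2018-02-04/instance-2"}

	var wg sync.WaitGroup
	results := make([]string, len(streams))

	// One goroutine per log stream, each writing to its own slot in results.
	for i, name := range streams {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			results[i] = downloadStream(name)
		}(i, name)
	}
	wg.Wait()

	// Steps 4 and 5 (ZIP archive + S3 upload) would consume results here.
	for _, r := range results {
		fmt.Println(r)
	}
}
```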
## Usage
To use Go with a Lambda function, we need a Linux binary that we will compress into a ZIP archive.
```
# Build a binary that will run on Linux
GOOS=linux go build -o logs-archiving logs-archiving.go

# Put the binary into a ZIP archive
zip logs-archiving.zip logs-archiving
```
Once the archive has been generated, you have to upload it to AWS.
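One way to do the upload is through the AWS CLI; a minimal sketch, assuming the Lambda function already exists (the function name `logs-archiving` is only an illustration):
```
aws lambda update-function-code \
  --function-name logs-archiving \
  --zip-file fileb://logs-archiving.zip
```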
## Configuration
AWS credentials are automatically retrieved from the execution context.
There is no additional configuration required.

Two environment variables must be configured on the Lambda function, plus an optional third:
* `BUCKET_NAME`, the S3 bucket name where logs will be archived.
* `ENVIRONMENT_NAME`, the environment name from where logs have been generated.
* `TARGET_DATE` (optional), the day for which the logs must be archived.

These values can also be passed manually outside AWS by using:
```
go run logs-archiving.go -bucket XXXXX -environment XXXXX (-target XXXXX)
```

If you want to use the script locally, you have to replace `lambda.Start(LambdaHandler)` with `LambdaHandler()`.
This instruction is required by AWS, but it causes an infinite wait when the program is run outside a Lambda context.
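As a minimal sketch (not the repository's actual code) of how the flags from the usage example could fall back to the environment variables above, and of where the `lambda.Start(LambdaHandler)` / `LambdaHandler()` switch happens; the `flagOrEnv` helper is an assumption:
```
package main

import (
	"flag"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

// Flags mirror the command-line options from the usage example above.
var (
	bucketFlag      = flag.String("bucket", "", "S3 bucket name")
	environmentFlag = flag.String("environment", "", "environment name")
	targetFlag      = flag.String("target", "", "optional target date")
)

// flagOrEnv is a hypothetical helper: it prefers the command-line flag and
// falls back to the environment variable set on the Lambda function.
func flagOrEnv(flagValue, envName string) string {
	if flagValue != "" {
		return flagValue
	}
	return os.Getenv(envName)
}

// LambdaHandler stands in for the real handler, which downloads, zips and
// uploads the logs.
func LambdaHandler() {
	bucket := flagOrEnv(*bucketFlag, "BUCKET_NAME")
	environment := flagOrEnv(*environmentFlag, "ENVIRONMENT_NAME")
	target := flagOrEnv(*targetFlag, "TARGET_DATE")

	fmt.Printf("archiving %s logs for %s into %s\n", target, environment, bucket)
}

func main() {
	flag.Parse()

	// Inside AWS this call never returns; for a local run, comment it out
	// and call LambdaHandler() directly instead.
	lambda.Start(LambdaHandler)
}
```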
## Limitations
Because of the nature of Lambda (limited execution time and limited resources), it can be problematic to archive access
logs generated by a production infrastructure. A condition has been implemented in the process to avoid timeouts caused by
those files: it simply bypasses log stream names that contain the string `access`.

That's not the nicest solution, but it covers most of our use cases (Apache and Nginx).
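As an illustration of that condition (a sketch, not the exact code from the repository; the stream names are made up):
```
package main

import (
	"fmt"
	"strings"
)

func main() {
	streams := []string{"php/error.log", "nginx/access.log", "apache/access.log"}

	for _, name := range streams {
		// Access logs are skipped because they are too large to process
		// within the Lambda execution limits.
		if strings.Contains(name, "access") {
			continue
		}
		fmt.Println("archiving", name)
	}
}
```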