Shell Script for Backing Up Data to Amazon S3
s3cmd is a command-line tool for uploading, retrieving, and managing data in Amazon S3. Thanks to its capacity to store huge amounts of data at a nominal cost, Amazon S3 has become one of the most popular choices for storing data remotely.
Prerequisites
- Install s3cmd on the EC2 instance.
- Create an IAM user with S3 access.
- Configure s3cmd on EC2.
- Create an Amazon S3 bucket for the backup process.
- Write a basic Bash script, which can be modified further as required.
Step 1: Install the s3cmd command-line tool on EC2.
On CentOS/RHEL:
$ yum install s3cmd
On Ubuntu/Debian:
$ sudo apt-get install s3cmd
On SUSE Linux Enterprise Server 11:
$ zypper addrepo http://s3tools.org/repo/SLE_11/s3tools.repo
$ zypper install s3cmd
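Whichever distribution you are on, a quick sanity check is to print the installed version (the exact version reported will vary with your package repository):
$ s3cmd --version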
Step 2: Create an IAM user with S3 access.
Create a user in Amazon IAM with access to Amazon S3 and download its AWS Access Key ID and Secret Access Key.
- Log in to the AWS Management Console.
- Open the IAM management console.
- Click Users to create a new user for S3 access.
- Click the Add user button to create a new user.
- I am adding the user s3-backup for demo purposes with Programmatic access, then click Next: Permissions.
- On the next page, select "Attach existing policies directly" and grant the "AmazonS3FullAccess" policy to the new user. Granting "AmazonEC2FullAccess" is optional and useful for some other advanced tricks. Click Next: Review.
- At the end you will be shown an Access Key ID and Secret Access Key. Download them and store them securely; they will be used in the next steps. (If you prefer the command line, an equivalent AWS CLI approach is sketched below.)
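If the AWS CLI happens to be installed and configured with an administrator profile, the same user can also be created from the command line. This is only a sketch of that alternative; the user name s3-backup matches the demo above, and the access key pair printed by the last command is what you will feed to s3cmd:
$ aws iam create-user --user-name s3-backup
$ aws iam attach-user-policy --user-name s3-backup --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$ aws iam create-access-key --user-name s3-backup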
Step 3: Configure s3cmd on EC2.
After getting the AWS Access Key ID and Secret Access Key, run the command below to configure s3cmd and enter the keys when prompted.
$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes
New settings:
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Encryption password: xxxxxxxxxx
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
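With the configuration saved, you can re-check connectivity at any time by listing the buckets visible to the new user:
$ s3cmd ls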
Step 4: Create an Amazon S3 bucket for the backup process.
Create an Amazon S3 bucket with a suitable name by logging into the AWS Management Console. Here I am using "bucket_name" as the bucket name for demo purposes.
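If you prefer to stay on the command line, the bucket can also be created directly from the EC2 instance with s3cmd ("mb" stands for make bucket; S3 bucket names are globally unique, so replace bucket_name with your own):
$ s3cmd mb s3://bucket_name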
Step 5: Basic Bash script, which can be modified further as required.
#!/bin/bash
# Back up a local directory to Amazon S3 and mail the result to the admin.

Admin="yourusername@yourdomain.com"
timestamp=$(date +"%F %r")
copyloc="/home/vik/sourcecode/"
backlogfile="/var/log/s3bakstatus.log"
email_sub="Amazon s3 Backup Status"
email_msgs="Backup Successful"
email_msgf="Backup Failed"

# Sync the source directory to the S3 bucket at low CPU priority.
nice -n 19 /usr/bin/s3cmd sync -r "$copyloc" s3://bucket_name/

if [ $? -eq 0 ]; then
    echo "Backup Completed Successfully at $timestamp" >> "$backlogfile"
    echo "$email_msgs" | mail -s "$email_sub" -r "$(hostname)<server@yourserver.com>" "$Admin"
else
    echo "Backup Failed at $timestamp" >> "$backlogfile"
    echo "$email_msgf" | mail -s "$email_sub" -r "$(hostname)<server@yourserver.com>" "$Admin"
fi
In the above script, I am copying my source code from "/home/vik/sourcecode/" and syncing it to the S3 bucket named "bucket_name". You can back up any directory as per your requirements. To automate the backup process, a cron job can be used; a sample crontab entry is shown below. Stay tuned for some more exciting posts. Cheers!
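For example, assuming the script above is saved as /home/vik/scripts/s3backup.sh (that path is just a placeholder) and made executable with chmod +x, the crontab entry below would run it every night at 2:00 AM and discard its console output, since the script already writes to its own log file:
$ crontab -e
0 2 * * * /home/vik/scripts/s3backup.sh > /dev/null 2>&1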