Uploading to S3 in Bash

This article was originally published on my blog (affectionately referred to as blargh). The original blog no longer exists, as I've migrated everything to this wiki.

The original URL of this post was at https://tmont.com/blargh/2014/1/uploading-to-s3-in-bash. Hopefully that link redirects back to this page.

There are already a couple of ways to do this using a third-party library, but I didn't really feel like including and sourcing several hundred lines of code just to run a curl command. So here's how you can upload a file to S3 using the REST API.

This example uploads a gzipped tarball and stores it in the bucket under the file's basename; adjust the content-type accordingly if you're uploading something else. And obviously, use a real access key and secret.

bash
file=/path/to/file/to/upload.tar.gz
bucket=your-bucket
# the key the object will have in the bucket; here, just the file's basename
remoteFile=$(basename "${file}")
resource="/${bucket}/${remoteFile}"
contentType="application/x-compressed-tar"
dateValue=$(date -R)
# verb, (empty) MD5, content type, date and resource, separated by newlines
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=xxxxxxxxxxxxxxxxxxxx
s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# HMAC-SHA1 the string to sign with the secret, then base64-encode the result
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -X PUT -T "${file}" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  "https://${bucket}.s3.amazonaws.com/${remoteFile}"

As someone who isn't abundantly talented at writing shell scripts, I found the tricky part was discovering the -e option for echo, which makes it interpret character escapes (e.g. \n). It's annoyingly complicated to get an actual newline character into a string in bash.
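
If you'd rather not rely on echo -e, there are a couple of other standard idioms for getting a literal newline into a string; a quick sketch:

bash
# echo -e interprets backslash escapes; -n drops the trailing newline
echo -en "line1\nline2"

# printf interprets \n in its format string without needing any flags
printf 'line1\nline2'

# ANSI-C quoting ($'...') expands \n when the string is assigned
twoLines=$'line1\nline2'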

Anyway, this little snippet is suitable for running as a cron job or just as a one-off from the shell. Note that if you want to add other Amazon-specific headers (such as setting permissions), you'll need to add those to stringToSign as well, since they have to be part of the authorization signature.
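
For example, making the uploaded object publicly readable might look something like this (a sketch reusing the variables from the snippet above; with signature version 2, the canonicalized x-amz-* headers go between the date and the resource in the string to sign):

bash
# hypothetical example: grant public-read via the x-amz-acl header
amzHeaders="x-amz-acl:public-read"
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${amzHeaders}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -X PUT -T "${file}" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "x-amz-acl: public-read" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  "https://${bucket}.s3.amazonaws.com/${remoteFile}"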

Backup Script

The reason I needed to figure this out was that I wanted to run a backup script that uploaded stuff to an S3 bucket. I run this in a cron job once a week. It backs up a Git server, a MySQL database and some nginx configuration files. It's just a real-world example of how to upload to S3 from the shell.

bash
#!/bin/bash

# start from a clean staging directory under /tmp
cd /tmp
rm -rf backup
mkdir backup
cd backup

# dump each database to its own file, skipping the MySQL system databases
mkdir sql && cd sql
databases=$(echo 'show databases;' | mysql -u backup | tail -n +2 | grep -v _schema | grep -v mysql)
for database in $databases
do
    mysqldump -u backup --databases "$database" > "${database}.sql"
done

cd ..
# copy the nginx configuration
mkdir nginx && cd nginx
cp -R /etc/nginx/sites-enabled .
cp /etc/nginx/nginx.conf .

cd ..
# copy each bare git repository
mkdir git && cd git
repos=$(ls -1 /home/git | grep '\.git$')
for repo in $repos; do
    cp -R "/home/git/${repo}" .
done

cd ..
# tar up each directory and upload it to S3
date=$(date +%Y%m%d)
bucket=my-bucket
s3Key=xxxxxxxxxxxxxxxxxxxx
s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
for dir in git nginx sql; do
    file="${date}-${dir}.tar.gz"
    cd "$dir" && tar czf "$file" *
    resource="/${bucket}/${file}"
    contentType="application/x-compressed-tar"
    dateValue=$(date -R)
    stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
    signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
    curl -X PUT -T "${file}" \
        -H "Host: ${bucket}.s3.amazonaws.com" \
        -H "Date: ${dateValue}" \
        -H "Content-Type: ${contentType}" \
        -H "Authorization: AWS ${s3Key}:${signature}" \
        "https://${bucket}.s3.amazonaws.com/${file}"
    cd ..
done

# clean up the staging directory
cd
rm -rf /tmp/backup
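
To run it from cron once a week, a crontab entry along these lines would do it (the schedule and script path here are just placeholders):

bash
# hypothetical crontab entry: run the backup every Sunday at 3:00 AM
0 3 * * 0 /usr/local/bin/s3-backup.sh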