Run rclone using Docker on AWS EC2

Introduction

If you don’t know about rclone, it’s an awesome tool for syncing object storage, be it S3 or MinIO, Google Cloud Storage or even Google Drive; the list seems endless!

To set up your S3 remote, run rclone config, and be sure to either attach an IAM role to your EC2 instance or set access keys through the RCLONE_S3_ACCESS_KEY_ID and RCLONE_S3_SECRET_ACCESS_KEY environment variables.
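With access keys, the resulting remote in ~/.config/rclone/rclone.conf looks roughly like this (a sketch: the remote name s3remote, the region, and the key values are placeholders, not from this article):

```ini
# ~/.config/rclone/rclone.conf — sketch of an S3 remote.
# "s3remote" is an example name; adjust region and keys to your setup.
[s3remote]
type = s3
provider = AWS
region = eu-west-1
access_key_id = XXXXXXXXXXXXXXXXXXXX
secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
# With an EC2 IAM role instead, drop the two key lines above and set:
# env_auth = true
```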

You can ensure everything is working properly by running rclone lsd $your-remotename: (don’t forget the trailing :).

Now let’s start a container with rclone installed:

docker run -it -v ~/.config/rclone/rclone.conf:$YOUR_USER_HOME/.config/rclone/rclone.conf $YOUR_IMAGE bash

Note: don’t judge me here, I’m using a volume for the configuration for simplicity’s sake.

So now, let’s run rclone lsd $your-remotename: inside the container… and wait a second, or two, … or maybe a minute if you’re lucky.

The problem

This is happening because the container uses Docker’s default bridge network. When rclone authenticates through an IAM role, it fetches temporary credentials from the EC2 instance metadata service (IMDSv2), whose responses are sent with a hop limit of 1 by default; the extra hop introduced by the bridge network makes the response undeliverable, so the request hangs until it times out. On the host network there is no extra hop, so the command works.
The easy way is then to run:

docker run -it -v ~/.config/rclone/rclone.conf:$YOUR_USER_HOME/.config/rclone/rclone.conf --network host $YOUR_IMAGE bash
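You can observe the difference directly by probing IMDSv2 yourself: requesting a session token from inside a bridge-networked container times out, while the same request on the host network succeeds. A sketch, assuming you run it on an EC2 instance (the endpoint and header below are the standard IMDSv2 values):

```shell
# Probe IMDSv2 (the EC2 instance metadata service) directly.
# These are the fixed AWS endpoint and header for requesting a session token.
IMDS_URL="http://169.254.169.254/latest/api/token"
TTL_HEADER="X-aws-ec2-metadata-token-ttl-seconds: 21600"

# On the host network this returns a token almost instantly; from a
# bridge-networked container it hangs until --max-time kicks in, because
# the PUT response dies at the default hop limit of 1.
curl --max-time 5 -s -X PUT "$IMDS_URL" -H "$TTL_HEADER" || echo "IMDS unreachable"
```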

If you must use the bridge network, you’ll have to increase the http-put-response-hop-limit option in the instance metadata options:

aws ec2 modify-instance-metadata-options --instance-id $ID --http-put-response-hop-limit 3
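To check the value before (or after) changing it, you can inspect the instance’s metadata options. A sketch, assuming $ID holds your instance ID and the AWS CLI is configured:

```shell
# Inspect the instance's current metadata options; HttpPutResponseHopLimit
# is the field changed by the modify-instance-metadata-options call above.
# Assumes $ID holds the instance ID and the AWS CLI is configured.
QUERY='Reservations[].Instances[].MetadataOptions'
aws ec2 describe-instances --instance-ids "$ID" \
  --query "$QUERY" --output json || echo "aws CLI not available here"
```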

All credit goes to this rclone issue.