When developing software that’s intended to run on a server, I like to edit the code directly on my laptop, but don’t want to run the development server locally.
Instead, I have a dev setup on a remote server, and copy the local changes as needed, using rsync to copy only the changed bits.
Running rsync by hand is manual work, prone to errors and easy to forget. So I automated it by writing a shell script that runs it whenever the contents of the directory change.
The script is pretty simple. It uses inotifywait (part of inotify-tools on Linux) to detect changes in the current directory and runs a command when that happens. To avoid running the command several times for one related burst of events, the script waits a second to see if more events are coming.
The script itself doesn’t assume which command needs to be run, so it’s useful beyond just triggering rsync. You can find the script on GitHub.
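A minimal sketch of the idea might look like the following. This is not the script from GitHub, just an illustration of the same watch-debounce-run loop; the event flags and the one-second quiet period are my assumptions here.

```shell
#!/bin/sh
# Hypothetical sketch of a watch-and-run script: watch the current
# directory recursively and run the given command once changes settle.

watch_and_run() {
    # Block until something in the tree changes...
    while inotifywait -qq -r -e modify,create,delete,move .; do
        # ...then absorb follow-up events until one quiet second passes,
        # so a burst of related events triggers the command only once.
        while inotifywait -qq -r -e modify,create,delete,move -t 1 .; do
            :
        done
        "$@"
    done
}

if [ $# -ge 1 ]; then
    watch_and_run "$@"
else
    echo "usage: $0 command [args...]" >&2
fi
```

The debounce works because inotifywait exits with a non-zero status when its -t timeout expires without an event, which is what ends the inner loop.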
When working on a project, I start the watch script with something like:
onchange.sh rsync -avt --delete . server:/path/
Speeding up rsync+ssh
Since I’m running rsync frequently, I want it to be as quick as possible (even though it’s happening in the background). The changes themselves are very small, so most of the time is spent initiating the SSH connection to the remote server.
To minimise this, I use SSH connection multiplexing. I simply ssh into the server and leave that session open, even if I don’t need to do anything there. Subsequent rsync runs over SSH reuse the established connection.
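Multiplexing is enabled in ~/.ssh/config. A sketch of such a configuration (the host name and timings here are illustrative, not my actual setup):

```
Host server
    # Reuse one underlying connection for all sessions to this host.
    ControlMaster auto
    # Socket for the master connection; %r/%h/%p keep it unique per target.
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master alive for 10 minutes after the last session closes.
    ControlPersist 10m
```

With ControlPersist set, the master connection lingers on its own, so keeping an interactive ssh session open becomes a convenience rather than a necessity.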
Unfortunately, as this uses inotify-tools directly, it’s not portable to non-Linux systems. It would be easy to write an equivalent tool in, say, Python using watchdog.
That’s left as an exercise to the reader ;-)
Update: there’s also lsyncd, a full-blown tool for live syncing of local folders with remote ones.