I have been backing up from a CentOS machine to S3 for months using v1.2.6 of the script (CentOS 5, Ruby 1.8.5). If I do a 'df' before and after the script runs, the available space on the / partition shrinks by roughly the size of the files I backed up (in my case, around 1GB).
After I noticed this happening, I cleared out some old, unrelated files and ran the script twice, confirming both times that the available space shrinks by about 1GB. On the second occasion, I wrote the output of 'du / | sort -n' to files before and after I ran the script. When I compared the two du outputs, however, the total increase they show is nowhere close to 1GB. It's as if the s3 script has put invisible files, equal in size to the two files being backed up, on the / partition.
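For reference, the before/after comparison I did can be sketched like this. To keep it quick and safe, the sketch runs against a throwaway directory instead of /, and the paths, file names, and the dd step (simulating the growth from a backup run) are illustrative:

```shell
#!/bin/sh
# Demo of the du before/after diff technique on a scratch directory
# (illustrative stand-in for scanning / around a backup run).
dir=$(mktemp -d)
mkdir -p "$dir/data"
echo "old" > "$dir/data/old.txt"

# Snapshot disk usage before, sorted by path so the diff lines up.
du -k "$dir" | sort -k2 > /tmp/du_before.txt

# Simulate the growth a backup run would cause (512KB of new data).
dd if=/dev/zero of="$dir/data/big.bin" bs=1024 count=512 2>/dev/null

# Snapshot disk usage after.
du -k "$dir" | sort -k2 > /tmp/du_after.txt

# Lines prefixed '<' are the before sizes, '>' the after sizes;
# any directory whose size changed shows up here.
diff /tmp/du_before.txt /tmp/du_after.txt | grep '^[<>]'
rm -rf "$dir"
```

Note that plain du only lists directories; du -a would be needed to see individual files in the diff.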
Both times I ran the script I got messages on the terminal like this (it has not been uncommon to see similar output before):
Update node file1.tar.gz
Connection reset: Connection reset by peer
99 retries left, sleeping for 30 seconds
Update node file2.tar.gz
I cannot rerun the script to debug further because I will fill up the / partition. Does anyone have any hints as to where these invisible files might be and how I can delete them so I can debug further?