  Show Posts
1  General Category / General Discussion / Re: resume upload on: March 26, 2007, 06:28:00 PM
Resume would be VERY nice. For the last couple of weeks I have been trying to upload a series of digital video files to S3 that I really want backed up. Each is around 1 GB in size, and it takes about five attempts to fully upload each one. Usually something goes wrong (on the Amazon side, I presume) and the connection gets dropped. This is costing me a lot of money in bandwidth for the retries.

The only way to implement resume that I can think of (which would also solve the 5 GB problem) is for s3sync to split files bigger than, for example, 100 MB into 100 MB chunks and maintain some metadata recording which pieces go back together. I know this violates the goal of all data being downloadable and restorable without the aid of s3sync, but at least all you would have to do is cat the files back together, which most sysadmins can script up easily. If you name them sensibly (filename.avi-s3sync-1of5, or something like that, as a wild guess) it shouldn't be that hard. It might be a good idea to make the chunk size configurable.
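Just to sketch the idea in Python (the chunk size and the "-s3sync-NofM" suffix here are my own invention for illustration, not anything s3sync actually does):

```python
# Hypothetical sketch: split a large file into fixed-size chunks before
# upload, then reassemble them afterwards (the "cat" step). The naming
# scheme and CHUNK_SIZE are made-up examples, not s3sync behavior.
import os

CHUNK_SIZE = 100 * 1024 * 1024  # 100 MB, but this should be configurable

def split_file(path, chunk_size=CHUNK_SIZE):
    """Write path out as numbered chunk files; return their names in order."""
    chunks = []
    with open(path, "rb") as src:
        total = (os.path.getsize(path) + chunk_size - 1) // chunk_size or 1
        for i in range(1, total + 1):
            name = "%s-s3sync-%dof%d" % (path, i, total)
            with open(name, "wb") as dst:
                dst.write(src.read(chunk_size))
            chunks.append(name)
    return chunks

def join_chunks(chunks, out_path):
    """Concatenate the chunk files back into the original file."""
    with open(out_path, "wb") as dst:
        for name in chunks:
            with open(name, "rb") as src:
                dst.write(src.read())
```

A failed upload would then only cost you the one chunk that was in flight, and restoring without s3sync is a one-line cat of the pieces in order.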
2  General Category / General Discussion / s3cmd list all on: February 26, 2007, 08:54:11 PM
Works great! Thanks!

While I'm here: Is there any way to make s3cmd list ALL of the contents of the bucket? It doesn't seem to like too large an argument and sometimes I want to list the entire contents with less and search/grep through it looking for something. Answering Y/n to every 200 or so lines is way too slow when you are backing up hundreds of thousands of files.
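From my reading of the REST spec, ListBucket returns at most 1000 keys per request, with an IsTruncated flag and a marker to continue from, so a "list everything" mode would just loop. A rough Python sketch of the pagination logic (fetch_page here is a stand-in for the actual S3 request, not a real s3cmd function):

```python
# Hedged sketch of a "list ALL keys" loop over S3's paginated ListBucket
# results. fetch_page(marker, page_size) is a placeholder for the real
# HTTP request; it must return (keys, is_truncated) with keys in the
# lexical order S3 guarantees.
def list_all_keys(fetch_page, page_size=1000):
    """Yield every key in the bucket by following the truncation marker."""
    marker = ""
    while True:
        keys, truncated = fetch_page(marker, page_size)
        for k in keys:
            yield k
        if not truncated or not keys:
            break
        marker = keys[-1]  # continue after the last key we saw
```

Piping that output straight to stdout would make it easy to run through less or grep without any Y/n prompting.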
3  General Category / General Discussion / A small feature request... on: February 24, 2007, 08:48:00 PM
First, thanks for s3sync! It is awesome. I tried a number of the other tools for syncing my data with S3, and s3sync is by far the most useful. I am holding out some hope for s3fs-fuse, but the developer doesn't seem too serious about it, so I don't want to rely on it yet.

So far the only things I would ask of s3sync are a somewhat more verbose -v and a --progress option like rsync's. I have a relatively slow connection and it takes forever to upload some files; sometimes I wonder if it is still working. A more verbose -v (for example, telling me that a certain file has not changed and does not need updating, as rsync does) and a progress bar showing the current file being uploaded, perhaps along with a kB/s reading, would be a nice touch and make these slow uploads a bit more tolerable.
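To give an idea of what I mean, here's a minimal Python sketch of a progress line updated as bytes go out. Everything here (the class name, the wiring) is made up for illustration; it is not s3sync's actual code:

```python
# Illustrative sketch of a --progress display: call update(nbytes) after
# each chunk is sent and it redraws a bytes/percent/kB-per-second line.
# All names are hypothetical, invented for this example.
import sys
import time

class ProgressReporter:
    def __init__(self, total_bytes, out=sys.stderr, clock=time.time):
        self.total = total_bytes
        self.sent = 0
        self.out = out
        self.clock = clock
        self.start = clock()

    def update(self, nbytes):
        """Record nbytes more sent and redraw the status line in place."""
        self.sent += nbytes
        elapsed = max(self.clock() - self.start, 1e-9)
        rate_kbs = self.sent / 1024.0 / elapsed
        pct = 100.0 * self.sent / self.total if self.total else 100.0
        self.out.write("\r%d/%d bytes (%.0f%%) %.1f kB/s"
                       % (self.sent, self.total, pct, rate_kbs))
        self.out.flush()
```

The upload loop would just call update() once per buffer written to the socket, so the cost is negligible even on a slow link.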

The ability to sync only the chunks of a file that have changed would be nice, but if I understand the protocol correctly (I have read the REST and SOAP S3 specs), you cannot upload only a section of an object; you have to upload the whole file. I suppose s3sync could upload files in smaller chunks, re-upload only the chunks that change, and track which chunks belong together in what order via metadata. But that would complicate the code and make it impossible to retrieve those files with any other tool, which is definitely handy in case I someday need my data and s3sync is not available.
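The change-detection half of that idea is simple enough to sketch: hash each chunk and re-upload only the chunks whose digest differs from last time. This Python is purely hypothetical, not an s3sync feature, and the 100 MB default is just my earlier guess:

```python
# Sketch of chunk-level change detection under the constraint above:
# S3 PUTs are whole-object, so delta sync would mean storing each chunk
# as its own object and re-uploading only chunks whose checksum changed.
# Entirely hypothetical; not anything s3sync implements.
import hashlib

def chunk_digests(path, chunk_size=100 * 1024 * 1024):
    """Return an ordered list of MD5 hex digests, one per chunk of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digests.append(hashlib.md5(block).hexdigest())
    return digests

def changed_chunks(old_digests, new_digests):
    """Indices of chunks to re-upload: ones that differ or were appended."""
    return [i for i, d in enumerate(new_digests)
            if i >= len(old_digests) or old_digests[i] != d]
```

The old digest list is exactly the kind of metadata s3sync would have to stash alongside the chunks, which is where the other-tools-can't-read-it problem comes from.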