S3Sync.net
  Show Posts
1  General Category / Report Bugs / s3 suddenly completely broken - 403 forbidden on: March 20, 2008, 07:56:15 AM
All of a sudden, I'm getting "403 Forbidden" messages and nothing with S3 works.  Here's the output of "s3cmd -v listbuckets":

list all buckets {}
S3 command failed:
list_all_my_buckets
With result 403 Forbidden

I haven't changed anything on my end.

Ideas?
2  General Category / Feature Requests / reset error count on: February 14, 2008, 09:24:38 AM
It seems to me that counting cumulative errors isn't the way to go.

I'd rather see the error count reset after a successful file upload.  Then I would set the max error count to, say, 5 (try up to 5 times to send a file).  If it doesn't work five times in a row, something's wrong.  But if I'm uploading 10,000 files, even an error count of 100 doesn't necessarily mean anything's wrong.  And yet again, if 1 file fails 100 times in a row, something *is* wrong.
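
In shell terms, the behaviour I'm after would look roughly like this ("upload_one" is just a stand-in for whatever s3sync does per file, not a real command):

    files=( Files/* )                   # whatever set of files is being uploaded
    MAX_ERRORS=5
    for f in "${files[@]}"; do
        errors=0                        # reset the counter for every new file
        until upload_one "$f"; do       # hypothetical per-file upload step
            errors=$((errors + 1))
            if [ "$errors" -ge "$MAX_ERRORS" ]; then
                echo "giving up: $f failed $MAX_ERRORS times in a row" >&2
                exit 1
            fi
        done
    done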

3  General Category / Feature Requests / Re: Preserve modification times? on: February 12, 2008, 08:17:22 AM
As a workaround, you can tar the files and s3sync the tarballs.  This also lets you compress and encrypt the data.
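
For example, following the "s3sync -r <dir> <bucket>:" usage from the other thread (the directory and bucket names here are made up):

    mkdir -p tarballs
    tar czf tarballs/reports.tar.gz Reports/      # bundle and compress; mtimes are stored inside the tarball
    gpg -c tarballs/reports.tar.gz                # optional symmetric encryption (prompts for a passphrase)
    rm tarballs/reports.tar.gz                    # keep only the encrypted copy
    s3sync -r tarballs MyBackupBucket:

Untarring on restore puts the original modification times back, which is the part the original request was about.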
4  General Category / Report Bugs / Re: files with colons in them on: February 10, 2008, 01:39:41 PM
Just to confirm:  I tried s3syncing a directory in which one of the filenames contained a colon.  It worked.

As a command line parsing option, perhaps filenames that start with ./ can be assumed to be local.  So:

s3sync ./this:islocal remotebucket:/somewhere

would work.

5  General Category / Feature Requests / stdin/stdout? on: February 08, 2008, 07:24:20 AM
How hard would it be to let s3cmd read/write from stdin/stdout?
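
Just to illustrate the kind of thing I'd use it for, something along these lines, where the trailing "-" is hypothetical syntax for "use stdin/stdout" rather than anything s3cmd supports today:

    mysqldump mydb | gzip -c | s3cmd put MyBackupBucket:dumps/mydb.sql.gz -    # hypothetical: stream the object body from stdin
    s3cmd get MyBackupBucket:dumps/mydb.sql.gz - | gunzip -c | mysql mydb      # hypothetical: stream the object body to stdout

No temp files, and it would combine nicely with the filter idea from the encryption thread.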

6  General Category / Report Bugs / Re: files with colons in them on: February 07, 2008, 08:32:21 AM
I didn't mean to start an argument.  (And THANK YOU for a wonderful utility.)  Can you confirm that, the command-line notwithstanding, files with colons and spaces and whatnot can be stored on S3?

(But I'm still going to nag you every month or so for a way to include an arbitrary filter, such as compression or encryption....)
7  General Category / Report Bugs / Re: files with colons in them on: February 06, 2008, 08:13:49 AM
I think we all agree that a backup utility has to back up any file that the file system supports.  The only characters that can't appear in a Unix filename are the slash and the NUL byte, so S3Sync should support everything else.

The problematic characters are usually backslashes and spaces.  S3 only supports URL-compatible names (right?), so no spaces and little punctuation.  S3Sync has to encode certain filenames.  Fortunately, robust algorithms to do this are common on the web.
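
Percent-encoding (the same scheme URLs use) is the obvious candidate.  A rough shell sketch, just to show the idea, not anything s3sync does today:

    encode() {                                      # percent-encode anything outside a URL-safe set
        local s="$1" out="" c i
        for (( i = 0; i < ${#s}; i++ )); do
            c="${s:i:1}"
            case "$c" in
                [a-zA-Z0-9./_-]) out+="$c" ;;                   # pass safe characters through
                *) printf -v out '%s%%%02X' "$out" "'$c" ;;     # everything else becomes %XX
            esac
        done
        printf '%s\n' "$out"
    }
    encode "Reports:Day 1/file.txt"                 # prints Reports%3ADay%201/file.txt

The decode side is just the reverse, and both directions are unambiguous, which is what matters for a backup tool.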

But until S3 incorporates proper filename quoting and encoding, I think the only safe way to use it is to create tarballs on the local side and then S3sync the tarballs....  (This also means it's easy to support encryption and compression, but at the expense of much of the expediency of the "sync" aspect of the program.)

8  General Category / Report Bugs / files with colons in them on: February 05, 2008, 11:02:27 AM
I just got an error S3-syncing a directory with a colon in its name:

s3sync Reports:Day1 MyBackupBucket:

There seems to be no mechanism for quoting the colon in the first argument.  So:

1.  Is there a workaround?

2.  Is this going to break an automated backup of my entire fs?

    That is, if I have the following directories:

    Files/
    Files/Junk/
    Files/Reports:Day1/
    Files/Reports:Day2/
    ...

    I know I can't s3sync the dirs that have colons in them.  What
    happens when I s3sync Files:

       s3sync -r Files MyBackupBucket:

    Will Files/Reports:Day1/ get transferred correctly?  How will I get
    the dir back?

Thanks.


9  General Category / Feature Requests / Re: Option to encrypt stored data using gpg on: January 28, 2008, 09:08:24 AM
I think the right way to do this is to allow an arbitrary filter before s3 writes and after s3 reads. 

Instead of the current:

Read File --> Put File on S3;  and Get File from S3 --> Write File

we'd have:

Read File --> Arbitrary Filter --> Put File; and Get File --> Arbitrary Filter --> Write File

Reasonable choices for the filters are "gzip -f" and "gunzip" (to cut down on bandwidth and other costs), and some variation of gpg that doesn't require user input.  (Is there such a thing?)
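
On the "gpg without user input" question: gpg has a batch mode, so a filter pair along these lines should work (the passphrase file path is made up, and this is only a sketch of the filters themselves, not anything s3sync supports):

    # write side: compress then encrypt, reading the passphrase from fd 3 so stdin stays free for the data
    gzip -c | gpg --batch --yes -c --passphrase-fd 3 3< /root/.backup-passphrase

    # read side: the reverse
    gpg --batch --yes -d --passphrase-fd 3 3< /root/.backup-passphrase | gunzip -c

Keeping the passphrase in a root-only file isn't perfect, but it's no worse than keeping the AWS keys on the same machine.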


10  General Category / Questions / intermittent failures on: January 28, 2008, 08:46:36 AM
And one more question:  if the internet connection to Amazon dies for, say, 30 minutes, will s3sync recover gracefully?
11  General Category / Questions / encryption (again) on: January 28, 2008, 08:40:38 AM
I agree that s3sync is the wrong place for encryption.  But encryption is pretty important.  My first thought was to mirror my fs on an external HD, and encrypt each file.  Then I would s3sync the external HD mirror.

But, again, it fails (see my last post) if something goes wrong with the HD, fs, etc.  And with the introduction of another element, it also fails if something goes wrong with the encryption program.

Ideas?

12  General Category / Questions / disaster recovery on: January 28, 2008, 08:33:09 AM
It occurs to me that if an HD fails, corrupt data could be s3sync'd before the failure is noticed.  The original and backup data would both be corrupt.

What have people done to take this possibility into account?

Thanks.