S3Sync.net
Author Topic: files with colons in them  (Read 6116 times)
jh
Newbie
Posts: 12
« on: February 05, 2008, 11:02:27 AM »

I just got an error S3-syncing a directory with a colon in its name:

s3sync Reports:Day1 MyBackupBucket:

There seems to be no mechanism for quoting the colon in the first name.  So:

1.  Is there a workaround?

2.  Is this going to break an automated backup of my entire fs?

    That is, if I have the following directories:

    Files/
    Files/Junk/
    Files/Reports:Day1/
    Files/Reports:Day2/
    ...

    I know I can't s3sync the dirs that have colons in them.  What
    happens when I s3sync Files:

       s3sync -r Files MyBackupBucket:

    Will Files/Reports:Day1/ get transferred correctly?  How will I get
    the dir back?
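For what it's worth, a rename-based workaround might look like this (an untested sketch; the `decolonize!` helper and the `__COLON__` token are made up for illustration):

```ruby
require 'find'

# Untested sketch: walk the tree, renaming any entry whose basename
# contains a colon, so the result is safe to hand to s3sync.
# "__COLON__" is an arbitrary escape token; it assumes no real name
# already contains it, and you'd reverse the mapping on restore.
def decolonize!(root, token = '__COLON__')
  paths = []
  Find.find(root) { |p| paths << p }
  # Rename children before parents so the stored paths stay valid.
  paths.reverse_each do |path|
    base = File.basename(path)
    next unless base.include?(':')
    File.rename(path, File.join(File.dirname(path), base.gsub(':', token)))
  end
end
```

You'd run this over a copy of the tree (or undo it after the sync) so the originals keep their real names.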

Thanks.


tnik
Newbie
Posts: 5
« Reply #1 on: February 05, 2008, 03:40:30 PM »

First, you really shouldn't be using colons in filenames. Sure, it works fine on the Linux side, but what happens if you have to back up to a Windows box? You never know what's going to happen.

Well, I tested it. It chokes in s3sync (probably the internal checks that make sure you have a good source and destination).

It works fine in s3cmd.

I did spend a little time browsing through the source. Ruby seems like fairly decent code to work with, but I've never messed with it. This seems like something you may be able to ask ferrix to implement, though he may not, since that wouldn't keep it cross-platform happy.
jh
Newbie
Posts: 12
« Reply #2 on: February 06, 2008, 08:13:49 AM »

I think we all agree that a backup utility has to back up any file the file system supports.  The only characters that can't appear in a Unix filename are the slash and the null byte, so S3Sync should support everything else.

The problematic characters are usually backslashes and spaces.  S3 only supports URL-compatible names (right?), so no spaces and little punctuation.  S3Sync has to encode certain filenames.  Fortunately, robust algorithms for this are common on the web.
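For illustration, Ruby (the language s3sync is written in) ships such an encoder in its standard library; a sketch, with the caveat that s3sync's actual escaping scheme may differ:

```ruby
require 'cgi'

# Percent-encode everything outside the URL-safe character set.
# Note: CGI.escape turns spaces into '+', not '%20'; a stricter encoder
# for URL path components would use '%20' instead.
puts CGI.escape('Reports:Day1')   # => Reports%3ADay1
```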

But until S3Sync incorporates proper filename quoting and encoding, I think the only safe way to use it is to create tarballs on the local side and then s3sync the tarballs....  (This also means it's easy to support encryption and compression, but at the expense of much of the expediency of the "sync" aspect of the program.)
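A minimal sketch of that tarball staging step (the `stage_tarball` helper and the directory names are made up for illustration; it assumes a local `tar` binary):

```ruby
require 'fileutils'

# Hypothetical helper: archive +src+ into +staging+ and return the
# tarball's path. The tarball's own name contains no troublesome
# characters, so s3sync never sees the colons inside the archive.
def stage_tarball(src, staging)
  FileUtils.mkdir_p(staging)
  archive = File.join(staging, File.basename(src) + '.tar.gz')
  system('tar', 'czf', archive, src) or raise "tar failed for #{src}"
  archive
end

# Afterwards, sync only the staging directory as usual:
#   s3sync -r tarballs MyBackupBucket:
```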

ferrix
Sr. Member
Posts: 363
(I am greg13070 on AWS forum)
« Reply #3 on: February 06, 2008, 11:06:02 AM »

I'm sure this is just a bug due to my parsing of the command line.  We needn't get into arguments about characters and encoding. :)
jh
Newbie
Posts: 12
« Reply #4 on: February 07, 2008, 08:32:21 AM »

I didn't mean to start an argument.  (And THANK YOU for a wonderful utility.)  Can you confirm that, the command-line notwithstanding, files with colons and spaces and whatnot can be stored on S3?

(But I'm still going to nag you every month or so for a way to include an arbitrary filter, such as compression or encryption....)
ferrix
Sr. Member
Posts: 363
(I am greg13070 on AWS forum)
« Reply #5 on: February 07, 2008, 02:08:19 PM »

Pretty much anything can be stored as long as it is encoded properly.
jh
Newbie
Posts: 12
« Reply #6 on: February 10, 2008, 01:39:41 PM »

Just to confirm:  I tried s3syncing a directory in which one of the filenames contained a colon.  It worked.

As a command-line parsing option, perhaps filenames that start with ./ could be assumed to be local.  So:

s3sync ./this:islocal remotebucket:/somewhere

would work.
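A hypothetical sketch of that rule in Ruby (not s3sync's actual parser; the bucket/key split is simplified):

```ruby
# Proposed rule: an argument beginning with "./" (or "/") is always
# local, even if it contains a colon; otherwise anything before the
# first colon is treated as a bucket name.
def parse_arg(arg)
  if arg.start_with?('./', '/') || !arg.include?(':')
    { type: :local, path: arg }
  else
    bucket, key = arg.split(':', 2)
    { type: :s3, bucket: bucket, key: key }
  end
end
```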
