S3Sync.net
  Show Posts
346  General Category / General Discussion / Re: Amazon object limit of 5 GB causing problem for rsync (s3sync) on: February 20, 2007, 04:06:13 PM
The size of the folder is irrelevant; only the size of each node matters.  s3sync maps one file to one node, so if you have a single file larger than 5 GB you can't use s3sync for it.  Otherwise it should be OK.
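
A minimal pre-flight check can flag such files before a sync.  This is just a sketch (the directory path is made up), not part of s3sync:

    require 'find'

    LIMIT = 5 * 1024**3  # the 5 GB per-object cap
    Find.find('/data/to/sync') do |path|
      puts "too big for one node: #{path}" if File.file?(path) && File.size(path) > LIMIT
    end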

Note, however, that I think there may still be an S3 bug that prevents sending a file larger than 2 GB, due to some hardware issue.  I'm too lazy to look up the details right now; the AWS forum should be swarming with posts about it.
347  General Category / Feature Requests / Re: command to move objects? on: February 20, 2007, 04:04:16 PM
As lfh says, there's no way to do this without an indirection between the names and the contents.  Because one of s3sync's design goals is transparent (direct) naming of its nodes, this can't be implemented.

I'm not sure how your other tool does it (via indirection, or by doing another upload followed by a delete).  Either way, there won't ever be a "move" operation in s3sync, for the reasons stated above.
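
For clarity, here's what "another upload followed by a delete" amounts to.  Method names follow the Amazon sample Ruby library that s3sync's S3.rb is based on; treat this as an illustrative sketch, not s3sync code:

    require 'S3'

    conn = S3::AWSAuthConnection.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
    data = conn.get('mybucket', 'old/key').object.data  # download the contents
    conn.put('mybucket', 'new/key', data)               # re-upload under the new name
    conn.delete('mybucket', 'old/key')                  # only then remove the original

Every "move" costs a full download and re-upload of the object, which is exactly why it isn't worth building in.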
348  General Category / General Discussion / Re: hard links on: February 20, 2007, 09:18:14 AM
In your endeavors, have you discovered a way to modify a node's metadata without re-PUT-ing the node?  That ability would be incredibly useful to me if it existed.  I just couldn't find anything about it in the S3 API docs, and I assume it's impossible.

Isn't yield fun?  Before using generators I was running into memory limits from building up paths in memory (I run on some low-powered virtual machines).
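
The pattern looks roughly like this (the directory name is illustrative): each path is handed to the caller as it's found, so nothing accumulates:

    def each_path(dir, &block)
      Dir.foreach(dir) do |entry|
        next if entry == '.' || entry == '..'
        full = File.join(dir, entry)
        block.call(full)
        each_path(full, &block) if File.directory?(full) && !File.symlink?(full)
      end
    end

    each_path('/var/backups') { |path| puts path }  # constant memory, any tree size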
349  General Category / Feature Requests / s3cmd option to "list all" without page breaks on: February 20, 2007, 07:00:09 AM
Subject says it all.
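
Under the hood, "list all" just means re-issuing the bucket GET with a marker until S3 stops truncating the results.  A rough sketch of the loop (names follow the Amazon sample Ruby library and the bucket is made up; this is not s3cmd's actual code):

    require 'S3'

    conn = S3::AWSAuthConnection.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
    marker = ''
    loop do
      res = conn.list_bucket('mybucket', 'marker' => marker, 'max-keys' => 1000)
      res.entries.each { |entry| puts entry.key }
      break unless res.properties.is_truncated
      marker = res.entries.last.key
    end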
350  General Category / Feature Requests / Option to encrypt stored data using gpg on: February 20, 2007, 06:29:38 AM
This also implies some more work to cache the etag of the encrypted contents locally, or else using only the modify date in the comparator.

We could also store the unencrypted MD5 sum in S3 as metadata, though that doesn't come down in the bucket list, so it's not clear it would be very useful.
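
The mechanics would be something like this sketch (recipient and filenames are made up): hash the plaintext before gpg touches it, then keep the sum to send as metadata with the PUT:

    require 'digest/md5'

    plain_md5 = Digest::MD5.file('notes.txt').hexdigest
    system('gpg', '--batch', '--yes', '--recipient', 'backup@example.com',
           '--output', 'notes.txt.gpg', '--encrypt', 'notes.txt')
    # plain_md5 could then ride along as an x-amz-meta header on the upload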

I think the "check date first, then issue HEAD to check md5" approach would probably be fast enough.

Edit 6/23/2008: The "--no-md5" option has since been added, which uses only the modify date and file size, as mentioned above.  For encryption it may be sufficient to force this option, rather than doing some kind of wacky md5/etag cache.
351  General Category / General Discussion / Re: hard links on: February 20, 2007, 06:21:17 AM
There are so many obstacles to hard-link-style backups with S3; this is just one of them.

Before S3, I backed everything up incrementally into hard-link directory structures using rsync, and I loved it.  Every day it would create a new directory structure full of hard links to the previous day's, and then only "copy" the files that had changed.  That way I had about 20 inter-linked hot backups, and I backed them off geometrically so I had data going back a whole year!  That's the real goal for me with rsync, S3, and hard links.  I don't care about a couple of files that happen to be hard-linked together; I want to leverage the concept of links to make the backups themselves more powerful.  See http://www.mikerubel.org/computers/rsync_snapshots/
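
The whole scheme fits in a few lines.  Here's a sketch (paths are made up, and it assumes a GNU userland for cp -al):

    src   = '/home/data/'
    snaps = '/backup'
    today = File.join(snaps, Time.now.strftime('%Y-%m-%d'))

    prev = Dir.glob(File.join(snaps, '[0-9]*')).sort.last            # newest snapshot
    system('cp', '-al', prev, today) if prev && !File.exist?(today)  # link-copy: near-free
    system('rsync', '-a', '--delete', src, today + '/')              # changed files get fresh inodes

Unchanged files share inodes across every snapshot, so twenty "full" backups cost little more than one.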

I think that once you overcome all the obstacles to doing this on S3 (no renaming, no modifying metadata without a full re-upload, and a couple of others), you will have a structure that basically treats S3 like a block device.  You might not have thought of it that way, but that's what the solution would entail.

The reason I'm not pursuing this is that I think one or more of the "FUSE"-type projects for S3 will eventually mature to the point where it can be used efficiently as an rsync backup target (at least, with some modification).  Trying to coerce s3sync into being "smart" enough to solve these issues itself is ultimately folly; it would warp the design/implementation so far that one code base would be trying to solve two rather incompatible problems.

I'd be thrilled to continue the discussion on this.

P.S. You're not using s3sync?  Wow... you've got to be one of the leading contributors to the community (here, and back on the AWS thread).  I'm honored, but confused: why the interest if you're not using the tool?  Did you find something better?  Maybe I'll start using it.

P.P.S.: The memory usage of your innocent-looking Ruby snippet means it can't scale to the intra-backup hard-link structures I was talking about, not that you necessarily intended it to.
352  General Category / Feature Requests / Platform specific extensions (NTFS stream, Mac Forks) on: February 20, 2007, 06:00:17 AM
Ruby can call into compiled code, or so I'm told.  If anyone is proficient with the NTFS or Mac OS X file systems and wants to help me implement these platform-specific extensions, please let me know.
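
As a proof of concept, Ruby's standard dynamic-linking binding can call a C function directly; older Rubies shipped this as DL, newer ones as Fiddle.  Here getpid stands in for whatever Win32 or Carbon call a real stream/fork reader would wrap:

    require 'fiddle'

    libc   = Fiddle.dlopen(nil)  # handle to the current process; libc is linked in
    getpid = Fiddle::Function.new(libc['getpid'], [], Fiddle::TYPE_INT)
    puts "pid from C land: #{getpid.call}"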
353  General Category / Feature Requests / Option to use size/modified time in comparisons on: February 20, 2007, 05:47:45 AM
This is non-trivial to do correctly in some respects, but in certain situations it could be much faster than reading every byte of every local file to calculate its etag.
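
The gist of the fast path is something like this sketch (names and the fallback policy are illustrative): trust size plus mtime, and only a mismatch forces the expensive full read:

    require 'digest/md5'

    def needs_upload?(local_path, remote_size, remote_mtime, remote_md5)
      stat = File.stat(local_path)
      return true  if stat.size != remote_size               # cheap and decisive
      return false if stat.mtime.to_i <= remote_mtime        # no bytes read at all
      Digest::MD5.file(local_path).hexdigest != remote_md5   # expensive fallback
    end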
354  General Category / General Discussion / Re: s3sync thread on aws forum on: February 20, 2007, 05:31:04 AM
I wish I could just pull all the content from that thread and archive it here, but AWS (or possibly the individual posters) owns the copyright to those messages.

Probably 90% of the posts there aren't important anyway, but there's probably fodder for a FAQ in there that could be put on the wiki.  I'd actually prefer that the thread go away in the long term and its info be crystallized in FAQ form.  Maybe I'll do that now as procrastination before starting work.

It's also notable that s3sync is now mentioned permanently on the AWS S3 "code" resources page.
355  General Category / Announcements / 1.1.0 is released on: February 19, 2007, 08:16:06 AM
2007-02-19
Version 1.1.0
*WARNING* Lots of path-handling changes. *PLEASE* test safely before you just
swap this in for your working 1.0.x version.


- Adding --exclude (and there was much rejoicing).  See the usage example after
this list.
- Found Yet Another Leading Slash Bug with respect to local nodes. It was always
"recursing" into the first folder even if there was no trailing slash and -r
wasn't specified. What it should have done in this case is simply create a node
for the directory itself, then stop (not check the dir's contents).
- Local node canonicalization was (potentially) stripping the trailing slash,
which we need in order to make some decisions in the local generator.
- Fixed problem where it would prepend a "/" to s3 key names even with blank
prefix.
- Fixed S3->local when there's no "/" in the source so it doesn't try to create
a folder with the bucket name.
- Updated s3try and s3_s3sync_mod to allow SSL_CERT_FILE
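
For reference, --exclude takes a Ruby regular expression that is matched against candidate paths.  A hypothetical invocation (bucket and paths made up):

    ruby s3sync.rb -r --exclude="(\.svn|~$)" /home/me/docs/ mybucket:backup/docs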
356  General Category / Feature Requests / Re: s3sync.rb to run without current directory defined to where s3try.rb, S3.rb on: February 19, 2007, 08:15:10 AM
OK, the official stance is now to use RUBYLIB.  I'm still open to the idea of packaging it as a gem, but I don't know anything about that at the moment.
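
For example (install path made up), point RUBYLIB at the directory holding S3.rb and s3try.rb, and s3sync.rb will run from anywhere:

    export RUBYLIB=/usr/local/s3sync
    ruby /usr/local/s3sync/s3sync.rb --help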
357  General Category / Feature Requests / Re: --exclude on: February 19, 2007, 08:14:08 AM
Done.
358  General Category / Feature Requests / Re: Support SSL_CERT_FILE on: February 19, 2007, 08:13:41 AM
Done.
359  General Category / Closed Bugs / Re: Empty (missing) prefix doesn't work right on: February 19, 2007, 07:39:46 AM
Should be fixed now.
360  General Category / Closed Bugs / Re: s3 to local fails when no slash in s3 arg on: February 19, 2007, 07:39:16 AM
Should be fixed now.