S3Sync.net
  Show Posts
31  General Category / Questions / Remove node happens twice on: June 05, 2007, 04:36:09 PM
Hi,

when using "-delete" as an option to remove files at s3 that have been deleted at the local source,
I usually get a "remove node this/and/that/file.ext" *and*, after the sync has gone over the last of the files, a second round of "remove node this/and/that" messages.
From what I can tell (could be coincidence, though), the first pass deletes only files, and only after processing *all* files does s3sync.rb go back and delete the (now empty) folders.

Did I get this right?
Why do it that (likely more complicated) way, instead of deleting each folder right after its last file?

Thanks.
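For what it's worth, the two-pass behaviour described above is what you would get from a design that deletes file nodes during the main sync pass and only prunes directories once no children can remain. A minimal sketch of that idea (hypothetical, not s3sync's actual code):

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical two-pass delete (a sketch, not s3sync's actual code):
# remove files first, then prune directories deepest-first, so no folder
# is removed while later entries in the listing might still live inside it.
def two_pass_delete(paths)
  files, dirs = paths.partition { |p| File.file?(p) }
  files.each { |f| File.delete(f) }              # pass 1: files only
  dirs.sort_by { |d| -d.length }.each do |d|     # pass 2: deepest dirs first
    Dir.rmdir(d) if Dir.exist?(d) && Dir.empty?(d)
  end
end
```

Deleting a folder "right after its last file" would require knowing, mid-stream, that no further listed entries belong to it; deferring the directory pass sidesteps that bookkeeping entirely.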
32  General Category / Questions / Re: Restoring Files on: June 05, 2007, 04:31:18 PM
Sorry to ask, but I need to understand this 100%:

Quote
In order to "fix" this, you would have to re-sync from local sources to s3 targets.

As long as I can live with the way things worked in the past (version < 1.1.3), I don't need to do anything. No resync, no different way of restoring things.
Right?

Using s3sync.rb version 1.1.3 will only behave differently (read: fail) if I don't prepare the receiving end (my local machine) as I did before, i.e. making certain all top-level folders are created before starting the restore.
Correct?

If I want the restore to be easier - which version 1.1.3 would offer - I would need to resync things to s3 again.
Okay?

Thanks for your patience.
33  General Category / General Discussion / Re: translation needed // size limitation now gone? on: June 05, 2007, 01:04:48 PM
Code:
CNAME pointing to s3.amazonaws.com then request signing works
Okay. Thanks. Now it makes sense.

34  General Category / General Discussion / translation needed // size limitation now gone? on: June 05, 2007, 09:14:22 AM
Hi,

can somebody translate the following sentence into some easy English for me, please? Sometimes my command of English just fails me utterly...
Code:
Customers who use vanity domains can now make signed HTTP requests against them.



And: does s3sync.rb have to be changed to take advantage of the "new" (revised) size limit?
Not that I'd be able to use that limit to its full capacity with my measly 512 Kbit upstream...



*****************************************************************
AMAZON S3: VIRTUAL HOSTED BUCKETS AND OBJECT SIZE IMPROVEMENTS
*****************************************************************
The Amazon S3 team is excited to launch a significant enhancement to our
virtual hosted buckets feature (also known as vanity domains). Customers
who use vanity domains can now make signed HTTP requests against them.
Please refer to the updated Amazon S3 documentation for details. We've
released updated code samples for this feature which are available in the
Resource Center. The team has also successfully deployed a fix that removes
the 2-4 GB limitation on object sizes. All our customers should now be able
to upload, store and transfer objects that are up to 5 GB in size, as
documented in the Release Notes.
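The announcement boils down to: if your bucket is named after your own domain and CNAMEd to S3, you can now use signed (authenticated) requests against that domain, not just anonymous ones. Under the classic S3 query-string signing scheme, a signed GET URL for a vanity domain could be built roughly like this (keys and domain below are placeholders, and the canonical resource still includes the bucket name even though the hostname never mentions s3.amazonaws.com):

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Sketch of classic S3 query-string signing (the pre-SigV4 scheme) for a
# vanity-domain bucket. Access key, secret, and domain are placeholders.
def signed_url(access_key, secret_key, bucket, key, expires)
  # Canonical resource uses the bucket name, which equals the vanity domain.
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, string_to_sign)
  sig  = CGI.escape(Base64.strict_encode64(hmac))
  "http://#{bucket}/#{key}?AWSAccessKeyId=#{access_key}&Expires=#{expires}&Signature=#{sig}"
end
```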
35  General Category / General Discussion / Re: personal experience: very, very stable... on: May 20, 2007, 05:03:21 PM
Agreed.
I just have too much time at my hands, so I indulge in posting here.
36  General Category / General Discussion / personal experience: very, very stable... on: May 15, 2007, 03:18:11 PM
Hi,

I've been following the Amazon thread about s3sync and am actively monitoring this forum.
There is a lot of information on both sites, but very few success stories. I'd like to add this (my) one:

I've installed s3sync (plus Ruby and OpenSSL) on a Synology CubeStation CS-406, a NAS box that provides 2 TB of storage.
A small script starts s3sync.rb and logs its doings - and is itself called repeatedly by cron.

The machine is connected to the internet via a 4 Mbit/sec downstream and a measly 384 Kbit/sec upstream. A very common setup in my country.
Every 24 h the line is automatically disconnected by my provider. My router immediately logs back in, but there's a drop in the connection that everything (including s3sync) has to cope with (think of files being transferred during those outages).


In the meantime I've uploaded some 136,867 files (well, that's the current count, but many of those files have changed frequently since then - I'd guess some 400,000 files have been transferred in total; that's just a guess, though).

The NAS box hasn't been online all the time. Sometimes a batch of a thousand files had to be updated/removed, sometimes just a few logs. At one point I changed the file system structure, which meant updating everything.
I've run into errors in the scripts, typos of my own (e.g. /path/to/files/ vs. /path/to/files), updates of the NAS box's OS - almost everything I could think of. Nothing ever failed me.

As the script mirrors the contents of the NAS box to Amazon S3 as a backup, yes, I've tried a restore, too. Multiple times. It works.

Since I set up this scenario in February, nothing has ever failed me. Not once. Not one single time.

Impressive.


Next weekend I'm going to swap the hard drives of the RAID one at a time to upgrade the capacity. No sweat here. If everything fails, I'll just grab s3sync and restore everything from Amazon. (Okay, that'll be a major annoyance for hours if not days, agreed. But just an annoyance.)

I'm protected against malfunctions of my hardware, against stupidity or errors on my part, and - as long as Amazon does its own job - even against a total loss of everything at my location.

That is exactly the peace of mind I was always searching for.
It wouldn't have worked - and would not work - without s3sync.

And yes, I have tested - and to some extent worked with - other alternatives, but always came back to s3sync.
Ease of use, I guess. No GUI: great. "Scriptable" with bash (ash, in my case): perfect!

Two words:

THANK YOU.

Cheers.

Ah, yes: I'm using Cockpit (on Mac as well as Windows or Linux) to check on s3sync's progress and doings from time to time. Very nice tool, too.
37  General Category / Questions / Re: Can the --exclude="..."-option be used multiple times? on: May 05, 2007, 09:26:42 AM
Hi,

just wanted to give feedback: Works great!
The exclusion list is growing longer, and I check very carefully after each sync that things are as they should be, but there's no problem.

Really really great. Thanks!
38  General Category / Questions / Can the --exclude="..."-option be used multiple times? on: April 27, 2007, 03:40:23 PM
Hm.

When I use something like "s3sync.rb -v -r -s --exclude="crit1" /source/path bucket:" things work beautifully.

Another --exclude, though, seems to override the first one.
So "./s3sync.rb -v -r -s --exclude="crit1" --exclude="crit2" /source/path bucket:" excludes all files that have crit2 in their path, but not the crit1 ones...

Just for the sake of it I also tried --exclude="crit1|crit2", which does not work either.
Yes, I know --exclude takes regexes, but it was worth a try... ;-)

Right now I'd need some 8 exclusions in a file list of some 30,000 files. Can this be made to work?
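One way a tool could support several exclusion patterns is to collect every pattern given on the command line and fold them into a single alternation with Regexp.union. This is only an illustration of that idea (the pattern names are the hypothetical ones from the question, not real s3sync internals):

```ruby
# Sketch: fold several exclusion patterns into one combined regex with
# Regexp.union, then filter a path list against it. Patterns and paths
# here are purely illustrative.
patterns = %w[crit1 crit2 \.tmp$]
exclude  = Regexp.union(patterns.map { |p| Regexp.new(p) })

paths = ["keep/file.txt", "logs/crit1.log", "cache/crit2", "scratch/a.tmp"]
kept  = paths.reject { |path| path =~ exclude }
# kept contains only "keep/file.txt"
```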

39  General Category / Feature Requests / show more than 1000 files with s3cmd.rb on: April 26, 2007, 12:12:24 PM
Hi,

how can I list everything I have stored in a bucket with s3cmd WITHOUT having to repeatedly answer the "more (Y/n)" question?

I'd like to pipe s3cmd's output to sed or awk (or any other util), but unfortunately the maximum number of lines for a listing is 1000, and after that I have to answer the pagination question.

Entering anything beyond "1000" gives the following error:
Code:
  S3 command failed:
  list_bucket NASa max-keys 1001 prefix   
  With result 400 Bad Request
  ./s3cmd.rb:122:in `s3cmdMain': undefined method `is_truncated' for nil:NilClass (NoMethodError)
          from ./s3cmd.rb:207

Could this be some limitation on S3's side? With JetS3t Cockpit I do see everything, but then they fetch it in increments of 1000, I assume...

Any idea?
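It is indeed an S3-side cap: max-keys tops out at 1000, which is why asking for 1001 comes back as 400 Bad Request. Tools that "see everything" loop instead: request a page, and while the response says it is truncated, repeat with the marker set to the last key seen. A generic sketch of that loop, with fetch_page standing in for the real list-bucket call:

```ruby
# Generic S3-style listing loop: pages of at most 1000 keys, continuing
# with a marker while the response is truncated. fetch_page is a stand-in
# for the real list-bucket request and must return
# { keys: [...], truncated: true/false }.
def list_all_keys(&fetch_page)
  keys   = []
  marker = nil
  loop do
    page = fetch_page.call(marker)
    keys.concat(page[:keys])
    break unless page[:truncated]
    marker = page[:keys].last   # resume after the last key we saw
  end
  keys
end
```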
40  General Category / Questions / Re: s3sync --some_options >/path/to/log.log works *after* quitting s3sync on: April 25, 2007, 11:20:46 AM
Ok. Thanks.

Might well be. I've installed s3sync on a NAS box with ash (not bash) as the shell.
I haven't noticed this behaviour with other commands, but then those are shell built-ins compared to the Ruby calls.

At least I'm sure now that it isn't s3sync.rb.

Topic closed.
41  General Category / Questions / s3sync --some_options >/path/to/log.log works *after* quitting s3sync on: April 23, 2007, 10:36:54 AM
hokay... difficult to explain, so please bear with me...

when I call something like "s3sync.rb -v -r -s /volume/dir hostname: >/volume/dir/file.log",
s3sync works beautifully, though nothing shows up in the log while it is running.
Once I quit s3sync, the log is filled.

Calling the same line without the ">/volume/dir/file.log" redirection at the end shows me each line of s3sync's operations as it happens.

This might be related to my machine, my OS (Linux 2.4.something), or a mistake of mine. I just wonder if I'm doing anything wrong.

Thanks.

maelcum
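(This is standard output buffering, nothing s3sync-specific: when stdout is a terminal it is line-buffered, but when redirected to a file it becomes block-buffered, so output only lands in the log when the buffer fills or the process exits. In Ruby, forcing a flush after every write avoids it:)

```ruby
# Standard stdio behaviour, not an s3sync bug: a redirected stdout is
# block-buffered. Setting sync = true makes Ruby flush after every write,
# so each log line reaches the file immediately.
$stdout.sync = true
puts "this line reaches the log file immediately"
```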

42  General Category / Feature Requests / Re: --exclude on: April 23, 2007, 10:23:34 AM
After a long stay in the hospital I'm kinda up and running.
Thank you so much for implementing --exclude. It's a boon!!
43  General Category / Feature Requests / Re: --exclude on: February 18, 2007, 12:06:52 PM
I second this.
It's one of *the* most important features I'm missing.
Pages: 1 2 [3]