S3Sync.net
Author Topic: Does s3sync actually sync?  (Read 11732 times)
babadi (Newbie, Posts: 4)
« on: January 12, 2008, 02:03:00 PM »

Hi guys,

I'm trying to figure out what, if anything, I'm missing here. My understanding was that, like rsync, s3sync is not supposed to recopy files that already exist, and a forum thread I found (http://developer.amazonwebservices.com/connect/thread.jspa?threadID=11975&start=15&tstart=0) suggests as much. But that doesn't seem to be happening: if I resync the same directory repeatedly, the same files get copied over and over:

martin@dev:~/dev-server-backup$ s3sync/s3sync.rb test/ bucketname:test -v
Create node test1
Create node test2
Create node test3
Create node test4
Create node test5
martin@dev:~/dev-server-backup$ s3sync/s3sync.rb test/ bucketname:test -v
Create node test1
Create node test2
Create node test3
Create node test4
Create node test5
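
For reference, by "sync" I mean rsync-style change detection: skip any file whose size and checksum already match the remote copy. Here is a minimal Ruby sketch of that decision logic (a hypothetical helper, not s3sync's actual code; for ordinary puts, S3's ETag is the file's MD5):

Code:
require 'digest/md5'

# Hypothetical helper: should local_path be uploaded, given the remote
# object's size and ETag (an MD5 hex digest for non-multipart uploads)?
def needs_upload?(local_path, remote_size, remote_etag)
  return true if remote_size.nil?                     # not on S3 yet
  return true if File.size(local_path) != remote_size # sizes differ
  Digest::MD5.file(local_path).hexdigest != remote_etag.to_s.delete('"')
end

# e.g. needs_upload?('test/test1', 6, '3e7705498e8be60520841409ebc69bc1')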

Any insight would be appreciated.

Thanks,
Martin
ferrix (Sr. Member, Posts: 363; greg13070 on the AWS forum)
« Reply #1 on: January 13, 2008, 01:47:10 PM »

Yes, it should not be doing extra work, as you presume.

Please include the -d output.
babadi (Newbie, Posts: 4)
« Reply #2 on: January 13, 2008, 04:25:38 PM »

Sure. I deleted the files off S3 (using s3cmd), and then re-ran the same command with -d twice. Here is the output. Thanks.

martin@dev:~/dev-server-backup$ s3sync/s3sync.rb test/ bucketname:test -v -d
s3Prefix test
localPrefix /home/martin/dev-server-backup/test/
localTreeRecurse /home/martin/dev-server-backup/test
Test /home/martin/dev-server-backup/test/test2
Test /home/martin/dev-server-backup/test/test1
Test /home/martin/dev-server-backup/test/test4
Test /home/martin/dev-server-backup/test/test5
Test /home/martin/dev-server-backup/test/test3
local item /home/martin/dev-server-backup/test/test1
local node object init. Name:test1 Path:/home/martin/dev-server-backup/test/test1 Size:6 Tag:3e7705498e8be60520841409ebc69bc1
s3TreeRecurse bucketname test
Creating new connection
Trying command list_bucket bucketname max-keys 200 prefix test delimiter / with 100 retries left
Response code: 200
prefix found: /
source: test1
s3 node object init. Name:test1 Path:test/test1 Size: Tag:
Create node test1
test/test1
File extension: test/test1
Trying command put bucketname test/test1 #<S3::S3Object:0xb7a370d4> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test2
martin@dev:~/dev-server-backup$ s3sync/s3sync.rb test/ bucketname:test -v -d
s3Prefix test
localPrefix /home/martin/dev-server-backup/test/
localTreeRecurse /home/martin/dev-server-backup/test
Test /home/martin/dev-server-backup/test/test2
Test /home/martin/dev-server-backup/test/test1
Test /home/martin/dev-server-backup/test/test4
Test /home/martin/dev-server-backup/test/test5
Test /home/martin/dev-server-backup/test/test3
local item /home/martin/dev-server-backup/test/test1
local node object init. Name:test1 Path:/home/martin/dev-server-backup/test/test1 Size:6 Tag:3e7705498e8be60520841409ebc69bc1
s3TreeRecurse bucketname test
Creating new connection
Trying command list_bucket bucketname max-keys 200 prefix test delimiter / with 100 retries left
Response code: 200
source: test1
s3 node object init. Name:test1 Path:test/test1 Size: Tag:
Create node test1
test/test1
File extension: test/test1
Trying command put bucketname test/test1 #<S3::S3Object:0xb7964170> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test2
local node object init. Name:test2 Path:/home/martin/dev-server-backup/test/test2 Size:6 Tag:126a8a51b9d1bbd07fddc65819a542c3
source: test2
s3 node object init. Name:test2 Path:test/test2 Size: Tag:
Create node test2
test/test2
File extension: test/test2
Trying command put bucketname test/test2 #<S3::S3Object:0xb7958370> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test3
local node object init. Name:test3 Path:/home/martin/dev-server-backup/test/test3 Size:6 Tag:3bc3be114fb6323adc5b0ad7422d193a
source: test3
s3 node object init. Name:test3 Path:test/test3 Size: Tag:
Create node test3
test/test3
File extension: test/test3
Trying command put bucketname test/test3 #<S3::S3Object:0xb794c4a8> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test4
local node object init. Name:test4 Path:/home/martin/dev-server-backup/test/test4 Size:6 Tag:b5163cf270a3fbac34827c4a2713eef4
source: test4
s3 node object init. Name:test4 Path:test/test4 Size: Tag:
Create node test4
test/test4
File extension: test/test4
Trying command put bucketname test/test4 #<S3::S3Object:0xb793f654> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test5
local node object init. Name:test5 Path:/home/martin/dev-server-backup/test/test5 Size:6 Tag:bb4da129079c12d4ddaee64ba79a03ff
source: test5
s3 node object init. Name:test5 Path:test/test5 Size: Tag:
Create node test5
test/test5
File extension: test/test5
Trying command put bucketname test/test5 #<S3::S3Object:0xb79361f8> Content-Length 6 with 100 retries left
Response code: 200
martin@dev:~/dev-server-backup$ s3sync/s3sync.rb test/ bucketname:test -v -d
s3Prefix test
localPrefix /home/martin/dev-server-backup/test/
localTreeRecurse /home/martin/dev-server-backup/test
Test /home/martin/dev-server-backup/test/test2
Test /home/martin/dev-server-backup/test/test1
Test /home/martin/dev-server-backup/test/test4
Test /home/martin/dev-server-backup/test/test5
Test /home/martin/dev-server-backup/test/test3
local item /home/martin/dev-server-backup/test/test1
local node object init. Name:test1 Path:/home/martin/dev-server-backup/test/test1 Size:6 Tag:3e7705498e8be60520841409ebc69bc1
s3TreeRecurse bucketname test
Creating new connection
Trying command list_bucket bucketname max-keys 200 prefix test delimiter / with 100 retries left
Response code: 200
prefix found: /
source: test1
s3 node object init. Name:test1 Path:test/test1 Size: Tag:
Create node test1
test/test1
File extension: test/test1
Trying command put bucketname test/test1 #<S3::S3Object:0xb79cf150> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test2
local node object init. Name:test2 Path:/home/martin/dev-server-backup/test/test2 Size:6 Tag:126a8a51b9d1bbd07fddc65819a542c3
source: test2
s3 node object init. Name:test2 Path:test/test2 Size: Tag:
Create node test2
test/test2
File extension: test/test2
Trying command put bucketname test/test2 #<S3::S3Object:0xb79c3198> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test3
local node object init. Name:test3 Path:/home/martin/dev-server-backup/test/test3 Size:6 Tag:3bc3be114fb6323adc5b0ad7422d193a
source: test3
s3 node object init. Name:test3 Path:test/test3 Size: Tag:
Create node test3
test/test3
File extension: test/test3
Trying command put bucketname test/test3 #<S3::S3Object:0xb79b72bc> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test4
local node object init. Name:test4 Path:/home/martin/dev-server-backup/test/test4 Size:6 Tag:b5163cf270a3fbac34827c4a2713eef4
source: test4
s3 node object init. Name:test4 Path:test/test4 Size: Tag:
Create node test4
test/test4
File extension: test/test4
Trying command put bucketname test/test4 #<S3::S3Object:0xb79aa418> Content-Length 6 with 100 retries left
Response code: 200
local item /home/martin/dev-server-backup/test/test5
local node object init. Name:test5 Path:/home/martin/dev-server-backup/test/test5 Size:6 Tag:bb4da129079c12d4ddaee64ba79a03ff
source: test5
s3 node object init. Name:test5 Path:test/test5 Size: Tag:
Create node test5
test/test5
File extension: test/test5
Trying command put bucketname test/test5 #<S3::S3Object:0xb79a1070> Content-Length 6 with 100 retries left
Response code: 200
martin@dev:~/dev-server-backup$
ferrix (Sr. Member, Posts: 363; greg13070 on the AWS forum)
« Reply #3 on: January 14, 2008, 12:22:17 AM »

Your bucket is really "bucketname"? Can you give greg13070 access so I can try to replicate the problem? And what env/yml values are you using (omitting your user ID and secret key, of course)?
babadi (Newbie, Posts: 4)
« Reply #4 on: January 15, 2008, 11:52:43 PM »

I'd be happy to... although it doesn't seem to be bucket-specific; I've created several buckets and seen the same behavior on each. This is probably a dumb question, but how do I use s3cmd to set ACLs? I thought I could do something like

s3cmd.rb createbucket martinsbucket x-amz-acl:public-read-write

but this fails with a 400 Bad Request.

Thanks for your help.
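
For what it's worth, public-read-write is a valid canned ACL at the S3 REST level, so the 400 is probably down to how s3cmd.rb parses that argument. For reference, here is a standalone Ruby sketch that creates a bucket with a canned ACL by signing the REST request directly (2008-era HMAC-SHA1 request signing; credentials come from the environment, and the bucket name is a placeholder):

Code:
require 'net/http'
require 'openssl'
require 'base64'
require 'time'

key    = ENV['AWS_ACCESS_KEY_ID']      # your access key id
secret = ENV['AWS_SECRET_ACCESS_KEY']  # your secret key
bucket = 'martinsbucket'
acl    = 'public-read-write'           # canned ACLs: private, public-read,
                                       # public-read-write, authenticated-read
date   = Time.now.httpdate

# String to sign: VERB, Content-MD5, Content-Type, Date, amz headers, resource
string_to_sign = "PUT\n\n\n#{date}\nx-amz-acl:#{acl}\n/#{bucket}/"
signature = Base64.encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret, string_to_sign)
).strip

http = Net::HTTP.new("#{bucket}.s3.amazonaws.com", 443)
http.use_ssl = true
req = Net::HTTP::Put.new('/')
req['Date']          = date
req['x-amz-acl']     = acl
req['Authorization'] = "AWS #{key}:#{signature}"
puts http.request(req).code            # 200 means the bucket was created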
ferrix (Sr. Member, Posts: 363; greg13070 on the AWS forum)
« Reply #5 on: January 16, 2008, 10:37:49 AM »

I am not sure you can make a bucket publicly writable... can you? Does the request work if you do just public-read? Personally I use s3fox to poke at permissions after upload. I should probably update s3cmd to have that type of feature.
babadi (Newbie, Posts: 4)
« Reply #6 on: January 16, 2008, 10:50:08 PM »

Ah... I think I've found the issue...

It seems that if I include an S3 prefix, e.g.

$ ruby s3sync/s3sync.rb MY_BACKUP_ROOT/ bucketname:test -v

then every file gets uploaded every time. On the other hand, if I remove the prefix, e.g.

$ ruby s3sync/s3sync.rb MY_BACKUP_ROOT/ bucketname: -v

then only the missing files get uploaded. (That squares with the -d output above: with a prefix, every S3 node is initialized with empty Size and Tag, so the size/MD5 comparison never finds a match.)

Thanks!
phyzome (Newbie, Posts: 3)
« Reply #7 on: March 30, 2008, 11:46:15 AM »

Even with babadi's tip, it doesn't always work.

I need to synchronize a few MiB every once in a while from a directory containing approx. 4000 items, all of which are immutable. (They are named after their MD5 sums.) It's unreasonable to have s3sync push the whole directory up to Amazon just for that. Is there any way to tell s3sync to ignore file mod dates when comparing?
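
In the meantime, since the files are immutable and their names are their MD5 sums, a name-based diff sidesteps mod dates entirely: dump the bucket's key list once, stage only the missing files, and sync just the staging directory. A rough Ruby sketch (the paths and remote_keys.txt are placeholders; produce the key list however your tools allow):

Code:
require 'set'
require 'fileutils'

# Hypothetical pre-filter: hardlink into a staging directory only those
# files whose (MD5) names are missing from a dump of the bucket's keys,
# then point s3sync at the staging directory.
src     = '/path/to/md5-named-files'
staging = '/path/to/staging'            # same filesystem, for hardlinks
remote  = Set.new(File.readlines('remote_keys.txt').map(&:chomp))

FileUtils.mkdir_p(staging)
Dir.entries(src).each do |name|
  next if name.start_with?('.')         # skip . and ..
  next if remote.include?(name)         # already in the bucket
  FileUtils.ln(File.join(src, name), File.join(staging, name))
end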
phyzome (Newbie, Posts: 3)
« Reply #8 on: March 30, 2008, 11:54:09 AM »

Quote from: babadi (Reply #6)
> It seems that if I include an S3 prefix (bucketname:test), then every
> file gets uploaded every time. On the other hand, if I remove the prefix
> (bucketname:), then only the missing files get uploaded.

That's not enough. You also have to make sure your local directory (MY_BACKUP_ROOT) does not end in a slash.

Cancel that, still broken. This needs to get fixed.

I'm uploading using this command:

Code:
./s3sync.rb --delete --public-read --cache-control="public, max-age=315360000" --verbose "/mnt/photos/public" bofgallery:

Setup:
  • /mnt/photos/public contains a folder called "data", which contains hardlinks to image files
  • Here is a representative element of the bucket's contents.

Results from testing (with files already synced):
  • If I use /mnt/photos/public/ and bofgallery:, the files are dropped directly into the bucket with no "data/" prefix -- a duplication, of course.
  • If I use /mnt/photos/public and bofgallery:, s3sync creates a node called "public" in the bucket, and stops.
  • If I use /mnt/photos/public/data and bofgallery:, nothing seems to happen.
  • And if I use /mnt/photos/public/data/ and bofgallery:data/, everything gets uploaded over again.

I'm freakin' out here!

ETA: I'm creating a new bucket and avoiding subdirectories. That's the only thing that seems to work. So, the new script uses /mnt/photos/public/ and bofgallerydata:.
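
That is, presumably the same flags as before, pointed at the new bucket:

Code:
./s3sync.rb --delete --public-read --cache-control="public, max-age=315360000" --verbose "/mnt/photos/public/" bofgallerydata: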
« Last Edit: March 30, 2008, 02:12:52 PM by phyzome »
ile (Newbie, Posts: 1)
« Reply #9 on: June 19, 2008, 12:00:30 AM »

I was looking for backup software for S3...

Is this sync problem still an issue?
phyzome (Newbie, Posts: 3)
« Reply #10 on: June 19, 2008, 06:18:56 AM »

Quote from: ile
> I was looking for backup software for S3... Is this sync problem still an issue?

I've had success using this format:

Code:
./s3sync.rb --delete --verbose "/mnt/photos/public/" bofgallery:

Mind your slashes! (As with rsync, a trailing slash on the source means "sync the contents of this directory"; without one, the directory itself becomes a node in the bucket, as seen in Reply #8.)