S3Sync.net
  Show Posts
1  General Category / Questions / Found it on: November 28, 2007, 08:31:07 AM
In case anyone finds this when Googling for a similar error...

I found the problem.  I took a Wireshark trace of the session and found that the error was actually returned by the server halfway through the put, although, of course, s3sync doesn't see the error until it has finished sending the file.

The full text was:

Code:
HTTP/1.1 400 Bad Request
x-amz-request-id: XXXXXXXXXXXXXXXX
x-amz-id-2: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Wed, 28 Nov 2007 11:09:42 GMT
Connection: close
Server: AmazonS3

15c
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>XXXXXXXXXXXXXXXX</RequestId><HostId>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error>

So the underlying problem is that the file transfer itself runs into trouble, which the other end notices and times out.  I understand that (I am using a fairly flaky and relatively slow connection involving long-distance rural WiFi).

I guess it would be good if s3sync could display the response text on an HTTP error, which would make it easier to find this sort of problem in the future.
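
Something along these lines would do it, I think.  This is just a rough sketch using Ruby's Net::HTTP rather than s3sync's actual code, and the bucket, key and file names are placeholders:

Code:
require 'net/http'

# Illustrative only: PUT a file and dump the XML error body on failure.
# The bucket, key and file names here are placeholders, and a real S3
# request also needs the Date/Authorization headers that s3sync builds.
uri = URI("https://s3.amazonaws.com/my-bucket/my-key")
Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
  req = Net::HTTP::Put.new(uri.request_uri)
  req.body_stream = File.open("bigfile.dar.gpg", "rb")
  req.content_length = File.size("bigfile.dar.gpg")
  res = http.request(req)
  unless res.is_a?(Net::HTTPSuccess)
    warn "S3 returned #{res.code} #{res.message}"
    warn res.body   # the <Error><Code>RequestTimeout</Code>... text ends up here
  end
end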

Graham
2  General Category / Questions / Bad request error at end of transfer on: November 20, 2007, 08:00:47 AM
I am trying to synchronise several 100MB files using s3sync.  About half of the attempts fail due to losing connection halfway through (I understand this, and am looking into it).  Of the remainder, about half of them fail right at the end, after the upload itself seems to have completed.  Here is an extract from my log...

Quote
Progress: 104524800b  37786b/s  99%       
Progress: 104613888b  37803b/s  99%       
Progress: 104699904b  37818b/s  99%       
Progress: 104787968b  37834b/s  99%       S3 command failed:
put backup.cobb.me.uk DARhome/DARhomeDiff01.3.dar.gpg.01 #<S3::S3Object:0x2b5b0fc47418> Content-Length 104857600
With result 400 Bad Request

I would be interested to know what is failing, to see if there is anything I can do to reduce the number of failures: each file takes about an hour to upload, and about half of them fail after that full upload.  Also, s3sync doesn't retry after that error.
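
In the meantime I may just wrap the call in a retry loop myself.  This is only a sketch: it assumes s3sync.rb exits non-zero when a transfer fails (which I haven't verified), and the local path is made up:

Code:
#!/usr/bin/env ruby
# Hypothetical wrapper: re-run s3sync until it succeeds, up to 5 attempts.
# Assumes s3sync.rb exits non-zero on a failed transfer; local path is made up.
cmd = "./s3sync.rb --ssl --progress /backups/DARhome/ backup.cobb.me.uk:DARhome"
5.times do |attempt|
  puts "Attempt #{attempt + 1}: #{cmd}"
  break if system(cmd)   # system returns true only on a zero exit status
  sleep 60               # pause before trying again
end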

For example, is it at all likely that the problem is related to it taking a long time to upload the file (in which case I might be better off splitting the file into much smaller chunks)?
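
If chunking does turn out to help, even something as crude as this would do for cutting a slice into pieces before handing them to s3sync (just a sketch; the piece size and output names are arbitrary):

Code:
# Sketch: split one DAR slice into 10MB pieces before syncing them.
# The piece size and the ".partNNN" output names are arbitrary.
chunk_size = 10 * 1024 * 1024
part = 0
File.open("DARhomeDiff01.3.dar.gpg.01", "rb") do |f|
  while (data = f.read(chunk_size))
    File.open(format("DARhomeDiff01.3.dar.gpg.01.part%03d", part), "wb") { |o| o.write(data) }
    part += 1
  end
end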

I do know that it is not a problem with any particular file: if I retry, the same file still has the same 50-50 chance of succeeding next time.

Graham
3  General Category / Questions / Re: How do I make it start where it left off? on: October 19, 2007, 12:04:18 PM
Thanks for looking into this.  I can confirm that I have worked round the problem by using --recursive (even though there are no sub-directories).

Graham
4  General Category / Questions / Re: How do I make it start where it left off? on: October 13, 2007, 11:39:47 AM
I think I am seeing the same problem, only it occurs even when the sync completes successfully.

I created a new bucket (using s3cmd.rb) and then issued the following command:

Code:
./s3sync.rb -d --ssl --verbose --delete --progress /tmp/aa/ backup.cobb.me.uk:a

/tmp/aa contained two files: a and b.

As expected, the command copied both files.  Here is the log:

Code:
s3Prefix a
localPrefix /tmp/aa/
localTreeRecurse /tmp/aa
Test /tmp/aa/a
Test /tmp/aa/b
local item /tmp/aa/a
local node object init. Name:a Path:/tmp/aa/a Size:2 Tag:60b725f10c9c85c70d97880dfe8191b3
s3TreeRecurse backup.cobb.me.uk a
Trying command list_bucket backup.cobb.me.uk max-keys 200 prefix a delimiter / with 100 retries left
Response code: 200
source: a
s3 node object init. Name:a Path:a/a Size: Tag:
Create node a
a/a
File extension: a/a
Trying command put backup.cobb.me.uk a/a #<S3::S3Object:0x2b27b4e05950> Content-Length 2 with 100 retries left
Response code: 200
local item /tmp/aa/b
local node object init. Name:b Path:/tmp/aa/b Size:2 Tag:3b5d5c3712955042212316173ccf37be
source: b
s3 node object init. Name:b Path:a/b Size: Tag:
Create node b
a/b
File extension: a/b
Trying command put backup.cobb.me.uk a/b #<S3::S3Object:0x2b27b4d57e90> Content-Length 2 with 100 retries left
Response code: 200

I then reissued exactly the same command and got almost exactly the same log:

Code:
s3Prefix a
localPrefix /tmp/aa/
localTreeRecurse /tmp/aa
Test /tmp/aa/a
Test /tmp/aa/b
local item /tmp/aa/a
local node object init. Name:a Path:/tmp/aa/a Size:2 Tag:60b725f10c9c85c70d97880dfe8191b3
s3TreeRecurse backup.cobb.me.uk a
Trying command list_bucket backup.cobb.me.uk max-keys 200 prefix a delimiter / with 100 retries left
Response code: 200
prefix found: /
source: a
s3 node object init. Name:a Path:a/a Size: Tag:
Create node a
a/a
File extension: a/a
Trying command put backup.cobb.me.uk a/a #<S3::S3Object:0x2b88a5339448> Content-Length 2 with 100 retries left
Response code: 200
local item /tmp/aa/b
local node object init. Name:b Path:/tmp/aa/b Size:2 Tag:3b5d5c3712955042212316173ccf37be
source: b
s3 node object init. Name:b Path:a/b Size: Tag:
Create node b
a/b
File extension: a/b
Trying command put backup.cobb.me.uk a/b #<S3::S3Object:0x2b88a528bd48> Content-Length 2 with 100 retries left
Response code: 200

The files were copied again.

I was hoping to use s3sync to synchronise some files which are about 1.5GB each (slices of a backup), so copying files unnecessarily is a big problem!
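
For what it's worth, the debug output above shows the local node getting an md5 Tag but the S3 node's Tag coming back empty, which is presumably why the comparison always decides to re-upload.  The check I was expecting is roughly the following.  This is only a sketch with Ruby's standard library, not s3sync's own code:

Code:
require 'digest/md5'
require 'net/http'

# Sketch: compare the local file's MD5 with the object's ETag and skip
# the upload when they match.  Not s3sync's own code; a real request to
# a private bucket also needs the S3 Authorization header.
local_md5 = Digest::MD5.file("/tmp/aa/a").hexdigest

uri = URI("https://backup.cobb.me.uk.s3.amazonaws.com/a/a")
res = Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
  http.head(uri.request_uri)
end
remote_md5 = res["ETag"].to_s.delete('"')   # for a simple put the ETag is the MD5

puts(local_md5 == remote_md5 ? "unchanged, would skip" : "changed, would upload")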

Any ideas?

Graham