S3Sync.net
Author Topic: Bad request error at end of transfer  (Read 3490 times)
Graham Cobb
Newbie
Posts: 4
« on: November 20, 2007, 08:00:47 AM »

I am trying to synchronise several 100MB files using s3sync.  About half of the attempts fail because the connection is lost halfway through (I understand this, and am looking into it).  Of the remainder, about half fail right at the end, after the upload itself seems to have completed.  Here is an extract from my log:

Quote
Progress: 104524800b  37786b/s  99%       
Progress: 104613888b  37803b/s  99%       
Progress: 104699904b  37818b/s  99%       
Progress: 104787968b  37834b/s  99%       S3 command failed:
put backup.cobb.me.uk DARhome/DARhomeDiff01.3.dar.gpg.01 #<S3::S3Object:0x2b5b0fc47418> Content-Length 104857600
With result 400 Bad Request

I would be interested to know what is failing, and whether there is anything I can do to reduce the number of failures: each file takes about an hour to upload, and about half of those uploads fail after completing.  Also, s3sync doesn't retry after that error.
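In the meantime, a retry could be wrapped around the upload from a driver script. This is a hedged sketch, not part of s3sync: `with_retries` is a hypothetical helper that re-runs a block with exponential backoff when it raises an error.

```ruby
# Hypothetical retry wrapper (not an s3sync feature): re-run a block up
# to max_attempts times, sleeping with exponential backoff between tries.
def with_retries(max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts  # give up after the last attempt
    sleep(base_delay * (2 ** (attempts - 1)))
    retry
  end
end

# Usage sketch: with_retries { system('s3sync.rb ...') or raise 'upload failed' }
```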

For example, is it at all likely that the problem is related to the upload taking a long time (in which case I might be better off splitting the file into much smaller chunks)?
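Splitting could be done before invoking s3sync. A minimal sketch, assuming a simple numbered-part naming scheme (the chunk size and `split_file` helper are illustrative, not s3sync options):

```ruby
# Hypothetical chunking sketch: split a large file into numbered part
# files so each upload (and any retry) only risks one small chunk.
CHUNK_SIZE = 10 * 1024 * 1024  # 10 MB per part (illustrative)

def split_file(path, chunk_size = CHUNK_SIZE)
  parts = []
  File.open(path, 'rb') do |f|
    index = 0
    while (data = f.read(chunk_size))          # nil at EOF ends the loop
      part = format('%s.part%03d', path, index)
      File.open(part, 'wb') { |out| out.write(data) }
      parts << part
      index += 1
    end
  end
  parts
end
```

The parts can then be reassembled after download with a plain `cat file.part* > file`.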

I do know that it is not a problem with a particular file: if I retry, the same file still has the same 50-50 chance of succeeding next time.

Graham
ferrix
Sr. Member
Posts: 363
(I am greg13070 on AWS forum)
« Reply #1 on: November 20, 2007, 02:57:38 PM »

My guess is that the connection is somehow being interfered with before all the bytes are sent, so S3 replies with 400 (an HTTP protocol error).

Have you tried using -s, in case some intermediary is proxying your HTTP traffic and accidentally corrupting it?

Other than that, I can only really help if you send me a link to a Wireshark trace of the session.
Graham Cobb
Newbie
Posts: 4
« Reply #2 on: November 28, 2007, 08:31:07 AM »

In case anyone finds this when Googling for a similar error...

I found the problem.  I took a Wireshark trace of the session and found that the error was actually returned by the server halfway through the PUT, although, of course, s3sync doesn't see the error until it has finished sending the file.

The full text was:

Code:
HTTP/1.1 400 Bad Request
x-amz-request-id: XXXXXXXXXXXXXXXX
x-amz-id-2: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Wed, 28 Nov 2007 11:09:42 GMT
Connection: close
Server: AmazonS3

15c
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>XXXXXXXXXXXXXXXX</RequestId><HostId>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error>

The problem is the file transfer stalling for long enough that the other end notices and times out.  That makes sense to me (I am using a fairly flaky and relatively slow connection involving long-distance rural WiFi).

I guess it would be good if s3sync could display the response text on an HTTP error, which would make it easier to find this sort of problem in the future.
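For illustration, the XML error body above carries the real cause in its `<Code>` and `<Message>` elements, which a tool could extract and log. A hedged sketch (`parse_s3_error` is a hypothetical helper, not s3sync code), using Ruby's standard REXML library:

```ruby
require 'rexml/document'

# Hypothetical helper: pull <Code> and <Message> out of an S3 XML error
# body, so a failure can be logged as e.g. "RequestTimeout: Your socket
# connection..." instead of just "400 Bad Request".
def parse_s3_error(xml)
  doc = REXML::Document.new(xml)
  code = REXML::XPath.first(doc, '//Code')
  message = REXML::XPath.first(doc, '//Message')
  { code: code && code.text, message: message && message.text }
end
```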

Graham
ferrix
Sr. Member
Posts: 363
(I am greg13070 on AWS forum)
« Reply #3 on: November 28, 2007, 12:16:30 PM »

I actually wish the Ruby HTTP library could understand early responses... that might make it possible to implement 100-continue handling.  I don't remember for sure, but I think my code emits the HTTP response on error if you use -d.