S3Sync.net
  Show Posts
1  General Category / Questions / Re: Broken Pipe: solved? on: January 08, 2008, 06:10:18 AM
I'm only thinking of backward compatibility. You know... people who upgrade the script and don't test *everything* properly - like me :-)

But yes, you are of course correct.

Faris.
2  General Category / Questions / Re: Broken Pipe: solved? on: January 07, 2008, 01:51:43 PM
Well, 1.2.4 still gives a broken pipe error but recovers from it gracefully-ish :-)


Code:

(....checks 10Gb worth of mostly 1Gb files in bak1 which were backed up previously....)

(....gets to last file  -- only a few bytes in size -- in bak1 directory then tries to go to next directory, containing some more 1Gb files)

S3 item totalbackup/bak1/lastfile
s3 node object init. Name:bak1/lastfile Path:totalbackup/bak1/lastfile Size:49 Tag:[redacted]
source: bak1/lastfile
dest: bak1/lastfile
Node bak1/lastfile unchanged
local item /home/me/totalbackup/bak2
local node object init. Name:bak2 Path:/home/me/totalbackup/bak2 Size:38 Tag:[redacted]
source: bak2
s3 node object init. Name:bak2 Path:totalbackup/bak2 Size: Tag:
Create node bak2
totalbackup/bak2
File extension: totalbackup/bak2
Trying command put mybucket totalbackup/bak2 #<S3::S3Object:0x28a2c0c4> Content-Length 38 with 100 retries left
Broken pipe: Broken pipe
No result available
99 retries left
Trying command put mybucket totalbackup/bak2 #<S3::S3Object:0x28a2c0c4> Content-Length 38 with 99 retries left
Progress: 38b  1b/s  100%       Response code: 200

bak2 is a dir node
localTreeRecurse /home/me/totalbackup bak2
Test /home/me/totalbackup/bak2/file1

(.....etc.....)

(....correctly syncs everything....)
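
In case it helps anyone reading that output: the way I read it, the put is wrapped in a retry loop roughly like the sketch below. This is only my own simplified illustration (the block/yield helper is mine), not the actual code in s3try.rb.

Code:
# Rough sketch only - not s3sync's code. It just mirrors the retry-on-broken-pipe
# behaviour visible in the debug output above.
def with_retries(retries = 100)
  result = nil
  while retries > 0
    begin
      result = yield      # the real S3 request (the put, in this case) would go here
      break               # a successful response ends the loop
    rescue Errno::EPIPE => e
      $stderr.puts "Broken pipe: #{e}"
      retries -= 1
      $stderr.puts "#{retries} retries left"
    end
  end
  result
end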


One small wish I'd make for 1.2.5: could the directory containing s3sync.rb be added to the places s3sync looks for the config file? With 1.2.4 I have to explicitly export the path as S3CONF, which I didn't have to do in 1.2.3. No big deal, but it turns out the shell script I've been using to launch s3sync for my tests is not the same script my cronjob launches, so although I had updated my test script to export S3CONF, my cronjob script had not been updated and it failed to sync last night :-)
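
Something along these lines is what I have in mind - purely a sketch of the idea, not s3sync's actual config-loading code, and the locations and file name below are only illustrative:

Code:
# Sketch of the wish: look next to s3sync.rb itself as well as the usual places.
# The search list and config file name here are illustrative, not s3sync's real ones.
candidates = [
  ENV['S3CONF'],                          # explicit override (what I export now)
  File.dirname(File.expand_path($0)),     # the directory s3sync.rb lives in (the wish)
  File.join(ENV['HOME'].to_s, '.s3conf'),
  '/etc/s3conf'
].compact

conf_dir = candidates.find { |dir| File.exist?(File.join(dir, 's3config.yml')) }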

Faris.
3  General Category / Questions / Re: Broken Pipe: solved? on: January 06, 2008, 11:44:51 AM
Thank you! I'll test it as soon as it is out.
4  General Category / Questions / Re: Broken Pipe: solved? on: January 05, 2008, 06:10:48 PM
I'm afraid something isn't quite right for me:

*I left the original modification in place as well as adding the new one.* Could this be the problem?

This is the point where, with no modifications, you'd get the complete failure, or, with the first modification, a timeout followed by 99 retries left.

Unfortunately I still get the error, as you can see below. Basically no change.


Code:
.....
prefix found: /bak2/
s3TreeRecurse mybucket totalbackup /bak2/
Trying command list_bucket mybucket max-keys 200 prefix totalbackup/bak2/ delimiter / with 100 retries left
EOF error: end of file reached
Creating new connection
No result available
99 retries left
Creating new connection
Trying command list_bucket mybucket max-keys 200 prefix totalbackup/bak2/ delimiter / with 99 retries left
Response code: 200
S3 item totalbackup/bak2/file1
s3 node object init. Name:bak2/file1 Path:totalbackup/bak2/file1 Size:223252480 Tag:[redacted]


Yes, it is possible I added the code to the wrong place, but it looks right to me:

Code:
[........]
    forceRetry = true
    $stderr.puts "Connection timed out: #{e}"
  rescue EOFError => e
    # i THINK this is happening like a connection reset
    forceRetry = true
    $stderr.puts "EOF error: #{e}"
    $stderr.puts "Creating new connection" if $S3syncOptions['--debug']
    $S3syncLastBucket = bucket
    $S3syncHttp = $S3syncConnection.make_http(bucket)
  rescue OpenSSL::SSL::SSLError => e
    forceRetry = true
[....]
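
For what it's worth, here is my mental picture of what that rescue is doing, boiled down to a self-contained sketch. This is not the real s3try.rb: the request and reconnect lambdas below just stand in for the actual S3 call and for $S3syncConnection.make_http.

Code:
require 'timeout'

# Simplified sketch of "retry, but rebuild the connection first on EOF".
def attempt(request, reconnect)
  request.call
rescue Timeout::Error => e
  $stderr.puts "Connection timed out: #{e}"
  :retry                       # tell the outer retry loop to try again
rescue EOFError => e
  # The far end closed the socket, so build a fresh connection before the
  # next attempt rather than reusing the dead one.
  $stderr.puts "EOF error: #{e}"
  $stderr.puts "Creating new connection"
  reconnect.call
  :retry
end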
5  General Category / Questions / Re: Broken Pipe: solved? on: January 04, 2008, 06:43:41 AM
Thanks! I'll do it today and report back asap.

Faris.
6  General Category / Questions / Re: Broken Pipe: solved? on: December 29, 2007, 11:21:48 AM
I'm struggling with this problem too.

I've discovered something interesting though. I'm not sure how useful it might be but I thought I'd post it anyway.

Yesterday I was able to s3sync just over 10GB of files into an EU bucket with no problem.

Overnight some additional data was added to this fileset, so I ran s3sync to get the differences onto S3 and, lo and behold, I was hit by the EOF/broken pipe issue.

If I use a different prefix I don't get the errors.

e.g. this is what I used yesterday:

s3sync.rb -r --progress /home/faris/totalbackup/  s3eu:totalbackup

The same command today results in the EOF/broken pipe issue.

But if I do this instead, with tb as the prefix instead of totalbackup:

s3sync.rb -n -r --progress /home/faris/totalbackup/  s3eu:tb

It works fine. But I'm only mentioning this as an aside. What I really think is interesting comes later in my post...

All of the above is without the code modification mentioned earlier in the thread.

With the code modification the problem is resolved but not quite in the way I expected.

Forgive me if I'm giving too much detail, but I'm hoping that it might help find the actual cause of the issue.

Essentially I have a 7 day backup cycle, with a full backup on day 1, and incremental backups on subsequent days.

I'm backing up a directory structure similar to this:
/totalbackup/bak1
    file1
    file2
    file3
    (... and a few more files)

/totalbackup/bak2
    file1
    file2

/totalbackup/bak3
    (same as bak2)

No file is larger than 1GB in size, but most of them are 1GB exactly.

Now to explain why I'm wasting your time explaining the file structure...

Basically, if I use

s3sync.rb -d -r --progress /home/faris/totalbackup/  s3eu:totalbackup

(s3eu:totalbackup already contains yesterday's sync; s3try modified as mentioned in this thread)

then I see that s3sync examines all the files in /bak1 with no issues and only spits out the EOF error when it starts looking at bak2.

With the code modification mentioned here, instead of also giving a broken pipe error and then going round in circles getting nowhere, it continues correctly:

Code:
(.....)
local node object init. Name:bak2/file1 Path:/totalbackup/bak2/file1 Size:206469120 Tag:[redacted]
prefix found: /bak2/
s3TreeRecurse s3eu totalbackup /bak2/
Trying command list_bucket s3eu max-keys 200 prefix totalbackup/bak2/ delimiter / with 100 retries left
EOF error: end of file reached
No result available
99 retries left
Creating new connection
Trying command list_bucket s3eu max-keys 200 prefix totalbackup/bak2/ delimiter / with 99 retries left
Response code: 200
S3 item totalbackup/bak2/file1
(.....)

So whatever is going wrong seems to be happening when the second list_bucket command is sent to S3?
Or am I misinterpreting what -d is telling me?
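
To make that guess concrete, the picture in my head is something like the toy snippet below - purely my own illustration of a keep-alive connection going stale, nothing to do with s3sync internals, and the endpoint and sleep time are arbitrary (newer Net::HTTP versions may even retry idempotent requests themselves):

Code:
require 'net/http'

# Toy demo: the first request on a keep-alive connection is fine, but if the server
# quietly drops the idle socket, only the *next* request notices (EOF / broken pipe).
http = Net::HTTP.new('s3.amazonaws.com', 80)
http.start
http.request(Net::HTTP::Get.new('/'))       # first request works

sleep 600                                   # long gap, like checking 10GB of unchanged files

begin
  http.request(Net::HTTP::Get.new('/'))     # second request goes out on the stale socket
rescue EOFError, Errno::EPIPE => e
  $stderr.puts "#{e.class}: #{e.message} - reopening connection and retrying"
  http.finish rescue nil
  http.start
  http.request(Net::HTTP::Get.new('/'))     # retry on a fresh connection
end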
