S3Sync.net
Author Topic: how to skip large files?  (Read 3233 times)
GetuWired
Newbie
Posts: 2
« on: March 11, 2010, 02:28:42 PM »

I know there is a 2 GB issue with AWS.

The goal is to use s3sync to back up our backups to S3. It's working fine until it gets to some backups that are larger than 2 GB, and we run into:

Broken pipe: Broken pipe
99 retries left, sleeping for 30 seconds
Connection reset: Connection reset by peer
98 retries left, sleeping for 30 seconds

This will take forever to work itself out. If we don't care about the files larger than 2 GB, how can I lower this retry counter to something like 2, or just have it skip them altogether but keep uploading the rest of the files?

We want this to be an automated process, and since the backups change size daily there is no way to know in advance which ones will be over 2 GB. Almost all of them are under 2 GB.

Any suggestions?
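
One possible mitigation, offered only as an untested sketch: the s3sync README describes S3SYNC_RETRIES and S3SYNC_WAITONERROR environment variables that appear to control the retry loop shown in the log above. Setting them before the run should at least shorten the stall on an oversized file, though whether s3sync then moves on to the next file or gives up on the whole run may depend on the version in use.

# Untested sketch: shrink the retry budget so a failing >2 GB upload is
# abandoned quickly. Assumes the S3SYNC_RETRIES / S3SYNC_WAITONERROR
# environment variables from the s3sync README; the values are illustrative.
export S3SYNC_RETRIES=2        # the log above suggests the default is 100
export S3SYNC_WAITONERROR=5    # seconds between retries (the log shows 30)
ruby s3sync.rb -r --ssl --delete /home/folder/localuploadfolder/ mybucket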


GetuWired
Newbie
Posts: 2
« Reply #1 on: March 15, 2010, 08:49:22 AM »

If there is no way to modify the .rb to check for this, is there a better way to push the files than:

ruby s3sync.rb -r --ssl --delete /home/folder/localuploadfolder/ mybucket

Can you suggest a shell script that would loop through localuploadfolder and upload just the files that are less than 2 GB?
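
A rough, untested sketch of one approach, assuming GNU find and cp are available: hard-link every file smaller than 2 GB into a parallel staging tree, then point the exact same s3sync command at that tree. The staging path /home/folder/s3stage is made up for illustration.

#!/bin/bash
# Untested sketch: stage hard links to every file under 2 GB, then sync the
# staging tree with the same s3sync.rb invocation used above.

SRC=/home/folder/localuploadfolder
STAGE=/home/folder/s3stage         # hypothetical staging directory

rm -rf "$STAGE"
mkdir -p "$STAGE"
cd "$SRC" || exit 1

# -size -2147483648c tests the size in bytes (strictly under 2 GiB), which
# sidesteps GNU find's unit rounding on "-size -2G". cp --link creates hard
# links (no extra disk space) and --parents preserves the directory layout.
find . -type f -size -2147483648c -exec cp --parents --link {} "$STAGE" \;

# Same command as before, pointed at the staging tree.
ruby s3sync.rb -r --ssl --delete "$STAGE/" mybucket

Hard links only work if the staging directory sits on the same filesystem as the backups; if it does not, dropping --link falls back to real copies at the cost of disk space.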