S3Sync.net
Author Topic: Full TCP buffer problem  (Read 5704 times)
insane85
Newbie
Posts: 4


« on: March 19, 2009, 12:59:15 PM »

Hi,
after running into a TCP buffer error when uploading big files, I now split my data into 4 MB tar parts with this command before using s3sync:
Code:
tar -czf - "$domaindir/$website" | split -b 4m - "$destdir/$website/$website.tar.gz" &> /dev/null
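For reference, here is a sketch of how the parts can be recombined on restore. The variable names match the command above, and I'm assuming the default two-letter `split` suffixes (`aa`, `ab`, ...), so the shell glob sorts them back into order:

```shell
# Recombine the split parts and unpack them in one pass (sketch; assumes
# the default alphabetical split suffixes so the ?? glob sorts correctly)
cat "$destdir/$website/$website.tar.gz"?? | tar -xzf -
```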

but when my bash script runs:
Code:
ruby /usr/emanuele/s3sync/s3sync.rb -r $destdir/$website/ backup-server-integramenti1:backup-dati/$website/

the TCP buffer becomes full after a few seconds.
In my Plesk panel I normally see (name, current, soft limit, unit, description):

tcpsndbuf   163,228   2,867,477   bytes       Total size of buffers used to send data over TCP network connections

during s3sync uploads:

tcpsndbuf   2,712,477   2,867,477   bytes       Total size of buffers used to send data over TCP network connections


Why?!

- maybe all the s3sync connections run at the same time?
- if that is true, is there a way to limit the number of simultaneous s3sync connections?
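To watch the counter while the upload runs, here is a sketch that pulls the tcpsndbuf line out of `/proc/user_beancounters`, the file Virtuozzo/OpenVZ containers expose. The function name and the assumed column order (`resource held maxheld barrier limit failcnt`) are my own choices, not anything from the panel:

```shell
# Print the held and limit values for tcpsndbuf (sketch; assumes the
# usual beancounter column order: resource held maxheld barrier limit failcnt)
tcpsndbuf_usage() {
  grep tcpsndbuf "${1:-/proc/user_beancounters}" |
    awk '{print "held=" $2 " limit=" $5}'
}
```

Running `tcpsndbuf_usage` in a loop while s3sync uploads should show whether the held value climbs toward the limit.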
« Last Edit: March 19, 2009, 01:00:56 PM by insane85 »
ferrix
Sr. Member
Posts: 363


(I am greg13070 on AWS forum)


« Reply #1 on: March 19, 2009, 02:01:15 PM »

Actually, the current program can only have one transfer operating at a time.  Multiple simultaneous transfers are a feature that I plan to add in the new version.

I don't really know how to help with your problem.  I don't have control over the buffering from Ruby.  The only thing I can think of is to purposefully slow down the transfer speed.
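If slowing down is the route you take, one possibility (just a sketch, I haven't verified it against this setup) is a userspace shaper such as trickle wrapped around the ruby process:

```shell
# Cap s3sync's upload bandwidth with trickle in standalone mode (sketch;
# requires the trickle package, and 256 KB/s is an arbitrary example rate)
trickle -s -u 256 ruby /usr/emanuele/s3sync/s3sync.rb -r $destdir/$website/ backup-server-integramenti1:backup-dati/$website/
```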
insane85
Newbie
Posts: 4


« Reply #2 on: March 19, 2009, 08:36:23 PM »

I tried with the --progress option:
Code:
ruby /usr/emanuele/s3sync/s3sync.rb -r -v --progress "/usr/emanuele/s3sync/archivi/dati/integramenti.it/" backup-server-integramenti1:backup-dati/integramenti.it/

This is the response:

Create node integramenti.it.tar.gzaa
Progress: 557056b  557044b/s  5%       Connection reset: Connection reset by peer
Skipping backup-dati/integramenti.it/integramenti.it.tar.gzaa: No buffer space available - connect(2)




I tried making split archive files of 10 MB (I've also tried 4 MB). After compression the domain is made up of about 1500 files (when split at 10 MB each).
Sometimes the error comes at the 1st file (integramenti.it.tar.gzaa), sometimes at the 10th or 20th... with no regularity!

This is the line in the Plesk panel (as in my previous post):
tcpsndbuf   2,712,477   2,867,477   bytes       Total size of buffers used to send data over TCP network connections

In the Plesk help I found (Virtuozzo section):
tcpsndbuf is the total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet

"not acknowledged by the remote side yet" -> is this an S3 problem, with S3 refusing my files?
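If the limit itself turns out to be too small, it can be raised with vzctl. This is only a sketch: it must be run on the Virtuozzo/OpenVZ hardware node (not inside the container), `101` is a placeholder container ID, and the barrier:limit figures are example values:

```shell
# Raise the tcpsndbuf barrier:limit for container 101 and persist the
# change in its config (example values; run on the hardware node)
vzctl set 101 --tcpsndbuf 9000000:9500000 --save
```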
dancalder
Newbie
Posts: 1


« Reply #3 on: March 03, 2010, 04:43:52 AM »

I am also having this problem: the TCP buffer fills up and does not clear after sending, causing problems for other services on the server.
Any solution to this?