Show Posts

General Category / Questions / s3sync vs. S3 console
on: May 08, 2011, 10:21:22 AM
I've been trying to see whether the formats that the S3 console and s3sync use are the same, or at least highly compatible. Using the S3 console, I created a "prefix" and a "directory" under that prefix, and I populated them with a few files. Now I am trying to download the contents under this prefix with the command
> ruby s3sync.rb -r lejar-test:pre/ /temp
where "lejar-test" is the name of my bucket, "pre" is the prefix, and /temp is a directory on my hard disk, and I get the error message:
S3 command failed. get_stream pre #<File:0x2de1888> With result 404 Not Found s3sync.rb:645:in `unlink': Permission denied - C:/temp/ (Errno::EACCES) from s3sync.rb:645:in `updateFrom' from s3sync.rb:393:in `main' from s3sync.rb:735
Of course, I have no problem downloading stuff that I uploaded with s3sync itself in this way.
I am sorry to hear that s3sync is not under development anymore. I'll check tarsnap.
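For what it's worth, the mismatch can often be seen in the key listing itself. Below is a hypothetical Ruby sketch (not s3sync code; the `partition_keys` helper and the listing data are invented for illustration) of how one might separate ordinary objects from the zero-byte "folder placeholder" keys that GUI tools are commonly reported to create, which s3sync's own directory-node format does not recognize:

```ruby
# Hypothetical illustration (not s3sync code): GUI tools commonly mark
# "folders" with zero-byte keys (names ending in "/" or "_$folder$"),
# while s3sync stores its own directory nodes. Separate the two kinds:
def partition_keys(keys)
  placeholders, objects = keys.partition do |k|
    k[:size] == 0 && (k[:name].end_with?("/") || k[:name].end_with?("_$folder$"))
  end
  { placeholders: placeholders.map { |k| k[:name] },
    objects:      objects.map     { |k| k[:name] } }
end

# Invented listing resembling the bucket described above:
listing = [
  { name: "pre/",          size: 0 },    # console-created folder marker
  { name: "pre/dir/",      size: 0 },
  { name: "pre/dir/file1", size: 1024 },
]
result = partition_keys(listing)
```

If something like this holds, s3sync would see only the placeholder keys under "pre/" and nothing it recognizes as its own nodes, which could explain the 404.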

General Category / Questions / Re: s3sync in Windows XP Pro doesn't seem to work
on: May 20, 2010, 04:21:44 PM
I tried Ruby 1.8.6 from ruby-installer-1.8.6-p398-rc2 with both the "venerable" and the latest version of s3sync, and neither works. For instance, with
> ruby s3cmd.rb listbuckets
I get
S3 command failed list_all_my_buckets With result 400 Bad Request
If I try to do a simple operation with s3sync.rb itself, I not only get the 400 Bad Request but also complaints about the code having undefined methods.
I wonder if anyone has s3sync working on Windows XP and, in that case, which particular Ruby installer they are using. I think I had it working on Windows XP a couple of years ago, but unfortunately I decided not to keep the code and binaries, since everything is evolving for the better and those are freely available online. Hah, hah.

General Category / Questions / s3sync in Windows XP Pro doesn't seem to work
on: May 18, 2010, 02:52:26 PM
I installed the latest versions of Ruby and of s3sync, set the environment variables, etc. I am using Ruby 1.9.1, installed with rubyinstaller-1.9.1-p378-rc2. When I try to run the simplest command I can think of, namely,
> s3cmd.rb listbuckets
I get an error saying that the constant SimpleDelegator in HTTPStreaming.rb is uninitialized; more precisely,
C:/s3sync/HTTPStreaming.rb:53:in `<module:S3sync>': uninitialized constant S3sync::SimpleDelegator (NameError)
I can't copy and paste the output on this machine (or redirect it to a file), but essentially what is happening is that line 13 of s3cmd.rb loads s3try.rb, line 23 of that script loads HTTPStreaming.rb, and there the process gets stuck.
This problem has nothing to do with listbuckets: if I just enter
> s3cmd.rb
I get the same result.
Out of curiosity, I tried
> s3sync.rb
and I got an error that, on line 23, a file "md5" is required that is nowhere to be found. I think that in the past I used to get some help with the syntax when I entered this command.
I would appreciate some clarification on these questions. I didn't have any problem when I was running s3sync on Linux.
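A plausible fix for both errors (an assumption on my part; I haven't patched these particular files): under Ruby 1.9 the old 'md5' wrapper library was removed, and SimpleDelegator must be pulled in explicitly from the 'delegate' standard library. Something along these lines at the top of the offending files:

```ruby
require 'delegate'     # defines SimpleDelegator; would fix the NameError in HTTPStreaming.rb
require 'digest/md5'   # the 1.8-era `require 'md5'` no longer exists in Ruby 1.9

# With 'delegate' loaded, a delegating wrapper like the one HTTPStreaming.rb
# builds can be defined (toy class for illustration, not the real one):
class StreamWrapper < SimpleDelegator
end

wrapped  = StreamWrapper.new("some stream")  # method calls pass through to the String
checksum = Digest::MD5.hexdigest("hello")    # 1.9-style replacement for MD5.hexdigest
```

If the scripts predate Ruby 1.9, this kind of small porting patch may be all that stands between them and working on the newer installer.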

General Category / Questions / Re: s3fs support for s3sync
on: May 18, 2010, 11:00:59 AM
The same goes for S3Fox, but at least S3Fox lets you download stuff that was uploaded with s3sync. There is a program for Windows called S3 Browser that is fully compatible with s3sync. Better than nothing. We need a benevolent dictator here who will decide what the official format should be.

General Category / Feature Requests / Compatibility With S3Fox
on: April 21, 2009, 09:49:08 PM
I have no problem using S3Fox to view or download stuff that I originally uploaded with s3sync or s3cmd, but when I tried the opposite, namely downloading with s3cmd stuff uploaded with S3Fox, it didn't work. If it's not complicated, I'd appreciate this feature: it would make the two tools compatible for both uploading and downloading. If there is already some workaround, I'd like to hear about it.

General Category / Questions / Re: to ssl or not?
on: July 28, 2008, 01:43:43 PM
SSL will also authenticate the AWS server, i.e., check that the server you are connected to really is the Amazon server. On the other hand, even banks' sites do not routinely authenticate themselves, even though they use SSL to encrypt transactions. This, at least, is how I understand it.
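To make the encryption-versus-authentication distinction concrete, here is a minimal sketch using Ruby's standard Net::HTTP (the CA-bundle path is a placeholder, and no connection is actually opened):

```ruby
require 'net/http'
require 'openssl'

# VERIFY_PEER makes OpenSSL check the server's certificate chain against
# the trusted roots before any data flows; VERIFY_NONE still encrypts,
# but never checks who is actually on the other end.
http = Net::HTTP.new('s3.amazonaws.com', 443)
http.use_ssl     = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.ca_file     = '/etc/ssl/certs/ca-bundle.crt'  # placeholder path to trusted roots

# http.start { |h| h.get('/') }  # would raise OpenSSL::SSL::SSLError if
#                                # the server failed certificate verification
```

So with peer verification on, a connection to an impostor fails before any request is sent; without it, you still get encryption but no identity check.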

General Category / General Discussion / Re: SSL Certificate: Fatal Mistake!
on: July 23, 2008, 02:59:28 PM
I found the answer to my naive question somewhere else in the forum. As I imagined, AWS is designed to always protect your secret key.
Thanks, ferrix. Great software! I didn't realize that you were the creator and/or maintainer of s3sync.

General Category / General Discussion / Re: SSL Certificate: Fatal Mistake!
on: July 23, 2008, 02:05:14 PM
Quote:
"You have to specify the cert of the server *or* that of a trusted root; that is the way it determines the authenticity of the server. If you have a set of root certs on your system, you may be able to point the program there instead of using the single-cert approach. Then it behaves more like a web browser, as you expect."

Thanks for the help. This makes sense. I'll have to think it over with the help of a cryptography book, but for now I'll take your word for it. This may be a naive question, but if I just don't use SSL at all, will my AWS keys also be sent in cleartext? I don't care if my documents are sent in cleartext, but obviously I don't want to give malicious third parties access to my account.

Quote:
"This stuff is just non-trivial to use because it's not a complete interface. Maybe you ought to try jungle disk?"

Just the fact that it is not a complete interface is what makes it appealing. I was just looking for some equivalent of rsync to access S3. I suppose another consideration is that if I die tomorrow, my wife won't know what to do with my backup system if she needs it, but most probably I'll outlive the backup system.
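On the cleartext question: the reason the secret key is safe even over plain HTTP is that S3 requests of that era were signed with HMAC-SHA1, so only a signature derived from the key travels on the wire, never the key itself. A simplified sketch (the string-to-sign below is abbreviated, not the full S3 canonical form, and the key is of course made up):

```ruby
require 'openssl'
require 'base64'
require 'time'

# Only this signature is sent in the Authorization header; the server,
# which holds its own copy of the secret key, recomputes the HMAC and
# compares. The secret itself never leaves your machine.
def sign_request(secret_key, string_to_sign)
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, string_to_sign)
  Base64.strict_encode64(hmac)
end

# Abbreviated string-to-sign for a GET (the real S3 form has more fields):
string_to_sign = "GET\n\n\n#{Time.now.httpdate}\n/lejar-test/pre/dir/file1"
signature = sign_request('my-secret-key', string_to_sign)
```

An eavesdropper sees the request and the signature but cannot recover the key from them; without SSL they could read the document contents, but not take over the account.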

General Category / General Discussion / Re: SSL Certificate: Fatal Mistake!
on: July 22, 2008, 11:25:37 PM
Thanks, ferrix. I learned my lesson. My intention was to have a backup system that would not require much monitoring on my part; I didn't want to spend my time reading system mail or checking forums, and at the time I implemented it, this forum didn't even exist. It's not even clear to me why I need to supply a certificate, when the point is to determine the authenticity of the S3 server, not of my own machine, but that is another question.

General Category / General Discussion / SSL Certificate: Fatal Mistake!
on: July 21, 2008, 10:56:03 PM
I've been using s3sync to back up my server for about a year. When I wrote the script, I used the certificate suggested in the README.txt file. For some time it worked fine, but today I realized that for the last five months the Amazon server has not been accepting my certificate. I got many system mails in my server's root account warning me of this, but I was not checking them. Thus, for the last five months, no backups were made. Fortunately my hard disk didn't crash. Since I didn't change anything on my server, I have to assume that Amazon changed their software and decided to stop accepting certificates they were previously accepting. Great idea! I can only hope that they will not change my passphrase without warning me. I would appreciate it if anyone can comment on this.

General Category / Closed Bugs / Weird behavior of --delete option when downloading with trailing slash.
on: October 18, 2007, 09:28:13 AM
When I download from Amazon S3 using a trailing slash in the source address, s3sync tries to delete the very directory that I am downloading into. For example, suppose I first upload the directory dir:
./s3sync.rb -r --delete /root/tmp/dir bucket:pre
Here everything works fine: keys pre/dir, pre/dir/file1, pre/dir/file2, etc. are created in "bucket", and stale keys are deleted. Now if I try to download:
./s3sync.rb -r --delete bucket:pre/dir/ /root/tmp/dir
it downloads the files as it should, but it warns me with the message:
"Could not delete directory /root/tmp/dir: directory not empty"
This warning is not totally idle: if the directory happens to be empty, s3sync erases it. It seems to me that this shouldn't happen. The above command should just copy the "files and directories" under the prefix "pre/dir" to /root/tmp/dir, deleting files and directories in /root/tmp/dir that don't appear in S3, but it should not try to delete the directory /root/tmp/dir itself. I tried different variations on this, and the culprit seems to be the trailing slash.
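The trailing-slash convention s3sync follows is rsync-like: with the slash, the *contents* of the source map directly onto the destination directory, which is presumably why /root/tmp/dir itself ends up as a deletion candidate. A toy sketch of that mapping (the `destination_path` helper is invented for illustration; it is not s3sync's actual code):

```ruby
# Toy sketch of the rsync-like trailing-slash rule (invented helper, not
# s3sync's actual code). With a trailing slash, the contents of the source
# map straight into the destination directory, so that directory itself
# becomes a node the sync compares (and --delete may try to remove);
# without the slash, the source directory is recreated inside the destination.
def destination_path(source, dest, relative_key)
  if source.end_with?("/")
    File.join(dest, relative_key)
  else
    File.join(dest, File.basename(source), relative_key)
  end
end

destination_path("bucket:pre/dir/", "/root/tmp/dir", "file1")  # => "/root/tmp/dir/file1"
destination_path("bucket:pre/dir",  "/root/tmp",     "file1")  # => "/root/tmp/dir/file1"
```

Both forms place file1 in the same spot here; the difference is only which directory the sync treats as the comparison root, which matches the behavior described above.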