ferrix
« Reply #15 on: November 12, 2007, 01:19:00 PM »
Wow, holding that date constant just totally messed with me. Thanks for the help!
ferrix
« Reply #16 on: November 12, 2007, 02:18:30 PM »
OK, this is actually going to be a big deal. The underlying AWS Ruby code changed a bunch. Working on it...
jckdnk111
« Reply #17 on: November 13, 2007, 11:48:04 AM »
I'm glad you are on this so quickly ... I've been fighting with it for 2 days and still no luck. I'm eagerly awaiting the fix. Thanks for continuing to support your excellent application!
maxi
« Reply #18 on: November 13, 2007, 12:06:48 PM »
Hi there!
Exactly the same problem here. It seems that EC2 instances that have already been running for a while still work, but newly created instances first get a 301 Moved Permanently and then (after finding out that aws_s3_host: <bucketname>.s3.amazonaws.com should be used) are denied any access :'-(
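In case it helps others, this is roughly what my s3config.yml looked like while experimenting (the two key names besides aws_s3_host are from my own setup, so double-check them against the s3sync README):

Code:
# s3config.yml - pointing s3sync at the bucket's virtual host.
# With this in place the 301 goes away, but requests are then
# denied, as described above - a partial workaround at best.
aws_access_key_id: YOUR_ACCESS_KEY_ID
aws_secret_access_key: YOUR_SECRET_ACCESS_KEY
aws_s3_host: <bucketname>.s3.amazonaws.com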
Looking forward to a fix :-) Regards & thanks for the great work!!
Maxi
mrvanes
« Reply #19 on: November 15, 2007, 08:31:49 AM »
Just five minutes into trying s3sync I stumbled on exactly this problem, and as far as I understand it, the fix is quite simple. The only thing you need to do is change the URL scheme from http://s3.amazonaws.com/<bucketname> to http://<bucketname>.s3.amazonaws.com/ and DNS will do the rest... it's transparent, it's easy. As far as I understand, anyway. I would change it myself if I knew Ruby and knew where the URL is constructed in the code.
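To make the change concrete, here is a tiny Ruby sketch of what I mean (the method name is made up for illustration - I don't know what the s3sync source actually calls it):

Code:
# Sketch: path-style vs. virtual-hosted-style S3 URLs.
#   path-style:           http://s3.amazonaws.com/mybucket/some/key
#   virtual-hosted-style: http://mybucket.s3.amazonaws.com/some/key
def virtual_hosted_url(bucket, key)
  "http://#{bucket}.s3.amazonaws.com/#{key}"
end

puts virtual_hosted_url('mybucket', 'some/key')
# => http://mybucket.s3.amazonaws.com/some/key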
mrvanes
« Reply #20 on: November 15, 2007, 08:42:40 AM »
OK, a quick glance shows that the complete path (bucket + key) is constructed outside generate_url. Too bad; this means a lot more work, since for the above trick to work you need the bucket and the key separately when generating the URL...
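If the joined path is all you have by the time the URL is generated, one possible workaround is to split it back apart at the first slash - a sketch, relying on the fact that bucket names themselves never contain a slash:

Code:
# Sketch: recover bucket and key from an already-joined "bucket/key" path.
def split_bucket_and_key(path)
  bucket, key = path.split('/', 2)
  [bucket, key || '']
end

bucket, key = split_bucket_and_key('mybucket/some/key')
# bucket => "mybucket", key => "some/key"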
maxi
« Reply #21 on: November 15, 2007, 08:58:18 AM »
Yeah - actually I thought that too, but since ferrix wrote "OK, this is actually going to be a big deal," I wasn't too optimistic :-}. Unfortunately I'm a Ruby n00b as well, and therefore I'd highly appreciate any workarounds for this problem. Does anyone have an idea yet?
Regards - Maxi
ferrix
« Reply #22 on: November 15, 2007, 04:04:59 PM »
I'm going to try to get to this over the weekend.
maxi
« Reply #23 on: November 16, 2007, 05:09:21 AM »
Great! I'm looking forward to it :-)
Clou
« Reply #24 on: November 16, 2007, 06:19:14 AM »
Quote from: mrvanes on November 15, 2007, 08:31:49 AM
Just five minutes into trying s3sync I stumbled on exactly this problem, and as far as I understand it, the fix is quite simple. The only thing you need to do is change the URL scheme from http://s3.amazonaws.com/<bucketname> to http://<bucketname>.s3.amazonaws.com/ and DNS will do the rest... it's transparent, it's easy. As far as I understand, anyway. I would change it myself if I knew Ruby and knew where the URL is constructed in the code.

Unfortunately it's not that simple: you could use the changed URL scheme, and it would work for all EU buckets and for most US buckets. But uppercase letters are allowed in US bucket names, and those buckets cannot be addressed as virtual hosts... I don't know whether there are other characters that can be used in US buckets but not in EU buckets; that will have to be considered too. I'm also not very firm in Ruby and don't know how to code it, so I'm looking forward to seeing what ferrix comes up with once he has had time to look into it.
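To illustrate the distinction, a rough sketch of the per-bucket decision I have in mind (the names and the character rules here are my own guesses; the authoritative naming rules are in Amazon's documentation, not in s3sync):

Code:
# Sketch: only DNS-compatible bucket names can use virtual-hosted style.
# US buckets may contain uppercase letters, which are invalid in a
# hostname, so those must keep the old path-style URLs.
def dns_compatible?(bucket)
  # lowercase letters, digits, dots and dashes only; 3-63 characters
  !!(bucket =~ /\A[a-z0-9][a-z0-9.-]{2,62}\z/)
end

def url_for(bucket, key)
  if dns_compatible?(bucket)
    "http://#{bucket}.s3.amazonaws.com/#{key}"   # virtual-hosted style
  else
    "http://s3.amazonaws.com/#{bucket}/#{key}"   # path-style fallback
  end
end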
maxi
« Reply #25 on: November 16, 2007, 06:24:50 AM »
@Clou: You're raising an interesting point: there are US and EU buckets, but for now (as far as I can see) EC2 instances only exist in the US. So when combining EC2 and S3, for now it would only make sense to use US-only EC2/S3 combinations, unless one wants to give EU customers direct access to S3 buckets via link, right?
Regards - Maxi
Clou
« Reply #26 on: November 16, 2007, 06:53:08 AM »
Quote from: maxi on November 16, 2007, 06:24:50 AM
@Clou: You're raising an interesting point: there are US and EU buckets, but for now (as far as I can see) EC2 instances only exist in the US. So when combining EC2 and S3, for now it would only make sense to use US-only EC2/S3 combinations, unless one wants to give EU customers direct access to S3 buckets via link, right?

As I understand it, that's correct. It also has to be considered that traffic between EC2 and S3-EU is charged, while traffic between EC2 and S3-US is not - but we're getting off-topic ;-)
ferrix
« Reply #27 on: November 17, 2007, 08:25:11 PM »
New release on its way; check announce board.
maxi
« Reply #28 on: November 18, 2007, 08:57:04 AM »
I just downloaded/installed/tried it. It worked immediately ;-) Great! Thanks a lot!!
Regards - Maxi
maxi
« Reply #29 on: November 18, 2007, 09:15:56 AM »
Hmm - now that I've tried another bucket, the old problem is back :-( Any ideas?

Response code: 403
S3 command failed: put test #<S3::S3Object:0xb7cebbe4> Content-Length 13240
With result 403 Forbidden

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>0B42B62520D73B5B</RequestId><SignatureProvided>6WBByPaheeQLJx3nCrqkPAbkWCs=</SignatureProvided><StringToSignBytes>50 55 54 0a 0a 0a 53 75 6e 2c 20 31 38 20 4e 6f 76 20
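For what it's worth, the StringToSignBytes in the error decode to the start of the canonical string S3 signs (a quick Ruby check - the truncation at the end is in the error message itself):

Code:
bytes = "50 55 54 0a 0a 0a 53 75 6e 2c 20 31 38 20 4e 6f 76 20"
puts bytes.split.map { |h| h.to_i(16).chr }.join.inspect
# => "PUT\n\n\nSun, 18 Nov "
# i.e. HTTP verb, empty Content-MD5, empty Content-Type, then the Date.
# The front looks normal, so the mismatch is presumably further along,
# in the canonicalized resource (bucket/key) part of the string.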