I'm following this guide, https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html, to upload files from a local machine to an S3 bucket in a VPC. The application is also being tested and run on the on-premises machine.
var s3Client = new AmazonS3Client(RegionEndpoint.USEast2);
var fileTransferUtility = new TransferUtility(s3Client);
await fileTransferUtility.UploadAsync(@"c:\tmp\test.txt", "bucketName");
However, the code gets the following error.
A socket operation was attempted to an unreachable network
Should a URL be given somewhere?
Here is the network traffic captured by Fiddler (note that the code gets a different exception when Fiddler is capturing).
GET http://1xx.1xx.1xx.2xx/latest/meta-data/iam/security-credentials HTTP/1.1
Host: 1xx.1xx.1xx.2xx
HTTP/1.1 503 Service Unavailable
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Content-Length: 787
Network Error
Network Error (tcp_error)
A communication error occurred: "Operation timed out"
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
For assistance, contact your network support team.
.aws\config
[default]
region = us-west-2
I had the same error today, even though I had a valid $USERPROFILE\.aws\credentials file. It was actually because $USERPROFILE\AppData\Local\AWSToolkit\RegisteredAccounts.json couldn't be decrypted (not sure why), which causes the AWS SDK to think you don't have local credentials, and hence it tries to make a connection to the EC2 instance metadata URL, http://169.254.169.254/latest/meta-data/. On a local development machine that won't be accessible. For me, deleting the $USERPROFILE\AppData\Local\AWSToolkit\RegisteredAccounts.json file did the trick. FWIW, I only managed to figure this out by reading through the source of the AWS SDK...
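If you want to force the SDK to use a specific local profile instead of falling through to the instance metadata endpoint, you can load the credentials explicitly. A minimal sketch, assuming a profile named "default" exists in $USERPROFILE\.aws\credentials (the profile name and region here are assumptions, not from the original question):

using Amazon;
using Amazon.Runtime.CredentialManagement;
using Amazon.S3;
using Amazon.S3.Transfer;
using System.Threading.Tasks;

class Uploader
{
    // Load the "default" profile from the shared credentials file explicitly, so the SDK
    // never falls back to the EC2 instance metadata endpoint (169.254.169.254).
    public static async Task UploadAsync()
    {
        var chain = new CredentialProfileStoreChain();
        if (chain.TryGetAWSCredentials("default", out var credentials))
        {
            var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USEast2);
            var transferUtility = new TransferUtility(s3Client);
            await transferUtility.UploadAsync(@"c:\tmp\test.txt", "bucketName");
        }
    }
}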
Related
I am using Nginx + WebSockets on a Precise64 Vagrant box, with C#/Mono for the app server. The goal is to serve up static content directly through Nginx, and to handle both plain ol' HTTP service requests (on /service) and WebSocket requests (on /webSocket), all on the same port. Here's the relevant conf:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80 default_server;

    # (1)
    location / {
        root /var/www;
        index index.html;
    }

    # (2)
    location /service {
        add_header Access-Control-Allow-Origin *;
        proxy_pass http://localhost:9000;
    }

    # (3)
    location /webSocket {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Most of it is working well: static content, yes; service requests, yes; and even the first part of WebSockets. I am getting a well-formed upgrade request from my client (Firefox or Chrome), and I am sending back a nice 101 Switching Protocols upgrade response. And then the client does... nothing... nothing is received.
But the crazy thing is, when I give up and manually kill my server-side app, and the client web socket closes with an error, THEN the response headers show up in the client browser's debugger. The whole thing looks like:
REQUEST HEADERS
Request URL: http://localhost:8086/webSocket
Request Method: GET
Status Code: HTTP/1.1 101 Switching Protocols
Request Headers 16:18:50.000
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: Nx0sUAemOFWM2rsaCAJpfQ==
Sec-WebSocket-Extensions: permessage-deflate
Pragma: no-cache
Origin: http://localhost:8086
Host: localhost:8086
Connection: keep-alive, Upgrade
Cache-Control: no-cache
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
RESPONSE HEADERS Δ19660ms
Upgrade: websocket
Transfer-Encoding: chunked
Server: nginx/1.1.19
Sec-WebSocket-Accept: gNiAeqxcVjkiReIpdtP0EZiClpg=
Date: Mon, 08 Jun 2015 20:18:48 GMT
Connection: keep-alive
NO RESPONSE BODY Δ0ms
As far as I can tell, my server app is flushing out its send data right away - I'm using async TcpSocket.BeginSend and FinishSend and the send seems to complete right away. And it works fine for plain HTTP service communication.
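For reference, a minimal sketch of the async socket send pattern being described, using the standard Socket.BeginSend/EndSend pair (the names SendFrame and OnSendComplete are illustrative, not from the actual app):

using System;
using System.Net.Sockets;

static class Sender
{
    // Illustrative only: hand one frame to the TCP stack on an already-connected socket.
    public static void SendFrame(Socket socket, byte[] frame)
    {
        socket.BeginSend(frame, 0, frame.Length, SocketFlags.None, OnSendComplete, socket);
    }

    private static void OnSendComplete(IAsyncResult ar)
    {
        var socket = (Socket)ar.AsyncState;
        int bytesSent = socket.EndSend(ar);   // completes as soon as the data is buffered by the OS
        Console.WriteLine("Sent {0} bytes", bytesSent);
    }
}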
So where is my WebSocket message data going? It seems like Nginx doesn't want to send it back to my client until I close the TCP connection from the server side.
Has anybody experienced something like this before? Everything I've read on Nginx and WebSockets covers the basic setup that gets the hop-by-hop upgrade working, and I've got that. No one has anything to say about why data sent back from the server side doesn't seem to go anywhere.
I'm using SignalR version 2.1.2 with SignalR.Redis 2.1.2 on Server 2012 R2, IIS 8.5 with WebSockets enabled.
All is running perfectly in my development environment. I can even stand up copies of the site on different servers (e.g. http://machine1/myapp/signalr, http://machine2/myapp/signalr) configured to use the same backplane, and both UIs get messages published to them perfectly.
I then moved "myapp" to our next environment, which is a cluster of 2 machines sitting behind an F5 load balancer, with a DNS alias set up to route to the F5, which then round-robins "myapp". The website itself can connect to SignalR just fine and can receive the published messages it subscribes to, BUT when I try to publish to the site via the alias (e.g. http://myappalias/signalr), I get a 400 Bad Request error response. Here is an example of the error.
InnerException: Microsoft.AspNet.SignalR.Client.Infrastructure.StartException
_HResult=-2146233088
_message=Error during start request. Stopping the connection.
HResult=-2146233088
IsTransient=false
Message=Error during start request. Stopping the connection.
InnerException: System.AggregateException
_HResult=-2146233088
_message=One or more errors occurred.
HResult=-2146233088
IsTransient=false
Message=One or more errors occurred.
InnerException: Microsoft.AspNet.SignalR.Client.HttpClientException
_HResult=-2146233088
_message=StatusCode: 400, ReasonPhrase: 'Bad Request', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Pragma: no-cache
Transfer-Encoding: chunked
X-Content-Type-Options: nosniff
Persistent-Auth: true
Cache-Control: no-cache
Date: Thu, 13 Nov 2014 22:30:22 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Content-Type: text/html
Expires: -1
}
Here is some test code I'm using to publish test messages to each environment, where it fails on "connection.Start().Wait()"
class Program
{
    static void Main(string[] args)
    {
        var connection = new HubConnection("http://myappalias/signalr");
        connection.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
        var proxy = connection.CreateHubProxy("MyAppHub");
        connection.Start().Wait();

        ConsoleKeyInfo key = Console.ReadKey();
        do
        {
            proxy.Invoke("NewMessage", new Message() { Payload = "Hello" });
            Console.WriteLine("Message fired.");
            key = Console.ReadKey();
        } while (key.Key != ConsoleKey.Escape);
    }
}
Now, if I don't use "myappalias" and instead hit a server head on, it works perfectly. It appears that either the F5 is the problem, the client needs to be configured differently for this scenario, or I have to do something different when setting up SignalR's startup class. Here is an example of the startup class I'm using.
[assembly: OwinStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup
    {
        private static readonly ILog log = LogManager.GetLogger
            (System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        public void Configuration(IAppBuilder app)
        {
            try
            {
                log.Debug(LoggingConstants.Begin);

                string redisServer = ConfigurationManager.AppSettings["redis:server"];
                int redisPort = Convert.ToInt32(ConfigurationManager.AppSettings["redis:port"]);

                HubConfiguration configuration = new HubConfiguration();
                configuration.EnableDetailedErrors = true;
                configuration.EnableJavaScriptProxies = false;
                configuration.Resolver = GlobalHost.DependencyResolver.UseRedis(redisServer, redisPort, string.Empty, "MyApp");

                app.MapSignalR("/signalr", configuration);

                log.Info("SIGNALR - Startup Complete");
            }
            finally
            {
                log.Debug(LoggingConstants.End);
            }
        }
    }
}
I downloaded the client source code and wired it in directly instead of the NuGet package, so I could step through everything. It seems it successfully negotiates, and then attempts to "connect" with the SSE and then LongPolling transports, but fails at both.
Question 1.1
Does anyone know of an alternative to SignalR for .NET that supports scaling with load balancing in a less "I want to pull my hair out" kind of way?
It should not be necessary to configure source address affinity to use SignalR behind a load balancer. It's certainly not wrong to set up session affinity, but that doesn't fix your underlying problem.
If you look closely at the content of the 400 response, you probably see a message similar to "The ConnectionId is in the incorrect format."
SignalR uses the server's machine key to create an anti-CSRF token, but this requires that all the servers in your farm share a machine key so the token can be properly decrypted when SignalR requests hop between servers. The /negotiate request that you see succeed is the request that retrieves the anti-CSRF token. When the SignalR client then uses the anti-CSRF token to make a /connect request, it fails because the /connect request is processed by a different server that didn't create the token and is unable to decrypt it.
This explains why setting up session affinity fixed your problem, but sharing a machine key will help you avoid this problem even if something goes wrong with session affinity.
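As a sketch only (the key values are deliberately omitted; generate your own pair and use the identical values on every server in the farm), the shared machine key goes in each server's web.config:

<system.web>
  <machineKey validationKey="[shared validation key]"
              decryptionKey="[shared decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>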
Here is an issue filed on GitHub by someone who experienced a similar problem: https://github.com/SignalR/SignalR/issues/2292.
The problem was fixed by switching the profile for "MyApp" in the F5 to use the "source_addr" profile built into the F5 as a parent profile, with a timeout of 1 hour. Here is a description of what that profile does:
Source address affinity persistence: Also known as simple persistence, source address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the source IP address of a packet.
EDIT
This ended up "working" for a while, but if I deploy a publisher (something that simply publishes through the SignalR client) without republishing the Hub, the publisher times out trying to connect over and over again. Ugh.
HttpClient starts throwing exceptions after a few requests to a specific server. After some tests I noticed that it always stops working at request number 33. The server sends this response header:
Keep-Alive: timeout=5, max=32
I have tried disposing the HttpClient at request number 32 or earlier, but it does not solve the problem.
How should I handle this in order to send requests to this server without problems?
Try calling HttpClient.Dispose, or explicitly set the Connection: close header:
client.DefaultRequestHeaders.Add("Connection", "close");
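A minimal sketch of what that looks like in context (the URL is a placeholder): with Connection: close on every request, no single connection ever reaches the server's keep-alive limit of 32 requests.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Equivalent to adding a "Connection: close" header to every request.
        client.DefaultRequestHeaders.ConnectionClose = true;

        for (int i = 0; i < 100; i++)
        {
            // Each request gets a fresh connection, so the server's max=32 limit never applies.
            var response = await client.GetAsync("http://example.com/resource");
            Console.WriteLine("{0}: {1}", i, (int)response.StatusCode);
        }
    }
}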
When I open a web socket to my server - it looks like the client is automatically sending an HTTP request as part of the first payload. How can I prevent this?
That is the nature of the WebSocket protocol: it starts with a handshake done over plain HTTP.
GET /mychat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat
Sec-WebSocket-Version: 13
Origin: http://example.com
More: http://en.wikipedia.org/wiki/WebSocket#WebSocket_protocol_handshake
You cannot create a WebSocket connection without this handshake.
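For what it's worth, here is a minimal sketch using .NET's System.Net.WebSockets.ClientWebSocket (the URL is a placeholder): the library performs this HTTP upgrade handshake for you, and there is no option to skip it.

using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var ws = new ClientWebSocket();
        // ConnectAsync sends the GET + Upgrade request shown above and validates
        // the 101 Switching Protocols response before it returns.
        await ws.ConnectAsync(new Uri("ws://server.example.com/mychat"), CancellationToken.None);
        Console.WriteLine(ws.State);   // WebSocketState.Open once the handshake succeeds
    }
}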
I've created a sample application based on SuperWebSocket (running v0.3). I manage to connect to the WebSocket server through Telnet, but for some reason I'm having trouble doing it through JavaScript (running Chrome 17.0.963.46 m).
Through Telnet I can connect through either localhost:911 or 192.168.1.147:911.
My application is running on http://localhost/Raphael-Test/, and I've tried connecting through both localhost and the local network's IP; both get stuck at "Connecting", i.e. status 0.
Is there anything obvious I'm missing, any configuration that should be done in the web application itself? I should add that I've successfully tried out the LiveChat demo and got it working through JavaScript.
This is my current client implementation, which runs when the page has fully loaded:
ws = new WebSocket("ws://192.168.1.147:911");

ws.onopen = function () {
    alert("connected");
};

ws.onmessage = function (evt) {
    var msg = evt.data;
    alert(msg);
};
Handshake (with NO response):
GET / HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: 192.168.1.147:911
Origin: http://192.168.1.147
Sec-WebSocket-Key: 8bl46pmPrixTYRJ/5i9Sug==
Sec-WebSocket-Version: 13
It turns out I had used the SuperSocket server base classes instead of SuperWebSocket on the server side. This made the TCP connection itself work as expected, but of course it did not handle the WebSocket handshake, and therefore the connection failed.
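For anyone hitting the same thing, a rough sketch of the server-side fix, assuming SuperWebSocket's WebSocketServer (which performs the handshake for you); port 911 matches the question and the echo handler is purely illustrative:

using System;
using SuperWebSocket;

class Program
{
    static void Main()
    {
        var server = new WebSocketServer();
        if (!server.Setup(911))                // same port the JavaScript client connects to
        {
            Console.WriteLine("Failed to set up the WebSocket server.");
            return;
        }

        // SuperWebSocket performs the handshake itself; we only handle messages.
        server.NewMessageReceived += (session, message) => session.Send("Echo: " + message);

        server.Start();
        Console.WriteLine("Server started. Press Enter to stop.");
        Console.ReadLine();
        server.Stop();
    }
}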