How to use SignalR via Nginx Reverse SSL Proxy - C#

I'm trying to configure a Web API with SignalR enabled so I can push real-time information to the clients via WebSockets. In front of the API I have an nginx reverse SSL proxy running in a Docker container.
Clients can connect just fine via the reverse proxy when SSL is disabled in the nginx configuration:
server {
    listen 55555;
    # listen 55555 ssl;
    ssl_certificate /config/keys/cert.crt;
    ssl_certificate_key /config/keys/cert.key;

    location / {
        proxy_pass http://192.168.1.175:50000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
But when I configure the proxy to listen for SSL, the clients can't seem to connect, or at least the client gets a "The server disconnected before the handshake could be started" IOException after what appears to be a timeout.
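For reference, the clients connect with the .NET SignalR client roughly like this (a minimal sketch; the /chatHub path, the proxy address 192.168.1.150 and the port 55555 come from the config and logs, everything else is assumed):

using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    // Points at the nginx proxy; https when ssl is enabled, http otherwise.
    .WithUrl("https://192.168.1.150:55555/chatHub")
    .Build();

// This is the call that fails with the IOException once ssl is enabled.
await connection.StartAsync();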
WebAPI log:
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 POST http://192.168.1.150/chatHub/negotiate?negotiateVersion=1 - 0
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint '/chatHub/negotiate'
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionManager[1]
New connection i-HjjQP9EsA_rBTX7vFs2Q created.
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionDispatcher[10]
Sending negotiation response.
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint '/chatHub/negotiate'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 POST http://192.168.1.150/chatHub/negotiate?negotiateVersion=1 - 0 - 200 316 application/json 117.3988ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 POST http://192.168.1.150/chatHub/negotiate?negotiateVersion=1 - 0
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint '/chatHub/negotiate'
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionManager[1]
New connection K0y7OI3-3KvPIXWV92HEkA created.
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionDispatcher[10]
Sending negotiation response.
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint '/chatHub/negotiate'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 POST http://192.168.1.150/chatHub/negotiate?negotiateVersion=1 - 0 - 200 316 application/json 11.6330ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://192.168.1.150/chatHub?id=zdDUpBZ-RsnQgEZsJ9Sctg - -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint '/chatHub'
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionDispatcher[4]
Establishing new connection.
dbug: Microsoft.AspNetCore.SignalR.HubConnectionHandler[5]
OnConnectedAsync started.
dbug: Microsoft.AspNetCore.SignalR.HubConnectionContext[2]
Handshake was canceled.
dbug: Microsoft.AspNetCore.Http.Connections.Internal.HttpConnectionManager[2]
Removing connection zdDUpBZ-RsnQgEZsJ9Sctg from the list of connections.
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint '/chatHub'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://192.168.1.150/chatHub?id=zdDUpBZ-RsnQgEZsJ9Sctg - - - 200 - text/event-stream 60079.4344ms
I've tried including the following in the WebApplication settings:
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.KnownProxies.Add(IPAddress.Parse("192.168.1.150")); // Proxy Server IP
});

app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost
});
But as far as I can tell, it doesn't really make a difference.
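For completeness, the rest of the wiring boils down to something like the following (a trimmed sketch rather than the exact code: only the /chatHub path and the proxy IP are taken from the question, and here the forwarded-headers options are configured once through DI so that UseForwardedHeaders() picks up the KnownProxies entry):

using System.Net;
using Microsoft.AspNetCore.HttpOverrides;
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost;
    options.KnownProxies.Add(IPAddress.Parse("192.168.1.150")); // proxy server IP
});

var app = builder.Build();

// Process X-Forwarded-* before anything that depends on scheme or host.
app.UseForwardedHeaders();
app.MapHub<ChatHub>("/chatHub");
app.Run();

// Minimal hub type so the sketch compiles; the real hub isn't shown in the question.
public class ChatHub : Hub { }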
I'm unsure whether this is an nginx reverse proxy/SSL issue or a Web API/SignalR issue. Any ideas? :)

Related

Query health check endpoints in ASP.NET

I implemented a health check endpoint following this doc. My ASP.NET application is Dockerized and runs using docker-compose, with the port mapped/exposed.
Question: I am not sure how to query the health check endpoint from clients such as Postman.
When I send a GET request to the /healthz endpoint as follows, Postman throws the following error:
http://host.docker.internal:1200/healthz
Error: Client network socket disconnected before secure TLS connection was established
while I can see the following in the logs of the Docker container:
[05:01:03 DBG] Connection id "0HMLSHSV1HRD2" accepted.
[05:01:03 DBG] Connection id "0HMLSHSV1HRD2" started.
[05:01:03 INF] Request starting HTTP/1.1 GET http://host.docker.internal:1200/healthz - -
[05:01:03 DBG] Wildcard detected, all requests with hosts will be allowed.
[05:01:03 VRB] All hosts are allowed.
[05:01:03 DBG] 1 candidate(s) found for the request path '/healthz'
[05:01:03 DBG] Request matched endpoint 'Health checks'
[05:01:03 DBG] Static files was skipped as the request already matched an endpoint.
[05:01:03 DBG] Https port '1200' loaded from configuration.
[05:01:03 DBG] Redirecting to 'https://host.docker.internal:1200/healthz'.
[05:01:03 DBG] Connection id "0HMLSHSV1HRD2" completed keep alive response.
[05:01:03 INF] Request finished HTTP/1.1 GET http://host.docker.internal:1200/healthz - - - 307 0 - 89.5220ms
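For reference, a health check registration following that doc looks roughly like this (a minimal sketch; only the /healthz path comes from the question, and the UseHttpsRedirection call is an assumption based on the 307 redirect visible in the log):

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHealthChecks();

var app = builder.Build();

// The 307 redirect to https://... in the log looks like HTTPS redirection
// middleware running before the health check endpoint is executed.
app.UseHttpsRedirection();
app.MapHealthChecks("/healthz");
app.Run();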

System.IO.IOException: The handshake failed due to an unexpected packet format

I'm trying to connect to a non-secure WebSocket server over ws://, but when setting up a proxy with nginx I receive (on the WS server):
The handshake failed due to an unexpected packet format.
This is handled as an IOException and crashes the entire server.
stacktrace
Here is my nginx configuration:
server {
    listen 80;
    listen [::]:80;
    server_name ws.tunnel.cf;
    return 302 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/ssl/certs/cert.pem;
    ssl_certificate_key /etc/ssl/private/key.pem;
    server_name ws.tunnel.cf;

    location / {
        proxy_pass http://213.43.156.21:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
How exactly can I connect to my non-secure WebSocket server without making it WSS?
I am using Fleck as the package for the server.
Code for connecting:
const socket = new WebSocket('wss://ws.tunnel.cf');
socket.addEventListener('open', function (event) {
    console.log('WS:open');
});
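For context, the Fleck server side looks roughly like this (a minimal sketch; the port 8000 matches the proxy_pass above, while the bind address and the handlers are assumed):

using System;
using Fleck;

// Plain ws:// listener; the nginx proxy above terminates TLS in front of it.
var server = new WebSocketServer("ws://0.0.0.0:8000");
server.Start(socket =>
{
    socket.OnOpen = () => Console.WriteLine("WS:open");
    socket.OnClose = () => Console.WriteLine("WS:close");
    socket.OnMessage = message => socket.Send(message); // echo
});

Console.ReadLine(); // keep the process alive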

Remote Access to Local ASP.NET Core app NGINX

I published my ASP.NET Core app on my Raspberry Pi 3 (Raspbian) with nginx.
I configured nginx following the Microsoft documentation: on localhost everything works correctly, but I can't access the app from other devices on my local network (ERR_CONN_REFUSED).
I set the reverse proxy on port 81 because on port 80 I have another server block that serves PHP sites (including phpMyAdmin), like this:
server {
    listen 80 default_server;
    listen [::]:80;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name php.it *.php.it;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
}
and
server {
    listen 81;
    server_name example.com *.example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I don't know if the problem is in the configuration of the /etc/nginx/sites-available/default file or in the ASP.NET Core application.
I think the problem is localhost, but I have no idea how to solve it.
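For reference, the app itself is started more or less like this (a sketch; only the port 5000 from proxy_pass above comes from my setup, the rest is assumed):

var builder = WebApplication.CreateBuilder(args);

// Kestrel listens on localhost:5000, which is where proxy_pass points,
// so nginx on the same Pi can reach it.
builder.WebHost.UseUrls("http://localhost:5000");

var app = builder.Build();
app.MapGet("/", () => "Hello from the Pi");
app.Run();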
Also, I'm not sure what I should put in server_name; what does it refer to?
Thanks
In your first config, you have set up the domain php.it, so your site should be accessible via http://php.it/ if you have set up the correct DNS.
But you have also marked it as default_server, so it is reachable even without a matching host.
In your second config, default_server is missing, so a matching host is mandatory. Your page should be accessible on example.com:81, again provided you have set up the correct DNS.
Solutions:
If you want to access the website without a domain, just by plain IP, remove server_name.
If you want to make it the default server for port 81 as well, use listen 81 default_server;

Nginx not passing websocket upgrade response back to client?

I am using Nginx + WebSockets on a Precise 64 Vagrant box, with C#/Mono for the app server. The goal is to serve static content directly through Nginx, and handle both plain ol' HTTP service requests (on /service) and WebSocket requests (on /webSocket), all on the same port. Here's the relevant conf:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80 default_server;

    # (1)
    location / {
        root /var/www;
        index index.html;
    }

    # (2)
    location /service {
        add_header Access-Control-Allow-Origin *;
        proxy_pass http://localhost:9000;
    }

    # (3)
    location /webSocket {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Most of it is working well. Static content, yes; service requests, yes; and even the first part of WebSockets. I am getting a well-formed upgrade request from my client (Firefox or Chrome). I am making a nice HTTP 101 upgrade response and sending it. And then the client does... nothing... nothing is received.
But the crazy thing is, when I give up and manually kill my server-side app, and the client WebSocket closes with an error, THEN the Response Headers show up in the client browser debugger. The whole thing looks like:
REQUEST HEADERS
Request URL: http://localhost:8086/webSocket
Request Method: GET
Status Code: HTTP/1.1 101 Switching Protocols
Request Headers 16:18:50.000
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: Nx0sUAemOFWM2rsaCAJpfQ==
Sec-WebSocket-Extensions: permessage-deflate
Pragma: no-cache
Origin: http://localhost:8086
Host: localhost:8086
Connection: keep-alive, Upgrade
Cache-Control: no-cache
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
RESPONSE HEADERS Δ19660ms
Upgrade: websocket
Transfer-Encoding: chunked
Server: nginx/1.1.19
Sec-WebSocket-Accept: gNiAeqxcVjkiReIpdtP0EZiClpg=
Date: Mon, 08 Jun 2015 20:18:48 GMT
Connection: keep-alive
NO RESPONSE BODY Δ0ms
There's no evidence that my server app isn't flushing its send data right away - I'm using async TcpSocket.BeginSend and FinishSend, and the send appears to complete immediately. And it works fine for plain HTTP service communication.
So where is my WebSocket message data going? It seems like Nginx doesn't want to send it back to my client until I close the TCP connection from the server side.
Has anybody experienced something like this before? Everything I've read on Nginx and WebSockets is concerned with the basic setup that gets the hop-by-hop upgrade working, and I've got that. No one has anything to say about why data sent back from the server side doesn't seem to go anywhere.
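For reference, the send side amounts to roughly the following (a sketch using raw Socket.BeginSend/EndSend; the actual TcpSocket wrapper, the Sec-WebSocket-Accept computation and the exact headers are abbreviated or assumed):

using System;
using System.Net.Sockets;
using System.Text;

class WebSocketHandshake
{
    // Writes the 101 upgrade response on an accepted client socket.
    public static void SendResponse(Socket client, string acceptKey)
    {
        var response =
            "HTTP/1.1 101 Switching Protocols\r\n" +
            "Upgrade: websocket\r\n" +
            "Connection: Upgrade\r\n" +
            "Sec-WebSocket-Accept: " + acceptKey + "\r\n" +
            "\r\n";
        var bytes = Encoding.ASCII.GetBytes(response);

        // BeginSend hands the buffer to the OS; the callback fires as soon as
        // the bytes are accepted, which here happens immediately.
        client.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, ar =>
        {
            var sent = ((Socket)ar.AsyncState).EndSend(ar);
            Console.WriteLine("Handshake response sent: " + sent + " bytes");
        }, client);
    }
}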

WebSocket sending an HTTP header on connection - how to prevent?

When I open a WebSocket to my server, it looks like the client automatically sends an HTTP request as part of the first payload. How can I prevent this?
That is the nature of the WebSocket protocol. It starts with a handshake done over plain HTTP:
GET /mychat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat
Sec-WebSocket-Version: 13
Origin: http://example.com
More: http://en.wikipedia.org/wiki/WebSocket#WebSocket_protocol_handshake
You cannot create a WebSocket connection without this handshake.
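For example, .NET's built-in client performs that handshake for you as part of ConnectAsync, and there is no way to skip it (a minimal sketch, reusing the URL and subprotocol from the example above):

using System;
using System.Net.WebSockets;
using System.Threading;

using var ws = new ClientWebSocket();
ws.Options.AddSubProtocol("chat"); // becomes the Sec-WebSocket-Protocol header above

// ConnectAsync sends the GET /mychat upgrade request shown above and
// validates the 101 response before the socket becomes usable.
await ws.ConnectAsync(new Uri("ws://server.example.com/mychat"), CancellationToken.None);

Console.WriteLine(ws.State); // Open, once the handshake has succeeded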
