In Unity3D, is there any way to instantly check network availability? By "instantly", I mean during a single frame, because the lines of code below this check need to act based on network availability.
It all happens during Start(). I know I can ping a web page and infer network availability from any errors that occur while downloading it. However, an operation like this takes several seconds, whereas I need to know the result immediately, before moving to the next line of code in the script.
Assuming your game is running at a reasonable frame rate (30 fps or greater), any solution you can come up with (even pinging the host of your server) will only be valid when the latency of the round trip is under 1/30th of a second (roughly 33 ms).
As such, it is unrealistic to handle this between frames (except perhaps on local networks).
Instead, I would suggest looking into threading your network-based code to decouple it from frames.
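For instance, a minimal sketch of that decoupling (the component name and the probe target are my own placeholders, not anything prescribed):
using System.Net.Sockets;
using System.Threading;
using UnityEngine;

public class ConnectivityChecker : MonoBehaviour
{
    // volatile so the main thread sees the worker thread's update
    private volatile bool isOnline;

    void Start()
    {
        new Thread(() =>
        {
            try
            {
                // quick TCP connect to a well-known host; 8.8.8.8:53 is just an example probe
                using (var client = new TcpClient("8.8.8.8", 53))
                {
                    isOnline = true;
                }
            }
            catch (SocketException)
            {
                isOnline = false;
            }
        }).Start();
    }
}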
Don't do this.
Unless you provide more information about what exactly you are planning, nobody can give you a proper answer.
This is unsatisfying for both sides.
But here is what you actually could do:
Open a TCP connection to an internet-reachable host such as the google.com server.
Whenever the network state changes (connected, disconnected, ...), trigger a simple C# event or set a variable like isOnline = true;.
This can be a way. But it is a bad one.
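For completeness, if you did go that route anyway, a rough sketch of the event idea (all names here are invented for illustration):
using System;
using System.Net.Sockets;

public static class NetworkStateMonitor
{
    public static event Action<bool> OnlineStateChanged;
    private static bool isOnline;

    // call this periodically from a background thread
    public static void Poll()
    {
        bool nowOnline;
        try
        {
            using (var client = new TcpClient("google.com", 80))
            {
                nowOnline = true;
            }
        }
        catch (SocketException)
        {
            nowOnline = false;
        }
        if (nowOnline != isOnline)
        {
            isOnline = nowOnline;
            OnlineStateChanged?.Invoke(isOnline); // fire only on a state change
        }
    }
}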
It all happens during Start()
Yes, this is possible and can be done in one frame in that case. I would have strongly discouraged it if this operation were performed every frame in the Update function, but that's not the case. If this is done once at the beginning of the app, that's fine. If you do it while the game is running, you will affect performance.
but an operation like this takes several seconds
It is designed that way in order to avoid blocking the main Thread.
Network operations should be done in a Thread or with async methods to avoid blocking the main Thread. This is how most Unity network APIs, such as WWW and UnityWebRequest, work: they use a Thread in the background and then give you a coroutine to manage that Thread by yielding/waiting in a coroutine function over frames until the network request completes.
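For example, the coroutine pattern looks roughly like this (a sketch; the exact error properties of UnityWebRequest vary between Unity versions):
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ReachabilityCheck : MonoBehaviour
{
    IEnumerator CheckOnline()
    {
        using (UnityWebRequest request = UnityWebRequest.Get("http://google.com"))
        {
            // yields over frames until the request completes, without blocking
            yield return request.SendWebRequest();
            bool online = !request.isNetworkError && request.responseCode == 200;
            Debug.Log("Online: " + online);
        }
    }

    void Start()
    {
        StartCoroutine(CheckOnline());
    }
}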
To accomplish this in one frame, just use HttpWebRequest and provide a server URL to check. Most examples use google.com since it is always online, but make sure to provide a "User-Agent" header so that the connection is not rejected on mobile devices. Finally, if the HttpStatusCode is not 200 or an exception is thrown, there is a problem; otherwise assume it is connected.
// Requires: using System; using System.Net;
bool isOnline()
{
    bool success = true;
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://google.com");
        request.Method = "GET";
        // Make sure Google doesn't reject the request when it comes from a mobile device (Android)
        request.changeSysTemHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36");
        // Dispose the response so the underlying connection is released
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            if (response == null || response.StatusCode != HttpStatusCode.OK)
            {
                success = false;
            }
        }
    }
    catch (Exception)
    {
        success = false;
    }
    return success;
}
Class for the custom changeSysTemHeader extension method used to change the User-Agent:
// Requires: using System.Net; using System.Reflection;
public static class ExtensionMethods
{
    public static void changeSysTemHeader(this HttpWebRequest request, string key, string value)
    {
        // Swap in a new header collection via reflection, since User-Agent is a
        // restricted header. Note this replaces any headers set earlier on the request.
        WebHeaderCollection wHeader = new WebHeaderCollection();
        wHeader[key] = value;
        FieldInfo fieldInfo = request.GetType().GetField("webHeaders",
            BindingFlags.NonPublic
            | BindingFlags.Instance
            | BindingFlags.GetField);
        fieldInfo.SetValue(request, wHeader);
    }
}
Simple usage from the Start function, done in one frame:
void Start()
{
    Debug.Log(isOnline());
}
I wrote this code, which works perfectly, but I fear that pinging every 2 seconds consumes too many resources or could create problems with the internet connection.
new Thread(() =>
{
    while (true) // repeat the check every 2 seconds
    {
        if (CheckInternetConnection() == false)
        {
            Dispatcher.Invoke(new Action(delegate
            {
                // internet access lost
            }));
        }
        else
        {
            Dispatcher.Invoke(new Action(delegate
            {
                // internet access
            }));
        }
        Thread.Sleep(2000);
    }
}).Start();
[DllImport("wininet.dll")]
private extern static bool InternetGetConnectedState(out int Description, int ReservedValue);

// Wraps the WinINet call; returns true if Windows reports an active connection
public static bool CheckInternetConnection()
{
    int output = 0;
    return InternetGetConnectedState(out output, 0);
}
These are two events that don't fire in all situations (only when the IP or the network card changes):
NetworkChange.NetworkAvailabilityChanged += NetworkChange_NetworkAvailabilityChanged;
NetworkChange.NetworkAddressChanged += NetworkChange_NetworkAddressChanged;
Can someone help me?
Note: In regards to your original solution, NetworkChange.NetworkAvailabilityChanged works fine, but there are a couple of caveats: 1) it doesn't tell you if you have Internet access, it just tells you whether there's at least one non-loopback network adapter working, and 2) there are often extra network adapters installed for various reasons that leave the system in a "network is available" state even when your main Internet-connected adapter is disabled/unavailable - thanks to Peter Duniho
Networking is more than just your router or network card - it is really every hop to wherever it is you are trying to connect at any given time. So the easiest and most reliable way is just to ping a well-known source like Google, or to use some sort of heartbeat to one of your internet services.
The reason this is the only reliable way is that any number of connectivity issues can occur between you and the outside world. Even major service providers can go down.
So an ICMP ping to a known server like Google, or calling OpenRead on a WebClient, are 2 valid approaches. These calls are not expensive comparatively and can be put into a lightweight timer or continual task.
As for your comments: to be safe, you can signal a custom event to denote the loss of network after a certain number of consecutive failures.
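That might look something like this (a sketch; the class name and threshold are arbitrary):
using System;

public class ConnectivityWatcher
{
    private const int FailureThreshold = 3;
    private int consecutiveFailures;

    public event Action InternetDown;

    // feed this the result of each ping/OpenRead attempt
    public void Report(bool success)
    {
        if (success)
        {
            consecutiveFailures = 0;
            return;
        }
        if (++consecutiveFailures == FailureThreshold)
        {
            InternetDown?.Invoke(); // only signal after repeated failures
        }
    }
}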
To answer your question
But I fear that pinging every 2 seconds consumes too many resources or could create problems with the internet connection.
Both methods are very inexpensive in terms of CPU and network traffic; any resources used should be minimal.
Note: Just make sure you are pinging or connecting to a server with high availability that will allow such shenanigans and not just block you.
Ping Example
using System.Net.NetworkInformation;
// Implementation
using (var ping = new Ping())
{
    var reply = ping.Send("www.google.com");
    // a null reply also counts as a failure
    if (reply == null || reply.Status != IPStatus.Success)
    {
        // Raise an event.
        // You might want to check for consistent failures
        // before signalling that the Internet is down.
    }
}
// Or if you wanted to get fancy, ping multiple sources.
// Note: a single Ping instance cannot run concurrent operations,
// so create one per address.
private async Task<List<PingReply>> PingAsync(List<string> listOfIPs)
{
    var tasks = listOfIPs.Select(ip => new Ping().SendPingAsync(ip, 2000));
    var results = await Task.WhenAll(tasks);
    return results.ToList();
}
Connection Example
using System.Net;
// Implementation
try
{
    using (WebClient client = new WebClient())
    {
        using (client.OpenRead("http://www.google.com/"))
        {
            // success
        }
    }
}
catch
{
    // Raise an event.
    // You might want to check for consistent failures
    // before signalling the Internet is down.
}
Note: Both these methods have an async variant that returns a Task and can be awaited, giving an asynchronous programming pattern better suited to IO-bound work.
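For instance, the ping check written against the async variant (a sketch):
using System.Net.NetworkInformation;
using System.Threading.Tasks;

public static async Task<bool> IsOnlineAsync()
{
    using (var ping = new Ping())
    {
        // SendPingAsync returns a Task<PingReply> that can be awaited
        PingReply reply = await ping.SendPingAsync("www.google.com", 2000);
        return reply.Status == IPStatus.Success;
    }
}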
Resources
Ping.Send Method
Ping.SendAsync Method
WebClient.OpenRead Method
WebClient.OpenReadAsync Method
NetworkInterface.GetIsNetworkAvailable() is unreliable... it returns true even when none of the available networks actually have internet access. The best approach to check for connectivity, in my opinion, is to ping a well known and fast online resource. For example:
public static Boolean InternetAvailable()
{
    try
    {
        using (WebClient client = new WebClient())
        {
            using (client.OpenRead("http://www.google.com/"))
            {
                return true;
            }
        }
    }
    catch
    {
        return false;
    }
}
Anyway, those two events you are subscribing to don't work the way you think... they actually check the hardware status of your network adapters... not whether the adapters are connected to the internet. They have the same drawback as NetworkInterface.GetIsNetworkAvailable(). Keep checking for connectivity in a separate thread that pings a safe source, and act accordingly. Your Interop solution is excellent too.
Pinging public resources adds extra calls to your app and a dependency on that website, or on whatever else you would use in the loop.
What if you used this method: NetworkInterface.GetIsNetworkAvailable()?
Would it be enough for your app's purposes?
I found it here https://learn.microsoft.com/en-us/dotnet/api/system.net.networkinformation.networkinterface.getisnetworkavailable?view=netframework-4.7.1#System_Net_NetworkInformation_NetworkInterface_GetIsNetworkAvailable
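Usage is a one-liner, though as noted elsewhere in this thread it only reports that some non-loopback adapter is up, not that the internet is actually reachable:
using System.Net.NetworkInformation;

bool networkUp = NetworkInterface.GetIsNetworkAvailable();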
I have an application that batches web requests to a single endpoint using the HttpWebRequest mechanism; the goal of the application is to revise large collections of product listings (specifically their descriptions).
Here is an example of the code I use to make these requests:
// Requires: using System.IO; using System.Net; using System.Net.Cache; using System.Text;
static class SomeClass
{
    static RequestCachePolicy cachePolicy;

    public static string DoRequest(string requestXml)
    {
        string responseXml = string.Empty;
        Uri ep = new Uri(API_ENDPOINT);
        HttpWebRequest theRequest = (HttpWebRequest)WebRequest.Create(ep);
        theRequest.ContentType = "text/xml;charset=\"utf-8\"";
        theRequest.Accept = "text/xml";
        theRequest.Method = "POST";
        theRequest.Headers[HttpRequestHeader.AcceptEncoding] = "gzip";
        theRequest.Proxy = null;
        if (cachePolicy == null)
        {
            cachePolicy = new RequestCachePolicy(RequestCacheLevel.BypassCache);
        }
        theRequest.CachePolicy = cachePolicy;
        using (Stream requestStream = theRequest.GetRequestStream())
        {
            using (StreamWriter requestWriter = new StreamWriter(requestStream))
            {
                requestWriter.Write(requestXml);
            }
        }
        // Dispose the response so its connection is returned to the pool
        using (WebResponse theResponse = theRequest.GetResponse())
        {
            using (Stream responseStream = theResponse.GetResponseStream())
            {
                using (MemoryStream ms = new MemoryStream())
                {
                    responseStream.CopyTo(ms);
                    // GzCompressor is a custom helper (not shown) that inflates the gzip payload
                    byte[] resultBytes = GzCompressor.Decompress(ms.ToArray());
                    responseXml = Encoding.UTF8.GetString(resultBytes);
                }
            }
        }
        return responseXml;
    }
}
My question is this: if I thread the task, I can call and complete at most 3 requests per second (based on the average sent data length), and this is through a gigabit connection to a router on business-grade fibre internet. However, if I divide the task into 2 sets and run the second set in a second process, I can double the requests completed per second.
The same is true if I divide the task into 3 or 4 (after that, performance seems to plateau unless I grab another machine to do the same). Why is this? And can I change something in the first process so that running multiple processes (or computers) is no longer needed?
Things I have tried so far include the following:
Implementing GZip compression (as seen in the example above).
Re-using the RequestCachePolicy (as seen in the example above).
Setting Expect100Continue to false.
Setting DefaultConnectionLimit to a larger number before the ServicePoint is created (see the sketch after this list).
Reusing the HttpWebRequest (does not work as remote host does not support it).
Increasing the ReceiveBufferSize on the ServicePoint both before and after creation.
Disabling proxy detection in Internet Explorer's Lan Settings.
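For reference, the connection-limit and Expect100Continue tweaks from the list above are typically applied like this (a sketch; the limit value is arbitrary):
using System.Net;

// must run before the first request creates the ServicePoint
ServicePointManager.DefaultConnectionLimit = 100;
ServicePointManager.Expect100Continue = false;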
My suspicion does not fall on the remote host, since I can quite clearly wrench far more performance out by the methods I explained, but rather on some mechanism capping the amount of data that is allowed to be sent through HttpWebRequest (maybe something to do with the ServicePoint?). Thanks in advance, and please let me know if there is anything else you need clarified.
--
Just to expand on the topic: my colleague and I used the same code on a system running Windows Server Standard 2016 64-bit, and requests using this method run significantly faster and in greater numbers there. This points to some sort of software-imposed bottleneck. The slow operations are observed on Windows 10 Home/Pro 64-bit and lower, even on faster hardware than the server is running on.
Scaling
I do not have a better solution for your problem, but I think I know why your performance seems to peak or why it is machine dependent.
Usually a program has the best performance when the number of threads or processes matches exactly the number of cores. That is because the system can run them independently, and the overhead for scheduling or context switching is minimized.
You arrived at your peak performance at 3 or 4 different tasks. From that I would conclude your machine has 2 or 4 cores. That would exactly match my explanation.
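If that explanation is right, matching the degree of parallelism to the core count is an easy experiment (a sketch):
using System;

// match the number of worker threads/processes to the machine's logical cores
int workers = Environment.ProcessorCount;
Console.WriteLine("Logical cores: " + workers);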
I have an ASP.NET 3.5 app using WebForms; currently it is hosted on IIS6. Everything behaves great.
However, after switching to a Windows 2012 server with IIS8 installed, we intermittently get truncated requests. The majority of the time this manifests as a ViewState exception in our event log; however, on forms that do not have ViewState, we get incomplete posts (the last few fields are missing or partially truncated).
This became so problematic that we escalated to Microsoft support, and after weeks of debugging they said that this is the "correct" behavior for IIS7 and above. Their explanation was the change in the IIS pipeline from 6 to 7.
IIS6 and below would buffer the entire request before passing it along to Asp.net, truncated requests would be ignored.
IIS7 and above would send the request to Asp.net after the initial headers were sent, it would be up to the app to handle truncated requests.
This becomes problematic when either there are connectivity issues (the user unplugs their cable during transmission) or the user presses stop / reloads the page during a post.
In our HTTP logs, we see "connection_dropped" messages that correlate to the truncated requests.
I am having trouble believing that this behavior is intended, but we have tested on a few different servers and get the same results with IIS7 and above (Windows 2008, 2008 R2, and 2012).
My questions are:
1) Does this behavior even make sense?
2) If this is "correct" behavior, how do you protect your app against potentially processing incomplete data?
3) Why is it the application developer's responsibility to detect incomplete requests? Hypothetically, why would the app developer handle the incomplete request other than ignoring it?
Update
I wrote a small asp.net application and website to demonstrate the issue.
Server
Handler.ashx.cs
// Requires: using System; using System.Web;
public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod == "POST")
        {
            var lengthString = context.Request.Form["Length"];
            var data = context.Request.Form["Data"];
            if (lengthString == null)
            {
                throw new Exception("Missing field: Length");
            }
            if (data == null)
            {
                throw new Exception("Missing field: Data");
            }
            var expectedLength = int.Parse(lengthString);
            if (data.Length != expectedLength)
            {
                throw new Exception(string.Format("Length expected: {0}, actual: {1}, difference: {2}", expectedLength, data.Length, expectedLength - data.Length));
            }
        }
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World, Request.HttpMethod=" + context.Request.HttpMethod);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
Client
Program.cs
// Requires: using System; using System.Net.Sockets; using System.Text;
static void Main(string[] args)
{
    var uri = new Uri("http://localhost/TestSite/Handler.ashx");
    var data = new string('a', 1024*1024); // 1mb
    var payload = Encoding.UTF8.GetBytes(string.Format("Length={0}&Data={1}", data.Length, data));
    // send request truncated by 256 bytes
    // my assumption here is that the Handler.ashx should not try and handle such a request
    Post(uri, payload, 256);
}

private static void Post(Uri uri, byte[] payload, int bytesToTruncate)
{
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
    {
        // this allows us to disconnect unexpectedly
        LingerState = new LingerOption(true, 0)
    };
    socket.Connect(uri.Host, uri.Port);
    SendRequest(socket, uri, payload, bytesToTruncate);
    socket.Close();
}

private static void SendRequest(Socket socket, Uri uri, byte[] payload, int bytesToTruncate)
{
    var headers = CreateHeaders(uri, payload.Length);
    SendHeaders(socket, headers);
    SendBody(socket, payload, Math.Max(payload.Length - bytesToTruncate, 0));
}

private static string CreateHeaders(Uri uri, int contentLength)
{
    var headers = new StringBuilder();
    headers.AppendLine(string.Format("POST {0} HTTP/1.1", uri.PathAndQuery));
    headers.AppendLine(string.Format("Host: {0}", uri.Host));
    headers.AppendLine("Content-Type: application/x-www-form-urlencoded");
    headers.AppendLine("User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:26.0) Gecko/20100101 Firefox/99.0");
    headers.AppendLine("Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    headers.AppendLine("Connection: Close");
    headers.AppendLine(string.Format("Content-Length: {0}", contentLength));
    return headers.ToString();
}

private static void SendHeaders(Socket socket, string headers)
{
    socket.Send(Encoding.ASCII.GetBytes(headers));
    // blank line terminating the header block (HTTP expects CRLF)
    socket.Send(Encoding.ASCII.GetBytes("\r\n"));
}

private static void SendBody(Socket socket, byte[] payload, int numBytesToSend)
{
    socket.Send(payload, 0, numBytesToSend, SocketFlags.None);
}
1) If the app pool to which your 3.5 application is assigned runs in Integrated mode, you might have trouble with how your requests are handled due to ISAPI behavior. You may be generating requests it doesn't understand properly, which it then truncates to a default value. Have you tried running the app pool in Classic mode?
2) Functional testing. Lots and lots of functional testing. Create a test harness and make all the calls your application can make to ensure it's working properly. This is not a 100% solution, but nothing really is. There are many computer science papers explaining, via the Halting Problem, why it's impossible to test every single situation in which your app may run.
3) Because you wrote the code. You should not process incomplete requests: the request might be for an important piece of data, and you need to send back an error saying there was a problem processing it; otherwise the issuing party just sees the request as having mysteriously vanished.
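One way to avoid silently processing incomplete data is to compare the declared Content-Length against what actually arrived. This is only a sketch in the shape of the question's handler; how much of a truncated body is observable can vary with IIS configuration:
public void ProcessRequest(HttpContext context)
{
    // Content-Length as declared by the client
    int declared = context.Request.ContentLength;
    // bytes that actually made it into the request buffer
    long received = context.Request.InputStream.Length;
    if (received < declared)
    {
        context.Response.StatusCode = 400; // reject the incomplete request
        return;
    }
    // ... normal processing ...
}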
The reason IIS changed its behaviour is that we (developers) needed more control over request handling. With the old behaviour, broken requests were invisible, which made investigating their cause a problem. We need to log requests at the application level for investigation and record keeping. For example, if a request involves a financial transaction, such as a credit card payment, we need more control and must record every step for compliance.
IIS is a web server framework, and application-level data validation is not its responsibility. If a request was broken, the input was incomplete, and application-level logic should decide what to do. The application must respond with the correct error codes on failures. This is why ASP.NET MVC has model validation, which allows you to validate the complete input at the application level.
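As an illustration of that model-validation point (the model and controller names are invented):
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

public class PaymentModel
{
    [Required]
    public string CardNumber { get; set; }

    [Required]
    public decimal? Amount { get; set; }
}

public class PaymentsController : Controller
{
    [HttpPost]
    public ActionResult Pay(PaymentModel model)
    {
        if (!ModelState.IsValid)
        {
            // incomplete or invalid input; respond with a proper error code
            return new HttpStatusCodeResult(400);
        }
        // ... process the transaction ...
        return new HttpStatusCodeResult(200);
    }
}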
You can use Response.IsClientConnected to check whether the underlying socket is still connected.
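For example, inside a handler (a minimal sketch):
public void ProcessRequest(HttpContext context)
{
    if (!context.Response.IsClientConnected)
    {
        return; // the client already went away; skip the work
    }
    // ... handle the request ...
}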
As the web has become more AJAX-driven and more mobile, and since we sometimes use pings to check the health of remote services, we do not necessarily conclude that a broken request is an error that must be dropped. We may still want to live with broken requests. That is a choice the application-level developer can make, not IIS.
I am currently working on a WinForms app to stream video from an IP camera using the RTSP protocol in C#. Everything worked fine. Part of the requirements for the app includes a function to check whether the IP camera is online or not.
So I wrote a ping function using the System.Net.NetworkInformation.Ping class to ping the IP camera. Say the RTSP URL of the camera is rtsp://[CAMERA IP]:554/Master0-RTSP/1.0; I would only need to extract the [CAMERA IP] part and use the Ping class to see if the camera is online or not by its IP.
Initially this worked, until an issue came up: if one enters an IP which may not be the intended IP camera (say, the IP of a computer), the ping function still succeeds as long as the device at that IP is online.
I tried to search for something like an RTSP ping but could not find one. I was hoping for any advice or opinions on this matter. Any examples in C# are greatly appreciated. Thank you for your kind attention.
OPTIONS can possibly work, but the standard specifies that the correct way is to use GET_PARAMETER.
RFC 2326 outlines that clearly:
http://www.ietf.org/rfc/rfc2326.txt
10.8 GET_PARAMETER
The GET_PARAMETER request retrieves the value of a parameter of a
presentation or stream specified in the URI. The content of the reply
and response is left to the implementation. GET_PARAMETER with no
entity body may be used to test client or server liveness ("ping").
While GET_PARAMETER may not be supported by the server, there is no way to tell how the server will react to the OPTIONS request, which does not even require a session ID. Therefore it cannot be guaranteed that it will keep your existing session alive.
This is clear from reading the same RFC about the OPTIONS request:
10.1 OPTIONS
The behavior is equivalent to that described in [H9.2]. An OPTIONS
request may be issued at any time, e.g., if the client is about to
try a nonstandard request. It does not influence server state.
Example:
C->S: OPTIONS * RTSP/1.0
CSeq: 1
Require: implicit-play
Proxy-Require: gzipped-messages
S->C: RTSP/1.0 200 OK
CSeq: 1
Public: DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE
Note that these are necessarily fictional features (one would hope
that we would not purposefully overlook a truly useful feature just
so that we could have a strong example in this section).
If GET_PARAMETER is not supported, then you would issue a PLAY request with the SessionId of the session you want to keep alive.
This should work even if OPTIONS doesn't, as PLAY honors the Session ID, and if you are already playing there is no adverse effect.
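To make the mechanics concrete, here is a bare-bones sketch of a GET_PARAMETER "ping" built on a raw socket; the session ID and CSeq are placeholders, and in practice the keep-alive would go over the session's existing control connection rather than a fresh one:
using System.IO;
using System.Net.Sockets;
using System.Text;

public static bool RtspKeepAlive(string host, int port, string rtspUrl, string sessionId)
{
    try
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            // GET_PARAMETER with no entity body doubles as a liveness "ping" (RFC 2326, 10.8)
            string request = "GET_PARAMETER " + rtspUrl + " RTSP/1.0\r\n" +
                             "CSeq: 2\r\n" +
                             "Session: " + sessionId + "\r\n\r\n";
            byte[] bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            var reader = new StreamReader(stream, Encoding.ASCII);
            string statusLine = reader.ReadLine(); // e.g. "RTSP/1.0 200 OK"
            return statusLine != null && statusLine.Contains("200");
        }
    }
    catch (SocketException)
    {
        return false;
    }
}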
For the C# RtspClient, see my project at https://net7mma.codeplex.com/
And the article on CodeProject at http://www.codeproject.com/Articles/507218/Managed-Media-Aggregation-using-Rtsp-and-Rtp
Regarding RTSP in C#, see this thread: Using RTMP or RTSP protocol in C#
Regarding ping... you can implement it as a DESCRIBE operation... but take care not to issue it too frequently, or the device may be affected.
http://www.ietf.org/rfc/rfc2326.txt
Instead of an ICMP ping, you might want to keep a helper RTSP session without video/audio RTP streams, checking the good standing of the socket connection and sending an OPTIONS or DESCRIBE command on a regular basis, e.g. once a minute, in order to see if the device is responsive.
Some suggest using GET_PARAMETER instead of OPTIONS; however, that is the inferior method. OPTIONS is mandatory, GET_PARAMETER is not. The two serve different purposes, and both have a small server-side execution expense. OPTIONS is clearly the better of the two.
Some servers may not support setting stream parameters and thus not support GET_PARAMETER and SET_PARAMETER.
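A sketch of that once-a-minute schedule, where SendOptions stands in for whatever issues the OPTIONS request on the helper session:
using System;
using System.Threading;

// fire an OPTIONS "ping" once a minute on the helper RTSP session
var keepAlive = new Timer(_ => SendOptions(), null, TimeSpan.Zero, TimeSpan.FromMinutes(1));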
You can use RTSPClientSharp and do something like this:
// Requires the RtspClientSharp NuGet package
public static async Task TestRTSPConnection(string rtspAddress, string user, string password)
{
    var serverUri = new Uri(rtspAddress);
    var credentials = new NetworkCredential(user, password);
    var connectionParameters = new ConnectionParameters(serverUri, credentials);
    var cancellationTokenSource = new CancellationTokenSource();
    var connectTask = ConnectAsync(connectionParameters, cancellationTokenSource.Token);
    if (await Task.WhenAny(connectTask, Task.Delay(15000 /*timeout*/)) == connectTask)
    {
        if (!connectTask.Result)
        {
            logger.Warn("Connection refused - check username and password");
        }
        logger.Info("Connection test completed");
    }
    else
    {
        // cancel the still-pending connect attempt before giving up
        cancellationTokenSource.Cancel();
        logger.Warn("Connection timed out - check username and password");
    }
}

private static async Task<bool> ConnectAsync(ConnectionParameters connectionParameters, CancellationToken token)
{
    try
    {
        using (var rtspClient = new RtspClient(connectionParameters))
        {
            rtspClient.FrameReceived +=
                (sender, frame) => logger.Info($"New frame {frame.Timestamp}: {frame.GetType().Name}");
            while (true)
            {
                logger.Info("Connecting...");
                try
                {
                    await rtspClient.ConnectAsync(token);
                }
                catch (OperationCanceledException)
                {
                    logger.Info("Finishing test before connection could be established. Check credentials");
                    return false;
                }
                catch (RtspClientException e)
                {
                    logger.Error($"{e.Message}: {e.InnerException?.Message}");
                    return false;
                }
                logger.Info("Connected - camera is online");
                return true;
            }
        }
    }
    catch (OperationCanceledException)
    {
        return false;
    }
}
It works pretty well for me if you just care about pinging and whether the camera is online or not. Also, a timeout happens when the credentials are incorrect. You get a direct failure if the port is not exposed or the connection is refused.
Given an async controller:
public class MyController : AsyncController
{
    [NoAsyncTimeout]
    public void MyActionAsync() { ... }
    public void MyActionCompleted() { ... }
}
Assume MyActionAsync kicks off a process that takes several minutes. If the user now goes to the MyAction action, the browser will wait with the connection open. If the user closes their browser, the connection is closed. Is it possible to detect when that happens on the server (preferably inside the controller)? If so, how? I've tried overriding OnException, but that never fires in this scenario.
Note: I do appreciate the helpful answers below, but the key aspect of this question is that I'm using an AsyncController. This means the HTTP requests are still open (they are long-lived, like COMET or BOSH), which means there is a live socket connection. Why can't the server be notified when this live connection is terminated (i.e. "connection reset by peer", the TCP RST packet)?
I realise this question is old, but it turned up frequently in my search for the same answer.
The details below only apply to .Net 4.5
HttpContext.Response.ClientDisconnectedToken is what you want. That will give you a CancellationToken you can pass to your async/await calls.
public async Task<ActionResult> Index()
{
    // The connected client 'manages' this token.
    // HttpContext.Response.ClientDisconnectedToken.IsCancellationRequested will be set to true if the client disconnects
    try
    {
        using (var client = new System.Net.Http.HttpClient())
        {
            var url = "http://google.com";
            var html = await client.GetAsync(url, HttpContext.Response.ClientDisconnectedToken);
        }
    }
    catch (TaskCanceledException e)
    {
        // The client has gone;
        // you can handle this and the request will keep on being processed, but no one is there to see the response
    }
    return View();
}
You can test the snippet above by putting a breakpoint at the start of the function then closing your browser window.
And another snippet, not directly related to your question but useful all the same...
You can also put a hard limit on the amount of time an action can execute for by using the AsyncTimeout attribute. To use this, add an additional parameter of type CancellationToken. This token allows ASP.Net to time out the request if execution takes too long.
[AsyncTimeout(500)] //500ms
public async Task<ActionResult> Index(CancellationToken cancel)
{
    // ASP.Net manages the cancel token.
    // cancel.IsCancellationRequested will be set to true after 500ms
    try
    {
        using (var client = new System.Net.Http.HttpClient())
        {
            var url = "http://google.com";
            var html = await client.GetAsync(url, cancel);
        }
    }
    catch (TaskCanceledException e)
    {
        // ASP.Net has killed the request:
        // Yellow Screen Of Death with System.TimeoutException;
        // the return View() below won't render
    }
    return View();
}
You can test this one by putting a breakpoint at the start of the function (thus making the request take more than 500ms when the breakpoint is hit) then letting it run out.
Doesn't Response.IsClientConnected work fairly well for this? I have just now tried it out, in my case to cancel large file uploads. By that I mean that if a client aborts their (in my case Ajax) requests, I can see that in my Action. I am not saying it is 100% accurate, but my small-scale testing shows that the client browser aborts the request and the Action gets the correct response from IsClientConnected.
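The shape of that check in an upload action, sketched (the chunk size and stream handling are illustrative):
// inside a classic ASP.NET action/handler processing a large upload
byte[] buffer = new byte[64 * 1024];
int read;
while ((read = Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
{
    if (!Response.IsClientConnected)
    {
        return; // the client aborted the Ajax request; stop work early
    }
    // ... consume the chunk ...
}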
It's just as #Darin says. HTTP is a stateless protocol which means that there are no way (by using HTTP) to detect if the client is still there or not. HTTP 1.0 closes the socket after each request, while HTTP/1.1 can keep it open for a while (a keep alive timeout can be set as a header). That a HTTP/1.1 client closes the socket (or the server for that matter) doesn't mean that the client has gone away, just that the socket hasn't been used for a while.
There are something called COMET servers which are used to let client/server continue to "chat" over HTTP. Search for comet here at SO or on the net, there are several implementations available.
For obvious reasons the server cannot be notified that the client has closed his browser. Or that he went to the toilet :-) What you could do is have the client continuously poll the server with AJAX requests at a regular interval (window.setInterval); if the server detects that it is no longer being polled, it means the client is no longer there.
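Server-side, that polling idea reduces to tracking a last-seen timestamp per client; a sketch (all names invented):
using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

public class HeartbeatController : Controller
{
    private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    // the client calls this on a window.setInterval timer
    public ActionResult Beat(string clientId)
    {
        LastSeen[clientId] = DateTime.UtcNow;
        return new HttpStatusCodeResult(200);
    }

    // anything not seen within the timeout is presumed gone
    public static bool IsAlive(string clientId, TimeSpan timeout)
    {
        DateTime last;
        return LastSeen.TryGetValue(clientId, out last)
            && DateTime.UtcNow - last < timeout;
    }
}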