Webserver forcing browser to open only one parallel connection - C#

Hello, I'm trying to write a webserver in C#.
The server dynamically creates a website based on some templates I defined.
The problem is that you can only access the webpage after entering a password.
So I decided to make the browser open a keep-alive connection and pass every request through it.
That way I can tell logged-in clients apart from clients that are not logged in. The problem is that when Firefox and Google Chrome request the images on the website, they just open another connection from the same IP but on a different port.
My webserver thinks it's another client and sends the login page instead of the requested image.
So every time the website loads, only 1-4 images actually get sent.
Now my question: is there any way to force the browser NOT to open parallel connections?
Or, if that's not possible, how should I deal with the problem?
For those who like to see some code, here is what the core of the server looks like, just to illustrate my problem:
// Accept loop: every incoming TCP connection is treated as a new client.
void ThreadStart()
{
    while (true)
    {
        RunClient(listener.AcceptTcpClient());
    }
}

// 'thtt' is a List<Thread> field that keeps track of the client threads.
void RunClient(TcpClient c)
{
    Thread tht = new Thread(new ParameterizedThreadStart(RunIt));
    tht.IsBackground = true;
    tht.Start(c); // The login page is sent from RunIt...
    thtt.Add(tht);
}
Thanks in advance, Alex

Authenticating an HTTP connection rather than individual requests is wrong, wrong, wrong. Even if you could make the browser reuse a single connection (which you can't, because that's not how HTTP works), you wouldn't be able to count on this being respected by proxies or transparent web caches.
This is (some of) what cookies were invented for. Use them, or some kind of session identifier built into the URLs.
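For illustration, here is a minimal sketch of the cookie approach, using HttpListener instead of the raw TcpClient loop from the question; the port, the /login path, and the in-memory session set are all assumptions:
using System;
using System.Collections.Generic;
using System.Net;

class SessionDemo
{
    // Active session tokens; a real server would also expire these.
    static readonly HashSet<string> sessions = new HashSet<string>();

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // assumed port
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            Cookie cookie = ctx.Request.Cookies["session"];
            bool loggedIn = cookie != null && sessions.Contains(cookie.Value);

            if (!loggedIn && ctx.Request.Url.AbsolutePath == "/login")
            {
                // Password check omitted; on success, issue a token that the
                // browser will send back with EVERY request, no matter which
                // TCP connection or source port that request uses.
                string token = Guid.NewGuid().ToString("N");
                sessions.Add(token);
                ctx.Response.AppendCookie(new Cookie("session", token));
                loggedIn = true;
            }

            // Serve the login page or the requested resource based on
            // 'loggedIn', not on the connection the request came in on.
            ctx.Response.StatusCode = loggedIn ? 200 : 401;
            ctx.Response.Close();
        }
    }
}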

Related

Launching a browser on the end user's machine

We have an internal ASP.NET MVC application which receives requests from an external system. The request looks something like this:
http://test.com/abc/showinfo/12345678990
Controller Action
[Route("abc/showinfo/{somenumber}")]
public ActionResult Showinfo(string somenumber)
{
    if (somenumber.Contains("1234"))
    {
        // Launch Chrome browser
    }
    else
    {
        // Launch IE browser
    }
    return new EmptyResult();
}
I tried Process.Start(url); it works fine on my local box but fails on the dev server.
Is it possible to launch a browser on the end user's system? If yes, please let me know the steps.
Is it possible to launch a browser on the end user's system?
Yes.
You can use PsExec to do it. But do note that if you impersonate credentials, it will send the password over the network in plain text.
psexec \\marklap c:\thebrowserpath\thebrowser.exe
References:
http://windowsitpro.com/powershell/common-ways-run-programs-remote-computers
http://ss64.com/nt/psexec.html
System.Diagnostics.Process.Start("iexplore.exe", "http://google.com");
You can replace http://google.com with your custom URL.
I don't know if it is possible with your solution, but I think it would be more sensible to open a browser on the client as a reaction to an answer from your service, i.e. the following workflow:
1. The client sends a request to your server.
2. Your server processes the request and constructs an answer. This answer contains the URL and the desired browser.
3. The server sends the answer back to the client.
4. The client receives the answer and checks for the URL and the desired browser.
5. The client starts the desired browser with the URL (a sketch follows below).
Depending on the client system (is it some sort of custom application/service?), opening a new browser could be quite simple (Process.Start() or similar). If the whole thing runs in a browser, it could also be just opening a new tab (if it's the same browser). Opening an alternative browser may be tricky, but could be possible, for instance, in Chrome with Native Messaging (https://developer.chrome.com/extensions/nativeMessaging)
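A sketch of step 5, assuming the client is a custom .NET application and assuming the server's answer is a plain "browser|url" string; the endpoint and the response format are made up for illustration:
using System;
using System.Diagnostics;
using System.Net;

class BrowserLauncher
{
    static void Main()
    {
        // Ask the server what to open and with which browser.
        // The response format ("browser|url") is hypothetical.
        string answer;
        using (var wc = new WebClient())
            answer = wc.DownloadString("http://test.com/abc/showinfo/12345678990");

        string[] parts = answer.Split('|');
        string browser = parts[0]; // e.g. "chrome.exe" or "iexplore.exe"
        string url = parts[1];

        // Launch the requested browser on the client machine.
        Process.Start(browser, url);
    }
}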

ASP.NET Web API StreamContent - make browser show download progress

From an ASP.NET Web API 2.x controller I am serving files using an instance of the StreamContent type. When a file is requested, its blob is located in the database and a blob stream is opened. The blob stream is then used as input to a StreamContent instance.
Boiled down, my controller action looks similar to this:
[HttpGet]
[Route("{blobId}")]
public HttpResponseMessage DownloadBlob(int blobId)
{
    // ... find the blob in the DB and open 'myBlobStream' based on the given id

    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(myBlobStream)
    };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentLength = myBlobStream.Length;
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = "foo.txt",
        Size = myBlobStream.Length
    };
    return result;
}
When I hit the endpoint in Chrome (v. 35), it says that it is resolving the host (localhost), and when the file has downloaded it appears in the download bar. However, I am wondering what is needed to enable Chrome (or any other browser) to show the download progress?
I thought this would be fixed by including header information like Content-Type, Content-Length, and Content-Disposition, but from what I have tried, that does not make any difference.
It turned out that my implementation was correct. I closed Fiddler and everything worked as expected. I don't know if Fiddler somehow waits for the entire response to complete before it sends it through its proxy; at least, that would explain why the browser stays in the "resolving host" state until the entire file has been downloaded.
The Web API doesn't "push" information so, unless you have a background thread on your client polling the server for the download status every few seconds or so, this is a bad idea. For a number of reasons in fact:
Increased load on the server to serve multiple requests (imagine if many clients did that at the same time)
Increased data communication from your client (would be important if you were doing this on a mobile phone contract)
etc. (I'm sure I can think of more but it's late)
You might want to consider SignalR for this, although I'm no expert on it. According to the summary in the page I linked:
ASP.NET SignalR is a new library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets, and falls back to other compatible techniques for older browsers. SignalR includes APIs for connection management (for instance, connect and disconnect events), grouping connections, and authorization.
If your Web API can allow it, I suppose a potential alternative would be to first send a quick GET request to receive the size of the file you're about to download and store it in your client. In fact, you could utilise the Content-Length header here to avoid the extra GET. Then do your file download and, while it's happening, your client can report the download progress by comparing how much of the file it has received against the full size of the file it got from the server (see the sketch below).
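As a rough sketch of that idea, assuming a .NET client; the endpoint URL is made up:
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DownloadProgressDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        using (var response = await client.GetAsync(
            "http://localhost/api/blobs/42",           // hypothetical endpoint
            HttpCompletionOption.ResponseHeadersRead)) // don't buffer the body
        {
            long? total = response.Content.Headers.ContentLength;
            var buffer = new byte[81920];
            long received = 0;
            using (Stream body = await response.Content.ReadAsStreamAsync())
            {
                int read;
                while ((read = await body.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    received += read;
                    if (total.HasValue)
                        Console.WriteLine("{0:P0} downloaded", (double)received / total.Value);
                    // ... write 'buffer' out to a file here
                }
            }
        }
    }
}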

FiddlerCore behavior when network is disconnected

I'm handling local requests by using FiddlerCore like this:
private static void FiddlerApplication_BeforeRequest(Session session)
{
    if (session.hostname.ToLower() == "localhost")
        ProcessRequest(session);
}
Everything works well, but when the network is down I get the following message:
"[Fiddler] DNS Lookup for "www.google.com" failed. The system reports that no network connection is available. No such host is known"
My question is:
How should I configure FiddlerCore so when the network is down, I will receive the regular 404 page?
You're noting: When a proxy is present, the browser doesn't show its default error pages. That is a correct statement, and it's not unique to Fiddler.
You're confusing folks because you're talking about a "regular 404 response". That's confusing because the page you're talking about has nothing to do with an HTTP 404: it's the browser's DNS Lookup Failure or Server Unreachable error page, which it shows when a DNS lookup fails or a TCP/IP connection attempt fails. Neither of those is a 404, which is an error code that a server can return only after the DNS lookup succeeds and the TCP/IP connection is successful.
To your question of: How can I make a request through a proxy result in the same error page that would be shown if the proxy weren't present, the answer is that, in general, you can't. You could do something goofy like copying the HTML out of the browser's error page and having the proxy return that, but because each browser (and version) may use different error pages, your ruse would be easily detectable.
One thing you could try is to make a given site bypass the proxy (such that the proxy is only used for the hosts you care about). To do that, you'd create a Proxy Auto-Configuration script file (a PAC file) and have its FindProxyForURL function return DIRECT for everything except the site(s) you want to have go through the proxy. See the Configuration Script section of this post.
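A minimal sketch of such a PAC file; FindProxyForURL and shExpMatch are standard PAC functions, while the host and the proxy port are assumptions:
function FindProxyForURL(url, host) {
    // Send only the host we care about through the local proxy...
    if (shExpMatch(host, "localhost"))
        return "PROXY 127.0.0.1:8888"; // assumed FiddlerCore port
    // ...and let everything else connect directly, so the browser shows
    // its own error pages when the network is down.
    return "DIRECT";
}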
But, stepping back a bit, do you even need a proxy at all? Can't you just run your web server on localhost? When you start Fiddler/FiddlerCore, it listens on localhost:[port]. Just direct your HTTP request there without setting Fiddler as the system proxy.
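For instance, something along these lines (a sketch; the port is arbitrary, and FiddlerCoreStartupFlags.None is used so FiddlerCore does not register itself as the system proxy):
using Fiddler;

class DirectListenerDemo
{
    static void Main()
    {
        // Handle requests as before, but listen on a fixed port WITHOUT
        // becoming the system proxy.
        FiddlerApplication.BeforeRequest += FiddlerApplication_BeforeRequest;
        FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.None);

        // The client then talks to the listener directly:
        using (var wc = new System.Net.WebClient())
        {
            string page = wc.DownloadString("http://localhost:8877/");
        }

        FiddlerApplication.Shutdown();
    }

    private static void FiddlerApplication_BeforeRequest(Session session)
    {
        if (session.hostname.ToLower() == "localhost")
            ProcessRequest(session); // the handler from the question
    }

    static void ProcessRequest(Session session) { /* ... */ }
}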

HTTPHandler does not handle secondary requests

I want to run my personal web sites via an HttpHandler (I have a web server and a static IP at home).
Eventually, I will incorporate a data access layer and domain router into the handler, but for now, I am just trying to use it to return static web content.
I have the handler mapped to all verbs and paths with no access restrictions in IIS 7 on Windows 7.
I have added a little file logging at the beginning of ProcessRequest. As it is the first thing in the handler, the logging tells me whenever the handler is hit.
At the moment, the handler just returns a single web page that I have already written.
The handler itself is mostly just this:
public void ProcessRequest(HttpContext context)
{
    using (FileStream fs = new FileStream(
        context.Request.PhysicalApplicationPath + "index.htm", FileMode.Open))
    {
        fs.CopyTo(context.Response.OutputStream);
    }
}
I understand that this won't work for anything but the one file.
So my issue is this: the HTML file has links to some images in it. I would expect that the browser would come back to the server to get those images as new requests. I would expect those requests to fail (because they'd be mapped to index.htm). But I would expect to see the logging hit at least twice (and potentially hit recursively). However, I only see a single request. The web page comes up and the images are 'X's.
When I refresh the browser, I see another request come through, but only for the root page again. The page is basic HTML, I do not have an asp.net application (nor do I want one, I like HTML/CSS/JS).
What do I have to do to get more than just the first request sent from the browser? I assume I'm just totally off the mark, because I wrote an HTTP module first and, strangely, got the exact same behavior. I'm thinking I need to specify some response headers, but I don't see that in any example.
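For what it's worth, a sketch of the direction the handler would need to take for the image requests to succeed is to serve whatever file each request names instead of always index.htm; the content-type map is abbreviated and the path handling is simplified (no traversal checks):
using System.IO;
using System.Web;

public class StaticFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Map the requested URL path to a file on disk (simplified; a real
        // handler must validate the path to prevent directory traversal).
        string relative = context.Request.Path.TrimStart('/');
        if (relative.Length == 0) relative = "index.htm";
        string physical = Path.Combine(context.Request.PhysicalApplicationPath, relative);

        // Abbreviated content-type map.
        string ext = Path.GetExtension(physical).ToLowerInvariant();
        context.Response.ContentType =
            ext == ".png" ? "image/png" :
            ext == ".jpg" ? "image/jpeg" :
            ext == ".css" ? "text/css" :
            ext == ".js"  ? "application/javascript" : "text/html";

        using (FileStream fs = new FileStream(physical, FileMode.Open, FileAccess.Read))
        {
            fs.CopyTo(context.Response.OutputStream);
        }
    }
}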

Running server-side function as browser closes

Background: I'm creating a very simple chatroom-like ASP.NET page with C# code-behind. The current users and chat messages are displayed in controls located within an AJAX UpdatePanel and, using a Timer, they pull information from a DB every few seconds.
I'm trying to find a simple way to handle setting a user's status to "Offline" when they exit their browser, as opposed to hitting the "Logoff" button. The "Offline" status is currently just a one-char (y/n) IsOnline value.
So far I have looked into window.onbeforeunload in JavaScript, setting a hidden form variable with a function on this event. Of course, the trouble is that I'd still have to test this hidden form variable in my code-behind somewhere to do the final server-side DB query, effectively setting the user offline.
I may be completely overcomplicating this likely simple problem! Of course, I'd appreciate any completely different alternative suggestions.
Thanks
I suspect you are barking up the wrong tree. Remember, it is possible for the user to suddenly lose their internet connection, their browser could crash, or switch off their computer using the big red switch. There will be cases where the server simply never hears from the browser again.
The best way to do this is with a "dead man's switch." Since you said that they are pulling information from the database every few seconds, use that opportunity to store (in the database) a timestamp for the last time you heard from a given client.
Every minute or so, on the server, do a query to find clients that have not polled for a couple of minutes, and set the user offline... all on the server.
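A minimal sketch of that approach; the table and column names and the two-minute cutoff are made up:
using System;
using System.Data.SqlClient;

class PresenceSweeper
{
    // Called on every poll from the page's timer: record that we just
    // heard from this user. (ChatUsers/LastSeenUtc are hypothetical.)
    public static void TouchUser(string userId, SqlConnection conn)
    {
        using (var cmd = new SqlCommand(
            "UPDATE ChatUsers SET LastSeenUtc = @now WHERE UserId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@now", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@id", userId);
            cmd.ExecuteNonQuery();
        }
    }

    // Run every minute or so on the server: anyone silent for a couple
    // of minutes is flipped to offline, without any help from the browser.
    public static void SweepStaleUsers(SqlConnection conn)
    {
        using (var cmd = new SqlCommand(
            "UPDATE ChatUsers SET IsOnline = 'n' " +
            "WHERE IsOnline = 'y' AND LastSeenUtc < @cutoff", conn))
        {
            cmd.Parameters.AddWithValue("@cutoff", DateTime.UtcNow.AddMinutes(-2));
            cmd.ExecuteNonQuery();
        }
    }
}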
JavaScript cannot be reliable, because I can close my browser by abending it.
A more reliable method might be to send periodic "hi I'm still alive" messages from the browser to the server, and have the server change the status when it stops receiving these messages.
I can only agree with Joel here. There is no reliable way for you to know when the HTTP agent wants to terminate the conversation.
