I have a chat room application implemented in C# with SignalR's WebSocket capability and hosted on Azure, so it connects using WebSockets. I've also implemented the same application to use long polling as its transport method.
What I want to do now is find "tests" with which I can compare the network traffic and latency (or any other major differences) of the two applications. One suggested evaluation is the unnecessary network throughput of the initial connection, but I'm not quite sure how to go about measuring that.
Any comments and suggestions would be highly appreciated.
Would a simple latency display be enough?
An easy way to do this is:
implement a client-to-server call in which you send a browser-computed Date.now()
make the server immediately call a method on the client, sending the value back unchanged
the client computes the difference Date.now() - receivedDate. You now know the round-trip time for a client->server->client request, as in the sketch below.
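For the SignalR case specifically, the server half of that round trip can be a trivial hub method. This is a minimal sketch assuming ASP.NET SignalR 2.x; the hub name, method name and the `pong` client callback are placeholders, and the JavaScript client would compute `Date.now() - sent` inside its `pong` handler.

```csharp
// Minimal sketch (assumed names): the hub echoes the client's timestamp back
// to the caller so the browser can compute the round-trip time itself.
using Microsoft.AspNet.SignalR;

public class LatencyHub : Hub
{
    // Called from JavaScript as latencyHub.server.ping(Date.now()).
    public void Ping(long clientTimestamp)
    {
        // Echo the value unchanged; the client subtracts it from Date.now()
        // in its 'pong' handler to get client->server->client latency.
        Clients.Caller.pong(clientTimestamp);
    }
}
```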
1- Client application sends a request to an HTTP server (ashx file, IHttpHandler).
Remark-1: The client is a .dll which will be hosted by other standalone applications.
Remark-2: The server was first developed as a web service; then, for unknown reasons, it became very slow, so we reimplemented it from scratch.
2- Server registers the request in the database so that a long-running process is performed on the data.
3- Client needs to get notified when the process is finished.
The first thing that crossed my mind was implementing a Timer in the client, although I'm not sure it is OK to do that inside a host application that is not aware of such usage.
Then I wondered whether there might be something useful in TcpListener or the lower layers of socket programming, instead of a high-frequency timer flooding the server with update requests.
So, I appreciate any suggestion on proper way of doing this task.
UPDATE:
After putting my code in order, I've updated the requirements like this:
1- The server "broadcasts" the IDs of clients, like: "Client-a, read your instructions", then "Client-c", "Client-j", ... This is not mission critical; if a client misses the broadcast, it will come back after one minute on a timer tick and check its instructions.
2- The server is going to be hosted on a shared hosting plan, at least at first. The solution must be workable within the limits of shared hosting.
3- Preferably all clients connect to just one socket, with no extra resources used.
Any recommendation is appreciated.
I'm very interested in how financial data is streamed from server to client. I often hear the term 'push-pull' used. I wondered if someone could give me an example (preferably in Java, C# or perhaps JavaScript) of how this is actually achieved? Whenever I have written amateur hobby projects at home, I often end up querying a URL (containing the price) continuously within a while(true) loop with a Thread.sleep(x), even if the price doesn't change.
Thanks in advance
Don't know what you mean by 'streaming financial data', but the concept of push/pull is not limited to the financial sector :)
Generally speaking, the pull strategy means that the client actively gets data through a pre-defined communication channel (in your case a socket to an existing, known URL) and polls this channel for new information.
In contrast, with the push strategy you are notified of any changes: you supply the communication channel and register it with your connection partner. E.g. you have a web service, and your connection partner posts information to that web service whenever it sees fit. See http://en.wikipedia.org/wiki/Observer_pattern for this concept.
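To make the difference concrete, here is a toy C# illustration, not tied to any real market-data API; `PriceFeed` and everything on it are invented for the example.

```csharp
// Toy illustration of pull vs push; PriceFeed is a made-up stand-in for a data source.
using System;
using System.Threading;

class PriceFeed
{
    private decimal _current = 100m;

    // Push channel: interested parties register here and are called on changes.
    public event Action<decimal> PriceChanged;

    // Pull channel: callers ask whenever they feel like it, changed or not.
    public decimal GetCurrentPrice() { return _current; }

    public void Publish(decimal price)
    {
        _current = price;
        Action<decimal> handler = PriceChanged;
        if (handler != null) handler(price); // notify subscribers (push)
    }
}

class PullVsPush
{
    static void Main()
    {
        PriceFeed feed = new PriceFeed();

        // Pull: the client polls on its own schedule, even if nothing changed.
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine("pulled: " + feed.GetCurrentPrice());
            Thread.Sleep(1000);
        }

        // Push: the client registers a callback and is only told when something changes.
        feed.PriceChanged += price => Console.WriteLine("pushed: " + price);
        feed.Publish(101.25m); // the source decides when to notify
    }
}
```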
Hope this helps a bit.
If the client works over HTTP, the exchange is always initiated by the client, i.e. the client requests new updates and the server sends them. If the client is a thin client (i.e. an application running in the browser), the modern way is to use AJAX to retrieve data without refreshing the page. But again the initiative is on the client side; the user just doesn't see it. It is done on a scheduled basis using JavaScript.
The most "real time" approach is the HTTP tunneling technique: the client performs an HTTP GET to a special URL mapped to a servlet that does not close the connection; it just holds it open. When the server has something to send to the client, it writes to the stream. So you get server-to-client push, but the initial connection is still made by the client.
You were doing pull. Pulling is when the client requests the data from the server and the server acts on that request.
If the server had sent you data on its own whenever new data arrived, that would have been push.
So the difference is: push is initiated by the server and pull is initiated by the client.
Financial data is often transmitted with software like TIBCO Rendezvous. A publisher sends the message to a daemon, and listeners that subscribed to that subject get the message from the daemon.
WebSockets
EventSource
These are the two web-based push techniques.
As for browser support:
Chrome/Safari/Firefox 6 support both of these.
Opera supports EventSource and WebSockets but has the latter disabled by default.
Firefox 4 supports WebSockets but has them disabled by default.
IE < 10 supports neither; IE10 might support one if you're lucky.
There are many pull techniques, including plain HTTP requests and AJAX.
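To give a flavour of what the EventSource option looks like on the server, here is a minimal sketch assuming ASP.NET; the handler name and `GetLatestUpdate` are placeholders, and blocking a request thread like this is only for illustration. The browser side would simply do `new EventSource('/EventStreamHandler.ashx')`.

```csharp
// Hypothetical Server-Sent Events endpoint for a browser EventSource client:
// keep the response open and write "data: ...\n\n" frames as updates occur.
using System.Threading;
using System.Web;

public class EventStreamHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/event-stream";
        context.Response.CacheControl = "no-cache";

        while (context.Response.IsClientConnected)
        {
            string update = GetLatestUpdate();                // assumed data source
            context.Response.Write("data: " + update + "\n\n");
            context.Response.Flush();                         // push the frame out immediately
            Thread.Sleep(1000);
        }
    }

    private string GetLatestUpdate()
    {
        return System.DateTime.UtcNow.ToString("o");          // placeholder payload
    }
}
```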
I'm writing a client/server architecture where there are going to be possibly hundreds of clients over multiple virtual machines, mostly on the intranet but some in other locations.
Each client will be gathering data constantly and sending a message to a server every second or so. Each message will probably be about 128 characters or so in length.
My question is, for this architecture where I am writing both client and server in .NET, should I go with WCF or with some socket code I've written previously? I need scalability (which the socket code was written with in mind), reliability, and simply the ability to handle that many messages.
I would not make a final decision without performing some proof of concept. Create a very simple service, host it and use a stress test to get real performance results, then validate the results against your requirements. You mentioned the number of messages, but you didn't mention the expected response time. There is a similar question currently being discussed on the MSDN forum which complains about the slow response time of WCF compared to sockets.
Other requirements are not directly mentioned in your post, so I will make some assumptions for best performance:
Use netTcpBinding - best performance, binary encoding, requires .NET server / clients. I guess you are going to use net.tcp because your other choice was direct socket programming.
Don't use security if you don't have to - it reduces performance. This is probably not possible for clients outside your intranet.
Reuse the proxy on clients if possible. Opening a TCP connection is expensive; if you reuse the same proxy you will have a single connection per proxy. This will affect the instancing of your services - by default a single service instance will handle all requests from a single proxy.
Set service throttling so that your service host is ready for many clients (a sketch follows below).
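As a rough illustration of the last two points, here is a sketch of a self-hosted service using netTcpBinding without security and with raised throttling limits. The contract, address and numbers are assumptions for the example, not recommendations.

```csharp
// Hypothetical sketch: host a WCF service over net.tcp and raise the default
// throttling limits so the host can cope with many concurrent clients.
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
interface IMessageService
{
    [OperationContract(IsOneWay = true)]
    void Send(string message); // the ~128-character message from each client
}

class MessageService : IMessageService
{
    public void Send(string message) { /* persist or process the message */ }
}

class Program
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(MessageService),
            new Uri("net.tcp://localhost:9000/messages")); // placeholder address

        // No security, per the note above; binary encoding comes with net.tcp.
        host.AddServiceEndpoint(typeof(IMessageService),
            new NetTcpBinding(SecurityMode.None), "");

        // Placeholder limits; tune them from your stress-test results.
        ServiceThrottlingBehavior throttle = new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 256,
            MaxConcurrentSessions = 512,
            MaxConcurrentInstances = 512
        };
        host.Description.Behaviors.Add(throttle);

        host.Open();
        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```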
Also, you should make some decisions about load balancing. Load balancing for WCF net.tcp connections requires sticky sessions (session affinity) so that, after opening the channel, a client always calls the service on the same server (because the instance of that service was created only on a single server).
100 requests per second does not sound like much for a WCF service, especially with that small a payload. It should be quite quick to put together a simple test: a WCF service with one echo method that just returns the input, and a client with a bunch of threads calling it in a loop.
If you already have a working socket implementation you might keep it, but otherwise you can pick WCF and spend your precious development time elsewhere.
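A sketch of that kind of throwaway stress client, assuming an `IEchoService` contract with a single `Echo(string)` operation and a placeholder net.tcp address; the thread and call counts are arbitrary.

```csharp
// Hypothetical stress test: N threads hammer an Echo operation through a
// shared ChannelFactory and report the average latency per call.
using System;
using System.Diagnostics;
using System.ServiceModel;
using System.Threading;

[ServiceContract]
interface IEchoService
{
    [OperationContract]
    string Echo(string input);
}

class StressClient
{
    static void Main()
    {
        ChannelFactory<IEchoService> factory = new ChannelFactory<IEchoService>(
            new NetTcpBinding(SecurityMode.None),
            "net.tcp://localhost:9000/echo"); // placeholder address

        const int threads = 10, callsPerThread = 1000;
        long totalMs = 0;
        Thread[] workers = new Thread[threads];

        for (int i = 0; i < threads; i++)
        {
            workers[i] = new Thread(() =>
            {
                IEchoService proxy = factory.CreateChannel(); // one channel per thread
                Stopwatch sw = Stopwatch.StartNew();
                for (int c = 0; c < callsPerThread; c++)
                    proxy.Echo(new string('x', 128));         // ~128-character payload
                sw.Stop();
                Interlocked.Add(ref totalMs, sw.ElapsedMilliseconds);
                ((IClientChannel)proxy).Close();
            });
            workers[i].Start();
        }
        foreach (Thread t in workers) t.Join();

        Console.WriteLine("Average per call: {0:F2} ms",
            (double)totalMs / (threads * callsPerThread));
    }
}
```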
From my experience with WCF, I can tell you that its performance under high load is very good. In particular, you can choose between several bindings to meet your requirements for different scenarios (e.g. httpBinding for outside communication, netPeerTcpBinding on a local network).
I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice - but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restrictive networks.
If you use some kind of messaging server (JMS for Java, not sure for C#) then your messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server): that can obviously be done by invoking a web method that sends the data as a parameter.
2nd part (server periodically sends out updates to the agents): you can still do that through regular client pulls, with some sort of web service method that "asks" for the updates since the last pull (some sort of timestamp to pick up the updates it missed), as in the sketch below.
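A small sketch of that timestamp-based pull, purely illustrative: `IUpdateService`, `GetUpdatesSince` and `UpdatePoller` are assumed names, not an existing API.

```csharp
// Hypothetical poller: the agent remembers when it last asked and the server
// returns only what is newer than that watermark.
using System;
using System.Collections.Generic;

// Assumed contract exposed by the server-side web service.
interface IUpdateService
{
    IList<string> GetUpdatesSince(DateTime lastPull);
}

class UpdatePoller
{
    private DateTime _lastPull = DateTime.MinValue;
    private readonly IUpdateService _service; // assumed web-service proxy

    public UpdatePoller(IUpdateService service) { _service = service; }

    // Called from a timer (e.g. once a minute): fetch only what was missed.
    public void Poll()
    {
        DateTime requestedAt = DateTime.UtcNow;
        IList<string> updates = _service.GetUpdatesSince(_lastPull);
        foreach (string update in updates)
            Apply(update);
        _lastPull = requestedAt; // advance the watermark only after a successful pull
    }

    private void Apply(string update) { /* act on the instruction */ }
}
```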
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might go "offline" quite often. What will the server do then if delivery fails? It's usually a tough question/decision, especially if you're not sure whether it's an intended "going offline" or a system/network failure, etc.
Imagine a WinForms client app that displays fairly complex calculated data fetched from a server app with .Net Remoting over a HTTPChannel.
Since the client app might be running for a whole workday, I need a method to notify the client that new data is available so the user is able to start a reload of the data when he needs to.
Currently I am using remoted .NET events, serializing the event to the client and then rethrowing the event on the client side.
I am not very happy with this setup and plan to reimplement it.
Important for me is:
.Net 2.0 based technology
ease of use
low complexity
robust enough to remain functional after a server or client restart
When limited to .Net 2.0, how would you implement such a feature? What technologies / libraries would you use?
I am looking for inspiration on how to attack the problem.
Edit:
The client and server exist in the same organisation, typically a LAN, perhaps a WAN/VPN situation.
This mechanism should only make the client aware that there is new data available. I'd like to keep remoting for getting the actual data to the client, since that is working pretty well. MSMQ comes with Windows, doesn't it? So it should be OK to use it, but I'm open to any alternative.
I've implemented a similar notification mechanism using MSMQ. The client machine opens a local, public queue, and then advises the server of its queue name. When changes occur, the server pushes notifications into all the client queues that it has been made aware of. This way the client will know that data is ready, even if it wasn't running when the notification was sent.
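A minimal sketch of that scheme using System.Messaging; the queue path and message body are placeholders, and a private queue is used here for simplicity even though the setup above uses a public one.

```csharp
// Hypothetical MSMQ notification: the client owns a local queue and the server
// drops a small "data ready" message into it.
using System;
using System.Messaging;

static class ChangeNotifications
{
    const string QueuePath = @".\Private$\DataReadyNotifications"; // placeholder path

    // Client side: make sure the queue exists, then block until a notification arrives.
    public static void WaitForNotification()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message message = queue.Receive();      // blocks until the server pushes
            string reason = (string)message.Body;   // e.g. "new data available"
        }
    }

    // Server side: push a notification into the queue path the client advertised.
    public static void Notify(string clientQueuePath)
    {
        using (MessageQueue queue = new MessageQueue(clientQueuePath))
        {
            queue.Send("new data available");
        }
    }
}
```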
The only downside is that it requires MSMQ on the clients, so this may not work if you don't have that kind of control over your client's machines.
For an extra level of redundancy (for example, if a client machine is completely down, and therefore the client queue is unavailable) you could queue notifications on the server prior to dissemination to clients. Notifications in the server queues are only removed when the client is successfully contacted (or perhaps after 3 failed attempts, etc.)
Also in that regard, if the server fails to deliver messages to a client a measured number of times, over a measured period of time, then support entities are notified, error alerts go out, and the client queue is removed from the list of destinations. When I say "measured" I mean a frequency/duration that makes sense to the setting. In my case, it was 5 retries with 5 minute intervals between attempts.
It might also make sense to have the client "renew" its notification subscription at intervals. If a renewal doesn't occur, then eventually the client queue is removed from the destination list by a "groomer" process in the service.
It sounds as though you need to implement a message-queue based solution. It is easy to implement, can survive reboots, and the technology is mature both on the server (MSMQ, MQSeries) and on the client (System.Messaging).
If you can't find anything built-in, and assuming you know the addresses of all the clients, you could send them a UDP message when data changes. Using UdpClient, this is very easy. The datagram doesn't even need to contain any data if the client app can assume that any UDP data on a certain port means it needs to get new data from the server.
If necessary, you can even make this a broadcast packet (if you don't know who the clients are and they are on the same subnet as the server), as long as the server isn't too "chatty".
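A minimal sketch of that UDP nudge; the port number is a placeholder, and the datagram content is irrelevant since its arrival is the signal.

```csharp
// Hypothetical UDP notification: the server sends a tiny datagram to a known
// port and the client treats any packet on that port as "go fetch fresh data".
using System.Net;
using System.Net.Sockets;

static class UdpNudge
{
    const int Port = 8125; // placeholder port

    // Server side: tell one client that data changed. For a broadcast, set
    // udp.EnableBroadcast = true and send to IPAddress.Broadcast instead.
    public static void Notify(IPAddress client)
    {
        using (UdpClient udp = new UdpClient())
        {
            byte[] payload = new byte[] { 1 }; // content doesn't matter
            udp.Send(payload, payload.Length, new IPEndPoint(client, Port));
        }
    }

    // Client side: block until any datagram arrives, then refresh via remoting.
    public static void WaitForNudge()
    {
        using (UdpClient udp = new UdpClient(Port))
        {
            IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            udp.Receive(ref sender);
            // ...now call the existing remoting service to reload the data.
        }
    }
}
```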
Whatever solution you decide on, I would urge you to avoid having the clients poll. This will create a lot of unnecessary network traffic and still won't perform all that well.
I would usually use a UI timer on the client to periodically hit the server to see if there is new or updated data. (This assumes you have a mechanism to identify new data, such as timestamps on new rows, file timestamps, or a table of last-calculated dates.)
That way the server doesn't have to know about the clients. The clients can check at their leisure, etc.