Alright, so I'm creating an application in which the clients need to communicate with the server as soon as they launch. I have two ideas for this: (A) the client sends a message to the server over TCP/IP telling it what it needs, and the server sends that back over the same connection, or (B) the client just downloads a file from a web server.
Since both transfer the same file over the network, they should be the same speed, right? Well, I don't know, and that's why I'm asking. I know somebody will probably say "just try it yourself", and I'm sure I could time both approaches at runtime, but I don't have my server set up yet, and knowing the answer ahead of time would seriously change how I design it.
So, is it faster to download from a web server, or to contact my own server and have it send the information over? And if there's a better idea for getting info from the server, let me know!
Your two operations are, from a network perspective, identical:
Client establishes TCP socket to server
Client sends request for file
Server responds with file
Using HTTP as the format of the request doesn't change the nature of the operation. You do have to deal with the overhead of going through the web server logic, but that is almost certainly negligible in comparison to the actual network operation.
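For illustration, here is a minimal sketch of both variants in C# (the host name, port, and file name are made up; only the shape of the two calls matters):

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class FetchComparison
    {
        // Variant A: ask a custom TCP server for the file over a home-grown protocol.
        static async Task<byte[]> FetchOverTcpAsync(string host, int port, string fileName)
        {
            using var client = new TcpClient();
            await client.ConnectAsync(host, port);
            NetworkStream stream = client.GetStream();

            // Send a simple request line; the wire format is entirely up to you.
            byte[] request = Encoding.UTF8.GetBytes($"GET {fileName}\n");
            await stream.WriteAsync(request, 0, request.Length);

            // Read until the server closes its side of the connection.
            using var buffer = new MemoryStream();
            await stream.CopyToAsync(buffer);
            return buffer.ToArray();
        }

        // Variant B: download the same file from a web server.
        static async Task<byte[]> FetchOverHttpAsync(string url)
        {
            using var http = new HttpClient();
            return await http.GetByteArrayAsync(url);
        }

        static async Task Main()
        {
            byte[] a = await FetchOverTcpAsync("example.local", 9000, "config.dat");
            byte[] b = await FetchOverHttpAsync("http://example.local/config.dat");
            Console.WriteLine($"TCP: {a.Length} bytes, HTTP: {b.Length} bytes");
        }
    }

Both paths establish a TCP connection, send a request, and read the bytes back; the HTTP variant just delegates the request/response framing to the web server.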
1- Client application sends a request to an HTTP server (ashx file, IHttpHandler).
Remark-1: The client is a .dll which will be hosted by other standalone applications.
Remark-2: The server was first developed as a web service, but for unknown reasons it became very slow, so we reimplemented it from scratch.
2- The server registers the request in a database so that a long-running process can be performed on the data.
3- The client needs to get notified when the process is finished.
The first thing that crossed my mind was implementing a Timer in the client, although I'm not sure it is OK to do that inside a host application which is not aware of such usage.
Then I wondered whether there might be something useful in TcpListener or the lower layers of socket programming, instead of a high-frequency timer flooding the server with update requests.
So, I'd appreciate any suggestion on the proper way of doing this task.
UPDATE:
After tidying up my code, I've updated the requirements as follows:
1- The server "broadcasts" client IDs, like "Client-a, read your instructions", then "Client-c", "Client-j", ... . This is not mission critical: if a client misses the broadcast, it will come back after one minute on a timer tick and check its instructions.
2- The server is going to be hosted on a shared hosting plan, at least at first, so the solution must work within the limits of shared hosting.
3- Preferably all clients connect to a single socket, with no extra resource usage.
Any recommendation is appreciated.
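For illustration, a rough sketch of the one-minute timer fallback described in point 1, assuming a hypothetical instructions.ashx handler that returns the pending instructions for a client (the URL and names are placeholders):

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class InstructionPoller
    {
        private readonly HttpClient _http = new HttpClient();
        private readonly Timer _timer;
        private readonly string _clientId;

        public InstructionPoller(string clientId)
        {
            _clientId = clientId;
            // Poll once a minute; the broadcast (when it arrives) only makes this faster.
            _timer = new Timer(async _ => await CheckInstructionsAsync(),
                               null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
        }

        private async Task CheckInstructionsAsync()
        {
            // Hypothetical handler URL; the real one would be the ashx endpoint.
            string url = $"http://example.local/instructions.ashx?client={_clientId}";
            string instructions = await _http.GetStringAsync(url);
            if (!string.IsNullOrEmpty(instructions))
            {
                // ...act on the instructions...
            }
        }
    }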
I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to the client, I send the expected file size so the client knows how many bytes to expect, and I imagine it can then easily compute curCount / totalCount. But I'm a bit confused about how to do this for the server.
How accurately can the server track the transfer progress for the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and networking writing)? Or should I have the client relay back to the server the client's completion?
I'd like to know this for when to close the connection, as well as be able to visually display progress. Should the server trust the client to close the connection (barring network errors/timeouts/etc)? Or can the server close the connection as soon as it's written to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte, the server's percentage will lag behind the client's. No matter what you do, you will have differing values on the two ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small. AFAIK the maximum TCP window size is by default 200ms of data transfer.
You probably don't need to worry about this issue at all, because the progress values of both parties will track each other closely.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and networking writing)?
This is an adequate solution.
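A minimal sketch of that server-side approach, assuming you already stream the file in chunks: progress is simply the number of bytes handed to the socket so far divided by the file size (chunk size and helper names are placeholders).

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class FileSender
    {
        // Reports the server-side notion of progress: bytes handed to the socket so far.
        public static async Task SendFileAsync(NetworkStream stream, string path,
                                               IProgress<double> progress)
        {
            long total = new FileInfo(path).Length;
            long sent = 0;
            byte[] buffer = new byte[64 * 1024];

            using FileStream file = File.OpenRead(path);
            int read;
            while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(buffer, 0, read);
                sent += read;
                // This measures what the server has written, not what the client has
                // received; the two can differ by whatever is buffered in flight.
                progress.Report((double)sent / total);
            }
        }
    }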
Or should I have the client relay back to the server the client's completion?
This would be the second case I described in my first paragraph. It is also acceptable, although it adds overhead without necessarily giving a better result. I cannot think of a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party of a TCP exchange is done sending, it should shut down the socket for sending (using Socket.Shutdown(SocketShutdown.Send)). This causes the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the Shutdown call completes without error it is guaranteed that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
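A short sketch of that shutdown-then-close sequence on the sending side (the drain loop is optional, but it shows how the zero-byte read signals completion):

    using System.Net.Sockets;

    static class GracefulClose
    {
        // Call once the sender has written the last byte of the transfer.
        public static void ShutdownThenClose(Socket socket)
        {
            // Signals "no more data"; the peer's next read will eventually return 0 bytes.
            socket.Shutdown(SocketShutdown.Send);

            // Drain anything the peer still sends; Receive returning 0 means the
            // peer has shut down its side as well.
            byte[] buffer = new byte[1024];
            while (socket.Receive(buffer) > 0) { }

            // Only now tear the connection down.
            socket.Close();
        }
    }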
I have a TCP socket application where the clients send huge string messages to the server at the same time, and the server writes those messages into an Access DB. When there are many clients, the server can't handle each client properly, and sometimes it shuts itself down.
Is there any way to tell a client's thread, before it sends its message, to wait in a queue if another client is currently being served? That way the server wouldn't need to handle, for example, 30 clients' requests at the same time.
For example:
Client 1 sends a message => server processes client 1's request.
Client 2 waits for client 1's request to complete, then sends its message => server processes client 2's request.
Client 3 waits for client 2's request to complete, and so on.
My problem only appears when I use the Access DB. Opening the Access connection, saving the data into tables, and closing the DB takes time, and the server goes haywire :) If I don't use the Access DB, I can receive the huge messages with no problem.
Yes, you can do that, however it's not the most efficient way of doing it: your scheme is single-threaded.
What you want to do is create a thread pool, accept messages from multiple clients, and process them on separate threads.
If that's too complicated, you can have a producer/consumer queue within your server: all incoming messages are stored in a queue while your server processes them on a first-come, first-served basis.
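A minimal sketch of such a producer/consumer queue using BlockingCollection, with a single consumer thread doing the Access writes so the database is never hit by many clients at once (SaveToAccess stands in for your existing open/insert/close code):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class MessageQueue
    {
        private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();

        public MessageQueue()
        {
            // Single consumer: messages are written to the DB one at a time,
            // first come first served, regardless of how many clients are sending.
            Task.Run(() =>
            {
                foreach (string message in _queue.GetConsumingEnumerable())
                {
                    SaveToAccess(message);   // your existing Access DB logic
                }
            });
        }

        // Called from each client's receive loop; returns immediately.
        public void Enqueue(string message) => _queue.Add(message);

        private void SaveToAccess(string message)
        {
            // Placeholder for the open-connection / insert / close-connection code.
        }
    }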
I think you should consider using a web server for your application, and replacing your protocol with HTTP. Instead of sending huge strings on a TCP stream, just POST the string to the server using your favorite HttpClient class.
By moving to HTTP you more or less solve all your performance issues. The web server already knows how to handle multiple long requests so you don't need to worry about that. Since you're sending big strings, the HTTP overhead is not going to affect your performance.
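As a rough sketch, the client side after that switch could be as small as this (the /messages URL is just a placeholder for whatever endpoint you expose):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class MessagePoster
    {
        private static readonly HttpClient Http = new HttpClient();

        public static async Task SendAsync(string hugeMessage)
        {
            // POST the string; the web server queues and processes concurrent requests for you.
            var content = new StringContent(hugeMessage, Encoding.UTF8, "text/plain");
            HttpResponseMessage response =
                await Http.PostAsync("http://example.local/messages", content);
            response.EnsureSuccessStatusCode();
        }
    }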
This topic has been discussed a million times before, but let me clarify my needs:
I need a single server which controls a system and contains the necessary functions. Furthermore, there will be "n" clients which represent only the HI/GUI and call server-side functions. The server itself should be able to send data back to the clients and call client-side functions too (like shutdown, exit and so on...).
I have heard about duplex services/contracts (http://msdn.microsoft.com/en-us/library/ms731064.aspx), but I'm not sure how far I'll come with that.
How would you handle this?
I recently made a proof of concept app that made both the server and the client host a WCF service each. The client connects to the server and then, in a handshake call, gives the server the connection information that allows the server to create a separate connection back to the client. It worked a treat with multiple clients on network links ranging from the local LAN to a 64k line at remote sites, all at the same time.
You could use WCF: host the service in IIS on the server and in the application on the client, and let the client register its endpoint with the server.
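Either way, a duplex contract pair along the lines of the MSDN article in the question might look roughly like this (the interface and method names are only illustrative):

    using System.ServiceModel;

    // Operations the clients call on the server.
    [ServiceContract(CallbackContract = typeof(IClientCallback))]
    public interface IControlService
    {
        [OperationContract]
        void Register(string clientName);

        [OperationContract]
        void ExecuteCommand(string command);
    }

    // Operations the server can call back on each connected client (the GUI side).
    public interface IClientCallback
    {
        [OperationContract(IsOneWay = true)]
        void NotifyDataChanged(string data);

        [OperationContract(IsOneWay = true)]
        void Shutdown();
    }

    // On the server side, the current client's callback channel is obtained with:
    //   var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
    //   callback.NotifyDataChanged("...");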
I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents.
Multiple agents installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server.
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to determine for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems, or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restrictive networks.
If you use some kind of messaging server (JMS for Java, not sure for C#), then your messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port at the agent.
IMHO, your best option is the pull approach; it can satisfy your main system requirements as follows:
First part (data needs to get to the server): this can obviously be done by invoking a web method that sends the data as a parameter.
Second part (the server periodically sends out updates to the agents): you can still do that through regular client pulls, using a web service method that "asks" for the updates since the last pull (some sort of timestamp to catch the updates it missed).
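A rough sketch of that pull loop on the agent side, with a hypothetical GetUpdatesSince web method that takes the timestamp of the last successful pull (the URL and method name are made up):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class UpdatePuller
    {
        private static readonly HttpClient Http = new HttpClient();
        private DateTime _lastPull = DateTime.MinValue;

        // Called periodically (e.g. from a timer) by the agent.
        public async Task PullUpdatesAsync()
        {
            // Hypothetical endpoint: returns everything the agent missed since _lastPull.
            string since = Uri.EscapeDataString(_lastPull.ToString("o"));
            string url = $"http://server.example/GetUpdatesSince?since={since}";
            string updates = await Http.GetStringAsync(url);

            // ...apply the updates...

            _lastPull = DateTime.UtcNow;
        }
    }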
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might well go "offline" quite often. What would the server do if a push failed? It's usually a tough question/decision, especially if you're not sure whether it is an intentional "going offline" or a system/network failure, etc.