I have a TCP socket application where the clients send huge string messages to the server at the same time, and the server writes those messages into an Access DB. When there are many clients, the server cannot handle each client properly and sometimes shuts itself down.
Is there any way to tell a client's thread, before it sends its message, to wait in a queue if another client is currently being served? That way the server would not have to handle, for example, 30 clients' demands at the same time.
For example:
Client 1 sends a message => server processes client 1's demand.
Client 2 waits for client 1's demand to complete, then sends its message => server processes client 2's demand.
Client 3 waits for client 2's demand to complete, and so on.
My problem only appears when I use the Access DB. Opening the Access connection, saving the data into tables, and closing the DB takes time, and the server goes haywire :) If I don't use the Access DB I can receive huge messages with no problem.
Yes, you can do that, however that's not the most efficient way of doing it. Your scheme is single-threaded.
What you want to do is create a thread pool, accept messages from multiple clients, and process them on separate threads.
If that's too complicated, you can have a producer-consumer queue within your server: all incoming messages are stored in a queue while your server processes them on a first-come, first-served basis.
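A minimal sketch of such a producer-consumer queue, assuming each client handler simply hands its message to a single writer thread; the WriteToAccess method is a placeholder for your own OleDb insert:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class MessageQueueServer
    {
        // All client threads drop their messages here.
        static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

        static void Main()
        {
            // Single consumer thread: only one Access connection is ever open,
            // and messages are written first come, first served.
            var consumer = new Thread(() =>
            {
                foreach (var message in Queue.GetConsumingEnumerable())
                {
                    WriteToAccess(message); // placeholder: open connection, INSERT, close
                }
            }) { IsBackground = true };
            consumer.Start();

            // ... accept sockets as before; each client handler just calls:
            // Queue.Add(receivedMessage);
        }

        static void WriteToAccess(string message)
        {
            // open the Access connection, save the data, close the connection
        }
    }

The clients never have to coordinate with each other; the server accepts everything immediately and the queue absorbs the burst while the database is written to at its own pace.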
I think you should consider using a web server for your application, and replacing your protocol with HTTP. Instead of sending huge strings on a TCP stream, just POST the string to the server using your favorite HttpClient class.
By moving to HTTP you more or less solve all your performance issues. The web server already knows how to handle multiple long requests so you don't need to worry about that. Since you're sending big strings, the HTTP overhead is not going to affect your performance.
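For illustration, a minimal sketch of posting the string with System.Net.Http.HttpClient; the URL is a placeholder:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class HttpSender
    {
        static readonly HttpClient Client = new HttpClient();

        public static async Task SendAsync(string hugeMessage)
        {
            // POST the whole string as the request body; the web server's
            // worker pool handles concurrent clients for you.
            var content = new StringContent(hugeMessage, Encoding.UTF8, "text/plain");
            HttpResponseMessage response = await Client.PostAsync("http://myserver/messages", content);
            response.EnsureSuccessStatusCode();
        }
    }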
Alright, so I'm creating an application in which the clients need to communicate with the server as soon as it launches. I have two ideas for this: (A) the client sends a message to the server over TCP/IP telling it what it needs, and the server sends that back over the connection, or (B) the client just downloads a file from a web server.
Since both are transferring the same file over the network, both should be the same speed, right? Well, I don't know, that's why I'm asking. I know somebody will probably say "oh well, try it yourself", and I'm sure I could time both approaches, but I don't have my server set up yet, and I would change how it operates severely if I knew the answer ahead of time.
So, is it faster to download from a web server, or to contact a server and have it send the information over? And if there's a better idea for getting info from the server, let me know!
Your two operations are, from a network perspective, identical:
Client establishes TCP socket to server
Client sends request for file
Server responds with file
Using HTTP as the format of the request doesn't change the nature of the operation. You do have to deal with the overhead of going through the web server logic, but that is almost certainly negligible in comparison to the actual network operation.
I need to create a client-server application on a local network with the following functionality:
single server (data access and so on ...)
multiple clients (ordinarily 3-5, up to 20)
each client must authorize with the server (it needs to check what rights the client has)
clients send requests to the server (ordinarily less than 1 KB, up to 3-5 KB) and get responses (ordinarily 30-100 KB, sometimes a large amount of data, up to 1-2 MB)
after certain queries from a client, the server notifies all clients and sends them new and updated data (so the server must know how many clients are connected)
if the network connection dies, the client must reconnect
I think NetMQ with Protobuf will be enough for my purposes. Looking at the documentation, the most suitable pattern for my task seems to be the Majordomo pattern (http://zguide.zeromq.org/page:all#Service-Oriented-Reliable-Queuing-Majordomo-Pattern), where each client is a worker and the server's worker is a client at the same time.
But isn't this solution too complicated?
Are there simpler ways to solve this problem (a simpler pattern, or maybe something based on WCF)?
I'm working on a game that depends on the standard System.Net.Sockets library for networking. What's the most efficient and standardized "system" I should use? Should the client send data requests every set amount of seconds, or when a certain event happens? My other question: is a port forward required for a client to listen and receive data? How is this done, is there another socket created specifically for listening only on the client? How can I send messages and listen on the same socket on the client? I'm having a difficult time grasping the concept of networking; I started messing with it two days ago.
Should the client send data requests every set amount of seconds, or when a certain event happens?
No. Send your message as soon as you can. The socket stack has algorithms that determine when data is actually sent, for instance the Nagle algorithm.
However, if you send a LOT of messages it can be beneficial to enqueue everything into the same socket method call. You would need to send several thousand messages per client per second for that to give you any benefit.
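For illustration only (this goes beyond the answer above), a small sketch of pushing several queued messages into one call using the Socket.Send(IList<ArraySegment<byte>>) overload; the newline delimiter is an assumption:

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    static class BatchedSend
    {
        // Sends many small messages with a single Send call instead of one call each.
        public static void SendBatch(Socket socket, IEnumerable<string> messages)
        {
            var buffers = new List<ArraySegment<byte>>();
            foreach (var msg in messages)
            {
                // assumed framing: one message per line
                buffers.Add(new ArraySegment<byte>(Encoding.UTF8.GetBytes(msg + "\n")));
            }
            socket.Send(buffers); // one call for the whole batch
        }
    }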
My other question, is a port forward required for a client to listen and receive data?
No. Once a socket connection has been established, it's bidirectional, i.e. both end points can send and receive information without screwing something up for the other end point.
But to achieve that you typically have to use asynchronous operations so that you can keep receiving all the time.
How is this done, is there another socket created specifically for listening only on the client?
The server has a dedicated socket (a listener) whose only purpose is to accept client sockets. When the listener has accepted a new connection from a remote end point, you get a new socket object which represents the connection to the newly connected endpoint.
How can I send messages and listen on the same socket on the client?
The easiest way is to use asynchronous receives and blocking sends.
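A minimal sketch of that approach on the client side, assuming an already connected Socket and ignoring message framing:

    using System;
    using System.Net.Sockets;
    using System.Text;

    class ClientConnection
    {
        readonly Socket _socket;
        readonly byte[] _readBuffer = new byte[8192];

        public ClientConnection(Socket connectedSocket)
        {
            _socket = connectedSocket;
            // Keep an asynchronous receive pending at all times.
            _socket.BeginReceive(_readBuffer, 0, _readBuffer.Length, SocketFlags.None, OnReceive, null);
        }

        void OnReceive(IAsyncResult ar)
        {
            int bytesRead = _socket.EndReceive(ar);
            if (bytesRead == 0) return; // remote side closed the connection

            Console.WriteLine("Received: " + Encoding.UTF8.GetString(_readBuffer, 0, bytesRead));

            // Post the next receive so the client never stops listening.
            _socket.BeginReceive(_readBuffer, 0, _readBuffer.Length, SocketFlags.None, OnReceive, null);
        }

        // Blocking send; can be called from any thread while a receive is pending.
        public void Send(string message)
        {
            _socket.Send(Encoding.UTF8.GetBytes(message));
        }
    }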
If you do not want to take care of everything by yourself, you can try my Apache licensed library http://sharpmessaging.net.
Creating a stable, high quality server will require you to have a wealth of knowledge on networking and managing your objects.
I highly recommend you start with something smaller before attempting to create your own server from scratch, or at the very least play around with a server for a different game that's already made, attempt to improve upon it or add new features.
That being said, there are a few ways you can set up the server. If you plan on having more than a couple of clients, you generally don't want them all to send data whenever they feel like it, as this can bog down the server. You want to structure it in such a way that the client sends as little data as possible on a scheduled basis, and the server can request more when it's ready. How that's set up and structured is up to you.
A server generally has to have a port forwarded on the router in order for requests from the internet to make it to the server, and here is why. When your computer makes a connection to a website (stackoverflow, for example), it sends out a request on a random port; the router remembers the port you sent out on and who sent it (you), so when the website sends back the information you requested, the router knows you wanted that data and sends it back to you. In the case of RUNNING a server, there is no outbound request to a client (Jack, for example), so the router doesn't know where Jack's request is supposed to go. By adding a port forwarding rule in the router you're saying that all information sent to port 25565 (for example) is supposed to go to your server.
Clients generally do not need to forward ports because they are only making outbound requests and receiving data.
Server starts and begins listening on port 25565
Client starts and initiates a connection to the server on port 25565
Server responds to client on whatever port the client used to connect (this is done behind the scenes in sockets)
Communication continues from here.
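A minimal sketch of that flow with TcpListener/TcpClient, using port 25565 from the example above and localhost as an assumption:

    using System;
    using System.Net;
    using System.Net.Sockets;

    class Flow
    {
        static void Main()
        {
            // Server: listen on 25565.
            var listener = new TcpListener(IPAddress.Any, 25565);
            listener.Start();

            // Client: initiate a connection (localhost here just for the sketch).
            var client = new TcpClient();
            client.Connect("127.0.0.1", 25565);

            // Server: accept the pending connection; replies flow back over this
            // socket to whatever ephemeral port the client connected from.
            TcpClient serverSide = listener.AcceptTcpClient();
            Console.WriteLine("Connected: " + serverSide.Client.RemoteEndPoint);
        }
    }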
I've begun learning TCP Networking with C#. I've followed the various tutorials and looked at the example code out there and have a working TCP server and client via async connections and writes/reads. I've got file transfers working as well.
Now I'd like to be able to track the progress of the transfer (0% -> 100%) on both the server and the client. When initiating the transfer from the server to the client I send the expected file size, so the client knows how many bytes to expect, and I imagine I can easily do curCount / totalCount on the client. But I'm a bit confused about how to do this for the server.
How accurately can the server tell the transfer situation for the client? Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and network writing)? Or should I have the client relay its completion back to the server?
I'd like to know this for when to close the connection, as well as be able to visually display progress. Should the server trust the client to close the connection (barring network errors/timeouts/etc)? Or can the server close the connection as soon as it's written to the stream?
There are two distinct percentages of completion here: the client's and the server's. If you consider the server to be done when it has sent the last byte, the server's percentage will always be at least as high as the client's. If you consider the server to be done when the client has processed the last byte, the server's percentage will lag the client's. No matter what you do, you will have differing values on both ends.
The values will differ by the amount of data currently queued in the various buffers between the server app and the client app. This buffer space is usually quite small. AFAIK the maximum TCP window size is by default 200ms of data transfer.
You probably don't need to worry about this issue at all, because the progress values of both parties will be tightly tied to each other.
Should I guess based on the server's own status (either via the networkStream.BeginWrite() callback, or via the chunk loading from disk and networking writing)?
This is an adequate solution.
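For example, a sketch of tracking the server's own progress while sending the file in chunks (the names and the 64 KB chunk size are assumptions):

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class FileSender
    {
        // Sends a file in chunks and reports server-side progress after each chunk.
        public static async Task SendFileAsync(NetworkStream stream, string path, IProgress<double> progress)
        {
            var buffer = new byte[64 * 1024];
            long total = new FileInfo(path).Length;
            long sent = 0;

            using (var file = File.OpenRead(path))
            {
                int read;
                while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    await stream.WriteAsync(buffer, 0, read);
                    sent += read;
                    // "Progress" here means bytes accepted by the local TCP stack,
                    // not bytes the client has actually processed.
                    progress.Report(100.0 * sent / total);
                }
            }
        }
    }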
Or should I have the client relay back to the server the client's completion?
This would be the second case I described in my first paragraph. It is also acceptable, although it does not necessarily give a better result and it adds more overhead. I cannot imagine a situation right now in which I'd do it this way.
Should the server trust the client to close the connection (barring network errors/timeouts/etc)?
When one party of a TCP exchange is done sending, it should shut down the socket for sending (using Socket.Shutdown(SocketShutdown.Send)). This will cause the other party to read zero bytes and know that the transfer is done.
Before closing a socket, it should be shut down. If the Shutdown call completes without error it is guaranteed that the remote party has received all data and that the local party has received all data as well.
Or can the server close the connection as soon as it's written to the stream?
First, shut down, then close. Closing alone does not imply successful transfer.
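A small sketch of that graceful-close sequence on the sending side; the drain loop that waits for the peer to shut down as well is an extra safeguard, an assumption beyond the answer above:

    using System.Net.Sockets;

    static class GracefulClose
    {
        public static void FinishSend(Socket socket)
        {
            // Tell the peer we are done sending; its next read returns 0 bytes.
            socket.Shutdown(SocketShutdown.Send);

            // Wait until the peer shuts down too: Receive returns 0 when the
            // remote end point has finished sending.
            var drain = new byte[1];
            while (socket.Receive(drain) > 0) { /* discard any trailing data */ }

            // Only now release the connection.
            socket.Close();
        }
    }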
I need to create a server process which can push high-frequency data (1000 updates per second) to around 50 clients. I'm thinking the best way to do this is using async sockets with the SocketAsyncEventArgs type.
The client -> server connections will be long running at least several days to indefinite. I plan to have a server process listening and the clients connect and the server starts pushing the data to the clients.
Can someone point me to or show me an example of how to do this? I can't find any example showing a server process pushing an object to a client.
EDIT: This is over a gigabit LAN, using Windows Server with 16 cores and 24 GB RAM.
thanks
First, some more requirements from your side are needed. You have a server with lots of muscle, but it will fail miserably if you don't do what has to be done.
1. Can the client live without some of the data? I mean, does the stream of data need to reach the other side in the proper order, without any drops?
2. How big is 'the data'? A few bytes, or more?
3. Fact: the scheduling interval on Windows is 10 msec.
4. Fact: no matter WHEN you send, when clients receive it depends on lots of stuff - network config, number of routers in between, client processor load, and so on. So you need some kind of timestamping here.
Depending on all this, you could design a priority queue with one thread servicing it and sending out UDP datagrams to each client. Also, since (4) is in effect, you can 'clump' some of your data together and send 10 batches per second of 100 updates each.
If you want to achieve something else, then a LAN with lots of quality network equipment will be required here.
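For illustration, a rough sketch of the 'clumping' idea: one sender thread drains a queue every 100 ms and sends a single timestamped UDP datagram to each client (the binary layout and the double payload type are assumptions):

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class BatchedUdpSender
    {
        readonly ConcurrentQueue<double> _updates = new ConcurrentQueue<double>();
        readonly UdpClient _udp = new UdpClient();
        readonly IPEndPoint[] _clients;

        public BatchedUdpSender(IPEndPoint[] clients)
        {
            _clients = clients;
            new Thread(SendLoop) { IsBackground = true }.Start();
        }

        public void Enqueue(double update)
        {
            _updates.Enqueue(update);
        }

        void SendLoop()
        {
            while (true)
            {
                Thread.Sleep(100); // roughly 10 datagrams per second

                using (var ms = new MemoryStream())
                using (var writer = new BinaryWriter(ms))
                {
                    writer.Write(DateTime.UtcNow.Ticks); // timestamp for the receiver
                    double value;
                    while (_updates.TryDequeue(out value))
                    {
                        writer.Write(value); // clump all pending updates into one datagram
                    }
                    writer.Flush();

                    byte[] datagram = ms.ToArray();
                    foreach (var client in _clients)
                    {
                        _udp.Send(datagram, datagram.Length, client);
                    }
                }
            }
        }
    }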
If you want to use .NET Sockets to create this server-client project, then this is a good outline of what's needed:
Since the server will be transferring data to several clients simultaneously, you'll need to use the asynchronous Socket.Beginxxx methods or the SocketAsyncEventArgs class.
You'll have clients connect to your server. The server will accept those connections and then add the newly connected client to an internal clients list.
You'll have a thread running within the server that periodically sends notifications to all sockets in the clients list. If any exceptions/errors occur while sending data to a socket, that client is removed from the list (a sketch of this appears after this outline).
You'll have to make sure that access to the clients list is synchronized since the server is a multithreaded application.
You don't need to worry about buffering your send data since the TCP stack takes care of that. If you do not want to buffer your data at all (i.e. have the socket send data immediately), then set Socket.NoDelay to true.
It doesn't seem like you need any data from your clients, but if you do, you'd have to make sure your server has a Socket.BeginReceive loop if using the Socket.BeginXXX pattern, or a Socket.ReceiveAsync loop if using SocketAsyncEventArgs.
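A minimal sketch of the accept/broadcast part of the outline above (synchronized clients list, broadcast thread, dropping failed sockets); the payload and the 1 ms pacing are placeholders:

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;

    class PushServer
    {
        readonly List<Socket> _clients = new List<Socket>();
        readonly object _sync = new object();

        public void Start(int port)
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, port));
            listener.Listen(100);
            listener.BeginAccept(OnAccept, listener);

            new Thread(BroadcastLoop) { IsBackground = true }.Start();
        }

        void OnAccept(IAsyncResult ar)
        {
            var listener = (Socket)ar.AsyncState;
            Socket client = listener.EndAccept(ar);
            lock (_sync) { _clients.Add(client); }    // synchronized access to the list
            listener.BeginAccept(OnAccept, listener); // keep accepting new clients
        }

        void BroadcastLoop()
        {
            byte[] payload = Encoding.UTF8.GetBytes("update\n"); // placeholder notification
            while (true)
            {
                lock (_sync)
                {
                    for (int i = _clients.Count - 1; i >= 0; i--)
                    {
                        try { _clients[i].Send(payload); }
                        catch (SocketException) { _clients.RemoveAt(i); } // drop failed client
                    }
                }
                Thread.Sleep(1); // pacing; actual resolution depends on the OS timer
            }
        }
    }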
Once you have the connection and transmission of data between server and client going, you then need to worry about serialization and deserialization of objects between client and server.
Serialization which occurs on the server is easy, since you can use the BinaryFormatter or other encoders to encode your object and dump the data onto the socket.
Deserialization on the other hand, which occurs on the client, can be pretty complex because an object can span multiple packets and you can have multiple objects in one packet. You essentially need a way to identify the beginning and end of an object within the stream, so that you can pluck out the object data and deserialize it.
One way to do this is to embed your data in a well-known protocol, like HTTP, and send it using that format. Unfortunately, this also means you'd have to write an HTTP parser at the client. Not an easy task.
Another way is to leverage an existing encoding scheme like Google's protocol buffers. This approach would require learning how to use the protocol buffers stack.
You can also embed the data in an XML structure and then have a stream-to-XML decoder on your client side. This is probably the easiest approach but the least efficient.
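As a concrete illustration of marking the beginning and end of an object in the stream, here is a length-prefix framing sketch (this particular scheme is an assumption, not one of the three options above) using the BinaryFormatter mentioned earlier:

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Runtime.Serialization.Formatters.Binary;

    static class Framing
    {
        // Writes one object as [4-byte length][BinaryFormatter payload].
        public static void WriteObject(NetworkStream stream, object obj)
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, obj);
                byte[] body = ms.ToArray();
                byte[] header = BitConverter.GetBytes(body.Length);
                stream.Write(header, 0, header.Length);
                stream.Write(body, 0, body.Length);
            }
        }

        // Reads exactly one framed object, even if it spans several packets.
        public static object ReadObject(NetworkStream stream)
        {
            byte[] header = ReadExactly(stream, 4);
            int length = BitConverter.ToInt32(header, 0);
            byte[] body = ReadExactly(stream, length);
            using (var ms = new MemoryStream(body))
            {
                return new BinaryFormatter().Deserialize(ms);
            }
        }

        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            var buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException("Connection closed mid-object");
                offset += read;
            }
            return buffer;
        }
    }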
As you can see, this is not an easy project, but you can get started with the Socket.BeginSend examples here and here, and the SocketAsyncEventArgs example here
Other Tips:
You can improve the reliability of your communication by having the client maintain two connections to the server for redundancy purposes. The reason being that TCP connections take a while to establish, so if one fails, you can still receive data from the other one while the client attempts to reconnect the failed connection.
You can look into using the TcpClient class for the client implementation, since it's mostly reading a stream from a network connection.
What about Rendezvous or 29West? It would save reinventing the wheel. Dunno about AMQP or ZeroMQ; they might work fine too....