I'm building a network server that starts a lot of AppDomains, to which requests are routed. What is the fastest way to hand off a request payload to one of the AppDomains for processing?
1. Read the payload from the socket into a byte array and marshal it.
2. Marshal the network stream (which inherits from MarshalByRefObject) to the AppDomain.
3. Read the payload, decode it into objects, and marshal the decoded objects.
4. Use named pipes to transfer the byte array.
5. Use loopback sockets.
6. Maybe there is a way to marshal the actual socket connection?
The decoding mostly creates immutable objects that are used to determine how to fulfill the client's request; the AppDomain then creates a response and marshals it back to the host AppDomain, which sends it back through the socket.
The method should prefer less memory over less CPU.
WCF is not an option.
TCP binary remoting is certainly fast. I do not know how much faster it is than raw sockets, which are probably the fastest but a royal PIA.
I have run 1500-2000 requests per second in production using HTTP binary remoting between two boxes. On the same box you should get much higher performance using TCP or a named pipes channel, depending on the CPU cycles it takes to process the data.
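For reference, a minimal sketch of what the server side of a binary TCP remoting setup looks like (RequestProcessor is an illustrative name for the MarshalByRefObject that handles requests):

```csharp
// Requires a reference to System.Runtime.Remoting.dll.
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

class RemotingHost
{
    static void Main()
    {
        // TcpChannel uses the binary formatter by default.
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(RequestProcessor), "processor", WellKnownObjectMode.Singleton);
        Console.ReadLine();   // keep the host alive
    }
}

class RequestProcessor : MarshalByRefObject
{
    public byte[] Handle(byte[] payload)
    {
        // decode the payload and build a response here
        return payload;
    }
}
```

A named pipes channel would be registered the same way using IpcChannel instead of TcpChannel.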
If I were you I would take a look at how Cassini is implemented. It does pretty much exactly what you are talking about doing.
Actually Cassini has been sort of superseded by Webhost, which is the built-in webserver that ships with Visual Studio now. Take a look at this post on Phil Haack's blog for more.
Very good question. If I were coming at this problem, I would probably use a BufferedStream/MemoryStream and marshal the stream into the AppDomain that consumes the object, to avoid marshaling or serializing many object graphs that were created in a different AppDomain.
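A rough sketch of what I mean, with illustrative names; the stream wrapper stays in the host AppDomain and the worker reads through a cross-domain proxy:

```csharp
using System;
using System.IO;

// Lives in the host AppDomain; the worker gets a proxy to it, so the
// payload is not serialized as a whole object graph up front.
public class PayloadStream : MarshalByRefObject
{
    private readonly MemoryStream _stream;

    public PayloadStream(byte[] payload)
    {
        _stream = new MemoryStream(payload, false);
    }

    public long Length { get { return _stream.Length; } }

    public int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }
}

public class Worker : MarshalByRefObject
{
    // `payload` arrives as a cross-domain proxy; each Read call
    // crosses the AppDomain boundary.
    public void Process(PayloadStream payload)
    {
        byte[] buffer = new byte[payload.Length];
        payload.Read(buffer, 0, buffer.Length);
        // ... decode the request and build the response here ...
    }
}

class Host
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("worker");
        Worker worker = (Worker)domain.CreateInstanceAndUnwrap(
            typeof(Worker).Assembly.FullName, typeof(Worker).FullName);
        worker.Process(new PayloadStream(new byte[] { 1, 2, 3 }));
    }
}
```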
But then again, it sounds like you are almost completely duplicating the functionality of IIS, so I would look (with Reflector) into the System.Web.Hosting namespace and see how they handle it, their WorkerThreadPool, etc.
"6. Maybe there is a way to marshal the actual socket connection?"

The 6th is IMO the best option.
From the process's perspective, a socket is just a handle. AppDomains reside in a single process, which means AppDomains can interchange socket handles.
If socket marshalling does not work, you can try recreating the socket in the other AppDomain. You can use DuplicateAndClose to do this.
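A hedged sketch of that idea; since all AppDomains share one process, the "target" process id passed to DuplicateAndClose can be our own, and the resulting SocketInformation struct is serializable, so it can be handed across the AppDomain boundary:

```csharp
using System.Diagnostics;
using System.Net.Sockets;

static class SocketHandoff
{
    // In the accepting AppDomain: closes the local socket and captures
    // the information needed to rebuild it elsewhere in this process.
    public static SocketInformation Capture(Socket clientSocket)
    {
        return clientSocket.DuplicateAndClose(Process.GetCurrentProcess().Id);
    }

    // In the worker AppDomain: reconstruct a working Socket.
    public static Socket Restore(SocketInformation info)
    {
        return new Socket(info);
    }
}
```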
If that does not work either, you should do some performance testing to choose the best data transfer method. (I would choose named pipes or memory-mapped files.)
I'm trying to write a client and server program in C#. The client sends requests to the server; the server handles each request in a thread and sends a response back to the client.
I have written the client and server, but the problem is that some threads use too much memory and block the other requests.
Is there any way to limit the memory usage of a thread, or of the application?
Thanks
There is no mechanism to restrict memory usage on dedicated threads. It's obvious that there are some architectural and/or coding bugs in your program.
You cannot define memory limits per thread; the memory is allocated from a shared pool. Instead, one option would be to make a queue, then have a fixed number of threads (1, 2, 3, 4, etc.).
This way, if requests are made, they'll be handled, but only 4 at a time (or however many you want). In this way you can keep memory usage in check, as in the sketch below.
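A minimal sketch of that fixed-worker-pool idea; the queue bound and worker count are arbitrary choices:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class RequestPool
{
    // Bounded queue: memory is capped by queue size, not request volume.
    private readonly BlockingCollection<Action> _queue =
        new BlockingCollection<Action>(100);

    public RequestPool(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Thread t = new Thread(delegate()
            {
                // Each worker handles one request at a time.
                foreach (Action work in _queue.GetConsumingEnumerable())
                    work();
            });
            t.IsBackground = true;
            t.Start();
        }
    }

    // Blocks when the queue is full, which throttles incoming requests
    // instead of letting them pile up in memory.
    public void Enqueue(Action work)
    {
        _queue.Add(work);
    }
}
```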
I am trying to send image data from a compiled C++ process to a compiled C# process. The C++ process is accessing the webcam and doing some processing on the image. The image is represented by a 2D array of pixels, with each pixel being an 8-bit value (0-255): the gray-scale value of that pixel.
The image size is 640 by 480.
The C# application does some more processing and displays this image on the screen. The processes both run at the same time on my laptop (Windows 7), but I cannot make a single process that does all the steps, which is why I need my C++ and C# code to communicate.
I was wondering what is the best way to do this? I read about writing a UDP or TCP server in the C# part and a client on the C++ part, I can then send over the image data as a datagram. I was wondering if this is the best way and if it is whether UDP or TCP would be better?
EDIT: The C++ process is unmanaged C++; I don't have the option to run it as a managed DLL. Could I use named pipes to send over the image?
Finally, is UDP guaranteed to deliver in order if it is communicating locally? I realise the image would be over the size limit for a UDP datagram, but if delivery is in order I should be able to split the images up to send them over.
Interprocess communication can be done via sockets or pipes.
With sockets (TCP and UDP) you're essentially sending the data over the network to yourself. Luckily, since your computer knows itself, the data shouldn't actually leave the machine, so this should be pretty quick. TCP is guaranteed to deliver in order and has a bunch of other nice features, while UDP pretty much slaps some headers onto the data and hopes for the best. For this application TCP should be fine; UDP adds unneeded complexity.
Pipes are the other way for two processes to communicate. You basically have the C++ or C# process create a pipe and start the other process. You then use the pipe like a file: write to it and read from it. This can be done in C/C++ using a combination of the pipe, fork, and exec functions, or simply using the popen function. .NET has equivalent facilities in the System.IO.Pipes namespace.
I suggest using a pipe via _popen (the Windows version of popen), writing a series of ints to the pipe and reading them on the other side. This is probably the easiest way... besides using one language, of course...
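For illustration, a hedged sketch of the C# end of that approach; "capture.exe" is a hypothetical name for the C++ program, which is assumed to write raw frames to its stdout:

```csharp
using System.Diagnostics;
using System.IO;

class PipeReader
{
    static void Main()
    {
        ProcessStartInfo psi = new ProcessStartInfo("capture.exe");
        psi.RedirectStandardOutput = true;   // child's stdout becomes our pipe
        psi.UseShellExecute = false;

        using (Process proc = Process.Start(psi))
        {
            Stream stdout = proc.StandardOutput.BaseStream;
            byte[] frame = new byte[640 * 480];
            int read = 0;
            while (read < frame.Length)      // a pipe read may return a partial frame
            {
                int n = stdout.Read(frame, read, frame.Length - read);
                if (n == 0) break;           // child exited
                read += n;
            }
            // frame now holds one grayscale image, row-major
        }
    }
}
```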
If you are writing both of the programs, you can compile the C++ one as a DLL and call a function that returns an array or some structure from your C# program, using the DllImport attribute in the System.Runtime.InteropServices namespace.
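A hedged sketch of that P/Invoke route; "imageproc.dll" and GetFrame are hypothetical, and the C++ side would export a matching extern "C" function that fills the caller-supplied buffer:

```csharp
using System.Runtime.InteropServices;

static class NativeCapture
{
    // Hypothetical native export; the C++ side must match this signature.
    [DllImport("imageproc.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int GetFrame(byte[] buffer, int length);
}

class Demo
{
    static void Main()
    {
        byte[] frame = new byte[640 * 480];
        NativeCapture.GetFrame(frame, frame.Length);   // fills the buffer in place
    }
}
```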
Why can't you do it in the same process? Is it because you need to mix C# and C++? In that case C++/CLI can be used as a bridge between the environments to have both C# code for the .NET CLR and C++ code compiled natively in one process.
If you really need two processes, there are several options when running on a local machine, but a small TCP-based service is probably best. Each 640 × 480 image will be 307,200 bytes (about 300 KB), which is well over the roughly 64 KB limit of a UDP datagram.
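A minimal sketch of the receiving (C#) side of such a TCP service; the port number is an arbitrary choice:

```csharp
using System.Net;
using System.Net.Sockets;

class ImageServer
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 9000);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        using (NetworkStream stream = client.GetStream())
        {
            byte[] frame = new byte[640 * 480];
            int read = 0;
            while (read < frame.Length)   // TCP is a byte stream; loop until full
            {
                int n = stream.Read(frame, read, frame.Length - read);
                if (n == 0) break;
                read += n;
            }
            // frame now contains one complete grayscale image
        }
    }
}
```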
"I was wondering if this is the best way and if it is whether UDP or TCP would be better?"
You usually resort to UDP as a speed optimization when TCP is not fast enough and packet loss is an inconvenience rather than something that can't be tolerated. If you can't afford to lose part of the image in transmission, I doubt you can resort to UDP.
Moreover, UDP is unlikely to give a performance boost in your case since you'll be using the loopback interface. This means that all TCP packets are likely to arrive in order and without loss, making TCP extra cheap.
If you write your application using TCP and in the future, for some reason, you decide the processes no longer run on the same machine, you won't have to change your code.
Finally, TCP sockets are just easier to use, so unless TCP is not fast enough on your machine, I would stick with TCP sockets.
"Is UDP guaranteed in order if it is communicating locally?"
AFAIK, this behavior is not guaranteed. It is very likely to work most of the time, but unless you can find a quote from relevant documentation, I wouldn't count on this.
"Could I use named pipes to send over the image?"
Yes, named pipes are very similar to sockets, but they're known to be slow.
One way of doing it, apart from sockets, would be to save the image data to disk from your C++ application and read it off the disk in your C# application. Of course you will need some sort of read/write synchronisation so that the file is not read before it is fully written.
Or, if you do decide on UDP or TCP after all, try RTP. RTP runs over UDP with an extra layer of timestamps and sequence numbering to ensure correct order of delivery. You should be able to find C++ and C# implementations of the protocol. In particular, you can send images over an RTP/MJPEG stream if your application produces JPEG images.
Just move to completely managed code :p (To keep it all in the same process)
https://net7mma.codeplex.com/SourceControl/latest has a C# RtspServer and RtpClient
I am attempting to send player information from my Game to my network client to then be sent off to the server.
Currently the ClientNetwork -> ClientGame relationship is held together with XML files. They read and write back and forth at very high speed. If you use just one XML file for this trade, one side will "hog" the file at times, producing a kind of lag where one cannot read because the other is viciously writing and rewriting.
To fix this I have two copies of each of my XML files. If one cannot be read, the reader falls back to the other. In theory they should trade off between the two, alternating from one to the other. It's not working up to par.
But my main problem is that the use of XML in general is very sloppy: dozens of try-catch statements to keep everything happy (and my personal favorite, try-catches within try-catches -- WE HAVE TO GO DEEPER).
I am just curious of if there is a better way to be doing this. I need a static point of variables that can be accessed by both client side programs. I'm afraid someone is going to say databases...
I'd like to state, for anyone who stumbles across this page while looking into this as well, that shared memory is awesome. Though I have to convert all strings to characters and then to bytes and read them one by one, on the whole it's a lot better than dealing with things that cannot read/write the same file at the same time. If you wish to further understand it rather than just use it, go to this link; it explains a lot of the messaging varieties and how to use them.
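For anyone wanting a starting point, here is a hedged sketch of the shared-memory idea using a memory-mapped file (System.IO.MemoryMappedFiles, .NET 4.0+); the map name and the length-prefixed payload format are arbitrary choices:

```csharp
using System.IO.MemoryMappedFiles;
using System.Text;

class SharedStateWriter
{
    static void Main()
    {
        using (MemoryMappedFile mmf =
                   MemoryMappedFile.CreateOrOpen("GameSharedState", 4096))
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            byte[] payload = Encoding.UTF8.GetBytes("player:42;x:10;y:20");
            accessor.Write(0, payload.Length);                  // length prefix at offset 0
            accessor.WriteArray(4, payload, 0, payload.Length); // data follows the prefix
        }
        // The reader opens the same map name, reads the int at offset 0,
        // then reads that many bytes starting at offset 4.
    }
}
```

A real implementation also needs synchronisation (for example a named Mutex) so the reader never sees a half-written payload.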
Yes there is!
The term you are looking for is interprocess communication - communication between two processes on the same machine.
There are various methods which allow two processes on the same machine to communicate with each other, including:
Named pipes
Shared memory
Sockets
HTTP
Fortunately C# applications can simply use the WCF framework to perform IPC (interprocess communication) using one of the above, and let the WCF framework take care of the difficult bits! Here are a couple of guides to get you started (there are many more):
WCF Tutorial - Basic Interprocess Communication
Many to One Local IPC using WCF and NetNamedPipeBinding
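To give a flavor of the WCF route, a minimal named-pipe service host; the contract and addresses are illustrative (requires a reference to System.ServiceModel.dll):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IPlayerInfo
{
    [OperationContract]
    void UpdatePosition(int playerId, float x, float y);
}

public class PlayerInfoService : IPlayerInfo
{
    public void UpdatePosition(int playerId, float x, float y)
    {
        // handle the update sent by the other process
    }
}

class Program
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(PlayerInfoService),
                   new Uri("net.pipe://localhost/game")))
        {
            host.AddServiceEndpoint(typeof(IPlayerInfo),
                new NetNamedPipeBinding(), "playerInfo");
            host.Open();
            Console.ReadLine();   // keep the host alive
        }
    }
}
```

The client side generates or shares the same contract and connects to net.pipe://localhost/game/playerInfo with a ChannelFactory.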
Also, one of the neat things about WCF is that you can also use it to communicate between different machines simply by changing the "Transport" (i.e. the communication method) to one which works over a network, (e.g. HTTP).
If you are targeting .NET 2.0 then you should look into either .NET Remoting or web services instead.
A simple TCP stream jumps out at me. Have the network client open a listening TCP socket, and have the game connect to the network client. You could continue to send the same XML data you're already writing, if you like.
I agree with the TCP/IP socket answer proposed by David. I would simply submit the data to a socket on the local PC and have the other application listen to it. You can transmit data easily and quickly using this method, and it will work no matter what version of the .NET Framework you are targeting.
I'm looking at options for optimizing the number of concurrent connections my socket servers can handle, and had an idea that hinges on being able to serialize C# sockets so that they can be removed from memory and then restored as needed. This scenario is only acceptable for me because sessions last for hours and the sockets are used very infrequently to send to clients, and never for receiving, during this time period. My current implementation is memory bound because I am holding each socket in memory for the lifetime of the corresponding client's session.

My thought is that if I were able to serialize the socket and write it to disk, or stick it in a distributed cache/database/file store, I could free up memory on the servers at the expense of some extra time to process each send (i.e. deserialize the socket and then send on it). I've tried a couple of options, but ran into roadblocks with each:
1. Serialize/deserialize the socket by reading and writing through a pointer to the object in memory. I can't seem to restore the socket after serialization.
2. Use the Socket.DuplicateAndClose() method to get the SocketInformation, then serialize that, and when needed restore the socket in the same process using the SocketInformation. I can't seem to use the socket once it is restored, and I'm not sure this would amount to significant memory savings anyway, as it seems to leave unmanaged resources in memory.
It seems to me that there should be a way to accomplish this. Ultimately, I'm looking for someone to either point me in the right direction or confirm that it is or isn't possible.
Any help is greatly appreciated!
This sounds like a good continuation of Alice's Adventures in Wonderland - wonderful nonsense. You can't serialize a socket, because that just doesn't make sense. The socket class (I mean not the .NET Socket class, but the kind of object called a socket) doesn't support a "serialize" operation, because sockets are (thinking in real-world objects) not data containers but gates to a communication channel. You can make a copy of a book, but it is very hard to make a paper copy of a door.
Now about memory. You can have about 64K sockets on your Windows system (I may be wrong about the exact number, but that's the approximate figure). Even at 100 bytes per socket, that occupies just over 6 MB of memory. In a modern server OS (Windows, Linux, you name it), 6 MB of user-mode memory is less than nothing. You will gain much more by reviewing the overall application architecture.
If I understand the question correctly, you are trying to serialize a Socket object, saving off its information (object contents), and then later trying to reconstitute that object with the saved info.
This won't work, because you can't simply save the contents of the Socket object and restore it later. Deep down, the socket uses an actual socket handle (an open file descriptor) from the operating system. Saving and restoring this data won't reconnect the actual device handle within the operating system.
A socket requires physically connecting it (opening it) at the operating system level. This is similar to a Stream object. You can't simply save off the contents of the object and restore it later; it requires an attachment to a file descriptor within the operating system.
Your sockets are not the problem. They use very little memory. It's more likely that how you treat inbound and outbound data is the problem.
Are you allocating new (byte) buffers for each operation?
Create a buffer pool instead. Let the pool create a new buffer if it's empty. Don't forget to return a buffer when you are done with it.
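A minimal buffer-pool sketch along those lines:

```csharp
using System.Collections.Concurrent;

public class BufferPool
{
    private readonly ConcurrentStack<byte[]> _pool = new ConcurrentStack<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    // Reuse a buffer if one is available; otherwise create a new one.
    public byte[] Rent()
    {
        byte[] buffer;
        return _pool.TryPop(out buffer) ? buffer : new byte[_bufferSize];
    }

    // Return the buffer when the socket operation completes.
    public void Return(byte[] buffer)
    {
        _pool.Push(buffer);
    }
}
```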
What are you doing once you've got the data in a buffer?
Are you building strings? If you have lots of incoming data or large strings, you might want to switch to a StringBuilder. Also, string.Format("{0}kdkd{1}jdjd{2}", var1, var2, var3) allocates less memory than var1 + "kdkd" + var2 + "jdjd" + var3.
How are you wrapping the socket?
Do you have a fat class with lots of stuff in it? Then it's your fat class that is the problem.
I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to an existing solution.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozen, but it should scale up to 1000+) that connect to the server (which is running on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea dates back to when I was still programming in VB6: create a server socket to listen for incoming requests, then create a client socket for each possible client (single-threaded). I doubt this scales well, and it also makes the communication difficult to implement.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data by the clients. Since there are going to be many clients, I don't want to create a thread for each client. Instead, I'd prefer to use a single thread to read all incoming data in a loop, then either processing these data directly or creating work items for a different thread to process. I guess this approach would scale well enough. Any comments on this idea are most welcome.
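For concreteness, here is a rough sketch of the split I have in mind (simplified to a single read per client; a real server would keep issuing reads and frame the messages):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Server
{
    static readonly BlockingCollection<byte[]> WorkItems =
        new BlockingCollection<byte[]>();

    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();

        // single worker thread drains the queue and processes messages
        new Thread(delegate()
        {
            foreach (byte[] msg in WorkItems.GetConsumingEnumerable())
                Process(msg);
        }) { IsBackground = true }.Start();

        // accept loop: async reads avoid one dedicated thread per client
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            NetworkStream stream = client.GetStream();
            byte[] buffer = new byte[1024];
            stream.BeginRead(buffer, 0, buffer.Length, ar =>
            {
                int n = stream.EndRead(ar);
                if (n > 0)
                {
                    byte[] msg = new byte[n];
                    Array.Copy(buffer, msg, n);
                    WorkItems.Add(msg);   // hand off to the worker thread
                }
            }, null);
        }
    }

    static void Process(byte[] message)
    {
        // quick per-request processing goes here
    }
}
```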
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxy-ish way seems reasonable. I worked a bit with Java RMI before, and from my understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not on a LAN but over the internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as is (with netTcpBinding, but don't forget to switch security off) and create a tracer bullet - measure whether performance meets your requirements.
If not, you can try to tune WCF - WCF is very extensible, and you can modify message serialization to send ASCII messages as you want.
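A minimal sketch of that netTcpBinding-with-security-off setup for the tracer bullet; the contract and address are illustrative:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
interface IEcho
{
    [OperationContract]
    byte[] Echo(byte[] payload);
}

class EchoService : IEcho
{
    public byte[] Echo(byte[] payload) { return payload; }
}

class TracerBulletHost
{
    static void Main()
    {
        // security off, as suggested above, to measure raw throughput
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
        using (ServiceHost host = new ServiceHost(typeof(EchoService),
                   new Uri("net.tcp://localhost:9000")))
        {
            host.AddServiceEndpoint(typeof(IEcho), binding, "echo");
            host.Open();
            Console.ReadLine();   // measure round trips from a test client
        }
    }
}
```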
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol where a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.