I am currently using .NET Core and would like to be able to drop some TCP and/or UDP connections that are open on the host machine.
I have already built all the logic for checking the open connections, and now I just need to find some resource that will help me drop a connection on the machine.
On Windows I can use a third-party program, passing some parameters during initialization. However, I need it to work on a Linux machine.
Windows Solution = http://www.nirsoft.net/utils/cports.html
Network connections are a very common example of unmanaged resources that need to be disposed of. A big reason we dispose of SQL connections is the underlying network connection.
When dealing with disposable classes, my advice is: "Create. Use. Dispose. All in the same piece of code, ideally with a using statement."
Note that with networking you usually have to apply some form of multitasking. Async/await is the more modern approach, but a separate thread can also be used. That prevents the UI thread from being blocked, while still making sure the using block is not split up logically, and thus stays reliable.
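For example, a minimal sketch of that pattern with a TcpClient (the host and port are placeholders):

```csharp
using System;
using System.Net.Sockets;

class Demo
{
    static void Main()
    {
        // Create. Use. Dispose. All in one place; host/port are placeholders.
        using (var client = new TcpClient("example.com", 80))
        using (NetworkStream stream = client.GetStream())
        {
            // ... read/write on the stream here ...
        } // Dispose runs here and closes the underlying network connection.
    }
}
```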
I'm working on an application just now which uses a bunch of external DLLs to make a connection to a server somewhere. Oddly, the exposed methods for these DLLs allow a connection but NOT a disconnection or close. These libraries work fine unless you make a lot of subsequent calls to the server in one chunk, so what I decided to do was disconnect and reconnect after X amount of calls.
However, herein lies the issue. I cannot disconnect because no disconnect method is given. So my question is: how can I totally kill this unmanaged object so I can recreate it again?
If you're using unmanaged resources in C#, the classes that use and interact with them should implement IDisposable, and you should create and destroy them with using blocks.
If you can't disconnect, then depending on exactly what you're interfacing with, setting the variable containing your unmanaged resource to null will sometimes clear some of it up. Really, though, there's not a great deal you can do without proper disconnect/dispose methods.
You could manually close the underlying connection to the server. I can't help you any more with how to do that without knowing more about the service you're consuming (HTTP, TCP, etc.). You could put up a trace (like Wireshark) and see what's being transferred.
The bottom line, though, is that their software is broken. Can you not contact the vendor?
The best solution I could find for this was to run each call to the external DLL in its own thread, so the object was eventually killed when the thread ended. This was the only resolution that worked, given that I had no access to updated DLLs.
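A minimal sketch of that workaround, with the vendor API stood in by a delegate (the real calls can't be shown here):

```csharp
using System;
using System.Threading;

class IsolatedCall
{
    // Run one call into the external DLL on its own short-lived thread.
    // 'vendorCall' wraps whatever connect/work calls the vendor library exposes.
    public static void Run(Action vendorCall)
    {
        var worker = new Thread(() => vendorCall());
        worker.IsBackground = true;
        worker.Start();
        worker.Join(); // block until the call completes; the thread then dies
    }
}
```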
We are currently investigating the most efficient way of communicating between 120-140 embedded hardware devices running on the .NET Micro Framework and a server.
Each embedded device needs to send information to, and request information from, the server on a fairly regular basis, all in real time over TCP.
My question is this: would it be better to initialise 140 TCP connections to the server and then hang on to them, or to initialise a new connection for each request to and from the devices? Would holding on to and managing 140 TCP connections put a lot of strain on the server?
When the server detects new data in the database it needs to send this new info to 1..* devices (information is targeted at specific devices). If I held on to the 140 connections, I would need to look up the correct connection each time I needed to send information, instead of just sending to an IP:PORT associated with the new data.
I guess another possibly stupid question: is it actually possible to hang on to 140 TCP connections on a single port?
Any suggestions/comments are appreciated!
In general you are better off maintaining the connections for as long as possible. If you have each device opening a connection each time it sends a message, you can end up effectively DoS'ing the server, as it ends up with lots of sockets in the TIME_WAIT state taking up space in its tables.
I worked on a system where there were a bunch of clients talking to a server and while they could be turned on and off regularly, it was still better to maintain the connection (and re-establish it when it had dropped and a new message needed to be sent). You may end up needing to write slightly more complex code, but I've found it to be well worth the effort for the reduced load on the server.
Modern operating systems may have bigger buffers than the ones I actually encountered the DoS effect on, but it's fundamentally not the best idea to be using lots of connections like that.
Things can get relatively complicated on the client side, especially when the device tends to go to sleep transparently to the application, because that means connections will time out while the app thinks they are still open. When we did this we ended up with relatively complex network code, because we needed to deal with the fact that the sockets could (and would) fail as a matter of course, and we simply needed to set up a new connection and re-attempt sending the message. You just tuck this code away into your libraries and forget about it once it's done, though.
In actual fact, in practice our initial application had even more complex code, because it was dealing with a network library that was semi-aware of the stop-start nature of the devices and tried to resend failed messages, sometimes meaning that the same message got sent twice. We ended up doing an extra layer of communication on top in order to ensure duplicates got rejected. If you're using C# or regular BSD-style sockets you shouldn't have that problem, though, I'm guessing. This was a proprietary library that managed the reconnects but caused headaches with the resends and its inappropriate default timeouts.
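A minimal sketch of the reconnect-and-retry send described above (the host, port, and single-retry policy are assumptions):

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class ResilientSender
{
    private TcpClient client;
    private readonly string serverHost = "server.local"; // placeholder
    private readonly int serverPort = 9000;              // placeholder

    public void Send(byte[] message)
    {
        for (int attempt = 0; attempt < 2; attempt++)
        {
            try
            {
                if (client == null || !client.Connected)
                {
                    client = new TcpClient();
                    client.Connect(serverHost, serverPort);
                }
                client.GetStream().Write(message, 0, message.Length);
                return; // sent; keep the connection alive for next time
            }
            catch (SocketException) { client = null; } // connection died; retry once
            catch (IOException)     { client = null; }
        }
        throw new InvalidOperationException("Send failed even after reconnecting.");
    }
}
```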
You can usually connect many more than 140 "clients" to a server (that is, with a decent network / HW / RAM)...
I always recommend testing this sort of thing with real scenarios (load etc.) before deciding, since there are aspects like the network (performance, stability...), HW (server RAM etc.) and SW (what exactly does the server do?) that can only be checked by you.
Depending on the protocol you could/should even put some timeout/reconnect mechanism in there.
The lookup you mention would be really fast: just use a ConcurrentDictionary to hold the needed information with IP:PORT as the key (assuming the server runs on the full .NET 4); see the sketch after the links below.
For some references see:
http://msdn.microsoft.com/en-us/library/dd287191.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/17/c.net-little-wonders-the-concurrentdictionary.aspx
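A rough sketch of such a registry (the "ip:port" key format and the method names are assumptions):

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;

class ConnectionRegistry
{
    private readonly ConcurrentDictionary<string, TcpClient> connections =
        new ConcurrentDictionary<string, TcpClient>();

    // Called when a device connects.
    public void Register(TcpClient device)
    {
        string key = device.Client.RemoteEndPoint.ToString(); // e.g. "10.0.0.5:51234"
        connections[key] = device;
    }

    // Called when the server detects new data targeted at a device.
    public bool TrySend(string key, byte[] payload)
    {
        TcpClient device;
        if (!connections.TryGetValue(key, out device))
            return false; // device not connected right now
        device.GetStream().Write(payload, 0, payload.Length);
        return true;
    }
}
```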
EDIT - as per comments:
Holding on to a TCP/IP connection doesn't take much processing client-side... it costs a bit of memory. I would recommend doing a small test (1-2 clients) to check this assumption for your specific case.
If you are talking about a system with hardware devices, then I suggest closing the connection every time the client finishes sending data.
To make sure the client gets updates from the server, the client can wait for a five-second period for any data to arrive from the server. If data is received within this timeframe, close the connection and process the data. If not, close the connection and wait again after sending the next set of data.
This way, scaling becomes much easier. Keeping the connections open always leads to strain on resources and, in my opinion, is not necessary unless it is some life-saving device like a heart rate monitor, oxygen supply monitor, etc.
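A desktop-flavored sketch of one such cycle (the host, port, and buffer size are assumptions; on the .NET Micro Framework you would use raw Sockets, but the shape is the same):

```csharp
using System.IO;
using System.Net.Sockets;

class Cycle
{
    public static void Run(byte[] data)
    {
        using (var client = new TcpClient("server.local", 9000)) // placeholders
        {
            NetworkStream stream = client.GetStream();
            stream.Write(data, 0, data.Length);

            stream.ReadTimeout = 5000; // wait at most 5 seconds for a reply
            var buffer = new byte[1024];
            try
            {
                int read = stream.Read(buffer, 0, buffer.Length);
                // ... process 'read' bytes of server data ...
            }
            catch (IOException)
            {
                // No reply in time; just close and retry on the next cycle.
            }
        } // connection closed here, freeing server resources
    }
}
```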
I am attempting to send player information from my Game to my network client to then be sent off to the server.
Currently the ClientNetwork -> ClientGame relationship is handled with XML files. They read/write back and forth at very high speeds. If you use just one XML file for this trade, one side will "hog" the file at times, creating a kind of lag when one cannot read because the other is viciously writing and rewriting.
To fix this I have two of each of my XML files. If a program cannot read one, it will read the other. In theory they should be using both of them, trading off from one to the other, but it's not working up to par.
But my main problem is that the usage of XML in general is very sloppy: dozens of try-catch statements to keep everything happy (and my personal favorite, try-catches within try-catches -- WE HAVE TO GO DEEPER).
I am just curious whether there is a better way to be doing this. I need a static set of variables that can be accessed by both client-side programs. I'm afraid someone is going to say databases...
I'd like to state, for anyone who is looking into this as well and stumbled across this page, that shared memory is awesome. Though I have to convert all strings to characters and then to bytes and read them one by one, on the whole it's a lot better than dealing with things that cannot read/write the same file at the same time. If you wish to understand it further rather than just use it, go to this link; it explains a lot of the messaging varieties and how to use them.
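For reference, a small sketch of that shared-memory approach using .NET 4's MemoryMappedFile (the map name, size, and length-prefix layout are assumptions):

```csharp
using System.IO.MemoryMappedFiles;
using System.Text;

class SharedState
{
    // Keep the map handle alive for the life of the process.
    static readonly MemoryMappedFile Map =
        MemoryMappedFile.CreateOrOpen("gameState", 4096); // name/size are placeholders

    public static void Write(string message)
    {
        using (var accessor = Map.CreateViewAccessor())
        {
            byte[] bytes = Encoding.UTF8.GetBytes(message);
            accessor.Write(0, bytes.Length);                // 4-byte length prefix
            accessor.WriteArray(4, bytes, 0, bytes.Length); // then the payload
        }
    }

    public static string Read()
    {
        using (var accessor = Map.CreateViewAccessor())
        {
            int length = accessor.ReadInt32(0);
            var bytes = new byte[length];
            accessor.ReadArray(4, bytes, 0, length);
            return Encoding.UTF8.GetString(bytes);
        }
    }
}
```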
Yes there is!
The term you are looking for is interprocess communication - communication between two processes on the same machine.
There are various methods which allow two processes on the same machine to communicate with each other, including:
Named pipes
Shared memory
Sockets
HTTP
Fortunately C# applications can simply use the WCF framework to perform IPC (interprocess communication) using one of the above, and let the WCF framework take care of the difficult bits! Here are a couple of guides to get you started (there are many more):
WCF Tutorial - Basic Interprocess Communication
Many to One Local IPC using WCF and NetNamedPipeBinding
Also, one of the neat things about WCF is that you can use it to communicate between different machines simply by changing the "transport" (i.e. the communication method) to one which works over a network (e.g. HTTP).
If you are targeting .NET 2.0, then you should look into either .NET Remoting or web services instead.
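To make the WCF route concrete, here is a minimal self-hosted sketch over a named pipe (the contract, address, and message shape are all invented for illustration):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGameChannel
{
    [OperationContract]
    void UpdatePlayer(string xml);
}

public class GameChannel : IGameChannel
{
    public void UpdatePlayer(string xml)
    {
        Console.WriteLine("Received: " + xml);
    }
}

class Program
{
    static void Main()
    {
        // Host the service over a named pipe (names are placeholders).
        using (var host = new ServiceHost(typeof(GameChannel),
                   new Uri("net.pipe://localhost/game")))
        {
            host.AddServiceEndpoint(typeof(IGameChannel),
                new NetNamedPipeBinding(), "");
            host.Open();

            // Client side (normally in the other process):
            var factory = new ChannelFactory<IGameChannel>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/game"));
            IGameChannel proxy = factory.CreateChannel();
            proxy.UpdatePlayer("<player x=\"10\" y=\"20\" />");
            ((IClientChannel)proxy).Close();
        }
    }
}
```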
A simple TCP stream jumps out at me. Have the network client open a listening TCP socket, and have the game connect to the network client. You could continue to send the same XML data you're already writing, if you like.
I agree with the TCP/IP socket answer proposed by David. I would simply submit the data to a socket on the local PC and have the other application listen to the socket. You can transmit data easily and quickly using this method, and it will work no matter what version of the .NET Framework you are targeting.
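A minimal sketch of that local TCP link (the port number and one-message-per-line framing are assumptions):

```csharp
using System.IO;
using System.Net;
using System.Net.Sockets;

class LocalLink
{
    // Network client side: accept one connection from the game.
    public static void Listen()
    {
        var listener = new TcpListener(IPAddress.Loopback, 9050); // placeholder port
        listener.Start();
        using (TcpClient game = listener.AcceptTcpClient())
        using (var reader = new StreamReader(game.GetStream()))
        {
            string xml = reader.ReadLine(); // the same XML payload, one message per line
            // ... forward to the server ...
        }
    }

    // Game side: connect and push an update.
    public static void Send()
    {
        using (var client = new TcpClient("127.0.0.1", 9050))
        using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
        {
            writer.WriteLine("<player x=\"10\" y=\"20\" />");
        }
    }
}
```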
I am developing an open-source socket server library: https://sourceforge.net/projects/socketservers/
I would like to add a socket-reuse feature to this lib. I have implemented a draft of this feature, but I do not see any benefit in my tests. The client makes 32K connect-disconnects to the server, 8 at a time, and measures the time. But there is no difference between reusing the socket and not reusing it: the same time elapses for this test.
What am I doing wrong in the test?
What benefit should the server get from reusing sockets, and how do I measure this benefit?
I can explain what happens from an unmanaged point of view and how DisconnectEx() is used; perhaps someone can then map this to the managed scenario.
In unmanaged code you would use DisconnectEx() to reuse a socket for a subsequent AcceptEx() or ConnectEx() call, more likely the former. So you'd initially create x sockets and post your overlapped async accept operations using AcceptEx(). When clients connect to these pending accepts you would do your server stuff, then at the end call DisconnectEx() on the socket and post a new AcceptEx() using that socket. This avoids the need to create a new socket at this point and is thus more efficient for the server. The performance difference is probably pretty small, but worth having on heavily loaded servers that are accepting lots of short-lived connections.
So I suggest you post some code showing how you're reusing your socket after calling Disconnect(true) on it...
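For reference, the managed shape of that pattern looks roughly like this (a sketch; error handling and the accept-completion logic are omitted):

```csharp
using System.Net.Sockets;

class SocketRecycler
{
    // After serving a client, recycle its socket for the next accept.
    public static void Recycle(Socket listener, Socket clientSocket)
    {
        clientSocket.Shutdown(SocketShutdown.Both);
        clientSocket.Disconnect(true); // managed wrapper over DisconnectEx/TF_REUSE_SOCKET

        var args = new SocketAsyncEventArgs();
        args.AcceptSocket = clientSocket; // reuse this socket instead of allocating a new one
        args.Completed += (sender, e) => { /* handle the newly accepted connection */ };
        if (!listener.AcceptAsync(args))
        {
            // Completed synchronously; handle 'args' here as well.
        }
    }
}
```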
The question is whether the OS or the runtime already performs the reuse automatically when you create a new socket. The Socket.Disconnect method documentation points in this direction:
Closes the socket connection and allows reuse of the socket.
So this seems to be an over-optimization.
In case you mean something like SO_REUSEADDR or SO_REUSEPORT:
Socket reuse is especially important if, e.g., your server crashes but there are still connections lingering.
If you restart your server, you'd normally have to wait until the operating system has gracefully closed those connections before you can rebind your socket to that port.
This could mean that some processes which heavily rely on your server come to a halt until it has been restarted.
With the socket reuse feature, you circumvent this problem.
There might be other uses for this, but I can only think of this one right now.
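In C# that option is set before binding, roughly like this (the port is a placeholder):

```csharp
using System.Net;
using System.Net.Sockets;

class ReuseDemo
{
    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork,
                                  SocketType.Stream, ProtocolType.Tcp);
        // Allow rebinding the port immediately after a crash/restart.
        listener.SetSocketOption(SocketOptionLevel.Socket,
                                 SocketOptionName.ReuseAddress, true);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // placeholder port
        listener.Listen(100);
    }
}
```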
Hope that helped.
I'm working with an application, and I am able to write C# scripts that run in this environment. I can import DLLs of any kind into this environment. My problem is that I'd like to enable communication between these scripts. As the environment is controlled and I have no access to the source code of the application, I'm at a loss as to how to do this.
Things I've tried:
File I/O: just writing the messages that I would like each script to read into .txt files and having the other read them. The problem is that I need these scripts to run quite quickly, and that took up too much time.
nServiceBus: I tried this, but I just couldn't get it to work in the environment that I'm dealing with. I'm not saying it can't be done, just that I can't get it done.
Does anyone know of a simple way to do this, that is also pretty fast?
Your method of interprocess communication should depend on how important it is that each message get processed.
For instance, if process A tells process B to, say, send an email to your IT staff saying that a server is down, it's pretty important.
If, however, you're streaming audio, individual messages (packets) aren't critical to the performance of the app and can be dropped.
If the former, you should consider using persistent storage such as a database to store messages, and let each process poll the database to retrieve its own messages. In this way, if a process is terminated or loses communication with the other processes temporarily, it will be able to retrieve whatever messages it has missed when it starts up again.
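A rough sketch of that polling loop (the connection string, table, and column names are all invented):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class MessagePoller
{
    // Poll a message table for rows addressed to this process.
    public static void Poll(string recipient)
    {
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Ipc;Integrated Security=True"))
        {
            conn.Open();
            while (true)
            {
                using (var cmd = new SqlCommand(
                    "SELECT Body FROM Messages WHERE Recipient = @r AND Handled = 0",
                    conn))
                {
                    cmd.Parameters.AddWithValue("@r", recipient);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("Message: " + reader.GetString(0));
                        // ... then mark the rows as handled ...
                    }
                }
                Thread.Sleep(1000); // poll once per second
            }
        }
    }
}
```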
The answer is simple:
Since you can import any DLL into the script, you can create a custom DLL that implements communication between the processes in any way you desire: shared memory, named pipes, TCP/UDP.
You could use a form of interprocess communication, even within the same process. Treat your scripts as separate processes and communicate that way.
Named pipes could be a good option in this situation. They are very fast, and fairly easy to use in .NET 3.5.
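A minimal sketch with System.IO.Pipes (available since .NET 3.5; the pipe name is arbitrary):

```csharp
using System;
using System.IO;
using System.IO.Pipes;

class PipeDemo
{
    // Script A: create the pipe and read one message.
    public static void Receive()
    {
        using (var server = new NamedPipeServerStream("scriptChannel")) // placeholder name
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
                Console.WriteLine(reader.ReadLine());
        }
    }

    // Script B: connect to the pipe and send one message.
    public static void Send()
    {
        using (var client = new NamedPipeClientStream(".", "scriptChannel"))
        {
            client.Connect();
            using (var writer = new StreamWriter(client) { AutoFlush = true })
                writer.WriteLine("hello from the other script");
        }
    }
}
```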
Alternatively, if the scripts are loaded into a single AppDomain, you could use a static class or singleton as a communication service. However, if the scripts get loaded in isolation, this may not be possible.
Well, not knowing the details of your environment, there is not much I can really offer. You are using the term "C# scripts"... I am not exactly sure what that means, as C# is generally a compiled language.
If you are using normal C#, have you looked into WCF with named pipes? If your assemblies are running on the same physical machine, you should be able to easily and quickly create some WCF services hosted with the named pipe binding. Named pipes provide a simple, efficient, and quick message transfer mechanism in a local context. WCF itself is pretty easy to use and is a native component of the .NET Framework.
Since you already have the file I/O in place, you might get enough speed by placing the files on a RAM disk. If you are polling for changes today, a FileSystemWatcher could help make your communication more responsive.
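For example, a small sketch that reacts to writes instead of polling (the RAM-disk path is a placeholder):

```csharp
using System;
using System.IO;

class WatchDemo
{
    static void Main()
    {
        // Fire an event whenever a message file changes; path is a placeholder.
        var watcher = new FileSystemWatcher(@"R:\ipc", "*.xml");
        watcher.Changed += (sender, e) => Console.WriteLine("Re-read " + e.FullPath);
        watcher.EnableRaisingEvents = true;
        Console.ReadLine(); // keep the process alive while watching
    }
}
```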
You can use PipeStream, which is faster than disk I/O because the data goes through main memory.
XMPP/Jabber is another approach; take a look at jabber.net.
Another easy way is to open a TCP socket on a predefined port, connect to it from the other process, and communicate that way.