How to authenticate a calling assembly across a TCP/IP boundary - C#

I have a server engine which generally runs on the server (imaginatively). But occasionally it will be executing on a client, or on another server, and will want to use this server to do some extra processing.
The link between them will be protected by SSL/TLS and certificates, so the communication will be secure, but I'm not sure that the calling engine is my code.
How would you go about authenticating that engine? What would Alice and Bob say on this subject?
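For reference, the link-level protection described above (mutual TLS with client certificates over SslStream) might look roughly like the sketch below; the port, certificate file, and pinned thumbprint are placeholders. Note that this only proves who holds a certificate's private key, not that the caller is running unmodified engine code, which is exactly the gap the answers discuss.

```csharp
using System;
using System.Net;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;

class SecureEndpoint
{
    static void Main()
    {
        // Placeholder certificate and port.
        var serverCert = new X509Certificate2("server.pfx", "password");
        var listener = new TcpListener(IPAddress.Any, 8443);
        listener.Start();

        using (TcpClient client = listener.AcceptTcpClient())
        using (var ssl = new SslStream(client.GetStream(), false, ValidateClient))
        {
            // Require the remote engine to present a client certificate.
            ssl.AuthenticateAsServer(serverCert,
                clientCertificateRequired: true,
                enabledSslProtocols: SslProtocols.Tls12,
                checkCertificateRevocation: true);
            // ... exchange data with the authenticated peer here ...
        }
    }

    static bool ValidateClient(object sender, X509Certificate cert,
                               X509Chain chain, SslPolicyErrors errors)
    {
        // Accept only a specific, known client certificate (thumbprint is a placeholder).
        return errors == SslPolicyErrors.None &&
               cert.GetCertHashString() == "0123456789ABCDEF0123456789ABCDEF01234567";
    }
}
```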

You won't find a 100% way to do this.
Since:
the code is running on the client,
the code is therefore available for disassembly,
and so whatever security measures you have placed into the client-side code, an attacker can circumvent them.
Basically, since Karl can phone up Alice and pretend to be Bob, by imitating Bob's voice and knowing everything Bob knows (the disassembled code), there is no way Alice can tell whether it really is Bob or just a very good impostor.
If you design your software so that it can only run on specific types of hardware, with TPM or similar technology, then you might have a chance, but only through software, you cannot create a 100% solution.
Even with a TPM-enabled solution, you still risk an impostor circumventing it by sitting in between.
It all depends on what kind of attacks you want to prevent.

There is no general solution for this problem. You can't know for sure that the remote machine will not be suborned by an attacker, and thus you cannot be absolutely sure that remote code in communication with you is the code you originally intended to be communicating with you.
People use TPMs and similar mechanisms to try to do remote attestation of the trustworthiness of the remote hardware, but trying to do such things purely in software is hopeless, as you cannot know what is running on the remote side. In the general case, given long enough, an attacker can also suborn a TPM.

Related

How would I make a program to monitor another system's vitals with a remote connection?

I'm looking to write a custom program in VB.NET / C++ / C# that would allow me to monitor a system's vitals over a Remote Desktop Connection.
I'm only looking for tips on how to implement a connection like this in code (e.g. is it just a simple object or a call to a WScript function, or is it much more sophisticated?). As for the specifics of operation after making the connection, I have that figured out based on another program which shares some similar features.
I would definitely look on Google and teach myself this, but I don't even know where to begin or what to search for. Some advice on this would be amazing, thanks!
EDIT: This doesn't have to go through an RDP connection, I'm definitely looking for better ways. Reason I mention RDP is because I currently do this manually over an RDP connection, but I don't wanna have to constantly open the window.
I don't think the RDP protocol is the right solution for this. Other mechanisms were invented for this, such as WMI.
WMI is a scriptable interface that allows you to query the local or remote computer's information. You can use your tool of choice: C#, VBScript, or my personal preference, PowerShell. As an example, here is how to get all of the processes on a remote machine.
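(A rough C# sketch of that query using the System.Management wrapper over WMI; the remote machine name REMOTE-PC is a placeholder.)

```csharp
using System;
using System.Management; // add a reference to System.Management.dll

class RemoteProcessList
{
    static void Main()
    {
        // Connect to the WMI namespace on the remote machine (placeholder name).
        var scope = new ManagementScope(@"\\REMOTE-PC\root\cimv2");
        scope.Connect();

        var query = new ObjectQuery("SELECT Name, ProcessId FROM Win32_Process");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject process in searcher.Get())
            {
                Console.WriteLine("{0} (PID {1})", process["Name"], process["ProcessId"]);
            }
        }
    }
}
```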
EDIT:
This doesn't have to go through an RDP connection, I'm definitely looking for better ways. Reason I mention RDP is because I currently do this manually over an RDP connection, but I don't wanna have to constantly open the window.
Then ignore everything below the line. Really.
The RDP protocol is very specialized for sharing specific resources. Namely the screen, disks, clipboard, printer, ports, and sound. That's it for what's out-of-the-box. The best thing you could possibly do is occasionally monitor the contents of a file with the RDP protocol - and it is cumbersome and slow.
I'd encourage you to look at alternative solutions like WMI instead.
That said, it is possible to do this with RDP's support for Virtual Channels. You could create a scriptable virtual channel to accomplish this (which is no easy feat). You would have to write a client and server. Your server side functionality would report the information you are interested in monitoring, and the client side would receive it. Again, I would stress that this is not the correct solution.
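For a sense of what the server-side half involves, here is a bare sketch using the wtsapi32 virtual channel functions; the channel name "MONVC" and the payload are made up, and the matching client-side plugin that receives the data is not shown.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class VirtualChannelServer
{
    const int WTS_CURRENT_SESSION = -1;

    [DllImport("wtsapi32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern IntPtr WTSVirtualChannelOpen(IntPtr hServer, int sessionId, string pVirtualName);

    [DllImport("wtsapi32.dll", SetLastError = true)]
    static extern bool WTSVirtualChannelWrite(IntPtr hChannel, byte[] buffer, uint length, out uint bytesWritten);

    [DllImport("wtsapi32.dll", SetLastError = true)]
    static extern bool WTSVirtualChannelClose(IntPtr hChannel);

    static void Main()
    {
        // Open a named channel in the current RDP session (fails outside an RDP session).
        IntPtr channel = WTSVirtualChannelOpen(IntPtr.Zero, WTS_CURRENT_SESSION, "MONVC");
        if (channel == IntPtr.Zero)
            throw new InvalidOperationException("Could not open the virtual channel.");

        byte[] payload = Encoding.UTF8.GetBytes("cpu=12% mem=48%");
        uint written;
        WTSVirtualChannelWrite(channel, payload, (uint)payload.Length, out written);
        WTSVirtualChannelClose(channel);
    }
}
```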

Does HttpListener work well on Mono?

I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono.
I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection. Shared memory between threads instead of a database.
I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.)
Could anyone point me towards some resources discussing this module please?
Many thanks, Bill, billpg.com
(A little background to my question for the interested.)
Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal.
So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread.
The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
Greetings,
The HttpListener class in Mono works without much of a problem. I think the most significant difference between its usage in an MS environment and a Linux environment is that port 80 cannot be bound without root/su/sudo privileges. Other ports do not have this restriction. For instance, if you specify the prefix http://localhost:1234/, the HttpListener works as expected. However, if you add the prefix http://localhost/, which you would expect to listen on port 80, it fails silently. If you explicitly attempt to bind to port 80 (http://localhost:80/), an exception is thrown. If you invoke your application as a super user or root, you can explicitly bind to port 80 (http://localhost:80/).
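A minimal sketch of the working case on an unprivileged port (the port number and response text are arbitrary):

```csharp
using System;
using System.Net;
using System.Text;

class MonoHttpListenerDemo
{
    static void Main()
    {
        var listener = new HttpListener();
        // A non-privileged port works without root; "http://localhost/" (port 80) would not.
        listener.Prefixes.Add("http://localhost:1234/");
        listener.Start();
        Console.WriteLine("Listening on http://localhost:1234/ ...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
            byte[] body = Encoding.UTF8.GetBytes("Hello from HttpListener on Mono");
            context.Response.ContentType = "text/plain";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.OutputStream.Close();
        }
    }
}
```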
I have not yet explored the rest of the HttpListener members in enough detail to make any useful comments about how well it operates in a linux environment. However, if there is interest, I will continue to post my observations.
chickenSandwich
I am not sure why you want to look so deep under the hood. Even on the Microsoft side, the documentation about http.sys may not give you much valuable information if you are using the .NET Framework.
To know whether something works well enough on Mono, you are always best off downloading its VMware or VPC image and testing your applications on it.
http://www.go-mono.com/mono-downloads/download.html
Though Mono is much more mature than it was a few years ago, we cannot say it has been tested by as many real-world applications as Microsoft .NET. So please test your applications and submit any issues you find to the Mono team.
Based on my experience, minor issues are fixed within only a few days, while major issues take longer. But with Mono's source code available, you can fix them on your own or find good workarounds most of the time.

Code Access Security - Basics and Example

I was going through this link to understand CodeAccessSecurity:
http://www.codeproject.com/KB/security/UB_CAS_NET.aspx
It's a great article but it left me with following questions:
If you can demand and get whatever permissions you want, then any executable could get FullTrust on the machine. And if the permissions are already there, why do we need to demand them at all?
The code is executing on the server, so are the permissions evaluated on the server rather than on the client machine?
The article uses the example of removing write permissions from an assembly to show a security exception. In the real world, though, the System.IO assembly (or the related classes) will take care of these permissions. So is there a real scenario where we will need CAS?
The idea of "least privilege" is a very important principle of security. A hacker is going to make your application do something it wasn't intended to do, and whatever rights the application has at the time of the attack, the attacker will have those same rights. You can't stop every attack against your application, so you need to lower the impact of a possible attack as much as you can. This isn't bulletproof, but it significantly raises the bar. An attacker may still be able to chain a privilege escalation attack into his exploit.
In most situations you can't control the actions of the client. In general you should assume that the attacker can control the client using a debugger or a using modified or rewritten client. This is especially true for web applications. You want to protect the server as much as possible, and adjusting permissions is a common way of doing that.
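To make the demand idea from the first question concrete: a demand is a runtime check that every caller on the stack already has the permission, so demanding does not grant anything that policy has not granted. A minimal sketch against the .NET Framework CAS classes discussed in the article (the file path is a placeholder):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class CasDemandDemo
{
    static void Main()
    {
        // Succeeds only if every assembly in the call stack has write access to this path.
        var permission = new FileIOPermission(FileIOPermissionAccess.Write, @"C:\temp\output.txt");
        try
        {
            permission.Demand();
            Console.WriteLine("Write permission granted by the current policy.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("Write permission denied - a caller lacks FileIOPermission.");
        }
    }
}
```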
Sorry, I can't answer this one without Google. But CAS is deprecated anyway.

How to create an easy-to-program-for server for many clients in C#?

I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to existing solutions.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozens, but it should scale up to 1000+) that connect to the server (which is running on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea came from back when I was still programming in VB6: create a server socket to listen for incoming requests, then create a client socket for each possible client (single-threaded). I doubt this scales well. It is also difficult to implement the communication.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data by the clients. Since there are going to be many clients, I don't want to create a thread for each client. Instead, I'd prefer to use a single thread to read all incoming data in a loop, then either processing these data directly or creating work items for a different thread to process. I guess this approach would scale well enough. Any comments on this idea are most welcome.
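Roughly, the single-reader idea described above might look like the following sketch using Socket.Select, so no per-client thread is needed; the port, buffer size, and timeout are arbitrary choices.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class PollingServer
{
    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
        listener.Listen(100);

        var clients = new List<Socket>();
        var buffer = new byte[4096];

        while (true)
        {
            // Ask which sockets (listener included) have pending work, in a single call.
            var readable = new List<Socket>(clients) { listener };
            Socket.Select(readable, null, null, 1000 * 1000); // timeout in microseconds

            foreach (var socket in readable)
            {
                if (socket == listener)
                {
                    clients.Add(listener.Accept()); // new client connection
                }
                else
                {
                    int read = socket.Receive(buffer);
                    if (read == 0) { clients.Remove(socket); socket.Close(); } // client disconnected
                    else { /* hand the bytes to a worker thread / work queue here */ }
                }
            }
        }
    }
}
```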
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxyish way seems reasonable. I worked a bit with Java RMI before. From my point of understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not in LAN, but internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as is (with netTcpBinding, but don't forget to switch security off) and create a Tracer Bullet - measure if performance meets your requirements.
If not, you can try to tune WCF - WCF is very extensible, and you can modify message serialization to send ASCII messages as you want.
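A minimal netTcpBinding sketch with security switched off, as suggested above; the contract, service name, and address are made up for illustration.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface ITelemetryService
{
    [OperationContract]
    void Report(string clientId, string payload);
}

public class TelemetryService : ITelemetryService
{
    public void Report(string clientId, string payload)
    {
        Console.WriteLine("{0}: {1}", clientId, payload); // quick processing happens here
    }
}

class Program
{
    static void Main()
    {
        // SecurityMode.None is the "switch security off" part mentioned above.
        var binding = new NetTcpBinding(SecurityMode.None);

        using (var host = new ServiceHost(typeof(TelemetryService)))
        {
            host.AddServiceEndpoint(typeof(ITelemetryService), binding, "net.tcp://localhost:8080/telemetry");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```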
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol when a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.

Conventions to follow to make Commercial software harder to crack?

What are some good conventions to follow if I want to make my application harder to crack?
As long as your entire application is client side, it's completely impossible to protect it from being cracked. The only way to protect an application from being cracked is to make it have to connect to a server to function (like an online game, for example).
And even then, I have seen some cracks that simulate a server and send a dummy confirmation to the program so it thinks it's talking to a real, legit server (in this case I'm talking about a "call home" verification strategy, not a game).
Also, keep in mind that where there is a will, there's a way. If someone wants your product badly, they will get it. And in the end you will implement protection that can cause complications for your honest customers and is just seen as a challenge to crackers.
Also, see this thread for a very thorough discussion on this topic.
A lot of the answers seem to miss the point that the question was how to make it harder, not how to make it impossible.
Obfuscation is the first critical step in that process. Anything further will be too easy to work out if the code is not obfuscated.
After that, it does depend a bit on what you are trying to avoid. Installation without a license? The timed trial blowing up? Increased usage of the software (e.g. on more CPUs) without paying additional fees?
In today's world of virtual machines, a long-term anti-cracking strategy has to involve some form of calling home; the environment is just too easy to restore to a pristine state. That being said, some types of software are useless if you have to go back to a pristine state to use them. If that is your type of software, then there are rather obscure places in the registry where you can put things to track timed trials, and in general a license key scheme that is hard to forge.
One thing to be aware of though - don't get too fancy. Quite often the licensing scheme gets the least amount of QA and hits serious problems in production, where legitimate customers get locked out. Don't drive away real paying customers out of fear of copying by people who most likely wouldn't have paid you a dime anyway.
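The "hard to forge" license key mentioned above usually means a digitally signed one. A minimal sketch of the verification side, assuming the vendor signs the license text with an RSA private key that never ships with the product; the public key XML is a placeholder, and SHA-1 is used only for brevity (a real scheme would pick a current algorithm and key storage).

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class LicenseCheck
{
    // The vendor's public key only - the private key stays with the vendor.
    const string PublicKeyXml = "<RSAKeyValue>...</RSAKeyValue>"; // placeholder

    public static bool IsValid(string licenseText, byte[] signature)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(PublicKeyXml);
            byte[] data = Encoding.UTF8.GetBytes(licenseText);
            // True only if the signature was produced with the matching private key.
            return rsa.VerifyData(data, "SHA1", signature);
        }
    }
}
```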
Book: Writing Secure Code 2
There are 3rd party tools to obfuscate your code. Visual Studio comes with one.
BUT first, you should seriously think about why you'd bother. If your app is good enough and popular enough to be worth cracking, it will be cracked, despite all of your efforts.
Here are some tips; not perfect, but maybe they could help:
Update your software frequently.
If your software connects to a server somewhere, change the protocol now and then. You can even have a number of protocols and alternate between them depending on some algorithm.
Store part of your software on a server and download it every time you run the software.
When you start your program, do a CRC check of the DLLs that you load, i.e. have a list of CRCs for approved DLLs (see the sketch below).
Have a service that watches over your main application, doing CRC checks once in a while and monitoring your other dependent DLLs/assemblies.
Unfortunately, the more you spend on copy protecting your software, the less you have to spend on functionality; it's all about balance.
Another approach is to sell your software cheap but do frequent, cheap upgrades/updates; that way it will not be profitable to crack.
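The CRC check tip above might look roughly like this sketch; the file name and hash value are placeholders, and SHA-256 stands in for CRC since it is readily available in the framework.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class IntegrityCheck
{
    // Known-good hashes, computed at build time (placeholder values).
    static readonly Dictionary<string, string> Approved = new Dictionary<string, string>
    {
        { "MyApp.Core.dll", "9F86D081884C7D659A2FEAA0C55AD015A3BF4F1B2B0B822CD15D6C15B0F00A08" },
    };

    public static bool AllAssembliesIntact(string baseDirectory)
    {
        using (var sha = SHA256.Create())
        {
            foreach (var entry in Approved)
            {
                string path = Path.Combine(baseDirectory, entry.Key);
                using (var stream = File.OpenRead(path))
                {
                    string actual = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
                    if (!string.Equals(actual, entry.Value, StringComparison.OrdinalIgnoreCase))
                        return false; // tampered with, or an unexpected build
                }
            }
        }
        return true;
    }
}
```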
The thing with .NET code is that it is relatively easy to reverse engineer using tools like .NET Reflector. Obfuscation can help, but it's still possible to work the code out.
If you want a fast solution (but of course there's no promise that you won't be cracked - it's just some "protection"), you can look at tools like Themida or Star Force. These are both well-known protection shells.
It's impossible, really. Just release a patch often, then change the salt in your encryption. However, if your software gets cracked, be proud - it must be really good :-)
This is almost mission impossible, unless you have very few customers.
Just consider: have you ever seen a version of Windows that is not cracked?
If you invent a way to protect it, someone can invent a way to crack it. Spend enough effort so that when people use it in an "illegal" way, they are aware of it. Most things beyond that risk being a waste of time ;o)
