Adding encryption to a server built with SocketAsyncEventArgs - C#

I've looked high and low and am quite surprised to find absolutely nothing that answers this question:
How can one implement SSL/TLS (or similar) encryption when using SocketAsyncEventArgs? I did read that one could theoretically use an SslStream and "cheat" by creating a go-between to coordinate data into and out of the stream, to and from the socket. That all seems ridiculous...
I looked into BouncyCastle, but it doesn't seem to have support for server-side encryption. Admittedly, the source of this info is almost 10 years old, but my own research turned up nothing.
I'm not interested in changing my server architecture, so please refrain from telling me "I don't really need the performance of SocketAsyncEventArgs and should just change to TcpListener".
I'm interested in any method of implementing reliable encryption between client and server using SocketAsyncEventArgs.

There is no direct way to attach encryption to SAEA. The two don't share a common API, so everything involves a bridge of some kind.
The easiest way to do this is - as you know - SslStream, but that isn't usually compatible with SocketAsyncEventArgs. There are alternatives - I can think of at least 3 different ways of doing this with "pipelines", for example - but all of them would be a major architecture change from naked SAEA. So if the difference between SAEA and SslStream is too large, the difference between SAEA and IDuplexPipe is even larger. However, "pipelines" is designed for highly scalable performance, so... maybe it'll suit your tastes anyway? I've blogged about pipelines a lot recently, if it would help; plus I have GitHub examples of client/server code, including a 2.5M+ ops/second redis-like server.
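To make the bridging idea concrete, here is a minimal sketch (not Marc's pipelines code; the certificate source and buffer handling are placeholders) of one common pattern: let SslStream handle the TLS handshake and record framing over the accepted socket, then consume the decrypted bytes through a PipeReader:

// Minimal sketch: TLS via SslStream, decrypted data consumed through System.IO.Pipelines.
// Assumes a server certificate loaded elsewhere; error handling omitted.
using System;
using System.IO.Pipelines;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

class TlsPipeServer
{
    static async Task HandleClientAsync(Socket client, X509Certificate2 cert)
    {
        using var ssl = new SslStream(new NetworkStream(client, ownsSocket: true));
        await ssl.AuthenticateAsServerAsync(cert);        // TLS handshake

        PipeReader reader = PipeReader.Create(ssl);       // plaintext comes out here
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            // ... process result.Buffer (a ReadOnlySequence<byte>) ...
            reader.AdvanceTo(result.Buffer.End);
            if (result.IsCompleted) break;
        }
        await reader.CompleteAsync();
    }
}

The raw SAEA receive loop disappears in this model; the socket is still asynchronous underneath, but the pipeline owns the buffering.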

Related

PGP Service for .NET Allowing Arbitrary Keys

I am in need of a PGP service for .NET that will provide the following:
Encryption/decryption of files provided as byte arrays and/or streams (i.e. writing to the hard drive and having the service read it back is unacceptable)
Use of arbitrary keys passed in as byte arrays and/or streams
Needs to work for a headless service running on a server with nobody watching it (no modal popups or user input required)
We've tried out a couple of products but haven't been totally pleased with how any of them worked. Are there any suggestions? Thanks!
It's hard to guess what you could try, as there are not many OpenPGP implementations for .NET. Namely, the OpenPGPBlackbox package of our SecureBlackbox product is the only comprehensive self-contained implementation for .NET (BouncyCastle offers something as well, but it seems to be limited to the older RFC 2440). You are welcome to check OpenPGPBlackbox, and if you have problems with it, contact our technical support as described on the product pages.
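For what it's worth, BouncyCastle's C# OpenPGP classes do cover the byte-array/stream workflow the question asks for. A rough sketch follows (class and method names are from the Org.BouncyCastle.Bcpg.OpenPgp namespace as I recall them; treat them as approximate and check against the version you install):

using System;
using System.IO;
using Org.BouncyCastle.Bcpg;
using Org.BouncyCastle.Bcpg.OpenPgp;
using Org.BouncyCastle.Security;

static class PgpSketch
{
    // Encrypt a byte array for the first encryption-capable key found in the supplied key ring bytes.
    public static byte[] Encrypt(byte[] data, byte[] publicKeyRingBytes)
    {
        PgpPublicKey encKey = FindEncryptionKey(publicKeyRingBytes);

        // Wrap the plaintext as a PGP "literal data" packet.
        using var literalOut = new MemoryStream();
        var literalGen = new PgpLiteralDataGenerator();
        using (Stream l = literalGen.Open(literalOut, PgpLiteralData.Binary, "data", data.Length, DateTime.UtcNow))
            l.Write(data, 0, data.Length);
        byte[] literal = literalOut.ToArray();

        // Encrypt the literal packet; true = add an integrity-check packet.
        var encGen = new PgpEncryptedDataGenerator(SymmetricKeyAlgorithmTag.Aes256, true, new SecureRandom());
        encGen.AddMethod(encKey);

        using var encOut = new MemoryStream();
        using (Stream c = encGen.Open(encOut, literal.Length))
            c.Write(literal, 0, literal.Length);
        return encOut.ToArray();
    }

    static PgpPublicKey FindEncryptionKey(byte[] keyBytes)
    {
        using var keyIn = new MemoryStream(keyBytes);
        var bundle = new PgpPublicKeyRingBundle(PgpUtilities.GetDecoderStream(keyIn));
        foreach (PgpPublicKeyRing ring in bundle.GetKeyRings())
            foreach (PgpPublicKey key in ring.GetPublicKeys())
                if (key.IsEncryptionKey)
                    return key;
        throw new ArgumentException("No encryption key found in the supplied key ring.");
    }
}

Everything stays in memory, and the key ring is passed in as bytes, so nothing touches the disk and no user interaction is required.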

How to write a "truly" private method in C#?

Private methods in C# can still be found and invoked with Reflection.
What I am going to do is write public string Encrypt(string data) and private string Decrypt(string cipher) methods to perform encryption and decryption.
Unfortunately, anyone who knows the .NET Framework can use Reflection to find the Decrypt method and use it to decrypt everything that has been encrypted.
That doesn't seem very secure, so I want to make the Decrypt method truly private.
But how to do that?
Updated 09 Jan 2012 10:52PM Sydney Time
bdares provides the technical explanation of this question
Eric Lippert provides the political explanation of this question
Thanks both experts!
You can't. If the attacker has access to your code, compiled or source, he can trace your program and find where it's being encrypted or decrypted.
You can add a layer of security by storing the key in a separate location, but generally if the attacker is executing code on your server, you're already screwed.
(You're only worried about this if the attacker is executing code on your server, because otherwise it doesn't matter whether or not the method is private. Also, he can't use reflection to find method names unless he's executing code on your server. In short: you're worrying about the wrong thing here.)
Your fundamental problem is that you've got the trust model wrong. If someone can use reflection then they are the user. You are the software provider. You work for them. Trust flows from them, not from you. They are the person who has to trust you, not you them.
If you don't trust the user then do not sell them your software in the first place. Don't sell weapons to people who you believe plan to attack you.
I believe you are referring to obfuscation, which is an attempt to hide/disguise code so it can't easily be read by humans when opened in a program such as Reflector.
Supplied with Visual Studio is a community-use license for PreEmptive Solutions' Dotfuscator, which will provide this functionality on small projects, and also for Windows Phone projects (if you download the add-on). There are also commercial offerings available, from the same vendor and others.
This blog post explains a little more.
If you're creating your own encryption method, you're doing it wrong. People who know way more about encryption than you or I have already come up with excellent methods for encryption, and MS has implemented most of them already.
With good encryption, it's the keys, not the method, that make it secure. Keep the keys safe and the algorithm can (and should) be published for all to see.
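To illustrate that point, here is a minimal sketch using the framework's built-in AES support (the key handling is deliberately simplistic and not a key-management recommendation):

using System.IO;
using System.Security.Cryptography;
using System.Text;

static class AesExample
{
    // Encrypts plaintext with AES. The security rests entirely on keeping 'key' secret;
    // the algorithm itself is public, and that is fine.
    public static byte[] Encrypt(string plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;                 // 16, 24 or 32 bytes
        aes.GenerateIV();              // fresh IV per message, stored alongside the ciphertext

        using var ms = new MemoryStream();
        ms.Write(aes.IV, 0, aes.IV.Length);
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        using (var sw = new StreamWriter(cs, Encoding.UTF8))
            sw.Write(plaintext);
        return ms.ToArray();
    }
}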
If you're trying to distribute both content and keep it encrypted, aka DRM, you're most probably doomed to failure unless you can keep the keys very well hidden in hardware, and even that will only buy you some time -- maybe months, maybe years.
I am not sure about your exact application. But if you are selling a product to a customer who will be doing both the Encryption and Decryption on their own system, then there is no way to keep the encryption secret from them. But you can instead allow them to generate a new Private Key for their own use. In this way each customer's data is 'secure' in regards to other customers; though obviously still not so secure within the same customer's site. In other situations where you control the encrypted content you can also look into creating a private master key to be generated on your side and only allow the customer to have a public key.

Creating a DSP system from scratch

I love electronic music and I am interested in how it all ticks.
I've found lots of helpful questions on Stack Overflow about libraries that can be used to play with audio, filters, etc. But what I am really curious about is what is actually happening: how is the data being passed between effects and oscillators? I have done research into the mathematical side of DSP and I've got that end of the problem sussed, but I am unsure what buffering system to use, etc. The final goal is to have a simple object hierarchy of effects and oscillators that pass the data between each other (maybe using multithreading if I don't end up pulling out all my hair trying to implement it). It's not going to be the next Propellerhead Reason, but I am interested in how it all works and this is more of an exercise than something that will yield an end product.
At the moment I use .net and C# and I have recently learnt F# (which may or may not lead to some interesting ways of handling the data) but if these are not suitable for the job I can learn another system if necessary.
The question is: what is the best way to get the large amounts of signal data through the program using buffers? For instance, would I be better off using a Queue, Array, Linked List, etc.? Should I make the samples immutable and create a new set of data each time I apply an effect, or just edit the values in the buffer? Should I have a dispatcher/thread-pool-style object that organises passing the data, or should the effect functions pass data directly between each other?
Thanks.
EDIT: another related question is how would I then use the windows API to play this array? I don't really want to use DirectShow because Microsoft has pretty much left it to die now
EDIT2: thanks for all the answers. After looking at all the technologies I will either use XNA 4 (I spent a while trawling the internet and found this site which explains how to do it) or NAudio to output the music... not sure which one yet, depends on how advanced the system ends up being. When C# 5.0 comes out I will use its async capabilities to create an effects architecture on top of that. I've pretty much used everybody's answer equally so now I have a conundrum of who to give the bounty to...
Have you looked at VST.NET (http://vstnet.codeplex.com/)? It's a library for writing VST plugins in C# and it has some examples. You could also consider writing a VST plugin, so that your code can be used from any host application (and even if you don't want to, looking at their code can be useful).
Signal data is usually big and requires a lot of processing. Do not use a linked list! Most libraries I know simply use an array to hold all the audio data (after all, that's what the sound card expects).
From a VST.NET sample:
public override void Process(VstAudioBuffer[] inChannels, VstAudioBuffer[] outChannels)
{
    VstAudioBuffer audioChannel = outChannels[0];

    for (int n = 0; n < audioChannel.SampleCount; n++)
    {
        audioChannel[n] = Delay.ProcessSample(inChannels[0][n]);
    }
}
The audioChannel is a wrapper around an unmanaged float* buffer.
You would probably store your samples in an immutable array. Then, when you want to play them, you copy the data into the output buffer (changing the frequency if you want) and perform effects in that buffer. Note you can use several output buffers (or channels) and sum them at the end.
Edit
I know two low-level ways to play your array: DirectSound and WaveOut from the Windows API. C# example using DirectSound. C# example with WaveOut. However, you might prefer to use an external, higher-level library, like NAudio. NAudio is convenient for .NET audio manipulation - see this blog post for sending a sine wave to the audio card. You can see they are also using an array of float, which is what I recommend (if you do your computations using bytes, you'll end up with a lot of aliasing in the sound).
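For reference, the shape of that NAudio approach looks roughly like this (a sketch, not the blog post's code; SignalGenerator and WaveOutEvent are the NAudio types I would reach for, but check the version you're using):

using System;
using System.Threading;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class SinePlayback
{
    static void Main()
    {
        // 440 Hz sine, mono, float samples generated on the fly.
        var sine = new SignalGenerator(44100, 1)
        {
            Frequency = 440,
            Gain = 0.2,
            Type = SignalGeneratorType.Sin
        };

        using var output = new WaveOutEvent();
        output.Init(sine.Take(TimeSpan.FromSeconds(2)));   // play two seconds of the signal
        output.Play();
        while (output.PlaybackState == PlaybackState.Playing)
            Thread.Sleep(100);
    }
}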
F# is probably a good choice here, as it's well fitted to manipulate functions. Functions are probably good building blocks for signal creation and processing.
F# is also good at manipulating collections in general, and arrays in particular, thanks to the higher-order functions in the Array module.
These qualities make F# popular in the finance sector and are also useful for signal processing, I would guess.
Visual F# 2010 for Technical Computing has a section dedicated to Fourier Transform, which could be relevant to what you want to do. I guess there is plenty of free information about the transform on the net, though.
Finally, to play samples, you can use XNA. I think the latest version of the API (4.0) also allows recording, but I have never used that. There is a famous music editing app for the Xbox called ezmuse+ Hamst3r Edition that uses XNA, so it's definitely possible.
With respect to buffering and asynchrony/threading/synchronization issues, I suggest you take a look at the new TPL Dataflow library. With its block primitives, concurrent data structures, dataflow networks, async message processing, and TPL's Task-based abstraction (which can be used with the async/await C# 5 features), it's a very good fit for this type of application.
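A hedged sketch of what that might look like for an audio chain (the block names, buffer size, and "effect" are invented for illustration):

using System;
using System.Threading.Tasks.Dataflow;

class DataflowDspSketch
{
    static void Main()
    {
        // Each stage transforms a buffer of samples and hands it to the next block.
        var oscillator = new TransformBlock<int, float[]>(bufferIndex =>
        {
            var buffer = new float[512];
            for (int n = 0; n < buffer.Length; n++)
                buffer[n] = (float)Math.Sin(2 * Math.PI * 440 * (bufferIndex * 512 + n) / 44100.0);
            return buffer;
        });

        var gain = new TransformBlock<float[], float[]>(buffer =>
        {
            for (int n = 0; n < buffer.Length; n++)
                buffer[n] *= 0.5f;                 // trivial gain "effect"
            return buffer;
        });

        var sink = new ActionBlock<float[]>(buffer =>
        {
            // hand the buffer to the audio output here
        });

        oscillator.LinkTo(gain, new DataflowLinkOptions { PropagateCompletion = true });
        gain.LinkTo(sink, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 0; i < 100; i++)
            oscillator.Post(i);
        oscillator.Complete();
        sink.Completion.Wait();
    }
}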
I don't know if this is really what you're looking for, but this was one of my personal projects while in college. I didn't truly understand how sound and DSP worked until I implemented it myself. I was trying to get as close to the speaker as possible, so I did it using only libsndfile, to handle the file format intricacies for me.
Basically, my first project was to create a large array of doubles, fill it with a sine wave, then use sf_writef_double() to write that array to a file to create something that I could play, and see the result in a waveform editor.
Next, I added another function in between the sine call, and the write call, to add an effect.
This way you start playing with very low-level oscillators and effects, and you can see the results immediately. Plus, it's very little code to get something like this working.
Personally, I would start with the simplest possible solution you can, then slowly add on. Try just writing out to a file and using your audio player to play it, so you don't have to deal with the audio APIs. Just use a single array to start, and modify in place. Definitely start off single-threaded. As your project grows, you can start moving to other solutions, like pipes instead of the array, multi-threading it, or working with the audio API.
If you're wanting to create a project you can ship, depending on exactly what it is, you'll probably have to move to more complex libraries, like some real-time audio processing. But the basics you learn by doing the simple way above will definitely help when you get to this point.
Good luck!
I've done quite a bit of real-time DSP, although not with audio. While either of your ideas (an immutable buffer, or a mutable buffer modified in place) could work, what I prefer to do is create a single permanent buffer for each link in the signal path. Most effects don't lend themselves well to modification in place, since each input sample affects multiple output samples. The buffer-for-each-link technique works especially well when you have resampling stages.
Here, when samples arrive, the first buffer is overwritten. Then the first filter reads the new data from its input buffer (the first buffer) and writes to its output (the second buffer). Then it invokes the second stage to read from the second buffer and write into the third.
This pattern completely eliminates dynamic allocation, allows each stage to keep a variable amount of history (since effects need some memory), and is very flexible as far as enabling rearranging the filters in the path.
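A bare-bones sketch of that buffer-per-link layout (the stage type and its members are invented for illustration):

using System;

// Each stage owns its output buffer; stage N reads from stage N-1's buffer.
// No allocation happens per block of samples once the chain is built.
class FilterStage
{
    public readonly float[] Output;
    private readonly Action<float[], float[]> _process;   // (input, output) -> fills output
    private readonly FilterStage _next;

    public FilterStage(int bufferSize, Action<float[], float[]> process, FilterStage next = null)
    {
        Output = new float[bufferSize];
        _process = process;
        _next = next;
    }

    // Read from the previous stage's buffer, write into our own, then invoke the next stage.
    public void Run(float[] input)
    {
        _process(input, Output);
        _next?.Run(Output);
    }
}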
Alright, I'll have a stab at the bounty as well then :)
I'm actually in a very similar situation. I've been making electronic music for ages, but only over the past couple of years I've started exploring actual audio processing.
You mention that you have researched the maths. I think that's crucial. I'm currently fighting my way through Ken Steiglitz' A Digital Signal Processing Primer - With Applications to Digital Audio and Computer Music. If you don't know your complex numbers and phasors it's going to be very difficult.
I'm a Linux guy so I've started writing LADSPA plugins in C. I think it's good to start at that basic level, to really understand what's going on. If I was on Windows I'd download the VST SDK from Steinberg and write a quick proof of concept plugin that just adds noise or whatever.
Another benefit of choosing a framework like VST or LADSPA is that you can immediately use your plugins in your normal audio suite. The satisfaction of applying your first home-built plugin to an audio track is unbeatable. Plus, you will be able to share your plugins with other musicians.
There are probably ways to do this in C#/F#, but I would recommend C++ if you plan to write VST plugins, just to avoid any unnecessary overhead. That seems to be the industry standard.
In terms of buffering, I've been using circular buffers (a good article here: http://www.dspguide.com/ch28/2.htm). A good exercise is to implement a finite impulse response (FIR) filter (what Steiglitz refers to as a feedforward filter) - these rely on buffering and are quite fun to play around with.
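For example, a plain FIR (feedforward) filter built on a small circular buffer might look like this (the coefficients are placeholders; a real filter would be designed for a specific frequency response):

// Finite impulse response filter: each output sample is a weighted sum of the
// last N input samples, kept in a circular buffer of delayed inputs.
class FirFilter
{
    private readonly float[] _coefficients;
    private readonly float[] _history;     // circular buffer of past inputs
    private int _position;

    public FirFilter(float[] coefficients)
    {
        _coefficients = coefficients;
        _history = new float[coefficients.Length];
    }

    public float ProcessSample(float input)
    {
        _history[_position] = input;

        float output = 0f;
        int index = _position;
        for (int i = 0; i < _coefficients.Length; i++)
        {
            output += _coefficients[i] * _history[index];
            index = (index == 0) ? _history.Length - 1 : index - 1;   // walk backwards through time
        }

        _position = (_position + 1) % _history.Length;
        return output;
    }
}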
I've got a repo on Github with a few very basic LADSPA plugins. The architectural difference aside, they could potentially be useful for someone writing VST plugins as well. https://github.com/andreasjansson/my_ladspa_plugins
Another good source of example code is the CSound project. There's tonnes of DSP code in there, and the software is aimed primarily at musicians.
Start with reading this and this.
This will give you an idea of WHAT you have to do.
Then learn the DirectShow architecture - and learn HOW: not by using it directly, but by trying to create your own simplified version of it.
You could have a look at BYOND. It is an environment for programmatic audio/MIDI instrument and effect creation in C#. It is available standalone and as a VST instrument and effect.
FULL DISCLOSURE I am the developer of BYOND.

C# Authorization & Consistent Validation w/ PHP

So I've made a simple C# application and I'm currently using HTTP requests to log in to my phpBB forum, using a custom PHP file to check the post count of the user, and it re-sends the HTTP requests every 30 seconds. Unfortunately, I fear that this can easily be cracked despite the obfuscation. I've heard of serialization, but I don't know what that is.
Any suggestions for continually validating the post count/login, or for optimizing it?
Some things that may help:
Are these on the same server or different servers? PHP has solid built in COM support, so there is no reason to use any kind of sockets if they are on the same server.
I can think of two options here: (a) Provide no authentication and make the data such that if someone gets it there is no downside (b) encrypt the data / authentication yourself.
(b) may be easier than you think. PHP has solid built-in encryption:
// CBC mode actually uses the IV (ECB ignores it and leaks patterns); prepend the IV so the other side can decrypt.
$iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC), MCRYPT_RAND);
$key = "ThisIsYourKeyOfDoomAndPower";
$encryptedData = base64_encode($iv . mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $dataToEncode, MCRYPT_MODE_CBC, $iv));
Hopefully this helps you get on the right track...
First of all, serialization is not a method of protecting your code. You can read more about it on Wikipedia.
The problem you most likely have is that you may be passing your forum credentials in an insecure way (without SSL/TLS encryption). That way, anyone using an HTTP sniffer can get that data with little effort. If you are worried that someone may decompile your app and steal your code, then there are some ways of making that harder (like the obfuscation you've mentioned), but you can never be 100% safe.
If I'm missing the point here please provide more details about your app vulnerability.

How fast or lightweight is Protocol Buffers?

Is Protocol Buffers for .NET going to be more lightweight/faster than Remoting (SerializationFormat.Binary)? Will there be first-class support for it in language/framework terms, i.e. is it handled transparently, like Remoting/WebServices?
I very much doubt that it will ever have direct language support or even framework support - it's the kind of thing which is handled perfectly well with 3rd party libraries.
My own port of the Java code is explicit - you have to call methods to serialize/deserialize. (There are RPC stubs which will automatically serialize/deserialize, but no RPC implementation yet.)
Marc Gravell's project fits in very nicely with WCF though - as far as I'm aware, you just need to tell it (once) to use protocol buffers for serialization, and the rest is transparent.
In terms of speed, you should look at Marc Gravell's benchmark page. My code tends to be slightly faster than his, but both are much, much faster than the other serialization/deserialization options in the framework. It should be pointed out that protocol buffers are much more limited as well - they don't try to serialize arbitrary types, only the supported ones. We're going to try to support more of the common data types (decimal, DateTime etc) in a portable way (as their own protocol buffer messages) in future.
Some performance and size metrics are on this page. I haven't got Jon's stats on there at the moment, just because the page is a little old (Jon: we must fix that!).
Re being transparent; protobuf-net can hook into WCF via the contract; note that it plays nicely with MTOM over basic-http too. This doesn't work with Silverlight, though, since Silverlight lacks the injection point. If you use svcutil, you also need to add an attribute to class (via a partial class).
Re BinaryFormatter (remoting); yes, this has full support; you can do this simply via a trivial ISerializable implementation (i.e. just call the Serializer method with the same args). If you use protogen to create your classes, then it can do it for you: you can enable this at the command line via arguments (it isn't enabled by default, as BinaryFormatter doesn't work on all frameworks [CF, etc]).
Note that for very small objects (single instances, etc) on local remoting (IPC), the raw BinaryFormatter performance is actually better - but for non-trivial graphs or remote links (network remoting) protobuf-net can out-perform it pretty well.
I should also note that the protocol buffers wire format doesn't directly support inheritance; protobuf-net can spoof this (while retaining wire-compatibility), but like with XmlSerializer, you need to declare the sub-classes up-front.
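As a rough illustration of the attribute-driven model and the up-front sub-class declaration (the type names here are invented; the attributes are protobuf-net's):

using System.IO;
using ProtoBuf;

// The base type declares its sub-types up-front so the wire format stays unambiguous.
[ProtoContract]
[ProtoInclude(10, typeof(Customer))]
public class Party
{
    [ProtoMember(1)]
    public string Name { get; set; }
}

[ProtoContract]
public class Customer : Party
{
    [ProtoMember(1)]
    public int AccountNumber { get; set; }
}

class Demo
{
    static void Main()
    {
        Party party = new Customer { Name = "Fred", AccountNumber = 42 };

        using var ms = new MemoryStream();
        Serializer.Serialize(ms, party);                        // written as a Party, with a marker for the sub-type
        ms.Position = 0;
        Party roundTripped = Serializer.Deserialize<Party>(ms); // comes back as a Customer
    }
}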
Why are there two versions?
The joys of open source, I guess ;-p Jon and I have worked on joint projects before, and have discussed merging these two, but the fact is that they target two different scenarios:
dotnet-protobufs (Jon's) is a port of the existing java version. This means it has a very familiar API for anybody already using the java version, and it is built on typical java constructs (builder classes, immutable data classes, etc) - with a few C# twists.
protobuf-net (Marc's) is a ground-up re-implementation following the same binary format (indeed, a critical requirement is that you can interchange data between different formats), but using typical .NET idioms:
mutable data classes (no builders)
the serialization member specifics are expressed in attributes (comparable to XmlSerializer, DataContractSerializer, etc)
If you are working on java and .NET clients, Jon's is probably a good choice for the familiar API on both sides. If you are pure .NET, protobuf-net has advantages - the familiar .NET style API, but also:
you aren't forced to be contract-first (although you can, and a code-generator is supplied)
you can re-use your existing objects (in fact, [DataContract] and [XmlType] classes can often be used without any changes at all)
it has full support for inheritance (which it achieves on the wire by spoofing encapsulation) (possibly unique for a protocol buffers implementation? note that sub-classes have to be declared in advance)
it goes out of its way to plug into and exploit core .NET tools (BinaryFormatter, XmlSerializer, WCF, DataContractSerializer) - allowing it to work directly as a remoting engine. This would presumably be quite a big split from the main java trunk for Jon's port.
Re merging them; I think we'd both be open to it, but it seems unlikely you'd want both feature sets, since they target such different requirements.
