Prevent memory leaks on reinitialise - C#

I have a class that can open a memory-mapped file and read from and write to it:
public class Memory
{
protected bool _lock;
protected Mutex _locker;
protected MemoryMappedFile _descriptor;
protected MemoryMappedViewAccessor _accessor;
public void Open(string name, int size)
{
_descriptor = MemoryMappedFile.CreateOrOpen(name, size);
_accessor = _descriptor.CreateViewAccessor(0, size, MemoryMappedFileAccess.ReadWrite);
_locker = new Mutex(true, Guid.NewGuid().ToString("N"), out _lock);
}
public void Close()
{
_accessor.Dispose();
_descriptor.Dispose();
_locker.Close();
}
public Byte[] Read(int count, int index = 0, int position = 0)
{
Byte[] bytes = new Byte[count];
_accessor.ReadArray<Byte>(position, bytes, index, count);
return bytes;
}
public void Write(Byte[] data, int count, int index = 0, int position = 0)
{
_locker.WaitOne();
_accessor.WriteArray<Byte>(position, data, index, count);
_locker.ReleaseMutex();
}
}
Usually I use it this way:
var data = new byte[5];
var m = new Memory();
m.Open("demo", data.Length);
m.Write(data, 5);
m.Close();
I would like to implement some kind of lazy loading for opening, and want to open the file only when I am ready to write something to it, e.g.:
public void Write(string name, Byte[] data, int count, int index = 0, int position = 0)
{
_locker.WaitOne();
Open(name, sizeof(byte) * count); // Now I don't need to call Open() before the write
_accessor.WriteArray<Byte>(position, data, index, count);
_locker.ReleaseMutex();
}
Question: when I call the "Write" method several times (in a loop), the member variables (like _locker) will be reinitialised. Is it safe to do it this way, or can it cause memory leaks or unpredictable behavior with the mutex?

If you open inside the Write method while holding the lock, it's safe to close again before you release the mutex.
When you are dealing with unmanaged resources and disposable objects, it's always better to implement the IDisposable interface correctly. Here is some more information.
Then you can initialise the Memory instance in a using clause:
using (var m = new Memory())
{
// Your read write
}
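For completeness, here is a minimal sketch of how the dispose pattern could look on the Memory class. It assumes the fields from the question and leaves Open/Read/Write unchanged (omitted below):
public class Memory : IDisposable
{
    protected bool _lock;
    protected Mutex _locker;
    protected MemoryMappedFile _descriptor;
    protected MemoryMappedViewAccessor _accessor;
    private bool _disposed;

    // Open/Read/Write as in the question ...

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // Release the managed wrappers around the unmanaged handles.
            _accessor?.Dispose();
            _descriptor?.Dispose();
            _locker?.Close();
        }
        _disposed = true;
    }
}
With that in place, forgetting to call Close no longer leaks the mapped view or the mutex handle, because leaving the using block disposes them.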

Related

C# dictionary Value is changing unintentionally, just when I thought I knew how dictionary worked

I created a dictionary in C# in an attempt to store the latest value from a serial device. The serial device continuously sends strings of values and each string contains an ID. There are only about 7 IDs and they repeat. The dictionary is meant to capture the current string of values and store it based on the ID so the latest values can be retrieved by ID. I am only interested in the latest values. A timer tick (10 ms) keeps the serial buffer empty and the data is processed by other methods randomly (once > 1 sec).
The issue I am having is that the dictionary value of the key/value pair is a struct:
public struct SerialFrame
{
public bool echoMsg;
public uint pgnID;
public byte can_dlc;
public byte[] data;
public uint timestamp_us;
}
Thanks in advance for any help understanding this issue.
All the values above are being saved with the dictPGN.Add() and if I break during runtime and inspect the dictionary I can see that everything is correct. However, during runtime several more messages come in and are processed, and when the dictionary is read at a later time the byte[] data values have been overwritten by strings with different IDs. I am guessing it is the way I am declaring the byte[] array, as the other values in the dictionary remain valid. I've tried several things and searched with Google but have not found an answer.
My code looks something like this:
class SerialHardware
{
public SerialHardware(int hardwareIndexParm)
{
hardwareIndex = hardwareIndexParm;
mySerialFrame = new SerialFrame();
mySerialFrame.data = new byte[8];
}
private static UsbSerialThing MySerialControl = new UsbSerialThing();
private static int hardwareIndex;
private static SerialDeviceThing canID = new SerialDeviceThing();
private static UsbSerialThing.frame frameBuffer = new UsbSerialThing.frame();
private static SerialFrame mySerialFrame = new SerialFrame();
private static Dictionary <UInt32, SerialFrame> dictPGN = new Dictionary<UInt32, SerialFrame>(); //saves entire frame by PGN
public struct SerialFrame
{
public bool echoMsg;
public uint pgnID;
public byte can_dlc;
public byte[] data;
public uint timestamp_us;
}
public SerialFrame GetPNGFromBuffer( UInt32 pgnIDValue)
{
SerialFrame returnFrame;// = new SerialFrame(); <-Just some of the things I've tried
//returnFrame.data = new byte[8];
if (dictPGN.ContainsKey(pgnIDValue))
{
//returnFrame.data appears to have been updated by other buffer reads
returnFrame = dictPGN[pgnIDValue];
return returnFrame;
}
else
{
return new SerialFrame(); //return blank frame
}
}
private static bool ReadCANDevice()
{
bool result = false;
int bufferSize = System.Runtime.InteropServices.Marshal.SizeOf(frameBuffer);
byte[] buffer = new byte[bufferSize];
int readCount = 0; //return var how many bytes in buffer
int mSTimeout = 100; //time in mSec before timeout
SerialFrame localSerialFrame = new SerialFrame();
localSerialFrame.data = new byte[8];
//Read the device
result = MySerialControl.DeviceBuf(canID, buffer, bufferSize, readCount, mSTimeout);
if (result)
{
mySerialFrame.pgnID = 0x7fffffff & (BitConverter.ToUInt32(buffer, 4));
mySerialFrame.can_dlc = buffer[8];
mySerialFrame.data[0] = buffer[12];
mySerialFrame.data[1] = buffer[13];
mySerialFrame.data[2] = buffer[14];
mySerialFrame.data[3] = buffer[15];
mySerialFrame.data[4] = buffer[16];
mySerialFrame.data[5] = buffer[17];
mySerialFrame.data[6] = buffer[18];
mySerialFrame.data[7] = buffer[19];
mySerialFrame.timestamp_us = BitConverter.ToUInt32(buffer, 20); //(uint)buffer[20] | ((uint)buffer[21] << 8) | ((uint)buffer[22] << 16) | ((uint)buffer[23] << 24);
//save message by PGN_ID
if (dictPGN.ContainsKey(mySerialFrame.pgnID))
{
//dictPGN[mySerialFrame.pgnID] = mySerialFrame;
}
else
{
//byte buffer looks good here! When I break during runtime
dictPGN.Add(mySerialFrame.pgnID, mySerialFrame);
}
}
else
{
//nothing to read, buffer empty
initDevice();
}
return result;
}
//timer event triggers read from serial device
private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e)
{
while (ReadCANDevice())
{
//empty the buffer
}
}
}
In your code, you are creating a static variable mySerialFrame and setting the reference field data to point to an allocated array in the constructor.
In the ReadCANDevice method you are re-using that same array every time you process a message. When you store the struct in the Dictionary, the fields are copied, since a struct is a value type, but the value of the data field is just a reference to the single array allocated in the constructor. So all the entries in the Dictionary share the same array, which you overwrite each time.
Removing all uses of mySerialFrame from the class and using only localSerialFrame should fix the issue, since localSerialFrame allocates a new array for its data field each time a message is processed.
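As a rough sketch (type and field names are taken from the question, so treat them as assumptions), the body of ReadCANDevice would then build a brand-new frame, with its own data array, for every message before storing it:
// Each message gets its own SerialFrame and its own byte[8],
// so dictionary entries no longer share a single buffer.
var localSerialFrame = new SerialFrame();
localSerialFrame.data = new byte[8];

localSerialFrame.pgnID = 0x7fffffff & BitConverter.ToUInt32(buffer, 4);
localSerialFrame.can_dlc = buffer[8];
Array.Copy(buffer, 12, localSerialFrame.data, 0, 8);
localSerialFrame.timestamp_us = BitConverter.ToUInt32(buffer, 20);

if (dictPGN.ContainsKey(localSerialFrame.pgnID))
    dictPGN[localSerialFrame.pgnID] = localSerialFrame;   // keep only the latest frame per PGN
else
    dictPGN.Add(localSerialFrame.pgnID, localSerialFrame);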

Gzip only after a threshold reached?

I have a requirement to archive all the data used to build a report every day. I compress most of the data using gzip, as some of the datasets can be very large (10 MB+). I write each individual protobuf graph to a file. I also whitelist a fixed set of known small object types and added some code to detect whether the file is gzipped or not when I read it. This is because a small file, when compressed, can actually be bigger than when uncompressed.
Unfortunately, just due to the nature of the data, I may only have a few elements of a larger object type, and the whitelist approach can be problematic.
Is there any way to write an object to a stream and, only if it reaches a threshold (like 8 KB), compress it? I don't know the size of the object beforehand, and sometimes I have an object graph with an IEnumerable<T> that might be considerable in size.
Edit:
The code is fairly basic. I did skim over the fact that I store this in a filestream db table; that shouldn't really matter for implementation purposes. I removed some of the extraneous code.
public async Task SerializeModel<T>(TransactionalDbContext dbConn, T item, DateTime archiveDate, string name)
{
var continuation = (await dbConn
.QueryAsync<PathAndContext>(_getPathAndContext, new {archiveDate, model=name})
.ConfigureAwait(false))
.First();
var useGzip = !_whitelist.Contains(typeof(T));
using (var fs = new SqlFileStream(continuation.Path, continuation.Context, FileAccess.Write,
FileOptions.SequentialScan | FileOptions.Asynchronous, 64*1024))
using (var buffer = useGzip ? new GZipStream(fs, CompressionLevel.Optimal) : default(Stream))
{
_serializerModel.Serialize(buffer ?? fs, item);
}
dbConn.Commit();
}
During the serialization, you can use an intermediate stream to accomplish what you are asking for. Something like this will do the job
class SerializationOutputStream : Stream
{
Stream outputStream, writeStream;
byte[] buffer;
int bufferedCount;
long position;
public SerializationOutputStream(Stream outputStream, int compressTreshold = 8 * 1024)
{
writeStream = this.outputStream = outputStream;
buffer = new byte[compressTreshold];
}
public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
public override void SetLength(long value) { throw new NotSupportedException(); }
public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
public override bool CanRead { get { return false; } }
public override bool CanSeek { get { return false; } }
public override bool CanWrite { get { return writeStream != null && writeStream.CanWrite; } }
public override long Length { get { throw new NotSupportedException(); } }
public override long Position { get { return position; } set { throw new NotSupportedException(); } }
public override void Write(byte[] buffer, int offset, int count)
{
if (count <= 0) return;
var newPosition = position + count;
if (this.buffer == null)
writeStream.Write(buffer, offset, count);
else
{
int bufferCount = Math.Min(count, this.buffer.Length - bufferedCount);
if (bufferCount > 0)
{
Array.Copy(buffer, offset, this.buffer, bufferedCount, bufferCount);
bufferedCount += bufferCount;
}
int remainingCount = count - bufferCount;
if (remainingCount > 0)
{
writeStream = new GZipStream(outputStream, CompressionLevel.Optimal);
try
{
writeStream.Write(this.buffer, 0, this.buffer.Length);
writeStream.Write(buffer, offset + bufferCount, remainingCount);
}
finally { this.buffer = null; }
}
}
position = newPosition;
}
public override void Flush()
{
if (buffer == null)
writeStream.Flush();
else if (bufferedCount > 0)
{
try { outputStream.Write(buffer, 0, bufferedCount); }
finally { buffer = null; }
}
}
protected override void Dispose(bool disposing)
{
try
{
if (!disposing || writeStream == null) return;
try { Flush(); }
finally { writeStream.Close(); }
}
finally
{
writeStream = outputStream = null;
buffer = null;
base.Dispose(disposing);
}
}
}
and use it like this
using (var stream = new SerializationOutputStream(new SqlFileStream(continuation.Path, continuation.Context, FileAccess.Write,
FileOptions.SequentialScan | FileOptions.Asynchronous, 64*1024)))
_serializerModel.Serialize(stream, item);
datasets can be very large (10 MB+)
On most devices, that is not very large. Is there a reason you can't read in the entire object before deciding whether to compress? Note also the suggestion from @Niklas to read in one buffer's worth of data (e.g. 8 KB) before deciding whether to compress.
This is because a small file, when compressed, can actually be bigger than when uncompressed.
The thing that makes a small file potentially larger is the ZIP header, in particular the dictionary. Some ZIP libraries allow you to use a custom dictionary that is known to both the compressing and uncompressing sides. I used SharpZipLib for this many years back.
It is more effort, in terms of coding and testing, to use this approach. If you feel that the benefit is worthwhile, it may provide the best approach.
Note no matter what path you take, you will physically store data using multiples of the block size of your storage device.
if the object is 1 byte or 100mb I have no idea
Note that protocol buffers is not really designed for large data sets
Protocol Buffers are not designed to handle large messages. As a general rule of thumb, if you are dealing in messages larger than a megabyte each, it may be time to consider an alternate strategy.
That said, Protocol Buffers are great for handling individual messages within a large data set. Usually, large data sets are really just a collection of small pieces, where each small piece may be a structured piece of data.
If your largest object can comfortably serialize into memory, first serialize it into a MemoryStream, then either write that MemoryStream to your final destination, or run it through a GZipStream and then to your final destination. If the largest object cannot comfortably serialize into memory, I'm not sure what further advice to give.
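If the data does fit in memory, a rough sketch of that "serialize first, then decide" idea could look like the following (the serializer call, destination stream and threshold are placeholders rather than the poster's actual API):
const int compressThreshold = 8 * 1024;   // illustrative threshold

byte[] payload;
using (var ms = new MemoryStream())
{
    serializer.Serialize(ms, item);        // any serializer that writes to a Stream
    payload = ms.ToArray();
}

using (var destination = OpenDestinationStream())   // e.g. the SqlFileStream from the question
{
    if (payload.Length >= compressThreshold)
    {
        // Large enough that gzip is worthwhile.
        using (var gzip = new GZipStream(destination, CompressionLevel.Optimal, leaveOpen: true))
            gzip.Write(payload, 0, payload.Length);
    }
    else
    {
        // Small payloads are written uncompressed to avoid gzip overhead.
        destination.Write(payload, 0, payload.Length);
    }
}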

How to compute hash of a large file chunk?

I want to be able to compute the hashes of arbitrarily sized file chunks of a file in C#.
e.g.: compute the hash of the 3rd gigabyte in a 4 GB file.
The main problem is that I don't want to load the entire file into memory, as there could be several files and the offsets could be quite arbitrary.
AFAIK, HashAlgorithm.ComputeHash allows me to use either a byte buffer or a stream. The stream would allow me to compute the hash efficiently, but for the entire file, not just for a specific chunk.
I was thinking of creating an alternate FileStream object and passing it to ComputeHash, where I would override the FileStream methods and have it read only a certain chunk of the file.
Is there a better solution than this, preferably using the built-in C# libraries?
Thanks.
You should pass in either:
A byte array containing the chunk of data to compute the hash from
A stream that restricts access to the chunk you want to compute the hash from
The second option isn't all that hard; here's a quick LINQPad program I threw together. Note that it lacks quite a bit of error handling, such as checking that the chunk is actually available (i.e. that you're passing in a position and length that actually exist and don't fall off the end of the underlying stream).
Needless to say, if this should end up as production code I would add a lot of error handling, and write a bunch of unit-tests to ensure all edge-cases are handled correctly.
You would construct the PartialStream instance for your file like this:
const long gb = 1024 * 1024 * 1024;
using (var fileStream = new FileStream(@"d:\temp\too_long_file.bin", FileMode.Open))
using (var chunk = new PartialStream(fileStream, 2 * gb, 1 * gb))
{
var hash = hashAlgorithm.ComputeHash(chunk);
}
Here's the LINQPad test program:
void Main()
{
var buffer = Enumerable.Range(0, 256).Select(i => (byte)i).ToArray();
using (var underlying = new MemoryStream(buffer))
using (var partialStream = new PartialStream(underlying, 64, 32))
{
var temp = new byte[1024]; // too much, ensure we don't read past window end
partialStream.Read(temp, 0, temp.Length);
temp.Dump();
// should output 64-95 and then 0's for the rest (64-95 = 32 bytes)
}
}
public class PartialStream : Stream
{
private readonly Stream _UnderlyingStream;
private readonly long _Position;
private readonly long _Length;
public PartialStream(Stream underlyingStream, long position, long length)
{
if (!underlyingStream.CanRead || !underlyingStream.CanSeek)
throw new ArgumentException("underlyingStream");
_UnderlyingStream = underlyingStream;
_Position = position;
_Length = length;
_UnderlyingStream.Position = position;
}
public override bool CanRead
{
get
{
return _UnderlyingStream.CanRead;
}
}
public override bool CanWrite
{
get
{
return false;
}
}
public override bool CanSeek
{
get
{
return true;
}
}
public override long Length
{
get
{
return _Length;
}
}
public override long Position
{
get
{
return _UnderlyingStream.Position - _Position;
}
set
{
_UnderlyingStream.Position = value + _Position;
}
}
public override void Flush()
{
throw new NotSupportedException();
}
public override long Seek(long offset, SeekOrigin origin)
{
switch (origin)
{
case SeekOrigin.Begin:
return _UnderlyingStream.Seek(_Position + offset, SeekOrigin.Begin) - _Position;
case SeekOrigin.End:
return _UnderlyingStream.Seek(_Position + _Length + offset, SeekOrigin.Begin) - _Position;
case SeekOrigin.Current:
return _UnderlyingStream.Seek(offset, SeekOrigin.Current) - _Position;
default:
throw new ArgumentException("origin");
}
}
public override void SetLength(long length)
{
throw new NotSupportedException();
}
public override int Read(byte[] buffer, int offset, int count)
{
long left = _Length - Position;
if (left < count)
count = (int)left;
return _UnderlyingStream.Read(buffer, offset, count);
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotSupportedException();
}
}
You can use TransformBlock and TransformFinalBlock directly. That's pretty similar to what HashAlgorithm.ComputeHash does internally.
Something like:
using(var hashAlgorithm = new SHA256Managed())
using(var fileStream = File.OpenRead(...))
{
fileStream.Position = ...;
long bytesToHash = ...;
var buf = new byte[4 * 1024];
while(bytesToHash > 0)
{
var bytesRead = fileStream.Read(buf, 0, (int)Math.Min(bytesToHash, buf.Length));
hashAlgorithm.TransformBlock(buf, 0, bytesRead, null, 0);
bytesToHash -= bytesRead;
if(bytesRead == 0)
throw new InvalidOperationException("Unexpected end of stream");
}
hashAlgorithm.TransformFinalBlock(buf, 0, 0);
var hash = hashAlgorithm.Hash;
return hash;
};
Your suggestion - passing in a restricted access wrapper for your FileStream - is the cleanest solution. Your wrapper should defer everything to the wrapped Stream except the Length and Position properties.
How? Simply create a class that inherits from Stream. Make the constructor take:
Your source Stream (in your case, a FileStream)
The chunk start position
The chunk end position
As an extension - this is a list of all the Streams that are available http://msdn.microsoft.com/en-us/library/system.io.stream%28v=vs.100%29.aspx#inheritanceContinued
To easily compute the hash of a chunk of a larger stream, use these two methods:
HashAlgorithm.TransformBlock
HashAlgorithm.TransformFinalBlock
Here's a LINQPad program that demonstrates:
void Main()
{
const long gb = 1024 * 1024 * 1024;
using (var stream = new FileStream(@"d:\temp\largefile.bin", FileMode.Open))
{
stream.Position = 2 * gb; // 3rd gb-chunk
byte[] buffer = new byte[32768];
long amount = 1 * gb;
using (var hashAlgorithm = SHA1.Create())
{
while (amount > 0)
{
int bytesRead = stream.Read(buffer, 0,
(int)Math.Min(buffer.Length, amount));
if (bytesRead > 0)
{
amount -= bytesRead;
if (amount > 0)
hashAlgorithm.TransformBlock(buffer, 0, bytesRead,
buffer, 0);
else
hashAlgorithm.TransformFinalBlock(buffer, 0, bytesRead);
}
else
throw new InvalidOperationException();
}
hashAlgorithm.Hash.Dump();
}
}
}
To answer your original question ("Is there a better solution..."):
Not that I know of.
This seems to be a very special, non-trivial task, so a little extra work might be involved anyway. I think your approach of using a custom Stream-class goes in the right direction, I'd probably do exactly the same.
And Gusdor and xander have already provided very helpful information on how to implement that — good job guys!

How to add seek and position capabilities to CryptoStream

I was trying to use CryptoStream with the AWS .NET SDK; it failed because Seek is not supported on CryptoStream. I read somewhere that, with the content length known, we should be able to add these capabilities to CryptoStream. I would like to know how to do this; any sample code would be useful too.
I have a method like this which is passed a FileStream and returns a CryptoStream. I assign the returned Stream object to the InputStream of the AWS SDK PutObjectRequest object.
public static Stream GetEncryptStream(Stream existingStream,
SymmetricAlgorithm cryptoServiceProvider,
string encryptionKey, string encryptionIV)
{
cryptoServiceProvider.Key = ASCIIEncoding.ASCII.GetBytes(encryptionKey);
cryptoServiceProvider.IV = ASCIIEncoding.ASCII.GetBytes(encryptionIV);
CryptoStream cryptoStream = new CryptoStream(existingStream,
cryptoServiceProvider.CreateEncryptor(), CryptoStreamMode.Read);
return cryptoStream ;
}
Generally with encryption there isn't a 1:1 mapping between input bytes and output bytes, so in order to seek backwards (in particular) it would have to do a lot of work - perhaps even going right back to the start and moving forwards processing the data to consume [n] bytes from the decrypted stream. Even if it knew where each byte mapped to, the state of the encryption is dependent on the data that came before it (it isn't a decoder ring ;p), so again - it would either have to read from the start (and reset back to the initialisation-vector), or it would have to track snapshots of positions and crypto-states, and go back to the nearest snapshot, then walk forwards. Lots of work and storage.
This would apply to seeking relative to either end, too.
Moving forwards from the current position wouldn't be too bad, but again you'd have to process the data - not just jump the base-stream's position.
There isn't a good way to implement this that most consumers could use - normally if you get a true from CanSeek that means "random access", but that is not efficient in this case.
As a workaround - consider copying the decrypted data into a MemoryStream or a file; then you can access the fully decrypted data in a random-access fashion.
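A minimal sketch of that workaround, assuming a readable source stream and an already configured Aes instance (both are placeholders here):
// Decrypt everything up front into a MemoryStream, which fully supports
// Seek/Position, then hand that to the API that needs random access.
var seekable = new MemoryStream();
using (var decryptor = aes.CreateDecryptor())
using (var cryptoStream = new CryptoStream(encryptedSource, decryptor, CryptoStreamMode.Read))
{
    cryptoStream.CopyTo(seekable);
}
seekable.Position = 0;   // 'seekable' now behaves like any ordinary in-memory stream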
It is quite simple: generate a keystream the same length as the data, derived from the position in the stream (stream.Position), encrypt it with ECB or any other encryption method you like, and then XOR it with the data. It is seekable, very fast, and a 1:1 encryption, so the output length is exactly the same as the input length. It is memory efficient and you can use it on huge files. I think this method is used in modern WinZip AES encryption too. The only thing that you MUST be careful about is the salt:
Use a unique salt for each stream, otherwise there is effectively no encryption.
public class SeekableAesStream : Stream
{
private Stream baseStream;
private AesManaged aes;
private ICryptoTransform encryptor;
public bool autoDisposeBaseStream { get; set; } = true;
/// <param name="salt">//** WARNING **: MUST be unique for each stream otherwise there is NO security</param>
public SeekableAesStream(Stream baseStream, string password, byte[] salt)
{
this.baseStream = baseStream;
using (var key = new PasswordDeriveBytes(password, salt))
{
aes = new AesManaged();
aes.KeySize = 128;
aes.Mode = CipherMode.ECB;
aes.Padding = PaddingMode.None;
aes.Key = key.GetBytes(aes.KeySize / 8);
aes.IV = new byte[16]; //useless for ECB
encryptor = aes.CreateEncryptor(aes.Key, aes.IV);
}
}
private void cipher(byte[] buffer, int offset, int count, long streamPos)
{
//find block number
var blockSizeInByte = aes.BlockSize / 8;
var blockNumber = (streamPos / blockSizeInByte) + 1;
var keyPos = streamPos % blockSizeInByte;
//buffer
var outBuffer = new byte[blockSizeInByte];
var nonce = new byte[blockSizeInByte];
var init = false;
for (int i = offset; i < offset + count; i++)
{
//encrypt the nonce to form next xor buffer (unique key)
if (!init || (keyPos % blockSizeInByte) == 0)
{
BitConverter.GetBytes(blockNumber).CopyTo(nonce, 0);
encryptor.TransformBlock(nonce, 0, nonce.Length, outBuffer, 0);
if (init) keyPos = 0;
init = true;
blockNumber++;
}
buffer[i] ^= outBuffer[keyPos]; //simple XOR with generated unique key
keyPos++;
}
}
public override bool CanRead { get { return baseStream.CanRead; } }
public override bool CanSeek { get { return baseStream.CanSeek; } }
public override bool CanWrite { get { return baseStream.CanWrite; } }
public override long Length { get { return baseStream.Length; } }
public override long Position { get { return baseStream.Position; } set { baseStream.Position = value; } }
public override void Flush() { baseStream.Flush(); }
public override void SetLength(long value) { baseStream.SetLength(value); }
public override long Seek(long offset, SeekOrigin origin) { return baseStream.Seek(offset, origin); }
public override int Read(byte[] buffer, int offset, int count)
{
var streamPos = Position;
var ret = baseStream.Read(buffer, offset, count);
cipher(buffer, offset, count, streamPos);
return ret;
}
public override void Write(byte[] buffer, int offset, int count)
{
cipher(buffer, offset, count, Position);
baseStream.Write(buffer, offset, count);
}
protected override void Dispose(bool disposing)
{
if (disposing)
{
encryptor?.Dispose();
aes?.Dispose();
if (autoDisposeBaseStream)
baseStream?.Dispose();
}
base.Dispose(disposing);
}
}
Usage:
static void test()
{
var buf = new byte[255];
for (byte i = 0; i < buf.Length; i++)
buf[i] = i;
//encrypting
var uniqueSalt = new byte[16]; //** WARNING **: MUST be unique for each stream otherwise there is NO security
var baseStream = new MemoryStream();
var cryptor = new SeekableAesStream(baseStream, "password", uniqueSalt);
cryptor.Write(buf, 0, buf.Length);
//decrypting at position 200
cryptor.Position = 200;
var decryptedBuffer = new byte[50];
cryptor.Read(decryptedBuffer, 0, 50);
}
As an extension to Mark Gravell's answer, the seekability of a cipher depends on the mode of operation you're using for the cipher. Most modes of operation aren't seekable, because each block of ciphertext depends in some way on the previous one. ECB is seekable, but it's almost universally a bad idea to use it. CTR mode is another one that can be accessed randomly, and CBC ciphertext can be decrypted from an arbitrary block (each block needs only the previous ciphertext block), although CBC encryption itself is sequential.
All of these modes have their own vulnerabilities, however, so you should read carefully and think long and hard (and preferably consult an expert) before choosing one.

Suggestions for a thread safe non-blocking buffer manager

I've created a simple buffer manager class to be used with asynchronous sockets. This will protect against memory fragmentation and improve performance. Any suggestions for further improvements or other approaches?
public class BufferManager
{
private int[] free;
private byte[] buffer;
private readonly int blocksize;
public BufferManager(int count, int blocksize)
{
buffer = new byte[count * blocksize];
free = new int[count];
this.blocksize = blocksize;
for (int i = 0; i < count; i++)
free[i] = 1;
}
public void SetBuffer(SocketAsyncEventArgs args)
{
for (int i = 0; i < free.Length; i++)
{
if (1 == Interlocked.CompareExchange(ref free[i], 0, 1))
{
args.SetBuffer(buffer, i * blocksize, blocksize);
return;
}
}
args.SetBuffer(new byte[blocksize], 0, blocksize);
}
public void FreeBuffer(SocketAsyncEventArgs args)
{
int offset = args.Offset;
byte[] buff = args.Buffer;
args.SetBuffer(null, 0, 0);
if (buffer == buff)
free[offset / blocksize] = 1;
}
}
Edit:
The original answer below addresses a code construction issue of overly tight coupling. However, considering the solution as a whole, I would avoid using just one large buffer and handing out slices of it in this way. You expose your code to buffer overrun (and, shall we call them, buffer "underrun") issues. Instead I would manage an array of byte arrays, each being a discrete buffer. The offset handed over is always 0 and the size is always the length of the buffer. Any bad code that attempts to read/write beyond the boundaries will be caught.
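A sketch of that alternative (class and method names here are illustrative, not part of the original code):
// One discrete array per buffer: every checkout gets its own byte[],
// so out-of-range access can never clobber a neighbouring buffer.
public class DiscreteBufferManager
{
    private readonly byte[][] buffers;
    private readonly int[] free;

    public DiscreteBufferManager(int count, int blocksize)
    {
        buffers = new byte[count][];
        free = new int[count];
        for (int i = 0; i < count; i++)
        {
            buffers[i] = new byte[blocksize];
            free[i] = 1;
        }
    }

    public byte[] Rent()
    {
        for (int i = 0; i < free.Length; i++)
            if (1 == Interlocked.CompareExchange(ref free[i], 0, 1))
                return buffers[i];
        return new byte[buffers[0].Length]; // pool exhausted: fall back to a throwaway buffer
    }

    public void Return(byte[] buffer)
    {
        for (int i = 0; i < buffers.Length; i++)
            if (ReferenceEquals(buffers[i], buffer))
            {
                free[i] = 1;
                return;
            }
    }
}
Because each rented buffer is its own array, any read or write outside its bounds throws an IndexOutOfRangeException instead of silently corrupting a neighbouring block.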
Original answer
You've coupled the class to SocketAsyncEventArgs when in fact all it needs is a function to assign the buffer. Change SetBuffer to:-
public void SetBuffer(Action<byte[], int, int> fnSet)
{
for (int i = 0; i < free.Length; i++)
{
if (1 == Interlocked.CompareExchange(ref free[i], 0, 1))
{
fnSet(buffer, i * blocksize, blocksize);
return;
}
}
fnSet(new byte[blocksize], 0, blocksize);
}
Now you can call from consuming code something like this:-
myMgr.SetBuffer((buf, offset, size) => myArgs.SetBuffer(buf, offset, size));
I'm not sure that type inference is clever enough to resolve the types of buf, offset, size in this case. If not you will have to place the types in the argument list:-
myMgr.SetBuffer((byte[] buf, int offset, int size) => myArgs.SetBuffer(buf, offset, size));
However, now your class can be used to allocate a buffer for all manner of requirements that also use the byte[], int, int pattern, which is very common.
Of course you need to decouple the free operation too, but that's:-
public void FreeBuffer(byte[] buff, int offset)
{
if (buffer == buff)
free[offset / blocksize] = 1;
}
This requires you to call SetBuffer on the EventArgs in consuming code in the case of SocketAsyncEventArgs. If you are concerned that this approach reduces the atomicity of freeing the buffer and removing it from the socket's use, then sub-class this adjusted buffer manager and put the SocketAsyncEventArgs-specific code in the sub-class.
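For example, a thin subclass along these lines (names are illustrative) would keep the SocketAsyncEventArgs plumbing in one place while reusing the decoupled SetBuffer/FreeBuffer shown above:
public class SocketBufferManager : BufferManager
{
    public SocketBufferManager(int count, int blocksize) : base(count, blocksize) { }

    public void SetBuffer(SocketAsyncEventArgs args)
    {
        // Delegate to the generic overload from the adjusted manager.
        SetBuffer((buf, offset, size) => args.SetBuffer(buf, offset, size));
    }

    public void FreeBuffer(SocketAsyncEventArgs args)
    {
        byte[] buff = args.Buffer;
        int offset = args.Offset;
        args.SetBuffer(null, 0, 0);   // detach the buffer from the event args
        FreeBuffer(buff, offset);     // then return it to the pool
    }
}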
I've created a new class with a completely different approach.
I have a server class that receives byte arrays. It will then invoke different delegates handing them the buffer objects so that other classes can process them. When those classes are done they need a way to push the buffers back to the stack.
public class SafeBuffer
{
private static Stack bufferStack;
private static byte[][] buffers;
private byte[] buffer;
private int offset, length;
private SafeBuffer(byte[] buffer)
{
this.buffer = buffer;
offset = 0;
length = buffer.Length;
}
public static void Init(int count, int blocksize)
{
bufferStack = Stack.Synchronized(new Stack());
buffers = new byte[count][];
for (int i = 0; i < buffers.Length; i++)
buffers[i] = new byte[blocksize];
for (int i = 0; i < buffers.Length; i++)
bufferStack.Push(new SafeBuffer(buffers[i]));
}
public static SafeBuffer Get()
{
return (SafeBuffer)bufferStack.Pop();
}
public void Close()
{
bufferStack.Push(this);
}
public byte[] Buffer
{
get
{
return buffer;
}
}
public int Offset
{
get
{
return offset;
}
set
{
offset = value;
}
}
public int Length
{
get
{
return buffer.Length;
}
}
}
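Usage would then look something like this (pool and block sizes are arbitrary for the sketch):
SafeBuffer.Init(count: 100, blocksize: 4096);   // set up the pool once at startup

SafeBuffer buf = SafeBuffer.Get();              // pops a pooled buffer (throws if the pool is empty)
try
{
    // fill buf.Buffer and hand it to whatever delegate processes it
}
finally
{
    buf.Close();                                // pushes the buffer back onto the stack
}
One thing to consider is what should happen when the pool runs dry: Stack.Pop throws on an empty stack, so callers either need to handle that or the pool needs a fallback allocation like the original SetBuffer had.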
