Problems with .NET file mapping - C#

There are two processes:
Win32, C++ - writer
.NET 4.5, C# - reader
The first process creates a shared memory buffer and shares it with the second process. The layout is:
(int)(buffer+0) - position up to which the writer may write.
(int)(buffer+4) - position up to which the reader may read.
... - blocks of the form [size_mess][mess]
Writing is circular: when the end of the buffer is reached, it wraps back to the beginning.
At some point an error occurs:
1. The writer process waits for its data to be read.
2. The reader process reads a block, but gets old data (written during the previous pass over the buffer).
I tried MemoryMappedViewAccessor and MemoryMappedViewStream... with no effect.
Could this be a delay caused by .NET?
unsafe public void LoadFromMemory(string name)
{
    const UInt32 capacity = 1200;
    const UInt32 maxsize = 1024;

    MemoryMappedFile mf = MemoryMappedFile.OpenExisting(name, MemoryMappedFileRights.FullControl);
    MemoryMappedViewStream stream = mf.CreateViewStream(0, capacity, MemoryMappedFileAccess.ReadWrite);
    BinaryReader reader = new BinaryReader(stream);

    byte* bytePtr = null;
    stream.SafeMemoryMappedViewHandle.AcquirePointer(ref bytePtr);

    int size = 0;
    long pos_begin = 0x10;
    // Fenced read of the "until when you can read" cursor at offset 4.
    long pos_max = Interlocked.CompareExchange(ref *((int*)(bytePtr + 4)), 0, 0);

    while (<work>)
    {
        // Spin until the writer publishes more data.
        while (pos_begin >= pos_max)
        {
            pos_max = Interlocked.CompareExchange(ref *((int*)(bytePtr + 4)), 0, 0);
        }

        // Block header: two-byte little-endian message size.
        size = (bytePtr[pos_begin + 1] << 8) + bytePtr[pos_begin];
        stream.Seek(pos_begin + 2, SeekOrigin.Begin);
        work(reader);
        // If a breakpoint is put here, then at the moment of the error
        // size != bytePtr[pos_begin] in the Watch window, and the rest of the data is stale too.

        if (pos_begin + size > maxsize) pos_begin = 0x10; // wrap to the beginning
        else pos_begin += size;

        // Publish the reader position at offset 0 for the first process.
        Interlocked.Exchange(ref *((int*)bytePtr), (int)pos_begin);
    }
}
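
For reference, since MemoryMappedViewAccessor is mentioned above as one of the things already tried, here is a minimal sketch (an assumption-laden illustration, not a fix) of how the two header cursors could be read and published through the accessor API instead of a raw pointer, assuming the layout described above: the reader's position lives at offset 0 and the writer's "until when you can read" limit at offset 4.

using System.IO.MemoryMappedFiles;
using System.Threading;

// Sketch only, assuming the layout described in the question:
// offset 0 = position published by the reader ("until when you can write"),
// offset 4 = limit published by the writer ("until when you can read").
static class MappedCursors
{
    // Fenced read of the limit published by the writer.
    public static int ReadLimit(MemoryMappedViewAccessor accessor)
    {
        Thread.MemoryBarrier();   // full fence: the read below cannot be cached or reordered across it
        return accessor.ReadInt32(4);
    }

    // Publish the reader's position back to the writer.
    public static void PublishReaderPosition(MemoryMappedViewAccessor accessor, int pos)
    {
        accessor.Write(0, pos);
        Thread.MemoryBarrier();   // full fence: keep this write ordered before whatever follows
    }
}

The Interlocked.CompareExchange(ref *ptr, 0, 0) calls in the code above already perform a fenced 32-bit read, so this variant only avoids the unsafe pointer; it does not by itself explain the stale data.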

Related

In C#, what's the best way to deal with partially received messages using SocketAsyncEventArgs Buffer

In order to clean up some messy code and get a better understanding of the SocketAsyncEventArgs class, I'd like to know the most efficient technique to reassemble partially received messages from SocketAsyncEventArgs buffers.
To give you the big picture, I'm connected to a TCP server using a C# Socket client that will essentially receive data. The data received is message-based, delimited by a \n character.
As you're probably already aware, when using the ReceiveAsync method it's almost certain that the last received message will be incomplete, so you have to locate the index of the last complete message, copy the incomplete tail of the buffer, keep it as the start of the next received buffer, and so on.
The thing is, I want to abstract this operation away from the upper layer and call ProcessReceiveDataImpl as soon as I have complete messages in _tmpBuffer. I find my Buffer.BlockCopy code not very readable (it's also very old code (-:), but in any case I'd like to know: what do you do in this typical use case?
Code to reassemble messages:
public class SocketClient
{
    private const int _receiveBufferSize = 8192;
    private byte[] _remBuffer = new byte[2 * _receiveBufferSize];
    private byte[] _tmpBuffer = new byte[2 * _receiveBufferSize];
    private int _remBufferSize = 0;
    private int _tmpBufferSize = 0;

    private void ProcessReceiveData(SocketAsyncEventArgs e)
    {
        // the buffer to process
        byte[] curBuffer = e.Buffer;
        int curBufferSize = e.BytesTransferred;
        int curBufferOffset = e.Offset;
        int curBufferLastIndex = e.BytesTransferred - 1;
        int curBufferLastSplitIndex = int.MinValue;

        if (_remBufferSize > 0)
        {
            curBufferLastSplitIndex = GetLastSplitIndex(curBuffer, curBufferOffset, curBufferSize);
            if (curBufferLastSplitIndex != curBufferLastIndex)
            {
                // copy the remainder + part of the current buffer into tmp
                Buffer.BlockCopy(_remBuffer, 0, _tmpBuffer, 0, _remBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferOffset, _tmpBuffer, _remBufferSize, curBufferLastSplitIndex + 1);
                _tmpBufferSize = _remBufferSize + curBufferLastSplitIndex + 1;
                ProcessReceiveDataImpl(_tmpBuffer, _tmpBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferLastSplitIndex + 1, _remBuffer, 0, curBufferLastIndex - curBufferLastSplitIndex);
                _remBufferSize = curBufferLastIndex - curBufferLastSplitIndex;
            }
            else
            {
                // copy the remainder + the entire current buffer into tmp
                Buffer.BlockCopy(_remBuffer, 0, _tmpBuffer, 0, _remBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferOffset, _tmpBuffer, _remBufferSize, curBufferSize);
                ProcessReceiveDataImpl(_tmpBuffer, _remBufferSize + curBufferSize);
                _remBufferSize = 0;
            }
        }
        else
        {
            curBufferLastSplitIndex = GetLastSplitIndex(curBuffer, curBufferOffset, curBufferSize);
            if (curBufferLastSplitIndex != curBufferLastIndex)
            {
                // we must copy the unused bytes into the remaining buffer
                _remBufferSize = curBufferLastIndex - curBufferLastSplitIndex;
                Buffer.BlockCopy(curBuffer, curBufferLastSplitIndex + 1, _remBuffer, 0, _remBufferSize);
                // process the msg
                ProcessReceiveDataImpl(curBuffer, curBufferLastSplitIndex + 1);
            }
            else
            {
                // we can process the entire msg
                ProcessReceiveDataImpl(curBuffer, curBufferSize);
            }
        }
    }

    protected virtual void ProcessReceiveDataImpl(byte[] buffer, int bufferSize)
    {
    }

    private int GetLastSplitIndex(byte[] buffer, int offset, int bufferSize)
    {
        for (int i = offset + bufferSize - 1; i >= offset; i--)
        {
            if (buffer[i] == '\n')
            {
                return i;
            }
        }
        return -1;
    }
}
Your input is very important and appreciated!
Thank you!
Updated:
Also, rather than calling ProcessReceiveDataImpl and blocking further receive operations, would it be useful to queue completed messages and make them available to the consumer?
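
For what it's worth, a minimal sketch of that idea, assuming the receive loop should never be blocked by processing; the QueuingSocketClient and HandleMessage names are hypothetical and not part of the code above:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: queue complete messages instead of processing them inline,
// so the receive path is never stalled by a slow consumer.
public class QueuingSocketClient : SocketClient
{
    private readonly BlockingCollection<byte[]> _messages = new BlockingCollection<byte[]>();

    public QueuingSocketClient()
    {
        // One background consumer drains the queue.
        Task.Run(() =>
        {
            foreach (var message in _messages.GetConsumingEnumerable())
                HandleMessage(message);
        });
    }

    protected override void ProcessReceiveDataImpl(byte[] buffer, int bufferSize)
    {
        // Copy, because the receive buffer will be reused by the next receive.
        var message = new byte[bufferSize];
        Buffer.BlockCopy(buffer, 0, message, 0, bufferSize);
        _messages.Add(message);
    }

    private void HandleMessage(byte[] message)
    {
        // Application-specific handling of one batch of complete messages.
    }
}

Whether this pays off depends on how expensive the processing really is; the extra copy per batch is the price of handing the buffer off to another thread.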

C# FileStream Read - recycle array?

I am working with FileStream.Read: https://msdn.microsoft.com/en-us/library/system.io.filestream.read%28v=vs.110%29.aspx
What I'm trying to do is read a large file in a loop, a certain number of bytes at a time, not the whole file at once. The code example shows this for reading:
int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
The definition of "bytes" is: "When this method returns, contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source."
I want to read in only 1 MB at a time, so I do this:
using (FileStream fsInputFile = new FileStream(strInputFileName, FileMode.Open, FileAccess.Read))
{
    int intBytesToRead = 1024;
    int intTotalBytesRead = 0;
    int intInputFileByteLength = 0;
    byte[] btInputBlock = new byte[intBytesToRead];
    byte[] btOutputBlock = new byte[intBytesToRead];

    intInputFileByteLength = (int)fsInputFile.Length;

    while (intInputFileByteLength - 1 >= intTotalBytesRead)
    {
        if (intInputFileByteLength - intTotalBytesRead < intBytesToRead)
        {
            intBytesToRead = intInputFileByteLength - intTotalBytesRead;
        }

        // *** Problem is here ***
        int n = fsInputFile.Read(btInputBlock, intTotalBytesRead, intBytesToRead);
        intTotalBytesRead += n;
        fsOutputFile.Write(btInputBlock, intTotalBytesRead - n, n);
    }

    fsOutputFile.Close();
}
Where the problem area is marked, btInputBlock works on the first cycle because it reads in 1024 bytes. But on the second loop it doesn't recycle this byte array; instead it tries to append the next 1024 bytes to btInputBlock. As far as I can tell, you can only specify the offset and length of the file you want to read, not the offset and length within btInputBlock. Is there a way to "re-use" the array that FileStream.Read dumps into, or should I find another solution?
Thanks.
P.S. The exception on the read is: "Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection."
Your code can be simplified somewhat:
int num;
byte[] buffer = new byte[1024];
while ((num = fsInputFile.Read(buffer, 0, buffer.Length)) != 0)
{
    //Do your work here
    fsOutputFile.Write(buffer, 0, num);
}
Note that Read takes in the array to fill, the offset (which is the offset within the array at which the bytes should be placed), and the (maximum) number of bytes to read.
That's because you're incrementing intTotalBytesRead, which is an offset into the array, not into the FileStream. In your case it should always be zero, so that each read overwrites the previous data in the array rather than appending after it.
int n = fsInputFile.Read(btInputBlock, intTotalBytesRead, intBytesToRead); //currently
int n = fsInputFile.Read(btInputBlock, 0, intBytesToRead); //should be
FileStream doesn't need an offset; every Read picks up where the last one left off.
See https://msdn.microsoft.com/en-us/library/system.io.filestream.read(v=vs.110).aspx
for details
Your Read call should be Read(btInputBlock, 0, intBytesToRead). The 2nd parameter is the offset into the array where you want to start writing the bytes. Similarly, for Write you want Write(btInputBlock, 0, n), as the 2nd parameter is the offset in the array to start writing bytes from. Also, you don't need to call Close, as the using will clean up the FileStream for you.
using (FileStream fsInputFile = new FileStream(strInputFileName, FileMode.Open, FileAccess.Read))
{
    int intBytesToRead = 1024;
    int intTotalBytesRead = 0;
    byte[] btInputBlock = new byte[intBytesToRead];

    while (fsInputFile.Position < fsInputFile.Length)
    {
        int n = fsInputFile.Read(btInputBlock, 0, intBytesToRead);
        intTotalBytesRead += n;
        fsOutputFile.Write(btInputBlock, 0, n);
    }
}

Can't read FileStream into byte[] correctly

I have some C# code that I call as TF(true,"C:\input.txt","C:\noexistsyet.file"), but when I run it, it breaks on FileStream.Read() when reading the last chunk of the file into the buffer, with an index-out-of-bounds ArgumentException.
To me, the code seems logical, with no way to overflow the buffer when writing to it. I thought I had all that covered with rdlen and _chunk, but maybe I'm looking at it wrong. Any help?
My error: ArgumentException was unhandled: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
public static bool TF(bool tf, string filepath, string output)
{
    long _chunk = 16 * 1024; //buffer count
    long total_size = 0;
    long rdlen = 0;
    long wrlen = 0;
    long full_chunks = 0;
    long end_remain_buf_len = 0;

    FileInfo fi = new FileInfo(filepath);
    total_size = fi.Length;
    full_chunks = total_size / _chunk;
    end_remain_buf_len = total_size % _chunk;
    fi = null;

    FileStream fs = new FileStream(filepath, FileMode.Open);
    FileStream fw = new FileStream(output, FileMode.Create);

    for (long chunk_pass = 0; chunk_pass < full_chunks; chunk_pass++)
    {
        int chunk = (int)_chunk * ((tf) ? (1 / 3) : 3); //buffer count for xbuffer
        byte[] buffer = new byte[_chunk];
        byte[] xbuffer = new byte[(buffer.Length * ((tf) ? (1 / 3) : 3))];

        //Read chunk of file into buffer
        fs.Read(buffer, (int)rdlen, (int)_chunk); //ERROR occurs here

        //xbuffer = do stuff to make it *3 longer or *(1/3) shorter;

        //Write xbuffer into chunk of completed file
        fw.Write(xbuffer, (int)wrlen, chunk);

        //Keep track of location in file, for index/offset
        rdlen += _chunk;
        wrlen += chunk;
    }
    if (end_remain_buf_len > 0)
    {
        byte[] buffer = new byte[end_remain_buf_len];
        byte[] xbuffer = new byte[(buffer.Length * ((tf) ? (1 / 3) : 3))];

        fs.Read(buffer, (int)rdlen, (int)end_remain_buf_len); //error here too
        //xbuffer = do stuff to make it *3 longer or *(1/3) shorter;
        fw.Write(xbuffer, (int)wrlen, (int)end_remain_buf_len * ((tf) ? (1 / 3) : 3));

        rdlen += end_remain_buf_len;
        wrlen += chunk;
    }
    //Close opened files
    fs.Close();
    fw.Close();

    return false; //no functionality yet lol
}
The Read() method of Stream (the base class of FileStream) returns an int indicating the number of bytes read, and 0 when it has no more bytes to read, so you don't even need to know the file size beforehand:
public static void CopyFileChunked(int chunkSize, string filepath, string output)
{
    byte[] chunk = new byte[chunkSize];
    using (FileStream reader = new FileStream(filepath, FileMode.Open))
    using (FileStream writer = new FileStream(output, FileMode.Create))
    {
        int bytes;
        while ((bytes = reader.Read(chunk, 0, chunkSize)) > 0)
        {
            writer.Write(chunk, 0, bytes);
        }
    }
}
Or even File.Copy() may do the trick, if you can live with letting the framework decide about the chunk size.
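For example (the third argument overwrites the destination if it already exists):

File.Copy(filepath, output, overwrite: true);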
I think it's failing on this line:
fw.Write(xbuffer, (int)wrlen, chunk);
You are declaring xbuffer as
byte[] xbuffer = new byte[(buffer.Length * ((tf) ? (1 / 3) : 3))];
Since 1 / 3 is integer division, it returns 0, so you are declaring xbuffer with size 0, hence the error. You can fix it by casting one of the operands to a floating-point type or by using a suitable literal, but then you still need to cast the result back to an integer:
byte[] xbuffer = new byte[(int)(buffer.Length * ((tf) ? (1m / 3) : 3))];
The same problem is also present in the chunk declaration.
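A quick illustration of the difference, reusing the buffer variable from the question:

int zero = 1 / 3;                              // integer division: 0, so new byte[0]
decimal third = 1m / 3;                        // decimal division: 0.333...
int len = (int)(buffer.Length * (1m / 3));     // truncate back to an int for the array size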

Image sending through socket C++ server and C# client

I'm developing a Windows Store application using C#. I would like to make a TCP connection to receive images (for now) from a desktop server. The server is in C++.
I have a C++ client to test the function and it is working perfectly. Now what I want is a similar client, but in C#. I tried converting it, but no luck; I tried to use the same logic but had tons of errors and deleted everything.
Help is appreciated, thanks.
C++ Server
int size = 8192; //image size
char* bufferCMP;
bufferCMP = (char*)malloc(sizeof(char) * size);

FILE *p_file;
p_file = fopen("C:\\Program Files\\img1.png", "rb");
fread(bufferCMP, 1, size, p_file);
fclose(p_file);

int chunkcount = size / DEFAULT_BUFLEN;
int lastchunksize = size - (chunkcount * DEFAULT_BUFLEN);
int fileoffset = 0;

printf("Sending actual Chunk");
while (chunkcount > 0)
{
    iResult = send(ClientSocket, bufferCMP + (fileoffset * DEFAULT_BUFLEN), DEFAULT_BUFLEN, 0);
    fileoffset++;
    chunkcount--;

    if (iResult != DEFAULT_BUFLEN)
    {
        printf("Sending Buffer size <> Default buffer length ::: %d\n");
    }
    else
    {
        printf("Sending Buffer size = %d \n", iResult, fileoffset);
    }
}

printf("Sending last Chunk", lastchunksize);
iResult = send(ClientSocket, bufferCMP + (fileoffset * DEFAULT_BUFLEN), lastchunksize, 0);
C++ Client (to be converted into C#)
int size = 8192;
int FileCounter = 0;
bool flg = true;
char * fileComplete;
char * filesizeBuffer;
FILE *temp;

int receiveBuffer = 0;
int desiredRecBuffer = size;
//int desiredRecBuffer = DEFAULT_BUFLEN ;

fileComplete = (char*)malloc(sizeof(char) * size);

while (desiredRecBuffer > 0)
{
    iResult = recv(ConnectSocket, fileComplete + receiveBuffer, desiredRecBuffer, 0);
    //iResult = recv( ClientSocket, fileComplete + receiveBuffer , fileSize , 0 );
    if (iResult < 1)
    {
        printf("Reveive Buffer Error %d \n", WSAGetLastError());
    }
    else
    {
        receiveBuffer += iResult;
        desiredRecBuffer = size - receiveBuffer;
        printf("Reveived Data size : %d \n", desiredRecBuffer);
    }
}

FILE *File = fopen("C:\\Users\\amirk_000\\Pictures\\img1b.png", "wb");
fwrite(fileComplete, 1, size, File);
//flg = true;
free(fileComplete);
fclose(File);
A full example of a C# client socket is available on MSDN.
Modify the given SocketSendReceive method to write the received buffer (bytesReceived array) to a file stream.
Something like the following should do it:
using (var file = File.OpenWrite("myimage.png"))
{
    do
    {
        bytes = s.Receive(bytesReceived, bytesReceived.Length, 0);
        file.Write(bytesReceived, 0, bytes);
    }
    while (bytes > 0);
}
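
For reference, a rough standalone sketch of a desktop-style client along those lines (the endpoint, port number, and output path are placeholders; it reads until the sender closes the connection, like the loop above):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ImageReceiver
{
    static void Main()
    {
        byte[] buffer = new byte[8192];

        using (Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        using (FileStream file = File.OpenWrite("img1b.png"))
        {
            // Placeholder endpoint; use the address and port the C++ server listens on.
            socket.Connect(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 27015));

            int bytes;
            do
            {
                bytes = socket.Receive(buffer, buffer.Length, SocketFlags.None);
                file.Write(buffer, 0, bytes);
            }
            while (bytes > 0);   // 0 means the sender closed the connection
        }
    }
}

If the receiving side really has to run inside a Windows Store app, the WinRT equivalent is Windows.Networking.Sockets.StreamSocket rather than System.Net.Sockets, so treat this only as the shape of the receive loop.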

How do I reduce the memory usage in this c# File Transfer?

I have a simple file transfer app written in C# that uses TCP to send the data.
This is how I send files:
Socket clientSock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
byte[] fileName = Encoding.UTF8.GetBytes(fName); //file name
byte[] fileData = new byte[1000 * 1024];
byte[] fileNameLen = BitConverter.GetBytes(fileName.Length); //length of file name
FileStream fs = new FileStream(textBox1.Text, FileMode.Open);

try
{
    clientData = new byte[4 + fileName.Length];
}
catch (OutOfMemoryException exc)
{
    MessageBox.Show("Out of memory");
    return;
}

fileNameLen.CopyTo(clientData, 0);
fileName.CopyTo(clientData, 4);

clientSock.Connect("172.16.12.91", 9050);
clientSock.Send(clientData, 0, clientData.Length, SocketFlags.None);
progressBar1.Maximum = (int)fs.Length;

while (true)
{
    int index = 0;
    while (index < fs.Length)
    {
        int bytesRead = fs.Read(fileData, index, fileData.Length - index);
        if (bytesRead == 0)
        {
            break;
        }
        index += bytesRead;
    }

    if (index != 0)
    {
        clientSock.Send(fileData, index, SocketFlags.None);
        if ((progressBar1.Value + (1024 * 1000)) > fs.Length)
        {
            progressBar1.Value += ((int)fs.Length - progressBar1.Value);
        }
        else
            progressBar1.Value += (1024 * 1000);
    }

    if (index != fileData.Length)
    {
        progressBar1.Value = 0;
        clientSock.Close();
        fs.Close();
        break;
    }
}
In Task Manager, the Release version of this app's memory usage goes to 13 MB when I use the OpenFileDialog, then goes up to 16 MB when it sends data and stays there. Is there anything I can do to reduce the memory usage? Or is there a good tool that I can use to monitor the total memory allocated by the app?
And while we're there, is 16 MB really that high?
16 MB doesn't sound like that much memory usage.
You could use the built-in Visual Studio profiler to see what is costing the most. See the link below for more information about the profiler:
http://blogs.msdn.com/b/profiler/archive/2009/06/10/write-faster-code-with-vs-2010-profiler.aspx
I noticed that when I minimize any app, the memory usage goes down quite significantly. I eventually looked for a way to replicate this effect programmatically and this is what I found:
[DllImport("kernel32.dll")]
public static extern bool SetProcessWorkingSetSize(IntPtr proc, int min, int max);

public void ReleaseMemory()
{
    GC.Collect();
    GC.WaitForPendingFinalizers();

    if (Environment.OSVersion.Platform == PlatformID.Win32NT)
    {
        SetProcessWorkingSetSize(System.Diagnostics.Process.GetCurrentProcess().Handle, -1, -1);
    }
}
I don't know the cons of using this, but so far it has managed to save at least 13 MB of memory.
