Bass.net DOWNLOADPROC only records 5 seconds - C#

I'm trying to record an audio stream using Bass.Net, but with the documentation example I'm only able to record 5 seconds. How can I record for longer?
Following is my code:
class Program
{
private static FileStream _fs = null;
private static DOWNLOADPROC _myDownloadProc;
private static byte[] _data;
static void Main(string[] args)
{
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);
_myDownloadProc = new DOWNLOADPROC(MyDownload);
int stream = Bass.BASS_StreamCreateURL("http://m2.fabricahost.com.br:8704/;stream.mp3", 0,
BASSFlag.BASS_STREAM_BLOCK | BASSFlag.BASS_SAMPLE_MONO | BASSFlag.BASS_STREAM_STATUS, _myDownloadProc, IntPtr.Zero);
}
private static void MyDownload(IntPtr buffer, int length, IntPtr user)
{
if (_fs == null)
{
// create the file
_fs = File.OpenWrite("output.mp3");
}
if (buffer == IntPtr.Zero)
{
// finished downloading
_fs.Flush();
_fs.Close();
}
else
{
// increase the data buffer as needed
if (_data == null || _data.Length < length)
_data = new byte[length];
// copy from unmanaged to managed memory
Marshal.Copy(buffer, _data, 0, length);
// write to file
_fs.Write(_data, 0, length);
}
}
}
Thanks

I found out why: the Bass.Net default net buffer size is 5000 ms (5 seconds). I just changed the size using
Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_NET_BUFFER, 10000); to record as much as I wanted.
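For anyone hitting the same limit: the config has to be set before the stream is created, since the network buffer length is applied at creation time. A minimal sketch reusing the question's code (10000 ms is just an example value; pick whatever your recording needs):

```csharp
// Raise the network buffer from its 5000 ms default *before* creating the stream.
Bass.BASS_SetConfig(BASSConfig.BASS_CONFIG_NET_BUFFER, 10000);

int stream = Bass.BASS_StreamCreateURL("http://m2.fabricahost.com.br:8704/;stream.mp3", 0,
    BASSFlag.BASS_STREAM_BLOCK | BASSFlag.BASS_SAMPLE_MONO | BASSFlag.BASS_STREAM_STATUS,
    _myDownloadProc, IntPtr.Zero);
```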

Related

Read asynchronously data from NetworkStream with huge amount of packets

In my application every packet starts with a 2-byte length field. However, after some time the application starts receiving lengths less than zero. In the synchronous client everything works correctly, but it's too slow. I'm 100% sure everything is correct on the server.
Connect:
public void Connect(IPAddress ip, int port)
{
tcpClient.Connect(ip, port);
stream = tcpClient.GetStream();
byte[] len_buffer = new byte[2];
stream.BeginRead(len_buffer, 0, len_buffer.Length, OnDataRead, len_buffer);
}
OnDataRead:
private void OnDataRead(IAsyncResult ar)
{
byte[] len = ar.AsyncState as byte[];
int length = BitConverter.ToInt16(len, 0);
byte[] buffer = new byte[length];
int remaining = length;
int pos = 0;
while (remaining != 0)
{
int add = stream.Read(buffer, pos, remaining);
pos += add;
remaining -= add;
}
Process(buffer);
len = new byte[2];
stream.EndRead(ar);
stream.BeginRead(len, 0, len.Length, OnDataRead, len);
}
As far as I can see, you're mixing synchronous and asynchronous reads. That's bad practice.
What you want is something like:
var header = ReadHeader(); // 2 bytes
var data = ReadData(header.DataSize);
I didn't use NetworkStream, but here's an example of my async SocketReader:
public static class SocketReader
{
// This method will continue reading until count bytes are read (or the socket is closed).
private static void DoReadFromSocket(Socket socket, int bytesRead, int count, byte[] buffer, Action<ArraySegment<byte>> endRead)
{
// Start a BeginReceive.
try
{
socket.BeginReceive(buffer, bytesRead, count - bytesRead, SocketFlags.None, (asyncResult) =>
{
// Get the bytes read.
int read = 0;
try
{
// if this goes wrong, the read remains 0
read = socket.EndReceive(asyncResult);
}
catch (ObjectDisposedException) { }
catch (Exception exception)
{
Trace.TraceError(exception.Message);
}
// if zero bytes received, the socket isn't available anymore.
if (read == 0)
{
endRead(new ArraySegment<byte>(buffer, 0, 0));
return;
}
// increase the bytesRead, (position within the buffer)
bytesRead += read;
// if all bytes are read, call the endRead with the buffer.
if (bytesRead == count)
// All bytes are read. Invoke callback.
endRead(new ArraySegment<byte>(buffer, 0, count));
else
// if not all bytes received, start another BeginReceive.
DoReadFromSocket(socket, bytesRead, count, buffer, endRead);
}, null);
}
catch (Exception exception)
{
Trace.TraceError(exception.Message);
endRead(new ArraySegment<byte>(buffer, 0, 0));
}
}
public static void ReadFromSocket(Socket socket, int count, Action<ArraySegment<byte>> endRead)
{
// read from socket, construct a new buffer.
DoReadFromSocket(socket, 0, count, new byte[count], endRead);
}
public static void ReadFromSocket(Socket socket, int count, byte[] buffer, Action<ArraySegment<byte>> endRead)
{
// If you already have a buffer available, you can pass it in (this way you don't construct new buffers for every receive and can reuse buffers).
// If the buffer is too small, raise an exception; the caller should check the count against the size of the buffer.
if (count > buffer.Length)
throw new ArgumentOutOfRangeException(nameof(count));
DoReadFromSocket(socket, 0, count, buffer, endRead);
}
}
Usage:
SocketReader.ReadFromSocket(socket, 2, (headerData) =>
{
if(headerData.Count == 0)
{
// nothing/closed
return;
}
// Read the length of the data.
int length = BitConverter.ToInt16(headerData.Array, headerData.Offset);
SocketReader.ReadFromSocket(socket, length, (dataBufferSegment) =>
{
if(dataBufferSegment.Count == 0)
{
// nothing/closed
return;
}
Process(dataBufferSegment);
// extra: if you need a binaryreader..
using(var stream = new MemoryStream(dataBufferSegment.Array, dataBufferSegment.Offset, dataBufferSegment.Count))
using(var reader = new BinaryReader(stream))
{
var whatever = reader.ReadInt32();
}
});
});
You can optimize the receive buffer by passing a buffer (look at the overloads).
Continuous receiving (reusing the receive buffer):
public class PacketReader
{
private byte[] _receiveBuffer = new byte[2];
// This will run until the socket is closed.
public void StartReceiving(Socket socket, Action<ArraySegment<byte>> process)
{
SocketReader.ReadFromSocket(socket, 2, _receiveBuffer, (headerData) =>
{
if(headerData.Count == 0)
{
// nothing/closed
return;
}
// Read the length of the data.
int length = BitConverter.ToInt16(headerData.Array, headerData.Offset);
// if the receive buffer is too small, reallocate it.
if(_receiveBuffer.Length < length)
_receiveBuffer = new byte[length];
SocketReader.ReadFromSocket(socket, length, _receiveBuffer, (dataBufferSegment) =>
{
if(dataBufferSegment.Count == 0)
{
// nothing/closed
return;
}
try
{
process(dataBufferSegment);
}
catch { }
StartReceiving(socket, process);
});
});
}
}
Usage:
private PacketReader _reader;
public void Start()
{
_reader = new PacketReader();
_reader.StartReceiving(socket, HandlePacket);
}
private void HandlePacket(ArraySegment<byte> packet)
{
// do stuff.....
}

How to use SharpPcap to dump packets fast

I'm using SharpPcap to dump packets to a .pcap file. My problem is that it works too slowly to capture any real amount of network traffic, and I eventually run out of memory.
How can I speed up the file-writing process?
Here is the code I'm using:
private void WriteToPCAPThread(object o)
{
this.WritePcapThreadDone.Reset();
string captureFileName = (string)o;
CaptureFileWriterDevice captureFileWriter = new CaptureFileWriterDevice(this.device, captureFileName);
captureFileWriter.Open();
RawCapture packet;
bool success;
while (this.capturing)
{
success = this.captures.TryDequeue(out packet);
if (success)
{
captureFileWriter.Write(packet);
}
else
{
// Queue emptied
Thread.Sleep(50);
}
}
}
Thanks in advance for any ideas.
I ended up writing my own stream writer. Now I get 100% performance out of my SSD.
public class PcapStream
{
private Stream BaseStream;
public PcapStream(Stream BaseStream)
{
this.BaseStream=BaseStream;
}
public void Write(RawCapture packet)
{
byte[] arr = new byte[packet.Data.Length + 16];
byte[] sec = BitConverter.GetBytes((uint)packet.Timeval.Seconds);
byte[] msec = BitConverter.GetBytes((uint)packet.Timeval.MicroSeconds);
byte[] incllen = BitConverter.GetBytes((uint)packet.Data.Length);
byte[] origlen = BitConverter.GetBytes((uint)packet.Data.Length);
Array.Copy(sec, arr, sec.Length);
int offset = sec.Length;
Array.Copy(msec, 0, arr, offset, msec.Length);
offset += msec.Length;
Array.Copy(incllen, 0, arr, offset, incllen.Length);
offset += incllen.Length;
Array.Copy(origlen, 0, arr, offset, origlen.Length);
offset += origlen.Length;
Array.Copy(packet.Data, 0, arr, offset, packet.Data.Length);
BaseStream.Write(arr, 0, arr.Length);
}
}
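Note that a valid .pcap file also starts with a 24-byte global header, which the Write method above does not emit. A self-contained sketch of producing it (field layout per the libpcap file format; LINKTYPE_ETHERNET = 1 is an assumption about the capture device):

```csharp
using System;
using System.IO;

static class PcapHeader
{
    // Writes the 24-byte pcap global file header (libpcap format) once,
    // at the very start of the output stream, before any packet records.
    public static void WriteGlobalHeader(Stream s, uint linkType = 1)
    {
        byte[] hdr = new byte[24];
        BitConverter.GetBytes(0xa1b2c3d4u).CopyTo(hdr, 0); // magic number
        BitConverter.GetBytes((ushort)2).CopyTo(hdr, 4);   // version major
        BitConverter.GetBytes((ushort)4).CopyTo(hdr, 6);   // version minor
        // offsets 8 and 12 (thiszone, sigfigs) stay zero
        BitConverter.GetBytes(65535u).CopyTo(hdr, 16);     // snaplen
        BitConverter.GetBytes(linkType).CopyTo(hdr, 20);   // link-layer type
        s.Write(hdr, 0, hdr.Length);
    }
}
```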

C# Socket ReceiveAsync

I am used to synchronous sockets and had a few headaches getting to the point where I am now, especially with Socket.Receive(..) not always receiving all bytes.
Here is the code I used to use:
public byte[] Receive(int size)
{
var buffer = new byte[size];
var r = 0;
do
{
// ReSharper disable once InconsistentlySynchronizedField
var c = _clientSocket.Receive(buffer, r, size - r, SocketFlags.None);
if (c == 0)
{
throw new SocketExtendedException();
}
r += c;
} while (r != buffer.Length);
return buffer;
}
Now I have started to use sockets on Windows Phone, but .Receive(..) is not available there. I managed to get Socket.ReceiveAsync(..) working, but I am concerned (no problems have happened so far). Here is my new code. I have not implemented a check that all bytes have been received, nor do I know whether I have to with the following code:
private byte[] ReadBySize(int size = 4)
{
var readEvent = new AutoResetEvent(false);
var buffer = new byte[size];
var recieveArgs = new SocketAsyncEventArgs()
{
UserToken = readEvent
};
recieveArgs.SetBuffer(buffer, 0, size);
recieveArgs.Completed += recieveArgs_Completed;
_connecter.ReceiveAsync(recieveArgs);
readEvent.WaitOne();
if (recieveArgs.BytesTransferred == 0)
{
if (recieveArgs.SocketError != SocketError.Success)
throw new SocketException((int)recieveArgs.SocketError);
throw new CommunicationException();
}
return buffer;
}
void recieveArgs_Completed(object sender, SocketAsyncEventArgs e)
{
var are = (AutoResetEvent)e.UserToken;
are.Set();
}
This is my first use of ReceiveAsync; can someone point out anything I might have done wrong or need to change?
OK, I went and took a large buffer and sent it in batches with a sleep interval in between to replicate 'not all bytes received'. My code above indeed does not receive all bytes. For those who also use ReceiveAsync(..), here is my code that works:
private byte[] ReadBySize(int size = 4)
{
var readEvent = new AutoResetEvent(false);
var buffer = new byte[size]; //Receive buffer
var totalRecieved = 0;
do
{
var recieveArgs = new SocketAsyncEventArgs()
{
UserToken = readEvent
};
recieveArgs.SetBuffer(buffer, totalRecieved, size - totalRecieved);//Receive into the buffer at the current offset, for the bytes still outstanding
recieveArgs.Completed += recieveArgs_Completed;
_connecter.ReceiveAsync(recieveArgs);
readEvent.WaitOne();//Wait for the receive to complete
if (recieveArgs.BytesTransferred == 0)//If no bytes are received then there is an error
{
if (recieveArgs.SocketError != SocketError.Success)
throw new ReadException(ReadExceptionCode.UnexpectedDisconnect,"Unexpected Disconnect");
throw new ReadException(ReadExceptionCode.DisconnectGracefully);
}
totalRecieved += recieveArgs.BytesTransferred;
} while (totalRecieved != size);//Repeat until all bytes have been received
return buffer;
}
void recieveArgs_Completed(object sender, SocketAsyncEventArgs e)
{
var are = (AutoResetEvent)e.UserToken;
are.Set();
}
The way I work with my socket applications is to send a buffer that consists of some variables:
[0] -> 0, 1 or 2: 0 is keep-alive, 1 means there is data, 2 means a type of error occurred
[1,2,3,4] -> the size of the actual buffer I am sending
[x (the size from 1,2,3,4)] -> the actual 'serialized' data buffer
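That framing can be sketched as a small helper (the layout follows the description above; the class and method names are mine):

```csharp
using System;

static class Framing
{
    // Builds a frame: [0] = status byte (0 keep-alive, 1 data, 2 error),
    // [1..4] = payload length as a little-endian Int32, then the payload itself.
    public static byte[] Build(byte status, byte[] payload)
    {
        byte[] frame = new byte[5 + payload.Length];
        frame[0] = status;
        BitConverter.GetBytes(payload.Length).CopyTo(frame, 1);
        payload.CopyTo(frame, 5);
        return frame;
    }
}
```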
You could create a socket extension like:
public static Task<int> ReceiveAsync(this Socket socket,
byte[] buffer, int offset, int size, SocketFlags socketFlags)
{
if (socket == null) throw new ArgumentNullException(nameof(socket));
var tcs = new TaskCompletionSource<int>();
socket.BeginReceive(buffer, offset, size, socketFlags, ar =>
{
try { tcs.TrySetResult(socket.EndReceive(ar)); }
catch (Exception e) { tcs.TrySetException(e); }
}, state: null);
return tcs.Task;
}
And then a method to read the size you want like this:
public static async Task<byte[]> ReadFixed(Socket socket, int bufferSize)
{
byte[] ret = new byte[bufferSize];
for (int read = 0; read < bufferSize; )
{
int received = await socket.ReceiveAsync(ret, read, ret.Length - read, SocketFlags.None);
if (received == 0) // zero means the remote side closed; avoid looping forever
throw new SocketException((int)SocketError.ConnectionReset);
read += received;
}
return ret;
}

Multithreading file compress

I've just started to work with threads.
I want to write a simple file compressor. It should create two background threads: one for reading and one for writing. The first should read the file in small chunks and put them into a Queue<KeyValuePair<int, byte[]>>, where the int is the chunkId. The second thread should dequeue the chunks and write them in order (using chunkId) to the output stream (a file which this thread creates at the start).
I did this, but I can't understand why, when my program ends and I open my gzipped file, I see that my chunks are mixed up and the file is not in its original order.
public static class Reader
{
private static readonly object Locker = new object();
private const int ChunkSize = 1024*1024;
private static readonly int MaxThreads;
private static readonly Queue<KeyValuePair<int, byte[]>> ChunksQueue;
private static int _chunksComplete;
static Reader()
{
MaxThreads = Environment.ProcessorCount;
ChunksQueue = new Queue<KeyValuePair<int,byte[]>>(MaxThreads);
}
public static void Read(string filename)
{
_chunksComplete = 0;
var tRead = new Thread(Reading) { IsBackground = true };
var tWrite = new Thread(Writing) { IsBackground = true };
tRead.Start(filename);
tWrite.Start(filename);
tRead.Join();
tWrite.Join();
Console.WriteLine("Finished");
}
private static void Writing(object threadContext)
{
var filename = (string) threadContext;
using (var s = File.Create(filename + ".gz"))
{
while (true)
{
var dataPair = DequeueSafe();
if (dataPair.Value == null)
return;
while (dataPair.Key != _chunksComplete)
{
Thread.Sleep(1);
}
Console.WriteLine("write chunk {0}", dataPair.Key);
using (var gz = new GZipStream(s, CompressionMode.Compress, true))
{
gz.Write(dataPair.Value, 0, dataPair.Value.Length);
}
_chunksComplete++;
}
}
}
private static void Reading(object threadContext)
{
var filename = (string) threadContext;
using (var s = File.OpenRead(filename))
{
var counter = 0;
var buffer = new byte[ChunkSize];
while (s.Read(buffer, 0, buffer.Length) != 0)
{
while (ChunksQueue.Count == MaxThreads)
{
Thread.Sleep(1);
}
Console.WriteLine("read chunk {0}", counter);
var dataPair = new KeyValuePair<int, byte[]>(counter, buffer);
EnqueueSafe(dataPair);
counter++;
}
EnqueueSafe(new KeyValuePair<int, byte[]>(0, null));
}
}
private static void EnqueueSafe(KeyValuePair<int, byte[]> dataPair)
{
lock (ChunksQueue)
{
ChunksQueue.Enqueue(dataPair);
}
}
private static KeyValuePair<int, byte[]> DequeueSafe()
{
while (true)
{
lock (ChunksQueue)
{
if (ChunksQueue.Count > 0)
{
return ChunksQueue.Dequeue();
}
}
Thread.Sleep(1);
}
}
}
UPD:
I can use only .NET 3.5
Stream.Read() returns the actual number of bytes it read. Use it to limit the size of the chunk passed to the writer. And since there is concurrent reading and writing involved, you'll need more than one buffer.
Try 4096 as the chunk size.
Reader:
var buffer = new byte[ChunkSize];
int bytesRead = s.Read(buffer, 0, buffer.Length);
while (bytesRead != 0)
{
...
var dataPair = new KeyValuePair<int, byte[]>(bytesRead, buffer);
buffer = new byte[ChunkSize];
bytesRead = s.Read(buffer, 0, buffer.Length);
}
Writer:
gz.Write(dataPair.Value, 0, dataPair.Key)
PS: The performance can be improved with adding a pool of free data buffers instead of allocating new each time and using events (e.g. ManualResetEvent) to signal queue is empty, queue is full instead of using Thread.Sleep().
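The signalling suggested in the PS can also be done without explicit event objects, using Monitor.Wait/Pulse (available in .NET 3.5). A sketch of a bounded blocking queue that would replace the Thread.Sleep polling; this class is mine, not part of either answer:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// A minimal bounded blocking queue: producers block when full,
// consumers block when empty, with no sleep-and-poll loops.
class BoundedQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly int _capacity;

    public BoundedQueue(int capacity) { _capacity = capacity; }

    public void Enqueue(T item)
    {
        lock (_queue)
        {
            while (_queue.Count >= _capacity)
                Monitor.Wait(_queue);      // wait until there is room
            _queue.Enqueue(item);
            Monitor.PulseAll(_queue);      // wake any waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (_queue)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_queue);      // wait until there is an item
            T item = _queue.Dequeue();
            Monitor.PulseAll(_queue);      // wake any waiting producer
            return item;
        }
    }
}
```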
While alexm's answer brings up the very important point that Stream.Read may fill the buffer with fewer bytes than you requested, the main problem is that you have only one byte[] that you keep reusing.
When your reading loop goes to read the second chunk, it overwrites the byte[] that is sitting inside the dataPair you already passed to the queue. You must allocate buffer = new byte[ChunkSize]; inside the loop to solve this. You must also record how many bytes were read, and write only that many bytes.
You don't need to keep the counter in the pair, as a Queue maintains order; use the int in the pair to store the number of bytes read, as in alexm's example.

How to alloc more than MaxInteger bytes of memory in C#

I wish to allocate more than MaxInteger bytes of memory.
Marshal.AllocHGlobal() expects an integer, so I cannot use it. Is there another way?
Update
I changed the platform to x64, and then I ran the code below.
myp appears to have the right length: about 3.0 GB. But 'buffer' stubbornly maxes out at 2.1 GB.
Any idea why?
var fileStream = new FileStream(
"C:\\big.BC2",
FileMode.Open,
FileAccess.Read,
FileShare.Read,
16 * 1024,
FileOptions.SequentialScan);
Int64 length = fileStream.Length;
Console.WriteLine(length);
Console.WriteLine(Int64.MaxValue);
IntPtr myp = new IntPtr(length);
//IntPtr buffer = Marshal.AllocHGlobal(myp);
IntPtr buffer = VirtualAllocEx(
Process.GetCurrentProcess().Handle,
IntPtr.Zero,
new IntPtr(length),
AllocationType.Commit | AllocationType.Reserve,
MemoryProtection.ReadWrite);
unsafe
{
byte* pBytes = (byte*)myp.ToPointer();
var memoryStream = new UnmanagedMemoryStream(pBytes, (long)length, (long)length, FileAccess.ReadWrite);
fileStream.CopyTo(memoryStream);
}
That's not possible on current mainstream hardware. Memory buffers are restricted to 2 gigabytes, even on 64-bit machines. Indexed addressing of the buffer is still done with a 32-bit signed offset. It is technically possible to generate machine code that can index more, using a register to store the offset, but that's expensive and slows down all array indexing, even for the ones that aren't larger than 2 GB.
Furthermore, you can't get a buffer larger than about 650MB out of the address space available to a 32-bit process. There aren't enough contiguous memory pages available because virtual memory contains both code and data at various addresses.
Companies like IBM and Sun sell hardware that can do this.
I have been involved in one of the other questions you asked, and I honestly think you are fighting a losing battle here. You should explore other ways of processing this data besides reading everything into memory.
If I understand correctly, you have multiple threads that process the data concurrently, and that is why you do not want to work off the file directly: because of I/O contention, I assume.
Have you considered, or would it be possible, to read a block of data into memory, have the threads process that block, and then read the next block? This way, at any one time, you never have more than one block in memory, yet all threads can access it. This is not optimal, but I put it out there as a starting point. If this is feasible, options to optimize it can be explored.
Update: Example using platform invoke to allocate unmanaged memory and use it from .NET.
Since you are so certain you need to load this much data into memory, I thought I would write a small test application to verify that it will work. For this you will need the following:
Compile with the /unsafe compiler option
If you want to allocate more than 2 GB you will also need to switch your target platform to x64
Point 2 above is a little more complicated: on a 64-bit OS you could still target the x86 platform and get access to the full 4 GB address space. That would require you to use a tool like EDITBIN.EXE to set the LargeAddressAware flag in the PE header.
This code uses VirtualAllocEx to allocate unmanaged memory and UnmanagedMemoryStream to access the unmanaged memory using the .NET stream metaphor. Note that this code has only had some very basic quick tests, and only on a 64-bit environment with 4 GB of RAM. Most importantly, I only went up to about 2.6 GB of memory utilization for the process.
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.ComponentModel;
namespace MemoryMappedFileTests
{
class Program
{
static void Main(string[] args)
{
IntPtr ptr = IntPtr.Zero;
try
{
// Allocate and Commit the memory directly.
ptr = VirtualAllocEx(
Process.GetCurrentProcess().Handle,
IntPtr.Zero,
new IntPtr(0xD0000000L),
AllocationType.Commit | AllocationType.Reserve,
MemoryProtection.ReadWrite);
if (ptr == IntPtr.Zero)
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
// Query some information about the allocation, used for testing.
MEMORY_BASIC_INFORMATION mbi = new MEMORY_BASIC_INFORMATION();
IntPtr result = VirtualQueryEx(
Process.GetCurrentProcess().Handle,
ptr,
out mbi,
new IntPtr(Marshal.SizeOf(mbi)));
if (result == IntPtr.Zero)
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
// Use unsafe code to get a pointer to the unmanaged memory.
// This requires compiling with /unsafe option.
unsafe
{
// Pointer to the allocated memory
byte* pBytes = (byte*)ptr.ToPointer();
// Create Read/Write stream to access the memory.
UnmanagedMemoryStream stm = new UnmanagedMemoryStream(
pBytes,
mbi.RegionSize.ToInt64(),
mbi.RegionSize.ToInt64(),
FileAccess.ReadWrite);
// Create a StreamWriter to write to the unmanaged memory.
StreamWriter sw = new StreamWriter(stm);
sw.Write("Everything seems to be working!\r\n");
sw.Flush();
// Reset the stream position and create a reader to check that the
// data was written correctly.
stm.Position = 0;
StreamReader rd = new StreamReader(stm);
Console.WriteLine(rd.ReadLine());
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
finally
{
if (ptr != IntPtr.Zero)
{
VirtualFreeEx(
Process.GetCurrentProcess().Handle,
ptr,
IntPtr.Zero,
FreeType.Release);
}
}
Console.ReadKey();
}
[DllImport("kernel32.dll", SetLastError = true, ExactSpelling = true)]
static extern IntPtr VirtualAllocEx(
IntPtr hProcess,
IntPtr lpAddress,
IntPtr dwSize,
AllocationType dwAllocationType,
MemoryProtection flProtect);
[DllImport("kernel32.dll", SetLastError = true, ExactSpelling = true)]
static extern bool VirtualFreeEx(
IntPtr hProcess,
IntPtr lpAddress,
IntPtr dwSize,
FreeType dwFreeType);
[DllImport("kernel32.dll", SetLastError = true, ExactSpelling = true)]
static extern IntPtr VirtualQueryEx(
IntPtr hProcess,
IntPtr lpAddress,
out MEMORY_BASIC_INFORMATION lpBuffer,
IntPtr dwLength);
[StructLayout(LayoutKind.Sequential)]
public struct MEMORY_BASIC_INFORMATION
{
public IntPtr BaseAddress;
public IntPtr AllocationBase;
public int AllocationProtect;
public IntPtr RegionSize;
public int State;
public int Protect;
public int Type;
}
[Flags]
public enum AllocationType
{
Commit = 0x1000,
Reserve = 0x2000,
Decommit = 0x4000,
Release = 0x8000,
Reset = 0x80000,
Physical = 0x400000,
TopDown = 0x100000,
WriteWatch = 0x200000,
LargePages = 0x20000000
}
[Flags]
public enum MemoryProtection
{
Execute = 0x10,
ExecuteRead = 0x20,
ExecuteReadWrite = 0x40,
ExecuteWriteCopy = 0x80,
NoAccess = 0x01,
ReadOnly = 0x02,
ReadWrite = 0x04,
WriteCopy = 0x08,
GuardModifierflag = 0x100,
NoCacheModifierflag = 0x200,
WriteCombineModifierflag = 0x400
}
[Flags]
public enum FreeType
{
Decommit = 0x4000,
Release = 0x8000
}
}
}
This is not possible from managed code without a P/Invoke call, and for good reason: allocating that much memory is usually a sign of a bad solution that needs revisiting.
Can you tell us why you think you need this much memory?
Use Marshal.AllocHGlobal(IntPtr). This overload treats the value of the IntPtr as the amount of memory to allocate and IntPtr can hold a 64 bit value.
From a comment:
How do I create a second binaryreader that can read the same memorystream independantly?
var fileStream = new FileStream("C:\\big.BC2",
FileMode.Open,
FileAccess.Read,
FileShare.Read,
16 * 1024,
FileOptions.SequentialScan);
Int64 length = fileStream.Length;
IntPtr buffer = Marshal.AllocHGlobal(new IntPtr(length));
unsafe
{
byte* pBytes = (byte*)buffer.ToPointer();
var memoryStream = new UnmanagedMemoryStream(pBytes, (long)length, (long)length, FileAccess.ReadWrite);
var binaryReader = new BinaryReader(memoryStream);
fileStream.CopyTo(memoryStream);
memoryStream.Seek(0, SeekOrigin.Begin);
// Create a second UnmanagedMemoryStream on the _same_ memory buffer
var memoryStream2 = new UnmanagedMemoryStream(pBytes, (long)length, (long)length, FileAccess.Read);
var binaryReader2 = new BinaryReader(memoryStream2);
}
If you can't make it work the way you want it to directly, create a class to provide the type of behaviour you want. So, to use big arrays:
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
namespace BigBuffer
{
class Storage
{
public Storage (string filename)
{
m_buffers = new SortedDictionary<int, byte []> ();
m_file = new FileStream (filename, FileMode.Open, FileAccess.Read, FileShare.Read);
}
public byte [] GetBuffer (long address)
{
int
key = GetPageIndex (address);
byte []
buffer;
if (!m_buffers.TryGetValue (key, out buffer))
{
System.Diagnostics.Trace.WriteLine ("Allocating a new array at " + key);
buffer = new byte [1 << 24];
m_buffers [key] = buffer;
m_file.Seek ((long) key << 24, SeekOrigin.Begin); // seek to the start of the page, not to the raw address
m_file.Read (buffer, 0, buffer.Length);
}
return buffer;
}
public void FillBuffer (byte [] destination_buffer, int offset, int count, long position)
{
do
{
byte []
source_buffer = GetBuffer (position);
int
start = GetPageOffset (position),
length = Math.Min (count, (1 << 24) - start);
Array.Copy (source_buffer, start, destination_buffer, offset, length);
position += length;
offset += length;
count -= length;
} while (count > 0);
}
public int GetPageIndex (long address)
{
return (int) (address >> 24);
}
public int GetPageOffset (long address)
{
return (int) (address & ((1 << 24) - 1));
}
public long Length
{
get { return m_file.Length; }
}
public int PageSize
{
get { return 1 << 24; }
}
FileStream
m_file;
SortedDictionary<int, byte []>
m_buffers;
}
class BigStream : Stream
{
public BigStream (Storage source)
{
m_source = source;
m_position = 0;
}
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return true; }
}
public override bool CanTimeout
{
get { return false; }
}
public override bool CanWrite
{
get { return false; }
}
public override long Length
{
get { return m_source.Length; }
}
public override long Position
{
get { return m_position; }
set { m_position = value; }
}
public override void Flush ()
{
}
public override long Seek (long offset, SeekOrigin origin)
{
switch (origin)
{
case SeekOrigin.Begin:
m_position = offset;
break;
case SeekOrigin.Current:
m_position += offset;
break;
case SeekOrigin.End:
m_position = Length + offset;
break;
}
return m_position;
}
public override void SetLength (long value)
{
}
public override int Read (byte [] buffer, int offset, int count)
{
int
bytes_read = (int) (m_position + count > Length ? Length - m_position : count);
m_source.FillBuffer (buffer, offset, bytes_read, m_position);
m_position += bytes_read;
return bytes_read;
}
public override void Write(byte[] buffer, int offset, int count)
{
}
Storage
m_source;
long
m_position;
}
class IntBigArray
{
public IntBigArray (Storage storage)
{
m_storage = storage;
m_current_page = -1;
}
public int this [long index]
{
get
{
int
value = 0;
index <<= 2;
for (int offset = 0 ; offset < 32 ; offset += 8, ++index)
{
int
page = m_storage.GetPageIndex (index);
if (page != m_current_page)
{
m_current_page = page;
m_array = m_storage.GetBuffer (m_current_page);
}
value |= (int) m_array [m_storage.GetPageOffset (index)] << offset;
}
return value;
}
}
Storage
m_storage;
int
m_current_page;
byte []
m_array;
}
class Program
{
static void Main (string [] args)
{
Storage
storage = new Storage (@"<some file>");
BigStream
stream = new BigStream (storage);
StreamReader
reader = new StreamReader (stream);
string
line = reader.ReadLine ();
IntBigArray
array = new IntBigArray (storage);
int
value = array [0];
BinaryReader
binary = new BinaryReader (stream);
binary.BaseStream.Seek (0, SeekOrigin.Begin);
int
another_value = binary.ReadInt32 ();
}
}
}
I split the problem into three classes:
Storage - where the actual data is stored, uses a paged system
BigStream - a stream class that uses the Storage class for its data source
IntBigArray - a wrapper around the Storage type that provides an int array interface
The above can be improved significantly but it should give you ideas about how to solve your problems.
