I have a MemoryStream which is created from a file at runtime. The MemoryStream is then edited and some bytes are removed. Now I have to maintain a constant file size, so I need to fill the MemoryStream back up with 0xFF bytes.
What is the fastest way to do this?
I know I can always loop over the stream and append 0xFF one byte at a time, but I'd like a faster, more efficient way to do it.
If you have many bytes to write to the stream, it may be more efficient to write an array rather than each byte individually:
static void Fill(this Stream stream, byte value, int count)
{
    // Fill a small buffer once, then reuse it for every write.
    var buffer = new byte[64];
    for (int i = 0; i < buffer.Length; i++)
    {
        buffer[i] = value;
    }
    // Write whole buffers while more than one buffer's worth remains...
    while (count > buffer.Length)
    {
        stream.Write(buffer, 0, buffer.Length);
        count -= buffer.Length;
    }
    // ...then write the remainder.
    stream.Write(buffer, 0, count);
}
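For the constant-file-size scenario in the question, here is a minimal self-contained sketch of how this extension method might be used; the 1024-byte original size and 700-byte edited content are made-up values for illustration:

```csharp
using System;
using System.IO;

static class StreamExtensions
{
    // Same Fill as above, repeated so the sketch compiles on its own.
    public static void Fill(this Stream stream, byte value, int count)
    {
        var buffer = new byte[64];
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = value;
        while (count > buffer.Length)
        {
            stream.Write(buffer, 0, buffer.Length);
            count -= buffer.Length;
        }
        stream.Write(buffer, 0, count);
    }
}

class FillDemo
{
    static void Main()
    {
        const int originalSize = 1024;      // constant file size to maintain
        var ms = new MemoryStream();
        ms.Write(new byte[700], 0, 700);    // stand-in for the edited content
        // Pad with 0xFF until the stream is back to its original length.
        ms.Fill(0xFF, originalSize - (int)ms.Length);
        Console.WriteLine(ms.Length);       // 1024
    }
}
```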
I am trying to upload large files to a 3rd-party service in chunks, but I have a problem with the last chunk. The last chunk will always be smaller than 5 MB, yet all of my chunks, including the last, end up the same size: 5 MB.
My code:
int chunkSize = 1024 * 1024 * 5;
using (Stream streamx = new FileStream(file.Path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[chunkSize];
    int bytesRead = 0;
    long bytesToRead = streamx.Length;
    while (bytesToRead > 0)
    {
        int n = streamx.Read(buffer, 0, chunkSize);
        if (n == 0) break;
        // do work on buffer...
        // uploading chunk ....
        var partRequest = HttpHelpers.InvokeHttpRequestStream
        (
            new Uri(endpointUri + "?partNumber=" + i + "&uploadId=" + UploadId),
            "PUT",
            partHeaders,
            buffer
        ); // upload buffer
        bytesRead += n;
        bytesToRead -= n;
    }
    streamx.Dispose(); // redundant: the using block already disposes the stream
}
The buffer is then uploaded to the 3rd-party service.
Solved. Someone posted updated code in a comment but deleted it a few seconds later; it contained the solution. Right after the if (n == 0) break; check I added the code below, which resizes the last chunk to the correct size:
// Let's resize the last incomplete buffer
if (n != buffer.Length)
    Array.Resize(ref buffer, n);
Thank you all. Here is the full working code:
int chunkSize = 1024 * 1024 * 5;
using (Stream streamx = new FileStream(file.Path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[chunkSize];
    int bytesRead = 0;
    long bytesToRead = streamx.Length;
    while (bytesToRead > 0)
    {
        int n = streamx.Read(buffer, 0, chunkSize);
        if (n == 0) break;
        // Let's resize the last incomplete buffer
        if (n != buffer.Length)
            Array.Resize(ref buffer, n);
        // do work on buffer...
        // uploading chunk ....
        var partRequest = HttpHelpers.InvokeHttpRequestStream
        (
            new Uri(endpointUri + "?partNumber=" + i + "&uploadId=" + UploadId),
            "PUT",
            partHeaders,
            buffer
        ); // upload buffer
        bytesRead += n;
        bytesToRead -= n;
    }
}
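To see why the resize only affects the final chunk, here is the same read-and-resize loop scaled down and run against an in-memory stream; the HTTP upload is replaced by recording each chunk's size, and the 13-byte input and 5-byte chunk size are stand-ins for the real 5 MB values:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ChunkDemo
{
    // Reads `source` in chunks, shrinking only the final partial buffer,
    // and returns the size of each chunk that would have been uploaded.
    public static List<int> ChunkSizes(Stream source, int chunkSize)
    {
        var sizes = new List<int>();
        byte[] buffer = new byte[chunkSize];
        int n;
        while ((n = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (n != buffer.Length)      // last, incomplete chunk
                Array.Resize(ref buffer, n);
            sizes.Add(buffer.Length);    // the upload would happen here
        }
        return sizes;
    }

    static void Main()
    {
        // 13 bytes with a chunk size of 5 -> chunks of 5, 5 and 3 bytes.
        var sizes = ChunkSizes(new MemoryStream(new byte[13]), 5);
        Console.WriteLine(string.Join(",", sizes)); // 5,5,3
    }
}
```

Note that after the resize the loop exits on the next Read, so the shrunken buffer is never reused for a full-size chunk.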
Send an array of bytes
public static void SendFile(string path)
{
    byte[] data = File.ReadAllBytes(path);
    stream.Write(data, 0, data.Length);
}
Getting the byte array
List<byte> list = new List<byte>();
byte[] data = new byte[64];
int bytes = 0;
do
{
    bytes = stream.Read(data, 0, data.Length);
    for (int i = 0; i < data.Length; i++)
    {
        list.Add(data[i]);
    }
} while (stream.DataAvailable);
return list.ToArray();
Creating a file
byte[] file = ReciveFile().ToArray();
File.WriteAllBytes(message, file);
Extra characters like these appear in the received file (before/after screenshots not shown).
How do I fix this? Thanks
You need to check how many bytes were read. Otherwise the end of your buffer may contain garbage if the file's length isn't an exact multiple of 64.
do
{
    bytes = stream.Read(data, 0, data.Length);
    for (int i = 0; i < bytes; i++) // use bytes, not data.Length
    {
        list.Add(data[i]);
    }
} while (bytes > 0);
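A self-contained sketch of the corrected loop; the 100-byte payload and 64-byte buffer are arbitrary values chosen so that the second Read only partially fills the buffer:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class ReadDemo
{
    // Reads a stream to the end through a small buffer, copying only the
    // `bytes` actually returned by each Read call.
    public static byte[] ReadAll(Stream stream)
    {
        var list = new List<byte>();
        byte[] data = new byte[64];
        int bytes;
        do
        {
            bytes = stream.Read(data, 0, data.Length);
            for (int i = 0; i < bytes; i++)   // bytes, not data.Length
                list.Add(data[i]);
        } while (bytes > 0);
        return list.ToArray();
    }

    static void Main()
    {
        // 100 bytes through a 64-byte buffer: the second Read fills only 36.
        byte[] result = ReadAll(new MemoryStream(new byte[100]));
        Console.WriteLine(result.Length); // 100
    }
}
```

With the original data.Length version, the same input would have produced 128 bytes, the last 28 of them stale buffer contents.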
I'm looking for an efficient way of reading multiple arrays of a specific type from a stream.
So far I'm using a class like the one below to read single values such as int, byte, sbyte, uint, short, ushort, ...
but also for arrays like: ushort[], short[], uint[], int[], byte[], sbyte[], ...
public byte[] ReadBytes(int count)
{
    byte[] buffer = new byte[count];
    int retValue = _Stream.Read(buffer, 0, count);
    return buffer;
}

public ushort ReadUshort()
{
    byte[] b = ReadBytes(2);
    if (BitConverter.IsLittleEndian) // for motorola (big endian)
        Array.Reverse(b);
    return BitConverter.ToUInt16(b, 0);
}

public ushort[] ReadUshorts(int count)
{
    ushort[] data = new ushort[count];
    for (int i = 0; i < count; i++)
    {
        data[i] = ReadUshort();
    }
    return data;
}

public uint ReadUint()
{
    byte[] b = ReadBytes(4);
    if (BitConverter.IsLittleEndian) // for motorola (big endian)
        Array.Reverse(b);
    return BitConverter.ToUInt32(b, 0);
}

public uint[] ReadUints(int count)
{
    // ...
}
Is there a more efficient way to read the arrays than the snippet I've shared here?
I have a feeling that a for-loop issuing a single Read call per element is not very efficient. The problem is that I need to check IsLittleEndian each time and reverse if needed, so I can't simply read many bytes at once. I'm not sure whether this could be rewritten more efficiently.
You could write a generic method, and use Buffer.BlockCopy to copy the data into the target array:
public static T[] ReadElements<T>(Stream input, int count)
{
    int bytesPerElement = Marshal.SizeOf(typeof(T));
    byte[] buffer = new byte[bytesPerElement * count];
    int remaining = buffer.Length;
    int offset = 0;
    while (remaining > 0)
    {
        int read = input.Read(buffer, offset, remaining);
        if (read == 0) throw new EndOfStreamException();
        offset += read;
        remaining -= read;
    }
    if (BitConverter.IsLittleEndian)
    {
        for (int i = 0; i < buffer.Length; i += bytesPerElement)
        {
            Array.Reverse(buffer, i, bytesPerElement);
        }
    }
    T[] result = new T[count];
    Buffer.BlockCopy(buffer, 0, result, 0, buffer.Length);
    return result;
}
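A quick usage sketch, assuming the input data is big-endian as in the question; the sample bytes are made up for illustration:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class ReadElementsDemo
{
    // Same generic reader as above, repeated so the sketch compiles alone.
    public static T[] ReadElements<T>(Stream input, int count)
    {
        int bytesPerElement = Marshal.SizeOf(typeof(T));
        byte[] buffer = new byte[bytesPerElement * count];
        int remaining = buffer.Length;
        int offset = 0;
        while (remaining > 0)
        {
            int read = input.Read(buffer, offset, remaining);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
            remaining -= read;
        }
        if (BitConverter.IsLittleEndian)
        {
            // Input is big-endian, so swap each element's bytes in place.
            for (int i = 0; i < buffer.Length; i += bytesPerElement)
                Array.Reverse(buffer, i, bytesPerElement);
        }
        T[] result = new T[count];
        Buffer.BlockCopy(buffer, 0, result, 0, buffer.Length);
        return result;
    }

    static void Main()
    {
        // Two big-endian ushorts: 0x0102 and 0x0304.
        var input = new MemoryStream(new byte[] { 0x01, 0x02, 0x03, 0x04 });
        ushort[] values = ReadElements<ushort>(input, 2);
        Console.WriteLine($"{values[0]:X4} {values[1]:X4}"); // 0102 0304
    }
}
```

One caveat: Buffer.BlockCopy only works for arrays of primitive types, so this approach covers the byte/short/int/long families but not arbitrary structs.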
Hi, I currently use the following code to split a file into multiple 2 MB parts.
const int BUFFER_SIZE = 20 * 1024;
const int chunkSize = 2 * 1024 * 1024; // 2 MB per part
byte[] buffer = new byte[BUFFER_SIZE];
using (Stream input = File.OpenRead(inputFile)) {
    int index = 0;
    while (input.Position < input.Length) {
        string path = index + ".part";
        using (Stream output = File.Create(path)) {
            int remaining = chunkSize, bytesRead;
            while (remaining > 0 && (bytesRead = input.Read(buffer, 0,
                   Math.Min(remaining, BUFFER_SIZE))) > 0) {
                output.Write(buffer, 0, bytesRead);
                remaining -= bytesRead;
            }
        }
        index++;
    }
}
This works perfectly and will split a 10 MB file into 5 × 2 MB files: 0.part, 1.part, etc.
I would like to know how to regenerate just part 3, given that chunkSize always stays at 2 MB. I could achieve this by wrapping the body in an if/else on index, but with a 1 GB file that still loops over the whole file. I'd like to understand this function better: how can I extract just the part of the file I need?
The input.Position property is settable. If you know you need part 3, set Position to 2 * chunkSize to skip the first two chunks, then run the innermost while loop once to copy from that position to the output:
int desiredChunkNumber = 3;
using (Stream input = File.OpenRead(inputFile)) {
    input.Position = (desiredChunkNumber - 1) * chunkSize;
    using (Stream output = File.Create(path)) {
        int remaining = chunkSize, bytesRead;
        while (remaining > 0 && (bytesRead = input.Read(buffer, 0,
               Math.Min(remaining, BUFFER_SIZE))) > 0) {
            output.Write(buffer, 0, bytesRead);
            remaining -= bytesRead;
        }
    }
}
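The same seek-and-copy idea, scaled down and made self-contained with in-memory streams; the 13-byte "file", 5-byte chunk size and 4-byte copy buffer are stand-ins for the real sizes:

```csharp
using System;
using System.IO;

class SplitDemo
{
    // Seeks directly to the desired (1-based) chunk and copies just that
    // chunk, using the same inner loop as the splitter.
    public static byte[] ExtractChunk(Stream input, int chunkNumber,
                                      int chunkSize, int bufferSize)
    {
        byte[] buffer = new byte[bufferSize];
        var output = new MemoryStream();
        input.Position = (long)(chunkNumber - 1) * chunkSize;
        int remaining = chunkSize, bytesRead;
        while (remaining > 0 && (bytesRead = input.Read(buffer, 0,
               Math.Min(remaining, bufferSize))) > 0)
        {
            output.Write(buffer, 0, bytesRead);
            remaining -= bytesRead;
        }
        return output.ToArray();
    }

    static void Main()
    {
        // 13-byte "file", 5-byte chunks: chunk 3 is the final 3 bytes.
        byte[] chunk = ExtractChunk(new MemoryStream(new byte[13]), 3, 5, 4);
        Console.WriteLine(chunk.Length); // 3
    }
}
```

The loop ends naturally when Read returns 0 at end of file, which is why the final, short chunk comes out at its true size rather than a full chunkSize.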
I'm trying to receive a TCP packet in C#, but I don't know when to stop reading from the stream.
Here's what I've tried:
for (int i = 0; i < stm.Length; i += chunkSize)
{
    bb = new byte[chunkSize];
    k = stm.Read(bb, 0, bb.Length);
    ns.Write(bb, 0, k);
}
But it threw an error saying the stream is not seekable.
So I tried this:
int k = chunkSize;
while (k == chunkSize)
{
    bb = new byte[chunkSize];
    k = stm.Read(bb, 0, bb.Length);
    ns.Write(bb, 0, k);
}
Is there anything I can do?
Thanks :)
Here we go:
int read;
while ((read = stm.Read(bb, 0, bb.Length)) > 0) {
    // process "read"-many bytes from bb
    ns.Write(bb, 0, read);
}
"read" will be non-positive at the end of the stream, and only at the end of the stream.
Or, more simply (in .NET 4.0):
stm.CopyTo(ns);
A BinaryReader would also help here, since it can know exactly how many bytes to read: when the sender length-prefixes the data (BinaryWriter does this automatically for strings), the reader knows exactly how much to consume.
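One caveat: BinaryWriter only writes a length prefix automatically for strings; for raw byte payloads the sender has to write the length itself. A minimal sketch of such a length-prefixed exchange over an in-memory stream follows; this framing is an assumption for illustration, not part of the original question's protocol:

```csharp
using System;
using System.IO;
using System.Text;

class PrefixDemo
{
    // Sender side: write a 4-byte length prefix, then the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        var writer = new BinaryWriter(stream);
        writer.Write(payload.Length);   // Int32 length prefix
        writer.Write(payload);
        writer.Flush();
    }

    // Receiver side: read the prefix, then exactly that many bytes.
    public static byte[] ReadMessage(Stream stream)
    {
        var reader = new BinaryReader(stream);
        int length = reader.ReadInt32();
        return reader.ReadBytes(length);
    }

    static void Main()
    {
        var stream = new MemoryStream();
        WriteMessage(stream, Encoding.UTF8.GetBytes("hello"));
        stream.Position = 0;
        Console.WriteLine(Encoding.UTF8.GetString(ReadMessage(stream))); // hello
    }
}
```

With a real NetworkStream the idea is the same; the prefix is what tells the receiver where one message ends, since TCP itself has no message boundaries.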