How to divide an array in C#?

I have to write a program that reads an image into a byte array:
var Imagenoriginal = File.ReadAllBytes("10M.bmp");
and divides that byte array into 3 different arrays, in order to send each of the new arrays to another computer (using pipes), process them there, bring them back to the original computer, and combine the results.
But my question is: how do I write an algorithm that divides the byte array into three different byte arrays when the selected image can have a different size each time?
Thanks for your help, have a nice day. =)

You can split the array's length into three integers n1, n2 and n3 that sum to sourceArray.Length. Then this LINQ snippet should help:
var arr1 = sourceArray.Take(n1).ToArray();
var arr2 = sourceArray.Skip(n1).Take(n2).ToArray();
var arr3 = sourceArray.Skip(n1+n2).Take(n3).ToArray();
Now arr1, arr2 and arr3 hold the three parts of your source array. Because this uses LINQ, don't forget using System.Linq; at the top of the file.
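For instance, a minimal way to pick n1, n2 and n3 for an arbitrary length (just a sketch; the last part absorbs the remainder, so the three always sum to sourceArray.Length):
int n1 = sourceArray.Length / 3;
int n2 = sourceArray.Length / 3;
int n3 = sourceArray.Length - n1 - n2;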

You can try it like this:
public static IEnumerable<IEnumerable<T>> DivideArray<T>(this T[] array, int size)
{
    // As an extension method, this must be declared inside a static class.
    for (var i = 0; i < (float)array.Length / size; i++)
    {
        yield return array.Skip(i * size).Take(size);
    }
}
And call it like this:
var arr = new byte[] {1, 2, 3, 4, 5,6};
var dividedArray = arr.DivideArray(3);
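Note that each chunk is a lazily evaluated IEnumerable<byte>; if you need actual arrays (for example, to send each one through a pipe), materialize them, e.g.:
foreach (var chunk in dividedArray)
{
    byte[] chunkBytes = chunk.ToArray(); // forces the Skip/Take to run
    // send chunkBytes to the other machine here
}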
Here is a LINQ approach (note that the chunk size is hard-coded to 100 elements):
public List<List<byte>> DivideArray(List<byte> arr)
{
    return arr
        .Select((x, i) => new { Index = i, Value = x })
        .GroupBy(x => x.Index / 100)   // 100 = elements per chunk
        .Select(x => x.Select(v => v.Value).ToList())
        .ToList();
}
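Calling it might look like this (a sketch; each inner list holds up to 100 bytes):
List<byte> source = File.ReadAllBytes("10M.bmp").ToList();
List<List<byte>> chunks = DivideArray(source);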

Have you considered using Streams? You could extend the Stream class to provide the desired behavior, as follows:
public static class Utilities
{
    public static IEnumerable<byte[]> ReadBytes(this Stream input, long bufferSize)
    {
        byte[] buffer = new byte[bufferSize];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            using (MemoryStream tempStream = new MemoryStream())
            {
                tempStream.Write(buffer, 0, read);
                yield return tempStream.ToArray();
            }
        }
    }

    public static IEnumerable<byte[]> ReadBlocks(this Stream input, int nblocks)
    {
        long bufferSize = (long)Math.Ceiling((double)input.Length / nblocks);
        return input.ReadBytes(bufferSize);
    }
}
The ReadBytes extension method reads the input Stream and returns its data as a sequence of byte arrays, using the specified bufferSize.
The ReadBlocks extension method calls ReadBytes with the appropriate buffer size, so that the number of elements in the sequence equals nblocks.
You could then use ReadBlocks to achieve what you want:
public class Program
{
    static void Main()
    {
        FileStream inputStream = File.Open(@"E:\10M.bmp", FileMode.Open);
        const int nblocks = 3;
        foreach (byte[] block in inputStream.ReadBlocks(nblocks))
        {
            Console.WriteLine("{0} bytes", block.Length);
        }
    }
}
Note how ReadBytes uses tempStream and read to buffer the bytes read from the input stream in memory before converting them into a byte array; this solves the problem with the leftover bytes mentioned in the comments.

Related

What is different with the writing in FileStream?

When I searched for methods to decompress a file using SharpZipLib, I found a lot of methods like this:
public static void TarWriteCharacters(string tarfile, string targetDir)
{
    using (TarInputStream s = new TarInputStream(File.OpenRead(tarfile)))
    {
        // some code here
        using (FileStream fileWrite = File.Create(targetDir + directoryName + fileName))
        {
            int size = 2048;
            byte[] data = new byte[2048];
            while (true)
            {
                size = s.Read(data, 0, data.Length);
                if (size > 0)
                {
                    fileWrite.Write(data, 0, size);
                }
                else
                {
                    break;
                }
            }
            fileWrite.Close();
        }
    }
}
The signature of FileStream.Write is:
FileStream.Write(byte[] array, int offset, int count)
Now I am trying to separate the read and the write, because I want to use a thread to speed up the write side of the decompression. I use a List<byte[]> and a List<int> to hold the file's data and the size of each read, as below.
Read:
public static void TarWriteCharacters(string tarfile, string targetDir)
{
    using (TarInputStream s = new TarInputStream(File.OpenRead(tarfile)))
    {
        // some code here
        using (FileStream fileWrite = File.Create(targetDir + directoryName + fileName))
        {
            int size = 2048;
            List<int> SizeList = new List<int>();
            List<byte[]> mydatalist = new List<byte[]>();
            while (true)
            {
                byte[] data = new byte[2048];
                size = s.Read(data, 0, data.Length);
                if (size > 0)
                {
                    mydatalist.Add(data);
                    SizeList.Add(size);
                }
                else
                {
                    break;
                }
            }
            var test = new Thread(() =>
                FileWriteFun(pathToTar, args, SizeList, mydatalist)
            );
            test.Start();
            fileWrite.Close();
        }
    }
}
Write:
public static void FileWriteFun(string pathToTar, string[] args, List<int> SizeList, List<byte[]> mydataList)
{
    // some code here
    using (FileStream fileWrite = File.Create(targetDir + directoryName + fileName))
    {
        for (int i = 0; i < mydataList.Count; i++)
        {
            fileWrite.Write(mydataList[i], 0, SizeList[i]);
        }
        fileWrite.Close();
    }
}
Edit
(1) Moved byte[] data = new byte[2048] into the while loop, so each read gets a fresh array.
(2) Changed int[] SizeList = new int[2048] to List<int> SizeList = new List<int>(), since the number of reads isn't known up front.
As a read on a stream is only guaranteed to return at least one byte (typically it will be more, but you can't rely on getting the full requested length each time), your original solution could theoretically fail after 2048 reads, as an int[2048] SizeList can only hold 2048 entries.
You could use a List to hold the sizes.
Or use a MemoryStream instead of inventing your own.
But the two main problems are:
1) You keep reading into the same byte array, overwriting previously read data. When you add your data byte array to mydatalist, you must assign data to a new byte array.
2) You close your stream before the second thread is done writing.
In general threading is difficult and should only be used where you know it will improve performance. Simply reading and writing data is typically IO bound, not CPU bound, so introducing a second thread will just add a small performance penalty and no gain in speed. You could use multithreading to ensure concurrent read/write operations, but most likely the disk cache will do this for you if you stick to the first solution - and if not, using async is easier than multithreading to achieve this.
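As a rough sketch of that async route (not drop-in code for the tar example above; the method and parameter names are placeholders, and the usual System.IO / System.Threading.Tasks usings are assumed):
public static async Task DecompressToFileAsync(Stream source, string targetPath)
{
    using (FileStream fileWrite = File.Create(targetPath))
    {
        // CopyToAsync overlaps reads and writes without a hand-rolled second thread.
        await source.CopyToAsync(fileWrite);
    }
}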

Joining byte arrays using a separator in C#

I want something similar to String.Join(separator_string, list_of_strings) but for byte arrays.
I need it because I am implementing a file format writer, and the specification says
"Each annotation must be encoded as UTF-8 and separated by the ASCII byte 20."
Then I would need something like:
byte separator = 20;
List<byte[]> encoded_annotations;
joined_byte_array = Something.Join(separator, encoded_annotations);
I don't believe there's anything built in, but it's easy enough to write. Here's a generic version, but you could make it do just byte arrays easily enough.
public static T[] Join<T>(T separator, IEnumerable<T[]> arrays)
{
    // Make sure we only iterate over arrays once
    List<T[]> list = arrays.ToList();
    if (list.Count == 0)
    {
        return new T[0];
    }
    int size = list.Sum(x => x.Length) + list.Count - 1;
    T[] ret = new T[size];
    int index = 0;
    bool first = true;
    foreach (T[] array in list)
    {
        if (!first)
        {
            ret[index++] = separator;
        }
        Array.Copy(array, 0, ret, index, array.Length);
        index += array.Length;
        first = false;
    }
    return ret;
}
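With the question's variables, the call is then simply:
byte separator = 20;
byte[] joined = Join(separator, encoded_annotations);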
I ended up using this, tuned to my specific case of a separator consisting of one single byte (instead of an array of size one), but the general idea would apply to a separator consisting of a byte array:
public byte[] ArrayJoin(byte separator, List<byte[]> arrays)
{
    using (MemoryStream result = new MemoryStream())
    {
        // Assumes arrays contains at least one element; First() throws otherwise.
        byte[] first = arrays.First();
        result.Write(first, 0, first.Length);
        foreach (var array in arrays.Skip(1))
        {
            result.WriteByte(separator);
            result.Write(array, 0, array.Length);
        }
        return result.ToArray();
    }
}
You just need to specify the desired element type for the join call:
String.Join<byte>(separator_string, list_of_strings);
Note, however, that String.Join<T> returns a string (calling ToString on each element), not a joined byte array.

C# - Cast a byte array to an array of struct and vice-versa (reverse)

I would like to save a Color[] to a file. To do so, I found that saving a byte array to a file using "System.IO.File.WriteAllBytes" should be very efficient.
I would like to cast my Color[] (array of structs) to a byte array in a safe way, considering:
Potential little-endian/big-endian problems (I think it is impossible for this to happen, but I want to be sure)
Having 2 different pointers to the same memory that point to different types. Will the garbage collector know what to do - moving objects - deleting a pointer???
If it is possible, it would be nice to have a generic way to cast an array of bytes to an array of any struct (T : struct) and vice versa.
If not possible, why?
Thanks,
Eric
I think those 2 solutions make a copy that I would like to avoid, and they both use Marshal.PtrToStructure, which is specific to a single structure, not to an array of structures:
Reading a C/C++ data structure in C# from a byte array
How to convert a structure to a byte array in C#?
Since .NET Core 2.1, yes we can! Enter MemoryMarshal.
We will treat our Color[] as a ReadOnlySpan<Color>. We reinterpret that as a ReadOnlySpan<byte>. Finally, since WriteAllBytes has no span-based overload, we use a FileStream to write the span to disk.
var byteSpan = MemoryMarshal.AsBytes(colorArray.AsSpan()); // needs using System.Runtime.InteropServices;
fileStream.Write(byteSpan);
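Reading the file back can be sketched the same way with MemoryMarshal.Cast, assuming Color is a blittable struct with no reference-type fields (MemoryMarshal throws otherwise) and the file was written with the same endianness and struct layout:
byte[] bytes = File.ReadAllBytes(path);
Color[] colors = MemoryMarshal.Cast<byte, Color>(bytes).ToArray();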
As an interesting side note, you can also experiment with [StructLayout(LayoutKind.Explicit)] on your struct and [FieldOffset] on its fields. It allows you to specify overlapping fields, effectively allowing the concept of a union.
Here is a blog post on MSDN that illustrates this. It shows the following code:
[StructLayout(LayoutKind.Explicit)]
public struct MyUnion
{
    [FieldOffset(0)]
    public UInt16 myInt;

    [FieldOffset(0)]
    public Byte byte1;

    [FieldOffset(1)]
    public Byte byte2;
}
In this example, the UInt16 field overlaps with the two Byte fields.
This seems to be strongly related to what you are trying to do. It gets you very close, except for the part of writing all the bytes (especially of multiple Color objects) efficiently. :)
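A quick way to see the overlap in action (assuming a little-endian machine):
var u = new MyUnion();
u.myInt = 0x0102;
Console.WriteLine("{0:X2} {1:X2}", u.byte1, u.byte2); // prints "02 01": byte1 is the low byte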
Regarding Array Type Conversion
C# as a language intentionally makes flattening objects or arrays into byte arrays difficult, because this approach goes against the principles of .NET's strong typing. The conventional alternatives include serialization tools, which are generally seen as safer and more robust, or manual serialization code such as BinaryWriter.
Having two variables of different types point to the same object in memory can only be performed if the types of the variables can be cast, implicitly or explicitly. Casting from an array of one element type to another is no trivial task: it would have to convert the internal members that keep track of things such as array length, etc.
A simple way to write and read Color[] to file
void WriteColorsToFile(string path, Color[] colors)
{
    BinaryWriter writer = new BinaryWriter(File.OpenWrite(path));
    writer.Write(colors.Length);
    foreach (Color color in colors)
    {
        writer.Write(color.ToArgb());
    }
    writer.Close();
}

Color[] ReadColorsFromFile(string path)
{
    BinaryReader reader = new BinaryReader(File.OpenRead(path));
    int length = reader.ReadInt32();
    Color[] result = new Color[length];
    for (int n = 0; n < length; n++)
    {
        result[n] = Color.FromArgb(reader.ReadInt32());
    }
    reader.Close();
    return result;
}
You could use pointers if you really want to copy each byte and not have the same object, similar to this (note this requires an unsafe context):
var structPtr = (byte*)&yourStruct;
var size = sizeof(YourType);
var memory = new byte[size];
fixed (byte* memoryPtr = memory)
{
    for (int i = 0; i < size; i++)
    {
        *(memoryPtr + i) = *structPtr++;
    }
}
File.WriteAllBytes(path, memory);
I just tested this and after adding the fixed block and some minor corrections it looks like it is working correctly.
This is what I used to test it:
public static unsafe void Main(string[] args)
{
    var a = new s { i = 1, j = 2 };
    var sPtr = (byte*)&a;
    var size = sizeof(s);
    var mem = new byte[size];
    fixed (byte* memPtr = mem)
    {
        for (int i = 0; i < size; i++)
        {
            *(memPtr + i) = *sPtr++;
        }
    }
    File.WriteAllBytes("A:\\file.txt", mem);
}

struct s
{
    internal int i;
    internal int j;
}
The result is the struct's raw bytes written to the file; on a little-endian machine that is 01 00 00 00 02 00 00 00 (i = 1 followed by j = 2).
public struct MyX
{
    public int IntValue;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3, ArraySubType = UnmanagedType.U1)]
    public byte[] Array;

    MyX(int i, int b)
    {
        IntValue = b;
        Array = new byte[3];
    }

    public MyX ToStruct(byte[] ar)
    {
        byte[] data = ar; // e.g. { 1, 0, 0, 0, 9, 8, 7 } -> IntValue = 1, Array = {9,8,7}
        IntPtr ptPoit = Marshal.AllocHGlobal(data.Length);
        Marshal.Copy(data, 0, ptPoit, data.Length);
        MyX x = (MyX)Marshal.PtrToStructure(ptPoit, typeof(MyX));
        Marshal.FreeHGlobal(ptPoit);
        return x;
    }

    public byte[] ToBytes()
    {
        Byte[] bytes = new Byte[Marshal.SizeOf(typeof(MyX))];
        GCHandle pinStructure = GCHandle.Alloc(this, GCHandleType.Pinned);
        try
        {
            Marshal.Copy(pinStructure.AddrOfPinnedObject(), bytes, 0, bytes.Length);
            return bytes;
        }
        finally
        {
            pinStructure.Free();
        }
    }
}
void function()
{
    byte[] data = { 1, 0, 0, 0, 9, 8, 7 }; // IntValue = 1, Array = {9,8,7}
    IntPtr ptPoit = Marshal.AllocHGlobal(data.Length);
    Marshal.Copy(data, 0, ptPoit, data.Length);
    var x = (MyX)Marshal.PtrToStructure(ptPoit, typeof(MyX));
    Marshal.FreeHGlobal(ptPoit);
    var MYstruc = x.ToStruct(data);

    Console.WriteLine("x.IntValue = {0}", x.IntValue);
    Console.WriteLine("x.Array = ({0}, {1}, {2})", x.Array[0], x.Array[1], x.Array[2]);
}
Working code for reference (take care, I did not need the alpha channel in my case):
// ************************************************************************
// If someday Microsoft makes Color serializable ...
//public static void SaveColors(Color[] colors, string path)
//{
//    BinaryFormatter bf = new BinaryFormatter();
//    MemoryStream ms = new MemoryStream();
//    bf.Serialize(ms, colors);
//    byte[] bytes = ms.ToArray();
//    File.WriteAllBytes(path, bytes);
//}

// If someday Microsoft makes Color serializable ...
//public static Color[] LoadColors(string path)
//{
//    Byte[] bytes = File.ReadAllBytes(path);
//    BinaryFormatter bf = new BinaryFormatter();
//    MemoryStream ms2 = new MemoryStream(bytes);
//    return (Color[])bf.Deserialize(ms2);
//}

// ******************************************************************
public static void SaveColorsToFile(Color[] colors, string path)
{
    var formatter = new BinaryFormatter();
    int count = colors.Length;
    using (var stream = File.OpenWrite(path))
    {
        formatter.Serialize(stream, count);
        for (int index = 0; index < count; index++)
        {
            formatter.Serialize(stream, colors[index].R);
            formatter.Serialize(stream, colors[index].G);
            formatter.Serialize(stream, colors[index].B);
        }
    }
}

// ******************************************************************
public static Color[] LoadColorsFromFile(string path)
{
    var formatter = new BinaryFormatter();
    Color[] colors;
    using (var stream = File.OpenRead(path))
    {
        int count = (int)formatter.Deserialize(stream);
        colors = new Color[count];
        for (int index = 0; index < count; index++)
        {
            byte r = (byte)formatter.Deserialize(stream);
            byte g = (byte)formatter.Deserialize(stream);
            byte b = (byte)formatter.Deserialize(stream);
            colors[index] = Color.FromRgb(r, g, b);
        }
    }
    return colors;
}
// ******************************************************************
// ******************************************************************

Byte array to int16 array

Is there a more efficient way to convert a byte array to an Int16 array? Or is there a way to use Buffer.BlockCopy to copy every two bytes into an Int16 array?
public static int[] BYTarrToINT16arr(string fileName)
{
    try
    {
        int bYte = 2;
        byte[] buf = File.ReadAllBytes(fileName);
        int bufPos = 0;
        int[] data = new int[buf.Length / 2];
        byte[] bt = new byte[bYte];
        for (int i = 0; i < buf.Length / 2; i++)
        {
            Array.Copy(buf, bufPos, bt, 0, bYte);
            bufPos += bYte;
            Array.Reverse(bt);
            data[i] = BitConverter.ToInt16(bt, 0);
        }
        return data;
    }
    catch
    {
        return null;
    }
}
Use a FileStream and a BinaryReader. Something like this:
var int16List = new List<Int16>();
using (var stream = new FileStream(filename, FileMode.Open))
using (var reader = new BinaryReader(stream))
{
    try
    {
        while (true)
            int16List.Add(reader.ReadInt16());
    }
    catch (EndOfStreamException)
    {
        // We've read the whole file
    }
}
return int16List.ToArray();
You can also read the whole file into a byte[] and then use a MemoryStream instead of the FileStream if you want.
If you do that, you'll also be able to size the List appropriately up front and make it a bit more efficient.
Apart from a possible off-by-one if the number of bytes is odd (you'll miss the last byte), your code is OK. You can optimize it by dropping the bt array altogether: swap the i*2 and i*2+1 bytes in place before calling BitConverter.ToInt16, and pass i*2 as the starting index.
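Applied to the code above, that suggestion looks roughly like this (a sketch that keeps the original byte-swapping behaviour):
byte[] buf = File.ReadAllBytes(fileName);
int[] data = new int[buf.Length / 2]; // a trailing odd byte is still dropped, as before
for (int i = 0; i < data.Length; i++)
{
    // swap the pair in place instead of copying it into a temp array
    byte tmp = buf[i * 2];
    buf[i * 2] = buf[i * 2 + 1];
    buf[i * 2 + 1] = tmp;
    data[i] = BitConverter.ToInt16(buf, i * 2);
}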
This works if you don't mind using System.Runtime.InteropServices. I assume it is faster than the other techniques.
using System.Runtime.InteropServices;
public Int16[] Copy_Byte_Buffer_To_Int16_Buffer(byte[] buffer)
{
    Int16[] result = new Int16[1];
    int size = buffer.Length;
    if ((size % 2) != 0)
    {
        /* Error here */
        return result;
    }
    else
    {
        result = new Int16[size / 2];
        IntPtr ptr_src = Marshal.AllocHGlobal(size);
        Marshal.Copy(buffer, 0, ptr_src, size);
        Marshal.Copy(ptr_src, result, 0, result.Length);
        Marshal.FreeHGlobal(ptr_src);
        return result;
    }
}
var bytes = File.ReadAllBytes(path);
// Where (not TakeWhile, which would stop at the first odd index) keeps every even
// index that has a following byte; the k-th kept element sits at offset k * 2.
var ints = bytes.Where((b, i) => i % 2 == 0 && i + 1 < bytes.Length)
                .Select((b, i) => BitConverter.ToInt16(bytes, i * 2));
if (bytes.Length % 2 == 1)
{
    // Concat (not Union, which would drop duplicate values) pads a trailing odd byte with zero.
    ints = ints.Concat(new[] { BitConverter.ToInt16(new byte[] { bytes[bytes.Length - 1], 0 }, 0) });
}
return ints.ToArray();
Try:
int index = 0;
var my16s = bytes.GroupBy(x => (index++) / 2)
                 .Select(x => BitConverter.ToInt16(x.Reverse().ToArray(), 0))
                 .ToList();

Best way to combine two or more byte arrays in C#

I have 3 byte arrays in C# that I need to combine into one. What would be the most efficient method to complete this task?
For primitive types (including bytes), use System.Buffer.BlockCopy instead of System.Array.Copy. It's faster.
I timed each of the suggested methods in a loop executed 1 million times using 3 arrays of 10 bytes each. Here are the results:
New Byte Array using System.Array.Copy - 0.2187556 seconds
New Byte Array using System.Buffer.BlockCopy - 0.1406286 seconds
IEnumerable<byte> using C# yield operator - 0.0781270 seconds
IEnumerable<byte> using LINQ's Concat<> - 0.0781270 seconds
I increased the size of each array to 100 elements and re-ran the test:
New Byte Array using System.Array.Copy - 0.2812554 seconds
New Byte Array using System.Buffer.BlockCopy - 0.2500048 seconds
IEnumerable<byte> using C# yield operator - 0.0625012 seconds
IEnumerable<byte> using LINQ's Concat<> - 0.0781265 seconds
I increased the size of each array to 1000 elements and re-ran the test:
New Byte Array using System.Array.Copy - 1.0781457 seconds
New Byte Array using System.Buffer.BlockCopy - 1.0156445 seconds
IEnumerable<byte> using C# yield operator - 0.0625012 seconds
IEnumerable<byte> using LINQ's Concat<> - 0.0781265 seconds
Finally, I increased the size of each array to 1 million elements and re-ran the test, executing each loop only 4000 times:
New Byte Array using System.Array.Copy - 13.4533833 seconds
New Byte Array using System.Buffer.BlockCopy - 13.1096267 seconds
IEnumerable<byte> using C# yield operator - 0 seconds
IEnumerable<byte> using LINQ's Concat<> - 0 seconds
So, if you need a new byte array, use
byte[] rv = new byte[a1.Length + a2.Length + a3.Length];
System.Buffer.BlockCopy(a1, 0, rv, 0, a1.Length);
System.Buffer.BlockCopy(a2, 0, rv, a1.Length, a2.Length);
System.Buffer.BlockCopy(a3, 0, rv, a1.Length + a2.Length, a3.Length);
But, if you can use an IEnumerable<byte>, DEFINITELY prefer LINQ's Concat<> method. It's only slightly slower than the C# yield operator, but is more concise and more elegant.
IEnumerable<byte> rv = a1.Concat(a2).Concat(a3);
If you have an arbitrary number of arrays and are using .NET 3.5, you can make the System.Buffer.BlockCopy solution more generic like this:
private byte[] Combine(params byte[][] arrays)
{
    byte[] rv = new byte[arrays.Sum(a => a.Length)];
    int offset = 0;
    foreach (byte[] array in arrays)
    {
        System.Buffer.BlockCopy(array, 0, rv, offset, array.Length);
        offset += array.Length;
    }
    return rv;
}
*Note: The above block requires adding the following namespace at the top for it to work.
using System.Linq;
To Jon Skeet's point regarding iteration of the subsequent data structures (byte array vs. IEnumerable<byte>), I re-ran the last timing test (1 million elements, 4000 iterations), adding a loop that iterates over the full array with each pass:
New Byte Array using System.Array.Copy - 78.20550510 seconds
New Byte Array using System.Buffer.BlockCopy - 77.89261900 seconds
IEnumerable<byte> using C# yield operator - 551.7150161 seconds
IEnumerable<byte> using LINQ's Concat<> - 448.1804799 seconds
The point is, it is VERY important to understand the efficiency of both the creation and the usage of the resulting data structure. Simply focusing on the efficiency of the creation may overlook the inefficiency associated with the usage. Kudos, Jon.
Many of the answers seem to me to be ignoring the stated requirements:
The result should be a byte array
It should be as efficient as possible
These two together rule out a LINQ sequence of bytes - anything with yield is going to make it impossible to get the final size without iterating through the whole sequence.
If those aren't the real requirements of course, LINQ could be a perfectly good solution (or the IList<T> implementation). However, I'll assume that Superdumbell knows what he wants.
(EDIT: I've just had another thought. There's a big semantic difference between making a copy of the arrays and reading them lazily. Consider what happens if you change the data in one of the "source" arrays after calling the Combine (or whatever) method but before using the result - with lazy evaluation, that change will be visible. With an immediate copy, it won't. Different situations will call for different behaviour - just something to be aware of.)
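To make that concrete (a small sketch using the two-array Combine below):
byte[] a = { 1, 2, 3 };
byte[] b = { 4, 5, 6 };
IEnumerable<byte> lazy = a.Concat(b); // nothing copied yet
byte[] copy = Combine(a, b);          // copied immediately
a[0] = 99;
Console.WriteLine(lazy.First()); // 99 - the lazy sequence sees the change
Console.WriteLine(copy[0]);      // 1  - the copy does not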
Here are my proposed methods - which are very similar to those contained in some of the other answers, certainly :)
public static byte[] Combine(byte[] first, byte[] second)
{
    byte[] ret = new byte[first.Length + second.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static byte[] Combine(byte[] first, byte[] second, byte[] third)
{
    byte[] ret = new byte[first.Length + second.Length + third.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    Buffer.BlockCopy(third, 0, ret, first.Length + second.Length, third.Length);
    return ret;
}

public static byte[] Combine(params byte[][] arrays)
{
    byte[] ret = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}
Of course the "params" version requires creating an array of the byte arrays first, which introduces extra inefficiency.
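That is, a call such as Combine(first, second, third) compiles to roughly:
byte[] combined = Combine(new byte[][] { first, second, third });
so an extra array of references is allocated on every call.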
I took Matt's LINQ example one step further for code cleanliness:
byte[] rv = a1.Concat(a2).Concat(a3).ToArray();
In my case, the arrays are small, so I'm not concerned about performance.
If you simply need a new byte array, then use the following:
byte[] Combine(byte[] a1, byte[] a2, byte[] a3)
{
    byte[] ret = new byte[a1.Length + a2.Length + a3.Length];
    Array.Copy(a1, 0, ret, 0, a1.Length);
    Array.Copy(a2, 0, ret, a1.Length, a2.Length);
    Array.Copy(a3, 0, ret, a1.Length + a2.Length, a3.Length);
    return ret;
}
Alternatively, if you just need a single IEnumerable, consider using the C# 2.0 yield operator:
IEnumerable<byte> Combine(byte[] a1, byte[] a2, byte[] a3)
{
    foreach (byte b in a1)
        yield return b;
    foreach (byte b in a2)
        yield return b;
    foreach (byte b in a3)
        yield return b;
}
I actually ran into some issues with using Concat... (with arrays in the 10-million-element range, it actually crashed).
I found the following to be simple and easy, and it works well enough without crashing on me, for ANY number of arrays (not just three) (it uses LINQ):
public static byte[] ConcatByteArrays(params byte[][] arrays)
{
    return arrays.SelectMany(x => x).ToArray();
}
The MemoryStream class does this job pretty nicely for me. I couldn't get the Buffer class to run as fast as MemoryStream.
using (MemoryStream ms = new MemoryStream())
{
    ms.Write(BitConverter.GetBytes(22), 0, 4);
    ms.Write(BitConverter.GetBytes(44), 0, 4);
    byte[] combined = ms.ToArray(); // both ints, back to back
}
public static byte[] Concat(params byte[][] arrays) {
    using (var mem = new MemoryStream(arrays.Sum(a => a.Length))) {
        foreach (var array in arrays) {
            mem.Write(array, 0, array.Length);
        }
        return mem.ToArray();
    }
}
public static bool MyConcat<T>(ref T[] base_arr, ref T[] add_arr)
{
    try
    {
        int base_size = base_arr.Length;
        // Note: SizeOf(base_arr[0]) throws if base_arr is empty, and
        // Buffer.BlockCopy only works for arrays of primitive types.
        int size_T = System.Runtime.InteropServices.Marshal.SizeOf(base_arr[0]);
        Array.Resize(ref base_arr, base_size + add_arr.Length);
        Buffer.BlockCopy(add_arr, 0, base_arr, base_size * size_T, add_arr.Length * size_T);
    }
    catch (IndexOutOfRangeException ioor)
    {
        MessageBox.Show(ioor.Message);
        return false;
    }
    return true;
}
You can use generics to combine arrays. The following code can easily be expanded to three arrays, and this way you never need to duplicate code for different array types. Some of the above answers seem overly complex to me.
private static T[] CombineTwoArrays<T>(T[] a1, T[] a2)
{
    T[] arrayCombined = new T[a1.Length + a2.Length];
    Array.Copy(a1, 0, arrayCombined, 0, a1.Length);
    Array.Copy(a2, 0, arrayCombined, a1.Length, a2.Length);
    return arrayCombined;
}
/// <summary>
/// Combine two arrays with offset and count
/// </summary>
public static T[] Combine<T>(this T[] src1, int offset1, int count1, T[] src2, int offset2, int count2)
    => Enumerable.Range(0, count1 + count2)
                 .Select(a => (a < count1) ? src1[offset1 + a] : src2[offset2 + a - count1])
                 .ToArray();
Here's a generalization of the answer provided by Jon Skeet.
It is basically the same, only usable for any type of array, not only bytes. Note that Buffer.BlockCopy counts bytes rather than elements, so the generic version uses Array.Copy instead:
public static T[] Combine<T>(T[] first, T[] second)
{
    T[] ret = new T[first.Length + second.Length];
    Array.Copy(first, 0, ret, 0, first.Length);
    Array.Copy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static T[] Combine<T>(T[] first, T[] second, T[] third)
{
    T[] ret = new T[first.Length + second.Length + third.Length];
    Array.Copy(first, 0, ret, 0, first.Length);
    Array.Copy(second, 0, ret, first.Length, second.Length);
    Array.Copy(third, 0, ret, first.Length + second.Length, third.Length);
    return ret;
}

public static T[] Combine<T>(params T[][] arrays)
{
    T[] ret = new T[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (T[] data in arrays)
    {
        Array.Copy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}
Pass a list of byte arrays and this function returns the merged array of bytes. Note, though, that it merges PDF documents using iTextSharp's PdfSmartCopy, so it only applies when the byte arrays are PDFs, not as general byte-array concatenation.
public static byte[] CombineMultipleByteArrays(List<byte[]> lstByteArray)
{
    using (var ms = new MemoryStream())
    {
        using (var doc = new iTextSharp.text.Document())
        {
            using (var copy = new PdfSmartCopy(doc, ms))
            {
                doc.Open();
                foreach (var p in lstByteArray)
                {
                    using (var reader = new PdfReader(p))
                    {
                        copy.AddDocument(reader);
                    }
                }
                doc.Close();
            }
        }
        return ms.ToArray();
    }
}
Concat is the right answer, but for some reason a hand-rolled solution is getting the most votes. If you like that answer, perhaps you'd like this more general solution even more:
IEnumerable<byte> Combine(params byte[][] arrays)
{
    foreach (byte[] a in arrays)
        foreach (byte b in a)
            yield return b;
}
which would let you do things like:
byte[] c = Combine(new byte[] { 0, 1, 2 }, new byte[] { 3, 4, 5 }).ToArray();
