How do I write a short[] with a BinaryWriter? - c#

short[] sArray = new short[100];
sArray holds 100 16-bit values that I want to write out with the BinaryWriter class, but BinaryWriter only has Write(byte[]) and Write(char[]) overloads for arrays.
How do I write 16-bit (short[]) data to a file?

You need to write each value individually using the BinaryWriter.Write(short) method.
Writing:
binaryWriter.Write(sArray.Length); // BinaryWriter.Write(Int32) overload
for (int i = 0; i < sArray.Length; i++)
{
    binaryWriter.Write(sArray[i]); // BinaryWriter.Write(Int16) overload
}
Reading:
short[] result = new short[binaryReader.ReadInt32()];
for (int i = 0; i < result.Length; i++)
{
    result[i] = binaryReader.ReadInt16();
}
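Alternatively, if you'd rather make a single Write call, you can copy the shorts into a byte[] first. A minimal sketch; the Buffer.BlockCopy conversion is my suggestion, not something BinaryWriter does for you:
// Copy the shorts into a byte buffer (native byte order), then write once.
byte[] bytes = new byte[sArray.Length * sizeof(short)];
Buffer.BlockCopy(sArray, 0, bytes, 0, bytes.Length);
binaryWriter.Write(sArray.Length); // element count, as above
binaryWriter.Write(bytes);         // BinaryWriter.Write(byte[]) overload
Reading it back is the mirror image: BinaryReader.ReadInt32 for the count, ReadBytes for the payload, and Buffer.BlockCopy into a new short[].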

Related

NPOI AutoSizeColumn not resizing correctly

I've put the .AutoSizeColumn call right before the Write method:
int numberOfColumns = sheet.GetRow(rowcount - 1).PhysicalNumberOfCells;
for (int i = 0; i <= numberOfColumns; i++)
{
    sheet.AutoSizeColumn(i);
    GC.Collect();
}
using (var fileData = new FileStream(@"C:\Temp\Contatti.xlsx", FileMode.Create))
{
    wb.Write(fileData);
}
Here is an example of the result: [screenshot omitted]
The problem might also be that PhysicalNumberOfCells can return 1 even if you have a cell in, say, the 'Z' column. There is a LastCellNum property you can use instead of PhysicalNumberOfCells:
int lastCellNum = sheet.GetRow(0).LastCellNum;
// LastCellNum is one past the last cell index, so '<' covers every column
for (int i = 0; i < lastCellNum; i++)
{
    sheet.AutoSizeColumn(i);
    GC.Collect();
}
using (var fileData = new FileStream(@"D:\Contatti.xlsx", FileMode.Create))
{
    wb.Write(fileData);
}
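Since different rows can end at different columns, a slightly more defensive variant is to take the maximum LastCellNum over all rows before autosizing. A sketch; the AutoSizeAllColumns helper name is mine:
using NPOI.SS.UserModel;

static void AutoSizeAllColumns(ISheet sheet)
{
    // Find the widest row so every used column gets autosized;
    // LastCellNum is one past the index of the last cell in a row.
    int maxCellNum = 0;
    for (int r = sheet.FirstRowNum; r <= sheet.LastRowNum; r++)
    {
        IRow row = sheet.GetRow(r);
        if (row != null && row.LastCellNum > maxCellNum)
            maxCellNum = row.LastCellNum;
    }
    for (int i = 0; i < maxCellNum; i++)
        sheet.AutoSizeColumn(i);
}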

Serialization and deserialization of an int matrix c#

I'm using a C# WCF service and I need to return an int[,] to the client from one of my methods. The problem is that multidimensional arrays are not supported by WCF, so I think my only option is to return a byte array this way:
public byte[] DistanceMatrix()
{
    int[,] matrix;
    // DOING THINGS HERE
    IFormatter formatter = new BinaryFormatter();
    var ms = new MemoryStream();
    formatter.Serialize(ms, matrix);
    return ms.ToArray();
}
But I don't know how to deserialize the byte[] back to an int[,].
This excellent blog post by Josh Reuben can help you:
Extension methods that will allow you to convert prior to serialization and convert back after deserialization:
public static T[,] ToMultiD<T>(this T[][] jArray)
{
    int i = jArray.Length;
    // the widest row determines the second dimension
    int j = jArray.Select(x => x.Length).Aggregate(0, (current, c) => (current > c) ? current : c);
    var mArray = new T[i, j];
    for (int ii = 0; ii < i; ii++)
    {
        // copy only as many elements as this row actually has;
        // shorter rows keep default(T) in the remaining cells
        for (int jj = 0; jj < jArray[ii].Length; jj++)
        {
            mArray[ii, jj] = jArray[ii][jj];
        }
    }
    return mArray;
}
public static T[][] ToJagged<T>(this T[,] mArray)
{
    var rows = mArray.GetLength(0);
    var cols = mArray.GetLength(1);
    var jArray = new T[rows][];
    for (int i = 0; i < rows; i++)
    {
        jArray[i] = new T[cols];
        for (int j = 0; j < cols; j++)
        {
            jArray[i][j] = mArray[i, j];
        }
    }
    return jArray;
}
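For the deserialization half of the question, BinaryFormatter can rebuild the int[,] directly from the byte array. A minimal sketch; the DeserializeMatrix helper name is mine:
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

public static int[,] DeserializeMatrix(byte[] data)
{
    IFormatter formatter = new BinaryFormatter();
    using (var ms = new MemoryStream(data))
    {
        // BinaryFormatter preserves the multidimensional shape, so a cast suffices
        return (int[,])formatter.Deserialize(ms);
    }
}
If instead you want WCF to serialize the data itself, send the int[][] produced by ToJagged over the wire and call ToMultiD on the client.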

C++ to C# Reading Binary File into a two dimensional float array

I have been assigned to convert a C++ app to C#.
I want to convert the following C++ code to C#, where rate_buff is a double[3,9876] two-dimensional array.
if ((fread((char*) rate_buff,
    (size_t) record_size,
    (size_t) record_count,
    stream)) == (size_t) record_count)
If I correctly guessed your requirements, this is what you want:
int record_size = 9876;
int record_count = 3;
double[,] rate_buff = new double[record_count, record_size];
// open the file
using (Stream stream = File.OpenRead("some file path"))
{
    // byte buffer for stream reading, record_size * sizeof(double) bytes long
    byte[] buffer = new byte[record_size * sizeof(double)];
    for (int i = 0; i < record_count; i++)
    {
        // read one full record
        if (stream.Read(buffer, 0, buffer.Length) != buffer.Length)
            throw new InvalidDataException();
        // copy the doubles out of the byte buffer into the two-dimensional array;
        // note this assumes the file uses the machine's native byte order
        for (int j = 0; j < record_size; j++)
            rate_buff[i, j] = BitConverter.ToDouble(buffer, j * sizeof(double));
    }
}
Or more concisely with a BinaryReader:
int record_size = 9876;
int record_count = 3;
double[,] rate_buff = new double[record_count, record_size];
// open the file
using (BinaryReader reader = new BinaryReader(File.OpenRead("some file path")))
{
    for (int i = 0; i < record_count; i++)
    {
        // read the doubles directly from the stream;
        // note this assumes the file uses the machine's native byte order
        for (int j = 0; j < record_size; j++)
            rate_buff[i, j] = reader.ReadDouble();
    }
}
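Both versions assume the file uses the host's native byte order. If the file came from a machine with the opposite endianness, one way to handle it (my sketch, not part of the original answer; the BigEndian.ReadDouble helper is hypothetical) is to flip each 8-byte slice before converting:
using System;

static class BigEndian
{
    // Read a big-endian double from a byte buffer on a little-endian host.
    public static double ReadDouble(byte[] buffer, int offset)
    {
        if (BitConverter.IsLittleEndian)
            Array.Reverse(buffer, offset, sizeof(double)); // flips the slice in place
        return BitConverter.ToDouble(buffer, offset);
    }
}
In the first loop above you would then write rate_buff[i, j] = BigEndian.ReadDouble(buffer, j * sizeof(double));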

Adding a text file into a list and then into a Two Dimensional array

I have a text file that contains the following (without quotation marks; "Empty Space" stands for blank space):
##############
# Empty Space#
# Empty Space#
# Empty Space#
# Empty Space#
##############
I want to add this whole file row by row into a list:
FileStream FS = new FileStream(@"FilePath", FileMode.Open);
StreamReader SR = new StreamReader(FS);
List<string> MapLine = new List<string>();
string line;
while ((line = SR.ReadLine()) != null)
{
    MapLine.Add(line);
}
foreach (var x in MapLine)
{
    Console.WriteLine(x);
}
Here comes my problem: I want to put this into a two-dimensional array. I tried:
string[,] TwoDimentionalArray = new string[100, 100];
for (int i = 0; i < MapLine.Count; i++)
{
    for (int j = 0; j < MapLine.Count; j++)
    {
        TwoDimentionalArray[j, i] = MapLine[j].Split('\n').ToString();
    }
}
I'm still new to C#, so any help will be appreciated.
You can try this:
// File.ReadAllLines serves exactly the purpose you need
List<string> lines = File.ReadAllLines(@"Data.txt").ToList();
// lines.Max(line => line.Length) finds the length of the longest line in the file
string[,] twoDim = new string[lines.Count, lines.Max(line => line.Length)];
for (int lineIndex = 0; lineIndex < lines.Count; lineIndex++)
    for (int charIndex = 0; charIndex < lines[lineIndex].Length; charIndex++)
        twoDim[lineIndex, charIndex] = lines[lineIndex][charIndex].ToString();
for (int lineIndex = 0; lineIndex < lines.Count; lineIndex++)
{
    for (int charIndex = 0; charIndex < lines[lineIndex].Length; charIndex++)
        Console.Write(twoDim[lineIndex, charIndex]);
    Console.WriteLine();
}
Console.ReadKey();
This saves each character of the file content into its own position in a two-dimensional array; a char[,] would also work for that purpose.
Currently you are going through all the lines of your file, and for every line you go through all of them again to split them on \n, a split that already happened when you put them into MapLine.
If you want every single character of each line in an array, and those arrays in an array again, it should look approximately like this:
string[,] TwoDimentionalArray = new string[100, 100];
for (int i = 0; i < MapLine.Count; i++)
{
    for (int j = 0; j < MapLine[i].Length; j++)
    {
        // Substring(j, 1) extracts the single character at position j
        TwoDimentionalArray[i, j] = MapLine[i].Substring(j, 1);
    }
}
I did this without testing, so it might be faulty. The point is that you need to iterate through each line first, then through each letter in that line; from there you can use Substring.
Also, I hope I've understood your question correctly.

C# DeflateStream vs Java DeflaterOutputStream

In Java, this works as expected:
public static void testwrite(String filename) throws IOException {
    FileOutputStream fs = new FileOutputStream(new File(filename), false);
    DeflaterOutputStream fs2 = new DeflaterOutputStream(fs, new Deflater(3));
    for (int i = 0; i < 50; i++)
        for (int j = 0; j < 40; j++)
            fs2.write((byte) (i + 0x30));
    fs2.close();
}
public static void testread(String filename) throws IOException {
    FileInputStream fs = new FileInputStream(new File(filename));
    InflaterInputStream fs2 = new InflaterInputStream(fs);
    int c, n = 0;
    while ((c = fs2.read()) >= 0) {
        System.out.print((char) c);
        if (n++ % 40 == 0) System.out.println("");
    }
    fs2.close();
}
The first method compresses 2000 chars into a 106-byte file; the second reads it back fine.
The equivalent in C# would seem to be
private static void testwritecs(String filename) {
    FileStream fs = new FileStream(filename, FileMode.OpenOrCreate);
    DeflateStream fs2 = new DeflateStream(fs, CompressionMode.Compress, false);
    for (int i = 0; i < 50; i++) {
        for (int j = 0; j < 40; j++)
            fs2.WriteByte((byte)(i + 0x30));
    }
    fs2.Flush();
    fs2.Close();
}
But it generates a file of 2636 bytes (larger than the raw data, even though it has low entropy) and is not readable with the Java testread() method above. Any ideas?
Edited: The implementation is indeed not standard/portable (the bit of the docs calling it "an industry standard algorithm" seems a joke), and very crippled. Among other things, its behaviour changes radically depending on whether one writes the bytes one at a time or in blocks (which goes against the concept of a "stream"); if I change the above
for (int j = 0; j < 40; j++)
    fs2.WriteByte((byte)(i + 0x30));
by
byte[] buf = new byte[40];
for (int j = 0; j < 40; j++)
    buf[j] = (byte)(i + 0x30);
fs2.Write(buf, 0, buf.Length);
the compression gets (slightly) reasonable. Shame.
Don't use DeflateStream on anything except plain ASCII text, because it uses statically defined, hardcoded Huffman trees built for plain ASCII text. See my prior answer for more detail, or just use SharpZipLib and forget it.
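If you take the SharpZipLib route, its compression API mirrors java.util.zip, so the writer is nearly a transliteration of the Java version. A sketch, assuming the ICSharpCode.SharpZipLib package is referenced:
using System.IO;
using ICSharpCode.SharpZipLib.Zip.Compression;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;

static void TestWriteSharpZip(string filename)
{
    using (var fs = new FileStream(filename, FileMode.Create))
    using (var deflate = new DeflaterOutputStream(fs, new Deflater(3)))
    {
        for (int i = 0; i < 50; i++)
            for (int j = 0; j < 40; j++)
                deflate.WriteByte((byte)(i + 0x30));
    }
}
Like Java's Deflater, SharpZipLib writes zlib-framed data by default, so the testread() method above should be able to read the result.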
