How do I read Java ObjectOutputStream.writeInt() output with a BinaryReader in C#?

Okay, so I've got a game map creator programmed in Java which writes the map out to a file using ObjectOutputStream.writeInt().
Now I'm converting the game engine to C# XNA and I'm trying to load the map, but I'm getting numerical errors, so I'm wondering if anyone knows what I'm doing wrong?
Java writes an int as 32-bit big-endian, I believe (I could be wrong though).
Here is the code I'm using to read the height and width of the map in C#.
Edit: br is a BinaryReader.
width = (int)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(br.ReadBytes(sizeof(int)), 0));
height = (int)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(br.ReadBytes(sizeof(int)), 0));
Can anyone please tell me what I'm doing wrong? Or how to read the bytes from ObjectOutputStream.writeInt() properly in C#?
Edit: my second try failed as well. Here is the current code:
public byte[] ReadBigEndianBytes(int count, BinaryReader br)
{
    byte[] bytes = new byte[count];
    for (int i = count - 1; i >= 0; i--)
        bytes[i] = br.ReadByte();
    return bytes;
}

public void loadFile(int level)
{
    FileStream fs = new FileStream("map" + level + ".lv", FileMode.Open, FileAccess.Read);
    BinaryReader br = new BinaryReader(fs, System.Text.Encoding.BigEndianUnicode);
    width = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
    height = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
    tile = new int[width, height];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            tile[x, y] = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
        }
    }
}

ObjectOutputStream.writeInt()
Don't use that. Use DataOutputStream.writeInt(). It writes the same four bytes in network byte order, but it doesn't add the serialization header that ObjectOutputStream adds, so you won't have to skip it at the .NET end.
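For reference, the serialization framing that breaks the asker's first attempt is small and fixed: ObjectOutputStream starts the stream with the 2-byte magic 0xACED and the 2-byte version 0x0005, and writeInt() payloads go into block-data records, each introduced by a 0x77 marker plus a one-byte length (for blocks under 256 bytes). If you are stuck with a file already written by ObjectOutputStream, here is a minimal sketch of stripping that framing in C#; JavaSerialReader is a made-up name, and it assumes only short block-data records, which holds for a small map written with a handful of writeInt() calls:

```csharp
using System;
using System.IO;

static class JavaSerialReader
{
    // Reads ints written via ObjectOutputStream.writeInt().
    // Assumes a seekable stream laid out as: magic 0xACED, version 0x0005,
    // then TC_BLOCKDATA (0x77) records, each a length byte plus raw payload.
    public static int[] ReadInts(Stream s)
    {
        var br = new BinaryReader(s);
        if (br.ReadByte() != 0xAC || br.ReadByte() != 0xED)
            throw new InvalidDataException("Bad serialization magic");
        br.ReadByte(); br.ReadByte(); // skip version 0x0005

        var payload = new MemoryStream();
        while (s.Position < s.Length)
        {
            byte marker = br.ReadByte();
            if (marker != 0x77) break;      // only short block-data handled
            int len = br.ReadByte();
            payload.Write(br.ReadBytes(len), 0, len);
        }

        byte[] data = payload.ToArray();
        int[] result = new int[data.Length / 4];
        for (int i = 0; i < result.Length; i++)
        {
            // big-endian bytes to host int
            result[i] = (data[i * 4] << 24) | (data[i * 4 + 1] << 16)
                      | (data[i * 4 + 2] << 8) | data[i * 4 + 3];
        }
        return result;
    }
}
```

DataOutputStream output, by contrast, is just the raw big-endian bytes, which is why it is the better choice here.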

Absolutely correct:
Java writes as int 32 Big Endian I believe (I could be wrong though).
Remember, though: a .NET Int32 is little-endian ;)
[Edit] SOLUTION:
1) Here is Java code that writes 10 integers (Java ints are 32-bit, big-endian):
import java.io.*;

public class WriteBinary {
    public static void main(String[] args) {
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        String fname = "myfile.bin";
        try {
            System.out.println("Opening " + fname + "...");
            FileOutputStream fos = new FileOutputStream(fname);
            int ibyte;
            for (int i = 0; i < data.length; i++) {
                ibyte = ((data[i] >>> 24) & 0xff); fos.write(ibyte);
                ibyte = ((data[i] >>> 16) & 0xff); fos.write(ibyte);
                ibyte = ((data[i] >>> 8) & 0xff); fos.write(ibyte);
                ibyte = (data[i] & 0xff); fos.write(ibyte);
            }
            fos.close();
            System.out.println("File write complete.");
        } catch (IOException e) {
            System.out.println("I/O error: " + e.getMessage());
        }
    }
}
2) Here is the C# code that reads it. You'll notice the using System.Net, which provides .NET's equivalent of ntohl():
using System;
using System.IO;
using System.Net;

namespace ReadBinary
{
    class Program
    {
        static void Main(string[] args)
        {
            string fname = "myfile.bin";
            try
            {
                Console.WriteLine("Opening " + fname + "...");
                BinaryReader br = new BinaryReader(File.Open(fname, FileMode.Open));
                for (int i = 0; i < (int)(br.BaseStream.Length / 4); i++)
                {
                    int j = IPAddress.NetworkToHostOrder(br.ReadInt32());
                    Console.WriteLine("array[" + i + "]=" + j + "...");
                }
                br.Close();
                Console.WriteLine("Read complete.");
            }
            catch (IOException ex)
            {
                Console.WriteLine("I/O error: " + ex.Message);
            }
        }
    }
}

I think the proper way is to use the IPAddress.NetworkToHostOrder(Int32) method:
public void loadFile(int level)
{
    ...
    width = IPAddress.NetworkToHostOrder(br.ReadInt32());
    height = IPAddress.NetworkToHostOrder(br.ReadInt32());
    ...
}
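If you're on .NET Core 2.1+ or .NET 5+, System.Buffers.Binary.BinaryPrimitives can decode the big-endian bytes directly, with no byte-order swap to reason about. A small sketch; ReadJavaInt and BigEndianHelpers are made-up names:

```csharp
using System.Buffers.Binary;
using System.IO;

static class BigEndianHelpers
{
    // Java's writeInt() emits 4 bytes, most significant byte first.
    public static int ReadJavaInt(BinaryReader br)
        => BinaryPrimitives.ReadInt32BigEndian(br.ReadBytes(4));
}
```

So width = BigEndianHelpers.ReadJavaInt(br); reads back exactly the value the Java side wrote.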

Related

C#: I use Modbus/TCP to get data of type LReal. How do I turn it into a double?

I use Modbus/TCP to get data; the data type is LReal,
and I want to convert the LReal to a double.
This is my data:
the staic_13 value is 4.232,
but I get the registers [80] 16400, [81] -2098, [82] -9962, [83] -30933.
I don't know how to turn these into a double.
Based on this experimental Python code,
>>> x = [16400, -2098, -9962, -30933]
>>> struct.unpack(">d", struct.pack(">4h", *x))
(4.242,)
it looks like you'd need to concatenate those four 16-bit integers in big-endian format, then interpret those 8 bytes as a single big-endian double.
In .NET 6 (see this fiddle):
using System;
using System.Buffers.Binary;

short[] values = { 16400, -2098, -9962, -30933 };
byte[] buf = new byte[values.Length * sizeof(short)];
for (int i = 0; i < values.Length; i++)
{
    BinaryPrimitives.WriteInt16BigEndian(buf.AsSpan(i * sizeof(short)), values[i]);
}
double result = BinaryPrimitives.ReadDoubleBigEndian(buf);
Console.WriteLine(result);
In .NET 4 (see this fiddle):
using System;

public class Program
{
    public static void Main()
    {
        short[] values = { 16400, -2098, -9962, -30933 };
        byte[] buf = new byte[8];
        for (int i = 0; i < 4; i++)
        {
            byte[] sh_buf = BitConverter.GetBytes(values[i]);
            if (BitConverter.IsLittleEndian)
            {
                // Flip the bytes around if we're little-endian
                buf[(3 - i) * 2] = sh_buf[0];
                buf[(3 - i) * 2 + 1] = sh_buf[1];
            }
            else
            {
                buf[i * 2] = sh_buf[0];
                buf[i * 2 + 1] = sh_buf[1];
            }
        }
        double result = BitConverter.ToDouble(buf, 0);
        Console.WriteLine(result);
    }
}

How to get the "pixel" data values from a Photon Focus camera using the Pleora eBUS SDK c# or python?

I have a 3D Photon Focus camera (MV1-D2048x1088-3D06-760-G2-8) and I am using C# with the Pleora eBUS SDK version 5.1.1 on a Windows 10 machine. The camera is set to scan a laser line in LineFinder mode with DataFormat3D = 2, and it returns the data (buffer payload = 2 x 2048 = 4096 bytes). The payload seems correct. I want to save this data but I am having difficulty. How can I get the buffer into an array (or some structure) to save it to a file stream?
My code uses the .DataPointer property from the Pleora eBUS SDK, but I am not understanding what it is doing. The manual I have is included HERE - MAN075_PhotonFocus
private unsafe static void ThreadProc(object aParameters)
{
    object[] lParameters = (object[])aParameters;
    MainForm lThis = (MainForm)lParameters[0];
    for (;;)
    {
        if (lThis.mIsStopping)
        {
            // Signaled to terminate thread, return.
            return;
        }
        PvBuffer lBuffer = null;
        PvResult lOperationResult = new PvResult(PvResultCode.OK);
        // Retrieve next buffer from acquisition pipeline
        PvResult lResult = lThis.mStream.RetrieveBuffer(ref lBuffer, ref lOperationResult, 100);
        if (lResult.IsOK)
        {
            // Operation result of buffer is OK, display.
            if (lOperationResult.IsOK)
            {
                //lThis.displayControl.Display(lBuffer);
                uint bSize = lBuffer.GetPayloadSize();
                PvImage image1 = lBuffer.Image;
                uint height1 = image1.Height;
                uint width1 = image1.Width;
                uint offx1 = image1.OffsetX;
                uint offy1 = image1.OffsetY;
                PvPixelType imgpixtype = image1.PixelType;
                image1.Alloc(width1, (uint)2, imgpixtype);
                byte* data_pnt = image1.DataPointer;
                byte[] MSB_array = new byte[(int)width1];
                int buff_size = 2 * (int)width1;
                byte[] pix_array = new byte[buff_size];
                ulong tStamp = lBuffer.Timestamp;
                string msgOut = (bSize.ToString() + " TimeStamp " + tStamp.ToString() + " width " + width1.ToString());
                Console.WriteLine(msgOut);
                for (int i = 0; i < width1; i++)
                {
                    Console.Write((uint)*data_pnt);
                    MSB_array[i] = *data_pnt;
                    data_pnt += 1;
                }
                data_pnt += 1;
                Console.WriteLine(height1.ToString());
                for (int i = 0; i < width1; i++)
                {
                    ushort msb1 = MSB_array[i];
                    ushort last_4 = (ushort)(*data_pnt & 0x0F);
                    int integer1 = (msb1 << 4) + (ushort)(*data_pnt >> 4);
                    double dec_part = (float)last_4 / (float)16;
                    double val1 = (float)integer1 + dec_part;
                    Console.WriteLine(val1.ToString());
                    data_pnt += 1;
                }
                Console.WriteLine(height1.ToString());
            }
            else
            {
                uint bSize = lBuffer.GetPayloadSize();
                ulong tStamp = lBuffer.Timestamp;
                string msgOut = (bSize.ToString() + " BAD RESULT TimeStamp " + tStamp.ToString());
                Console.WriteLine(msgOut);
            }
            // We have an image - do some processing (...) and VERY IMPORTANT,
            // re-queue the buffer in the stream object.
            lThis.mStream.QueueBuffer(lBuffer);
        }
    }
}
My current solution is to loop through the buffer by incrementing the pointer and save the bytes into a new array (MSB_array). The way this data is packed (see the attached image in the question), I had to read the next line, bit-shift it, and add it to the byte in the MSB_array to get the final value:
for (int i = 0; i < width1; i++)
{
    Console.Write((uint)*data_pnt);
    MSB_array[i] = *data_pnt;
    data_pnt += 1;
}
data_pnt += 1;
Console.WriteLine(height1.ToString());
for (int i = 0; i < width1; i++)
{
    ushort msb1 = MSB_array[i];
    ushort last_4 = (ushort)(*data_pnt & 0x0F);
    int integer1 = (msb1 << 4) + (ushort)(*data_pnt >> 4);
    double dec_part = (float)last_4 / (float)16;
    double val1 = (float)integer1 + dec_part;
    Console.WriteLine(val1.ToString());
    data_pnt += 1;
}
I am only writing it out to the console for now, but the data is correct. There may be a better/faster way than the for loop using the pointer; an answer showing that would be appreciated.
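For completeness, here is a pointer-free sketch of the same unpacking over plain arrays, treating each pixel as a 12.4 fixed-point value: 8 MSBs in the first row, then 4 more integer bits and 4 fraction bits in the second row. This layout is inferred from the loops above, not from the camera manual, and PixelUnpack is a made-up name:

```csharp
static class PixelUnpack
{
    // value = ((msb << 4) | (lsb >> 4)) + (lsb & 0x0F) / 16.0
    public static double[] UnpackRows(byte[] msbRow, byte[] lsbRow)
    {
        var result = new double[msbRow.Length];
        for (int i = 0; i < msbRow.Length; i++)
        {
            int integerPart = (msbRow[i] << 4) | (lsbRow[i] >> 4);
            double fraction = (lsbRow[i] & 0x0F) / 16.0;
            result[i] = integerPart + fraction;
        }
        return result;
    }
}
```

Working on arrays also makes it easy to write the values out with a BinaryWriter instead of the console.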

How do I convert an array object saved as a string (Base64?) to a C# array or SQL table?

I have legacy code that was developed in Ruby; it saves an array as a string in the database, which I would like to decode during the migration to the new environment.
Legacy Ruby Code:
class CatalogoConjunto
include DataMapper::Resource
...
property :coordenadas, Object
...
...
def save_depois
if (coordenadas.length > 1 and coordenadas.include?([0, 0]) rescue false)
self.coordenadas = coordenadas - [[0, 0]]
save!
end
end
def save
save = super
save_depois
save
end
...
end
Samples of what gets saved in the coordenadas column:
"BAhbCFsHaQHYaQIDAVsHaQLZAWkCwwFbB2kB8WkCXQI= "
Other Samples:
"BAhbBlsHaQHRaQI6AQ== ", "BAhbBlsHaQLMAmkB3A== ", "BAhbB1sHaQKmAmkB81sHaQIkA2kBvQ== "
How do I find the encoding method?
The new application uses C# and the data migration is being done in SQL... but any light that shows me the way forward already helps me.
Ruby leaves me pretty lost.
Edit
The string "BAhbBlsHaQJ6A2kCIwI=" stands for the values (890, 547),
but using the following code I get the string "egMAACMCAAA=".
Trying to convert using C#:
int[,] test = new int[,] { { 890, 547 } };
byte[] result = new byte[test.Length * sizeof(int)];
Buffer.BlockCopy(test, 0, result, 0, result.Length);
var anotherString = Convert.ToBase64String(result);
Console.WriteLine(anotherString);

var stringFromDatabase = "BAhbBlsHaQJ6A2kCIwI= ";
byte[] byteArray = Convert.FromBase64String(stringFromDatabase);

// Doesn't work
int[,] newArr = new int[byteArray.Length / sizeof(int) / 2 + ((byteArray.Length / sizeof(int)) % 2), 2];
for (int ctr = 0; ctr < byteArray.Length / sizeof(int); ctr++)
{
    if (ctr % 2 != 0)
    {
        newArr[ctr / 2, 0] = BitConverter.ToInt32(byteArray, ctr * sizeof(int));
    }
    else
    {
        newArr[ctr / 2, 1] = BitConverter.ToInt32(byteArray, ctr * sizeof(int));
    }
}
The string generated by Ruby looks like Base64, but the values don't match.
DataMapper has a default behavior of marshalling the Object data type:
https://github.com/datamapper/dm-core/blob/master/lib/dm-core/property/object.rb
After some research I found a post on how marshalling in Ruby works:
https://ilyabylich.svbtle.com/ruby-marshalling-from-a-to-z
I just needed the numbers, so I did a simple decode that just looks for the integers:
public static List<int> Conversor(string stringFromDatabase)
{
    byte[] byteArray = Convert.FromBase64String(stringFromDatabase);
    List<int> retorno = new List<int>();
    for (int i = 0; i < byteArray.Length; i++)
    {
        if ((char)byteArray[i] == (char)105)
        {
            int valInt = 0;
            int primeiroByte = Convert.ToInt32(byteArray[i + 1]);
            if (primeiroByte == 0)
                retorno.Add(0);
            else if (primeiroByte > 4)
                retorno.Add(primeiroByte - 5);
            else if (primeiroByte > 0 && primeiroByte < 5)
            {
                valInt = byteArray[i + 2];
                for (int y = 1; y < primeiroByte; y++)
                {
                    valInt = valInt | (byteArray[i + 2 + y] << 8 * y);
                }
                retorno.Add(valInt);
            }
        }
    }
    return retorno;
}
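To sanity-check a decoder like this against the sample from the question: in the Marshal 4.8 format, 'i' (0x69) introduces an integer, a next byte of 1-4 gives how many little-endian payload bytes follow, and small positive integers are stored in a single byte as value + 5. Here is a self-contained sketch of the same scan (the names are mine, not from the original answer):

```csharp
using System;
using System.Collections.Generic;

static class MarshalInts
{
    // Mirrors the Conversor above: scan for 'i' (0x69) markers and decode
    // the little-endian integer payload that follows (positive ints only).
    public static List<int> Decode(string base64)
    {
        byte[] b = Convert.FromBase64String(base64);
        var result = new List<int>();
        for (int i = 0; i < b.Length; i++)
        {
            if (b[i] != 0x69) continue;
            int len = b[i + 1];
            if (len == 0) result.Add(0);
            else if (len > 4) result.Add(len - 5);   // small ints stored offset by 5
            else
            {
                int val = 0;
                for (int y = 0; y < len; y++)
                    val |= b[i + 2 + y] << (8 * y);
                result.Add(val);
            }
        }
        return result;
    }
}
```

Decoding "BAhbBlsHaQJ6A2kCIwI=" this way yields 890 and 547, matching the coordinates in the question. Note the scan is heuristic: a raw 0x69 byte inside other data would be misread as an integer marker.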

Is StreamReader.Readline() really the fastest method to count lines in a file?

While looking around for a while I found quite a few discussions on how to figure out the number of lines in a file.
For example these three:
c# how do I count lines in a textfile
Determine the number of lines within a text file
How to count lines fast?
So, I went ahead and ended up using what seems to be the most efficient (at least memory-wise?) method that I could find:
private static int countFileLines(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}
But this takes forever when the lines in the file are very long. Is there really no faster solution?
I've been trying to use StreamReader.Read() or StreamReader.Peek(), but I can't (or don't know how to) make either of them move on to the next line as soon as there's 'stuff' (chars? text?).
Any ideas please?
CONCLUSION/RESULTS (After running some tests based on the answers provided):
I tested the 5 methods below on two different files and got consistent results that seem to indicate that plain old StreamReader.ReadLine() is still one of the fastest ways. To be honest, I'm perplexed after all the comments and discussion in the answers.
File #1:
Size: 3,631 KB
Lines: 56,870
Results in seconds for File #1:
0.02 --> ReadLine method.
0.04 --> Read method.
0.29 --> ReadByte method.
0.25 --> Readlines.Count method.
0.04 --> ReadWithBufferSize method.
File #2:
Size: 14,499 KB
Lines: 213,424
Results in seconds for File #2:
0.08 --> ReadLine method.
0.19 --> Read method.
1.15 --> ReadByte method.
1.02 --> Readlines.Count method.
0.08 --> ReadWithBufferSize method.
Here are the 5 methods I tested based on all the feedback I received:
private static int countWithReadLine(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}

private static int countWithRead(string filePath)
{
    using (StreamReader _reader = new StreamReader(filePath))
    {
        int c = 0, count = 0;
        while ((c = _reader.Read()) != -1)
        {
            if (c == 10)
            {
                count++;
            }
        }
        return count;
    }
}

private static int countWithReadByte(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        int b;
        b = s.ReadByte();
        while (b >= 0)
        {
            if (b == 10)
            {
                i++;
            }
            b = s.ReadByte();
        }
        return i;
    }
}

private static int countWithReadLinesCount(string filePath)
{
    return File.ReadLines(filePath).Count();
}

private static int countWithReadAndBufferSize(string filePath)
{
    int bufferSize = 512;
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        byte[] b = new byte[bufferSize];
        int n = 0;
        n = s.Read(b, 0, bufferSize);
        while (n > 0)
        {
            i += countByteLines(b, n);
            n = s.Read(b, 0, bufferSize);
        }
        return i;
    }
}

private static int countByteLines(byte[] b, int n)
{
    int i = 0;
    for (int j = 0; j < n; j++)
    {
        if (b[j] == 10)
        {
            i++;
        }
    }
    return i;
}
No, it is not. The point is: it materializes the strings, which is not needed.
To COUNT lines you are much better off ignoring the "string" part and going for the "line" part.
A LINE is a series of bytes ending with \r\n (13, 10 - CR LF) or another marker.
Just run along the bytes, in a buffered stream, counting the number of appearances of your end-of-line marker.
The best way to figure out how to do this fast is to think about the fastest way to do it if you were not restricted to C#.
In assembly there is a CPU-level operation that scans memory for a character, so in assembly you would do the following:
Read a big part (or all) of the file into memory
Execute the SCASB instruction
Repeat as needed
So, in C# you want the compiler to get as close to that as possible.
I tried multiple methods and tested their performance:
The one that reads a single byte at a time is about 50% slower than the other methods, which all take around the same amount of time. You could try creating threads and doing this asynchronously, so that while you are waiting for a read you can start processing a previous read, but that sounds like a headache to me.
I would go with the one-liner File.ReadLines(filePath).Count(); it performs as well as the other methods I tested.
private static int countFileLines(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}

private static int countFileLines2(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        int b;
        b = s.ReadByte();
        while (b >= 0)
        {
            if (b == 10)
            {
                i++;
            }
            b = s.ReadByte();
        }
        return i + 1;
    }
}

private static int bufferSize = 4096; // field used by countFileLines3; tune to taste

private static int countFileLines3(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        byte[] b = new byte[bufferSize];
        int n = 0;
        n = s.Read(b, 0, bufferSize);
        while (n > 0)
        {
            i += countByteLines(b, n);
            n = s.Read(b, 0, bufferSize);
        }
        return i + 1;
    }
}

private static int countByteLines(byte[] b, int n)
{
    int i = 0;
    for (int j = 0; j < n; j++)
    {
        if (b[j] == 10)
        {
            i++;
        }
    }
    return i;
}

private static int countFileLines4(string filePath)
{
    return File.ReadLines(filePath).Count();
}

public static int CountLines(Stream stm)
{
    using (StreamReader _reader = new StreamReader(stm))
    {
        int c = 0, count = 0;
        while ((c = _reader.Read()) != -1)
        {
            if (c == '\n')
            {
                count++;
            }
        }
        return count;
    }
}
Yes, reading lines like that is the fastest and easiest way in any practical sense.
There are no shortcuts here. Files are not line based, so you have to read every single byte from the file to determine how many lines there are.
As TomTom pointed out, creating the strings is not strictly needed to count the lines, but a vast majority of the time spent will be waiting for the data to be read from the disk. Writing a much more complicated algorithm would perhaps shave off a percent of the execution time, and it would dramatically increase the time for writing and testing the code.
There are numerous ways to read a file. Usually, the fastest way is the simplest:
using (StreamReader sr = File.OpenText(fileName))
{
    string s = String.Empty;
    while ((s = sr.ReadLine()) != null)
    {
        //do what you gotta do here
    }
}
This page does a great performance comparison between several different techniques including using BufferedReaders, reading into StringBuilder objects, and into an entire array.
StreamReader is not the fastest way to read files in general because of the small overhead of decoding bytes to characters, so reading the file into a byte array is faster.
The results I get are a bit different each time due to caching and other processes, but here is one of the results I got (in milliseconds) with a 16 MB file :
75 ReadLines
82 ReadLine
22 ReadAllBytes
23 Read 32K
21 Read 64K
27 Read 128K
In general File.ReadLines should be a little bit slower than a StreamReader.ReadLine loop.
File.ReadAllBytes is slower with bigger files and will throw an OutOfMemoryException with huge files.
The default buffer size for FileStream is 4K, but on my machine 64K seemed the fastest.
private static int countWithReadLines(string filePath)
{
    int count = 0;
    var lines = File.ReadLines(filePath);
    foreach (var line in lines) count++;
    return count;
}

private static int countWithReadLine(string filePath)
{
    int count = 0;
    using (var sr = new StreamReader(filePath))
        while (sr.ReadLine() != null)
            count++;
    return count;
}

private static int countWithReadAllBytes(string filePath)
{
    // Reads the whole file into memory, then counts LF (10) bytes.
    byte[] array = File.ReadAllBytes(filePath);
    int count = 0;
    for (int i = 0; i < array.Length; i++)
        if (array[i] == 10)
            count++;
    return count;
}

private static int countWithFileStream(string filePath, int bufferSize = 1024 * 4)
{
    using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        int count = 0;
        byte[] array = new byte[bufferSize];
        while (true)
        {
            int length = fs.Read(array, 0, bufferSize);
            for (int i = 0; i < length; i++)
                if (array[i] == 10)
                    count++;
            if (length < bufferSize) return count;
        }
    } // end of using
}
and tested with:
var path = "1234567890.txt"; Stopwatch sw; string s = "";
File.WriteAllLines(path, Enumerable.Repeat("1234567890abcd", 1024 * 1024 )); // 16MB (16 bytes per line)
sw = Stopwatch.StartNew(); countWithReadLines(path) ; sw.Stop(); s += sw.ElapsedMilliseconds + " ReadLines \n";
sw = Stopwatch.StartNew(); countWithReadLine(path) ; sw.Stop(); s += sw.ElapsedMilliseconds + " ReadLine \n";
sw = Stopwatch.StartNew(); countWithReadAllBytes(path); sw.Stop(); s += sw.ElapsedMilliseconds + " ReadAllBytes \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 * 32); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 32K \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 * 64); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 64K \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 *128); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 128K \n";
MessageBox.Show(s);
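One caveat that explains the + 1 in some of the byte-counting versions in other answers: counting only LF bytes undercounts by one whenever the file does not end with a newline. A sketch that handles both cases on an in-memory buffer (LineCount is a made-up name):

```csharp
static class LineCount
{
    public static int Count(byte[] data)
    {
        int count = 0;
        for (int i = 0; i < data.Length; i++)
            if (data[i] == (byte)'\n') count++;
        // A non-empty buffer that doesn't end in '\n' still has a final line.
        if (data.Length > 0 && data[data.Length - 1] != (byte)'\n') count++;
        return count;
    }
}
```

Unconditionally returning i + 1, as some of the counting methods above do, over-counts by one for files that do end with a newline.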

What is the correct way to get a byte array from a FileStream?

The Microsoft website has the code snippet:
using (FileStream fsSource = new FileStream(pathSource,
    FileMode.Open, FileAccess.Read))
{
    // Read the source file into a byte array.
    byte[] bytes = new byte[fsSource.Length];
    int numBytesToRead = (int)fsSource.Length;
    int numBytesRead = 0;
    while (numBytesToRead > 0)
    {
        // Read may return anything from 0 to numBytesToRead.
        int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
        // Break when the end of the file is reached.
        if (n == 0)
            break;
        numBytesRead += n;
        numBytesToRead -= n;
    }
}
What concerns me is that fsSource.Length is a long, whereas numBytesRead is an int so at most only 2 * int.MaxValue can be read into bytes (the head and the tail of the stream). So my questions are:
Is there some reason that this is OK?
If not, how should you read a FileStream into a byte[]?
In this situation I wouldn't even bother processing the FileStream manually; use File.ReadAllBytes instead:
byte[] bytes = File.ReadAllBytes(pathSource);
To answer your question:
The sample code is fine for most applications where we are not reaching extremes.
If you have a really long stream, say a video, use a BufferedStream. Sample code is available on the MSDN site.
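Another option, when the stream's Length is unavailable or untrustworthy (network streams, compressed streams), is to let Stream.CopyTo do the read loop into a MemoryStream; a sketch (ReadFully is a made-up name):

```csharp
using System.IO;

static class StreamUtil
{
    public static byte[] ReadFully(Stream input)
    {
        // CopyTo performs the chunked Read() loop internally.
        using (var ms = new MemoryStream())
        {
            input.CopyTo(ms);
            return ms.ToArray();
        }
    }
}
```

This sidesteps the long-vs-int concern because MemoryStream grows as needed, though everything still has to fit in memory.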
Example using ReadAllBytes:
private byte[] m_cfgBuffer;

m_cfgBuffer = File.ReadAllBytes(m_FileName);
StringBuilder PartNbr = new StringBuilder();
StringBuilder Version = new StringBuilder();
int i, j;
byte b;
i = 356; // We know that the cfg file header ends at position 356 (1st hex(80))
b = m_cfgBuffer[i];
while (b != 0x80) // Scan for 2nd hex(80)
{
    i++;
    b = m_cfgBuffer[i];
}
// Now extract the part number - 6 bytes after hex(80)
m_PartNbrPos = i + 5;
for (j = m_PartNbrPos; j < m_PartNbrPos + 6; j++)
{
    char cP = (char)m_cfgBuffer[j];
    PartNbr.Append(cP);
}
m_PartNbr = PartNbr.ToString();
// Now, extract version number - 6 bytes after part number
m_VersionPos = (m_PartNbrPos + 6) + 6;
for (j = m_VersionPos; j < m_VersionPos + 2; j++)
{
    char cP = (char)m_cfgBuffer[j];
    Version.Append(cP);
}
m_Version = Version.ToString();
