Code change VB.NET to C# - c#

I have no idea where to ask a question like this, so I should probably apologize right away.
Private Function RCON_Command(ByVal Command As String, ByVal ServerData As Integer) As Byte()
    Dim Packet As Byte() = New Byte(CByte((13 + Command.Length))) {}
    Packet(0) = Command.Length + 9 'Packet Size (Integer)
    Packet(4) = 0 'Request Id (Integer)
    Packet(8) = ServerData 'SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
    For X As Integer = 0 To Command.Length - 1
        Packet(12 + X) = System.Text.Encoding.Default.GetBytes(Command(X))(0)
    Next
    Return Packet
End Function
Can someone tell me how this code should look in C#? I tried it myself but keep getting the error: Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)
When I add a cast, I then get an error saying the cast is unnecessary.
My code:
private byte[] RCON_Command(string command, int serverdata)
{
    byte[] packet = new byte[command.Length + 13];
    packet[0] = command.Length + 9;
    packet[4] = 0;
    packet[8] = serverdata;
    for (int i = 0; i < command.Length; i++)
    {
        packet[12 + i] = System.Text.Encoding.UTF8.GetBytes(command[i])[0];
    }
    return packet;
}
The errors are on the packet[0] and packet[8] lines.

You need to cast the two items to byte before assigning them. Another option, which I've done below, is to change the method to accept serverdata as a byte instead of an int - there's no point in taking the extra bytes only to throw them away.
Another problem is in the for loop - the string indexer returns a char, which UTF8.GetBytes() can't accept. I think my translation should work, but you'll need to test it.
private byte[] RCON_Command(string command, byte serverdata)
{
    byte[] packet = new byte[command.Length + 13];
    packet[0] = (byte)(command.Length + 9);
    packet[4] = 0;
    packet[8] = serverdata;
    for (int i = 0; i < command.Length; i++)
    {
        packet[12 + i] = System.Text.Encoding.UTF8.GetBytes(command)[i];
    }
    return packet;
}

Here you go. The Telerik converter was no use - that code wouldn't compile.
This code runs...
private byte[] RCON_Command(string Command, int ServerData)
{
    byte[] commandBytes = System.Text.Encoding.Default.GetBytes(Command);
    byte[] Packet = new byte[13 + commandBytes.Length + 1];
    for (int i = 0; i < Packet.Length; i++)
    {
        Packet[i] = (byte)0;
    }
    int index = 0;
    //Packet Size (Integer)
    byte[] bytes = BitConverter.GetBytes(Command.Length + 9);
    foreach (var byt in bytes)
    {
        Packet[index++] = byt;
    }
    //Request Id (Integer)
    bytes = BitConverter.GetBytes((int)0);
    foreach (var byt in bytes)
    {
        Packet[index++] = byt;
    }
    //SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
    bytes = BitConverter.GetBytes(ServerData);
    foreach (var byt in bytes)
    {
        Packet[index++] = byt;
    }
    foreach (var byt in commandBytes)
    {
        Packet[index++] = byt;
    }
    return Packet;
}

In addition to the need for casting, you need to be aware that C# uses array sizes when creating the array, not the upper bound that VB uses - so you need "14 + Command.Length":
private byte[] RCON_Command(string Command, int ServerData)
{
    byte[] Packet = new byte[14 + Command.Length];
    Packet[0] = Convert.ToByte(Command.Length + 9); //Packet Size (Integer)
    Packet[4] = 0; //Request Id (Integer)
    Packet[8] = Convert.ToByte(ServerData); //SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
    for (int X = 0; X < Command.Length; X++)
    {
        Packet[12 + X] = System.Text.Encoding.Default.GetBytes(Command[X].ToString())[0];
    }
    return Packet;
}

Just add the explicit casts. You might want to make sure that it's safe to downcast from a 32-bit value type to an 8-bit type.
packet[0] = (byte)(command.Length + 9);
...
packet[8] = (byte)serverdata;
EDIT:
TheEvilPenguin is also right that you will have a problem with your call to GetBytes().
This is how I would fix it to make sure I don't change the meaning of the existing VB.NET code:
packet[12 + i] = System.Text.Encoding.UTF8.GetBytes(new char[] {command[i]})[0];
And also, one more detail:
When you declare an array in VB.NET, you define the maximum array index. In C#, the number in the array declaration represents the number of elements in the array. This means that in the translation from VB.NET to C#, to keep equivalent behavior, you need to add + 1 to the number in the array declaration:
byte[] packet = new byte[command.Length + 13 + 1]; // or + 14 if you want
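Putting all of those fixes together, a faithful translation might look like this (an untested sketch; it keeps the original's Encoding.Default and the VB array size of 14 + Command.Length):
private byte[] RCON_Command(string command, int serverdata)
{
    // VB's New Byte(13 + Command.Length) declares an upper bound,
    // so the equivalent C# array needs one extra element.
    byte[] packet = new byte[command.Length + 14];
    packet[0] = (byte)(command.Length + 9); // Packet Size (Integer)
    packet[4] = 0;                          // Request Id (Integer)
    packet[8] = (byte)serverdata;           // SERVERDATA_EXECCOMMAND / SERVERDATA_AUTH (Integer)
    for (int i = 0; i < command.Length; i++)
    {
        // GetBytes takes a char[] (or a string), not a single char.
        packet[12 + i] = System.Text.Encoding.Default.GetBytes(new[] { command[i] })[0];
    }
    return packet;
}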

Related

Converting binary file reading (fread) code from MATLAB to C#

I need to reproduce in C# some MATLAB code I found, which reads a binary file. The code is:
% Skip header
fread(fid, 1, 'int32=>double', 0, 'b');
% Read one property at a time
i = 0;
while ~feof(fid)
    i = i + 1;
    % Read field name (keyword) and array size
    keyword = deblank(fread(fid, 8, 'uint8=>char')');
    keyword = strrep(keyword, '+', '_');
    num = fread(fid, 1, 'int32=>double', 0, 'b');
    % Read and interpret data type
    dtype = fread(fid, 4, 'uint8=>char')';
end
fclose(fid)
I’ve tried several methods of reading binary files in C#, but I haven’t got the right results. How should I proceed?
This is what I've done; it seems to kind of work so far:
string keyword = "", dtype = "";
int num;
FileStream fs = new FileStream(filename, FileMode.Open);
BinaryReader binreader = new BinaryReader(fs, Encoding.Default);
// skip header
binreader.ReadInt32();
for (int i = 0; i < 8; i++)
{
    keyword = keyword + binreader.ReadChar();
}
keyword = keyword.TrimEnd();
keyword = keyword.Replace("+", "_");
num = binreader.ReadInt32();
for (int i = 0; i < 4; i++)
{
    dtype = dtype + binreader.ReadChar();
}
The problem is that I should obtain: keyword=INTERHEAD, num=411 and dtype=INTE,
but what I'm getting is: keyword=INTERHEAD, num=-1694433280 and dtype=INTE.
The problem is in getting the num variable right.
I've changed ReadInt32 to ReadDouble, ReadUInt32 and so on, but never got 411.
Any help?
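For what it's worth, the numbers point at an endianness mismatch: the MATLAB code reads num with the 'b' (big-endian) flag, while BinaryReader.ReadInt32 assumes little-endian, and -1694433280 (0x9B010000) is exactly 411 (0x0000019B) with its bytes reversed. A minimal sketch of a fix, reusing the binreader above:
// The file stores the int32 big-endian ('b' in the MATLAB fread calls),
// but BinaryReader is little-endian, so reverse the four bytes first.
byte[] raw = binreader.ReadBytes(4);
Array.Reverse(raw);
num = BitConverter.ToInt32(raw, 0);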

checksum calculation using ArraySegment<byte>

I have an issue with the following method - I don't understand why it behaves the way it does:
private static bool chksumCalc(ref byte[] receive_byte_array)
{
    Console.WriteLine("receive_byte_array -> " + receive_byte_array.Length); //ok, 151 bytes in my case
    ArraySegment<byte> segment = new ArraySegment<byte>(receive_byte_array, 0, 149);
    Console.WriteLine("segment # -> " + segment.Count); //ok, 149 bytes
    BitArray resultBits = new BitArray(8); //hold the result
    Console.WriteLine("resultBits.Length -> " + resultBits.Length); //ok, 8 bits
    //now loop through the 149 bytes
    for (int i = segment.Offset; i < (segment.Offset + segment.Count); ++i)
    {
        BitArray curBits = new BitArray(segment.Array[i]);
        Console.WriteLine("curBits.Length -> " + curBits.Length); //gives me 229, not 8?
        resultBits = resultBits.Xor(curBits);
    }
    //some more things to do ... return true...
    //or else
    return false;
}
I need to XOR 149 bytes together, and I don't understand why segment.Array[i] doesn't give me one byte. If I have an array of 149 bytes, then segment.Array[1], for example, should yield the 2nd byte - or am I wrong? Where does the 229 come from? Can someone please clarify? Thank you.
This is the constructor you're calling (the byte is implicitly widened to an int): BitArray(int length)
Initializes a new instance of the BitArray class that can hold the specified number of bit values, which are initially set to false.
If you look, all of the constructors for BitArray read like that. I don't see why you need to use the BitArray class at all, though. Just use a byte to store your XOR result:
private static bool chksumCalc(ref byte[] receive_byte_array)
{
    var segment = new ArraySegment<byte>(receive_byte_array, 0, 149);
    byte resultBits = 0;
    for (var i = segment.Offset; i < (segment.Offset + segment.Count); ++i)
    {
        var curBits = segment.Array[i];
        resultBits = (byte)(resultBits ^ curBits);
    }
    //some more things to do ... return true...
    //or else
    return false;
}
I don't think you need the ArraySegment<T> either (not for the code presented), but I left it as is since it's beside the point of the question.
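As an aside, if you happen to like LINQ (requires using System.Linq), the same XOR fold over the first 149 bytes can be written as a one-liner; an untested sketch:
// Seeded Aggregate: XOR every byte of the segment into a single byte.
byte checksum = receive_byte_array.Take(149).Aggregate((byte)0, (acc, b) => (byte)(acc ^ b));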

Optimization - Encode a string and get hexadecimal representation of 3 bytes

I am currently working in an environment where performance is critical and this is what I am doing :
var iso_8859_5 = System.Text.Encoding.GetEncoding("iso-8859-5");
var dataToSend = iso_8859_5.GetBytes(message);
Then I need to group the bytes by 3, so I have a for loop that does this (i being the iterator of the loop):
byte[] dataByteArray = { dataToSend[i], dataToSend[i + 1], dataToSend[i + 2], 0 };
I then get an integer out of these 4 bytes
BitConverter.ToUInt32(dataByteArray, 0)
and finally the integer is converted to a hexadecimal string that I can place in a network packet.
The last two lines repeat about 150 times.
I am currently hitting 50 milliseconds of execution time, and ideally I would want to reach 0... Is there a faster way to do this that I am not aware of?
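(For reference, the loop described above presumably looks something like this; the StringBuilder and the "X8" format are assumptions, since the question doesn't show the hex-formatting step:)
var sb = new StringBuilder();
for (int i = 0; i + 2 < dataToSend.Length; i += 3)
{
    // Group 3 bytes plus a zero pad, then read them as a little-endian UInt32.
    byte[] dataByteArray = { dataToSend[i], dataToSend[i + 1], dataToSend[i + 2], 0 };
    sb.Append(BitConverter.ToUInt32(dataByteArray, 0).ToString("X8"));
}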
UPDATE
Just tried
string hex = BitConverter.ToString(dataByteArray);
hex = hex.Replace("-", "");
to get the hex string directly, but it is 3 times slower.
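For the hex step itself, one more variant that might be worth timing (an untested sketch): build the UInt32 with shifts instead of allocating a 4-byte array per group, then format it directly:
var sb = new StringBuilder(dataToSend.Length / 3 * 8);
for (int i = 0; i + 2 < dataToSend.Length; i += 3)
{
    // Same value as { b0, b1, b2, 0 } interpreted as a little-endian UInt32,
    // but without the intermediate array allocation.
    uint value = (uint)(dataToSend[i] | dataToSend[i + 1] << 8 | dataToSend[i + 2] << 16);
    sb.Append(value.ToString("X8"));
}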
Ricardo Silva's answer adapted
public byte[][] GetArrays(byte[] fullMessage, int size)
{
    var returnArrays = new byte[(fullMessage.Length / size) + 1][];
    int i, j;
    for (i = 0, j = 0; i < (fullMessage.Length - 2); i += size, j++)
    {
        returnArrays[j] = new byte[size + 1];
        Buffer.BlockCopy(
            src: fullMessage,
            srcOffset: i,
            dst: returnArrays[j],
            dstOffset: 0,
            count: size);
        returnArrays[j][returnArrays[j].Length - 1] = 0x00;
    }
    switch (fullMessage.Length % size)
    {
        case 0:
            returnArrays[j] = new byte[] { 0, 0, EOT, 0 };
            break;
        case 1:
            returnArrays[j] = new byte[] { fullMessage[i], 0, EOT, 0 };
            break;
        case 2:
            returnArrays[j] = new byte[] { fullMessage[i], fullMessage[i + 1], EOT, 0 };
            break;
    }
    return returnArrays;
}
After the line below you will get the complete byte array:
var dataToSend = iso_8859_5.GetBytes(message);
My suggestion is to work with Buffer.BlockCopy and test whether it is faster than your current method.
Try the code below and tell us if it is faster than your current code:
public byte[][] GetArrays(byte[] fullMessage, int size)
{
    var returnArrays = new byte[fullMessage.Length / size][];
    for (int i = 0, j = 0; i < fullMessage.Length; i += size, j++)
    {
        returnArrays[j] = new byte[size + 1];
        Buffer.BlockCopy(
            src: fullMessage,
            srcOffset: i,
            dst: returnArrays[j],
            dstOffset: 0,
            count: size);
        returnArrays[j][returnArrays[j].Length - 1] = 0x00;
    }
    return returnArrays;
}
EDIT 1: I ran the test below and the output was 245900ns (or 0.2459ms).
[TestClass()]
public class Form1Tests
{
    [TestMethod()]
    public void GetArraysTest()
    {
        var expected = new byte[] { 0x30, 0x31, 0x32, 0x00 };
        var size = 3;
        var stopWatch = new Stopwatch();
        stopWatch.Start();
        var iso_8859_5 = System.Text.Encoding.GetEncoding("iso-8859-5");
        var target = iso_8859_5.GetBytes("012");
        var arrays = Form1.GetArrays(target, size);
        BitConverter.ToUInt32(arrays[0], 0);
        stopWatch.Stop();
        foreach (var array in arrays)
        {
            for (int i = 0; i < expected.Count(); i++)
            {
                Assert.AreEqual(expected[i], array[i]);
            }
        }
        Console.WriteLine(string.Format("{0}ns", stopWatch.Elapsed.TotalMilliseconds * 1000000));
    }
}
EDIT 2
I looked at your code and I have only one suggestion. I understand that you need to add the EOT message, and that the length of the input array will not always be a multiple of the size you want to break it into.
BUT, the code now has TWO responsibilities, which breaks the S in SOLID.
The S stands for Single Responsibility - each method has ONE, and only ONE, responsibility.
The code you posted has TWO responsibilities (break the input array into N smaller arrays, and add EOT). Try to think of a way to create two totally independent methods (one to break an array into N other arrays, and another to append EOT to any array you pass in). This will let you write unit tests for each method (and guarantee that they work and won't be broken by future changes), and call the two methods from the class that does the system integration.

Efficiently convert byte array to Decimal

If I have a byte array and want to convert a contiguous 16 byte block of that array, containing .net's representation of a Decimal, into a proper Decimal struct, what is the most efficient way to do it?
Here's the code that showed up in my profiler as the biggest CPU consumer in a case that I'm optimizing.
public static decimal ByteArrayToDecimal(byte[] src, int offset)
{
    using (MemoryStream stream = new MemoryStream(src))
    {
        stream.Position = offset;
        using (BinaryReader reader = new BinaryReader(stream))
            return reader.ReadDecimal();
    }
}
To get rid of MemoryStream and BinaryReader, I thought feeding an array of BitConverter.ToInt32(src, offset + x)s into the Decimal(Int32[]) constructor would be faster than the solution I present below, but the version below is, strangely enough, twice as fast.
const byte DecimalSignBit = 128;
public static decimal ByteArrayToDecimal(byte[] src, int offset)
{
    return new decimal(
        BitConverter.ToInt32(src, offset),
        BitConverter.ToInt32(src, offset + 4),
        BitConverter.ToInt32(src, offset + 8),
        src[offset + 15] == DecimalSignBit,
        src[offset + 14]);
}
This is 10 times as fast as the MemoryStream/BinaryReader combo, and I tested it with a bunch of extreme values to make sure it works, but the decimal representation is not as straightforward as that of other primitive types, so I'm not yet convinced it works for 100% of the possible decimal values.
In theory however, there could be a way to copy those 16 contiguous bytes to some other place in memory and declare that to be a Decimal, without any checks. Is anyone aware of a method to do this?
(There's only one problem: although decimals are represented as 16 bytes, some of the possible values do not constitute valid decimals, so doing an unchecked memcpy could potentially break things...)
Or is there any other faster way?
Even though this is an old question, I was a bit intrigued, so decided to run some experiments. Let's start with the experiment code.
static void Main(string[] args)
{
    byte[] serialized = new byte[16 * 10000000];
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 10000000; ++i)
    {
        decimal d = i;
        // Serialize
        using (var ms = new MemoryStream(serialized))
        {
            ms.Position = (i * 16);
            using (var bw = new BinaryWriter(ms))
            {
                bw.Write(d);
            }
        }
    }
    var ser = sw.Elapsed.TotalSeconds;
    sw = Stopwatch.StartNew();
    decimal total = 0;
    for (int i = 0; i < 10000000; ++i)
    {
        // Deserialize
        using (var ms = new MemoryStream(serialized))
        {
            ms.Position = (i * 16);
            using (var br = new BinaryReader(ms))
            {
                total += br.ReadDecimal();
            }
        }
    }
    var dser = sw.Elapsed.TotalSeconds;
    Console.WriteLine("Time: {0:0.00}s serialization, {1:0.00}s deserialization", ser, dser);
    Console.ReadLine();
}
Result: Time: 1.68s serialization, 1.81s deserialization. This is our baseline. I also tried Buffer.BlockCopy to an int[4], which gives us 0.42s for deserialization. Using the method described in the question, deserialization goes down to 0.29s.
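The Buffer.BlockCopy variant mentioned there presumably looks something like this (an assumption; the post doesn't show it), reusing one int[4] scratch buffer:
int[] bits = new int[4];
for (int i = 0; i < 10000000; ++i)
{
    // Copy 16 raw bytes into the four ints the Decimal(int[]) constructor expects.
    Buffer.BlockCopy(serialized, i * 16, bits, 0, 16);
    total += new decimal(bits);
}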
In theory however, there could be a way to copy those 16 contiguous bytes to some other place in memory and declare that to be a Decimal, without any checks. Is anyone aware of a method to do this?
Well yes, the fastest way to do this is to use unsafe code, which is okay here because decimals are value types:
static unsafe void Main(string[] args)
{
    byte[] serialized = new byte[16 * 10000000];
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < 10000000; ++i)
    {
        decimal d = i;
        fixed (byte* sp = serialized)
        {
            *(decimal*)(sp + i * 16) = d;
        }
    }
    var ser = sw.Elapsed.TotalSeconds;
    sw = Stopwatch.StartNew();
    decimal total = 0;
    for (int i = 0; i < 10000000; ++i)
    {
        // Deserialize
        decimal d;
        fixed (byte* sp = serialized)
        {
            d = *(decimal*)(sp + i * 16);
        }
        total += d;
    }
    var dser = sw.Elapsed.TotalSeconds;
    Console.WriteLine("Time: {0:0.00}s serialization, {1:0.00}s deserialization", ser, dser);
    Console.ReadLine();
}
At this point, our result is: Time: 0.07s serialization, 0.16s deserialization. Pretty sure that's the fastest this is going to get... still, you have to accept unsafe here, and I assume stuff is written the same way as it's read.
@Eugene Beresovsky Reading from a stream is very costly. MemoryStream is certainly a powerful and versatile tool, but it has a pretty high cost compared to reading a binary array directly. That is probably why the second method performs better.
I have a third solution for you, but before I write it, it is necessary to say that I haven't tested its performance.
public static decimal ByteArrayToDecimal(byte[] src, int offset)
{
    var i1 = BitConverter.ToInt32(src, offset);
    var i2 = BitConverter.ToInt32(src, offset + 4);
    var i3 = BitConverter.ToInt32(src, offset + 8);
    var i4 = BitConverter.ToInt32(src, offset + 12);
    return new decimal(new int[] { i1, i2, i3, i4 });
}
This builds the decimal from the binary data without worrying about the canonical form of System.Decimal. It is the inverse of the default .NET bit-extraction method:
System.Int32[] bits = Decimal.GetBits((decimal)10);
EDITED:
This solution perhaps doesn't perform better, but it also doesn't have this problem: "(There's only one problem: although decimals are represented as 16 bytes, some of the possible values do not constitute valid decimals, so doing an unchecked memcpy could potentially break things...)".
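A quick round-trip illustration of that inverse relationship (an untested sketch):
decimal original = 123.45m;
int[] bits = decimal.GetBits(original);   // the four 32-bit chunks
byte[] raw = new byte[16];
Buffer.BlockCopy(bits, 0, raw, 0, 16);    // same 16-byte layout the method reads
decimal roundTripped = ByteArrayToDecimal(raw, 0); // == original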

C# compress a byte array

I do not know much about compression algorithms. I am looking for a simple compression algorithm (or code snippet) which can reduce the size of a byte[,,] or byte[]. I cannot make use of System.IO.Compression. Also, the data has lots of repetition.
I tried implementing the RLE algorithm (posted below for your inspection). However, it produces arrays 1.2 to 1.8 times larger than the input.
public static class RLE
{
    public static byte[] Encode(byte[] source)
    {
        List<byte> dest = new List<byte>();
        byte runLength;
        for (int i = 0; i < source.Length; i++)
        {
            runLength = 1;
            while (runLength < byte.MaxValue
                && i + 1 < source.Length
                && source[i] == source[i + 1])
            {
                runLength++;
                i++;
            }
            dest.Add(runLength);
            dest.Add(source[i]);
        }
        return dest.ToArray();
    }
    public static byte[] Decode(byte[] source)
    {
        List<byte> dest = new List<byte>();
        byte runLength;
        for (int i = 1; i < source.Length; i += 2)
        {
            runLength = source[i - 1];
            while (runLength > 0)
            {
                dest.Add(source[i]);
                runLength--;
            }
        }
        return dest.ToArray();
    }
}
I have also found a Java LZW implementation based on strings and integers. I have converted it to C# and the results look good (code posted below). However, I am not sure how it works, nor how to make it work with bytes instead of strings and integers.
public class LZW
{
    /* Compress a string to a list of output symbols. */
    public static int[] compress(string uncompressed)
    {
        // Build the dictionary.
        int dictSize = 256;
        Dictionary<string, int> dictionary = new Dictionary<string, int>();
        for (int i = 0; i < dictSize; i++)
            dictionary.Add("" + (char)i, i);
        string w = "";
        List<int> result = new List<int>();
        for (int i = 0; i < uncompressed.Length; i++)
        {
            char c = uncompressed[i];
            string wc = w + c;
            if (dictionary.ContainsKey(wc))
                w = wc;
            else
            {
                result.Add(dictionary[w]);
                // Add wc to the dictionary.
                dictionary.Add(wc, dictSize++);
                w = "" + c;
            }
        }
        // Output the code for w.
        if (w != "")
            result.Add(dictionary[w]);
        return result.ToArray();
    }
    /* Decompress a list of output ks to a string. */
    public static string decompress(int[] compressed)
    {
        int dictSize = 256;
        Dictionary<int, string> dictionary = new Dictionary<int, string>();
        for (int i = 0; i < dictSize; i++)
            dictionary.Add(i, "" + (char)i);
        string w = "" + (char)compressed[0];
        string result = w;
        for (int i = 1; i < compressed.Length; i++)
        {
            int k = compressed[i];
            string entry = "";
            if (dictionary.ContainsKey(k))
                entry = dictionary[k];
            else if (k == dictSize)
                entry = w + w[0];
            result += entry;
            // Add w+entry[0] to the dictionary.
            dictionary.Add(dictSize++, w + entry[0]);
            w = entry;
        }
        return result;
    }
}
Have a look here. I used this code as a basis to compress in one of my work projects. Not sure how much of the .NET Framework is accessible in the Xbox 360 SDK, so not sure how well this will work for you.
The problem with that RLE algorithm is that it is too simple. It prefixes every byte with how many times it is repeated, but that does mean that in long ranges of non-repeating bytes, each single byte is prefixed with a "1". On data without any repetitions this will double the file size.
This can be avoided by using code-type RLE instead: the 'code' (also called 'token') is a byte with two possible meanings; it either indicates how many times the single following byte is repeated, or it indicates how many non-repeating bytes follow that should be copied as they are. The two are distinguished by the highest bit, which leaves 7 bits available for the value, so the amount to copy or repeat per code can be up to 127.
This means that even in worst-case scenarios, the final size can only be about 1/127th larger than the original file size.
A good explanation of the whole concept, plus full working (and, in fact, heavily optimised) C# code, can be found here:
http://www.shikadi.net/moddingwiki/RLE_Compression
Note that sometimes the data will end up larger than the original anyway, simply because there are not enough repeating bytes in it for RLE to work. A good way to deal with such compression failures is to add a header to your final data: if you simply add an extra byte at the start that is 0 for uncompressed data and 1 for RLE-compressed data, then, when RLE fails to give a smaller result, you just save the data uncompressed, with the 0 in front, and your final output is exactly one byte larger than the original. The system at the other side can then read that starting byte and use it to determine whether the following data should be decompressed or just copied.
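To make the token scheme concrete, here is a minimal untested sketch (not the optimised code from the link above): tokens with the high bit set mean "repeat the next byte count times", and tokens with it clear mean "copy the next count bytes verbatim".
public static byte[] EncodeTokenRle(byte[] source)
{
    var dest = new List<byte>();
    int i = 0;
    while (i < source.Length)
    {
        // Count the run length of source[i], capped at 127 (7 bits).
        int run = 1;
        while (run < 127 && i + run < source.Length && source[i + run] == source[i])
            run++;
        if (run > 1)
        {
            dest.Add((byte)(0x80 | run)); // repeat token
            dest.Add(source[i]);
            i += run;
        }
        else
        {
            // Gather a stretch of non-repeating bytes, also capped at 127.
            int start = i;
            while (i < source.Length && i - start < 127
                && (i + 1 == source.Length || source[i] != source[i + 1]))
                i++;
            dest.Add((byte)(i - start)); // copy token (high bit clear)
            for (int k = start; k < i; k++)
                dest.Add(source[k]);
        }
    }
    return dest.ToArray();
}
public static byte[] DecodeTokenRle(byte[] source)
{
    var dest = new List<byte>();
    int i = 0;
    while (i < source.Length)
    {
        byte token = source[i++];
        int count = token & 0x7F;
        if ((token & 0x80) != 0)          // repeat token
        {
            byte value = source[i++];
            for (int k = 0; k < count; k++)
                dest.Add(value);
        }
        else                              // copy token
        {
            for (int k = 0; k < count; k++)
                dest.Add(source[i++]);
        }
    }
    return dest.ToArray();
}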
Look into Huffman codes; it's a pretty simple algorithm. Basically, use fewer bits for patterns that show up more often, and keep a table of how each is encoded. Your codewords also have to account for the fact that there are no separators to help you decode.
