I have an imported C++ method that receives a byte parameter, but according to the documentation I can send a letter to that parameter. These are the C++ and C# signatures:
int WINAPI Sys_InitType(HID_DEVICE device, BYTE type)
public static extern int Sys_InitType(IntPtr device, byte type);
This causes an error in C#. How do I send a letter in that parameter?
My code (a bit random):
//CRASHES
byte random = Convert.ToByte("A");
_ = RFIDReader.Sys_SetAntenna(g_hDevice, 0);
int lol = RFIDReader.Sys_InitType(g_hDevice, random);
_ = RFIDReader.Sys_SetAntenna(g_hDevice, 1);
CError.Text = lol.ToString();
Convert.ToByte(string) doesn't do what you think it does. According to the documentation, it:
Converts the specified string representation of a number to an equivalent 8-bit unsigned integer.
So byte random = Convert.ToByte("52"); would work and return the byte 52.
See here:
https://learn.microsoft.com/en-us/dotnet/api/system.convert.tobyte?view=net-6.0#system-convert-tobyte(system-string)
As was already pointed out in the comments, you will have to use a character instead of a string, so either this:
byte random = Convert.ToByte('A');
or a simple cast to byte
byte random = (byte)'A';
In case it is unknown to you: a byte can only hold values in the range 0 - 255, while a char can hold any value within the UTF-16 specification.
So this will not work:
byte random = Convert.ToByte('\u4542');
and will instead fail with the error:
Value was either too large or too small for an unsigned byte.
https://dotnetfiddle.net/anjxt5
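For completeness, here is how the corrected call might look with the names from the question (a sketch; RFIDReader, Sys_InitType, g_hDevice and CError all come from the question's own code):

// Pass the letter 'A' as its ASCII value (0x41) to the native call.
byte type = (byte)'A';
int result = RFIDReader.Sys_InitType(g_hDevice, type);
CError.Text = result.ToString();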
I am trying to convert a byte received from a database query.
EF Core returns nullable tinyint as byte? and I need to convert it to decimal.
Is there any way to convert it in OnModelCreating with the model builder in the DbContext?
I am not very familiar with EF Core. So far I have only managed to do this, after I already have my object in the handler:
decimal? newDecimal = Convert.ToDecimal(BitConverter.ToDouble(AddByteToArray(priceByte), 0));
private static byte[] AddByteToArray(byte? newByte)
{
    if (newByte != null)
    {
        if (newByte == 0)
        {
            return Enumerable.Repeat((byte)0x0, 8).ToArray();
        }
        byte[] bArray = new byte[1];
        // Not sure how to convert a non-null byte > 0 to byte[]?
        // BitConverter.ToDouble requires a byte[], while the tinyint comes back from the database as a byte.
        return bArray;
    }
    return null;
}
I think you are getting a little confused by the types here. The DB returns a byte? for a tinyint because a tinyint has only 8 bits of data; but otherwise it is an integer. If you want to convert it to a decimal, you would use the same mechanism as you would to convert an int or a long to a decimal: cast it.

You do not want to convert a byte array to a decimal, as that will try to interpret the data in the array as a binary representation of a decimal (see my last paragraph). So this code should suffice to do the conversion:
decimal? d = newByte == null ? null : (decimal?)newByte;
See that such a conversion is possible here: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/numeric-conversions
Note the remarks section here that indicates we are dealing with a binary representation of the number, where care must be taken in dealing with endianness, etc.
https://learn.microsoft.com/en-us/dotnet/api/system.bitconverter?view=net-6.0#remarks
Basically, numbers larger than a byte are technically stored as an array of bytes (since all memory is byte-addressable on x86), but the interpretation of those bytes as a number depends on the type of the number. For floating-point numbers especially, the structure of the data inside the byte array is complex, broken into fields that represent the sign, exponent and mantissa. And those are not always interpreted in a straightforward way. If you just give it a byte array with 27 as the first byte, you don't know where that ends up among the several fields that make up the binary representation of a double. It might happen to work, but probably not.
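If you do want the conversion to live in OnModelCreating, as the question asks, a value converter is one option. This is a minimal sketch inside your DbContext, assuming a hypothetical entity Product whose Price property is a decimal? in the model and a nullable tinyint in the database; note that EF Core only passes non-null values through the converter:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Store the decimal? model property as a tinyint (byte) column.
    modelBuilder.Entity<Product>()
        .Property(p => p.Price)
        .HasConversion(
            d => (byte)d.Value, // model -> provider (never called with null)
            b => (decimal)b);   // provider -> model
}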
Instead of
byte[] bArray = new byte[1];
You can use
byte[] bArray = { (byte)newByte };
I can't figure out how to pass a char * to this C++ function from C#:
extern "C" __declspec(dllexport)
unsigned int extractSegment(char * startPoint, unsigned int sizeToExtract)
{
    //do stuff
    shared_ptr<std::vector<char>> content(new std::vector<char>(startPoint, startPoint + sizeToExtract));
    //do more stuff
    return content->size();
}
This function is used to read a segment from a file and do some binary operations on it (which is why I use the vector of chars). The startPoint is the start of the segment I want to read. I cannot change this function.
In C# I tried reading the file into a byte[] array and declaring the DllImport with StringBuilder where the export had char *. I tried to call it like this:
byte[] arr = File.ReadAllBytes(filename);
StringBuilder sb = new StringBuilder(System.Text.Encoding.Unicode.GetString(arr, startPoint, arr.Length - startPoint));
extractSegment(sb,200);
This resulted in SEHException.
A char * can have several different meanings. In your case it appears to be a preallocated and filled array of bytes that is used as an input parameter for the extractSegment function.
The equivalent C# method would then take a byte[] parameter, i.e.
[DllImport(...)]
public static extern int extractSegment(byte[] startPoint, uint sizeToExtract);
I use byte[] because you mention binary operations, however if it is actually a string then you can also marshal it as such, setting the correct encoding in the DllImport attribute.
And just for further information, other possible options for char * that I can think of right now would be ref byte, out byte, ref char, out char or string. StringBuilder, on the other hand, is used when the caller allocates a character buffer that the native function fills with a string, i.e. an output char * buffer.
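For example, to hand the function a segment that starts at an offset, you can copy that slice into its own array first, since the default marshaling passes the start of the array (a sketch; filename and startPoint are the question's variables, and the 200-byte size matches the question's call):

byte[] arr = File.ReadAllBytes(filename);

// Copy the 200-byte segment beginning at startPoint into its own buffer,
// because the marshaler passes a pointer to the first element of the array.
byte[] segment = new byte[200];
Array.Copy(arr, startPoint, segment, 0, segment.Length);

int extracted = extractSegment(segment, (uint)segment.Length);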
I'm a little bit stuck on this: I have a variable called PORTBhex holding a value in the range 0x00 to 0x3F which is written to an external device via USB. The problem I am having is getting the value into this bit of code:
public bool PORTBwrite()
{
    Byte[] outputBuffer = new Byte[65];
    outputBuffer[0] = 0;    // Must be set to 0
    outputBuffer[1] = 0x00; // Command tells PIC18F4550 we want to write a byte
    outputBuffer[2] = IO.PORTBhex; // Hex value 0x00 - 0x3F to write to PORTB
    // The line above gives the error "cannot implicitly convert string to byte".
    // IO.PORTBhex is returned from the code in the second snippet.

    if (writeRawReportToDevice(outputBuffer))
    {
        return true;  // command completed OK
    }
    else
    {
        return false; // command failed .... error stuff goes here
    }
}
Now the problem is that the value I have is an integer, which is converted to hex using:
public static string ToHex(this int value)
{
    return string.Format("0x{0:X}", value);
}
The value starts off as an integer and is converted to hex; however, I cannot use the converted value, as it's of the wrong type: I am getting Cannot implicitly convert type 'string' to 'byte'.
Any idea what I can do to get around this please?
Thanks
EDIT:
I think I might have poorly described what I'm trying to achieve. I have an int variable holding a value in the range 0-255, which I have to convert to hex, formatted to be in the range 0x00 to 0xFF, and then set outputBuffer[2] to that value to send to the microcontroller.
The integer variable has some maths performed on it before it needs to be converted, so I cannot use byte variables throughout; it has to be converted to a hex byte afterwards.
Thanks
The solution is to change PORTBhex to be of type byte and not use that ToHex method at all.
Instead of IO.PORTBhex = ToHex(yourIntValue) use this:
IO.PORTBhex = checked((byte)yourIntValue);
It would be even better if you could make yourIntValue to be of type byte, too.
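For example, you can keep doing the maths on ints and only narrow to a byte at the very end (a sketch; rawValue and offset stand in for whatever calculation is actually performed):

int result = (rawValue + offset) & 0x3F;  // maths on ints, masked into the range 0x00 - 0x3F
IO.PORTBhex = checked((byte)result);      // narrow to byte only at the end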
outputBuffer[2] = Convert.ToByte(IO.PORTBhex, 16);
Although personally I'd probably try to avoid strings here in the first place and just store the byte.
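In other words, keep the value as a number from start to finish and format it as hex only when it needs to be displayed; a sketch:

int value = 42;                    // result of the maths, in the range 0 - 255
outputBuffer[2] = (byte)value;     // the device just needs the byte itself
string display = $"0x{value:X2}";  // "0x2A", only for showing to a user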
I am having a data conversion issue and need your help.
My project is an interop between C and C#. All the data from C is of char * type; the data itself could be binary or displayable chars, i.e. each byte is in the 0x00 to 0xFF range.
I am using Marshal::PtrToStringAnsi to convert the char * to String^ in my C++/CLI code, but I found that some byte values changed; for example, C382 was converted to C32C. I guess this is because ANSI can only convert 7-bit chars, and 82 is outside that range? Can anyone explain why, and what the best way is?
Basically what I want to do is: I don't need any encoding conversion, I just want to convert any char * face value to a string. E.g. if char *p = "ABC", I want String ^s = "ABC" as well; if *p = "C382" (representing binary values), I also want ^s = "C382".
Inside my .NET code, two subclasses will take the input string, which represents either binary data or a real string; if it is binary, it will convert "C382" to byte[] = 0xC3 0x82.
When reading the data back, C382 will be fetched from the database as binary data, and it eventually needs to be converted back to the char * "C382".
Does anybody have similar experience with doing this in both directions? I have tried many ways, but they all seem to involve encoding conversions.
The Marshal class will do this for you.
When converting from char* to byte[] you need to pass the pointer and the buffer length to the managed code. Then you can do this:
byte[] FromNativeArray(IntPtr nativeArray, int len)
{
    byte[] retval = new byte[len];
    Marshal.Copy(nativeArray, retval, 0, len);
    return retval;
}
And in the other direction there's nothing much to do. If you have a byte[] then you can simply pass that to your DLL function that expects to receive a char*.
C++
void ReceiveBuffer(char* arr, int len);
C#
[DllImport(...)]
static extern void ReceiveBuffer(byte[] arr, int len);
....
ReceiveBuffer(arr, arr.Length);
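A hypothetical round trip using the two pieces above, assuming nativePtr and len describe a buffer the C side handed you:

// Copy the native buffer into managed memory, byte for byte...
byte[] data = FromNativeArray(nativePtr, len);

// ...and later hand the same bytes back to the native side unchanged.
ReceiveBuffer(data, data.Length);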
How can I convert a System.Guid (in C#) to a string in decimal base (i.e. to a huge, comma-delimited integer, in base ten)?
Something like 433,352,133,455,122,445,557,129,...
Guid.ToString converts GUIDs to hexadecimal representations.
I'm using C# and .Net 2.0.
Please be aware that guid.ToByteArray() will NOT return an array that can be passed to BigInteger's constructor as-is. To use the array, a re-ordering is needed, plus a trailing zero byte to ensure that BigInteger sees the byte array as a positive number (see the MSDN docs). A simple but less performant function is:
private static string GuidToStringUsingStringAndParse(Guid value)
{
    var guidBytes = string.Format("0{0:N}", value);
    var bigInteger = BigInteger.Parse(guidBytes, NumberStyles.HexNumber);
    return bigInteger.ToString("N0", CultureInfo.InvariantCulture);
}
As Victor Derks pointed out in his answer, you should append a 00 byte to the end of the array to ensure the resulting BigInteger is positive.
According to the BigInteger Structure (System.Numerics) MSDN documentation:
To prevent the BigInteger(Byte[]) constructor from confusing the two's complement representation of a negative value with the sign and magnitude representation of a positive value, positive values in which the most significant bit of the last byte in the byte array would ordinarily be set should include an additional byte whose value is 0.
(see also: byte[] to unsigned BigInteger?)
Here's code to do it:
var guid = Guid.NewGuid();
return String.Format("{0:N0}",
    new BigInteger(guid.ToByteArray().Concat(new byte[] { 0 }).ToArray()));
using System;
using System.Numerics;

Guid guid = Guid.NewGuid();
byte[] guidAsBytes = guid.ToByteArray();
// Note: without the trailing zero byte described above, the BigInteger
// will come out negative whenever the high bit of the last byte is set.
BigInteger guidAsInt = new BigInteger(guidAsBytes);
string guidAsString = guidAsInt.ToString("N0");
Note that the byte order in the byte array reflects endian-ness of the GUID sub-components.
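A quick way to see that is to build a GUID whose hex digits make the reordering obvious; the first three fields come out little-endian in the array:

Guid g = new Guid("00010203-0405-0607-0809-0a0b0c0d0e0f");
Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// Prints: 03-02-01-00-05-04-07-06-08-09-0A-0B-0C-0D-0E-0F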
In the interest of brevity, you can accomplish the same work with one line of code:
string GuidToInteger = (new BigInteger(Guid.NewGuid().ToByteArray())).ToString("N0");
Keep in mind that .ToString("N0") is not "NO"... see the difference?
Enjoy