The web service documentation states:
byte[][] getImagesForFields(java.lang.String[] fieldnames)
Gets an array of images for the given fields.
However, when I use the method in a web application project built on ASP.NET 2.0 using C#,
the proxy generated for the web method declared above returns sbyte[].
Have a look at my code below:
formClearanceService.openSession(imageServiceUser);
formClearanceService.prepareInstance(formId);
byte[][] fieldImagesList = formClearanceService.getImagesForFields(fieldNames);
formClearanceService.closeSession();
Thus I get the following error: Cannot implicitly convert type 'sbyte[]' to 'byte[][]'
So now:
1- Should I ask the web service provider what is going on?
or
2- Is there any other way to use the sbyte[] as I was supposed to use the byte[][], like the following:
byte[] ssss = fieldImagesList[0];
Java has signed bytes, so that part is correct in some ways (although unsigned bytes are more natural) - but it is vexing that it is returning a single array rather than a jagged array. I expect you're going to have to compare some data to see what you have received vs what you expected.
But changing between signed and unsigned can be as simple as:
sbyte[] orig = ...
byte[] arr = Array.ConvertAll(orig, b => (byte)b);
or (faster) simply:
sbyte[] orig = ...
byte[] arr = new byte[orig.Length];
Buffer.BlockCopy(orig, 0, arr, 0, orig.Length);
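As an aside (my addition, relying on a CLR implementation detail rather than anything above): the runtime considers sbyte[] and byte[] compatible array types, so a zero-copy reinterpretation is possible by laundering the cast through object:
sbyte[] orig = ...
// The C# compiler rejects a direct sbyte[] -> byte[] cast, but the CLR
// treats the two array types as compatible, so going via object works:
byte[] arr = (byte[])(object)orig;
No data is copied here; arr and orig refer to the same array. The explicit conversions above remain the safer, more obvious choice.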
I have code that works with .NET Standard 2.1/.NET Core 3.0 because of this constructor of the BigInteger class. This constructor is not available in .NET Standard 2.0, and I'd like to be able to achieve the same functionality without forcing .NET Standard 2.1.
Here's the problem:
Convert a string to UTF-8 and hash it using SHA-256.
Convert the byte array to a BigInteger using big-endian byte order.
Here's my solution that works with .NET Standard 2.1:
static SHA256 sha256 = SHA256.Create();
internal static string GetEncoded(string value)
{
var data = sha256.ComputeHash(value.GetUTF8Bytes());
return new BigInteger(
value: data,
isUnsigned: true,
isBigEndian: true).ToString();
}
This is what the solution could be in .NET Standard 2.0; unfortunately, it doesn't produce the same result.
internal static string GetEncoded(string value)
{
var data = sha256.ComputeHash(value.GetUTF8Bytes());
return new BigInteger(value: data).ToString();
}
Here are some sample values and their expected encoded outputs.
"encoded": "68086943237164982734333428280784300550565381723532936263016368251445461241953",
"raw": "101 Wilson Lane"
"encoded": "101327353979588246869873249766058188995681113722618593621043638294296500696424",
"raw": "SLC"
From what I understand, the BigInteger(byte[]) ctor expects a little-endian byte array. The few solutions I tried didn't produce the expected results, so I'm turning to SO for answers.
Any help would be greatly appreciated.
Here is the code of the constructor you used: https://github.com/dotnet/corefx/blob/191ad0b5d52172366436322bf9d553dc770d23b1/src/System.Runtime.Numerics/src/System/Numerics/BigInteger.cs#L256
You could adapt it, replacing ReadOnlySpan<byte> with byte[].
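Alternatively, since the 2.1 constructor just interprets the big-endian bytes as an unsigned value, the same result should be reproducible on .NET Standard 2.0 by reversing the hash into little-endian order and appending a zero high byte so the BigInteger(byte[]) ctor never treats it as negative. A minimal sketch (assuming GetUTF8Bytes in your code wraps Encoding.UTF8.GetBytes):
using System.Numerics;
using System.Security.Cryptography;
using System.Text;

static SHA256 sha256 = SHA256.Create();

internal static string GetEncoded(string value)
{
    var data = sha256.ComputeHash(Encoding.UTF8.GetBytes(value));

    // BigInteger(byte[]) expects little-endian two's complement, so
    // reverse the big-endian hash and leave a trailing 0x00 high byte
    // to force the value to be interpreted as unsigned.
    var littleEndian = new byte[data.Length + 1];
    for (int i = 0; i < data.Length; i++)
        littleEndian[i] = data[data.Length - 1 - i];

    return new BigInteger(littleEndian).ToString();
}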
I have a byte[] object that I'm using as a data buffer.
I want to "read" it as an array of either primitive or non-primitive structs without duplicating the byte[] data in memory.
The goal would be something like:
byte[] myBuffer;
//Buffer is populated
int[] asInts = PixieDust_ToInt(myBuffer);
MyStruct[] asMyStructs = PixieDust_ToMyStruct(myBuffer);
Is this possible? If so, how?
Is it possible? Practically, yes!
Since .NET Core 2.1, MemoryMarshal lets us do this for spans. If you are satisfied with a span instead of an array, then yes.
var intSpan = MemoryMarshal.Cast<byte, int>(myByteArray.AsSpan());
The int span will contain byteCount / 4 integers.
As for custom structs... The documentation claims to require a "primitive type" on both sides of the conversion. However, you might try using a ref struct and see whether that is the actual constraint. I wouldn't be surprised if it worked!
Note that ref structs are still very limiting, but the limitation makes sense for the kind of reinterpret casts that we are talking about.
Edit: Wow, the constraint is much less strict. It requires any struct rather than a primitive, and it does not even have to be a ref struct. There is only a runtime check that throws if your struct contains a reference type anywhere in its hierarchy. That makes sense. So this should work for your custom structs as well as it does for ints. Enjoy!
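For illustration, a small sketch (MyPixel is a hypothetical struct standing in for your own type; any struct with no reference-type fields should pass the runtime check):
using System;
using System.Runtime.InteropServices;

struct MyPixel
{
    public byte R, G, B, A;
}

class Demo
{
    static void Main()
    {
        byte[] buffer = { 10, 20, 30, 255, 40, 50, 60, 255 };

        // Reinterpret the same memory as two 4-byte pixels; no copy is made.
        Span<MyPixel> pixels = MemoryMarshal.Cast<byte, MyPixel>(buffer.AsSpan());

        Console.WriteLine(pixels.Length);  // 2
        Console.WriteLine(pixels[1].R);    // 40

        // Writes through the span are visible in the original buffer.
        pixels[0].R = 99;
        Console.WriteLine(buffer[0]);      // 99
    }
}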
You will not be able to do this. To have a MyStruct[], you'll need to actually create an array of that type and copy the data over. You could, in theory, create your own custom type that acts as a collection but is actually just a facade over the byte[], copying the bytes out into struct values as each one is accessed. If you end up accessing all of the values, this copies all of the same data eventually; it just defers the work, which may be helpful if you only actually use a small number of the values.
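A minimal sketch of that facade idea (StructView and its layout assumptions are mine, not the answer's; it assumes T is blittable and the elements are tightly packed):
using System;
using System.Runtime.InteropServices;

sealed class StructView<T> where T : struct
{
    private readonly byte[] _buffer;

    // Assumes the marshaled size matches the runtime size (true for
    // simple blittable structs).
    private static readonly int Size = Marshal.SizeOf<T>();

    public StructView(byte[] buffer) => _buffer = buffer;

    public int Count => _buffer.Length / Size;

    // Decodes one element per access instead of converting the whole buffer.
    public T this[int index] =>
        MemoryMarshal.Read<T>(_buffer.AsSpan(index * Size, Size));
}
Usage would look like var view = new StructView<MyStruct>(myBuffer); var s = view[0]; with each access copying just one element's bytes.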
Consider the class System.BitConverter.
This class has methods to reinterpret the bytes starting at a given index as an Int32, Int64, Double, Boolean, etc., and to convert those types back into a sequence of bytes.
Example:
int x = 0x12345678;
var xBytes = BitConverter.GetBytes(x);
// xBytes is a byte array with length 4 (on a little-endian machine): 0x78, 0x56, 0x34, 0x12
var backToInt32 = BitConverter.ToInt32(xBytes, 0);
Or if your array contains mixed data:
double d = 3.1415;
short n = 42;
bool b = true;
ulong u = 0xFEDCBA9876543210;
// to arrays of bytes:
var dBytes = BitConverter.GetBytes(d);
var nBytes = BitConverter.GetBytes(n);
var bBytes = BitConverter.GetBytes(b);
var uBytes = BitConverter.GetBytes(u);
// Concat/ToArray require using System.Linq;
byte[] myBytes = dBytes.Concat(nBytes).Concat(bBytes).Concat(uBytes).ToArray();
// start indexes in myBytes:
int startIndexD = 0;
int startIndexN = dBytes.Length;
int startIndexB = startIndexN + nBytes.Length;
int startIndexU = startIndexB + bBytes.Length;
// back to the original elements:
double dRestored = BitConverter.ToDouble(myBytes, startIndexD);
short nRestored = BitConverter.ToInt16(myBytes, startIndexN);
bool bRestored = BitConverter.ToBoolean(myBytes, startIndexB);
ulong uRestored = BitConverter.ToUInt64(myBytes, startIndexU);
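One caveat worth adding (my note, not the answer's): BitConverter uses the machine's native byte order, so the byte sequences shown above assume a little-endian CPU. A quick check:
using System;

class EndianCheck
{
    static void Main()
    {
        // True on x86/x64 and typical ARM configurations.
        Console.WriteLine(BitConverter.IsLittleEndian);

        // Prints 78-56-34-12 on little-endian machines.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(0x12345678)));
    }
}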
The closest you will get to converting a byte[] to other base types is:
byte[] b = GetByteArray();
using(BinaryReader r = new BinaryReader(new MemoryStream(b)))
{
r.ReadInt32();
r.ReadDouble();
r.Read...();
}
There is, however, no simple way to convert a byte[] to any kind of object[].
I'm doing some conversions between some structures and their byte[] representations. I found two ways to do this, but the difference (performance, memory, etc.) is not clear to me.
Method 1:
public static T ByteArrayToStructure<T>(byte[] buffer)
{
    int length = buffer.Length;
    IntPtr i = Marshal.AllocHGlobal(length);       // unmanaged scratch buffer
    try
    {
        Marshal.Copy(buffer, 0, i, length);        // copy the managed bytes across
        return (T)Marshal.PtrToStructure(i, typeof(T));
    }
    finally
    {
        Marshal.FreeHGlobal(i);                    // release even if PtrToStructure throws
    }
}
Method 2:
public static T Deserialize<T>(byte[] buffer)
{
BinaryFormatter formatter = new BinaryFormatter();
using (System.IO.MemoryStream stream = new System.IO.MemoryStream(buffer))
{
return (T)formatter.Deserialize(stream);
}
}
So which one is better, and what is the major difference?
You are talking about two different approaches and two different types of data. If you are working with raw values converted to a byte array, go for the first method. If you are dealing with values serialized into a byte array (which also contains serialization metadata), go for the second method. Two different situations, two different methods; they are not, let me say, "synonyms".
Int32 Serialized into Byte[] -> Length 54
Int32 Converted to Byte[] -> Length 4
When using the BinaryFormatter to serialize your data, it will append metadata to the output stream for use during deserialization. So for the two examples you have, you'll find they won't produce the same T output given the same byte[] input. You'll need to decide whether you care about the metadata in the binary output or not. If you don't care, method 2 is obviously cleaner. If you need it to be straight binary, then you'll have to use something like method 1.
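A small sketch that makes the size difference above concrete (note that BinaryFormatter is obsolete and disabled by default in recent .NET versions, so treat this as illustrative only):
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class SizeDemo
{
    static void Main()
    {
        int value = 42;

        // Raw conversion: exactly the 4 bytes of the Int32.
        byte[] raw = BitConverter.GetBytes(value);
        Console.WriteLine(raw.Length);        // 4

        // Serialization: the value plus type metadata for deserialization.
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, value);
            Console.WriteLine(ms.Length);     // ~54, matching the figures quoted above
        }
    }
}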
I am having a data conversion issue and need your help.
My project is an interop between C and C#. All the data from C is of char* type; the data itself could be binary or displayable chars, i.e. each byte is in the 0x00 to 0xFF range.
I am using Marshal::PtrToStringAnsi to convert the char* to String^ in C++/CLI code, but I found some byte values changed; for example, C382 was converted to C32C. I guess this is possibly because ANSI is only capable of converting 7-bit chars, but 82 is over that range? Can anyone explain why, and what is the best way?
Basically what I want to do is this: I don't need any encoding conversion; I just want to convert the char* face value to a string, e.g. if char *p = "ABC" I want String ^s = "ABC" as well, and if *p = "C382" (representing binary values) I also want ^s = "C382".
Inside my .NET code, two subclasses will take the input string, which represents either binary data or a real string; if it is binary, they will convert "C382" to byte[] = 0xC3 0x82.
When reading back the data, C382 will be fetched from the database as binary data, and eventually it needs to be converted back to the char* "C382".
Does anybody have similar experience with how to do this in both directions? I tried many ways, but they all seem to involve encoding conversions.
The Marshal class will do this for you.
When converting from char* to byte[] you need to pass the pointer and the buffer length to the managed code. Then you can do this:
byte[] FromNativeArray(IntPtr nativeArray, int len)
{
byte[] retval = new byte[len];
Marshal.Copy(nativeArray, retval, 0, len);
return retval;
}
And in the other direction there's nothing much to do. If you have a byte[] then you can simply pass that to your DLL function that expects to receive a char*.
C++
void ReceiveBuffer(char* arr, int len);
C#
[DllImport(...)]
static extern void ReceiveBuffer(byte[] arr, int len);
....
ReceiveBuffer(arr, arr.Length);
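If you additionally need a System.String whose characters mirror the raw byte values one-to-one (the "face value" conversion described in the question), one option not covered in the answer above is ISO-8859-1 (Latin-1), which maps every byte 0x00-0xFF to the code point of the same value, so the round trip is lossless. A sketch:
using System;
using System.Text;

class Latin1Demo
{
    static void Main()
    {
        byte[] raw = { 0xC3, 0x82 };                    // arbitrary binary data
        Encoding latin1 = Encoding.GetEncoding(28591);  // ISO-8859-1: byte value == char value

        string s = latin1.GetString(raw);               // "\u00C3\u0082"
        byte[] roundTripped = latin1.GetBytes(s);       // { 0xC3, 0x82 } again, lossless

        Console.WriteLine(BitConverter.ToString(roundTripped)); // C3-82
    }
}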
I need to pass a byte array into a C++ COM object from C#. C++ will then fill the buffer for C# to read.
C++ function definition:
STDMETHODIMP CSampleGrabber::GetBuffer(byte* bd)
{
    int p = 0;
    while (p < nBufSize) {
        bd[p] = pLocalBuf[p];
        p++;
    }
    return S_OK;
}
C# code:
byte[] byTemp = new byte[nBufSize];
igb.GetBuffer(ref byTemp);
This crashes the program with no exception. Can someone please help? Thanks.
SOLVED:
with
byte[] byTemp = new byte[nBufSize];
GCHandle h = GCHandle.Alloc(byTemp, GCHandleType.Pinned);
igb.GetBuffer(h.AddrOfPinnedObject());
h.Free();  // unpin the buffer once the native side has filled it
Thanks
The parameter should not be declared as ref. You want something like:
uint GetBuffer(byte[] bd);
If you include the ref you are passing a pointer to the array, when you just want the array. (And by array, I mean pointer to the first element.)
I know this is an old question, but Google brought me here so it might bring someone else.
If you're using P/Invoke to call:
... GetBuffer(byte* bd)
it should look something along the lines of
[DllImport("MyDll.dll")]
... GetBuffer(ref byte bd);
And a buffer array in C# should be passed in like this:
var arr = new byte[Length];
GetBuffer(ref arr[0]);
This also works with char*, as you can just pass in the same byte array reference and then use string s = Encoding.<encoding>.GetString(arr);
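Putting the fragments above together, a minimal self-contained sketch ("MyDll.dll" and the GetBuffer export are placeholders for the real native library; adjust the signature to match yours):
using System;
using System.Runtime.InteropServices;

class NativeBufferDemo
{
    // ref byte marshals as a pointer to the first element, and the array
    // is pinned automatically for the duration of the call.
    [DllImport("MyDll.dll")]
    static extern void GetBuffer(ref byte bd);

    static void Main()
    {
        var arr = new byte[256];   // size must match what the native side expects
        GetBuffer(ref arr[0]);     // native side fills the buffer in place
        Console.WriteLine(BitConverter.ToString(arr, 0, 8));
    }
}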