PInvoke on the Windows Mobile platform - c#

So I'm trying to invoke a function I have in my unmanaged C++ dll.
void foo(char* in_file, char * out_file)
In my C# application I declare the same function as
[DllImport("dll.dll")]
public static extern void foo(byte[] in_file, byte[] out_file);
The actual parameters I pass through an ASCII encoder like so
byte[] param1 = Encoding.ASCII.GetBytes(filename1);
byte[] param2 = Encoding.ASCII.GetBytes(filename2);
foo(param1, param2);
Everything works, except when the length of param1 is 0, 36, or 72 characters (bytes). Then for some reason the DLL reports a received length of 1, 38, or 73, with the 0x16 character appended at the very end.
I have wasted half a day trying to debug and understand why this is happening. Any ideas?
What is the significance of multiples of 36?
Edit/Solution:
After I posted this a possible answer struck me. By appending the null character (0x00) to the end of my strings before converting them to byte arrays, I have eliminated the problem, although I'm not sure whether that is by design.
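A minimal sketch of that workaround, assuming foo is the declaration from the question: append a NUL byte so the unmanaged side sees a properly terminated C string.
static byte[] ToNullTerminatedAscii(string s)
{
    // Append '\0' so the native char* has an explicit terminator.
    return System.Text.Encoding.ASCII.GetBytes(s + "\0");
}

byte[] param1 = ToNullTerminatedAscii(filename1);
byte[] param2 = ToNullTerminatedAscii(filename2);
foo(param1, param2);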

You've stumbled across the answer yourself - nowhere is the length passed to the function.
You need to think carefully about what functions you are running these on and whether ASCII is really what you want.
You probably really want this, with no ASCII encoding necessary and null termination explicit in the marshaled type:
[DllImport("dll.dll")]
public static extern void foo(
[MarshalAs(UnmanagedType.LPStr)] string in_file,
[MarshalAs(UnmanagedType.LPStr)] string out_file);
But check your foo() implementation and make sure you understand what you really expect to be passed in.
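For reference, a usage sketch with that declaration; the marshaler converts each managed string to a null-terminated ANSI buffer for the duration of the call (the paths below are placeholders):
string inFile = @"\My Documents\in.dat";
string outFile = @"\My Documents\out.dat";
foo(inFile, outFile);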

Related

Getting Memory Exception Error when calling C++ method from C#

I have a C++ dll with the method signature below. I would like to know what the compatible C# method to call the C++ dll should be. On execution I am getting the error "Attempted to read or write protected memory". The dll used here is a third-party dll. I am able to call simple methods without pointers from the C++ dll.
C++ Declaration:
int __stdcall Deduct(int V, unsigned char *AI);
C# Declaration
[System.Runtime.InteropServices.DllImport("rwl.dll", EntryPoint = "_Deduct@8", CallingConvention = CallingConvention.Cdecl)]
public static extern long Deduct(long i,ref string AI);
As per the document of third party dll.
AI Used as input and output buffer. The buffer shall have at least 80 bytes
Input Additional Information for the transaction. It is a byte pointer containing 7 bytes.
e.g. Assume the usage information is not used,
if receipt number = 1234567,
hex value = 0x12D687,
7 bytes AI = D6 87 00 00 00 D6 87
Output On return, the AI contains the UD.
Please help.
I think the c# signature looks wrong.
[System.Runtime.InteropServices.DllImport("rwl.dll", EntryPoint = "_Deduct@8", CallingConvention = CallingConvention.Cdecl)]
public static extern long Deduct(int i, ref char[] AI);
If it is expecting a char[80] as its input, you may need to declare it as such and null-terminate it yourself, since whatever reads from it might otherwise read past the end of the array.
usage
char[] tmp = new char[80];
tmp[0] = (char)0x00;
int i = 0;
Deduct(i, ref tmp);
I'm not sure if this will work but hopefully it helps.
Besides string marshaling, I found a likely reason for the exception.
You have specified the return type and the first parameter as int in C, but long in C#. This could cause problems.
The C language only specifies minimum sizes for its integer types, which means the actual widths differ between environments:
int in C : signed integer, at least 16 bits
long in C : signed integer, at least 32 bits
int in C# : 32-bit signed integer, always
long in C# : 64-bit signed integer, always
Most major 32/64-bit OSes/compilers use 32 bits for int in C. However, long is always 64 bits in C#, so the signatures mismatch. I would suggest revising to:
public static extern int Deduct(int i, ref string AI); // Still not sure about string
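Pulling both answers together, a sketch of a declaration that also matches the documented 80-byte in/out buffer might look like the following. The C++ declaration in the question uses __stdcall (hence the _Deduct@8 decoration), so the DllImport default of StdCall is used here instead of Cdecl; the buffer contents are only an illustration based on the documentation excerpt above, not a verified binding:
[System.Runtime.InteropServices.DllImport("rwl.dll", EntryPoint = "_Deduct@8")]
public static extern int Deduct(int V, byte[] AI); // byte[] is pinned and passed as unsigned char*

// Usage sketch: the docs say the buffer must be at least 80 bytes; the first
// 7 bytes carry the input and on return the buffer holds the output (UD).
byte[] ai = new byte[80];
ai[0] = 0xD6; ai[1] = 0x87;        // last 4 digits of the receipt number (example from the docs)
ai[5] = 0xD6; ai[6] = 0x87;        // repeated per the documented layout; bytes 2..4 stay 0x00
int result = Deduct(1234567, ai);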

Python to C#: how to format data for sockets?

I am translating a python communication library into C#, and am having trouble interpreting how the string gets formatted before being sent over tcp.
The relevant portion of the code is as follows:
struct.pack(
'!HHBH'+str(var_name_len)+'s',
self.msg_id,
req_len,
flag,
var_name_len,
self.varname
)
Then it gets sent with: sendall()
I have looked at the Python documentation (https://docs.python.org/2/library/struct.html) but am still drawing a blank regarding the first argument: '!HHBH'+str(var_name_len)+'s'. I understand this is where the format is specified, but what the data is being formatted to is beyond me.
The python code that I am translating can be found at the following link:
https://github.com/linuxsand/py_openshowvar/blob/master/py_openshowvar.py
Any python and C# vigilantes out there that can help me build this bridge?
Edit: Based on jas' answer, I have written the following c# struct:
public struct messageFormat
{
ushort messageId;
ushort reqLength;
char functionType;
ushort varLengthHex;
string varname;
...
Once I populate it, I will need to send it over TCP. I have an open socket, but I assume I need to convert it to a byte[] so I can use socket.Send(byte[])?
Thanks
What's being formatted are the five arguments following the format string. Each argument has a corresponding element in the format string.
For the sake of the explanation, let's assume that var_name_len has the value 12 (presumably because var_name is a string of length 12 in this hypothetical case).
So the format string will be
!HHBH12s
Breaking that down according to the docs:
! Big-endian byte ordering will be used
H self.msg_id will be packed as a two-byte unsigned short
H req_len will be packed as above
B flag will be packed as a one-byte unsigned char
H var_name_len will be packed as a two-byte unsigned short
12s self.varname will be packed as a 12-byte string
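For the C# side, a minimal sketch of producing the same layout (big-endian fields, with the flag as a single byte) might look like this; the names follow the breakdown above and the helper methods are hypothetical, not part of any library:
using System;
using System.Text;

static byte[] Pack(ushort messageId, ushort reqLength, byte flag, string varname)
{
    byte[] nameBytes = Encoding.ASCII.GetBytes(varname);
    ushort varNameLen = (ushort)nameBytes.Length;

    byte[] buffer = new byte[2 + 2 + 1 + 2 + nameBytes.Length];
    WriteUInt16BigEndian(buffer, 0, messageId);                // H
    WriteUInt16BigEndian(buffer, 2, reqLength);                // H
    buffer[4] = flag;                                          // B (one byte)
    WriteUInt16BigEndian(buffer, 5, varNameLen);               // H
    Array.Copy(nameBytes, 0, buffer, 7, nameBytes.Length);     // <len>s
    return buffer;
}

static void WriteUInt16BigEndian(byte[] buffer, int offset, ushort value)
{
    // '!' in struct.pack means network (big-endian) byte order.
    buffer[offset] = (byte)(value >> 8);
    buffer[offset + 1] = (byte)value;
}
The resulting array can then be passed to Socket.Send(byte[]), which corresponds to Python's sendall().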

unsigned char* causes Access Violation Exception

I am developing a wrapper library that allows my project to use an x86 C++ dll in an AnyCPU environment. I have no control over the dll, thus I am using DllImport in C#.
There is a provided function which declared in C++: int __stdcall Func(int V, unsigned char *A)
and provided a sample declaration in VB: Private Declare Function Func Lib "lib.dll" Alias "_Func#8" (ByVal V As Long, A As Any) As Long
This function will request a device to Add/Deduct a value to/from a card by passing Convert.ToInt64(decimalValue) as V, and some customized information in A.
Here is the description of A:
It is a byte pointer containing 7 bytes.
The first 5 bytes are used to store info that will be passed to the card log (The last 4 digits of the receipt number should be included in the first 2 bytes, the other 3 could be A3A4A5)
The last 2 bytes are used to store info that will be passed to the device (The last 4 digits of the receipt number)
On return, the A contains a 32 bytes data.
After hours and hours of researches and tries, I cannot make result other than 'Access Violation Exception'. Please see the following draft code:
[DllImport("lib.dll", EntryPoint="_Func#8")]
public static external Int64 Func(Int64 V, StringBuilder sb);
string ReceiptNum = "ABC1234";
decimal Amount = 10m;
byte[] A = new byte[32];
A[0] = Convert.ToByte(ReceiptNum.Substring(3, 2));
A[1] = Convert.ToByte(ReceiptNum.Substring(5));
A[2] = Convert.ToByte("A3");
A[3] = Convert.ToByte("A4");
A[4] = Convert.ToByte("A5");
A[5] = Convert.ToByte(ReceiptNum.Substring(3, 2));
A[6] = Convert.ToByte(ReceiptNum.Substring(5));
StringBuilder sb = new StringBuilder(
new ASCIIEncoding().GetString(A), A.Length
);
Int64 Result = Func(Convert.ToInt64(Amount), sb);
And at this point it throws the exception. I have tried passing IntPtr, byte*, byte (by A[0]), byval, byref and none of them works. (Tried to deploy as x86 CPU as well)
Would appreciate any help! Thanks for your time!
PS - The reason for using StringBuilder is that the library contains a function that accepts a "char *Data" parameter and causes the same exception, and the solution there was to use StringBuilder to pass it as a pointer. That function's VB declaration is: Private Declare Function Func1 Lib "lib.dll" Alias "_Func1@12" (ByVal c As Byte, ByVal o As Byte, ByVal Data As String) As Long
Your external definition is wrong.
StringBuilder is a complex structure containing an array of c# char.
C# chars are UTF-16 (double bytes, with complex rules for decoding multi-char Unicode characters). Probably not what you are seeking.
If your data is a raw byte buffer, you should go for byte[].
Int64 is also C#'s long, which does not match the native int.
Well, your native method signature takes int, and you're trying to pass a long long. That's not going to work, rather obviously. The same is true with the return value. Don't assume that VB maps clearly to VB.NET, much less C# - Long means a 32-bit integer in VB, but not in .NET. Native code is a very complex environment, and you better know what you're doing when trying to interface with native.
StringBuilder should only be used for character data. That's not your case, and you should use byte[] instead. No matter the fun things you're doing, you're trying to pass invalid unicode data instead of raw bytes. The confusion is probably from the fact that C doesn't distinguish between byte[] and string - both are usually represented as char*.
Additionally, I don't see how you'd expect this wrapper to work in an AnyCPU environment. If the native DLL is 32-bit, you can only use it from a 32-bit process. AnyCPU isn't magic, it just defers the decision of bit-ness to runtime, rather than compile-time.
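Putting those points together, a sketch of a corrected declaration along the lines of these answers: int instead of Int64 to match the native signature, and a plain byte[] for the buffer (32 bytes, since the documentation says that much is written back). The entry point comes from the question; treat this as an illustration, not a verified binding:
[DllImport("lib.dll", EntryPoint = "_Func@8")]
public static extern int Func(int V, byte[] A);

// Usage sketch: fill the first 7 bytes as the documentation describes;
// on return the buffer holds the 32 bytes of output.
byte[] a = new byte[32];
// ... populate a[0]..a[6] with the receipt/info bytes from the question ...
int result = Func(10, a);
And, as the last answer notes, the process itself still has to run as 32-bit (x86) for the 32-bit DLL to load.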

C# Converting Each Char to Byte Array [duplicate]

How do I convert a string to a byte[] in .NET (C#) without manually specifying a specific encoding?
I'm going to encrypt the string. I can encrypt it without converting, but I'd still like to know why encoding comes to play here.
Also, why should encoding even be taken into consideration? Can't I simply get what bytes the string has been stored in? Why is there a dependency on character encodings?
Contrary to the answers here, you DON'T need to worry about encoding if the bytes don't need to be interpreted!
Like you mentioned, your goal is, simply, to "get what bytes the string has been stored in".
(And, of course, to be able to re-construct the string from the bytes.)
For those goals, I honestly do not understand why people keep telling you that you need the encodings. You certainly do NOT need to worry about encodings for this.
Just do this instead:
static byte[] GetBytes(string str)
{
byte[] bytes = new byte[str.Length * sizeof(char)];
System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
return bytes;
}
// Do NOT use on arbitrary bytes; only use on GetBytes's output on the SAME system
static string GetString(byte[] bytes)
{
char[] chars = new char[bytes.Length / sizeof(char)];
System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
return new string(chars);
}
As long as your program (or other programs) don't try to interpret the bytes somehow, which you obviously didn't mention you intend to do, then there is nothing wrong with this approach! Worrying about encodings just makes your life more complicated for no real reason.
Additional benefit to this approach: It doesn't matter if the string contains invalid characters, because you can still get the data and reconstruct the original string anyway!
It will be encoded and decoded just the same, because you are just looking at the bytes.
If you used a specific encoding, though, it would've given you trouble with encoding/decoding invalid characters.
It depends on the encoding of your string (ASCII, UTF-8, ...).
For example:
byte[] b1 = System.Text.Encoding.UTF8.GetBytes (myString);
byte[] b2 = System.Text.Encoding.ASCII.GetBytes (myString);
A small sample why encoding matters:
string pi = "\u03a0";
byte[] ascii = System.Text.Encoding.ASCII.GetBytes (pi);
byte[] utf8 = System.Text.Encoding.UTF8.GetBytes (pi);
Console.WriteLine (ascii.Length); //Will print 1
Console.WriteLine (utf8.Length); //Will print 2
Console.WriteLine (System.Text.Encoding.ASCII.GetString (ascii)); //Will print '?'
ASCII simply isn't equipped to deal with special characters.
Internally, the .NET framework uses UTF-16 to represent strings, so if you simply want to get the exact bytes that .NET uses, use System.Text.Encoding.Unicode.GetBytes (...).
See Character Encoding in the .NET Framework (MSDN) for more information.
The accepted answer is very, very complicated. Use the included .NET classes for this:
const string data = "A string with international characters: Norwegian: ÆØÅæøå, Chinese: 喂 谢谢";
var bytes = System.Text.Encoding.UTF8.GetBytes(data);
var decoded = System.Text.Encoding.UTF8.GetString(bytes);
Don't reinvent the wheel if you don't have to...
BinaryFormatter bf = new BinaryFormatter();
byte[] bytes;
MemoryStream ms = new MemoryStream();
string orig = "喂 Hello 谢谢 Thank You";
bf.Serialize(ms, orig);
ms.Seek(0, 0);
bytes = ms.ToArray();
MessageBox.Show("Original bytes Length: " + bytes.Length.ToString());
MessageBox.Show("Original string Length: " + orig.Length.ToString());
for (int i = 0; i < bytes.Length; ++i) bytes[i] ^= 168; // pseudo encrypt
for (int i = 0; i < bytes.Length; ++i) bytes[i] ^= 168; // pseudo decrypt
BinaryFormatter bfx = new BinaryFormatter();
MemoryStream msx = new MemoryStream();
msx.Write(bytes, 0, bytes.Length);
msx.Seek(0, 0);
string sx = (string)bfx.Deserialize(msx);
MessageBox.Show("Still intact :" + sx);
MessageBox.Show("Deserialize string Length(still intact): "
+ sx.Length.ToString());
BinaryFormatter bfy = new BinaryFormatter();
MemoryStream msy = new MemoryStream();
bfy.Serialize(msy, sx);
msy.Seek(0, 0);
byte[] bytesy = msy.ToArray();
MessageBox.Show("Deserialize bytes Length(still intact): "
+ bytesy.Length.ToString());
This is a popular question. It is important to understand what the question author is asking, and that it is different from what is likely the most common need. To discourage misuse of the code where it is not needed, I've answered the latter first.
Common Need
Every string has a character set and encoding. When you convert a System.String object to an array of System.Byte you still have a character set and encoding. For most usages, you'd know which character set and encoding you need and .NET makes it simple to "copy with conversion." Just choose the appropriate Encoding class.
// using System.Text;
Encoding.UTF8.GetBytes(".NET String to byte array")
The conversion may need to handle cases where the target character set or encoding doesn't support a character that's in the source. You have some choices: exception, substitution, or skipping. The default policy is to substitute a '?'.
// using System.Text;
var text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("You win €100"));
// -> "You win ?100"
Clearly, conversions are not necessarily lossless!
Note: For System.String the source character set is Unicode.
The only confusing thing is that .NET uses the name of a character set for the name of one particular encoding of that character set. Encoding.Unicode should be called Encoding.UTF16.
That's it for most usages. If that's what you need, stop reading here. See the fun Joel Spolsky article if you don't understand what encoding is.
Specific Need
Now, the question author asks is, "Every string is stored as an array of bytes, right? Why can't I simply have those bytes?"
He doesn't want any conversion.
From the C# spec:
Character and string processing in C# uses Unicode encoding. The char
type represents a UTF-16 code unit, and the string type represents a
sequence of UTF-16 code units.
So, we know that if we ask for the null conversion (i.e., from UTF-16 to UTF-16), we'll get the desired result:
Encoding.Unicode.GetBytes(".NET String to byte array")
But to avoid the mention of encodings, we must do it another way. If an intermediate data type is acceptable, there is a conceptual shortcut for this:
".NET String to byte array".ToCharArray()
That doesn't get us the desired datatype but Mehrdad's answer shows how to convert this Char array to a Byte array using BlockCopy. However, this copies the string twice! And, it too explicitly uses encoding-specific code: the datatype System.Char.
The only way to get to the actual bytes the String is stored in is to use a pointer. The fixed statement allows taking the address of values. From the C# spec:
[For] an expression of type string, ... the initializer computes the
address of the first character in the string.
To do so, the compiler writes code skipping over the other parts of the string object with RuntimeHelpers.OffsetToStringData. So, to get the raw bytes, just create a pointer to the string and copy the number of bytes needed.
// using System.Runtime.InteropServices
unsafe byte[] GetRawBytes(String s)
{
if (s == null) return null;
var codeunitCount = s.Length;
/* We know that String is a sequence of UTF-16 code units
and such code units are 2 bytes */
var byteCount = codeunitCount * 2;
var bytes = new byte[byteCount];
fixed(void* pRaw = s)
{
Marshal.Copy((IntPtr)pRaw, bytes, 0, byteCount);
}
return bytes;
}
As #CodesInChaos pointed out, the result depends on the endianness of the machine. But the question author is not concerned with that.
You need to take the encoding into account, because 1 character could be represented by 1 or more bytes (up to about 6), and different encodings will treat these bytes differently.
Joel has a posting on this:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
The first part of your question (how to get the bytes) was already answered by others: look in the System.Text.Encoding namespace.
I will address your follow-up question: why do you need to pick an encoding? Why can't you get that from the string class itself?
The answer is in two parts.
First of all, the bytes used internally by the string class don't matter, and whenever you assume they do you're likely introducing a bug.
If your program is entirely within the .Net world then you don't need to worry about getting byte arrays for strings at all, even if you're sending data across a network. Instead, use .Net Serialization to worry about transmitting the data. You don't worry about the actual bytes anymore: the Serialization formatter does it for you.
On the other hand, what if you are sending these bytes somewhere that you can't guarantee will pull in data from a .Net serialized stream? In this case, you definitely do need to worry about encoding, because obviously, this external system cares. So again, the internal bytes used by the string don't matter: you need to pick an encoding so you can be explicit about this encoding on the receiving end, even if it's the same encoding used internally by .Net.
I understand that in this case, you might prefer to use the actual bytes stored by the string variable in memory where possible, with the idea that it might save some work creating your byte stream. However, I put it to you it's just not important compared to making sure that your output is understood at the other end, and to guarantee that you must be explicit with your encoding. Additionally, if you really want to match your internal bytes, you can already just choose the Unicode encoding, and get those performance savings.
This brings me to the second part... picking the Unicode encoding is telling .Net to use the underlying bytes. You do need to pick this encoding because when some new-fangled Unicode-Plus comes out the .Net runtime needs to be free to use this newer, better encoding model without breaking your program. But, for the moment (and foreseeable future), just choosing the Unicode encoding gives you what you want.
It's also important to understand your string has to be rewritten to wire, and that involves at least some translation of the bit-pattern even when you use a matching encoding. The computer needs to account for things like Big vs Little Endian, network byte order, packetization, session information, etc.
Just to demonstrate that Mehrdad's sound answer works: his approach can even persist unpaired surrogate characters (which many have held against my answer, but of which everyone is equally guilty, e.g. System.Text.Encoding.UTF8.GetBytes and System.Text.Encoding.Unicode.GetBytes; those encoding methods can't persist a high surrogate character such as d800, and merely replace it with the value fffd):
using System;
class Program
{
static void Main(string[] args)
{
string t = "爱虫";
string s = "Test\ud800Test";
byte[] dumpToBytes = GetBytes(s);
string getItBack = GetString(dumpToBytes);
foreach (char item in getItBack)
{
Console.WriteLine("{0} {1}", item, ((ushort)item).ToString("x"));
}
}
static byte[] GetBytes(string str)
{
byte[] bytes = new byte[str.Length * sizeof(char)];
System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
return bytes;
}
static string GetString(byte[] bytes)
{
char[] chars = new char[bytes.Length / sizeof(char)];
System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
return new string(chars);
}
}
Output:
T 54
e 65
s 73
t 74
? d800
T 54
e 65
s 73
t 74
Try that with System.Text.Encoding.UTF8.GetBytes or System.Text.Encoding.Unicode.GetBytes; they will merely replace high surrogate characters with the value fffd.
Every time there's movement in this question, I'm still thinking of a serializer (be it from Microsoft or from a 3rd-party component) that can persist strings even if they contain unpaired surrogate characters; I google this every now and then: serialization unpaired surrogate character .NET. This doesn't make me lose any sleep, but it's kind of annoying when somebody comments every now and then that my answer is flawed, yet their answers are equally flawed when it comes to unpaired surrogate characters.
Darn, Microsoft should have just used System.Buffer.BlockCopy in its BinaryFormatter ツ
谢谢!
Try this, a lot less code:
System.Text.Encoding.UTF8.GetBytes("TEST String");
Well, I've read all the answers, and they were about using encodings, or about serialization that drops unpaired surrogates.
It's bad when the string, for example, comes from SQL Server, where it was built from a byte array storing, say, a password hash. If we drop anything from it, it'll store an invalid hash, and if we want to store it in XML, we want to leave it intact (because the XML writer throws an exception on any unpaired surrogate it finds).
So I use Base64 encoding of byte arrays in such cases, but hey, on the Internet there is only one solution to this in C#, and it had a bug in it and only went one way, so I've fixed the bug and written the reverse procedure. Here you are, future googlers:
public static byte[] StringToBytes(string str)
{
byte[] data = new byte[str.Length * 2];
for (int i = 0; i < str.Length; ++i)
{
char ch = str[i];
data[i * 2] = (byte)(ch & 0xFF);
data[i * 2 + 1] = (byte)((ch & 0xFF00) >> 8);
}
return data;
}
public static string StringFromBytes(byte[] arr)
{
char[] ch = new char[arr.Length / 2];
for (int i = 0; i < ch.Length; ++i)
{
ch[i] = (char)((int)arr[i * 2] + (((int)arr[i * 2 + 1]) << 8));
}
return new String(ch);
}
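For the Base64 route mentioned above (storing arbitrary bytes such as a hash safely in a string), a minimal sketch; the byte values are just placeholders:
byte[] hash = { 0xD8, 0x00, 0x12, 0x34 };         // arbitrary bytes, e.g. a password hash
string stored = Convert.ToBase64String(hash);      // safe to put in XML, a database column, etc.
byte[] restored = Convert.FromBase64String(stored);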
Also please explain why encoding should be taken into consideration.
Can't I simply get what bytes the string has been stored in?
Why this dependency on encoding?!!!
Because there is no such thing as "the bytes of the string".
A string (or more generically, a text) is composed of characters: letters, digits, and other symbols. That's all. Computers, however, do not know anything about characters; they can only handle bytes. Therefore, if you want to store or transmit text by using a computer, you need to transform the characters to bytes. How do you do that? Here's where encodings come to the scene.
An encoding is nothing but a convention to translate logical characters into physical bytes. The simplest and best known encoding is ASCII, and it is all you need if you write in English. For other languages you will need more complete encodings, with any of the Unicode flavours being the safest choice nowadays.
So, in short, trying to "get the bytes of a string without using encodings" is as impossible as "writing a text without using any language".
By the way, I strongly recommend you (and anyone, for that matter) to read this small piece of wisdom: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
C# to convert a string to a byte array:
public static byte[] StrToByteArray(string str)
{
System.Text.UTF8Encoding encoding=new System.Text.UTF8Encoding();
return encoding.GetBytes(str);
}
byte[] strToByteArray(string str)
{
System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
return enc.GetBytes(str);
}
With the advent of Span<T> released with C# 7.2, the canonical technique to capture the underlying memory representation of a string into a managed byte array is:
byte[] bytes = "rubbish_\u9999_string".AsSpan().AsBytes().ToArray();
Converting it back should be a non-starter because that means you are in fact interpreting the data somehow, but for the sake of completeness:
string s;
unsafe
{
fixed (char* f = &bytes.AsSpan().NonPortableCast<byte, char>().DangerousGetPinnableReference())
{
s = new string(f);
}
}
The names NonPortableCast and DangerousGetPinnableReference should further the argument that you probably shouldn't be doing this.
Note that working with Span<T> requires installing the System.Memory NuGet package.
Regardless, the actual original question and follow-up comments imply that the underlying memory is not being "interpreted" (which I assume means is not modified or read beyond the need to write it as-is), indicating that some implementation of the Stream class should be used instead of reasoning about the data as strings at all.
You can use the following code for conversion between string and byte array.
string s = "Hello World";
// String to Byte[]
byte[] byte1 = System.Text.Encoding.Default.GetBytes(s);
// OR
byte[] byte2 = System.Text.Encoding.ASCII.GetBytes(s);
// Byte[] to string -- decode with the same encoding that was used to encode
string str = System.Text.Encoding.Default.GetString(byte1);
I'm not sure, but I think the string stores its info as an array of Chars, which is inefficient with bytes. Specifically, the definition of a Char is "Represents a Unicode character".
Take this sample:
String str = "asdf éß";
String str2 = "asdf gh";
EncodingInfo[] info = Encoding.GetEncodings();
foreach (EncodingInfo enc in info)
{
System.Console.WriteLine(enc.Name + " - "
+ enc.GetEncoding().GetByteCount(str) + " - "
+ enc.GetEncoding().GetByteCount(str2));
}
Take note that the Unicode answer is 14 bytes in both instances, whereas the UTF-8 answer is only 9 bytes for the first, and only 7 for the second.
So if you just want the bytes used by the string, simply use Encoding.Unicode, but it will be inefficient with storage space.
The key issue is that a glyph in a string takes 32 bits (16 bits for a character code) but a byte only has 8 bits to spare. A one-to-one mapping doesn't exist unless you restrict yourself to strings that only contain ASCII characters. System.Text.Encoding has lots of ways to map a string to byte[], you need to pick one that avoids loss of information and that is easy to use by your client when she needs to map the byte[] back to a string.
Utf8 is a popular encoding, it is compact and not lossy.
Use:
string text = "string";
byte[] array = System.Text.Encoding.UTF8.GetBytes(text);
The result is:
[0] = 115
[1] = 116
[2] = 114
[3] = 105
[4] = 110
[5] = 103
Fastest way
public static byte[] GetBytes(string text)
{
return System.Text.ASCIIEncoding.UTF8.GetBytes(text);
}
EDIT
As Makotosan commented, this is now the best way:
Encoding.UTF8.GetBytes(text)
The closest approach to the OP's question is Tom Blodget's, which actually goes into the object and extracts the bytes. I say closest because it depends on implementation of the String Object.
"Can't I simply get what bytes the string has been stored in?"
Sure, but that's where the fundamental error in the question arises. The String is an object which could have an interesting data structure. We already know it does, because it allows unpaired surrogates to be stored. It might store the length. It might keep a pointer to each of the 'paired' surrogates allowing quick counting. Etc. All of these extra bytes are not part of the character data.
What you want is each character's bytes in an array. And that is where 'encoding' comes in. By default you will get UTF-16LE. If you don't care about the bytes themselves except for the round trip, then you can choose any encoding, including the 'default', and convert it back later (assuming the same parameters, such as what the default encoding was, code points, bug fixes, things allowed such as unpaired surrogates, etc.).
But why leave the 'encoding' up to magic? Why not specify the encoding so that you know what bytes you are gonna get?
"Why is there a dependency on character encodings?"
Encoding (in this context) simply means the bytes that represent your string. Not the bytes of the string object. You wanted the bytes the string has been stored in -- this is where the question was asked naively. You wanted the bytes of string in a contiguous array that represent the string, and not all of the other binary data that a string object may contain.
Which means how a string is stored is irrelevant. You want a string "Encoded" into bytes in a byte array.
I like Tom Blodget's answer because he took you towards the 'bytes of the string object' direction. It's implementation-dependent though, and because he's peeking at internals, it might be difficult to reconstitute a copy of the string.
Mehrdad's response is wrong because it is misleading at the conceptual level. You still have a list of bytes, encoded. His particular solution allows for unpaired surrogates to be preserved -- this is implementation dependent. His particular solution would not produce the string's bytes accurately if GetBytes returned the string in UTF-8 by default.
I've changed my mind about this (Mehrdad's solution) -- this isn't getting the bytes of the string; rather it is getting the bytes of the character array that was created from the string. Regardless of encoding, the char datatype in c# is a fixed size. This allows a consistent length byte array to be produced, and it allows the character array to be reproduced based on the size of the byte array. So if the encoding were UTF-8, but each char was 6 bytes to accommodate the largest utf8 value, it would still work. So indeed -- encoding of the character does not matter.
But a conversion was used -- each character was placed into a fixed size box (c#'s character type). However what that representation is does not matter, which is technically the answer to the OP. So -- if you are going to convert anyway... Why not 'encode'?
How do I convert a string to a byte[] in .NET (C#) without manually specifying a specific encoding?
A string in .NET represents text as a sequence of UTF-16 code units, so the bytes are encoded in memory in UTF-16 already.
Mehrdad's Answer
You can use Mehrdad's answer, but it does actually use an encoding because chars are UTF-16. It calls ToCharArray which looking at the source creates a char[] and copies the memory to it directly. Then it copies the data to a byte array that is also allocated. So under the hood it is copying the underlying bytes twice and allocating a char array that is not used after the call.
Tom Blodget's Answer
Tom Blodget's answer is 20-30% faster than Mehrdad's since it skips the intermediate step of allocating a char array and copying the bytes to it, but it requires you to compile with the /unsafe option. If you absolutely do not want to use an encoding, I think this is the way to go. If you put your encryption logic inside the fixed block, you don't even need to allocate a separate byte array and copy the bytes to it.
Also, why should encoding be taken into consideration? Can't I simply get what bytes the string has been stored in? Why is there a dependency on character encodings?
Because that is the proper way to do it. string is an abstraction.
Using an encoding could give you trouble if you have 'strings' with invalid characters, but that shouldn't happen. If you are getting data into your string with invalid characters you are doing it wrong. You should probably be using a byte array or a Base64 encoding to start with.
If you use System.Text.Encoding.Unicode, your code will be more resilient. You don't have to worry about the endianness of the system your code will be running on. You don't need to worry if the next version of the CLR will use a different internal character encoding.
I think the question isn't why you want to worry about the encoding, but why you want to ignore it and use something else. Encoding is meant to represent the abstraction of a string in a sequence of bytes. System.Text.Encoding.Unicode will give you a little endian byte order encoding and will perform the same on every system, now and in the future.
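A small sketch of that point: the encoding, not the machine, determines the byte order you get back, so the output below is the same on every system (the sample string is just illustrative):
string s = "Aé";
Console.WriteLine(BitConverter.ToString(System.Text.Encoding.Unicode.GetBytes(s)));          // 41-00-E9-00 (UTF-16 LE)
Console.WriteLine(BitConverter.ToString(System.Text.Encoding.BigEndianUnicode.GetBytes(s))); // 00-41-00-E9 (UTF-16 BE)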
You can use following code to convert a string to a byte array in .NET
string s_unicode = "abcéabc";
byte[] utf8Bytes = System.Text.Encoding.UTF8.GetBytes(s_unicode);
If you really want a copy of the underlying bytes of a string, you can use a function like the one that follows. However, you shouldn't; please read on to find out why.
[DllImport(
"msvcrt.dll",
EntryPoint = "memcpy",
CallingConvention = CallingConvention.Cdecl,
SetLastError = false)]
private static extern unsafe void* UnsafeMemoryCopy(
void* destination,
void* source,
uint count);
public static byte[] GetUnderlyingBytes(string source)
{
var length = source.Length * sizeof(char);
var result = new byte[length];
unsafe
{
fixed (char* firstSourceChar = source)
fixed (byte* firstDestination = result)
{
var firstSource = (byte*)firstSourceChar;
UnsafeMemoryCopy(
firstDestination,
firstSource,
(uint)length);
}
}
return result;
}
This function will get you a copy of the bytes underlying your string, pretty quickly. You'll get those bytes in whatever way they are encoded on your system. This encoding is almost certainly UTF-16LE, but that is an implementation detail you shouldn't have to care about.
It would be safer, simpler and more reliable to just call,
System.Text.Encoding.Unicode.GetBytes()
In all likelihood this will give the same result, is easier to type, and the bytes will round-trip (as well as a byte representation in Unicode can) with a call to
System.Text.Encoding.Unicode.GetString()
Here is my unsafe implementation of String to Byte[] conversion:
public static unsafe Byte[] GetBytes(String s)
{
Int32 length = s.Length * sizeof(Char);
Byte[] bytes = new Byte[length];
fixed (Char* pInput = s)
fixed (Byte* pBytes = bytes)
{
Byte* source = (Byte*)pInput;
Byte* destination = pBytes;
if (length >= 16)
{
do
{
*((Int64*)destination) = *((Int64*)source);
*((Int64*)(destination + 8)) = *((Int64*)(source + 8));
source += 16;
destination += 16;
}
while ((length -= 16) >= 16);
}
if (length > 0)
{
if ((length & 8) != 0)
{
*((Int64*)destination) = *((Int64*)source);
source += 8;
destination += 8;
}
if ((length & 4) != 0)
{
*((Int32*)destination) = *((Int32*)source);
source += 4;
destination += 4;
}
if ((length & 2) != 0)
{
*((Int16*)destination) = *((Int16*)source);
source += 2;
destination += 2;
}
if ((length & 1) != 0)
{
++source;
++destination;
destination[0] = source[0];
}
}
}
return bytes;
}
It's way faster than the accepted answer's, even if not as elegant.
Here are my Stopwatch benchmarks over 10000000 iterations:
[First String: Length 20]
Buffer.BlockCopy: 746ms
Unsafe: 557ms
[Second String: Length 50]
Buffer.BlockCopy: 861ms
Unsafe: 753ms
[Third String: Length 100]
Buffer.BlockCopy: 1250ms
Unsafe: 1063ms
In order to use it, you have to tick "Allow Unsafe Code" in your project build properties.
As of .NET Framework 3.5, this method can also be used as a String extension:
public static unsafe class StringExtensions
{
public static Byte[] ToByteArray(this String s)
{
// Method Code
}
}
Upon being asked what you intend to do with the bytes, you responded:
I'm going to encrypt it. I can encrypt it without converting but I'd still like to know why encoding comes to play here. Just give me the bytes is what I say.
Regardless of whether you intend to send this encrypted data over the network, load it back into memory later, or stream it to another process, you are clearly intending to decrypt it at some point. In that case, the answer is that you're defining a communication protocol. A communication protocol should not be defined in terms of implementation details of your programming language and its associated runtime. There are several reasons for this:
You may need to communicate with a process implemented in a different language or runtime. (This might include a server running on another machine or sending the string to a JavaScript browser client, for example.)
The program may be re-implemented in a different language or runtime in the future.
The .NET implementation might change the internal representation of strings. You may think this sounds far-fetched, but this actually happened in Java 9 to reduce memory usage. There's no reason .NET couldn't follow suit. Skeet suggests that UTF-16 probably isn't optimal today, given the rise of emoji and other blocks of Unicode that need more than 2 bytes to represent, increasing the likelihood that the internal representation could change in the future.
For communicating (either with a completely disparate process or with the same program in the future), you need to define your protocol strictly to minimize the difficulty of working with it or accidentally creating bugs. Depending on .NET's internal representation is not a strict, clear, or even guaranteed to be consistent definition. A standard encoding is a strict definition that will not fail you in the future.
In other words, you can't satisfy your requirement for consistency without specifying an encoding.
You may certainly choose to use UTF-16 directly if you find that your process performs significantly better since .NET uses it internally or for any other reason, but you need to choose that encoding explicitly and perform those conversions explicitly in your code rather than depending on .NET's internal implementation.
So choose an encoding and use it:
using System.Text;
// ...
Encoding.Unicode.GetBytes("abc"); // UTF-16 little endian
Encoding.UTF8.GetBytes("abc");
As you can see, it's also actually less code to just use the built in encoding objects than to implement your own reader/writer methods.
The string can be converted to a byte array in a few different ways, due to the following fact: .NET supports Unicode, and Unicode standardizes several different encodings called UTFs. They have different byte-representation lengths but are equivalent in the sense that when a string is encoded, it can be decoded back to the string; but if the string is encoded with one UTF and decoded under the assumption of a different UTF, it can be garbled.
Also, .NET supports non-Unicode encodings, but they are not valid in the general case (they will only be valid if a limited subset of Unicode code points is used in the actual string, such as ASCII). Internally, .NET uses UTF-16, but for stream representation UTF-8 is usually used; it is also the de-facto standard for the Internet.
Not surprisingly, serialization of a string into an array of bytes and deserialization are supported by the class System.Text.Encoding, which is an abstract class; its derived classes support concrete encodings: ASCIIEncoding and four UTFs (System.Text.UnicodeEncoding supports UTF-16).
Ref this link.
For serialization to an array of bytes, use System.Text.Encoding.GetBytes. For the inverse operation use System.Text.Encoding.GetChars. This function returns an array of characters, so to get a string, use the string constructor System.String(char[]).
Ref this page.
Example:
string myString = //... some string
System.Text.Encoding encoding = System.Text.Encoding.UTF8; // or some other, but prefer a UTF if Unicode is used
byte[] bytes = encoding.GetBytes(myString);
//next lines are written in response to a follow-up questions:
myString = new string(encoding.GetChars(bytes));
byte[] bytes = encoding.GetBytes(myString);
myString = new string(encoding.GetChars(bytes));
byte[] bytes = encoding.GetBytes(myString);
//how many times shall I repeat it to show there is a round-trip? :-)
It depends on what you want the bytes FOR
This is because, as Tyler so aptly said, "Strings aren't pure data. They also have information." In this case, the information is an encoding that was assumed when the string was created.
Assuming that you have binary data (rather than text) stored in a string
This is based off of OP's comment on his own question, and is the correct question if I understand OP's hints at the use-case.
Storing binary data in strings is probably the wrong approach because of the assumed encoding mentioned above! Whatever program or library stored that binary data in a string (instead of a byte[] array which would have been more appropriate) has already lost the battle before it has begun. If they are sending the bytes to you in a REST request/response or anything that must transmit strings, Base64 would be the right approach.
If you have a text string with an unknown encoding
Everybody else answered this incorrect question incorrectly.
If the string looks good as-is, just pick an encoding (preferably one starting with UTF), use the corresponding System.Text.Encoding.???.GetBytes() function, and tell whoever you give the bytes to which encoding you picked.
If you are using .NET Core or System.Memory for .NET Framework, there is a very efficient marshaling mechanism available via Span<T> and Memory<T> that can effectively reinterpret string memory as a span of bytes. Once you have a span of bytes you are free to marshal back to another type, or copy the span to an array for serialization.
To summarize what others have said:
Storing a representation of this kind of serialization is sensitive to system endianness, compiler optimizations, and changes to the internal representation of strings in the executing .NET Runtime.
Avoid long-term storage
Avoid deserializing or interpreting the string in other environments
This includes other machines, processor architectures, .NET runtimes, containers, etc.
This includes comparisons, formatting, encryption, string manipulation, localization, character transforms, etc.
Avoid making assumptions about the character encoding
The default encoding tends to be UTF-16LE in practice, but the compiler / runtime can choose any internal representation
Implementation
public static class MarshalExtensions
{
public static ReadOnlySpan<byte> AsBytes(this string value) => MemoryMarshal.AsBytes(value.AsSpan());
public static string AsString(this ReadOnlySpan<byte> value) => new string(MemoryMarshal.Cast<byte, char>(value));
}
Example
static void Main(string[] args)
{
string str1 = "你好,世界";
ReadOnlySpan<byte> span = str1.AsBytes();
string str2 = span.AsString();
byte[] bytes = span.ToArray();
Debug.Assert(bytes.Length > 0);
Debug.Assert(str1 == str2);
}
Further Insight
In C++ this is roughly equivalent to reinterpret_cast, and in C it is roughly equivalent to a cast to the system's word type (char).
In recent versions of the .NET Core Runtime (CoreCLR), operations on spans effectively invoke compiler intrinsics and various optimizations that can sometimes eliminate bounds checking, leading to exceptional performance while preserving memory safety, assuming that your memory was allocated by the CLR and the spans are not derived from pointers from an unmanaged memory allocator.
Caveats
This uses a mechanism supported by the CLR that returns ReadOnlySpan<char> from a string; Additionally, this span does not necessarily encompass the complete internal string layout. ReadOnlySpan<T> implies that you must create a copy if you need to perform mutation, as strings are immutable.
Computers only understand raw binary data, raw bits.
One bit is a Binary Digit : 0 or 1.
An 8-bit number is a byte. One byte is a number between 0 and 255.
ASCII is a table that converts numbers to characters.
Numbers between 0 and 31 are controls: tab, new line, and others.
Numbers between 32 and 126 are printable characters :
letter a, number 1, % sign, underscore _
So with ASCII, there are 33 control characters and 95 printable characters.
ASCII is the most commonly used character encoding today.
The first entries of the Unicode table are ASCII and match the ASCII character set.
ASCII is a 7-bit character set. Numbers between 0 and 127.
With 8 bits we can go up to 255.
The most common alternative to ASCII is EBCDIC which is not compatible with ASCII and still exists today on IBM computers and databases.
One byte, an 8-bit number, is the most common unit in computer science nowadays. One byte is a number between 0 and 255.
ASCII defines a meaning for each number between 0 and 127.
The character associated with numbers between 128 and 255 depends on the character encoding being used. Two widely used character encodings used nowadays are windows1252 and UTF-8.
In windows1252, the number corresponding to the € sign is 128.
One byte: [80].
In the Unicode Database, the € sign is number 8364.
Now I give you the number 8364. Two bytes: [20, AC].
In UTF-8 the Euro sign is the number 14844588. Three bytes : [E282AC].
Now I give you some raw data. Let's say 20AC.
Is it two windows1252 characters (a space followed by ¬), or one single Unicode € sign?
I give you some more raw data. E282AC.
Well, 82 is an unassigned character in windows1252 so it is probably not windows1252.
It could be macRoman "‚Ǩ", or OEM 437 "Γé¼", or the UTF-8 "€" sign.
It is possible to guess the encoding of a stream of raw bytes based on the characteristics of the character encodings and on statistics but there is no reliable way to do that.
Numbers between 128 and 255 on their own are invalid in UTF-8.
The é is common in some languages (french) so if you see many bytes with the value E9 surrounded by letters it is probably a windows1252-encoding string, the E9 byte representing the é character.
When you have a stream of raw bytes that represents a string, it is far better to know the matching encoding rather than trying to guess.
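A small sketch of that ambiguity in C#: the same three bytes decode to different text depending on which encoding you assume. (Encoding.GetEncoding(1252) works out of the box on .NET Framework; on .NET Core/.NET you would first have to register System.Text.Encoding.CodePages.)
byte[] raw = { 0xE2, 0x82, 0xAC };                                         // the UTF-8 bytes of the € sign
Console.WriteLine(System.Text.Encoding.UTF8.GetString(raw));               // €
Console.WriteLine(System.Text.Encoding.GetEncoding(1252).GetString(raw));  // â‚¬ -- three windows1252 characters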
Two ways:
public static byte[] StrToByteArray(this string s)
{
List<byte> value = new List<byte>();
foreach (char c in s.ToCharArray())
value.Add(Convert.ToByte(c)); // note: throws for characters above 255
return value.ToArray();
}
And,
public static byte[] StrToByteArray(this string s)
{
s = s.Replace(" ", string.Empty);
byte[] buffer = new byte[s.Length / 2];
for (int i = 0; i < s.Length; i += 2)
buffer[i / 2] = (byte)Convert.ToByte(s.Substring(i, 2), 16);
return buffer;
}
I tend to use the bottom one more often than the top, haven't benchmarked them for speed.

C++ read void* data from a binary file as utf8 in Visual Studio 2010

I'm not very fluent in C++, to tell the truth.
I have some binary data in memory under a type void* (which means, i think, a pointer to some unrepresentable something/nothing). The data are first taken from the file by fread.
int readfile(FILE *file, void **data_return) {
//some code...
fread((void *)data, length, 1, file);
//some code...
}
There's a complex algorithm behind reading the binary data, but I think I don't need to understand it for this task.
char *t = ((char *)loc->mo_data) + string_offset;
return t;
This code reads the void* type (loc->mo_data) as string. Still understandable for me I guess.
The problem is that this data contains russian, spanish, czech and all sort of international characters representable in UTF8.
I'm not even sure what encoding the "char" represents, probably win1250, because the strings returned are just bad. The function returns mojibake (the raw UTF-8 bytes of Организация reinterpreted through a single-byte code page) instead of Организация itself. The first string is really the UTF-8 byte sequence, but displayed as if it were a single-byte encoding.
The bigger picture: I'm playing with this C++ library which has already been written by someone else - the library exposes just two functions, open file (returns pointer) and get string from this file by string key (returns string). This library is being used in a C# project.
At first, I thought that there might be something wrong with passing UTF8 strings between an C# app and DLL library
[DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr OpenFile(string path);
[DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern string FindString(IntPtr filePointer, string key);
C++ code:
extern "C" __declspec(dllexport) BinaryFileType* OpenFile(char *filePath);
extern "C" __declspec(dllexport) char *FindString(BinaryFileType *locText, char *key);
FindString returns the string, but in a wrong encoding. And I don't know how one could re-read, as UTF-8, the bytes that end up in C# strings (which are Unicode)... I tried some conversion methods, but to no avail.
Though I think the problem is in the C++ code (I'd love the char data to be treated as UTF-8), I've noticed there's something called a code page, and there are conversion functions and UTF-8 stream readers, but because of my weak C++ knowledge I don't really know the solution.
=== UPDATE ===
I've found a property in the Encoding class...
When I read the output string like this:
Encoding.UTF8.GetString(Encoding.Default.GetBytes(e))
...the result is right. I'm just getting the bytes from the string via some "Default" encoding and then I read the bytes again with the UTF8. The Default encoding here on my computer is ISO-8859-2 but it would be just plain stupid to rely on this property.
So...the question remains. I still need to know, how to read the void* type with a particular encoding. But at least, I know now that the string is being returned in the default encoding used by Windows.
** === ANSWER === **
Thanks everyone for answers.
As James pointed out, char * are just numbers. So I avoided all encoding troubles by just getting the numbers and not trying to interpret them as a string at all.
There was another problem...how to get an array of bytes in C# out of a char* in the C++ library? There is a Marshal.Copy method but I need to know the size of the string.
Every C string (char*) must end with a null character '\0'.
So in the end, I just read a byte after a byte until i find this null character. The code then looks like this.
IntPtr charPointer = ExternDll.FindString(fileIntPtr, "someString");
List<byte> bytes = new List<byte>();
for (int i=0; ;i++)
{
byte b = Marshal.ReadByte(charPointer, i);
if (b == '\0')
break;
bytes.Add(b);
}
string theResultStringInTheUTF8 = Encoding.UTF8.GetString(bytes.ToArray());
C++ is agnostic about character encoding. For that matter, if you're getting the characters through some sort of hacky type conversions, any language will be; there's no way for the language to know what the encoding is.
In C++, char is really just a small integer; it's only by convention that it contains some character encoding. But which encoding depends on you. If your input is really UTF-8, then the chars pointed to by a char* will contain UTF-8; if it's something else, then they'll contain something else.
When you output the chars to the screen, C++ just passes them on (at least by default). It's up to the terminal window to decide how to interpret them; i.e. break the sequence into code points, then map each code point to a graphic image. Under Unix (xterm), this is defined by the display font; under Windows, formally at least by the code page (but you can certainly install miscoded fonts which will screw it up). C++ has nothing to do with this. The code page for UTF-8 is 65001; if you set the terminal to use this code page (chcp 65001 on the command line), then output UTF-8, it should work.
.NET can automatically marshal only OEM/ANSI and Unicode/UTF-16 strings. It can't do it for UTF-8, which is where you are going wrong.
You have to manually convert strings from/to UTF-8 with System.Text.Encoding.UTF8
string decodedString = System.Text.Encoding.UTF8.GetString(encodedBytes);
and pass them to C++ as binary data. Don't forget to append a terminating '\0'.
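A minimal sketch of that manual conversion, declaring the import with byte[] and IntPtr so the marshaler never touches the text; the export name comes from the question and the helper is hypothetical:
using System.Runtime.InteropServices;

[DllImport("MoReader.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr FindString(IntPtr filePointer, byte[] key);

static byte[] ToNullTerminatedUtf8(string s)
{
    // Encode as UTF-8 and append the terminating '\0' the C side expects.
    return System.Text.Encoding.UTF8.GetBytes(s + "\0");
}

IntPtr result = FindString(fileIntPtr, ToNullTerminatedUtf8("someString"));
// The returned char* can then be read byte by byte until '\0' (as in the
// update above) and decoded with Encoding.UTF8.GetString.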
