I am writing a C# program that has to communicate with an Arduino. Basically it sends data to it, and I should be able to read it in the serial monitor.
C# code:
if (errCheck[i].IsChecked.GetValueOrDefault() == true)
err = "1"+err;
else
err = "0"+err;
_serialPort.Write("<16,"+ Convert.ToUInt32(err,2) + ">");
Arduino code:
void parseData() { // split the data into its parts
char * strtokIndx; // this is used by strtok() as an index
//strtokIndx = strtok(tempChars,","); // get the first part - the string
//strcpy(messageFromPC, strtokIndx); // copy it to messageFromPC
strtokIndx = strtok(tempChars, ","); // this continues where the previous call left off
integerFromPC = atoi(strtokIndx); // convert this part to an integer
switch (integerFromPC) {
//all cases
case 16: //managing errors
delay(10);
strtokIndx = strtok(NULL, ",");
uint32_tFromPC = atoi(strtokIndx);
errors=uint32_tFromPC;
Serial.print("errors Updated" );
When the last checkbox is checked (so my binary string is 1 and 31 0's) the serial monitor reads 7F FF FF FF instead of 80 00 00 00.
I have tried using ulong but it doesn't seem to work either, any ideas?
Why do you want to convert the string to an Int32 and then back to a string?
Simply do this:
if (errCheck[i].IsChecked.GetValueOrDefault() == true)
err = "1"+err;
else
err = "0"+err;
_serialPort.Write("<16,"+ err + ">");
A UInt32 can actually hold 32 binary digits on the C# side, but in your Arduino code you are using atoi, which parses a signed value, so 2147483648 overflows and gets clamped to 2147483647 (0x7FFFFFFF). Handle it as a String instead. Why do you need it as an integer?
By the way, think about using enums for bitwise operations; for examples, look here:
http://www.alanzucconi.com/2015/07/26/enum-flags-and-bitwise-operators/
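For illustration, here is a minimal sketch of that idea (the RadioErrors name and the individual flag names are invented for the example, they are not part of your project; _serialPort is the port from your code):
[Flags]
enum RadioErrors : uint
{
    None    = 0,
    Error0  = 1u << 0,    // first checkbox
    Error1  = 1u << 1,    // second checkbox
    // ... one flag per checkbox ...
    Error31 = 1u << 31    // last checkbox
}

RadioErrors errors = RadioErrors.Error0 | RadioErrors.Error31;
bool hasError0 = (errors & RadioErrors.Error0) != 0;   // test a single bit
errors &= ~RadioErrors.Error0;                         // clear a bit
_serialPort.Write("<16," + (uint)errors + ">");        // sends "<16,2147483648>"
Each checkbox then maps to a named bit instead of a position in a string, and (uint)errors gives you the value to transmit.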
Related
I have a file I am reading from to acquire a database of music files, like this;
00 6F 74 72 6B 00 00 02 57 74 74 79 70 00 00 00 .otrk...Wttyp...
06 00 6D 00 70 00 33 70 66 69 6C 00 00 00 98 00 ..m.p.3pfil...~.
44 00 69............. D.i.....
Etc. There could be hundreds to thousands of records in this file, all split at "otrk" into strings, since that marks the start of a new track.
The problem actually lies in the above: all the tracks start with otrk and then have field identifiers, their length, and their value. For example, above:
ttyp = type, and the 06 following it is the length of the value, which is .m.p.3 or 00 6D 00 70 00 33.
Then the next field identifier is pfil = filename, and here lies the issue, specifically with the length, whose value is 98; however, when read into a string it becomes unrecognizable and defaults to a diamond with a question mark, with a value of 239, which is wrong. How can I avoid this and get the correct length so that I can display the value correctly?
My code to read the file;
db_file = File.ReadAllText(filePath, Encoding.UTF8);
and the code to split and sort through the file
string[] entries = content.Split(new string[] { "otrk" }, StringSplitOptions.None);
public List<Song> Songs { get; } = new List<Song>();
foreach(string entry in entries)
{
Songs.Add(Song.Create(entry));
}
Song.Create looks like;
public static Song Create(string dbString)
{
Song toRet = new Song();
for (int farthestReached = 0; farthestReached < dbString.Length;)
{
int startOfString = -1;
int iLength = -1;
byte[] b = Encoding.UTF8.GetBytes("0");
//Gets the start index
foreach(var l in labels)
{
startOfString = dbString.IndexOf(l, farthestReached);
if (startOfString >= 0)
{
// get identifer index plus its length
iLength = startOfString + 3;
var valueIndex = iLength + 5;
// get length of value
string temp = dbString.Substring(iLength + 4, 1);
b = Encoding.UTF8.GetBytes(temp);
int xLen = b[0];
// populate the label
string fieldLabel = dbString.Substring(startOfString, l.Length);
// populate the value
string fieldValue = dbString.Substring(valueIndex, xLen);
// set new
farthestReached = xLen + valueIndex;
switch (fieldLabel[0])
{
case 'p':
case 't':
string stringValue = "";
foreach (char c in fieldValue)
{
if (c == 0)
continue;
stringValue += c;
}
assignStringField(toRet, fieldLabel, stringValue);
break;
}
break;
}
}
//If a field was not found, there are no more fields
if (startOfString == -1)
break;
}
return toRet;
}
The file is not a UTF-8 file. The hex dump shown in the question makes it clear that it is not a UTF-8 file, nor a proper text file in any other text encoding. Rather, it looks like some binary (serialized) format, with data fields of different types.
You cannot reliably read a binary file naively like a text file, especially considering that certain UTF-8 characters are represented by two or more bytes. The UTF-8 decoder is quite likely to get confused by all the binary data: a (binary) byte that precedes a text field and happens to equal the start byte of a multi-byte UTF-8 sequence will make the decoder try to decode a multi-byte sequence that does not align with the text field, so the first character(s) of the field may not be identified correctly.
Not only that, but certain byte values and byte sequences are not valid UTF-8 encodings of characters at all, and you would "lose" such bytes when trying to read them as UTF-8 text.
Also, since several bytes can form a single UTF-8 character, you cannot rely on every individual byte being turned into a character with the same ordinal value (even if the byte value happens to be a valid ASCII value): such a byte could be decoded merely as part of a multi-byte UTF-8 sequence into a single character whose ordinal value is entirely different from the values of the bytes in that sequence.
That said, as far as I can tell, the text fields in your little data snippet above do not look like UTF-8 at all. 00 6D 00 70 00 33 (*m*p*3) and 00 44 00 69 (*D*i) are definitely not UTF-8 -- note the zero bytes.
Thus, first consult the file format specification to figure out the actual text encoding used for the text fields in this file format. Don't guess. Don't assume. Don't believe. Look up, verify and confirm.
Secondly, since the file is not a proper text file (as already mentioned), you cannot read it like a text file with File.ReadAllText. Instead, read the raw byte data, for example with File.ReadAllBytes.
Find the otrk marker in the byte data of the file not as text, but by the 4 byte values this marker is made of.
Then, parse the byte data following the otrk marker according to the file format specification, and only decode the bytes that are actual text data into strings, using the correct text encoding as denoted by the file format specification.
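To illustrate the difference, here is a rough sketch of that approach (assuming System.IO and System.Text are imported, and filePath as in the question). It guesses, based purely on the hex dump above, that each field is a 4-character ASCII tag followed by a 4-byte big-endian length and then the payload, that "otrk" wraps a whole track record, and that text payloads are UTF-16BE (note the 00 6D 00 70 00 33 = "mp3" pattern); it also assumes the data starts at a field boundary:
byte[] data = File.ReadAllBytes(filePath);

int pos = 0;
while (pos + 8 <= data.Length)
{
    // 4-byte ASCII tag such as "otrk", "ttyp" or "pfil"
    string tag = Encoding.ASCII.GetString(data, pos, 4);

    // 4-byte big-endian payload length
    int length = (data[pos + 4] << 24) | (data[pos + 5] << 16)
               | (data[pos + 6] << 8) | data[pos + 7];
    pos += 8;

    if (tag == "otrk")
    {
        // "otrk" appears to wrap a whole track record; keep parsing inside it
        Console.WriteLine("-- new track --");
        continue;
    }

    if (length < 0 || pos + length > data.Length)
        break; // truncated or misaligned data

    if (tag == "ttyp" || tag == "pfil")
    {
        // the zero bytes in the dump suggest UTF-16BE text
        string value = Encoding.BigEndianUnicode.GetString(data, pos, length);
        Console.WriteLine(tag + ": " + value);
    }

    pos += length;
}
Again, the tag layout, the length size and endianness, and the text encoding here are guesses from the dump; the file format specification is authoritative.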
With respect to this tool, I need to convert hexadecimal data, irrespective of how it is grouped, to the equivalent text. For example:
"HelloWorld" = 48656c6c6f576f726c64;
The solution needs to take into account that hexadecimal can be grouped in different lengths:
48656c6c 6f576f72 6c64
or
48 65 6c 6c 6f 57 6f 72 6c 64
All of the hexadecimal values supplied above read as HelloWorld when converted to text.
First, I would like to point out that this question has been asked many times on the web (here is one example). However, I am going to break this down step by step for you to hopefully teach you how to not only utilize your resources available on the web, but also how to solve your problem.
Overview: Converting from hexadecimal data to text that can be read by human beings is a straightforward process in modern development languages: you clean the data (ensuring no illegal characters remain), then you convert down to the byte level so that you can work with the raw data. Finally, you convert that raw data into readable text using a method that has already been created by Microsoft.
Important: Remember, for the conversion to work, you have to ensure you're converting in the same format that you started with:
ASCII -> ASCII: Works Great!
ASCII -> UTF7: Not so much...
Removing Illegal Characters: One of the first things you'll need to do is ensure the hexadecimal value that you're supplying doesn't contain any illegal characters. The simplest way to do this is to create an array of acceptable characters and then remove anything but these in a loop:
private string GetCleanHex(string hex) {
    string legalCharacters = "0123456789ABCDEF";
    string result = hex.ToUpper();
    foreach (char c in result) {
        if (!legalCharacters.Contains(c))
            result = result.Replace(c.ToString(), string.Empty);
    }
    return result;
}
Getting The Byte Array: Once you've cleaned out all illegal characters, you can now convert your hexadecimal string into a byte array. This is required to convert from hexadecimal to ASCII. This step was provided by the linked post above:
private byte[] GetBytesFromHex(string hex) {
    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return bytes;
}
Converting To Text: Now that you've cleaned your data, and converted it to a byte[], you can now convert that byte data into ASCII. This can be done using a method available in Encoding.ASCII called GetString:
string text = Encoding.ASCII.GetString(bytes);
The Final Result: Plug all of this into your application and you'll have successfully converted hexadecimal data into clean, readable text:
string hex = GetCleanHex("506c 65 61736520 72 656164 20686f77 2074 6f 2061 73 6b 2e");
byte[] bytes = GetBytesFromHex(hex);
string text = Encoding.ASCII.GetString(bytes);
Console.WriteLine(text);
Console.ReadKey();
The code above will print the following text to the console:
Please read how to ask.
My input is this: 12 13 13 AF 3F 5f.
I need the output to be the same.
I pass the input from the client to the server:
byte[] output = System.Text.Encoding.ASCII.GetBytes(input);
and receive it at the server side:
string some = System.Text.Encoding.ASCII.GetString(output);
but I get excess 0's at the end of the byte array, almost around 1000 of them.
How do I trim these 0's without changing my byte array size?
Various options here:
some = some.Substring(0, some.IndexOf('\0')); Or
some = some.Remove(some.IndexOf('\0')); Or
some = some.TrimEnd('\0');
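A quick demonstration of the difference, assuming the excess bytes decode to trailing '\0' characters (the literal below is just an example value):
string some = "12 13 13 AF 3F 5f\0\0\0\0";

Console.WriteLine(some.TrimEnd('\0').Length);              // 17 - all trailing nulls removed
Console.WriteLine(some.Remove(some.IndexOf('\0')).Length); // 17 - everything from the first null cut off
TrimEnd only strips nulls at the end, while the IndexOf-based variants cut everything from the first null onward, which matters if a null could ever appear in the middle of the data.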
ASCII is 7-bit. Your value AF exceeds 7 bits; try using the UTF-8 encoding.
I am writing software in C# to program a two-way radio with TX and RX frequencies and optional information. The manufacturer's program was written in Delphi. I have the HEX protocol information from the manufacturer, but I had to decompile their program to get other information. For now I have just built a simple program that sends a string and reads the radio's reply.
I send the radio the startup string and it responds with something crazy: "-P320733?-". Then I send it another string to start the channel information and it gives "W???????????". The W is correct but the ?s are not, and from what I understand that may be the result of a wrong serial port config.
Since the original program was written in Delphi, I don't know what some of the settings are and don't know what to set them to in C#.
These are the Delphi port settings. Most things in there are self-explanatory while others are not. I'm not sure which need to be set, how they need to be set up, and which I can leave out.
CommName = 'COM2'
BaudRate = 9600
ParityCheck = False
Outx_CtsFlow = False
Outx_DsrFlow = False
DtrControl = DtrEnable
DsrSensitivity = False
TxContinueOnXoff = True
Outx_XonXoffFlow = False
Inx_XonXoffFlow = False
ReplaceWhenParityError = False
IgnoreNullChar = False
RtsControl = RtsEnable
XonLimit = 500
XoffLimit = 500
ByteSize = _8
Parity = None
StopBits = _2
XonChar = #17
XoffChar = #19
ReplacedChar = #0
ReadIntervalTimeout = 20
ReadTotalTimeoutMultiplier = 0
ReadTotalTimeoutConstant = 0
WriteTotalTimeoutMultiplier = 0
WriteTotalTimeoutConstant = 0
OnReceiveData = Comm1ReceiveData
Here is my code for the port.
{
InitializeComponent();
serialPort1.PortName = "COM1";
serialPort1.BaudRate = 9600;
serialPort1.DtrEnable = true;
serialPort1.RtsEnable = true;
serialPort1.StopBits = StopBits.Two;
serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived);
}
I send it the code below and it sends me "-P320733?-". It should return with 0650333230373333FF06.
byte[] juf = new byte[9];
int j = 0;
juf[j++] = (byte)0x50; //start talking to radio
juf[j++] = (byte)0x52;
juf[j++] = (byte)0x4f;
juf[j++] = (byte)0x47;
juf[j++] = (byte)0x52;
juf[j++] = (byte)0x41;
juf[j++] = (byte)0x4D;
juf[j++] = (byte)0x02;
juf[j++] = (byte)0x06;
After the radio responds, I send it the code below and get back "W?????????". The amount of ?s will vary from 0 to 20. It is supposed to return with "57004040" and "0075024000750240FFFFFFFF00BFA0F8".
byte[] vuf = new byte[5];
int v = 0;
vuf[v++] = (byte)0x52; //start channel info
vuf[v++] = (byte)0x00;
vuf[v++] = (byte)0x40;
vuf[v++] = (byte)0x40;
vuf[v++] = (byte)0x06;
Eventually I need to convert the radio's response into readable information and display it in a DataGridView for the user to see, but that's later; right now I'm just trying to make sure the serial plays nice.
Regarding the communication parameters, you have them correctly set. The TxContinueOnXoff parameter should have no meaning since Xon/Xoff flow control is set to false.
You need to make a distinction between the ASCII representation and the HEX representation of byte values (see an ASCII table).
If the documentation uses hex representation for sent and received data you should compare the docs with actual data as hex representation, not as text.
It seems the first response does match the documentation if looked at as hex. When the documented hex representation is translated to ASCII (except for the non-printable characters 0x06 and 0xFF), it is the same as what you call 'something crazy'.
HEX: 06 50 33 32 30 37 33 33 FF 06 // expected, as hex
ASCII: - P 3 2 0 7 3 3 ? - // received, as text
There is no printable character for 0x06 and 0xFF; therefore, whatever you used to look at the message as text chose to display 0x06 as a dash and 0xFF as a question mark.
Just treat the bytes for what they are - bytes, and convert to hex representation to compare/verify with the docs.
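For example, a minimal sketch of reading the reply as raw bytes and dumping it as hex; the handler and port names follow your code, but this is just an illustration, not the manufacturer's protocol handling:
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = serialPort1.BytesToRead;
    byte[] buffer = new byte[count];
    serialPort1.Read(buffer, 0, count);

    // e.g. "06-50-33-32-30-37-33-33-FF-06"
    Console.WriteLine(BitConverter.ToString(buffer));
}
Note that DataReceived can fire before the complete reply has arrived, so a single response may show up split across several of these dumps.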
Just out of curiosity, what is the protocol called, or, if it is publicly available, do you have a link to share?
I have a textbox that I use to convert things like:
74 00 65 00 73 00 74 00
Back into a string. The above says "test", but for some reason when I click the convert button it displays only the first letter "t" (74 00); other byte arrays work just as expected and the entire text is converted.
Here are the two pieces of code I have tried, which produce the same behavior of not properly converting the entire byte array back into a word:
byte[] bArray = ByteStrToByteArray(iSequence.Text);
ASCIIEncoding enc = new ASCIIEncoding();
string word = enc.GetString(bArray);
iResult.Text = word + Environment.NewLine;
which uses the function:
private byte[] ByteStrToByteArray(string byteString)
{
byteString = byteString.Replace(" ", string.Empty);
byte[] buffer = new byte[byteString.Length / 2];
for (int i = 0; i < byteString.Length; i += 2)
buffer[i / 2] = (byte)Convert.ToByte(byteString.Substring(i, 2), 16);
return buffer;
}
another way I was using is:
string str = iSequence.Text.Replace(" ", "");
byte[] bArray = Enumerable.Range(0, str.Length)
.Where(x => x % 2 == 0)
.Select(x => Convert.ToByte(str.Substring(x, 2), 16))
.ToArray();
ASCIIEncoding enc = new ASCIIEncoding();
string word = enc.GetString(bArray);
iResult.Text = word + Environment.NewLine;
I tried checking the lengths to see if it was iterating through, and it was...
I don't really know how to debug why this is happening to the above byte array; all the other byte arrays seemed to be working just fine, only this one outputs just its first letter.
Have I done something wrong that could produce this behavior somehow?
What could I try in order to find out what is wrong?
If you have the byte sequence
var bytes = new byte[] { 0x74, 0x00, 0x65, 0x00, 0x73, 0x00, 0x74, 0x00 };
and you decode it to a string using ASCII encoding (Encoding.ASCII), then you get
var result = Encoding.ASCII.GetString(bytes);
// result == "\x74\x00\x65\x00\x73\x00\x74\x00" == "t\0e\0s\0t\0"
Notice the Null \0 characters? When you display such a string in a textbox, only the part of the string until the first Null character is displayed.
Since you say the result should read "test", the input is actually not encoded in ASCII but in UTF-16LE (Encoding.Unicode).
var result = Encoding.Unicode.GetString(bytes);
// result == "\u0074\u0065\u0073\u0074" == "test"
You're converting a Unicode string to ASCII; you're not specifying the code page on your machine to convert from.
System.Text.Encoding.GetEncoding("codepage").GetString()
if my memory serves me correctly. Also note that any control in .NET is Unicode... so what you're trying to stick in the text box (if the conversion isn't correct) could be an end-of-line character, or EOF, or any kind of control character. It all depends on your code page.
I tried debugging the first program using breakpoints in VS2010. I found out that the line
string word = enc.GetString(bArray);
output word as "t\0e\0s\0t".
The last line
iResult.Text = word + Environment.NewLine;
gives iResult.Text as simply "t".
So I was thinking that since \0 is not a valid escape sequence, the compiler ignored everything after it. I could be wrong though, but try removing all occurrences of 00 in the input string.
I'm not really into C#. I'm only suggesting this because it looks like C++.
It works for me:
string outputText = "t\0e\0s\0t";
outputText = outputText.Replace("\0", " ");