Receive special char from serial port - c#

I need to receive and transmit data with a serial port. I have no problem receiving and transmitting, but I do not see the received data correctly.
If I use the program ComTestSerial, I see the correct data:
{STX}1H|\^&|||cobas6000^1|||||host|RSUPL^BATCH|P|1 P|1|||||||U||||||^
O|1| IANNETTA M
BIS|0^5016^1^^S1^SC|^^^480^|R||||||N||||1|||||||20191018113556|||F
C|1|I| ^ ^^^|G
R|1|^^^480/|11|U/L{ETB}A6
But if I use my own C# program, with a RichTextBox or TextBox, I see this wrong data:
2497212492943812412412499111989711554484848944912412412412412410411111511612482838580769466658467721248012449138012449124124124124124124124851241241241241241249413791244912432323232323232327365787869848465327732667383124489453484954944994948349948367124949494525648941248212412412412412412478124124124124491241241241241241241245048495749484956494951535354124124124701367124491247312432323232323232323232323232323232323232323232323232323232323294323232323232323232323232323232323232323232323232329494941247113821244912494949452564847124494912485477623655413104.
I use this simple code (written by a colleague) to receive:
string cMsg = "";
while (this.ComPort.BytesToRead > 0)
{
int nChar = this.ComPort.ReadChar();
cMsg += nChar.ToString();
}
Thread.Sleep(100);
return cMsg;
It reads data from a serial connection that otherwise works perfectly.
What could be the problem?

You're converting a number to a string, so, say, when nChar is 2, the output will be a string "2", and when nChar is 49, the output will be "49".
So, the message begins with {STX}1. {STX} is an ASCII control code 2, and 1 is ASCII code 49. Thus the "wrong data" begins with "249".
Thus, the data isn't wrong, and the code does exactly what it was told to; your colleague's code just doesn't do what you intended :)
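To see the difference concretely, here is a tiny sketch (my own illustrative values, not your data) showing how the same ReadChar() result becomes either the digit string "49" or the character "1":
int nChar = 49;                                   // what ReadChar() returns for the character '1'
string asNumber = nChar.ToString();               // "49" - this is what the current code concatenates
string asCharacter = ((char)nChar).ToString();    // "1"  - this is what you actually want
Console.WriteLine(asNumber + " vs " + asCharacter);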
Instead of converting ASCII codes to strings, convert them to characters, and use a StringBuilder to minimize the number of times the string is resized.
var message = new StringBuilder(ComPort.BytesToRead);
while (ComPort.BytesToRead > 0)
{
message.Append((char)ComPort.ReadChar());
}
return message.ToString();
But you don't need to do any of it! SerialPort.ReadExisting does what you want:
return ComPort.ReadExisting();
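One caveat worth noting: ReadExisting() (like ReadChar()) decodes the buffered bytes using the port's Encoding property, which defaults to ASCII. That should be fine for this instrument's data, but if your device ever sends bytes above 127 you would want to set the encoding explicitly. A minimal sketch, assuming a port named ComPort (Encoding lives in System.Text):
ComPort.Encoding = Encoding.ASCII;              // the default; use e.g. Encoding.GetEncoding(1252) for 8-bit data
string received = ComPort.ReadExisting();       // decodes whatever bytes are currently in the receive buffer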
Stylistic note: C# is not Java, and littering the code with this. is neither idiomatic nor necessary. Don't do it unless there's a good reason to.

From your code, it seems that you are getting an integer from your port, and when you call ToString() you just write that number out as a string:
int nChar = this.ComPort.ReadChar();
cMsg += nChar.ToString();
This integer should be a 21-bit Unicode code point.
So you can just use the Char.ConvertFromUtf32(Int32) method, which will convert the integer to the actual character:
https://learn.microsoft.com/en-us/dotnet/api/system.char.convertfromutf32?view=netframework-4.8
Your full code should then look like this:
string cMsg = "";
while (this.ComPort.BytesToRead > 0)
{
int nChar = this.ComPort.ReadChar();
cMsg += Char.ConvertFromUtf32(nChar);
}
Thread.Sleep(100);
return cMsg;
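For completeness, the same idea combines nicely with the StringBuilder suggestion from the other answer. This is just a sketch, and for plain ASCII data like yours Char.ConvertFromUtf32(nChar) and a simple (char)nChar cast produce the same character:
var message = new StringBuilder();
while (this.ComPort.BytesToRead > 0)
{
    message.Append(Char.ConvertFromUtf32(this.ComPort.ReadChar()));
}
return message.ToString();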

Related

Convert RTU Mode sensor data to ASCII mode

I am trying to develop a Windows application for a Modbus RTU mode (RS-485) sensor in C#.
Reading the sensor data is no problem, but when I try to read the sensor's version the result shows up as:
01041A4350532D524D2056312E303020323031383033323900000000007B00
But I need the result to show like this:
CPS-RM V1.00 20180329
I searched the internet and I think I have to convert it to ASCII, but I am not finding any solution. Do you have any idea how to do this?
It looks like only part of the string is actually text. I suspect the third byte is the number of bytes to treat as text following it (so the final two bytes aren't part of the text). Note that it's padded with Unicode NUL characters (U+0000) that you may want to trim.
So if you have your data in a variable called bytes:
string text = Encoding.ASCII
// Decode from the 4th byte, using the 3rd byte as the length
.GetString(bytes, index: 3, count: bytes[2])
// Trim any trailing U+0000 characters
.TrimEnd('\0');
Console.WriteLine(text);
I would mention that that's based on guesswork though. I would strongly advise you to try to find a specification for the data format to check my assumption about the use of the third byte as a length.
If you haven't already got the data as bytes (instead having it in hex) I would suggest you convert it to a byte array first. There are lots of pieces of code on Stack Overflow to do that already, e.g. here and here.
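In case it helps, here is a minimal sketch of that hex-to-bytes step (HexToBytes is just an illustrative helper name, not part of any library):
static byte[] HexToBytes(string hex)
{
    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
    {
        // each pair of hex digits becomes one byte
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    }
    return bytes;
}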
I found an answer and it worked:
public static string ConvertHex(String hexString)
{
try
{
string ascii = string.Empty;
for (int i = 0; i < hexString.Length; i += 2)
{
String hs = string.Empty;
hs = hexString.Substring(i, 2);
uint decval = System.Convert.ToUInt32(hs, 16);
char character = System.Convert.ToChar(decval);
ascii += character;
}
return ascii;
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
return string.Empty;
}
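For what it's worth, ConvertHex can be combined with the length-byte assumption from the answer above (still guesswork about the format); a sketch using the sample response:
string hex = "01041A4350532D524D2056312E303020323031383033323900000000007B00";
int textLength = Convert.ToInt32(hex.Substring(4, 2), 16);                // 3rd byte: 0x1A = 26 bytes
string text = ConvertHex(hex.Substring(6, textLength * 2)).TrimEnd('\0');
Console.WriteLine(text);                                                  // CPS-RM V1.00 20180329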

Unable to convert string to byte if high-order bit is set

Apologies for the abortive first try, particularly to Olivier. Trying again.
Situation is we have a string coming in from a mainframe to a C# app. We understand it needs to be converted to a byte array. However, this data is a mixture of ASCII characters and true binary UINT16 and UINT32 fields, which are not always in the same spot in the data. Later on we will deserialize the data and will know the structure's data alignments, but not at this juncture.
Logic flow briefly is to send a structure with binary embedded, receive a reply with binary embedded, convert string reply to bytes (this is where we have issues), deserialize the bytes based on an embedded structure name, then process the structure. Until we reach deserialize, we don't know where the UINTs are. Bits are bits at this point.
When we have a reply byte which is ultimately part of a UINT16, and that byte has the high-order bit set (making it "extended ascii" or "negative", however you want to say it), that byte is converted to nulls. So any value >= 128 in that byte is lost.
Our code to convert looks like this:
public async Task<byte[]> SendMessage(byte[] sendBytes)
{
byte[] recvbytes = null;
var url = new Uri("http://<snipped>");
WebRequest webRequest = WebRequest.Create(url);
webRequest.Method = "POST";
webRequest.ContentType = "application/octet-stream";
webRequest.Timeout = 10000;
using (Stream postStream = await webRequest.GetRequestStreamAsync().ConfigureAwait(false))
{
await postStream.WriteAsync(sendBytes, 0, sendBytes.Length);
await postStream.FlushAsync();
}
try
{
string Response;
int Res_lenght;
using (var response = (HttpWebResponse)await webRequest.GetResponseAsync())
using (Stream streamResponse = response.GetResponseStream())
using (StreamReader streamReader = new StreamReader(streamResponse))
{
Response = await streamReader.ReadToEndAsync();
Res_lenght = Response.Length;
}
if (string.IsNullOrEmpty(Response))
{
recvbytes = null;
}
else
{
recvbytes = ConvertToBytes(Response);
var table = (Encoding.Default.GetString(
recvbytes,
0,
recvbytes.Length - 1)).Split(new string[] { "\r\n", "\r", "\n" },
StringSplitOptions.None);
}
}
catch (WebException e)
{
//error
}
return recvbytes;
}
static byte[] ConvertToBytes(string inputString)
{
byte[] outputBytes = new byte[inputString.Length * sizeof(byte)];
String strLocalDate = DateTime.Now.ToString("hh.mm.ss.ffffff");
String fileName = "c:\\deleteMe\\Test" + strLocalDate;
fileName = fileName + ".txt";
StreamWriter writer = new StreamWriter(fileName, true);
for (int i=0;i<inputString.Length;i++) {
try
{
outputBytes[i] = Convert.ToByte(inputString[i]);
writer.Write("String in: {0} \t Byte out: {1} \t Index: {2} \n", inputString.Substring(i, 2), outputBytes[i], i);
}
catch (Exception ex)
{
//error
}
}
writer.Flush();
return outputBytes;
}
ConvertToBytes has a line in the FOR loop to display the values in and out, plus the index value. Here is one of several spots where we see the conversion error - note indexes 698 and 699 represent a UINT16:
String in: sp Byte out: 32 Index: 696 << sp = space
String in: sp Byte out: 32 Index: 697
String in: \0 Byte out: 0 Index: 698
String in: 2 Byte out: 50 Index: 700 << where is 699?
String in: 0 Byte out: 48 Index: 701
String in: 1 Byte out: 49 Index: 702
String in: 6 Byte out: 54 Index: 703
The expected value for index 699 is decimal 156, which is binary 10011100. The high-order bit is on. So the conversion for #698 is correct, and the conversion for #700, which is an ASCII '2', is correct, but not the one for #699. Given that the UINT16 (0/156) is a component of the key to subsequent records, seeing 0/0 for the values is a show-stopper. We don't have a displacement error for 699; we see nulls in the deserialize. No idea why the .Write didn't report it.
Another example: a value such as 2/210 (decimal 722 when seen as a full UINT16) comes out as 2/0 (decimal 512).
Please understand that the code shown above works for everything except the 8-bit reply string fields which have the high-order bit set.
Any suggestions how to convert a string element to byte regardless of the content of the string element would be appreciated. Thanks!
Without a good Minimal, Complete, and Verifiable example that reliably reproduces the problem, it's impossible to state specifically what is wrong. But given what you've posted, some useful observations can be made:
First and foremost, as far as "where is 699?" goes, it's obvious that an exception is being thrown. That's how the Write() call would be skipped and result in no output for that index. You have a couple of opportunities in the code you posted for that to happen: the call to Convert.ToByte(), or the following statement (particularly the call to inputString.Substring()).
Unfortunately, without a good MCVE it's hard to understand why you are printing a two-character substring from the input string, or why you say the characters "sp" become the character value 0x20 (i.e. a space character). The output you describe in the question doesn't appear to be self-consistent. But, let's move on…
Assuming for the moment that at least in the specific case you're looking at, there are enough characters in inputString at that point for the call to Substring() to succeed, we're left with the conclusion that the call to Convert.ToByte() is failing.
Given what you wrote, it seems that the main issue here is a misunderstanding on your part about how text is encoded and manipulated in a C# program. In particular, a C# character is in some sense an abstraction and doesn't have an encoding at all. To the extent that you force the encoding to be revealed, i.e. by casting or otherwise converting the raw character value directly, that value is always encoded as UTF-16.
Put another way: you are dealing with a C# string object, made of C# char values. I.e. by the time you get this text into your program and call the ConvertToBytes() method, it has already been converted to UTF-16, regardless of the encoding used by the sender.
In UTF-16, character values that would be greater than 127 (0x7F) in an "extended ASCII" encoding (e.g. any of the various ANSI/OEM/ISO single-byte encodings) are not encoded as their original value. Instead, they will have a 16-bit value greater than 255.
When you ask Convert.ToByte() to convert such a value to a byte, it will throw an exception, because the value is larger than the largest value that can fit in a byte.
It is fairly clear why the code you posted is producing the results you describe (at least, to some extent). But it is not clear at all what you are actually hoping to accomplish here. I can say that attempting to convert char values to/from byte values by straight casting is simply not going to work. The char type isn't a byte, it's two bytes and any non-ASCII characters will use larger values than can fit in a byte. You should be using one of the several .NET classes that actually will do text encoding, such as the Encoding.GetBytes() method.
Of course, to do that you'll have to make sure you first understand precisely why you are trying to convert to bytes and what encoding you want to use. The code you posted seems to be trying to interpret your encoded bytes as the current Encoding.Default encoding, so you should use that encoding to encode the text. But there's not really any value in encoding to that encoding only to decode back to a C# string value. Assuming you've done it correctly, all that will happen is you'll get exactly the same string you started with.
In other words, while I can explain the behavior you're seeing to the extent that you've described it here, that's unlikely to address whatever broader problem you are actually trying to solve. If the above does not get you back on track, please post a new question in which you've included a good MCVE and a clear explanation of what that broader problem you're trying to solve actually is.
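To make the failure mode concrete, here is a small self-contained sketch (my own illustration, not your data) of why Convert.ToByte() fails for a character above 127 while an explicit text encoding succeeds. Note that on .NET Core and .NET 5+, Encoding.GetEncoding(1252) additionally requires registering CodePagesEncodingProvider:
string s = "A€";                                         // '€' is U+20AC as a C# char
try
{
    Console.WriteLine(Convert.ToByte(s[1]));             // throws OverflowException: 0x20AC > 255
}
catch (OverflowException)
{
    Console.WriteLine("Convert.ToByte failed for U+20AC");
}
byte[] bytes = Encoding.GetEncoding(1252).GetBytes(s);   // encodes to the single-byte code page
Console.WriteLine("{0:X2} {1:X2}", bytes[0], bytes[1]);  // 41 80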

Substring not refers to the given String exception in C#

I'm making a simple program in C# just for fun, where I'm coding a simple HEX-to-String translation method like this:
private String translate(String hex)
{
StringBuilder output = new StringBuilder();
int i = 0;
while (i+2 <= hex.Length)
{
String str = hex.Substring(i, i + 2);
output.Append((char)Convert.ToInt32(str, 16));
i += 2;
}
Every time I run the application, I get an unhandled exception with a message saying that the index and length of the Substring parameters have to refer to a location within the hex string. I added an if statement before the Substring line whose condition is true only if i + 2 is smaller than or equal to hex.Length, but this did not work properly. My application works properly when I use only a one-byte HEX input string, meaning one ASCII character. Can anyone help me with this so that I can go forward with my project?
Use:
hex.Substring(i, 2)
The second argument of Substring is the length, not the end index.
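Put together, a corrected version of the translate method would look roughly like this (the same logic as in the question, with only the Substring argument changed):
private String translate(String hex)
{
    StringBuilder output = new StringBuilder();
    int i = 0;
    while (i + 2 <= hex.Length)
    {
        String str = hex.Substring(i, 2);              // 2 is the length of the slice, not the end index
        output.Append((char)Convert.ToInt32(str, 16));
        i += 2;
    }
    return output.ToString();
}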

python parse binary data

I have an application (on Windows) that sends logs in binary format.
The C# code to convert that to strings is:
public static CounterSampleCollection Deserialize(BinaryReader binaryReader)
{
string name = binaryReader.ReadString(); // counter name
short valueCount = binaryReader.ReadInt16(); // number of counter values
var sampleCollection = new CounterSampleCollection(name);
for (int i = 0; i < valueCount; i++)
{
// each counter value consists of a timestamp + the actual value
long binaryTimeStamp = binaryReader.ReadInt64();
DateTime timeStamp = DateTime.FromBinary(binaryTimeStamp);
float value = binaryReader.ReadSingle();
sampleCollection.Add(new CounterSample(timeStamp, value));
}
return sampleCollection;
}
I have a Python UDP socket that is listening on the port, but I don't know how to convert the binary data I am receiving into strings so that I can parse it further.
Can any Python expert please help me convert that function into a Python function, so that I can parse the data I receive in Python?
My code so far:
import socket
UDP_IP = "0.0.0.0"
UDP_PORT = 40001
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
while True:
    data, addr = sock.recvfrom(8192)  # buffer size is 8192 bytes
    print "[+] : ", data
    # this prints the binary data
    # convert the data to strings ??
I use struct to unpack binary data.
https://docs.python.org/2/library/struct.html
Here's an example I use to unpack data from a static file.
import struct
comp = open(traceFile, 'rb')
aData = comp.read()
s = struct.Struct('>' +' i i i f f f d i H H')
sSize = s.size
for n in range(0, len(aData), sSize):
    print s.unpack(aData[n:n+sSize])
An example of reading from sockets is covered in the following:
http://www.binarytides.com/receive-full-data-with-the-recv-socket-function-in-python/
A snippet from that reference gives you some tools for writing the Python code you want. The snippet uses a try ... except clause and the sleep() function. The reference contains other nice tips. But the key point for your question is that the binary data naturally converts to a Python string.
total_data = []
begin = time.time()  # requires: import time
while 1:
    # recv something
    try:
        data = the_socket.recv(8192)
        if data:
            total_data.append(data)
            # change the beginning time for measurement
            begin = time.time()
        else:
            # sleep for some time to indicate a gap
            time.sleep(0.1)
    except:
        pass
# join all parts to make the final string
s = ''.join(total_data)  # join accepts type str, so the binary string is converted
After you have the string s, you need to parse it based on (1) the separator between data pairs, (2) the separator between the date and the count within a pair, and (3) the format of the date field. I do not know what your binary string looks like, so I will just sketch some code that you might use:
from datetime import datetime
results = []
pairs = s.split('\n')  # assume that the pairs are linefeed-separated
for pair in pairs:
    sdate, scount = pair.split(',')  # assume that a pair is separated by a comma
    timestamp = datetime.strptime(sdate, "%Y-%m-%d %H:%M:%S.%f")  # format must match sdate
    count = int(scount)
    results.append((timestamp, count))
return results  # assumes this code lives inside a function

How to prevent conversion of Windows-1252 argument into a Unicode string?

I've written my first COM classes. My unit tests work fine, but my first use of the COM objects has hit a snag.
The COM classes provide methods which accept a string, manipulate it and return a string. The consumer of the COM objects is a dBASE PLUS program.
When the input string contains common keyboard characters (ASCII 127 or lower), the COM methods work fine. However, if the string contains characters beyond the ASCII range, some of them get remapped from Windows-1252 to C#'s Unicode. This table shows the mapping that takes place: http://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1252.TXT
For example, if the dBASE program calls the COM object with:
oMyComObject.MyMethod("It will cost€123") where the € is hex 80,
the C# method receives it as Unicode:
public string MyMethod(string source)
{
// source is Unicode and now the Euro symbol is hex 20AC
...
}
I would like to avoid this remapping because I want the original hex content of the string.
I've tried adding the following to MyMethod to convert the string back to Windows-1252, but the Euro symbol gets lost because it becomes a question mark:
byte[] UnicodeBytes = Encoding.Unicode.GetBytes(source.ToString());
byte[] Win1252Bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), UnicodeBytes);
string Win1252 = Encoding.GetEncoding(1252).GetString(Win1252Bytes);
Is there a way to prevent this conversion of the "source" parameter to Unicode? Or, is there a way to convert it 100% from Unicode back to Windows-1252?
Yes, I'm answering my own question. The answer by "Jigsore" put me on the right track, but I want to explain more clearly in case someone else makes the same mistake I made.
I eventually figured out that I had misdiagnosed the problem. dBASE was passing the string fine and C# was receiving it fine. It was how I checked the contents of the string that was in error.
This turnkey example builds on Jigsore's answer:
void Main()
{
string unicodeText = "\u20AC\u0160\u0152\u0161";
byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
for (int i = 0; i < win1252bytes.Length; i++)
Console.Write("0x{0:X2} ", win1252bytes[i]); // output: 0x80 0x8A 0x8C 0x9A
// win1252String represents the string passed from dBASE to C#
string win1252String = Encoding.GetEncoding(1252).GetString(win1252bytes);
Console.WriteLine("\r\nWin1252 string is " + win1252String); // output: Win1252 string is €ŠŒš
Console.WriteLine("looking at the code of the first character the wrong way: " + (int)win1252String[0]);
// output: looking at the code of the first character the wrong way: 8364
byte[] bytes = Encoding.GetEncoding(1252).GetBytes(win1252String[0].ToString());
Console.WriteLine("looking at the code of the first character the right way: " + bytes[0]);
// output: looking at the code of the first character the right way: 128
// Warning: If your input contains character codes larger in value than what a byte
// can hold (ex: multi-byte Chinese characters), then you will need to look at more than just bytes[0].
}
The reason the first method was wrong is that casting (int)win1252String[0] (or the converse of casting an integer j to a character with (char)j) involves an implicit conversion with the Unicode character set C# uses.
I consider this resolved and would like to thank each person who took the time to comment or answer for their time and trouble. It is appreciated!
Actually you're doing the Unicode to Win-1252 conversion correctly, but you're performing an extra step. The original Win1252 codes are in the Win1252Bytes array!
Check the following code:
string unicodeText = "\u20AC\u0160\u0152\u0161";
byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
for (int i = 0; i < win1252bytes.Length; i++)
Console.Write("0x{0:X2} ", win1252bytes[i]);
The output shows the Win-1252 codes for the unicodeText string, you can check this by looking at the CP1252.TXT table.
