Reading data over serial port from voltmeter - c#

I'm sort of new at this and I'm writing a small application to read data from a voltmeter. It's a RadioShack Digital Multimeter 46-range. The purpose of my program is to perform something automatically when it detects a certain voltage. I'm using C# and I'm already familiar with the SerialPort class.
My program runs and reads the data in from the voltmeter. However, the data is all unformatted gibberish. The device does come with its own software that displays the voltage on the PC, but that doesn't help me, since I need to grab the voltage from my own program. I just can't figure out how to translate this data into something useful.
For reference, I'm using the SerialPort.Read() method:
byte[] voltage = new byte[100];
_serialPort.Read(voltage, 0, 99);
It grabs the data and displays it like so:
16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 3
0 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6
198 30 6 126 254 30 0 30 24 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 254 30 6
126 252 30 0 6 0 30 0 254 30 6 126 254 30 0
The space separates each element of the array. If I use a char[] array instead of byte[], I get complete gibberish:
▲ ? ? ▲ ♠ ~ ? ▲ ♠ ▲ ? ? ▲ ♠ ~ ? ▲ ♠ ▲ ? ? ▲ ♠ ~ ? ▲ ♠
Using the .ReadExisting() method gives me:
▲ ?~?♠~?▲ ▲? ▲ ?~♠~?▲ ?↑ ▲ ??~♠~?▲ F? ▲ ??~♠~?▲ D? ▲ ??~♠~?▲ f?
.ReadLine() times out, so that doesn't work. .ReadByte() and .ReadChar() just give me numbers similar to what .Read() puts into the array.
I'm in way over my head as I've never done something like this, not really sure where else to turn.

It sounds like you're close, but you need to figure out the correct Encoding to use.
To get a string from an array of bytes, you need to know the Code Page being used. If it's not covered in the manual, and you can't find it via a google/bing/other search, then you will need to use trial and error.
To see how to use GetChars() to get a string from a byte array, see Decoder.GetChars Method
In the code sample, look at this line:
Decoder uniDecoder = Encoding.Unicode.GetDecoder();
That line specifically states that the Unicode code page is the one being used to decode the bytes.
From there, you can use the Encoding class to specify a different code page. This is documented here: Encoding Class
If the encoding being used isn't one of the standard ones, you can use the Encoding(Int32) constructor overload. A list of valid code page IDs can be found at Code Pages Supported by Windows
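For illustration, here is a minimal trial-and-error sketch along those lines; the candidate code pages are only guesses to start with, and the sample buffer stands in for bytes read from your meter:

using System;
using System.Text;

class EncodingProbe
{
    static void Main()
    {
        // Stand-in for bytes read from the meter via SerialPort.Read().
        byte[] voltage = { 0x31, 0x32, 0x2E, 0x35, 0x56 };

        // Candidate code pages to try; extend this list as needed.
        // On .NET Core/5+, the legacy code pages also require registering
        // CodePagesEncodingProvider from System.Text.Encoding.CodePages.
        int[] candidates = { 65001 /* UTF-8 */, 1252 /* Windows-1252 */, 437 /* OEM US */ };

        foreach (int codePage in candidates)
        {
            Decoder decoder = Encoding.GetEncoding(codePage).GetDecoder();
            char[] chars = new char[decoder.GetCharCount(voltage, 0, voltage.Length)];
            decoder.GetChars(voltage, 0, voltage.Length, chars, 0);
            Console.WriteLine("Code page {0}: {1}", codePage, new string(chars));
        }
    }
}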

There are two distinct strategies for solving your communications problem.
1. Locate and refer to appropriate documentation, and design/modify a program to implement the specification.
The following documents may be appropriate, but are not guaranteed to describe the particular DVM model that you have. Nonetheless, they may serve as a starting point.
Note that the authors of these documents comment that the respective models may be 'visually identical', but also that 'open-source packages that reportedly worked on Linux with earlier RS-232 models do not work with the 2200039':
http://forums.parallax.com/attachment.php?attachmentid=88160&d=1325568007
http://sigrok.org/wiki/RadioShack_22-812
http://code.google.com/p/rs22812/
2. Try to reverse engineer the protocol. If you can read the data in a loop and collect the results, a good approach to reverse engineering a protocol is to apply various representative signals to the DVM: a short-circuit resistance measurement, various stable voltage measurements, etc.
The technique I suggest as most valuable is to use an automated variable signal generator. By analyzing the patterns in the data, you should more readily be able to identify which bytes represent the raw readings and which represent stable descriptive data, like the unit of measurement, mode of operation, etc.
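To collect that data in the first place, a minimal C# capture loop along these lines would do; the port name and settings below are placeholders, so substitute whatever your DVM actually uses:

using System;
using System.IO.Ports;

class SerialCapture
{
    static void Main()
    {
        // Placeholder settings; substitute your meter's actual parameters.
        using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
        {
            port.ReadTimeout = 2000;
            port.Open();

            var buffer = new byte[64];
            while (true)
            {
                // Read() returns however many bytes were available, up to the
                // buffer size; a TimeoutException means nothing arrived in 2 s.
                int n = port.Read(buffer, 0, buffer.Length);
                Console.WriteLine(BitConverter.ToString(buffer, 0, n));
            }
        }
    }
}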

Some digital multimeters use 7-bit data transfer. You should set the serial port to 7 data bits instead of the standard 8.
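With the .NET SerialPort class that is a one-property change. A sketch, where the port name, baud rate, and parity are only guesses to check against your meter's documentation:

var port = new SerialPort("COM1", 9600); // placeholder name and baud rate
port.DataBits = 7;                       // 7 data bits instead of the default 8
port.Parity = Parity.Even;               // 7E1 is a common pairing, but verify
port.StopBits = StopBits.One;
port.Open();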

I modified and merged a couple of older open-source C programs on Linux in order to read the data values from the RadioShack meter whose part number is 2200039. This is over USB. I really only added a C or an F on one range. My program is here, and it has links to the two programs I started from.
I know this example is not in C#, but it does provide the format info you need. Think of it as the API documentation written in C; you just have to translate it into C# yourself.
The protocol runs at 4800 baud, and 8N1 appears to work.
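In C# terms those settings would be, with the port name as a placeholder:

var port = new SerialPort("COM3", 4800, Parity.None, 8, StopBits.One); // 4800 baud, 8N1
port.Open();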

Related

Profiling results - how to understand them

I profiled my console application, which uses Unity IoC and makes a lot of calls with HttpClient. How do I interpret the results?
Function Name, Inclusive Samples, Exclusive Samples, Inclusive Samples %, Exclusive Samples %
Microsoft.Practices.Unity.UnityContainer.Resolve 175 58 38.89 12.89
Microsoft.Practices.Unity.UnityContainer..ctor 29 29 6.44 6.44
System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[System.DateTime].Start 36 13 8.00 2.89
Microsoft.Practices.Unity.UnityContainerExtensions.RegisterInstance 9 9 2.00 2.00
System.Net.Http.HttpClientHandler..ctor 9 9 2.00 2.00
System.Net.Http.HttpMessageInvoker.Dispose 9 9 2.00 2.00
System.Activator.CreateInstance 20 8 4.44 1.78
Microsoft.Practices.Unity.ObjectBuilder.NamedTypeDependencyResolverPolicy.Resolve 115 3 25.56 0.67
What does it mean that the inclusive samples for Microsoft.Practices.Unity.UnityContainer.Resolve are 38.89% but the exclusive ones are 12.89%? Is that OK? Not too much?
"Inclusive" means "exclusive time plus time spent in all callees".
Forget the "exclusive" stuff.
"Inclusive" is what it's costing you.
It says UnityContainer.Resolve is costing you 39% of time,
and Unity.ObjectBuilder.NamedTypeDependencyResolverPolicy.Resolve is costing you 26%.
It looks like the first one calls the second one, so you can't add their times together.
If you could avoid calling all that stuff, you would save at least 40%, giving you a speedup factor of at least 100/60 = 1.67, i.e. about 67% faster.
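As a sanity check against the numbers above: Resolve has 175 inclusive samples but only 58 exclusive, so 175 - 58 = 117 samples were spent in its callees. NamedTypeDependencyResolverPolicy.Resolve accounts for 115 inclusive samples, nearly all of that, which fits the reading that the first calls the second and that their percentages must not be added.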
By the way, that Unity stuff, while not exactly deprecated, is no longer being maintained.

Read Fortran binary file into C# without knowledge of Fortran source code?

Part one of my question is whether this is even possible. I will briefly describe my situation first.
My work has a licence for a piece of software that performs a very specific task, but most of our time is spent exporting the results into Excel etc. to perform further analysis. I was wondering if it was possible to dump all of the data into a C# object so that I can write my own analysis code, which would save us a lot of time.
The software we licence was written in Fortran, but we have no access to the source code. The file looks like it is written out in binary, but I do not know if it is unformatted, sequential, etc. (is there any way to discern this?).
I have used some of the other answers on this site to successfully read the data into a byte[], but this is as far as I have got. I have tried converting portions to doubles (which I assume most of the data is), but the numbers do not strike me as meaningful (most appear too large or too small).
I have the documentation for the software, and I can see that most of the internal variable names are 8-character strings; would these be saved with the data? If not, I think it would be almost impossible to match all the data to its corresponding variables. I imagine most of the data will be double arrays of the same length (the number of time points), but there will also be some longer arrays, as some data would have been interpolated where shorter time steps were needed for convergence.
Any tips or hints would be appreciated, or even if someone tells me its just not possible so I don't waste any more time trying to solve this.
Thank you.
If it were formatted, you would be able to read it with a text editor: the numbers would be written in plain text.
So yes, it's probably unformatted.
There are different methods still. The file can have a fixed record length, or it might have a variable one.
But it seems to me that the first 4 bytes represent an integer containing the length of that record in bytes. For example, here I've written the numbers 1 to 10, and then 11 to 30 into an unformatted file, and the file looks like this:
40 1 2 3 4 5 6 7 8 9 10 40
80 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 80
(I added the new line) In here, the first 4 bytes represent the number 40, followed by 10 4-byte blocks representing the numbers 1-10, followed by another 40. The next record starts with an 80, and 20 4-byte blocks containing the numbers 11 through 30, followed by another 80.
So that is a pattern you could look for: read the first 4 bytes and convert them to an integer, then read that many bytes and convert them to whatever you think the data should be (4-byte floats, 8-byte floats, et cetera), and then check whether the next 4 bytes again represent the number that you read first.
But there are other methods of writing data in Fortran that don't have this behaviour, for example direct access and stream. So no guarantees.
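If you want to test for that pattern from C#, here is a minimal sketch. It assumes little-endian 4-byte record markers (common, but compiler-dependent) and guesses that the payload holds 8-byte doubles:

using System;
using System.IO;

class FortranRecordReader
{
    static void Main(string[] args)
    {
        // Pass the path to the binary file as the first argument.
        using (var reader = new BinaryReader(File.OpenRead(args[0])))
        {
            while (reader.BaseStream.Position < reader.BaseStream.Length)
            {
                int length = reader.ReadInt32();          // leading record marker
                byte[] record = reader.ReadBytes(length); // record payload
                int trailer = reader.ReadInt32();         // trailing record marker

                if (trailer != length)
                    throw new InvalidDataException("Markers disagree; probably not sequential unformatted.");

                // Interpreting the payload as 8-byte doubles is only one guess.
                for (int i = 0; i + 8 <= record.Length; i += 8)
                    Console.Write(BitConverter.ToDouble(record, i) + " ");
                Console.WriteLine();
            }
        }
    }
}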

Stream reader - new line - carriage return

I'm reading an IES file, here is a little blurb about them...
"The photometric data is stored in an ASCII file. Each line in the file must be less than 132 characters long and must be terminated by a carriage return/line-feed character sequence. Longer lines can be continued by inserting a carriage return/line-feed character sequence."
After a bunch of header information, the line I'm after is 14 lines down. But it can extend any number of lines down from there because of the 132-character restriction. And if lines are ended and continued with a carriage return, how can I tell where to stop reading the data? The following chunk of data is the exact format: a series of angles. Each set may or may not begin and/or end with 0, 90 & 180. What am I missing? How can I collect this data? Below is an example, starting at line 14. Thanks.
0 2.5 5 7.5 10 12.5 15 17.5 20 22.5 25 27.5 30 32.5 35 37.5
40 42.5 45 47.5 50 52.5 55 57.5 60 62.5 65 67.5 70 72.5 75
77.5 80 82.5 85 87.5 90
[no space]
0 22.5 45 67.5 90 112.5 135 157.5 180 202.5 225 247.5
270 292.5 315 337.5 360
Google and the amazing people that post their open source code are your friends :)
https://github.com/kmorin/IESparser/blob/master/IESParser.cs
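If you would rather parse it yourself, the usual trick is to ignore the physical line breaks entirely and keep reading whitespace-separated numbers until you have as many values as the file said to expect (an IES file gives the number of vertical and horizontal angles on an earlier header line). A minimal sketch, assuming those counts have already been parsed:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

class IesAngles
{
    // Reads `count` whitespace-separated numbers, spanning as many lines as needed.
    static double[] ReadValues(TextReader reader, int count)
    {
        var values = new List<double>(count);
        string line;
        while (values.Count < count && (line = reader.ReadLine()) != null)
        {
            foreach (var token in line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries))
                values.Add(double.Parse(token, CultureInfo.InvariantCulture));
        }
        return values.ToArray();
    }

    static void Main()
    {
        // The vertical angles from the question, split across three lines.
        var sample = new StringReader(
            "0 2.5 5 7.5 10 12.5 15 17.5 20 22.5 25 27.5 30 32.5 35 37.5\n" +
            "40 42.5 45 47.5 50 52.5 55 57.5 60 62.5 65 67.5 70 72.5 75\n" +
            "77.5 80 82.5 85 87.5 90");
        double[] angles = ReadValues(sample, 37); // 37 would come from the header
        Console.WriteLine(angles.Length);
    }
}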

How do I determine if a packet is RTP/RTCP?

I am using SharpPcap, which is built on WinPcap, to capture UDP traffic. My end goal is to capture the audio data from H.323 calls and save those phone conversations as WAV files. But first things first: I need to figure out what the UDP packets crossing the NIC actually are.
SharpPcap provides a UdpPacket class that gives me access to the PayloadData of the message, but I am unsure what to do with this data. It's a Byte[] array, and I don't know how to go about determining whether it's an RTP or RTCP packet.
I've Googled this topic but there isn't much out there. Any help is appreciated.
Look at the definitions for RTP and RTCP packets in RFC 3550:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|V=2|P|X|  CC   |M|     PT      |       sequence number         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           timestamp                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           synchronization source (SSRC) identifier            |
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
|            contributing source (CSRC) identifiers             |
|                             ....                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
I won't reproduce the legend for all of the above - it's quite long - but take a look at Section 5.1.
With that in hand you'll see there's not a lot you can do to determine if a packet contains RTP/RTCP. Best of all would be to sniff the media stream negotiation, as other posters have suggested. Second best would be some sort of pattern matching over a sequence of packets: the first two bits will be 10, the next two bits constant, bits 9 through 15 constant, bits 16 through 31 incrementing, and so on.
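A rough C# sketch of that heuristic, using the RFC 3550 field offsets; the RTCP-range check on the payload type is the usual trick for telling the two apart, but treat the result as a guess to confirm over several packets:

// Rough heuristic for a single UDP payload, per the RFC 3550 header layout.
static bool LooksLikeRtp(byte[] payload)
{
    if (payload.Length < 12)            // the fixed RTP header is 12 bytes
        return false;

    int version = payload[0] >> 6;      // top two bits: version, must be 2 (binary 10)
    int payloadType = payload[1] & 0x7F;

    // RTCP packet types 200-204 sit in the same byte position; with the top
    // bit masked off they land in 72-76, a range RTP payload types avoid.
    bool rtcpTypeRange = payloadType >= 72 && payloadType <= 76;

    return version == 2 && !rtcpTypeRange;
}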
I would look at the packet detectors in Wireshark, which can decode most common protocols available.
If communications are done over RTSP, take a look at the UDP port that is negotiated upon SETUP.
The UDP port will tell you whether it is RTP or RTCP (it is also worth noting that RTP is usually carried on even port numbers and RTCP on odd ones).
Finally, if you are communicating via RTSP, you can take the list of payload numbers from the SDP returned by DESCRIBE and then check the payload type in the RTP header to tell which codec you need to decode the payload.
I believe you need to look at the SIP packets that come before the RTP packets.
There is a discussion on this issue on Pcap.Net site.

Binary to ASCII and back again

I'm trying to interface with a hardware device via the serial port. When I use software like Portmon to see the messages they look like this:
42 21 21 21 21 41 45 21 26 21 29 21 26 59 5F 41 30 21 2B 21 27
42 21 21 21 21 41 47 21 27 21 28 21 27 59 5D 41 32 21 2A 21 28
When I run them through a hex-to-ASCII converter, the commands don't make sense. Are these messages in fact something other than hex? My hope was to see the messages the device is passing and emulate them using C#. What can I do to find out exactly what the messages are?
Does the hardware device specify a protocol? Just because it's a serial port connection doesn't mean it has to be ASCII/readable English text. It could just as well be a sequence of bytes where, for example, 42 is a command and 21 21 21 21 is data for that command. It could be an initialization sequence or whatever.
At the end of the day, all you work with is a series of bytes. The meaning of them can be found in a protocol specification or if you don't have one, you need to manually look at each command. Issue a command to the device, capture the input, issue another command.
Look for patterns. Common Initialization? What could be the commands? What data gets passed?
Yes, it's tedious, but reverse engineering is rarely easy.
The ASCII for the Hex is this:
B!!!!AE!&!)!&Y_A0!+!'
B!!!!AG!'!(!'Y]A2!*!(
That does look like some sort of protocol to me, with some Initialization Sequence (B!!!!) and commands (AE and AG), but that's just guessing.
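Incidentally, you can do that hex-to-ASCII conversion yourself in C#; here the dump string is the first line captured above:

using System;
using System.Linq;
using System.Text;

class HexDump
{
    static void Main()
    {
        string dump = "42 21 21 21 21 41 45 21 26 21 29 21 26 59 5F 41 30 21 2B 21 27";

        // Parse each space-separated hex pair into a byte, then decode as ASCII.
        byte[] bytes = dump.Split(' ').Select(h => Convert.ToByte(h, 16)).ToArray();
        Console.WriteLine(Encoding.ASCII.GetString(bytes)); // prints B!!!!AE!&!)!&Y_A0!+!'
    }
}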
The device is sending data to the computer. All digital data has the form of ones and zeroes, such as 10101001010110010.... Most often one combines groups of eight such bits (binary digits) into bytes, so all data consists of bytes. One byte can thus represent any of the 2^8 = 256 values 0 to 255, or, in hexadecimal notation, any of the numbers 0x00 to 0xFF.
Sometimes the bytes represent a string of alphanumerical (and other) characters, often ASCII encoded. This data format assigns a character to each value from 0 to 127. But all data is not ASCII-encoded characters.
For instance, if the device is a light-intensity sensor, then each byte could give the light intensity as a number between 0 (pitch-black) and 255 (as bright as it gets). Or, the data could be a bitmap image. Then the data would start with a couple of well-defined structures (namely this and this) specifying the colour depth (number of bits per pixel, i.e. more or less the number of colours), the width, the height, and the compression of the bitmap. Then the pixel data would begin. Typically the bytes would go BBGGRRBBGGRRBBGGRR where the first BB is the blue intensity of the first pixel, the first GG is the green intensity of the first pixel, the first RR is the red intensity of the first pixel, the second BB is the blue intensity of the second pixel, and so on.
In fact, the data could mean anything. What kind of device is it? Does it have an open specification?
