Using DllImport to import an object - C#

I have a DLL for a C++ class (SLABHIDDevice.dll). I am trying to use the functions of this DLL in a C# .NET application. The DLL contains several methods which I can use easily with statements such as this...
(I apologize if I get some of the terminology wrong here; I am new to using DLLs.)
[DllImport("SLABHIDDevice.dll")]
public static extern byte GetHidString
(Int32 deviceIndex, Int32 vid, Int32 pid,
Byte hidStringType, String deviceString,
Int32 deviceStringLength);
The documentation for SLABHIDDevice.dll says that it also contains a class object, CHIDDevice, and that object has a whole list of member functions such as Open().
If I try to import Open() using the same syntax as above, I get an error saying that it cannot find an entry point for the Open() function. Is this because Open() is a member of CHIDDevice?
This is the makeup of the DLL from DUMPBIN... The bottom three functions are the only ones I am able to get to work. Does anyone know what syntax I need to use to get the other ones? What do the question marks that precede the function names mean?
Dump of file SLABHIDDEVICE.dll
File Type: DLL
Section contains the following exports for SLABHIDDevice.dll
00000000 characteristics
47E13E0F time date stamp Wed Mar 19 12:23:43 2008
0.00 version
1 ordinal base
26 number of functions
26 number of names
ordinal hint RVA name
4 0 00001000 ??0CHIDDevice@@QAE@ABV0@@Z
5 1 00001330 ??0CHIDDevice@@QAE@XZ
6 2 00001430 ??1CHIDDevice@@UAE@XZ
7 3 00001080 ??4CHIDDevice@@QAEAAV0@ABV0@@Z
8 4 00020044 ??_7CHIDDevice@@6B@
9 5 00001460 ?Close@CHIDDevice@@QAEEXZ
10 6 00001C70 ?FlushBuffers@CHIDDevice@@QAEHXZ
11 7 00001CA0 ?GetFeatureReportBufferLength@CHIDDevice@@QAEGXZ
12 8 00001850 ?GetFeatureReport_Control@CHIDDevice@@QAEEPAEK@Z
13 9 00001C80 ?GetInputReportBufferLength@CHIDDevice@@QAEGXZ
14 A 00001BE0 ?GetInputReport_Control@CHIDDevice@@QAEEPAEK@Z
15 B 00001A20 ?GetInputReport_Interrupt@CHIDDevice@@QAEEPAEKGPAK@Z
16 C 00001CB0 ?GetMaxReportRequest@CHIDDevice@@QAEKXZ
17 D 00001C90 ?GetOutputReportBufferLength@CHIDDevice@@QAEGXZ
18 E 00001730 ?GetString@CHIDDevice@@QAEEEPADK@Z
19 F 00001CC0 ?GetTimeouts@CHIDDevice@@QAEXPAI0@Z
20 10 00001700 ?IsOpened@CHIDDevice@@QAEHXZ
21 11 000014A0 ?Open@CHIDDevice@@QAEEKGGG@Z
22 12 00001360 ?ResetDeviceData@CHIDDevice@@AAEXXZ
23 13 00001810 ?SetFeatureReport_Control@CHIDDevice@@QAEEPAEK@Z
24 14 00001B80 ?SetOutputReport_Control@CHIDDevice@@QAEEPAEK@Z
25 15 000018C0 ?SetOutputReport_Interrupt@CHIDDevice@@QAEEPAEK@Z
26 16 00001CE0 ?SetTimeouts@CHIDDevice@@QAEXII@Z
3 17 00001320 GetHidGuid
2 18 00001230 GetHidString
1 19 00001190 GetNumHidDevices
Summary
6000 .data
7000 .rdata
5000 .reloc
4000 .rsrc
1C000 .text

You cannot use P/Invoke to call instance methods of a C++ class. The primary hang-up is that you can't create an object of the class: you cannot discover the required memory allocation size. Passing the implicit "this" pointer to the instance method is another problem; it needs to be passed in a register.
You'll need to create a managed wrapper for the class, which requires using the C++/CLI language. Google "C++/CLI wrapper" for good hits.

C++ uses name mangling. All the weird symbols around your function names are a way for the compiler/linker to know the calling convention, parameters, return type, etc.
If you don't want your functions' names mangled, you need to surround their declarations with an
extern "C" {
    // functions declared here are exported with their plain C names
}
block.
See http://en.wikipedia.org/wiki/Name_mangling

Related

Memory usage of dynamic type in c#

Does a dynamic type use more memory than the corresponding static type?
For example, does the following field use only four bytes?
dynamic foo = (int) 1488;
Short answer:
No. It will actually use 12 bytes on a 32-bit machine and 24 bytes on a 64-bit machine.
Long Answer
A dynamic field will be stored as an object, but at run time the runtime has to do extra work to figure out what to do with it, and more memory is used in the process. Think of dynamic as a fancy object.
Here is a class:
class Mine
{
}
Here is the overhead for the above object on 32bit:
|------------------------| -4 bytes
|   Object Header Word   |
|------------------------| +0 bytes
|  Method Table Pointer  |
|------------------------| +4 bytes for Method Table Pointer
A total of 12 bytes needs to be allocated to it since the smallest reference type on 32bit is 12 bytes.
If we add one field to that class like this:
class Mine
{
public int Field = 1488;
}
It will still take 12 bytes because the overhead and the int field can fit in the 12 bytes.
If we add another int field, it will take 16 bytes.
However, if we add one dynamic field to that class like this:
class Mine
{
public dynamic Field = (int)1488;
}
It will NOT be 12 bytes. The dynamic field will be treated like an object and thus the size will be 12 + 12 = 24 bytes.
What is interesting is if you do this instead:
class Mine
{
public dynamic Field = (bool)false;
}
An instance of Mine will still take 24 bytes, because even though the dynamic field is only a boolean, it is still treated like an object.
On a 64bit machine, an instance of Mine with dynamic will take 48 bytes since the smallest reference type on 64 bit is 24 bytes (24 + 24 = 48 bytes).
Here are some gotchas you should be aware of and see this answer for size of object.
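To sanity-check these numbers, the same GC.GetTotalMemory technique used elsewhere on this page can measure the average allocation per instance. Here is a rough sketch; the class and method names are made up for illustration, and the exact figures depend on CLR version and bitness:

```csharp
using System;

class DynamicSizeProbe
{
    class WithInt { public int Field = 1488; }
    class WithDynamic { public dynamic Field = (int)1488; }

    // Average bytes allocated per instance, including any boxed values
    // the instance causes to be allocated (the dynamic field boxes its int).
    static long MeasurePerInstance(Func<object> factory)
    {
        const int Count = 100000;
        var keep = new object[Count];
        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < Count; i++) keep[i] = factory();
        long after = GC.GetTotalMemory(true);
        GC.KeepAlive(keep);
        return (after - before) / Count;
    }

    static void Main()
    {
        // On 32-bit, expect roughly 12 for WithInt and 24 for WithDynamic,
        // since the boxed int behind the dynamic field is a separate object.
        Console.WriteLine(MeasurePerInstance(() => new WithInt()));
        Console.WriteLine(MeasurePerInstance(() => new WithDynamic()));
    }
}
```

Whatever the absolute numbers on a given runtime, the dynamic version should always come out larger, because the payload lives in a separate boxed object.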

Read Fortran binary file into C# without knowledge of Fortran source code?

Part one of my question is even if this is possible? I will briefly describe my situation first.
My work has a licence for a software that performs a very specific task, however most of our time is spent exporting data from the results into excel etc to perform further analysis. I was wondering if it was possible to dump all of the data into a C# object so that I can then write my own analysis code, which would save us a lot of time.
The software we licence was written in Fortran, but we have no access to the source code. The file looks like it is written out in binary, however I do not know if it is unformatted / sequential etc. (is there any way to discern this?).
I have used some of the other answers on this site to successfully read in the data to a byte[], however this is as far as I have got. I have tried to change portions to doubles (which I assume most of the data is) but the numbers do not strike me as being meaningful (most appear too large or too small).
I have the documentation for the software and I can see that most of the internal variable names are 8 character strings, would this be saved with the data? If not I think it would be almost impossible to match all the data to its corresponding variable. I imagine most of the data will be double arrays of the same length (the number of time points), however there will also be some arrays with a longer length as some data would have been interpolated where shorter time steps were needed for convergence.
Any tips or hints would be appreciated, or even if someone tells me its just not possible so I don't waste any more time trying to solve this.
Thank you.
If it were formatted, you should be able to read it with a text editor: the numbers would be written in plain text.
So yes, it's probably unformatted.
There are still different methods: the file can have a fixed record length, or it might have a variable one.
But it seems to me that the first 4 bytes represent an integer containing the length of that record in bytes. For example, here I've written the numbers 1 to 10, and then 11 to 30 into an unformatted file, and the file looks like this:
40 1 2 3 4 5 6 7 8 9 10 40
80 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 80
(I added the line breaks.) In here, the first 4 bytes represent the number 40, followed by 10 4-byte blocks representing the numbers 1-10, followed by another 40. The next record starts with an 80, then 20 4-byte blocks containing the numbers 11 through 30, followed by another 80.
So that might be a pattern you could try to see. Read the first 4 bytes and convert them to integer, then read that many bytes and convert them to whatever you think it should be (4 byte float, 8 byte float, et cetera), and then check whether the next 4 bytes again represents the number that you read first.
But there are other ways to write data in Fortran that don't have this behaviour, for example direct access and stream I/O. So no guarantees.
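The pattern check described above can be sketched in C# with BinaryReader. This is only a sketch under the assumption of 4-byte little-endian record markers; real files may use a different marker width or byte order:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class FortranRecordReader
{
    // Reads sequential unformatted Fortran records assumed to be laid out as
    // [int32 length][payload bytes][int32 length], returning each payload.
    static byte[][] ReadRecords(Stream s)
    {
        var records = new List<byte[]>();
        using (var br = new BinaryReader(s))
        {
            while (br.BaseStream.Position < br.BaseStream.Length)
            {
                int len = br.ReadInt32();           // leading length marker
                byte[] payload = br.ReadBytes(len); // record body
                int trailer = br.ReadInt32();       // trailing length marker
                if (trailer != len)
                    throw new InvalidDataException(
                        "Marker mismatch: probably not a sequential unformatted file.");
                records.Add(payload);
            }
        }
        return records.ToArray();
    }

    static void Main()
    {
        // Build an in-memory file shaped like the example above: one record
        // of ten 4-byte integers (1..10), bracketed by the marker value 40.
        var ms = new MemoryStream();
        var bw = new BinaryWriter(ms);
        bw.Write(40);
        for (int i = 1; i <= 10; i++) bw.Write(i);
        bw.Write(40);
        ms.Position = 0;

        byte[][] recs = ReadRecords(ms);
        Console.WriteLine(recs.Length);    // prints 1
        Console.WriteLine(recs[0].Length); // prints 40
    }
}
```

Once a record parses cleanly, you can try interpreting its payload with BitConverter.ToDouble or BitConverter.ToSingle and see which interpretation yields plausible values.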

Why does struct alignment depend on whether a field type is primitive or user-defined?

In Noda Time v2, we're moving to nanosecond resolution. That means we can no longer use an 8-byte integer to represent the whole range of time we're interested in. That has prompted me to investigate the memory usage of the (many) structs of Noda Time, which has in turn led me to uncover a slight oddity in the CLR's alignment decision.
Firstly, I realize that this is an implementation decision, and that the default behaviour could change at any time. I realize that I can modify it using [StructLayout] and [FieldOffset], but I'd rather come up with a solution which didn't require that if possible.
My core scenario is that I have a struct which contains a reference-type field and two other value-type fields, where those fields are simple wrappers for int. I had hoped that that would be represented as 16 bytes on the 64-bit CLR (8 for the reference and 4 for each of the others), but for some reason it's using 24 bytes. I'm measuring the space using arrays, by the way - I understand that the layout may be different in different situations, but this felt like a reasonable starting point.
Here's a sample program demonstrating the issue:
using System;
using System.Runtime.InteropServices;
#pragma warning disable 0169
struct Int32Wrapper
{
int x;
}
struct TwoInt32s
{
int x, y;
}
struct TwoInt32Wrappers
{
Int32Wrapper x, y;
}
struct RefAndTwoInt32s
{
string text;
int x, y;
}
struct RefAndTwoInt32Wrappers
{
string text;
Int32Wrapper x, y;
}
class Test
{
static void Main()
{
Console.WriteLine("Environment: CLR {0} on {1} ({2})",
Environment.Version,
Environment.OSVersion,
Environment.Is64BitProcess ? "64 bit" : "32 bit");
ShowSize<Int32Wrapper>();
ShowSize<TwoInt32s>();
ShowSize<TwoInt32Wrappers>();
ShowSize<RefAndTwoInt32s>();
ShowSize<RefAndTwoInt32Wrappers>();
}
static void ShowSize<T>()
{
long before = GC.GetTotalMemory(true);
T[] array = new T[100000];
long after = GC.GetTotalMemory(true);
Console.WriteLine("{0}: {1}", typeof(T),
(after - before) / array.Length);
}
}
And the compilation and output on my laptop:
c:\Users\Jon\Test>csc /debug- /o+ ShowMemory.cs
Microsoft (R) Visual C# Compiler version 12.0.30501.0
for C# 5
Copyright (C) Microsoft Corporation. All rights reserved.
c:\Users\Jon\Test>ShowMemory.exe
Environment: CLR 4.0.30319.34014 on Microsoft Windows NT 6.2.9200.0 (64 bit)
Int32Wrapper: 4
TwoInt32s: 8
TwoInt32Wrappers: 8
RefAndTwoInt32s: 16
RefAndTwoInt32Wrappers: 24
So:
If you don't have a reference type field, the CLR is happy to pack Int32Wrapper fields together (TwoInt32Wrappers has a size of 8)
Even with a reference type field, the CLR is still happy to pack int fields together (RefAndTwoInt32s has a size of 16)
Combining the two, each Int32Wrapper field appears to be padded/aligned to 8 bytes. (RefAndTwoInt32Wrappers has a size of 24.)
Running the same code in the debugger (but still a release build) shows a size of 12.
A few other experiments have yielded similar results:
Putting the reference type field after the value type fields doesn't help
Using object instead of string doesn't help (I expect it's "any reference type")
Using another struct as a "wrapper" around the reference doesn't help
Using a generic struct as a wrapper around the reference doesn't help
If I keep adding fields (in pairs for simplicity), int fields still count for 4 bytes, and Int32Wrapper fields count for 8 bytes
Adding [StructLayout(LayoutKind.Sequential, Pack = 4)] to every struct in sight doesn't change the results
Does anyone have any explanation for this (ideally with reference documentation) or a suggestion of how I can get hint to the CLR that I'd like the fields to be packed without specifying a constant field offset?
I think this is a bug. You are seeing the side-effect of automatic layout, it likes to align non-trivial fields to an address that's a multiple of 8 bytes in 64-bit mode. It occurs even when you explicitly apply the [StructLayout(LayoutKind.Sequential)] attribute. That is not supposed to happen.
You can see it by making the struct members public and appending test code like this:
var test = new RefAndTwoInt32Wrappers();
test.text = "adsf";
test.x.x = 0x11111111;
test.y.x = 0x22222222;
Console.ReadLine(); // <=== Breakpoint here
When the breakpoint hits, use Debug + Windows + Memory + Memory 1. Switch to 4-byte integers and put &test in the Address field:
0x000000E928B5DE98 0ed750e0 000000e9 11111111 00000000 22222222 00000000
0xe90ed750e0 is the string pointer on my machine (not yours). You can easily see the Int32Wrappers, with the extra 4 bytes of padding that turned the size into 24 bytes. Go back to the struct and put the string last. Repeat and you'll see the string pointer is still first. Violating LayoutKind.Sequential, you got LayoutKind.Auto.
It is going to be difficult to convince Microsoft to fix this, it has worked this way for too long so any change is going to be breaking something. The CLR only makes an attempt to honor [StructLayout] for the managed version of a struct and make it blittable, it in general quickly gives up. Notoriously for any struct that contains a DateTime. You only get the true LayoutKind guarantee when marshaling a struct. The marshaled version certainly is 16 bytes, as Marshal.SizeOf() will tell you.
Using LayoutKind.Explicit fixes it, not what you wanted to hear.
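For reference, a minimal sketch of that LayoutKind.Explicit workaround. The struct mirrors RefAndTwoInt32Wrappers from the question, and the field offsets assume a 64-bit process where a reference occupies 8 bytes:

```csharp
using System;
using System.Runtime.InteropServices;

struct Int32Wrapper { public int x; }

// Explicit layout pins each field to a fixed offset, which removes the
// 8-byte padding the automatic layout inserts after each Int32Wrapper.
[StructLayout(LayoutKind.Explicit)]
struct RefAndTwoInt32WrappersExplicit
{
    [FieldOffset(0)] public string text;     // reference: 8 bytes on 64-bit
    [FieldOffset(8)] public Int32Wrapper x;  // 4 bytes
    [FieldOffset(12)] public Int32Wrapper y; // 4 bytes, 16 total
}

class ExplicitLayoutDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);
        var array = new RefAndTwoInt32WrappersExplicit[100000];
        long after = GC.GetTotalMemory(true);
        // Expected to be 16 on the 64-bit CLR, per the answer above.
        Console.WriteLine((after - before) / array.Length);
        GC.KeepAlive(array);
    }
}
```

Note that explicit layout with a reference field is only legal while the reference stays pointer-aligned and does not overlap a value field, as here.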
EDIT2
struct RefAndTwoInt32Wrappers
{
public int x;
public string s;
}
This code will be 8 byte aligned so the struct will have 16 bytes. By comparison this:
struct RefAndTwoInt32Wrappers
{
public int x,y;
public string s;
}
Will be 4-byte aligned, so this struct will also have 16 bytes. So the rationale here is that struct alignment in the CLR is determined by its most aligned field; classes obviously cannot do that, so they remain 8-byte aligned.
Now if we combine all that and create struct:
struct RefAndTwoInt32Wrappers
{
public int x,y;
public Int32Wrapper z;
public string s;
}
It will have 24 bytes: {x,y} will have 4 bytes each and {z,s} will have 8 bytes each. Once we introduce a reference type in the struct, the CLR will always align our custom struct to match the class alignment.
struct RefAndTwoInt32Wrappers
{
public Int32Wrapper z;
public long l;
public int x,y;
}
This code will have 24 bytes, since Int32Wrapper will be aligned the same as long. So the custom struct wrapper will always align to the highest/best-aligned field in the structure, or to its own internal most significant fields. So in the case of a ref string that is 8-byte aligned, the struct wrapper will align to that.
In conclusion, a custom struct field inside a struct will always be aligned to the highest-aligned instance field in the structure. I'm not sure whether this is a bug, but without some evidence I'm going to stick by my opinion that this might be a conscious decision.
EDIT
The sizes are actually accurate only when allocated on the heap; the structs themselves have smaller sizes (the exact sizes of their fields). Further analysis seems to suggest that this might be a bug in the CLR code, but that needs to be backed up by evidence.
I will inspect the CLI code and post further updates if something useful is found.
This is an alignment strategy used by the .NET memory allocator.
public static RefAndTwoInt32s[] test = new RefAndTwoInt32s[1];
static void Main()
{
test[0].text = "a";
test[0].x = 1;
test[0].x = 1;
Console.ReadKey();
}
This code compiled with .net40 under x64, In WinDbg lets do the following:
Lets find the type on the Heap first:
0:004> !dumpheap -type Ref
Address MT Size
0000000003e72c78 000007fe61e8fb58 56
0000000003e72d08 000007fe039d3b78 40
Statistics:
MT Count TotalSize Class Name
000007fe039d3b78 1 40 RefAndTwoInt32s[]
000007fe61e8fb58 1 56 System.Reflection.RuntimeAssembly
Total 2 objects
Once we have it lets see what's under that address:
0:004> !do 0000000003e72d08
Name: RefAndTwoInt32s[]
MethodTable: 000007fe039d3b78
EEClass: 000007fe039d3ad0
Size: 40(0x28) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Fields:
None
We see that this is a ValueType and its the one we created. Since this is an array we need to get the ValueType def of a single element in the array:
0:004> !dumparray -details 0000000003e72d08
Name: RefAndTwoInt32s[]
MethodTable: 000007fe039d3b78
EEClass: 000007fe039d3ad0
Size: 40(0x28) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Element Methodtable: 000007fe039d3a58
[0] 0000000003e72d18
Name: RefAndTwoInt32s
MethodTable: 000007fe039d3a58
EEClass: 000007fe03ae2338
Size: 32(0x20) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8c358 4000006 0 System.String 0 instance 0000000003e72d30 text
000007fe61e8f108 4000007 8 System.Int32 1 instance 1 x
000007fe61e8f108 4000008 c System.Int32 1 instance 0 y
The structure is actually 32 bytes, since 16 bytes are reserved for padding; in actuality every structure is at least 16 bytes in size from the get-go.
If you add the 16 bytes from the ints and the string ref to 0000000003e72d18, plus 8 bytes of EE/padding, you end up at 0000000003e72d30, and this is the starting point for the string reference. Since all references are 8-byte padded from their first actual data field, this makes up our 32 bytes for this structure.
Let's see if the string is actually padded that way:
0:004> !do 0000000003e72d30
Name: System.String
MethodTable: 000007fe61e8c358
EEClass: 000007fe617f3720
Size: 28(0x1c) bytes
File: C:\WINDOWS\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String: a
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8f108 40000aa 8 System.Int32 1 instance 1 m_stringLength
000007fe61e8d640 40000ab c System.Char 1 instance 61 m_firstChar
000007fe61e8c358 40000ac 18 System.String 0 shared static Empty
>> Domain:Value 0000000001577e90:NotInit <<
Now lets analyse the above program the same way:
public static RefAndTwoInt32Wrappers[] test = new RefAndTwoInt32Wrappers[1];
static void Main()
{
test[0].text = "a";
test[0].x.x = 1;
test[0].y.x = 1;
Console.ReadKey();
}
0:004> !dumpheap -type Ref
Address MT Size
0000000003c22c78 000007fe61e8fb58 56
0000000003c22d08 000007fe039d3c00 48
Statistics:
MT Count TotalSize Class Name
000007fe039d3c00 1 48 RefAndTwoInt32Wrappers[]
000007fe61e8fb58 1 56 System.Reflection.RuntimeAssembly
Total 2 objects
Our struct is 48 bytes now.
0:004> !dumparray -details 0000000003c22d08
Name: RefAndTwoInt32Wrappers[]
MethodTable: 000007fe039d3c00
EEClass: 000007fe039d3b58
Size: 48(0x30) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Element Methodtable: 000007fe039d3ae0
[0] 0000000003c22d18
Name: RefAndTwoInt32Wrappers
MethodTable: 000007fe039d3ae0
EEClass: 000007fe03ae2338
Size: 40(0x28) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8c358 4000009 0 System.String 0 instance 0000000003c22d38 text
000007fe039d3a20 400000a 8 Int32Wrapper 1 instance 0000000003c22d20 x
000007fe039d3a20 400000b 10 Int32Wrapper 1 instance 0000000003c22d28 y
Here the situation is the same: if we add 8 bytes of string ref to 0000000003c22d18, we end up at the start of the first int wrapper, where the value actually points to the address we are at.
Now we can see that each value looks like an object reference; again, let's confirm that by peeking at 0000000003c22d20.
0:004> !do 0000000003c22d20
<Note: this object has an invalid CLASS field>
Invalid object
Actually that's correct: since it's a struct, the address alone doesn't tell us whether this is an object or a value type.
0:004> !dumpvc 000007fe039d3a20 0000000003c22d20
Name: Int32Wrapper
MethodTable: 000007fe039d3a20
EEClass: 000007fe03ae23c8
Size: 24(0x18) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8f108 4000001 0 System.Int32 1 instance 1 x
So in actuality this is more like a union type that gets 8-byte aligned this time around (all of the padding will be aligned with the parent struct). If it weren't, we would end up with 20 bytes, and that's not optimal, so the memory allocator will never allow it to happen. If you do the math again, it turns out that the struct is indeed 40 bytes in size.
So if you want to be more conservative with memory, you should not pack your data in a custom struct type but instead use simple arrays. Another way is to allocate memory off the heap (e.g. with VirtualAllocEx); this way you are given your own memory block and you manage it the way you want.
The final question here is why we might suddenly get a layout like that. Well, if you compare the JITed code and performance of incrementing an int[] against incrementing a struct[] with a counter field, the second one will generate an 8-byte-aligned address, being a union, but when JITed this translates to more optimized assembly code (a single LEA vs multiple MOVs). However, in the case described here the performance will actually be worse, so my take is that this is consistent with the underlying CLR implementation: since it's a custom type that can have multiple fields, it may be easier/better to store the starting address instead of a value (since a value would be impossible) and do struct padding there, thus resulting in a bigger byte size.
Summary: see Hans Passant's answer above. LayoutKind.Sequential doesn't work.
Some testing:
It is definitely only on 64bit and the object reference "poisons" the struct. 32 bit does what you are expecting:
Environment: CLR 4.0.30319.34209 on Microsoft Windows NT 6.2.9200.0 (32 bit)
ConsoleApplication1.Int32Wrapper: 4
ConsoleApplication1.TwoInt32s: 8
ConsoleApplication1.TwoInt32Wrappers: 8
ConsoleApplication1.ThreeInt32Wrappers: 12
ConsoleApplication1.Ref: 4
ConsoleApplication1.RefAndTwoInt32s: 12
ConsoleApplication1.RefAndTwoInt32Wrappers: 12
ConsoleApplication1.RefAndThreeInt32s: 16
ConsoleApplication1.RefAndThreeInt32Wrappers: 16
As soon as the object reference is added, all the structs expand to 8 bytes rather than their 4-byte size. Expanding the tests:
Environment: CLR 4.0.30319.34209 on Microsoft Windows NT 6.2.9200.0 (64 bit)
ConsoleApplication1.Int32Wrapper: 4
ConsoleApplication1.TwoInt32s: 8
ConsoleApplication1.TwoInt32Wrappers: 8
ConsoleApplication1.ThreeInt32Wrappers: 12
ConsoleApplication1.Ref: 8
ConsoleApplication1.RefAndTwoInt32s: 16
ConsoleApplication1.RefAndTwoInt32sSequential: 16
ConsoleApplication1.RefAndTwoInt32Wrappers: 24
ConsoleApplication1.RefAndThreeInt32s: 24
ConsoleApplication1.RefAndThreeInt32Wrappers: 32
ConsoleApplication1.RefAndFourInt32s: 24
ConsoleApplication1.RefAndFourInt32Wrappers: 40
As you can see, as soon as the reference is added, every Int32Wrapper becomes 8 bytes, so it isn't simple alignment. I shrank the array allocation in case it was a LOH allocation, which is aligned differently.
Just to add some data to the mix - I created one more type from the ones you had:
struct RefAndTwoInt32Wrappers2
{
string text;
TwoInt32Wrappers z;
}
The program writes out:
RefAndTwoInt32Wrappers2: 16
So it looks like the TwoInt32Wrappers struct aligns properly in the new RefAndTwoInt32Wrappers2 struct.

Reading data over serial port from voltmeter

I'm sort of new at this and I'm writing a small application to read data from a voltmeter. It's a RadioShack Digital Multimeter 46-range. The purpose of my program is to perform something automatically when it detects a certain voltage. I'm using C# and I'm already familiar with the SerialPort class.
My program runs and reads the data in from the voltmeter. However, the data is all unformatted/gibberish. The device does come with its own software that displays the voltage on the PC, however this doesn't help me since I need to grab the voltage from my own program. I just can't figure out how to translate this data into something useful.
For reference, I'm using the SerialPort.Read() method:
byte[] voltage = new byte[100];
_serialPort.Read(voltage, 0, 99);
It grabs the data and displays it as so:
16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 3
0 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 6
198 30 6 126 254 30 0 30 24 0 30 0 6 198 30 6 126 254 30 0 30 16 0 30 0 254 30 6
126 252 30 0 6 0 30 0 254 30 6 126 254 30 0
The space separates each element of the array. If I use a char[] array instead of byte[], I get complete gibberish:
▲ ? ? ▲ ♠ ~ ? ▲ ♠ ▲ ? ? ▲ ♠ ~ ? ▲ ♠ ▲ ? ? ▲ ♠ ~ ? ▲ ♠
Using the .ReadExisting() method gives me:
▲ ?~?♠~?▲ ▲? ▲ ?~♠~?▲ ?↑ ▲ ??~♠~?▲ F? ▲ ??~♠~?▲ D? ▲ ??~♠~?▲ f?
.ReadLine() times out, so doesn't work. ReadByte() and ReadChar() just give me numbers similar to the Read() into array function.
I'm in way over my head as I've never done something like this, not really sure where else to turn.
It sounds like you're close, but you need to figure out the correct Encoding to use.
To get a string from an array of bytes, you need to know the Code Page being used. If it's not covered in the manual, and you can't find it via a google/bing/other search, then you will need to use trial and error.
To see how to use GetChars() to get a string from a byte array, see Decoder.GetChars Method
In the code sample, look at this line:
Decoder uniDecoder = Encoding.Unicode.GetDecoder();
That line specifically states that the Unicode code page is used to decode the bytes.
From there, you can use the Encoding class to specify different code pages. This is documented here: Encoding Class
If the encoding being used isn't one of the standard ones, you can pass a code page ID to Encoding.GetEncoding(Int32). A list of valid code page IDs can be found at Code Pages Supported by Windows
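Here is a minimal sketch of the Decoder approach. The byte values and the choice of ASCII are assumptions for illustration; your meter's actual encoding has to come from the manual or from trial and error:

```csharp
using System;
using System.Text;

class DecodeDemo
{
    static void Main()
    {
        // Pretend the meter sent these bytes; they happen to be "1.5V" in ASCII.
        byte[] received = { 0x31, 0x2E, 0x35, 0x56 };

        // Decode the raw bytes into characters with an explicit encoding.
        Decoder ascii = Encoding.ASCII.GetDecoder();
        char[] chars = new char[ascii.GetCharCount(received, 0, received.Length)];
        ascii.GetChars(received, 0, received.Length, chars, 0);
        Console.WriteLine(new string(chars)); // prints "1.5V"

        // To try an arbitrary code page instead:
        // string s = Encoding.GetEncoding(437).GetString(received);
    }
}
```

Swap Encoding.ASCII for other encodings (or Encoding.GetEncoding with a code page ID) until the decoded output looks like readable voltage data.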
There are two distinct strategies for solving your communications problem.
Locate and refer to appropriate documentation, and design/modify a program to implement the specification.
The following may be appropriate, but are not guaranteed to describe the particular model DVM that you have. Nonetheless, they MAY serve as a starting point.
Note that the authors of these documents comment that the respective models may be 'visually identical', but also that "Open-source packages that reportedly worked on LINUX with earlier RS-232 models do not work with the 2200039".
http://forums.parallax.com/attachment.php?attachmentid=88160&d=1325568007
http://sigrok.org/wiki/RadioShack_22-812
http://code.google.com/p/rs22812/
Try to reverse engineer the protocol. If you can read the data in a loop and collect the results, a good approach to reverse engineering a protocol is to apply various representative signals to the DVM. You can use short-circuit resistance measurements, various stable voltage measurements, etc.
The technique I'd suggest as most valuable is to use an automated variable signal generator. In this way, by analyzing the patterns in the data, you should more readily be able to identify which points represent the raw data and which represent stable descriptive data, like the unit of measurement, mode of operation, etc.
Some digital multimeters use 7-bit data transfer. You should set the serial communication port to 7 data bits instead of the standard 8 data bits.
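That is a one-line change when constructing the port. A sketch, where the port name "COM3" and the 4800 baud rate are placeholders to be checked against the meter's documentation:

```csharp
using System;
using System.IO.Ports;

class SevenBitPortDemo
{
    static void Main()
    {
        // 7 data bits instead of the default 8; port name and baud rate
        // are placeholders - consult the meter's manual for real values.
        var port = new SerialPort("COM3", 4800, Parity.None, 7, StopBits.One);
        Console.WriteLine(port.DataBits); // prints 7

        // With the device attached you would then do:
        // port.Open();
        // int b = port.ReadByte();
    }
}
```

Constructing a SerialPort does not touch the hardware; only Open() does, so the settings can be prepared and inspected without a device present.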
I modified and merged a couple of older open source C programs together on Linux in order to read the data values from the RadioShack meter whose part number is 2200039. This is over USB. I really only added a C or an F on one range. My program is here, and it has the links to where I got the other two programs.
I know this example is not in C#, but it does provide the format info you need. Think of it as the API documentation written in C; you just have to translate it into C# yourself.
The protocol runs at 4800 baud, and 8N1 appears to work.

Use BeginOutputReadLine when the process output newline or return

I used Process() to execute an external file named "test.exe".
The "test.exe" only prints a string.
"abc \n \r xyz\n"
My goal is to get the string and turn each byte to corresponding ASCII code.
That is, I expect the outputs in my c# console are as below,
97 98 99 32 10 32 13 32 120 121 122 10
But when I used BeginOutputReadLine to get the output of test.exe, the \n and \r were stripped.
As a result, I only got
97 98 99 32 32 32 120 121 122
Finally, I don't want to use synchronous methods like Read, ReadLine, and ReadToEnd.
Is there any way to get what I want?
Thanks!
Actually, I created a BackgroundWorker to deal with the external process test.exe.
I have a proc_DataReceived and a backgroundWorker_Build_ProgressChanged...
The related code is below:
http://codepad.org/Gmq1XqXb
all code as below
http://codepad.org/k7VpWynu
(I'm new to stackoverflow. I pasted my code in codepad.org before finding out how to format code here.)
If you use BeginOutputReadLine, the string won't contain the "end of line" characters (or some of them).
See Capture output from unrelated process for another way to capture output of another process. This would work better in your case since you can read the stream character by character.
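The character-by-character idea can be sketched like this, with "test.exe" standing in for the asker's program. Reading the raw stream keeps \n and \r, which the line-oriented BeginOutputReadLine discards; to stay asynchronous, run the read loop on a background thread or use ReadAsync:

```csharp
using System;
using System.Diagnostics;
using System.IO;

class RawProcessOutput
{
    // Prints the character code of every character in the reader,
    // including \n (10) and \r (13), which line-based reads would strip.
    static void DumpCodes(TextReader reader)
    {
        int c;
        while ((c = reader.Read()) != -1)
            Console.Write(c + " ");
        Console.WriteLine();
    }

    static void Main()
    {
        // "test.exe" is the asker's program - a placeholder here.
        var psi = new ProcessStartInfo("test.exe")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var proc = Process.Start(psi))
        {
            DumpCodes(proc.StandardOutput); // control characters preserved
            proc.WaitForExit();
        }
    }
}
```

With the sample output "abc \n \r xyz\n", DumpCodes would emit 97 98 99 32 10 32 13 32 120 121 122 10, which is exactly the sequence the question asks for.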
