I have a device connected via Modbus TCP/IP.
I read the data in C# and cross-check it with KepServerEX. If I compare the raw int values I get the same result, but when I try to convert them to a string the results differ.
I read 8 registers with the values
12544,50,0,0,0,0,0,0
KepServerEX shows me this string -> 1
C# conversion (with EasyModbus) -> ATALA or some other value, but not 1
I tried to "play" with the ASCII table to find a way to get just the string "1" like KepServerEX does... no success.
Modbus does not define how a character string is transmitted; it only defines the transmission of 16-bit words and bits.
So EasyModbus may show one string and another Modbus client may show a different one. It all depends on the programmer who wrote the conversion code: one thought he had to do the conversion one way and another did it differently, since there is no standard.
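To illustrate with the registers from the question, here is a small Python sketch (not the EasyModbus or KepServerEX code, just an illustration of the byte-order ambiguity). 12544 is 0x3100, so with the high byte first the buffer starts with the character '1' followed by NUL, and a client that treats the buffer as a NUL-terminated string shows exactly "1":

```python
import struct

registers = [12544, 50, 0, 0, 0, 0, 0, 0]

# High byte first: 12544 == 0x3100, so the first register yields
# the bytes 0x31 0x00, i.e. the character '1' followed by NUL.
raw = b"".join(struct.pack(">H", r) for r in registers)

# A client that treats the buffer as a NUL-terminated string
# shows just "1", which matches what KepServerEX displays here.
text = raw.split(b"\x00", 1)[0].decode("ascii")
print(text)  # -> 1

# Swap the byte order within each register and the very first byte
# is already NUL, so the same registers decode to a different string.
raw_swapped = b"".join(struct.pack("<H", r) for r in registers)
```

Whichever client you compare against, the fix is to match its register-to-byte ordering before decoding, not to hunt through the ASCII table.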
I have to translate a project from C# to R. In this C# project I have to handle binary files.
I have three problems:
1. I am having some issues converting this code:
//c#
// this works fine
using (BinaryReader rb = new BinaryReader(archive.Entries[0].Open())){
a = rb.ReadInt32();
b = rb.ReadInt32();
c = rb.ReadDouble();
}
#R
# this works, but it reads different values
# I tried to change the size in readBin, but it's the same story. The working directory is the right one
to.read <- "myBinaryFile.tmp"
line1 <- c(readBin(to.read, "integer", 2),
           readBin(to.read, "double", 1))
2. How can I read a float (in C# I have rb.ReadSingle()) in R?
3. Is there a function in R that remembers the position you have reached when reading a binary file, so that the next time you read it you can skip what you have already read (as in C# with BinaryReader)?
Answering your questions directly:
I am having some issues to convert this code...
What is the problem here? Your code block contains the comment "but it's the same story", but what is the story? You haven't explained anything. If your problem is with the double, try setting readBin(..., size = 8). In your case the code would read line1 <- c(readBin(to.read, "integer", 2), readBin(to.read, "double", 1, 8)).
How can I read a float (in C# I have rb.ReadSingle()) in R?
Floats are 4 bytes in size in this case (I would presume), so set what = "double" and size = 4 in readBin().
Is there a function in R that remembers the position you have reached when reading a binary file, so that the next time you read it you can skip what you have already read (as in C# with BinaryReader)?
As far as I know there is nothing built in (more knowledgeable people are welcome to add their input). You could, however, easily write a wrapper around readBin() that does this for you. For instance, you could specify how many bytes to discard (i.e., the n bytes you have already read into R) and consume them with a dummy readBin() call like readBin(con = yourinput, what = "raw", n = n), where the integer n is the number of bytes you wish to throw away. The wrapper could then read the succeeding bytes into a variable of your choice.
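The wrapper idea can be sketched in Python (the same pattern translates directly to an R function that keeps a byte offset; the class and method names here are invented for illustration):

```python
import struct

class TrackingReader:
    """Sketch of the wrapper idea: remember how many bytes have been
    consumed so a later call can skip straight past them."""

    def __init__(self, path):
        self.path = path
        self.offset = 0  # bytes already read

    def read(self, fmt):
        with open(self.path, "rb") as f:
            f.seek(self.offset)                # skip what was already read
            data = f.read(struct.calcsize(fmt))
            self.offset += len(data)           # remember the new position
            return struct.unpack(fmt, data)

# Usage mirroring the C# BinaryReader calls:
#   r = TrackingReader("myBinaryFile.tmp")
#   a, b = r.read("<ii")   # two Int32s
#   (c,) = r.read("<d")    # one Double, resuming where the last read stopped
#   (s,) = r.read("<f")    # a 4-byte float, the ReadSingle() case
```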
I'm using the code
cmd.Parameters.Add("@Array", SqlDbType.VarBinary).Value = Array;
The SqlDbType.VarBinary description states that it can only handle arrays of up to 8,000 bytes. I have a byte array that represents an image and can go up to 10 KB.
How do I store that in a varbinary(max) column using C#?
I have no trouble creating the array. I'm stuck at this 8k limit when trying to execute the query.
Edit: Let me clarify. On my machine, even pictures up to 15 KB get stored in the varbinary(MAX) column when I run the ASP.NET application locally, but once I deployed it the pictures would not get stored. I then resorted to drastically resizing the images to ensure their size was less than 8 KB, and now the images get stored without any problem.
Perhaps you could look at the SQL Server FILESTREAM feature, since it's meant for storing files. It basically stores a pointer to your file, and the file itself is stored directly in the filesystem (in the database's data directory).
I like FILESTREAM because it means you continue to use the database interface (SqlClient, for example) rather than breaking out to an ad hoc method to read/write files on the hard drive. This also means security is managed for you, in that your app doesn't need special permissions to access the filesystem.
A quick Google search gave this article on using FILESTREAM in C#, but I'm sure there are many others.
UPDATE following OP EDIT
So once deployed to another server the upload fails? Perhaps the problem is not the SQL insert but an HTTP request content-length limit; for example, in your web.config the httpRuntime element has the maxRequestLength attribute. If this is set to a low value, that could be the problem. You could set it to something like this (maxRequestLength is in KB, so this sets the max to 6 MB, well over the 10 KB problem):
<system.web>
  <httpRuntime maxRequestLength="6144" />
</system.web>
The only thing to note here is that the limit is 4 MB by default :|
No, this is what the description actually says:
Array of type Byte. A variable-length stream of binary data ranging
between 1 and 8,000 bytes. Implicit conversion fails if the byte array
is greater than 8,000 bytes. Explicitly set the object when working
with byte arrays larger than 8,000 bytes.
I would assume that what that actually means is that you cannot use AddWithValue and have the parameter infer the type as VarBinary if the byte array is over 8,000 elements. You have to use Add, specify the type of the parameter yourself and then set the Value property, i.e. use this:
command.Parameters.Add("@MyColumn", SqlDbType.VarBinary).Value = myByteArray;
rather than this:
command.Parameters.AddWithValue("@MyColumn", myByteArray);
Setting the length of the data explicitly seems to be the fix:
var dataParam = cmd.Parameters.AddWithValue("@Data", (object)data.Data ?? DBNull.Value);
if (data.Data != null)
{
dataParam.SqlDbType = SqlDbType.VarBinary;
dataParam.Size = data.Data.Length;
}
I am working on an application that communicates with an embedded device to log or view data from networked machinery. The machinery communicates with the data-logging device via CAN (ISO 15765-4). The logging device uses an unmanaged DLL API, for which I have already created a working wrapper, and I can send single hard-coded requests to the logger and properly parse the data (with the help of this website). I do not need help with wrapping the DLL, parsing CAN, or communicating with the logging device; that is all working. What I need help with is an overall strategy for setting up the individual data items to be logged: through a database, XML serialization, or data embedded within the code. My thought is that a database would be ideal for this, but I am struggling with how to implement it in a good OOP design.
The next step is to allow the user to dynamically select the data they wish to view/log, and have the application set up the requests/responses dynamically. The application my company currently uses for this is written entirely in C++, and it is extremely buggy and hard to maintain since the original coding team has moved on. I am NOT trying to directly port the C++ code or structure to C#, just the functionality. I am providing some snippets of the C++ code only as an example of the type of data I am trying to work with. I am using C# because of the relative ease of creating a GUI compared to C++.
The data being logged can be 1 of 3 different types:
Boolean (true / false)
Linear (y=mx+b)
Switched enumerations (1 = Cold, 2 = Warm, 3 = Hot, 4 = Overheated)
Here are some general requirements for the application
There are approximately 400 unique data items that can be logged.
Typically, between 20 and 50 items will be monitored at any one time, but the capability needs to exist to log ALL supported data.
Not all of the machinery supports all data items - so there is a pruning request to determine what is supported by each machine.
CAN requests are FUNCTIONAL addressing rather than PHYSICAL addressing - so requests are made with a broadcast type message, and all supporting equipment will respond. (This just means I may get multiple responses from the same request)
Data should be viewable live or logged to file for later analysis. Any saved logs would be on the order of 10,000 frames (probably never exceeding 50,000 frames).
Typical request/response turnaround is ~150 ms, with requests occurring sequentially (each request must be repeated in a loop).
So here is a typical use case:
User connects application to embedded device
Query all connected machinery for supported data
Present user with a list of supported data
User selects data from this set to log/display
"Stream" data to GUI in text or graphical format with option to save
View saved data (without embedded device connected)
I currently have all of the potential supported data items in an Excel table, with the thought that I could port it into a database or XML file. Although this data is not "Top Secret", we would like to hide/encrypt it from customers/end users if possible (although I can work on that later; I am just bringing it up now in case it changes the strategy).
I have been searching the web over the last couple of weeks, and I am hitting a mental block on how to deal with this. Do I read the data list from a database and create a class that houses all of the parameters? Do I create a separate class for each data item? A structure? I just can't quite figure out a strategy to get me started. I am not looking for someone to do this for me, just a direction to go (small code examples would be helpful as a starting point, however). I know that I will need to use threading to keep the GUI updated, so factor that into any strategy as well. Once the initial setup is made, it is basically an infinite loop until the user cancels the action or a predefined trigger halts the data flow. This will tie up the GUI, so a background thread will be needed to keep the screen refreshed and responsive.
Below is a snippet from the C++ code. All of the data was embedded within the code, and any change (adding or editing data items) required edits in about four different places, so it is extremely hard to maintain. This is just an example, one of each type of data, along with how they structured things. Basically they had a struct-like block for each data item and an enumeration that contained ALL data items; any switch cases had separate code that defined the states. I am thinking this would be much better suited to a database than 20,000 lines of code strictly dealing with the data item values/conversions/text, etc. (that is how much C++ code is devoted to this).
Thanks in advance for any help or direction you can give
-Lee
// Example of a Switch Case enumerated data item
ITEMs::PARAM ITEMs::m_CASE_A =
{
0, // starting byte
0, // ending byte
0, // starting bit
4, // ending bit
1, // number of bytes
5, // number of bits
{
_T("Long Description Case A"), // Full description
_T("Case A"), // Abbreviated description
_T("CASEA"), // acronym 1
_T("CASEA"), // acronym 2
DATA_TYPE_INTEGER, // Data type
0x03, // DATA Identifier
UOM_ENUMERATION, // Raw units to be scaled
UOM_ENUMERATION, // Default units of measure for display
0.0, // Minimum value for numeric types
FALSE, // TRUE if minimum value defined
0.0, // Maximum value for numeric types
FALSE, // TRUE if maximum value defined
-0.5, // Minimum display value
6.5, // Maximum display value
0.0, // Tolerance
FALSE, // TRUE if tolerance defined
0, // Number of digits after decimal pt
NUMERIC_FORMAT_DECIMAL, // Default format for numeric types
0.0, // Scaling per bit
0.0, // Bias to apply after scaling
FALSE, // TRUE if signed for numeric types
_T(""), // String to display when boolean parameter is ON
_T(""), // String to display when boolean parameter is OFF
_T(""), // String to display when numeric value is not valid
TRUE // Visible to end-user
}
};
// Example of a Linear Data Items
ITEMs::PARAM ITEMs::m_LINEAR_A =
{
0, // starting byte
0, // ending byte
0, // starting bit
7, // ending bit
1, // number of bytes
8, // number of bits
{
_T("Long Description Linear A"), // Full description
_T("LINEAR A"), // Abbreviated description
_T("LIN_A"), // acronym
_T("LIN_A"), // acronym
DATA_TYPE_FLOAT, // Data type
0x05, // DATA Identifier
UOM_TEMP_DEGC, // Raw units to be scaled
UOM_TEMP_DEGF, // Default units of measure for display
-40.0, // Minimum value for numeric types
TRUE, // TRUE if minimum value defined
215.0, // Maximum value for numeric types
TRUE, // TRUE if maximum value defined
-40.0, // Minimum display value
419.0, // Maximum display value
9.0, // Tolerance
TRUE, // TRUE if tolerance defined
0, // Number of digits after decimal pt
NUMERIC_FORMAT_DECIMAL, // Default format for numeric types
1.8, // Scaling per bit
-40.0, // Bias to apply after scaling
FALSE, // TRUE if signed for numeric types
_T(""), // String to display when boolean parameter is ON
_T(""), // String to display when boolean parameter is OFF
_T(""), // String to display when numeric value is not valid
TRUE // Visible to end-user
}
};
// Example of a Boolean Data Item
ITEMs::PARAM ITEMs::m_BOOL_A =
{
1, // starting byte
1, // ending byte
6, // starting bit
6, // ending bit
1, // number of bytes
1, // number of bits
{
_T("Long Description BOOL A"), // Full description
_T("Boolean A"), // Abbreviated description
_T("BOOL_A"), // acronym
_T("BOOL_A"), // acronym
DATA_TYPE_BOOLEAN, // Data type
0x01, // DATA Identifier
UOM_NONE, // Raw units to be scaled
UOM_NONE, // Default units of measure for display
0.0, // Minimum value for numeric types
FALSE, // TRUE if minimum value defined
0.0, // Maximum value for numeric types
FALSE, // TRUE if maximum value defined
0.0, // Minimum display value
0.0, // Maximum display value
0.0, // Tolerance
FALSE, // TRUE if tolerance defined
0, // Number of digits after decimal pt
NUMERIC_FORMAT_NONE, // Default format for numeric types
1.0, // Scaling per bit
0.0, // Bias to apply after scaling
FALSE, // TRUE if signed for numeric types
_T("NO"), // String to display when boolean parameter is 1
_T("YES"), // String to display when boolean parameter is 0
_T(""), // String to display when numeric value is not valid
TRUE // Visible to end-user
}
};
// This example sets the user display text for enumerated CASE data items (m_CASE_A in example above)
void PARAM1::FormatValue( ULONG value,
CStdString* pstrResult,
PARAMVALUETYPE* pdResult )
{
UINT uID;
UCHAR uValue = value & 0xFF;
*pdResult = (PARAMVALUETYPE)uValue;
if ( pstrResult != NULL )
{
switch ( uValue )
{
case 0x01: uID = CASE_1; break;
case 0x02: uID = CASE_2; break;
case 0x03: uID = CASE_3; break;
default: uID = CASE_INVALID; break;
}
(*pstrResult).LoadString( uID );
}
}
// CAUTION: Items must be in same order as in the enumeration DATA_ENUM.
ITEMs::PARAM* ITEMs::m_PARAMs[] =
{
&m_CASE_A,
&m_LINEAR_A,
&m_BOOL_A,
...
...
...
};
// Enumeration of all Data Parameters
typedef enum
{
CASE_A,
LINEAR_A,
BOOL_A,
...
...
...
} DATA_ENUM;
Well, although I do not understand all the details (for example, how this log data is actually gathered; I believe it is based on a cyclic request/response pattern), I have some design thoughts about it:
Whether the metadata about data items should be stored in a database or XML depends on how often it will change and what the distribution model of this software is. If it is an internal application used by a few people in your company, storing local configuration (in an XML file, for example) will probably be enough. If the number of users is large, synchronization and management through a database will be useful. If it is an application sold to users, it would make sense to have some auto-update system, which brings us back to the file solution again. Personally I would choose a configuration file, because it covers more scenarios, and it would probably be an XML file due to its clarity.
I would choose to represent each data item as an instance of some subclass (EnumDataItem, LinearDataItem, ...) of an abstract DataItem class (or just an IDataItem interface). They should implement methods like Serialize/Deserialize (for request/response handling) and Format (or simply ToString) for the GUI/logging. Definitions of those data items should be loaded from the XML file. Then you can define a Parameters class that is a list of IDataItems and can easily be serialized/deserialized.
I have a feeling (although it might be wrong) that there can be a lot of similar data items, so I would think about the possibility of templating or deriving, so that I could specify in the XML some templates that can be reused when defining concrete data items.
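As a rough sketch of that hierarchy (in Python for brevity; the class and field names are invented, and in practice the instances would be built from the XML/database metadata rather than hard-coded):

```python
from abc import ABC, abstractmethod

class DataItem(ABC):
    """One loggable parameter; subclasses know how to format raw values."""
    def __init__(self, name, identifier):
        self.name = name
        self.identifier = identifier  # CAN data identifier, e.g. 0x05

    @abstractmethod
    def format(self, raw):
        """Turn a raw integer from a CAN frame into display text."""

class BoolDataItem(DataItem):
    def __init__(self, name, identifier, on_text, off_text):
        super().__init__(name, identifier)
        self.on_text, self.off_text = on_text, off_text

    def format(self, raw):
        return self.on_text if raw else self.off_text

class LinearDataItem(DataItem):
    """The y = mx + b case: scale per bit plus a bias."""
    def __init__(self, name, identifier, scale, bias, unit):
        super().__init__(name, identifier)
        self.scale, self.bias, self.unit = scale, bias, unit

    def format(self, raw):
        return f"{raw * self.scale + self.bias:.1f} {self.unit}"

class EnumDataItem(DataItem):
    """The switched-enumeration case: raw value -> display state."""
    def __init__(self, name, identifier, states):
        super().__init__(name, identifier)
        self.states = states  # e.g. {1: "Cold", 2: "Warm", 3: "Hot"}

    def format(self, raw):
        return self.states.get(raw, "INVALID")
```

A heterogeneous list of these replaces both the C++ `m_PARAMs[]` array and the per-item `FormatValue` switch: the monitoring loop just calls `item.format(raw)` without caring which kind it has.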
I've recently found out about protocol buffers and was wondering if they could be applied to my specific problem.
Basically I have some CSV data that I need to convert to a more compact format for storage as some of the files are several gig.
Each field in the CSV has a header, and there are only two types, strings and decimals (because sometimes there are a lot of significant digits and I need to handle all numbers the same way). But each file will have different column names for each field.
As well as capturing the original CSV data I need to be able to add extra information to the file before saving. And I was hoping to make this future proof by handling different file versions.
So, is it possible to use protocol buffers to capture a random number of randomly named columns of data, like a CSV file?
Well, it's certainly representable. Something like:
message CsvFile {
repeated CsvHeader header = 1;
repeated CsvRow row = 2;
}
message CsvHeader {
required string name = 1;
required ColumnType type = 2;
}
enum ColumnType {
DECIMAL = 1;
STRING = 2;
}
message CsvRow {
repeated CsvValue value = 1;
}
// Note that the column is implicit based on position within row
message CsvValue {
optional string string_value = 1;
optional Decimal decimal_value = 2;
}
message Decimal {
// However you want to represent it (there are various options here)
}
I'm not sure how much benefit it will provide, mind you... You can certainly add more information (add to the CsvFile message) and future proofing is in the "normal PB way" - only add optional fields, etc.
Well, protobuf-net (my version) is based on regular .NET types, so no (it won't cope with schemas that differ every time). But Jon's version might allow dynamic types. Personally, I'd just use CSV and run it through GZipStream; I expect that will be fine for the purpose.
Edit: actually, I forgot: protobuf-net does support extensible objects, but you need to be a bit careful... it would depend on the full context, I expect.
Plus Jon's approach of nested data would probably work too.
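For reference, the CSV-plus-compression alternative is only a few lines. This Python sketch stands in for the C# GZipStream approach (the sample rows are made up):

```python
import gzip

# Keep the data as plain CSV and let the compressor squeeze out the
# redundancy; the per-file column names survive unchanged.
rows = ["name,price", "widget,1.25", "gadget,3.50"]
csv_text = "\n".join(rows).encode("utf-8")

compressed = gzip.compress(csv_text)

# The round trip is lossless, so nothing about the original file is lost.
restored = gzip.decompress(compressed)
print(restored == csv_text)  # -> True
```

This sidesteps the schema problem entirely, at the cost of the "extra information" having to live in extra columns or a sidecar file rather than typed message fields.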