I have a piece of code that I am trying to implement in C#. The code writes a file using the fwrite command in Matlab. I have tried looking at the documentation and working through some examples to understand how fwrite works.
I tried the following but no success.
Here is the code:
line_vectors = [5;10;15;20;25];
sampPeriod = 100000;
% outputfile is a file ID from an earlier fopen(...,'w'); this call just
% queries its name, permission and machine format
[filename,permission,machineformat] = fopen(outputfile);
fwrite(outputfile,sampPeriod,'int32');        % write the header as a 4-byte integer
fwrite(outputfile,line_vectors(:),'float32'); % write the vector as 4-byte floats
Output using fread():
160 134   1   0   % 100000 as little-endian int32
  0   0 160  64   % 5  as float32
  0   0  32  65   % 10 as float32
  0   0 112  65   % 15 as float32
  0   0 160  65   % 20 as float32
  0   0 200  65   % 25 as float32
I tried to implement similar code in C#:
using (BinaryWriter writer = new BinaryWriter(file))
{
    writer.Write(100000); // int literal, so the int overload writes 4 bytes
    writer.Write(5);
    writer.Write(10);
    writer.Write(15);
    writer.Write(20);
}
Output using fread() in Matlab:
160 134   1   0   % 100000 as int32
  5   0   0   0   % 5  as int32 (not float32)
 10   0   0   0   % 10 as int32
 15   0   0   0   % 15 as int32
 20   0   0   0   % 20 as int32
Could anybody help me map the fwrite functionality to C#?
If you want all numbers after the 1st to be written as float32, you can indicate the value type for the other values, like this:
using (BinaryWriter writer = new BinaryWriter(file))
{
    writer.Write(100000); // int literal  -> 4-byte int32
    writer.Write(5.0f);   // float literal -> 4-byte float32
    writer.Write(10.0f);
    writer.Write(15.0f);
    writer.Write(20.0f);
}
BinaryWriter.Write is an overloaded method with many possible input types. Depending on the type of the argument, the matching overload writes the bytes that represent the value as that type. Since your initial code passed plain integer literals, the int overload was chosen and a 4-byte integer representation was used.
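For completeness, here is a minimal sketch that mirrors the original Matlab snippet end to end (the file name data.bin and the class name are illustrative assumptions):

using System.IO;

class FwriteEquivalent
{
    static void Main()
    {
        int sampPeriod = 100000;
        float[] lineVectors = { 5f, 10f, 15f, 20f, 25f };

        // BinaryWriter emits little-endian bytes, matching Matlab's default
        // machine format on x86 hardware.
        using (var writer = new BinaryWriter(File.Open("data.bin", FileMode.Create)))
        {
            writer.Write(sampPeriod);        // int overload   -> like fwrite(..., 'int32')
            foreach (float v in lineVectors)
                writer.Write(v);             // float overload -> like fwrite(..., 'float32')
        }
    }
}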
Imagine a schema:
namespace MyEvents;

table EventAddress
{
    id:uint;
    timestamp:ulong;
    adress:string;
}

table EventSignalStrength
{
    id:uint;
    timestamp:ulong;
    strength:float;
}

table EventStatus
{
    status:string;
}

union Events { EventAddress, EventSignalStrength, EventStatus }

table EventHolder
{
    theEvent:Events;
}

root_type EventHolder;
For the status message "EXIT", in C++ I encode and send over the wire like this:
std::string message("EXIT");
flatbuffers::FlatBufferBuilder builder;
auto messageString = builder.CreateString(message); // Message to send.
auto statusEvent = MyEvents::CreateEventStatus(builder, messageString);
auto eventHolder = MyEvents::CreateEventHolder(builder, MyEvents::Events_EventStatus, statusEvent.Union());
builder.Finish(eventHolder);
// Code to decode to check my work omitted, but the data decodes properly in my real-world application.
ret = sendto(m_udpSocket, reinterpret_cast<const char*>(builder.GetBufferPointer()), static_cast<int>(builder.GetSize()), 0, reinterpret_cast<SOCKADDR*>(&m_destination), sizeof(m_destination));
For the same message, "EXIT", in C# I encode and send over the wire like this:
string message = "EXIT";
FlatBufferBuilder builder = new FlatBufferBuilder(1);
StringOffset messageOffset = builder.CreateString(message);
EventStatus.StartEventStatus(builder);
EventStatus.AddStatus(builder, messageOffset);
Offset<EventStatus> eventStatusOffset = EventStatus.EndEventStatus(builder);
EventHolder.StartEventHolder(builder);
EventHolder.AddTheEventType(builder, Events.EventStatus);
EventHolder.AddTheEvent(builder, eventStatusOffset.Value);
Offset<EventHolder> eventHolderOffset = EventHolder.EndEventHolder(builder);
EventHolder.FinishEventHolderBuffer(builder, eventHolderOffset);

// Test the encoding by decoding:
EventHolder flatBuffer = EventHolder.GetRootAsEventHolder(builder.DataBuffer);
Events flatBufferType = flatBuffer.TheEventType; // Type looks good.
EventStatus decodedEvent = new EventStatus();
flatBuffer.GetDataObject<EventStatus>(decodedEvent); // decodedEvent.Status looks good.

// This code seems to send the correct data:
Byte[] sendSized = builder.SizedByteArray();
udpClient.Send(sendSized, sendSized.Length);

// This code does not seem to send the correct data:
//ByteBuffer sendByteBuffer = builder.DataBuffer;
//udpClient.Send(sendByteBuffer.Data, sendByteBuffer.Data.Length);
In my client application, written in C#, I decode as:
Byte[] receiveBytes = udpClient.Receive(ref m_remoteEndpoint);
ByteBuffer flatBufferBytes= new ByteBuffer(receiveBytes);
EventHolder flatBuffer = EventHolder.GetRootAsEventHolder(flatBufferBytes);
Events flatBufferType= flatBuffer.DataObjectType;
EventAddress eventAddress = null;
EventSignalStrength eventSignalStrength = null;
EventStatus eventStatus = null;
switch (flatBufferType)
{
case Events.EventAddress:
{
eventAddress = new EventAddress();
flatBuffer.GetDataObject<EventAddress>(eventAddress);
ProcessEventAddress(eventAddress);
break;
}
case Events.EventSignalStrength:
{
eventSignalStrength = new EventSignalStrength();
flatBuffer.GetDataObject<EventSignalStrength>(eventSignalStrength);
ProcessEventSignalStrength(eventSignalStrength);
break;
}
case Events.EventStatus:
{
eventStatus= new EventStatus();
flatBuffer.GetDataObject<EventStatus>(eventStatus);
Console.WriteLine("\nStatus Message: {0}", eventStatus.status);
break;
}
}
When I receive EventStatus messages from the C++ application, they decode properly.
When I receive EventStatus messages from the C# sending application, they decode properly.
When I dump the buffers sent from the applications, they are (in decimal):
C++ - 12 0 0 0 8 0 14 0 7 0 8 0 8 0 0 0 0 0 0 4 12 0 0 0 0 0 6 0 8 0 4 0 6 0 0 0 4 0 0 0 4 0 0 0 69 88 73 84 0 0 0 0
C# - 12 0 0 0 8 0 10 0 9 0 4 0 8 0 0 0 12 0 0 0 0 4 6 0 8 0 4 0 6 0 0 0 4 0 0 0 4 0 0 0 69 88 73 84 0 0 0 0
Originally, the messages from the C# sender were not decoding properly; now they are. I had made a change to the sender, so maybe I had not rebuilt.
I am a little mystified that the received C++ buffer and the C# buffer are different, yet they decode properly to the same result.
My real-world schema is much more complex - am I following the proper procedure for decoding on the C# side?
Am I following the correct procedure for reducing the flatbuffer to a Byte[] for sending over the wire in C#? It looks like I am, but it did not seem to work for a while....
Any input appreciated.
The ByteBuffer contains the buffer, but not necessarily starting at offset 0, so yes, turning it into a byte array (or sending the ByteBuffer contents from its starting offset) is the only correct way of sending it.
The encoding may differ between languages, as implementations may serialize things in different orders. Here, the C++ implementation decides to write the union type field before the offset, which happens to be inefficient for alignment, so it is a bit bigger. C# does the opposite.
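A minimal sketch of that point, assuming the ByteBuffer exposes a Position property alongside the Data array used in the question:

// The builder fills its buffer from the back toward the front, so the
// finished message starts at DataBuffer.Position, not at index 0.
ByteBuffer buf = builder.DataBuffer;
int start = buf.Position;               // offset of the first valid byte
int length = buf.Data.Length - start;   // number of valid bytes

byte[] payload = new byte[length];
Array.Copy(buf.Data, start, payload, 0, length);
udpClient.Send(payload, length);        // equivalent to builder.SizedByteArray()

Sending buf.Data whole, as in the commented-out code above, also sends the unused space at the front of the backing array, which is why it did not decode.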
The best way to describe my misunderstanding is with the code itself:
var emptyByteArray = new byte[2];
var specificByteArray = new byte[] { 150, 105 }; // 0x96 = 150, 0x69 = 105
var bitArray1 = new BitArray(specificByteArray);
bitArray1.CopyTo(emptyByteArray, 0); // [0]: 150, [1]: 105

var hexString = "9669";
var intValueForHex = Convert.ToInt32(hexString, 16); // 16 indicates to convert from hex
var bitArray2 = new BitArray(new[] { intValueForHex }) { Length = 16 }; // Length = 16 truncates the BitArray
bitArray2.CopyTo(emptyByteArray, 0); // [0]: 105, [1]: 150 (reversed, why??)
I've been reading that BitArray iterates from the LSB to the MSB. What's the best way for me to initialize the BitArray starting from a hex string, then?
I think you are thinking about it wrong. Why are you even using a BitArray? Endianness is a byte-related convention; BitArray is just an array of bits. Since it is least-significant bit first, the correct way to store a 32-bit number in a bit array is with bit 0 at index 0 and bit 31 at index 31. This isn't just my personal bias towards little-endianness (bit 0 should be in byte 0, not byte 3, for goodness' sake); it's because BitArray stores bit 0 of a byte at index 0 in the array. It also stores bit 0 of a 32-bit integer in bit 0 of the array, no matter the endianness of the platform you are on.
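A quick sketch demonstrating that ordering:

using System;
using System.Collections;

class BitOrderDemo
{
    static void Main()
    {
        // 0x01 = 0000 0001: only bit 0 of the byte is set.
        var bits = new BitArray(new byte[] { 0x01 });
        Console.WriteLine(bits[0]); // True:  bit 0 of byte 0 is at index 0
        Console.WriteLine(bits[7]); // False: bit 7 of byte 0 is at index 7
    }
}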
For example, instead of your 0x9669, let's look at 0x1234. No matter what platform you are on, that 16-bit number has the following bit representation: because we write a hex number with the most significant hex digit (1) on the left and the least significant hex digit (4) on the right, bit 0 is on the right (a human convention):
1 2 3 4
0001 0010 0011 0100
No matter how an architecture orders the bytes, bit 0 of a 16-bit number always means the least-significant bit (the right-most here) and bit 15 means the most-significant bit (the left-most here). Due to this, your bit array will always be like this, with bit 0 on the left because that's the way I read an array (with index 0 being bit 0 and index 15 being bit 15):
---4--- ---3--- ---2--- ---1---
0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0
What you are doing is trying to impose the byte order you want onto an array of bits where it doesn't belong. If you want to reverse the bytes, then you'll get this in the bit array which makes a lot less sense, and means you'll have to reverse the bytes again when you get the integer back out:
---2--- ---1--- ---4--- ---3---
0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 0
I don't think this makes any kind of sense for storing an integer. If you want to store the big-endian representation of a 32-bit number in the BitArray, then what you are really storing is a byte array that just happens to be the big-endian representation of a 32-bit number. Convert to a byte array first, make it big-endian if necessary, and only then put it in the BitArray:
int number = 0x1234;
byte[] bytes = BitConverter.GetBytes(number); // native byte order
if (BitConverter.IsLittleEndian)
{
    bytes = bytes.Reverse().ToArray(); // force big-endian byte order (needs System.Linq)
}
BitArray ba = new BitArray(bytes);
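To get the integer back out, the same steps run in reverse (a sketch under the same assumptions as above):

// Pack the bits back into bytes, undo the endian swap, and rebuild the int.
byte[] outBytes = new byte[4];
ba.CopyTo(outBytes, 0);
if (BitConverter.IsLittleEndian)
{
    outBytes = outBytes.Reverse().ToArray();
}
int roundTripped = BitConverter.ToInt32(outBytes, 0); // 0x1234 again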
I have a method which gets some data (from a transmission), and the method is called many times. I want to use the data of the last transmission in this method. But:
private List<byte> _dataOfLastMsg = new List<byte>();

internal List<msg> GetData(Imsgod msgod, List<Byte> data)
{
    if (data == null || data.Count == 0)
        return transmission;
    // Call x
    // USING _dataOfLastMsg
    ...
    ...
    if (data.Count != 0)
        _dataOfLastMsg = data;
}
Example:
Msg 1: 0 0 70 0 0
Msg 2: 0 0 0 0 0
Msg 3: 20 0 0 0 20
Call 1 of GetData: _dataOfLastMsg = 0 0 70 0 0
Call 2 of GetData: _dataOfLastMsg = 0 0 70 0 0
Call 3 of GetData: _dataOfLastMsg = 20 0 0 0 20
At call 3, _dataOfLastMsg should still be 0 0 70 0 0, because _dataOfLastMsg is used before this line: _dataOfLastMsg = data;
What's wrong? Sorry for my bad English.
About 90% of the code required to give you a sensible answer to this is missing, but here's something to think about.
Where are you calling this class?
Is an instance kept or is it recreated every time?
What is the actual output data and how are you calling it?
What is transmission and why are you returning it when data is empty?
What does //Call x mean?
Which data are you feeding into the program and how?
From what I can vaguely tell, on your third call your transmission is returned instead of _dataOfLastMsg, but the only way to tell is to slap down a breakpoint at the start of the method and hit debug to see the logical paths taken in your program and how the variables change.
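One more thing worth checking, purely an assumption given how much code is missing: _dataOfLastMsg = data; stores a reference, not a copy. If the caller reuses and mutates the same List<byte> instance between calls, _dataOfLastMsg silently changes with it, which would produce exactly the symptom you describe. A minimal sketch:

using System;
using System.Collections.Generic;

class ReferencePitfall
{
    static List<byte> _dataOfLastMsg = new List<byte>();

    static void Main()
    {
        var buffer = new List<byte> { 0, 0, 70, 0, 0 };
        _dataOfLastMsg = buffer;            // stores a reference to the caller's list

        buffer.Clear();                     // caller reuses the same instance...
        buffer.AddRange(new byte[] { 20, 0, 0, 0, 20 });

        // Prints "20 0 0 0 20", not "0 0 70 0 0":
        Console.WriteLine(string.Join(" ", _dataOfLastMsg));

        // Taking a copy decouples the stored data from the caller's buffer:
        _dataOfLastMsg = new List<byte>(buffer);
    }
}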
As the title suggests, my problem is that I have a query/stored procedure that selects data from a view, and it works just fine in Management Studio. The problem is that when I fetch this data from my application (using LINQ to Entities) I get wrong data (wrong as in a single row repeated 10 times when the query should return 5 different rows/records).
Here is my Management Studio query:
select * from dbo.v_RouteCardDetails_SizeInfo
where Trans_TransactionHeader = 0
AND Direction = 0
AND RoutGroupID = 1
AND Degree = '1st'
Result Returned:
Size SizeQuantity Trans_TransactionHeader RoutGroupID Direction Degree
XS 10 0 1 0 1st
S 2 0 1 0 1st
M 0 0 1 0 1st
L 5 0 1 0 1st
XXL 2 0 1 0 1st
and here is my LINQ query:
(from x in context.v_RouteCardDetails_SizeInfo
 where x.Trans_TransactionHeader == 0
    && x.Direction == 0
    && x.RoutGroupID == 1
    && x.Degree.ToLower() == "1st"
 select x).ToList<_Model.v_RouteCardDetails_SizeInfo>();
And the result returned is :
Size SizeQuantity Trans_TransactionHeader RoutGroupID Direction Degree
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
XS 10 0 1 0 1st
For 2 days I've been trying to fix this; I would appreciate your help.
Thanks
Undoubtedly the fields that Entity Framework has guessed as primary key of the view are not unique in the view. Try to add fields to the PK in the edmx designer (or code-first mapping) until you've really got a unique combination.
EF just materializes identical rows for each identical key value it finds in the result set from the SQL query.
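A hedged sketch of the code-first variant, assuming EF6 and that the combination of columns below is actually unique in the view (adjust it to whatever truly identifies a row):

// If EF inferred a key that is not actually unique in the view, every row
// sharing a key value materializes as the same cached entity: exactly the
// "one row repeated N times" symptom.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<v_RouteCardDetails_SizeInfo>().HasKey(v => new
    {
        v.Size,
        v.Trans_TransactionHeader,
        v.RoutGroupID,
        v.Direction,
        v.Degree
    });
}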
Because it is not possible to have the same environment as yours, I suggest you do the following things:
Check in the debugger what exactly is in the list. The printed result suggests that you display the data returned from the database somehow, and the error could be in that code.
Preview the LINQ query. You can use LINQPad for this.
I'm having some technical problems... I'm trying to use Firmata for Arduino, but over nRF24 rather than over the Serial interface. I have tested nRF24 communication and it's fine. I have also tested Firmata over Serial and it works.
The base device is a simple "serial relay": when it has data available on Serial, it reads it and sends it over the nRF24 network; when there is data available from the network, it reads it and sends it through Serial.
The node device is a bit more complex. It runs a custom StandardFirmata where I have just added write and read overrides.
The read override is handled in the loop method in this way:
while (Firmata.available())
    Firmata.processInput();

// Handle network data and send it to Firmata's process method
while (network.available()) {
    RF24NetworkHeader header;
    uint8_t data;
    network.read(header, &data, sizeof(uint8_t));
    Serial.print(data, DEC); Serial.print(" ");
    Firmata.processInputOverride(data);
    BlinkOnBoard(50);
}
currentMillis = millis();
Firmata's processInputOverride is a slightly changed copy of processInput: processInput reads data directly from FirmataSerial, whereas this method is handed the data we read from the network. This was tested and it works fine.
The write side is overridden in a different way. In Firmata.cpp I have added a function pointer that can be set to a custom method and used to send data through that method. I have then added a call to it after each FirmataSerial.write() call:
Firmata.h:
...
size_t (*firmataSerialWriteOverride)(uint8_t);
...

Firmata.cpp:
void FirmataClass::printVersion(void) {
    FirmataSerial.write(REPORT_VERSION);
    FirmataSerial.write(FIRMATA_MAJOR_VERSION);
    FirmataSerial.write(FIRMATA_MINOR_VERSION);
    Firmata.firmataSerialWriteOverride(REPORT_VERSION);
    Firmata.firmataSerialWriteOverride(FIRMATA_MAJOR_VERSION);
    Firmata.firmataSerialWriteOverride(FIRMATA_MINOR_VERSION);
}
I have then set the override pointer to a custom method that just writes the byte to the network instead of Serial:
size_t ssignal(uint8_t data) {
    RF24NetworkHeader header(BaseDevice);
    // Return the byte count so the signature matches a Serial-style write()
    return network.write(header, &data, sizeof(uint8_t)) ? sizeof(uint8_t) : 0;
}

void setup() {
    ...
    Firmata.firmataSerialWriteOverride = ssignal;
    ...
}
Everything seems to be working fine; it's just that some data seems to be reversed or something. I'm using sharpduino (C#) to do a simple digital pin toggle. Here's what the output looks like (< came from BASE, > sent to BASE):
> 208 0
> 209 0
...
> 223 0
> 249
< 4 2 249
and here communication stops...
That last line came back reversed. So I thought I only needed to reverse the received bytes, and it worked for that first command. But then something happens and communication stops again.
> 208 0
> 209 0
...
> 223 0
> 249 // Report firmware version request
< 249 2 4
> 240 121 247 // 240 is sysex begin and 247 is sysex end
< 240 121
< 101 0 67 0 0 1 69 0 118
< 117 0 115 0
< 0 70 0 105 0 116 0 111 0 109
< 0 97 0
< 0 109
< 116 0 97 0 247
> 240 107 247
So what could be the problem here? It seems that communication with Firmata works but something isn't right...
-- EDIT --
I solved that issue. The problem was that I had overlooked the Serial.write() calls in the sysex callback. Now that that is solved, I ran into another problem... All stages pass correctly (I guess), and then I don't get any response from the node when I request pin states:
...
< f0 6a 7f 7f 7f ... 7f 0 1 2 3 4 5 6 7 8 9 a b c d e f f7 // analog mapping
> f0 6d 0 f7 // sysex request pin 0 state and value
> f0 6d 1 f7
> f0 6d 2 f7
...
> f0 6d 45 f7
// And I wait for response...
There is no response. Any ideas why that would happen? The node receives all messages correctly, and the code for handling pin states exists.