Writing to RFID - C#

public void writeToCard2(string sourceText, string cardType)
{
    Cursor.Current = Cursors.WaitCursor;
    int itemLength = sourceText.Split(',').Length;
    sourceText = itemLength.ToString() + "," + sourceText + ",";
    byte[] dataByteArray = Encoding.GetEncoding(932).GetBytes(sourceText);
    //textBox2.Text = BitConverter.ToString(dataByteArray);
    int dataByteLength = dataByteArray.Length;
    int writeLength = dataByteLength + 11;
    byte[] writeByteArray = new byte[writeLength];
    writeByteArray[0] = 0x02; // STX
    writeByteArray[1] = 0x00; // address
    writeByteArray[2] = 0x78; // command
    writeByteArray[3] = Convert.ToByte(dataByteLength + 4); // data length
    writeByteArray[4] = 0xa1; // detail command
    writeByteArray[5] = 0x00; // starting block number for the write
    writeByteArray[6] = Convert.ToByte(dataByteLength); // number of bytes to write
    for (int i = 0; i < dataByteLength; i++)
    {
        writeByteArray[i + 7] = dataByteArray[i]; // payload data
    }
    writeByteArray[dataByteLength + 7] = 0x40; // option flag
    writeByteArray[dataByteLength + 8] = 0x03; // ETX
    byte sum = 0x00;
    for (int i = 0; i <= dataByteLength + 8; i++)
    {
        sum += writeByteArray[i];
    }
    writeByteArray[dataByteLength + 9] = sum; // checksum (SUM)
    writeByteArray[dataByteLength + 10] = 0x0d; // CR
    //string tempStr = BitConverter.ToString(writeByteArray);
    //port.Write(writeByteArray, 0, writeByteArray.Length);
    serialPort1.Write(writeByteArray, 0, writeByteArray.Length);
    writeCardType = cardType;
    Cursor.Current = Cursors.Default;
}
The above method writes data to an RFID tag in the line
serialPort1.Write(writeByteArray, 0, writeByteArray.Length);
The size of writeByteArray exceeds the capacity of the RFID tag. My boss said to convert the data to ASCII code and then write it to the RFID tag.
Will this help? Can this conversion reduce the size of the data?
Is there any other way around this without using a different RFID tag?

Your boss said to convert to ASCII because such devices read information byte by byte. I have worked with those devices, and that is the usual way they read the data stream passed to them.
There is no allocation benefit in this, because the size of the data remains the same; what changes is the representation of the information. That is all.
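You can see this for yourself; a minimal sketch, assuming the payload is plain ASCII text (for ASCII characters, code page 932 already emits one byte per character, so re-encoding gains nothing):

string payload = "3,ABC,123,";
int sjisLength = Encoding.GetEncoding(932).GetBytes(payload).Length; // 10 bytes
int asciiLength = Encoding.ASCII.GetBytes(payload).Length;           // 10 bytes
Console.WriteLine(sjisLength + " vs " + asciiLength);                // "10 vs 10"

If the source text contains double-byte characters (which code page 932 stores in two bytes each), ASCII cannot represent them at all, so there is no safe size reduction there either; the payload itself would have to shrink to fit the tag.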

Related

How to get the "pixel" data values from a Photon Focus camera using the Pleora eBUS SDK (C# or Python)?

I have a 3D Photon Focus camera (MV1-D2048x1088-3D06-760-G2-8) and I am using C# with the Pleora eBUS SDK version 5.1.1 on a Windows 10 machine. The camera is set to scan a laser line in LineFinder mode, DataFormat3D = 2, and is returning the data (buffer payload = 2 x 2048 = 4096 bytes). The payload seems correct. I want to save this data but I am having difficulty. How can I get the buffer into an array (or some structure) to save it to a file stream?
My code uses the .DataPointer parameter from the Pleora eBUS SDK, but I do not understand what it is doing. I have included the manual HERE - MAN075_PhotonFocus.
private unsafe static void ThreadProc(object aParameters)
{
    object[] lParameters = (object[])aParameters;
    MainForm lThis = (MainForm)lParameters[0];
    for (;;)
    {
        if (lThis.mIsStopping)
        {
            // Signaled to terminate thread, return.
            return;
        }
        PvBuffer lBuffer = null;
        PvResult lOperationResult = new PvResult(PvResultCode.OK);
        // Retrieve next buffer from acquisition pipeline
        PvResult lResult = lThis.mStream.RetrieveBuffer(ref lBuffer, ref lOperationResult, 100);
        if (lResult.IsOK)
        {
            // Operation result of buffer is OK, display.
            if (lOperationResult.IsOK)
            {
                //lThis.displayControl.Display(lBuffer);
                uint bSize = lBuffer.GetPayloadSize();
                PvImage image1 = lBuffer.Image;
                uint height1 = image1.Height;
                uint width1 = image1.Width;
                uint offx1 = image1.OffsetX;
                uint offy1 = image1.OffsetY;
                PvPixelType imgpixtype = image1.PixelType;
                image1.Alloc(width1, (uint)2, imgpixtype);
                byte* data_pnt = image1.DataPointer;
                byte[] MSB_array = new byte[(int)width1];
                int buff_size = 2 * (int)width1;
                byte[] pix_array = new byte[buff_size];
                ulong tStamp = lBuffer.Timestamp;
                string msgOut = (bSize.ToString() + " TimeStamp " + tStamp.ToString() + " width " + width1.ToString());
                Console.WriteLine(msgOut);
                // First row: one byte per column (the 8 MSBs of each value).
                for (int i = 0; i < width1; i++)
                {
                    Console.Write((uint)*data_pnt);
                    MSB_array[i] = *data_pnt;
                    data_pnt += 1;
                }
                data_pnt += 1;
                Console.WriteLine(height1.ToString());
                // Second row: 4 LSBs of the integer part plus a 4-bit fractional part.
                for (int i = 0; i < width1; i++)
                {
                    ushort msb1 = MSB_array[i];
                    ushort last_4 = (ushort)(*data_pnt & 0x0F);
                    int integer1 = (msb1 << 4) + (ushort)(*data_pnt >> 4);
                    double dec_part = (float)last_4 / (float)16;
                    double val1 = (float)integer1 + dec_part;
                    Console.WriteLine(val1.ToString());
                    data_pnt += 1;
                }
                Console.WriteLine(height1.ToString());
            }
            else
            {
                uint bSize = lBuffer.GetPayloadSize();
                ulong tStamp = lBuffer.Timestamp;
                string msgOut = (bSize.ToString() + " BAD RESULT TimeStamp " + tStamp.ToString());
                Console.WriteLine(msgOut);
            }
            // We have an image - do some processing (...) and VERY IMPORTANT,
            // re-queue the buffer in the stream object.
            lThis.mStream.QueueBuffer(lBuffer);
        }
    }
}
My current solution is to loop through the buffer by incrementing the pointer and saving the bytes into a new array (MSB_array). The way this data is packed (see the attached image in the question), I had to read the next line, bit-shift it, and add it to the byte in the MSB_array to get the final value:
for (int i = 0; i < width1; i++)
{
    Console.Write((uint)*data_pnt);
    MSB_array[i] = *data_pnt;
    data_pnt += 1;
}
data_pnt += 1;
Console.WriteLine(height1.ToString());
for (int i = 0; i < width1; i++)
{
    ushort msb1 = MSB_array[i];
    ushort last_4 = (ushort)(*data_pnt & 0x0F);
    int integer1 = (msb1 << 4) + (ushort)(*data_pnt >> 4);
    double dec_part = (float)last_4 / (float)16;
    double val1 = (float)integer1 + dec_part;
    Console.WriteLine(val1.ToString());
    data_pnt += 1;
}
I am only writing it out to the console now, but the data is correct. There may be a better/faster way than the for loop with the pointer; a post showing that would be appreciated.
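One possibility, as a hedged sketch rather than a tested solution: copy the whole payload into a managed array with a single Marshal.Copy call and decode from there, instead of walking the pointer byte by byte. This belongs in the same unsafe block, and it assumes data_pnt is valid for buff_size = 2 * width1 bytes with the second row starting right after the first; the loop above skips one extra byte between rows, so adjust lsbOffset if that matches your actual packing.

byte[] raw = new byte[2 * (int)width1];
System.Runtime.InteropServices.Marshal.Copy((IntPtr)data_pnt, raw, 0, raw.Length);

int lsbOffset = (int)width1; // assumed row offset; see note above
double[] values = new double[width1];
for (int i = 0; i < width1; i++)
{
    int msb = raw[i];                 // first row: 8 MSBs of each value
    int lsbByte = raw[lsbOffset + i]; // second row: 4 LSBs + 4 fraction bits
    values[i] = ((msb << 4) + (lsbByte >> 4)) + (lsbByte & 0x0F) / 16.0;
}

The single bulk copy avoids one dereference per byte, and once the data is in a byte[] you can write it straight to a FileStream.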

How to convert an array object saved as a string (Base64?) to a C# array or SQL table?

I have legacy code that was developed in Ruby; it saves an Array as a string in the database, which I would like to decode during the migration to the new environment.
Legacy Ruby code:
class CatalogoConjunto
  include DataMapper::Resource
  ...
  property :coordenadas, Object
  ...
  ...
  def save_depois
    if (coordenadas.length > 1 and coordenadas.include?([0, 0]) rescue false)
      self.coordenadas = coordenadas - [[0, 0]]
      save!
    end
  end

  def save
    save = super
    save_depois
    save
  end
  ...
end
Sample values saved in the coordenadas column:
"BAhbCFsHaQHYaQIDAVsHaQLZAWkCwwFbB2kB8WkCXQI= "
Other Samples:
"BAhbBlsHaQHRaQI6AQ== ", "BAhbBlsHaQLMAmkB3A== ", "BAhbB1sHaQKmAmkB81sHaQIkA2kBvQ== "
How do I find the encoding method?
The new application uses C#, and the data migration is being done in SQL... but any light that shows me the way forward helps.
I'm pretty lost when it comes to Ruby.
Edit
The string "BAhbBlsHaQJ6A2kCIwI=" stands for the values (890, 547),
but using the following code I get the string "egMAACMCAAA=".
Trying to convert using C#:
int[,] test = new int[,] { { 890, 547 } };
byte[] result = new byte[test.Length * sizeof(int)];
Buffer.BlockCopy(test, 0, result, 0, result.Length);
var anotherString = Convert.ToBase64String(result);
Console.WriteLine(anotherString);
var stringFromDatabase = "BAhbBlsHaQJ6A2kCIwI= ";
byte[] byteArray = Convert.FromBase64String(stringFromDatabase);
// Doesn't work
int[,] newArr = new int[byteArray.Length / sizeof(int) / 2 + ((byteArray.Length / sizeof(int)) % 2), 2];
for (int ctr = 0; ctr < byteArray.Length / sizeof(int); ctr++)
{
    if (ctr % 2 != 0)
    {
        newArr[ctr / 2, 0] = BitConverter.ToInt32(byteArray, ctr * sizeof(int));
    }
    else
    {
        newArr[ctr / 2, 1] = BitConverter.ToInt32(byteArray, ctr * sizeof(int));
    }
}
The string generated by Ruby looks like Base64, but the values don't match.
DataMapper has a default behavior of marshalling the Object data type:
https://github.com/datamapper/dm-core/blob/master/lib/dm-core/property/object.rb
After some research I found a post on how marshalling in Ruby works:
https://ilyabylich.svbtle.com/ruby-marshalling-from-a-to-z
I just need the numbers, so I did a simple decode that just looks for the integers:
public static List<int> Conversor(string stringFromDatabase)
{
    byte[] byteArray = Convert.FromBase64String(stringFromDatabase);
    List<int> retorno = new List<int>();
    for (int i = 0; i < byteArray.Length; i++)
    {
        // 0x69 is 'i', the Ruby Marshal marker for an integer.
        if ((char)byteArray[i] == (char)105)
        {
            int valInt = 0;
            int primeiroByte = Convert.ToInt32(byteArray[i + 1]);
            if (primeiroByte == 0)
                retorno.Add(0);
            else if (primeiroByte > 4)
                retorno.Add(primeiroByte - 5); // small positive ints are stored as value + 5
            else if (primeiroByte > 0 && primeiroByte < 5)
            {
                // A length byte of 1-4 means that many little-endian bytes follow.
                valInt = byteArray[i + 2];
                for (int y = 1; y < primeiroByte; y++)
                {
                    valInt = valInt | (byteArray[i + 2 + y] << 8 * y);
                }
                retorno.Add(valInt);
            }
        }
    }
    return retorno;
}
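For reference, a quick check against the sample from the edit above:

List<int> coords = Conversor("BAhbBlsHaQJ6A2kCIwI=");
Console.WriteLine(string.Join(", ", coords)); // prints "890, 547"

Note that this scans for the 0x69 ('i') marker naively, so a data byte that happens to be 0x69 could produce a false hit; for small coordinate pairs like these that is not a problem.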

How do I deal with NaN results from FFT?

I am trying to implement a function that takes a WAV file and runs a hundredth of a second's worth of audio through the FFT from AForge. When I change the offset to alter where in the audio the FFT is computed, sometimes I get results that I can show in my graph, but most of the time I get a complex array of NaNs. Why could this be?
Here is my code.
public double[] test()
{
    OpenFileDialog file = new OpenFileDialog();
    file.ShowDialog();
    WaveFileReader reader = new WaveFileReader(file.FileName);
    byte[] data = new byte[reader.Length];
    reader.Read(data, 0, data.Length);
    samepleRate = reader.WaveFormat.SampleRate;
    bitDepth = reader.WaveFormat.BitsPerSample;
    channels = reader.WaveFormat.Channels;
    Console.WriteLine("audio has " + channels + " channels, a sample rate of " + samepleRate + " and bitdepth of " + bitDepth + ".");
    float[] floats = new float[data.Length / sizeof(float)];
    Buffer.BlockCopy(data, 0, floats, 0, data.Length);
    size = 2048;
    int inputSamples = samepleRate / 100;
    int offset = samepleRate * 15 * channels;
    int y = 0;
    Complex[] complexData = new Complex[size];
    float[] window = CalcWindowFunction(inputSamples);
    for (int i = 0; i < inputSamples; i++)
    {
        complexData[y] = new Complex(floats[i * channels + offset] * window[i], 0);
        y++;
    }
    while (y < size)
    {
        // Zero-pad the rest of the 2048-point FFT input.
        complexData[y] = new Complex(0, 0);
        y++;
    }
    FourierTransform.FFT(complexData, FourierTransform.Direction.Forward);
    double[] arr = new double[complexData.Length];
    for (int i = 0; i < complexData.Length; i++)
    {
        arr[i] = complexData[i].Magnitude;
    }
    Console.Write("complete, ");
    return arr;
}

private float[] CalcWindowFunction(int inputSamples)
{
    // Rectangular window for now: all ones.
    float[] arr = new float[size];
    for (int i = 0; i < size; i++)
    {
        arr[i] = 1;
    }
    return arr;
}
A complex array of NaNs is usually the result of one of the inputs to the FFT being a NaN. To debug, you might check all the values in the input array before the FFT to make sure they are within some valid range, given the audio input scaling.
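For instance, a minimal pre-FFT guard along those lines (assuming AForge's Complex type with its Re field, and float samples nominally scaled to [-1, 1]):

for (int i = 0; i < complexData.Length; i++)
{
    double re = complexData[i].Re;
    if (double.IsNaN(re) || double.IsInfinity(re) || Math.Abs(re) > 1.0)
    {
        Console.WriteLine("suspect input sample at index " + i + ": " + re);
    }
}

In this particular code, a likely source of bad values is reinterpreting the raw WAV bytes as floats with Buffer.BlockCopy: that is only valid for 32-bit IEEE float WAV files, and a 16-bit PCM file read that way yields garbage bit patterns, some of which decode as NaN.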

Find checksum of a string in C# .NET

Hi, I want to find the checksum of a single string. Here are the requirements:
A 32-bit (8 hex digit) checksum represented in hexadecimal characters.
It should be the XOR of header + session + body + message.
Let's suppose header + session + body + message = "This is test string". I want to calculate the checksum of this. So far I have developed the code below.
The checksum is calculated correctly if the string length (byte[] data) is a multiple of 4.
If "data" is not a multiple of 4, I get the exception
"System.IndexOutOfRangeException: Index was outside the bounds of the array".
I will be taking different inputs of varying length from the user (sometimes the user may enter only "ABCDE", sometimes just "q", and so on). How can I fix this exception and calculate the correct checksum when the length is not a multiple of 4?
public string findchecksum(string userinput)
{
    try
    {
        ASCIIEncoding enc = new ASCIIEncoding();
        byte[] data = Encoding.ASCII.GetBytes(userinput);
        byte[] checksum = new byte[4];
        // Starts at index 16, i.e. the first 16 bytes of the input are skipped.
        for (int i = 16; i <= data.Length - 1; i += 4)
        {
            checksum[0] = (byte)(checksum[0] ^ data[i]);
            checksum[1] = (byte)(checksum[1] ^ data[i + 1]);
            checksum[2] = (byte)(checksum[2] ^ data[i + 2]);
            checksum[3] = (byte)(checksum[3] ^ data[i + 3]);
        }
        int check = 0;
        for (int i = 0; i <= 3; i++)
        {
            int r = Convert.ToInt32(checksum[i]);
            int c = (-(r + 1)) & 0xff;
            c <<= 24 - (i * 8);
            check = check | c;
        }
        return check.ToString("X");
    }
    catch
    {
        // catch block added so the snippet compiles; rethrowing keeps the
        // original (truncated) code's behavior.
        throw;
    }
}
Because you use i + 3 inside your loop, your array size always has to be divisible by 4. You should extend your data array to meet that requirement before entering the loop. Padding with zero bytes is safe here, because XOR-ing with 0x00 leaves the checksum unchanged:
byte[] data = Encoding.ASCII.GetBytes(userinput);
if (data.Length % 4 != 0)
{
    var data2 = new byte[(data.Length / 4 + 1) * 4];
    Array.Copy(data, data2, data.Length);
    data = data2;
}
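Putting the two together, a sketch of the full method with the padding applied (keeping the i = 16 start offset from the question, which appears to skip a fixed-length header; adjust it if your header length differs):

public string findchecksum(string userinput)
{
    byte[] data = Encoding.ASCII.GetBytes(userinput);
    if (data.Length % 4 != 0)
    {
        var data2 = new byte[(data.Length / 4 + 1) * 4];
        Array.Copy(data, data2, data.Length); // trailing bytes stay 0x00
        data = data2;
    }
    byte[] checksum = new byte[4];
    for (int i = 16; i <= data.Length - 1; i += 4)
    {
        checksum[0] ^= data[i];
        checksum[1] ^= data[i + 1];
        checksum[2] ^= data[i + 2];
        checksum[3] ^= data[i + 3];
    }
    int check = 0;
    for (int i = 0; i <= 3; i++)
    {
        int c = (-(checksum[i] + 1)) & 0xff; // one's-complement step from the question
        check |= c << (24 - i * 8);
    }
    return check.ToString("X");
}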

Write with Java ObjectOutputStream.writeInt(), read with BinaryReader in C#?

Okay, so I've got a game map creator programmed in Java which writes the map out to a file using ObjectOutputStream.writeInt().
Now I'm converting the game engine to C# XNA, and I'm trying to load the map. I'm getting numerical errors, though, so I'm wondering if anyone knows what I'm doing wrong?
Java writes int as 32-bit big-endian, I believe (I could be wrong though).
Here is the code I'm using to read the height and width of the map in C#.
Edit: br is a BinaryReader.
width = (int)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(br.ReadBytes(sizeof(int)), 0));
height = (int)IPAddress.NetworkToHostOrder(BitConverter.ToInt32(br.ReadBytes(sizeof(int)), 0));
Can anyone please tell me what I'm doing wrong? Or how to read the bytes from ObjectOutputStream.writeInt() properly in C#?
Edit: 2nd try failed. Here is the current code:
public byte[] ReadBigEndianBytes(int count, BinaryReader br)
{
    byte[] bytes = new byte[count];
    for (int i = count - 1; i >= 0; i--)
        bytes[i] = br.ReadByte();
    return bytes;
}

public void loadFile(int level)
{
    FileStream fs = new FileStream("map" + level + ".lv", FileMode.Open, FileAccess.Read);
    BinaryReader br = new BinaryReader(fs, System.Text.Encoding.BigEndianUnicode);
    width = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
    height = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
    tile = new int[width, height];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            tile[x, y] = BitConverter.ToInt32(ReadBigEndianBytes(4, br), 0);
        }
    }
}
ObjectOutputStream.writeInt()
Don't use that. Use DataOutputStream.writeInt(). It does the same thing, in network byte order, but it doesn't add the serialization header that ObjectOutputStream adds, so you won't have to skip it at the .NET end.
Absolutely correct:
Java writes int as 32-bit big-endian, I believe (I could be wrong though).
Remember, though: a .NET Int32 is little-endian ;)
[Edit] SOLUTION:
1) Here is Java code that writes 10 integers (Java ints are 32-bit, big-endian):
import java.io.*;

public class WriteBinary {
    public static void main(String[] args) {
        int[] data = {
            1, 2, 3, 4, 5, 6, 7, 8, 9, 10
        };
        String fname = "myfile.bin";
        try {
            System.out.println("Opening " + fname + "...");
            FileOutputStream fos = new FileOutputStream(fname);
            int ibyte;
            for (int i = 0; i < data.length; i++) {
                ibyte = ((data[i] >>> 24) & 0xff); fos.write(ibyte);
                ibyte = ((data[i] >>> 16) & 0xff); fos.write(ibyte);
                ibyte = ((data[i] >>> 8) & 0xff); fos.write(ibyte);
                ibyte = (data[i] & 0xff); fos.write(ibyte);
            }
            fos.close();
            System.out.println("File write complete.");
        }
        catch (IOException e) {
            System.out.println("I/O error: " + e.getMessage());
        }
    }
}
2) Here is the C# code that reads it. You'll notice the "using System.Net", which provides .NET's equivalent of ntohl():
using System;
using System.IO;
using System.Net;

namespace ReadBinary
{
    class Program
    {
        static void Main(string[] args)
        {
            string fname = "myfile.bin";
            try
            {
                Console.WriteLine("Opening " + fname + "...");
                BinaryReader br = new BinaryReader(File.Open(fname, FileMode.Open));
                for (int i = 0; i < (int)(br.BaseStream.Length / 4); i++)
                {
                    int j = System.Net.IPAddress.NetworkToHostOrder(br.ReadInt32());
                    Console.WriteLine("array[" + i + "]=" + j + "...");
                }
                br.Close();
                Console.WriteLine("Read complete.");
            }
            catch (IOException ex)
            {
                Console.WriteLine("I/O error: " + ex.Message);
            }
        }
    }
}
I think a proper way is to use the IPAddress.NetworkToHostOrder(Int32) method:
public void loadFile(int level)
{
    ...
    width = IPAddress.NetworkToHostOrder(br.ReadInt32());
    height = IPAddress.NetworkToHostOrder(br.ReadInt32());
    ...
}
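If this big-endian read is needed in many places, it may read more cleanly as an extension method. A small sketch along those lines (the name ReadInt32BigEndian is my own, not an existing API):

public static class BinaryReaderExtensions
{
    // Reads a big-endian (network order) Int32, as written by Java's writeInt().
    public static int ReadInt32BigEndian(this BinaryReader br)
    {
        return System.Net.IPAddress.NetworkToHostOrder(br.ReadInt32());
    }
}

Usage: width = br.ReadInt32BigEndian(); height = br.ReadInt32BigEndian();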
