I am trying to print stored bitmap images in some printers.
The program is a Windows Forms application.
The command to print the logo (bitmap), if there is one stored, is:
port.Write("\x1C\x70\x01\x00");
('port' being my name for the SerialPort object).
There can be from 0 to 255 decimal (0x00 to 0xFF hex) different image locations in the printer's memory.
I need a for loop or while loop that increments the third byte in the line above, so that
port.Write("\x1C\x70\x01\x00"); becomes
port.Write("\x1C\x70\x02\x00");
port.Write("\x1C\x70\x03\x00");
and so on, up to
port.Write("\x1C\x70\xFF\x00");
I looked on MSDN and searched Stack Overflow:
https://msdn.microsoft.com/en-us/library/bb311038.aspx
int to hex string in C#
Also, as an alternative to Corith's solution: the SerialPort object lets you write a byte array directly, rather than converting your bytes to a string that the SerialPort then converts back into bytes again.
for (int i = 0; i <= 0xFF; i++)
{
    // The third byte selects the stored image slot (0x00 to 0xFF)
    var bytes = new byte[] { 0x1C, 0x70, (byte)i, 0x00 };
    port.Write(bytes, 0, 4);
}
This loop should work for you. You can always use the 0x prefix to work with hexadecimal numbers in your loops.
for (var c = 0x01; c <= 0xFF; c++)
{
    // A \x escape cannot contain a variable, so insert the slot number as a char instead
    port.Write($"\x1C\x70{(char)c}\x00");
}
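One caveat that is not in the original answers: SerialPort.Write(string) runs the string through the port's Encoding property, which defaults to ASCII, so characters above 0x7F (such as \xFF) are replaced with '?' on the wire. If you go the string route, a rough workaround is to switch the port to an 8-bit encoding such as Latin-1 so each char maps to exactly one byte; the byte-array overload from the first answer sidesteps the problem entirely.
// Assumption: Latin-1 (ISO-8859-1) maps chars 0x00..0xFF one-to-one onto bytes
port.Encoding = System.Text.Encoding.GetEncoding("ISO-8859-1");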
I'm writing an RLE algorithm in C# that can work on any file as input. The approach to encoding I'm taking is as follows:
An RLE packet contains 1 byte for the length and 1 byte for the value. For example, if the byte 0xFF appeared 3 times in a row, 0x03 0xFF would be written to the file.
If representing the data as raw data would be more efficient, I use 0x00 as a terminator. This works because the length of a packet can never be zero. If I wanted to add the bytes 0x53 0x2C 0x01 to my compressed file it would look like this:
0x03 0xFF 0x00 0x53 0x2C 0x01
However, a problem arises when trying to switch back to RLE packets. I can't use a byte as a terminator like I did for switching to raw data, because any byte value from 0x00 to 0xFF can appear in the input data, and when decoding, the decoder would misinterpret that byte as a terminator and ruin everything.
What can I do to indicate that I have to switch back to RLE packets when it can't be written as data in the file?
Here is my code if it helps:
private static void RunLengthEncode(ref byte[] bytes)
{
    // Create a list to store the encoded bytes
    List<byte> output = new List<byte>();
    byte runLengthByte;
    int runLengthCounter = 0;
    // Start the current run with the first byte in the array
    runLengthByte = bytes[0];
    // For each byte in the input array...
    for (int i = 0; i < bytes.Length; i++)
    {
        // Keep counting while the byte matches the run and the count still fits in one byte
        if (runLengthByte == bytes[i] && runLengthCounter < 255)
        {
            runLengthCounter++;
        }
        else
        {
            // RLE packets under 3 should be written as raw data to avoid increasing the file size
            if (runLengthCounter < 3)
            {
                // Add a 0x00 to indicate raw data
                output.Add(0x00);
                // Add the bytes that were skipped while counting the run length
                for (int j = i - runLengthCounter; j < i; j++)
                {
                    output.Add(bytes[j]);
                }
            }
            else
            {
                // Add 2 bytes, one for the run length and one for the value
                output.Add((byte)runLengthCounter);
                output.Add(runLengthByte);
            }
            runLengthCounter = 1;
            runLengthByte = bytes[i];
        }
        // Flush the last run when the end of the input is reached
        if (i == bytes.Length - 1)
        {
            // Add 2 bytes, one for the run length and one for the value
            output.Add((byte)runLengthCounter);
            output.Add(runLengthByte);
        }
    }
    // Replace the input with the RLE encoded data
    bytes = output.ToArray();
}
Also if you want to comment and say that RLE isn't very efficient for binary data, I know it isn't. This is a project I'm doing to implement many kinds of compression to learn about them, not for an actual product.
Any help would be appreciated! Thanks!
There are many ways to unambiguously encode run lengths. One simple way, when decoding: if you see two equal bytes in a row, then the next byte is a count of additional repeats of that byte after those first two, i.e. 0..255 additional repeats, encoding runs of 2..257. (There's no point in encoding runs of 0 or 1.)
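To make that scheme concrete, here is a minimal decoder sketch for it (my illustration of the answer's idea, not code from the post; it assumes well-formed input where a count byte always follows two equal bytes):
// Decode the "two equal bytes, then a count of additional repeats" scheme
// Assumes: using System; using System.Collections.Generic;
static byte[] RunLengthDecode(byte[] input)
{
    var output = new List<byte>();
    int i = 0;
    while (i < input.Length)
    {
        byte value = input[i];
        output.Add(value);
        if (i + 1 < input.Length && input[i + 1] == value)
        {
            // Two equal bytes in a row: the next byte holds 0..255 additional repeats
            output.Add(value);
            int extraRepeats = input[i + 2];
            for (int k = 0; k < extraRepeats; k++)
                output.Add(value);
            i += 3;
        }
        else
        {
            // Single literal byte, copy as-is
            i += 1;
        }
    }
    return output.ToArray();
}
The matching encoder would emit a byte once for a run of 1, and for runs of 2 or more emit the byte twice followed by (runLength - 2), capping each packet at a run of 257.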
I'm new here and I'm a beginner in Python programming. I need to convert C# code to Python, but I got stuck when I wanted to read serial data as a byte array. I used the extend() and bytearray() functions but nothing is working.
Below is the C# code that I want to convert to Python 3.x:
do
{
    int _byteToRead = P._serialPort.BytesToRead;
    byte[] inBuffer = new byte[_byteToRead];
    P._serialPort.Read(inBuffer, 0, _byteToRead); // Reads that many bytes from the SerialPort input buffer into the byte array
    byte MatchStart = 242; // HEX: F2
    byte MatchEnd = 248; // HEX: F8
    try
    {
        for (int j = 0; j < inBuffer.Length; j++)
        {
            if (inBuffer[j] == MatchStart)
I don't know how to convert the first 3 lines.
I tried:
bytedata=ser.read(10000) # I need to
b=bytearray()
b.extend(map(ord, bytedata))
or:
bytedata+=ser.read(ser.inWaiting())
z=bytearray(bytedata)
Thanks.
I am trying to write an Encoded file.The file has 9 to 12 bit symbols. While writing a file I guess that it is not written correctly the 9 bit symbols because I am unable to decode that file. Although when file has only 8 bit symbols in it. Everything works fine. This is the way I am writing a file
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
The same goes for reading, with a ReadAllText function call.
What is the way to go here?
I am using the ZXing library to encode my file with its Reed-Solomon (RS) encoder.
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // if I use AZTEC_DATA_8 it works fine because the symbol size is 8 bits
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
string contentWithParity = (ASCIIEncoding.Default.GetString(bytes.ToArray()));
WriteBackContent += contentWithParity;
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
As shown in the code, I am initializing my encoder with AZTEC_DATA_12, which means 12 bit symbols. Because the RS encoder requires an int array, I convert the data to an int array and write it to the file as shown above. It works well with AZTEC_DATA_8, because of the 8 bit symbols, but not with AZTEC_DATA_12.
Main problem is here:
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
You are basically throwing away part of the result when converting the single integers to single bytes.
If you look at the array after the call to encode(), you can see that some of the array elements have a value higher than 255, so they cannot be represented as bytes. However, in your code quoted above, you cast every single element in the integer array to byte, changing the element when it has a value greater than 255.
So to store the result of encode(), you have to convert the integer array to a byte array in a way that the values are not lost or modified.
In order to make this kind of conversion between byte arrays and integer arrays, you can use the function Buffer.BlockCopy(). An example of how to use this function is in this answer.
Use the samples from that answer, and the one from the comment to it, for both conversions: turning a byte array into an integer array to pass to the encode() function, and turning the integer array returned from encode() back into a byte array.
Here are the sample codes from the linked answer:
// Convert integer array to byte array
byte[] result = new byte[intArray.Length * sizeof(int)];
Buffer.BlockCopy(intArray, 0, result, 0, result.Length);
// Convert byte array to integer array (with bugs fixed)
int bytesCount = byteArray.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] result = new int[intsCount];
Buffer.BlockCopy(byteArray, 0, result, 0, byteArray.Length);
Now about storing the data in files: do not turn the data into a string directly via Encoding.GetString(). Not every bit sequence is a valid representation of a character in a given character set, so converting an arbitrary sequence of bytes into a string will sometimes fail.
Instead, either store/read the byte array directly to/from a file via File.WriteAllBytes() / File.ReadAllBytes(), or use Convert.ToBase64String() and Convert.FromBase64String() to work with a base64 encoded string representation of the byte array.
Combined here is some sample code:
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // if I use AZTEC_DATA_8 it works fine because the symbol size is 8 bits
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);
// Turn the int array into a byte array without losing values
byte[] bytes = new byte[bytesAsInts.Length * sizeof(int)];
Buffer.BlockCopy(bytesAsInts, 0, bytes, 0, bytes.Length);
// Write to file
File.WriteAllBytes(outputFileName, bytes);
// Read from file
bytes = File.ReadAllBytes(outputFileName);
// Turn byte array to int array
int bytesCount = bytes.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] dataAsInts = new int[intsCount];
Buffer.BlockCopy(bytes, 0, dataAsInts, 0, bytes.Length);
// Decoding
ReedSolomonDecoder dec = new ReedSolomonDecoder(GenericGF.AZTEC_DATA_12);
dec.decode(dataAsInts, parity);
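If you prefer the base64 route mentioned above instead of raw bytes, here is a minimal sketch (reusing the bytes array and outputFileName from the sample):
// Store the byte array as a base64 encoded string
File.WriteAllText(outputFileName, Convert.ToBase64String(bytes));
// Read it back and turn it into a byte array again
byte[] restored = Convert.FromBase64String(File.ReadAllText(outputFileName));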
I am trying to print each byte in an array (byte array) using a for loop. However, since I am using String.Format, it converts the 0x00 in the byte array to a single 0. How can I print it as 00?
Trace.Write("\n--->");
for (int K = 1; K <= j; K++)
Debug.Write(string.Format("{0:X}", FrameByteArray[K]));
I know it should be simple, but I have a hard time figuring it out.
Please advise.
Just use {0:X2} instead; this ensures the number always has at least two characters.
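Applied to the loop from the question, only the format string changes:
for (int K = 1; K <= j; K++)
    Debug.Write(string.Format("{0:X2}", FrameByteArray[K]));
With string interpolation, the equivalent is Debug.Write($"{FrameByteArray[K]:X2}");.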
So I'm curious, what exactly is going on here?
static void SetUInt16(byte[] bytes, int offset, ushort val)
{
    bytes[offset] = (byte)((val & 0x0ff00) >> 8);
    bytes[offset + 1] = (byte)(val & 0x0ff);
}
Basically, the idea in this code is to write a 16 bit int into a byte buffer at a specific offset, but the problem is that I'm trying to emulate it using:
using (var ms = new MemoryStream())
using (var w = new BinaryWriter(ms))
{
    w.Write((ushort)1);
}
I'm expecting to read 1 but instead I'm getting 256. Is this an endianness issue?
The code writes a 16-bit integer in big-endian order: the upper byte is written first. That is not the same thing BinaryWriter does; it writes in little-endian order.
When you decode the data, are you getting 256 when you expect 1? BinaryWriter.Write uses little-endian encoding, while your SetUInt16 method uses big-endian.
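A small sketch of the difference (my illustration, not from the original answers): the same ushort value 1 becomes a different byte pair depending on the byte order, and reading a big-endian pair as little-endian is exactly how 0x0001 turns into 0x0100 = 256.
ushort val = 1;
// SetUInt16-style big-endian: high byte first
byte[] bigEndian = { (byte)(val >> 8), (byte)(val & 0xFF) };        // 0x00, 0x01
// BinaryWriter-style little-endian: low byte first
byte[] littleEndian = { (byte)(val & 0xFF), (byte)(val >> 8) };     // 0x01, 0x00
// Interpreting the big-endian pair as little-endian gives 256
ushort misread = (ushort)(bigEndian[0] | (bigEndian[1] << 8));      // 256
On newer .NET you can also use System.Buffers.Binary.BinaryPrimitives.WriteUInt16BigEndian to get the SetUInt16 behaviour without writing the shifts by hand.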