Moving through each pixel for 1bpp Pixel Format - c#

I have a question about using the LockBits method in C#.
I have a 1bpp image and I'm trying to access all of its pixels, but some are still left out.
public Bitmap Pixels(Bitmap original)
{
Rectangle rect = new Rectangle(0, 0, original.Width, original.Height);
BitmapData bimgData = original.LockBits(rect, ImageLockMode.ReadWrite, original.PixelFormat);
IntPtr ptr = bimgData.Scan0;
int bytes = bimgData.Stride * original.Height;
byte[] Values = new byte[bytes];
Marshal.Copy(ptr, Values, 0, bytes);
int Val;
int stride = bimgData.Stride;
for (int column = 0; column < bimgData.Height; column = column + 1)
for (int row = 0; row < bimgData.Width; row = row +1)
{
int c = column;
int r = row;
for (int t = 0; t < 8; t++)
{
Val = Values[((c) * stride) + ((r) / 8)] & 2 ^ t;
if (Val == 0)
Values[((c) * stride) + ((r) / 8)] = (byte)(Values[((c) * stride) + ((r) / 8)] + 2 ^ t);
}
}
Marshal.Copy(Values, 0, ptr, bytes);
original.UnlockBits(bimgData);
return original;
}
This code should turn all the pixels white.

Val = Values[((c) * stride) + ((r) / 8)] & 2 ^ t;
2 ^ t doesn't do what you hope it does; that's Visual Basic syntax. In the C# language, ^ is the XOR operator. Use the << operator instead, and use parentheses to take care of operator precedence. Fix:
Val = Values[((c) * stride) + ((r) / 8)] & (1 << t);
And fix it again when you set the bit.
Do note that turning the entire image white doesn't require this kind of code at all. Just set all the bytes to 0xff.
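For illustration, a minimal sketch of that shortcut, reusing the LockBits setup and variable names from the question (it assumes the usual 1bpp palette where index 1 is white):
byte[] white = new byte[bimgData.Stride * original.Height];
for (int i = 0; i < white.Length; i++)
    white[i] = 0xFF; // every bit set = palette index 1 for every pixel
Marshal.Copy(white, 0, bimgData.Scan0, white.Length);
original.UnlockBits(bimgData);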

converting byte array to Mat

I have a CCD driver which returns an IntPtr to me. I used Marshal.Copy to copy the data into a byte array (bytearray_Image); each element of bytearray_Image stores an 8-bit R/G/B value, in the sequence byte[0] = R value, byte[1] = G value, byte[2] = B value... and so on. I have successfully converted it to a 3-channel Mat using the code snippet below:
var src = new Mat(rows: nHeight, cols: nWidth, type: MatType.CV_8UC3);
var indexer = src.GetGenericIndexer<Vec3b>();
int x = 0;
int y = 0;
for (int z = 0; z < (bytearray_Image.Length - 3); z += 3)
{
byte blue = bytearray_Image[(z + 2)];
byte green = bytearray_Image[(z + 1)];
byte red = bytearray_Image[(z + 0)];
Vec3b newValue = new Vec3b(blue, green, red);
indexer[y, x] = newValue;
x += 1;
if (x == nWidth)
{
x = 0;
y += 1;
}
}
Since the image is very large, this method seems too slow. Is there any way to do such a conversion efficiently?
This code works for me:
var image = new Mat(nHeight, nWidth, MatType.CV_8UC3);
int length = nHeight * nWidth * 3; // or image.Height * image.Step;
Marshal.Copy(bytearray_Image, 0, image.ImageData, length);
But it will work for you only if the byte array's row stride (step length) is equal to the Mat's.
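For illustration, one way to check that condition before doing the bulk copy; this is a sketch and assumes OpenCvSharp's Mat.Step() returns the row stride in bytes:
// Verify the source rows are tightly packed (3 bytes per pixel, no row padding).
bool tightlyPacked = image.Step() == nWidth * 3;
if (!tightlyPacked)
{
    // Fall back to a row-by-row copy or the indexer-based loop from the question.
}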

summing Int32 represented by byte array in c#

I have an array of audio data, which is a lot of Int32 numbers represented by an array of bytes (each 4-byte group represents an Int32), and I want to do some manipulation on the data (for example, add 10 to each Int32).
I convert the bytes to Int32, do the manipulation and convert them back to bytes, as in this example:
//byte[] buffer;
for (int i=0; i<buffer.Length; i+=4)
{
Int32 temp0 = BitConverter.ToInt32(buffer, i);
temp0 += 10;
byte[] temp1 = BitConverter.GetBytes(temp0);
for (int j=0;j<4;j++)
{
buffer[i + j] = temp1[j];
}
}
But I would like to know if there is a better way to do such manipulation.
You can check the .NET Reference Source for pointers (grin) on how to convert from/to big endian.
class intFromBigEndianByteArray {
public byte[] b;
public int this[int i] {
get {
i <<= 2; // i *= 4; // optional
return (int)b[i] << 24 | (int)b[i + 1] << 16 | (int)b[i + 2] << 8 | b[i + 3];
}
set {
i <<= 2; // i *= 4; // optional
b[i ] = (byte)(value >> 24);
b[i + 1] = (byte)(value >> 16);
b[i + 2] = (byte)(value >> 8);
b[i + 3] = (byte)value;
}
}
}
and sample use:
byte[] buffer = { 127, 255, 255, 255, 255, 255, 255, 255 };//big endian { int.MaxValue, -1 }
//bool check = BitConverter.IsLittleEndian; // true
//int test = BitConverter.ToInt32(buffer, 0); // -129 (incorrect because little endian)
var fakeIntBuffer = new intFromBigEndianByteArray() { b = buffer };
fakeIntBuffer[0] += 2; // { 128, 0, 0, 1 } = big endian int.MinValue + 1
fakeIntBuffer[1] += 2; // { 0, 0, 0, 1 } = big endian 1
Debug.Print(string.Join(", ", buffer)); // "128, 0, 0, 1, 0, 0, 0, 1"
For better performance you can look into parallel processing and SIMD instructions - Using SSE in C#
For even better performance, you can look into Utilizing the GPU with c#
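As an illustration of the SIMD route, here is a sketch (not from the answers above); it assumes the samples are plain little-endian Int32 and a runtime that provides Span<T> and System.Numerics.Vector<T>, e.g. .NET Core:
using System;
using System.Numerics;
using System.Runtime.InteropServices;

static void AddToSamples(byte[] buffer, int delta)
{
    // Reinterpret the byte buffer as Int32 samples without copying.
    Span<int> samples = MemoryMarshal.Cast<byte, int>(buffer);
    var vDelta = new Vector<int>(delta);
    int i = 0;
    // Process Vector<int>.Count samples per iteration.
    for (; i <= samples.Length - Vector<int>.Count; i += Vector<int>.Count)
    {
        var v = new Vector<int>(samples.Slice(i, Vector<int>.Count));
        (v + vDelta).CopyTo(samples.Slice(i));
    }
    // Scalar tail for any remaining samples.
    for (; i < samples.Length; i++)
        samples[i] += delta;
}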
How about the following approach:
struct My
{
public int Int;
}
var bytes = Enumerable.Range(0, 20).Select(n => (byte)(n + 240)).ToArray();
foreach (var b in bytes) Console.Write("{0,-4}", b);
// Pin the managed memory
GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
for (int i = 0; i < bytes.Length; i += 4)
{
// Copy the data
My my = Marshal.PtrToStructure<My>(handle.AddrOfPinnedObject() + i);
my.Int += 10;
// Copy back
Marshal.StructureToPtr(my, handle.AddrOfPinnedObject() + i, true);
}
// Unpin
handle.Free();
foreach (var b in bytes) Console.Write("{0,-4}", b);
I made it just for fun, and I'm not sure it's less ugly.
Will it be faster? I don't know; test it.

c# screen sharing Efficiency

I'm trying to optimize my screen sharing app. I've already used a few ways to make it faster and stable, such as only sending the deltas between two frames, and using Gzip to compress the data.
This is my client code:
private void Form1_Load(object sender, EventArgs e)
{
Thread th = new Thread(startSend);
th.Start();
}
private void startSend()
{
Bitmap curr;
Bitmap diff;
encoderParams.Param[0] = qualityParam;
Bitmap pre = screenshot();
bmpBytes = imageToByteArray(pre);
SendVarData(handler, bmpBytes);
while (true)
{
curr= screenshot();
diff= Difference(pre, curr);
bmpBytes = imageToByteArray(diff);
SendVarData(handler, bmpBytes);
pre = curr;
}
}
Screenshot:
public Bitmap screenshot()
{
Bitmap screenshot = new Bitmap(SystemInformation.VirtualScreen.Width,
SystemInformation.VirtualScreen.Height,
PixelFormat.Format24bppRgb);
Graphics screenGraph = Graphics.FromImage(screenshot);
screenGraph.CopyFromScreen(0,
0,
0,
0,
SystemInformation.VirtualScreen.Size,
CopyPixelOperation.SourceCopy);
return screenshot;
}
The Difference method:
public Bitmap Difference(Bitmap bmp0, Bitmap bmp1)
{
Bitmap bmp2;
int Bpp = 3;
bmp2 = new Bitmap(bmp0.Width, bmp0.Height, bmp0.PixelFormat);
var bmpData0 = bmp0.LockBits(
new Rectangle(0, 0, bmp0.Width, bmp0.Height),
ImageLockMode.ReadOnly, bmp0.PixelFormat);
var bmpData1 = bmp1.LockBits(
new Rectangle(0, 0, bmp1.Width, bmp1.Height),
ImageLockMode.ReadOnly, bmp1.PixelFormat);
var bmpData2 = bmp2.LockBits(
new Rectangle(0, 0, bmp2.Width, bmp2.Height),
ImageLockMode.ReadWrite, bmp2.PixelFormat);
bmp0.UnlockBits(bmpData0);
bmp1.UnlockBits(bmpData1);
bmp2.UnlockBits(bmpData2);
int len = bmpData0.Height * bmpData0.Stride;
// MessageBox.Show(bmpData0.Stride.ToString());
bool changed=false;
byte[] data0 = new byte[len];
byte[] data1 = new byte[len];
byte[] data2 = new byte[len];
Marshal.Copy(bmpData0.Scan0, data0, 0, len);
Marshal.Copy(bmpData1.Scan0, data1, 0, len);
Marshal.Copy(bmpData2.Scan0, data2, 0, len);
for (int i = 0; i < len; i += Bpp)
{
changed = ((data0[i] != data1[i])
|| (data0[i + 1] != data1[i + 1])
|| (data0[i + 2] != data1[i + 2]));
// this.Invoke(new Action(() => this.Text = changed.ToString()));
data2[i] = changed ? data1[i] : (byte)2; // special markers
data2[i + 1] = changed ? data1[i + 1] : (byte)3; // special markers
data2[i + 2] = changed ? data1[i + 2] : (byte)7; // special markers
if (Bpp == 4) data2[i + 3] =
changed ? (byte)255 : (byte)42; // special markers
}
// this.Invoke(new Action(() => this.Text = changed.ToString()));
Marshal.Copy(data2, 0, bmpData2.Scan0, len);
return bmp2;
}
and the SendVarData function:
int total = 0;
byte[] datasize;
private int SendVarData(Socket s, byte[] data)
{
total = 0;
int size = data.Length;
int dataleft = size;
int sent;
datasize = BitConverter.GetBytes(size);
sent = s.Send(datasize);
sent = s.Send(data, total, dataleft, SocketFlags.None);
total += sent;
dataleft -= sent;
// MessageBox.Show("D");
return total;
}
This is the server - I'm just receiving a full image in the beginning, and then just deltas:
public void startListening()
{
prev = byteArrayToImage(ReceiveVarData(client.Client));
theImage.Image = prev;
while (true)
{
data = ReceiveVarData(client.Client);
curr = byteArrayToImage(data) ;
merge = Merge(prev, curr);
theImage.Image = merge;
count++;
prev = merge;
}
}
public static Bitmap Merge(Bitmap bmp0, Bitmap bmp1)
{
int Bpp = 3;
Bitmap bmp2 = new Bitmap(bmp0.Width, bmp0.Height, bmp0.PixelFormat);
var bmpData0 = bmp0.LockBits(
new System.Drawing.Rectangle(0, 0, bmp0.Width, bmp0.Height),
ImageLockMode.ReadOnly, bmp0.PixelFormat);
var bmpData1 = bmp1.LockBits(
new System.Drawing.Rectangle(0, 0, bmp1.Width, bmp1.Height),
ImageLockMode.ReadOnly, bmp1.PixelFormat);
var bmpData2 = bmp2.LockBits(
new System.Drawing.Rectangle(0, 0, bmp2.Width, bmp2.Height),
ImageLockMode.ReadWrite, bmp2.PixelFormat);
bmp0.UnlockBits(bmpData0);
bmp1.UnlockBits(bmpData1);
bmp2.UnlockBits(bmpData2);
int len = bmpData0.Height * bmpData0.Stride;
byte[] data0 = new byte[len];
byte[] data1 = new byte[len];
byte[] data2 = new byte[len];
Marshal.Copy(bmpData0.Scan0, data0, 0, len);
Marshal.Copy(bmpData1.Scan0, data1, 0, len);
Marshal.Copy(bmpData2.Scan0, data2, 0, len);
for (int i = 0; i < len; i += Bpp)
{
bool toberestored = (data1[i] != 2 && data1[i + 1] != 3 &&
data1[i + 2] != 7 && data1[i + 2] != 42);
if (toberestored)
{
data2[i] = data1[i];
data2[i + 1] = data1[i + 1];
data2[i + 2] = data1[i + 2];
if (Bpp == 4) data2[i + 3] = data1[i + 3];
}
else
{
data2[i] = data0[i];
data2[i + 1] = data0[i + 1];
data2[i + 2] = data0[i + 2];
if (Bpp == 4) data2[i + 3] = data0[i + 3];
}
}
Marshal.Copy(data2, 0, bmpData2.Scan0, len);
return bmp2;
}
I think it's coded fine, but I'm still unable to get more than 6-7 fps (frames of 8kb-100kb) when running on 2 different computers with a fast and stable internet connection, and a maximum of 11 fps when running both client and server on the same computer. I think it's because of the complexity of the delta and merging algorithms, but I don't know.
I would very much appreciate it if anyone could suggest how I could optimize it further.
You may organize data2 in a different manner to send less data.
The "compression" algorithm below is very basic, but it will provide better compression than your implementation.
Search for differences by identifying the start and end of each run of consecutive differences. When you find an interval where all pixels are different, store the length of the identical data before that interval using 2 bytes, then store the number of consecutive differences using 2 bytes, and finally write 3 RGB bytes for each different pixel.
If there are more than 65535 consecutive different pixels, cap the interval length at 65535 and store the interval; since the next difference interval starts right after the stored one, its identical count will be 0.
If there are more than 65535 consecutive identical pixels, just write $FFFF followed by $0000, indicating an empty sequence of different pixels.
Clarification: what do I mean by "identifying start and end of consecutive differences"?
In the example below, letters identify colors, i.e. W (white), P (pink), O (orange). The word "identical" refers to a comparison between data0 and data1 (not comparing data1[i] with data1[i-1]).
Data0 = WWPWWOWOOOWWOOOPP
Data1 = WWPWPWPOOOWWPPOPP
You have 4 identical pixels (WWPW), followed by an interval of length 3 (starting at the 4th character and ending at the 6th, counting from 0) where all pixels are different. Then come five identical pixels, followed by a new interval with 2 differences. At the end, there are a few common pixels.
The output for data2 will be as follows (the text in parentheses is not part of the buffer and explains the preceding buffer values):
04 00 (4 identical pixels) 03 00 (3 different pixels coded in next 9 bytes)
Pr Pg Pb (3 bytes RGB code for P) Wr Wg Wb (RGB code for W) Pr Pg Pb (RGB code for P)
05 00 (5 identical pixels) 02 00 (2 different pixels coded in next 6 bytes)
Pr Pg Pb (RGB code for P) ...
The code to write the differences would look like the following.
I'll let you build the code addressing the opposite action, i.e. reading the differences to update the previous image (a sketch of that decoding side appears after the helper below).
Data2Index = 0 ; // next write index in data2 (field used by the helper below)
int idcount = 0 ;
int diffcount = 0 ;
int diffstart = -1 ;
bool changed ;
for (int i = 0; i < len; i += Bpp)
{
    changed = ((data0[i] != data1[i])
        || (data0[i + 1] != data1[i + 1])
        || (data0[i + 2] != data1[i + 2]));
    if (!changed)
    {
        if (idcount == ushort.MaxValue)
        {   // still identical, but there is a limitation on the count
            // write to data2 the identical count + a difference count equal to 0
            AddIdCountDiffCountAndDifferences(idcount, 0, 0) ;
            idcount = 0 ;
        }
        if (diffstart >= 0)
        {   // after 0 or more identical values, a run of changes has just ended
            // write to data2 the identical count + difference count + different pixels
            AddIdCountDiffCountAndDifferences(idcount, diffcount, diffstart) ;
            idcount = 1 ;   // the current pixel is identical, count it
            diffcount = 0 ;
            diffstart = -1 ;
        }
        else idcount++ ; // still identical, continue until a difference is found
    }
    else if (diffstart < 0)
    {   // a difference is found after a sequence of identical pixels, store the index of the first difference
        diffstart = i ; diffcount = 1 ;
    }
    else if (diffcount < ushort.MaxValue)
    {   // a different pixel follows another difference (and the limitation is not reached)
        diffcount++ ;
    }
    else
    {   // limitation reached, i.e. diffcount equals 65535
        AddIdCountDiffCountAndDifferences(0, diffcount, diffstart) ;
        diffstart += diffcount * Bpp ;
        diffcount = 1 ;   // the current pixel starts the next run of differences
    }
}
// after the loop, flush any pending idcount/diffcount the same way
The procedure used to fill data2 is shown here:
private int Data2Index = 0 ; // to be reset before each frame
private void AddIdCountDiffCountAndDifferences(int idcount, int diffcount, int diffstart)
{
    data2[Data2Index++] = (byte)(idcount & 0xFF) ;          // low byte of the int
    data2[Data2Index++] = (byte)((idcount >> 8) & 0xFF) ;   // second byte of the int
    data2[Data2Index++] = (byte)(diffcount & 0xFF) ;        // low byte of the int
    data2[Data2Index++] = (byte)((diffcount >> 8) & 0xFF) ; // second byte of the int
    for (int i = 0; i < diffcount; i++)
    {
        data2[Data2Index++] = data1[diffstart + Bpp * i] ;
        data2[Data2Index++] = data1[diffstart + Bpp * i + 1] ;
        data2[Data2Index++] = data1[diffstart + Bpp * i + 2] ;
    }
}
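For completeness, one possible sketch of that decoding side (not part of the original answer; the method name and framing are illustrative and assume the encoding above: 2 bytes of identical count, 2 bytes of difference count, then 3 RGB bytes per different pixel):
private void ApplyDifferences(byte[] previous, byte[] encoded, int encodedLength, byte[] current, int Bpp)
{
    // Start from the previous frame; pixels in "identical" runs stay as they were.
    Buffer.BlockCopy(previous, 0, current, 0, previous.Length);
    int src = 0; // read position in the encoded buffer
    int dst = 0; // write position (in bytes) in the current frame
    while (src + 4 <= encodedLength)
    {
        int idCount = encoded[src] | (encoded[src + 1] << 8);       // identical pixels
        int diffCount = encoded[src + 2] | (encoded[src + 3] << 8); // different pixels
        src += 4;
        dst += idCount * Bpp; // skip the unchanged pixels
        for (int p = 0; p < diffCount; p++)
        {   // copy the changed pixels (3 RGB bytes each in the encoded stream)
            current[dst] = encoded[src];
            current[dst + 1] = encoded[src + 1];
            current[dst + 2] = encoded[src + 2];
            src += 3;
            dst += Bpp;
        }
    }
}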

32-bit Grayscale Tiff with floating point pixel values to array using LibTIFF.NET C#

I just started using LibTIFF.NET in my C# application to read TIFF images as heightmaps obtained from ArcGIS servers. All I need is to populate an array with the image's pixel values for terrain generation based on smooth gradients. The image is an LZW-compressed 32-bit grayscale TIFF with floating point pixel values representing elevation in meters.
I've been struggling for some days now to get the right values back, but all I get are "0" values, as if it were a totally black or white image!
Here's the code so far: (Updated - Read Update 1)
using (Tiff inputImage = Tiff.Open(fileName, "r"))
{
int width = inputImage.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = inputImage.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int bytesPerPixel = 4;
int count = (int)inputImage.RawTileSize(0); //Has to be: "width * height * bytesPerPixel" ?
int resolution = (int)Math.Sqrt(count);
byte[] inputImageData = new byte[count]; //Has to be: byte[] inputImageData = new byte[width * height * bytesPerPixel];
int offset = 0;
for (int i = 0; i < inputImage.NumberOfTiles(); i++)
{
offset += inputImage.ReadEncodedTile(i, inputImageData, offset, (int)inputImage.RawTileSize(i));
}
float[,] outputImageData = new float[resolution, resolution]; //Has to be: float[,] outputImageData = new float[width * height];
int length = inputImageData.Length;
Buffer.BlockCopy(inputImageData, 0, outputImageData, 0, length);
using (StreamWriter sr = new StreamWriter(fileName.Replace(".tif", ".txt"))) {
string row = "";
for(int i = 0; i < resolution; i++) { //Change "resolution" to "width" in order to have correct array size
for(int j = 0; j < resolution; j++) { //Change "resolution" to "height" in order to have correct array size
row += outputImageData[i, j] + " ";
}
sr.Write(row.Remove(row.Length - 1) + Environment.NewLine);
row = "";
}
}
}
Sample Files & Results: http://terraunity.com/SampleElevationTiff_Results.zip
I've already searched everywhere on the internet and couldn't find a solution to this specific issue, so I'd really appreciate any help, which would make it useful for others too.
Update 1:
I changed the code based on Antti Leppänen's answer but got weird results, which seem to be a bug, or am I missing something? Please see the uploaded zip file for the results with new 32x32 TIFF images here:
http://terraunity.com/SampleElevationTiff_Results.zip
Results:
LZW Compressed: RawStripSize = ArraySize = 3081 = 55x55 grid
Uncompressed: RawStripSize = ArraySize = 65536 = 256x256 grid
Has to be: RawStripSize = ArraySize = 4096 = 32x32 grid
As you can see in the results, LibTIFF skips some rows and gives irrelevant orderings, and it gets even worse if the image size is not a power of 2!
Your example file seems to be a tiled TIFF, not a stripped one. The console says:
ElevationMap.tif: Can not read scanlines from a tiled image
I changed your code to read tiles. This way it seems to be reading data.
for (int i = 0; i < inputImage.NumberOfTiles(); i++)
{
offset += inputImage.ReadEncodedTile(i, inputImageData, offset, (int)inputImage.RawTileSize(i));
}
I know this may be late, but I made the same mistake recently and found the solution, so it could be helpful. The mistake is in the count parameter of the function Tiff.ReadEncodedTile(tile, buffer, offset, count). It must be the decompressed byte size, not the compressed byte size. That's why you don't have all the information: you are not saving the whole data in your buffer. See how-to-translate-tiff-readencodedtile-to-elevation-terrain-matrix-from-height.
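As a rough sketch of that fix applied to the question's loop, assuming LibTIFF.NET exposes Tiff.TileSize() (the decompressed size of one tile in bytes):
int tileSize = inputImage.TileSize(); // decompressed size of one tile, in bytes
byte[] tileBuffer = new byte[tileSize];
for (int i = 0; i < inputImage.NumberOfTiles(); i++)
{
    // count must be the decompressed size, not RawTileSize(i)
    inputImage.ReadEncodedTile(i, tileBuffer, 0, tileSize);
    // ... copy tileBuffer into the right region of the output array ...
}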
A fast method to read a floating point TIFF:
public static unsafe float[,] ReadTiff(Tiff image)
{
const int pixelStride = 4; // bytes per pixel
int imageWidth = image.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int imageHeight = image.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
float[,] result = new float[imageWidth, imageHeight];
int tileCount = image.NumberOfTiles();
int tileWidth = image.GetField(TiffTag.TILEWIDTH)[0].ToInt();
int tileHeight = image.GetField(TiffTag.TILELENGTH)[0].ToInt();
int tileStride = (imageWidth + tileWidth - 1) / tileWidth;
int bufferSize = tileWidth * tileHeight * pixelStride;
byte[] buffer = new byte[bufferSize];
fixed (byte* bufferPtr = buffer)
{
float* array = (float*)bufferPtr;
for (int t = 0; t < tileCount; t++)
{
image.ReadEncodedTile(t, buffer, 0, buffer.Length);
int x = tileWidth * (t % tileStride);
int y = tileHeight * (t / tileStride);
var copyWidth = Math.Min(tileWidth, imageWidth - x);
var copyHeight = Math.Min(tileHeight, imageHeight - y);
for (int j = 0; j < copyHeight; j++)
{
for (int i = 0; i < copyWidth; i++)
{
result[x + i, y + j] = array[j * tileWidth + i];
}
}
}
}
return result;
}
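A possible usage example (a sketch; it assumes the project allows unsafe code, since the method above uses pointers):
float[,] heights;
using (Tiff image = Tiff.Open(fileName, "r"))
{
    heights = ReadTiff(image); // heights[x, y] then holds the pixel values, e.g. elevation in meters
}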

Unexpected alpha values in DXT5 decompression

I am decompressing an image compressed with DXT5. According to the description, each 16-byte block starts with 2 bytes of alpha data. If I have a look at my file in a hex editor, I find that 90% of the image has an alpha value of less than 0.04 (the value in the file is < 10), which should not be the case.
If I render the image with OpenGL and let glCompressedTexImage do the work, it looks OK. With my code the image is transparent, as I would expect from those small alpha values. The code I use to generate the alpha values looks like this:
byte alpha1 = reader.ReadByte();
byte alpha2 = reader.ReadByte();
uint[] alphaValues = new uint[8]
{
alpha1,
alpha2,
0, 0, 0, 0, 0, 0
};
if (alpha1 > alpha2)
{
for (int i = 0; i < 6; ++i)
{
byte value = (byte)(((6.0f - i) * alpha1 + (1.0f + i) * alpha2) / 7.0f);
alphaValues[i + 2] = value;
}
}
else
{
for (int i = 0; i < 4; ++i)
{
byte value = (byte)(((4.0f - i) * alpha1 + (1.0f + i) * alpha2) / 5.0f);
alphaValues[i + 2] = value;
}
alphaValues[6] = 0;
alphaValues[7] = 255;
}
alpha1 and alpha2 are usually the same (values like 8 or 3 or 9; the maximum alpha value in the image, however, is 96).
The colors, however, are OK. If I render the image without alpha values it looks perfect; enabling alpha makes it transparent.
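For context, and not part of the question itself: in the usual DXT5 layout the two endpoint bytes are followed by 6 bytes of 3-bit indices, one per pixel, that select from the alphaValues table built above. A minimal sketch of that selection step, reusing the question's reader and alphaValues:
// Read the 48 bits of alpha indices that follow alpha1 and alpha2.
ulong alphaBits = 0;
for (int i = 0; i < 6; i++)
    alphaBits |= (ulong)reader.ReadByte() << (8 * i);

for (int p = 0; p < 16; p++) // 16 pixels per 4x4 block
{
    int index = (int)((alphaBits >> (3 * p)) & 0x7); // 3-bit index per pixel
    uint alpha = alphaValues[index];
    // ... combine alpha with the decoded color for pixel p ...
}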
