How do I calculate the position of a specific pixel inside a graphics stream? The stream is a screenshot of my primary monitor.
Let's say I want the positions of the pixels at:
0, 0
0, 10
10, 0
10, 10
I'm using the DirectX 9 SDK.
Here is part of the code I'm using (from this tutorial) to calculate the positions:
const int Bpp = 4;
int o = 20;
int m = 8;
int screenWidthMinusM = Screen.PrimaryScreen.Bounds.Width - m;
int screenHeightMinusM = Screen.PrimaryScreen.Bounds.Height - m;
int bx = (screenWidthMinusM - m) / 3 + m;
int by = (screenHeightMinusM - m) / 3 + m;
int bx2 = (screenWidthMinusM - m) * 2 / 3 + m;
int by2 = (screenHeightMinusM - m) * 2 / 3 + m;
long x, y;
long pos;
y = m;
for (x = m; x < screenWidthMinusM; x += o)
{
    pos = (y * Screen.PrimaryScreen.Bounds.Width + x) * Bpp;
    if (x < bx) tlPos.Add(pos);
    else if (x > bx && x < bx2) tPos.Add(pos);
    else if (x > bx2) trPos.Add(pos);
}
but this returns a collection with numbers ranging from about 53,000 to about 7,000,000 (from another similar method). After that, the color is extracted:
Surface s = sc.CaptureScreen();
GraphicsStream gs = s.LockRectangle(LockFlags.None);
byte[] bu = new byte[4];
foreach (long pos in positions)
{
    gs.Position = pos;
    gs.Read(bu, 0, 4);
    r += bu[2];
    g += bu[1];
    b += bu[0];
    i++;
}
I need to make my own collection containing these positions.
I assume the graphics stream (doc) holds all the pixels of your screenshot and you want the index within the stream of a given pixel position (x, y).
In that case it should simply be: Index = (x + y * screenWidth) * SizeOf(Pixel)
For reading the data you should combine Seek with Read.
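For example, here is a minimal sketch of building the position collection for the four sample pixels and reading their colors, assuming a 32-bit A8R8G8B8 surface (4 bytes per pixel, B-G-R-A byte order), a pitch equal to width * 4 (no row padding), and the same Surface/GraphicsStream objects from your code:
// Sketch only: Screen is from System.Windows.Forms, Point/Color from System.Drawing,
// Surface/GraphicsStream/LockFlags from Managed DirectX 9; sc is the capture object from the question.
const int Bpp = 4;
int screenWidth = Screen.PrimaryScreen.Bounds.Width;
// the four sample pixels from the question, as (x, y)
var samplePixels = new[] { new Point(0, 0), new Point(0, 10), new Point(10, 0), new Point(10, 10) };
var positions = new List<long>();
foreach (Point px in samplePixels)
{
    // Index = (x + y * screenWidth) * SizeOf(Pixel)
    positions.Add((px.X + (long)px.Y * screenWidth) * Bpp);
}
Surface s = sc.CaptureScreen();
GraphicsStream gs = s.LockRectangle(LockFlags.None);
byte[] bu = new byte[4];
foreach (long pos in positions)
{
    gs.Position = pos;   // seek to the pixel
    gs.Read(bu, 0, 4);   // read B, G, R, A
    Color c = Color.FromArgb(bu[2], bu[1], bu[0]);   // use c, e.g. accumulate r/g/b as in the question
}
s.UnlockRectangle();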
OK, I found the answer; it's in my code. I don't exactly understand why, but:
//x - screen width
//y - screen height
Bpp = 4; // ?
gs.Position = (x * Screen.PrimaryScreen.Bounds.Width + y) * Bpp;
I'm looping over a List pointtocolor that contains over 50,000 pixel coordinates, and then trying to color those pixels on the Bitmap bmpBackClouds.
The first time, I tried with the line:
if ((int)p + (int)(x + y) < bD.Stride * bD.Height)
but then it never stepped in and never executed the p[1] = p[2] = (byte)255; lines.
So now I'm not using this if, and all the lines execute.
But in the end I get the same Bitmap as the original, as if nothing had been colored.
float x, y;
bD = bmpBackClouds.LockBits(
new System.Drawing.Rectangle(0, 0, bmpBackClouds.Width, bmpBackClouds.Height),
System.Drawing.Imaging.ImageLockMode.ReadWrite,
System.Drawing.Imaging.PixelFormat.Format32bppArgb);
IntPtr s0 = bD.Scan0;
unsafe
{
    byte* p;
    byte* pBU = (byte*)(void*)s0;
    for (int i = 0; i < pointtocolor.Count; i++)
    {
        p = (byte*)(void*)s0;
        x = pointtocolor[i].X * (float)currentFactor;
        y = pointtocolor[i].Y * (float)currentFactor;
        if ((int)x >= bmpBackClouds.Width || (int)y >= bmpBackClouds.Height)
        {
            continue;
        }
        x = (int)(y * (float)bD.Stride);
        y = (int)(x * 4F);
        p += (int)(x + y);
        if ((int)p + (int)(x + y) < bD.Stride * bD.Height)
        {
            if (x + y > 3)
                p -= (p - pBU) % 4;
            p[1] = p[2] = (byte)255;
            p[0] = (byte)0;
            p[3] = (byte)255;
        }
    }
}
bmpBackClouds.UnlockBits(bD);
This is the unsafe code part:
unsafe
{
    byte* p;
    byte* pBU = (byte*)(void*)s0;
    for (int i = 0; i < pointtocolor.Count; i++)
    {
        // set pointer to the beginning
        p = (byte*)(void*)s0;
        x = pointtocolor[i].X * (float)currentFactor;
        y = pointtocolor[i].Y * (float)currentFactor;
        // check if point is inside bmp
        if ((int)x >= bmpBackClouds.Width || (int)y >= bmpBackClouds.Height)
        {
            continue;
        }
        // Add offset where the point is. The formula: position = Y * stride + X * 4
        x = (int)(y * (float)bD.Stride);
        y = (int)(x * 4F);
        p += (int)(x + y);
        // here check whether the pointer is at a correct position
        if (x + y > 3)
            p -= (p - pBU) % 4;
        // set yellow color
        p[1] = p[2] = (byte)255;
        p[0] = (byte)0;
        p[3] = (byte)255;
    }
}
Hans's solution is working. I just have a small question: the image I get on the hard disk is colored in the right shape, but it's mirrored or reversed (not sure what to call it) compared to what it should be.
On the screenshot, the left side is what I got on the hard disk, the bitmap I wrote to with LockBits.
On the right is my program and a rectangle I drew in red; this rectangle should be the colored area on the Bitmap. On the Bitmap I do see that rectangle area, but it seems mirrored or reversed.
The question is: is there something wrong with the LockBits code, or does it seem more like something in my other code?
(p - pBU) % 4 always produces zero. The difference is always a multiple of 4, because each pixel holds 4 bytes of information, so the p pointer jumps in multiples of 4 from the beginning, where pBU points. If you want to test whether the points are inside the rectangle, do this:
GraphicsPath gp = new GraphicsPath();
gp.AddRectangle(new RectangleF(?, ?, ?, ?)); // fill in the values of your rectangle
for (int i = 0; i < pointtocolor.Count; i++)
{
    if (gp.IsVisible(pointtocolor[i].X * (float)currentFactor, pointtocolor[i].Y * (float)currentFactor))
    {
        // is inside
    }
}
EDIT
When scanning a bitmap, using bD.Stride as the row offset with an int pointer is wrong, because the stride is the byte count of a row, not the pixel count:
int *p;
p = (int*)(void*)s0;
//x, y the position coordinates
p += y * bD.Stride + x * 4; //wrong
byte *p;
p = (byte*)(void*)s0;
//x, y the position coordinates
p += y * bD.Stride + x * 4; //correct
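If you do want to walk the bitmap with an int pointer (one int per 32bpp pixel), a small sketch of the adjusted offset, assuming the bitmap was locked as Format32bppArgb so the stride is divisible by 4:
// Sketch: with an int pointer there is one int per pixel,
// so the row offset must be counted in ints (Stride / 4), not in bytes.
int* pInt = (int*)(void*)s0;
pInt += y * (bD.Stride / 4) + x;      // x, y are the pixel coordinates
*pInt = unchecked((int)0xFFFFFF00);   // ARGB: opaque yellow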
The error is here
x = (int)(y * (float)bD.Stride);
y = (int)(x * 4F);
You are using x to set y after x has already been overwritten!
The correct code is:
float fx, fy;
int x, y;
bD = bmpBackClouds.LockBits(
new System.Drawing.Rectangle(0, 0, bmpBackClouds.Width, bmpBackClouds.Height),
System.Drawing.Imaging.ImageLockMode.ReadWrite,
System.Drawing.Imaging.PixelFormat.Format32bppArgb);
IntPtr s0 = bD.Scan0;
unsafe
{
    byte* p;
    //byte* pBU = (byte*)(void*)s0;
    for (int i = 0; i < pointtocolor.Count; i++)
    {
        p = (byte*)(void*)s0;
        fx = pointtocolor[i].X * (float)currentFactor;
        fy = pointtocolor[i].Y * (float)currentFactor;
        if ((int)fx >= bmpBackClouds.Width || (int)fy >= bmpBackClouds.Height)
        {
            continue;
        }
        x = (int)fy * bD.Stride;
        y = (int)fx * 4;
        p += (x + y);
        p[1] = p[2] = (byte)255;
        p[0] = (byte)0;
        p[3] = (byte)255;
    }
}
bmpBackClouds.UnlockBits(bD);
valter
I have a fisheye image, and what I am trying to do is convert it into a landscape image.
The code I have written does convert it into a landscape image, but when it comes to dividing it into different parts, it adds black areas to them.
Can anyone help?
using System;
using System.Drawing;

namespace fisheye_image
{
    class Program
    {
        static void Main()
        {
            // assume the source image is square, and its width has an even number of pixels
            Bitmap bm = (Bitmap)Image.FromFile(@"C:\Users\abc\Desktop\lillestromfisheye.jpg");
            int l = bm.Width / 2;
            int i, j;
            int x, y;
            double radius, theta;
            // calculated indices in Cartesian coordinates with trailing decimals
            double fTrueX, fTrueY;
            int iSourceWidth = (2 * l);
            int run = 0, lastWidth = 1;
            while (run < 4)
            {
                Bitmap bmDestination = new Bitmap(lastWidth * l, l);
                for (i = 0; i < bmDestination.Height; ++i)
                {
                    radius = (double)(l - i);
                    for (j = run * l; j < lastWidth * l; ++j)
                    {
                        // theta = 2.0 * Math.PI * (double)(4.0 * l - j) / (double)(4.0 * l);
                        theta = 2.0 * Math.PI * (double)(-j) / (double)(4.0 * l);
                        fTrueX = radius * Math.Cos(theta);
                        fTrueY = radius * Math.Sin(theta);
                        // "normal" mode
                        x = (int)(Math.Round(fTrueX)) + l;
                        y = l - (int)(Math.Round(fTrueY));
                        // check bounds
                        if (x >= 0 && x < iSourceWidth && y >= 0 && y < iSourceWidth)
                        {
                            bmDestination.SetPixel(j, i, bm.GetPixel(x, y));
                        }
                    }
                }
                bmDestination.Save(@"C:\Users\abc\Desktop\fisheyelandscape" + run.ToString() + ".jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
                run++;
                lastWidth++;
            }
        }
    }
}
Below are the original and processed images
We just need to add another variable, k (declared alongside i and j), in the inner for loop, going from zero to the width of the destination image:
while (run < 4)
{
    Bitmap bmDestination = new Bitmap(l, l);
    for (i = 0; i < bmDestination.Height; ++i)
    {
        radius = (double)(l - i);
        for (j = run * l, k = 0; j < lastWidth * l || k < bmDestination.Width; ++j, ++k)
        {
            // theta = 2.0 * Math.PI * (double)(4.0 * l - j) / (double)(4.0 * l);
            theta = 2.0 * Math.PI * (double)(-j) / (double)(4.0 * l);
            fTrueX = radius * Math.Cos(theta);
            fTrueY = radius * Math.Sin(theta);
            // "normal" mode
            x = (int)(Math.Round(fTrueX)) + l;
            y = l - (int)(Math.Round(fTrueY));
            // check bounds
            if (x >= 0 && x < iSourceWidth && y >= 0 && y < iSourceWidth)
            {
                bmDestination.SetPixel(k, i, bm.GetPixel(x, y));
            }
        }
    }
    // the Save call and the run++ / lastWidth++ increments stay the same as in the question
}
I'm trying to color an image based on the movement of its pixels (i.e. the vector direction) in Emgu CV. I have managed to calculate the dense optical flow for my video stream. I have used this:
OpticalFlow.Farneback(prev,NextFrame,velx,vely,0.5,1,1,2,5,1.1,Emgu.CV.CvEnum.OPTICALFLOW_FARNEBACK_FLAG.FARNEBACK_GAUSSIAN);
The variables vely and velx contain the velocities in the vertical and horizontal directions. Does anyone know how to map colors to these? There are many algorithms that calculate the dense flow; HS can also be used, but I'm not sure which to use.
Any solution would be really appreciated.
EDIT:
Optical Flow Color Map in OpenCV
This is the same thing that I wanted. Since I'm using Emgu CV, I tried to convert this code to C#, but I can't work out how to pass the dense flow to the "colorflow" function.
public void colorflow(MCvMat imgColor)
{
    MCvMat imgHsv = new MCvMat();
    double max_s = 0;
    double[] hsv_ptr = new double[3000];
    IntPtr[] color_ptr = new IntPtr[3000];
    int r = 0, g = 0, b = 0;
    double angle = 0;
    double h = 0, s = 0, v = 0;
    double deltaX = 0, deltaY = 0;
    int x = 0, y = 0;
    for (y = 0; y < imgColor.rows; y++)
    {
        for (x = 0; x < imgColor.cols; x++)
        {
            PointF fxy = new PointF(y, x);
            deltaX = fxy.X;
            deltaY = fxy.Y;
            angle = Math.Atan2(deltaX, deltaY);
            if (angle < 0)
                angle += 2 * Math.PI;
            hsv_ptr[3 * x] = angle * 180 / Math.PI;
            hsv_ptr[3 * x + 1] = Math.Sqrt(deltaX * deltaX + deltaY * deltaY);
            hsv_ptr[3 * x + 2] = 0.9;
            if (hsv_ptr[3 * x + 1] > max_s)
                max_s = hsv_ptr[3 * x + 1];
        }
    }
    for (y = 0; y < imgColor.rows; y++)
    {
        //hsv_ptr=imgHsv.ptr<float>(y);
        //color_ptr=imgColor.ptr<unsigned char>(y);
        for (x = 0; x < imgColor.cols; x++)
        {
            h = hsv_ptr[3 * x];
            s = hsv_ptr[3 * x + 1] / max_s;
            v = hsv_ptr[3 * x + 2];
            //hsv2rgb(h,s,v,r,g,b);
            Color c = ColorFromHSV(h, s, v);
            color_ptr[3 * x] = (IntPtr)c.B;
            color_ptr[3 * x + 1] = (IntPtr)c.G;
            color_ptr[3 * x + 2] = (IntPtr)c.R;
        }
    }
    drawLegendHSV(imgColor, 15, 25, 15);
}
I'm having trouble converting the two commented lines in the code. Can anyone help me with this?
Another thing: the Farneback algorithm gives two images, velx and vely; it does not give the flow as an MCvMat, while the colorflow function takes MCvMat parameters. Did I do anything wrong in the code? Thanks.
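For what it's worth, a minimal sketch of one way to color the frame directly from velx and vely, assuming they are Image<Gray, float> of the frame size and reusing the ColorFromHSV helper from the snippet above (flow angle mapped to hue, normalized magnitude to saturation):
// Sketch only: velx/vely are assumed to be Image<Gray, float> filled by OpticalFlow.Farneback.
Bitmap result = new Bitmap(velx.Width, velx.Height);
double maxMag = 1e-6;
// first pass: find the largest magnitude so saturation can be normalized
for (int y = 0; y < velx.Height; y++)
    for (int x = 0; x < velx.Width; x++)
    {
        double dx = velx.Data[y, x, 0];
        double dy = vely.Data[y, x, 0];
        maxMag = Math.Max(maxMag, Math.Sqrt(dx * dx + dy * dy));
    }
// second pass: angle -> hue, magnitude -> saturation, fixed value
for (int y = 0; y < velx.Height; y++)
    for (int x = 0; x < velx.Width; x++)
    {
        double dx = velx.Data[y, x, 0];
        double dy = vely.Data[y, x, 0];
        double angle = Math.Atan2(dy, dx);
        if (angle < 0) angle += 2 * Math.PI;
        double hue = angle * 180.0 / Math.PI;                  // 0..360
        double sat = Math.Sqrt(dx * dx + dy * dy) / maxMag;    // 0..1
        result.SetPixel(x, y, ColorFromHSV(hue, sat, 0.9));
    }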
I have this C# code to iterate through a grid in an inward spiral, like this:
1 2 3
8 9 4
7 6 5
Here is the code, but something is wrong with it: for some reason it takes much longer than expected to compute. Does anyone know why this is happening?
static void create_spiral_img(int width, int height)
{
    Bitmap img = new Bitmap(width, height);
    Graphics graph = Graphics.FromImage(img);
    int x = 0;
    int y = 0;
    int size = width * height;
    int max = size;
    int count = 1;
    int i, j;
    while (size > 0)
    {
        for (i = y; i <= y + size - 1; i++)
        {
            draw_pixel(count++, x, i, graph);
        }
        for (j = x + 1; j <= x + size - 1; j++)
        {
            draw_pixel(count++, j, y + size - 1, graph);
        }
        for (i = y + size - 2; i >= y; i--)
        {
            draw_pixel(count++, x + size - 1, i, graph);
        }
        for (i = x + size - 2; i >= x + 1; i--)
        {
            draw_pixel(count++, i, y, graph);
        }
        x = x + 1;
        y = y + 1;
        size = size - 2;
        Console.Write(100 * ((float)(count) / (float)max) + "% ");
    }
    graph.Dispose();
    img.Save("./" + width + "x" + height + "_spiril.png", System.Drawing.Imaging.ImageFormat.Png);
    img.Dispose();
}
Assuming a square image (width = height), it looks like you've got an O(x^4) implementation, and that's going to be hideously slow.
I would recommend trying to drop it down to O(x^2): instead of drawing it spirally, rewrite your algorithm to draw it rectangularly, that is, go by rows and columns, calculating what each pixel should be, as in the sketch below.
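A minimal sketch of that idea (spiral_index and create_spiral_img_fast are hypothetical names; draw_pixel is assumed to take the count followed by the x and y coordinates, matching the 1 2 3 / 8 9 4 / 7 6 5 layout in your question):
// Computes, for each (px, py), its 1-based position along the clockwise inward
// spiral from the question, so the image can be filled row by row in O(width * height).
static int spiral_index(int px, int py, int width, int height)
{
    int ring = Math.Min(Math.Min(px, py), Math.Min(width - 1 - px, height - 1 - py));
    int w = width - 2 * ring;              // size of the rectangle that starts at this ring
    int h = height - 2 * ring;
    int before = width * height - w * h;   // pixels consumed by the outer rings
    int offset;
    if (py == ring)                        // top edge, left to right
        offset = px - ring;
    else if (px == ring + w - 1)           // right edge, top to bottom
        offset = (w - 1) + (py - ring);
    else if (py == ring + h - 1)           // bottom edge, right to left
        offset = (w - 1) + (h - 1) + (ring + w - 1 - px);
    else                                   // left edge, bottom to top
        offset = 2 * (w - 1) + (h - 1) + (ring + h - 1 - py);
    return before + offset + 1;
}

static void create_spiral_img_fast(int width, int height, Graphics graph)
{
    for (int py = 0; py < height; py++)
        for (int px = 0; px < width; px++)
            draw_pixel(spiral_index(px, py, width, height), px, py, graph);
}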
Assuming that
draw_pixel(c, x, y, g)
draws a point of color c at coordinates (x, y) in the graphics context g, you're going way too far. You're doing
for (i = y; i <= y + size - 1; i++)
to print a line that should have length width, but you're printing a line of length size.
Perhaps I didn't understand your algorithm; if this doesn't make sense, can you please explain the semantics of draw_pixel?
I'm updating a plugin for Paint.NET that I made some months ago. It's called Simulate Color Depth, and it reduces the number of colors in the image to the chosen BPP. It has had dithering included for a long time, but never ordered dithering, and I thought that would be a nice addition. So I searched the internet for something useful, ended up on this wiki page, http://en.wikipedia.org/wiki/Ordered_dithering, and tried to do as written in the pseudocode:
for (int y = 0; y < image.Height; y++)
{
    for (int x = 0; x < image.Width; x++)
    {
        Color color = image.GetPixel(x, y);
        color.R = color.R + bayer8x8[x % 8, y % 8];
        color.G = color.G + bayer8x8[x % 8, y % 8];
        color.B = color.B + bayer8x8[x % 8, y % 8];
        image.SetPixel(x, y, GetClosestColor(color, bitdepth));
    }
}
but the result is way too bright. So I checked the wiki page again and saw that there's a "1/65" to the right of the threshold map, which got me thinking of both error diffusion (yes, I know, weird, huh?) and of dividing the value I get from bayer8x8[x % 8, y % 8] by 65 and then multiplying the color channels by it. But the results were either messy or still too bright (as I remember it); in any case they were nothing like what I have seen elsewhere: either too bright, too high-contrast, or too messy. I haven't found anything really useful searching the internet, so does anyone know how I can get this Bayer dithering working properly?
Thanks in advance, Cookies
I don't think there's anything wrong with your original algorithm (from Wikipedia). The brightness disparity is probably an artifact of monitor gamma. See the appendix about gamma correction in this article about Joel Yliluoma's positional dithering algorithm (http://bisqwit.iki.fi/story/howto/dither/jy/#Appendix%201GammaCorrection) for an explanation of the effect (NB: the page is quite graphics-heavy).
Incidentally, perhaps the (apparently public-domain) algorithm detailed in that article may be the solution to your problem.
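To illustrate the gamma point, a rough sketch (my own, not the article's code): do the dithering arithmetic on linear-light values instead of the gamma-encoded channel values, assuming a display gamma of about 2.2. 'step' and QuantizeToNearestLevel are hypothetical placeholders for the spacing between output levels and the quantizer, both expressed in linear light:
static double ToLinear(byte c) { return Math.Pow(c / 255.0, 2.2); }
static byte ToEncoded(double v)
{
    v = Math.Max(0.0, Math.Min(1.0, v));
    return (byte)Math.Round(Math.Pow(v, 1.0 / 2.2) * 255.0);
}
// per pixel, per channel:
// double r = ToLinear(color.R) + (bayer8x8[x % 8, y % 8] / 65.0) * step;
// byte quantizedR = ToEncoded(QuantizeToNearestLevel(r));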
Try this:
color.R = color.R + bayer8x8[x % 8, y % 8] * GAP / 65;
Here GAP should be the distance between the two nearest color thresholds. This depends on the bits per pixel.
For example, if you are converting the image to use 4 bits for the red component of each pixel, there are 16 levels of red total. They are: R=0, R=17, R=34, ... R=255. So GAP would be 17.
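As a quick sketch of that suggestion (bitsPerChannel is a placeholder; bayer8x8, GetClosestColor, bitdepth and image come from the question, and 65 is the divisor from the Wikipedia threshold map):
int levels = 1 << bitsPerChannel;    // e.g. 4 bits -> 16 levels per channel
int gap = 255 / (levels - 1);        // e.g. 16 levels -> 17

int r = color.R + bayer8x8[x % 8, y % 8] * gap / 65;
int g = color.G + bayer8x8[x % 8, y % 8] * gap / 65;
int b = color.B + bayer8x8[x % 8, y % 8] * gap / 65;

Color dithered = Color.FromArgb(Math.Min(255, r), Math.Min(255, g), Math.Min(255, b));
image.SetPixel(x, y, GetClosestColor(dithered, bitdepth));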
I found a solution. levels is the number of color levels the destination image should have, and d is the divisor (this is adapted from my code, which uses Paint.NET classes, to simple bitmap editing with GetPixel and SetPixel):
private void ProcessDither(int levels, int d, Bitmap image)
{
    levels -= 1;
    double scale = (1.0 / 255d);
    int t, l;
    for (int y = rect.Top; y < rect.Bottom; y++)
    {
        for (int x = rect.Left; x < rect.Right; x++)
        {
            Color cp = image.GetPixel(x, y);
            int threshold = matrix[y % rows][x % cols];

            t = (int)(scale * cp.R * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte r = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.G * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte g = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.B * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte b = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            // System.Drawing.Color is immutable, so build a new color instead of assigning to cp.R/G/B
            image.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }
}
private byte Clamp(int val)
{
    return (byte)(val < 0 ? 0 : val > 255 ? 255 : val);
}