A function like "glReadPixels" in DirectX / SharpDX - c#

I'm searching for a way to read a pixel's color at the mouse position. In OpenGL this was done by calling "glReadPixels" after drawing the scene (or parts of it). I want to build a simple color-picking routine that runs in the background, for identifying shapes/lines in 3D space.
So, is there any equivalent method/function/suggestion for doing the same in SharpDX (DirectX10 / DirectX11)?

This is perfectly possible with Direct3D11: simply follow these steps:
Use DeviceContext.CopySubresourceRegion to copy the region you need from the source texture into a staging texture (sized to the pixel area you want to read back, same format, but created with ResourceUsage.Staging).
Retrieve the pixel from this staging texture using DeviceContext.MapSubresource/UnmapSubresource.
There is plenty of discussion about this topic around the net (for example: "Reading one pixel from texture to CPU in DX11").
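For illustration, here is a rough SharpDX sketch of those two steps. The method name, the 1x1 region and the assumption of a 4-byte-per-pixel format are mine; error handling and resource disposal beyond the staging texture are omitted.
// Rough sketch: copy the pixel under the mouse into a 1x1 staging texture and map it.
// (Types come from SharpDX.Direct3D11; Marshal is System.Runtime.InteropServices.)
byte[] ReadPixel(Device device, DeviceContext context, Texture2D source, int x, int y)
{
    var desc = new Texture2DDescription
    {
        Width = 1,
        Height = 1,
        MipLevels = 1,
        ArraySize = 1,
        Format = source.Description.Format,                      // same format as the source
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        Usage = ResourceUsage.Staging,                           // CPU-readable
        BindFlags = BindFlags.None,
        CpuAccessFlags = CpuAccessFlags.Read
    };

    using (var staging = new Texture2D(device, desc))
    {
        // Copy only the 1x1 region at (x, y) into the staging texture.
        var region = new ResourceRegion(x, y, 0, x + 1, y + 1, 1);
        context.CopySubresourceRegion(source, 0, region, staging, 0);

        // Map the staging texture and read the raw pixel (assumes 4 bytes per pixel).
        var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
        var pixel = new byte[4];
        Marshal.Copy(box.DataPointer, pixel, 0, 4);
        context.UnmapSubresource(staging, 0);
        return pixel;
    }
}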

Another option is to use a small compute shader, like :
Texture2D<float4> TextureInput : register(t0);
StructuredBuffer<float2> UVBuffer : register(t1);
RWStructuredBuffer<float4> RWColorBuffer : register(u0);
SamplerState Sampler : register(s0);

[numthreads(1, 1, 1)]
void CSGetPixels(uint3 DTid : SV_DispatchThreadID)
{
    float4 c = TextureInput.SampleLevel(Sampler, UVBuffer[DTid.x].xy, 0);
    RWColorBuffer[DTid.x] = c;
}
It gives you the advantage of being a bit more "format agnostic".
The process is then as follows:
Create a small structured buffer of float2 UVs (pixel position divided by texture size; don't forget to flip the Y axis if needed). Copy the pixel positions you want to sample into this buffer.
Create a writeable (unordered access) buffer and a staging buffer of float4, with the same element count as your UV buffer.
Bind everything and Dispatch.
Copy the writeable buffer into the staging one.
Map the staging buffer and read the float4 data on the CPU.
Please note that I omitted thread-group optimization/checks in the compute shader for simplicity.
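For reference, a rough SharpDX sketch of those steps for a single requested pixel. It assumes an existing device, context, a compiled ComputeShader csGetPixels (the shader above), a ShaderResourceView textureView over the source texture and a SamplerState sampler; every name here is illustrative, Buffer is SharpDX.Direct3D11.Buffer, and resource disposal is omitted.
// One float2 UV per pixel to sample (pixel position / texture size; flip Y here if needed).
Vector2[] uvs = { new Vector2(mouseX / (float)textureWidth, mouseY / (float)textureHeight) };

// Structured UV input buffer (t1) with a shader resource view.
var uvBuffer = Buffer.Create(device, BindFlags.ShaderResource, uvs,
    optionFlags: ResourceOptionFlags.BufferStructured, structureByteStride: 8);
var uvView = new ShaderResourceView(device, uvBuffer);

// Writeable float4 output buffer (u0) plus a staging copy for CPU readback.
var colorBuffer = new Buffer(device, new BufferDescription
{
    SizeInBytes = 16 * uvs.Length,
    BindFlags = BindFlags.UnorderedAccess,
    OptionFlags = ResourceOptionFlags.BufferStructured,
    StructureByteStride = 16,
    Usage = ResourceUsage.Default
});
var colorView = new UnorderedAccessView(device, colorBuffer);

var staging = new Buffer(device, new BufferDescription
{
    SizeInBytes = 16 * uvs.Length,
    Usage = ResourceUsage.Staging,
    CpuAccessFlags = CpuAccessFlags.Read,
    OptionFlags = ResourceOptionFlags.BufferStructured,
    StructureByteStride = 16
});

// Bind everything, dispatch one thread group per requested pixel, then read back.
context.ComputeShader.Set(csGetPixels);
context.ComputeShader.SetShaderResource(0, textureView);   // t0: source texture
context.ComputeShader.SetShaderResource(1, uvView);        // t1: UV buffer
context.ComputeShader.SetUnorderedAccessView(0, colorView);
context.ComputeShader.SetSampler(0, sampler);
context.Dispatch(uvs.Length, 1, 1);

context.CopyResource(colorBuffer, staging);
var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
Vector4 pickedColor = Utilities.Read<Vector4>(box.DataPointer);
context.UnmapSubresource(staging, 0);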

Since you're using C#, my suggestion would be to use GDI+, as there is no direct equivalent of "glReadPixels" in DX. GDI+ offers very easy ways of reading the color of a pixel at your mouse pointer. Refer to stackoverflow.com/questions/1483928.
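If you go the GDI+ route, the usual trick is to copy the single pixel under the cursor from the screen into a 1x1 Bitmap (Cursor comes from System.Windows.Forms; note this reads the composed desktop, not your back buffer):
// Rough GDI+ sketch: grab the pixel under the mouse cursor from the screen.
using (var bmp = new Bitmap(1, 1))
using (var g = Graphics.FromImage(bmp))
{
    g.CopyFromScreen(Cursor.Position, Point.Empty, new Size(1, 1));
    Color picked = bmp.GetPixel(0, 0);
}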
If GDI+ is a no-go, as it isn't very fast, can't you stick to the usual object picking using a ray? You want to identify (I suppose three-dimensional) shapes/lines; these would be easy to pick by casting a ray and checking for intersections.

Related

My application slows down when processing Kinect RGB images with EMGU library

I'm currently using the Kinect SDK with C# (WPF application). I need to get the RGB stream and process the images with the EMGU library.
The problem is that when I try to process the image with EMGU (like converting the image's format and changing the colour of some pixels), the application slows down and takes too long to respond.
I'm using 8 GB RAM / Intel HD Graphics 4000 / Intel Core i7.
Here's my simple code:
http://pastebin.com/5frLRwMN
Please help me :'(
I have run considerably heavier code (blob analysis) with the Kinect on a per frame basis and got away with great performance on a machine of similar configuration as yours, so I believe we can rule out your machine as the problem. I don't see any EMGU code in your sample however. In your example, you loop through 307k pixels with a pair of for loops. This is naturally a costly procedure to run, depending on the code in your loops. As you might expect, GetPixel and SetPixel are very slow methods to execute.
To speed up your code, first turn your bitmap into an Emgu Image<Bgr, byte>. Then read pixels through its Data array, which is indexed [row, column, channel] with the channels stored in B, G, R order:
Byte workImageBlue = image.Data[y, x, 0];
Byte workImageGreen = image.Data[y, x, 1];
...
The third index selects the channel. To set a pixel to another colour, write into the same array:
byte[,,] workIm = image.Data;
workIm[y, x, 0] = 255;
workIm[y, x, 1] = 20;
...
Alternatively, you can set a pixel to a colour directly:
image[y, x] = new Bgr(Color.Blue);
This might be slower, however.
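Putting that together, a rough sketch of the whole loop working on the Data array instead of GetPixel/SetPixel (the Image<Bgr, byte> construction and the colour test are only examples):
// Rough sketch: recolour matching pixels by writing directly into image.Data.
Image<Bgr, byte> image = new Image<Bgr, byte>(colorBitmap);   // e.g. built from the Kinect frame
byte[,,] data = image.Data;                                   // [row, column, channel], B/G/R order

for (int y = 0; y < image.Rows; y++)
{
    for (int x = 0; x < image.Cols; x++)
    {
        byte blue  = data[y, x, 0];
        byte green = data[y, x, 1];
        byte red   = data[y, x, 2];

        // Example test: turn strongly red pixels blue.
        if (red > 200 && green < 80 && blue < 80)
        {
            data[y, x, 0] = 255;
            data[y, x, 1] = 0;
            data[y, x, 2] = 0;
        }
    }
}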
Image processing is always slow. And if you do it at 30 fps, it's normal for your app to hang: real-time image processing is always a challenge. You may need to drop some frames in order to increase performance (...or perhaps switch to native C++ and look for a faster library).

How do I get the tint color from a sprite batch in HLSL

All I want to do is be able to obtain the tint color from the sprite batch draw calls from inside the HLSL shader, in the pixel shader.
I asked something similar to this before, and I was told to have a look at the stock effects for the spritebatch. I looked at these and they were puzzling, but it was apparent that the tint was being passed to the pixel shader with the COLOR0 semantic. However, I tried using this semantic by adding the color parameter as seen below, but it did not work.
float4 PixelShaderFunction(float2 texCoord : TEXCOORD0, float4 inputColor : COLOR0) : COLOR0
I assume I am missing something, probably something to do with the vertex shader? I have no experience with the vertex shader, but all I want to do is be able to get the tint color from the sprite batch.
Does anyone have experience with this? Help is appreciated.
Edit: to be more specific about why it did not work: inputColor was always 0, no matter what I set as the SpriteBatch tint color.
It gets passed through the vertex shader. Note the documentation for semantics and for input modifiers on function arguments.
The basic process looks like this:
[vertex buffer data] -> [vertex shader] -> [pixel shader] -> [output]
The pixel shader only sees what comes out of the vertex shader. At each stage the data is interpreted depending on semantics.
Between the vertex buffer and the vertex shader, the vertex declaration maps binary data into input registers described by semantics.
Between the vertex shader and the pixel shader, outputs are passed to inputs described by matching semantic names. Again, note this list of semantics. (There is also a blend step here, to get input values for pixels in-between vertices.)
Finally, the output of the pixel shader is passed into a fixed set of inputs (with semantics) for the remaining fixed portion of the pipeline: depth-test, blending, etc.
I think what is getting you confused in this case is the fact that there are two ways of outputting data from a shader: you can return it and give your return value a semantic (or many semantics if you return a structure).
But you can also output a value by assigning it to an argument (or simply leaving it in an argument) with either the out or the inout modifier.
This is what the vertex shader for SpriteBatch is doing. Note that it has inout specified for all its parameters - and that it doesn't modify the colour or texture-coordinate parameters at all. Those parameters are simply passed straight through to the pixel shader (with appropriate blending).
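To make that concrete, the stock SpriteBatch shaders boil down to something like the following; this is a from-memory sketch rather than the exact stock file, and MatrixTransform stands for the transform SpriteBatch supplies:
float4x4 MatrixTransform;                 // set by SpriteBatch
sampler TextureSampler : register(s0);

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : POSITION0)
{
    // Colour and texture coordinates are left untouched; inout passes them on
    // to the pixel shader. Only the position is transformed.
    position = mul(position, MatrixTransform);
}

float4 SpritePixelShader(float4 color : COLOR0,
                         float2 texCoord : TEXCOORD0) : COLOR0
{
    // "color" here is the tint passed to SpriteBatch.Draw.
    return tex2D(TextureSampler, texCoord) * color;
}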

Is there an inexpensive way to transfer colour data from a RenderTarget2D on a per frame basis?

Until recently, our game checked collisions by getting the colour data from a section of the background texture of the scene. This worked very well, but as the design changed, we needed to check against multiple textures and it was decided to render these all to a single RenderTarget2D and check collisions on that.
public bool TouchingBlackPixel(GameObject p)
{
    /*
        Calculate the rectangle under the player...
        sourceX, sourceY: position of the top-left corner of the rectangle
        sizeX, sizeY: approximated (cast to int from float) size of the box
    */
    Rectangle sourceRectangle = new Rectangle(sourceX, sourceY,
                                              (int)sizeX, (int)sizeY);
    Color[] retrievedColor = new Color[(int)(sizeX * sizeY)];
    p.tileCurrentlyOn.collisionLayer.GetData(0, sourceRectangle, retrievedColor,
                                             0, retrievedColor.Count());
    /*
        Check collisions
    */
}
The problem that we've been having is that, since moving to the render target, we are experiencing massive reductions in FPS.
From what we've read, it seems as if the issue is that in order to get data from the RenderTarget2D, you need to transfer data from the GPU to the CPU and that this is slow. This is compounded by us needing to run the same function twice (once for each player) and not being able to keep the same data (they may not be on the same tile).
We've tried moving the GetData calls to the tile's Draw function and storing the data in a member array, but this does not seem to have solved the problem (As we are still calling GetData on a tile quite regularly - down from twice every update to once every draw).
Any help which you could give us would be great as the power that this collision system affords us is quite fantastic, but the overhead which render targets have introduced will make it impossible to keep.
The simple answer is: Don't do that.
It sounds like offloading the compositing of your collision data to the GPU was a performance optimisation that didn't work - so the correct course of action would be to revert the change.
You should simply do your collision checks all on the CPU. And I would further suggest that it is probably faster to run your collision algorithm multiple times and determine a collision response by combining the results, rather than compositing the whole scene onto a single layer and running collision detection once.
This is particularly the case if you are using the render target to support transformations before doing collision.
(For simple 2D pixel collision detection, see this sample. If you need support for transformations, see this sample as well.)
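For reference, the heart of the per-pixel check from that sample looks roughly like this; it works entirely on CPU-side Color arrays (one per sprite), so no render-target readback is involved:
// Returns true if any overlapping pixels of the two sprites are both non-transparent.
static bool IntersectPixels(Rectangle rectA, Color[] dataA,
                            Rectangle rectB, Color[] dataB)
{
    // Intersection of the two bounding rectangles.
    int top    = Math.Max(rectA.Top, rectB.Top);
    int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
    int left   = Math.Max(rectA.Left, rectB.Left);
    int right  = Math.Min(rectA.Right, rectB.Right);

    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            Color colorA = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
            Color colorB = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];

            if (colorA.A != 0 && colorB.A != 0)   // both pixels opaque -> collision
                return true;
        }
    }
    return false;
}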
I suppose your tile's collision layer does not change, or at least does not change very frequently, so you can store the colors for each tile in an array or another structure. This would decrease the amount of data that is transferred from the GPU to the CPU, but requires that the additional data stored in RAM is not too big.
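A sketch of that caching idea, assuming a hypothetical Tile class whose collisionLayer render target rarely changes (names are illustrative; evict an entry if its layer is ever redrawn):
// Cache each tile's collision colours so GetData only crosses the GPU -> CPU boundary once per tile.
private readonly Dictionary<Tile, Color[]> collisionCache = new Dictionary<Tile, Color[]>();

private Color[] GetCollisionData(Tile tile)
{
    Color[] data;
    if (!collisionCache.TryGetValue(tile, out data))
    {
        RenderTarget2D layer = tile.collisionLayer;
        data = new Color[layer.Width * layer.Height];
        layer.GetData(data);              // single expensive transfer
        collisionCache[tile] = data;      // reused by every later collision query
    }
    return data;
}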

HLSL: Returning an array of float4?

I have the following function in HLSL:
float4[] GetAllTiles(float type) {
    float4 tiles[128];
    int i = 0;
    [unroll(32768)] for (int x = 0; x < MapWidth; x++) {
        [unroll(32768)] for (int y = 0; y < MapHeight; y++) {
            float2 coordinate = float2(x, y);
            float4 entry = tex2D(MapLayoutSampler, coordinate);
            float entryType = GetTileType(entry);
            if (entryType == type) {
                tiles[i++] = entry;
            }
        }
    }
    return tiles;
}
However, it says that it can't define a return type of float4[]. How do I do this?
In short:
You can't return an array of floats defined in the function in HLSL.
HLSL code (on the GPU) is not like C code on the CPU. It is executed concurrently on many GPU cores.
HLSL code gets executed at every vertex (in the vertex shader) or at every pixel (in the pixel shader). So for every vertex you give the GPU, this code will be executed.
This HLSL introduction should give you a sense of how a few lines of HLSL code get executed on every pixel, producing a new image from the input:
http://www.neatware.com/lbstudio/web/hlsl.html
In your example code, you are looping over the entire map, which is probably not what you want to do at all, as the function you posted will be executed at every pixel (or vertex) given in your input.
Transferring your logic from the CPU to the GPU via HLSL code can be very difficult, as GPUs are not currently designed to do general purpose computation. The task you are trying to do must be very parallel, and if you want it to be fast on the GPU, then you need to express the problem in terms of drawing images, and reading from textures.
Read the tutorial I linked to get started with HLSL :)
Return a structure with the array in it. You can send parameters in as a raw array, but it must be in a structure if it's the return value. :)
What Olhovsky said is true, though: if you're converting from C to DirectCompute, you should have the iterations laid out as separate threads. But don't forget that a GPU also has a lot of serial power, and you need to take that into account in your budget for maximum efficiency. For example, the minimum number of threads you want is the number of cores on your GPU; for a GTX 980, that's 2048.

Convert from 32-BPP to 8-BPP Indexed (C#)

I need to take a full-color JPG image and remap its colors to an indexed palette. The palette will consist of specific colors populated from a database. I need to map each color of the image to its "closest" value in the index. I am sure there are different algorithms for comparing and calculating the "closest" value. I'm looking for C#, .NET managed code libraries only.
(It will be used in a process where we have 120 or so specific colors of buttons, and we want to map any image to those 120 colors to make a collage.)
Nothing in GDI+ will really help you here. It seems indexed images are too backward a technology for Microsoft to care about; all you can do out of the box is read and write indexed image files.
There are usually two steps when quantizing the colors in an image:
1) Find the best palette for the image (Color Quantization)
2) Map the source colors to the found palette (Color Mapping)
From what I understand, you already have the palette in the database, which means the hardest part has been done for you. All you need to do is map the 24-bit colors to the provided palette colors. If you don't have the starting palette, you will have to compute it yourself using a quantization algorithm: Octrees and Median Cut are the best known. Median Cut gives better results but is slower and harder to implement and fine-tune.
To map the colors, the simplest approach in your case is to calculate the distance from your source color to each palette color and pick the nearest:
float ColorDistanceSquared(Color c1, Color c2)
{
    float deltaR = c2.R - c1.R;
    float deltaG = c2.G - c1.G;
    float deltaB = c2.B - c1.B;
    return deltaR * deltaR + deltaG * deltaG + deltaB * deltaB;
}
You can also weight the channels so that blue counts for less. Don't go overboard with it, or it will give horrible results; in particular, the classic 30/59/11 luminance weights won't work at all here:
float ColorDistanceSquared(Color c1, Color c2)
{
    float deltaR = (c2.R - c1.R) * 3;
    float deltaG = (c2.G - c1.G) * 3;
    float deltaB = (c2.B - c1.B) * 2;
    return deltaR * deltaR + deltaG * deltaG + deltaB * deltaB;
}
Call that for each source color against every palette color and keep the minimum. If you cache your results in a map as you go, this will be very fast.
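A sketch of that lookup using the ColorDistanceSquared function above, with a dictionary cache so each distinct source colour is only matched once (Color here is System.Drawing.Color; the palette is assumed to have at most 256 entries):
// Map a source colour to the index of the nearest palette entry, caching results.
static readonly Dictionary<Color, byte> nearestCache = new Dictionary<Color, byte>();

static byte NearestPaletteIndex(Color source, Color[] palette)
{
    byte index;
    if (nearestCache.TryGetValue(source, out index))
        return index;

    float best = float.MaxValue;
    for (int i = 0; i < palette.Length; i++)
    {
        float d = ColorDistanceSquared(source, palette[i]);
        if (d < best)
        {
            best = d;
            index = (byte)i;
        }
    }
    nearestCache[source] = index;
    return index;
}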
Also, the source color will rarely match a palette color closely enough to avoid banding, flat areas and loss of detail in your image. To avoid that, you can use dithering. The simplest algorithm, and the one that gives the best results, is Error Diffusion Dithering.
Once you have mapped your colors, you will have to manually lock a Bitmap and write the indices into it, as .NET won't let you draw to an indexed image.
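A sketch of that last step, assuming you already have a byte[] of palette indices (one per pixel, row by row) and an array of up to 256 palette colors; it uses LockBits plus Marshal.Copy from System.Drawing.Imaging and System.Runtime.InteropServices, and the width/height/indices/paletteColors names are placeholders:
// Build an 8bpp indexed Bitmap and write one palette index per pixel via LockBits.
Bitmap result = new Bitmap(width, height, PixelFormat.Format8bppIndexed);

// Install the custom palette (Bitmap.Palette returns a copy, so assign it back).
ColorPalette pal = result.Palette;
for (int i = 0; i < paletteColors.Length; i++)
    pal.Entries[i] = paletteColors[i];
result.Palette = pal;

BitmapData bits = result.LockBits(new Rectangle(0, 0, width, height),
                                  ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
try
{
    for (int y = 0; y < height; y++)
    {
        // Copy one row at a time; Stride may be padded beyond width.
        Marshal.Copy(indices, y * width, IntPtr.Add(bits.Scan0, y * bits.Stride), width);
    }
}
finally
{
    result.UnlockBits(bits);
}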
This process is called Quantization. Since each color is a set of three packed channel values, octrees are a natural structure for solving this problem.
Check out this article with example code.
The article focuses on computing the best palette for the image; in your case the process is partly reversed, since the palette is already given, so you only need the second part: mapping each source color to the closest color in that palette.
I had to do this in a big .NET project. There's nothing in the framework for it, but this article quickly led me to a solution: http://codebetter.com/blogs/brendan.tompkins/archive/2004/01/26/6103.aspx
The word JPEG should ring alarm bells. The images are very likely to already be in a heavily quantised colour space, and further resampling will potentially introduce aliasing. If you can, work from uncompressed images to reduce this effect.
The answer to your question is yes - you can save the images in an alternate format - but I'm not sure the native functionality is adequate for what sounds like quite a complex requirement. If you are able to define the colour palette from the collection of images, you will likely improve the quality of the output.
The already-referenced blog entry, 'Use GDI+ to Save Crystal-Clear GIF Images with .NET', contains useful references to code.
