I wrote a C# program that worked with the old Kinect, using my graphics library to deal with the sensor. Since the old Kinect is no longer officially sold, I'm now updating the library for the newer Kinect One with its Microsoft SDK 2.0. For the updated library I try to keep most of the code the same, so I can release an updated library instead of updating the whole program(s).
What I am wondering is: does the new Kinect's depth data still contain player data, as it did in the 1.7 SDK? There I did a bitmask operation to remove it with:
realDepth[i16] = (short)(realDepth[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth);
Is this still needed? I couldn't find any information about the raw depth format.
The old Kinect also had special values for
distance unknown
distance too close
distance too far
Does the new one still provide these?
In Microsoft Kinect SDK 2.0, you do not need the bitmask operation to get the correct depth value.
When you access a DepthFrame object, you can use something like this:
private void ProcessDepthFrame(DepthFrame frame)
{
    int width = frame.FrameDescription.Width;
    int height = frame.FrameDescription.Height;
    ushort minDepth = frame.DepthMinReliableDistance;
    ushort maxDepth = frame.DepthMaxReliableDistance;

    ushort[] depthData = new ushort[width * height];
    frame.CopyFrameDataToArray(depthData);

    for (int depthIndex = 0; depthIndex < depthData.Length; ++depthIndex)
    {
        // Each value is simply the distance in millimeters; no player bits to shift away.
        ushort depth = depthData[depthIndex];
        byte intensity = (byte)(depth >= minDepth && depth <= maxDepth ? depth : 0);

        // Do what you want
    }
}
Also notice that, in the above example, the DepthMinReliableDistance and DepthMaxReliableDistance properties of the DepthFrame class are used to figure out whether the depth value is valid or not (which is a little different from SDK v1.x).
For more info, check also this tutorial.
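As for the player data part of the question: in SDK 2.0 the depth pixels no longer carry a player index; that information comes from a separate BodyIndexFrame instead. A minimal sketch (the handler name and wiring are illustrative, not taken from the SDK samples):
// Sketch: the body/player index is its own frame type in SDK 2.0.
private void ProcessBodyIndexFrame(BodyIndexFrame frame)
{
    FrameDescription desc = frame.FrameDescription;
    byte[] bodyIndexData = new byte[desc.Width * desc.Height];
    frame.CopyFrameDataToArray(bodyIndexData);

    for (int i = 0; i < bodyIndexData.Length; ++i)
    {
        // Values 0..5 identify a tracked body; 255 means "no player" at this pixel.
        byte playerIndex = bodyIndexData[i];
        // ...
    }
}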
That's how I wrote your beautiful code (with some simple changes to make it easier for me to understand):
private void Form1_Load(object sender, EventArgs e)
{
prev = GetDesktopImage();//get a screenshot of the desktop;
cur = GetDesktopImage();//get a screenshot of the desktop;
var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
ApplyXor(locked1, locked2);
compressionBuffer = new byte[1920* 1080 * 4];
// Compressed buffer -- where the data goes that we'll send.
int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
backbuf = new CompressedCaptureScreen(backbufSize);
MessageBox.Show(compressionBuffer.Length.ToString());
int length = Compress();
MessageBox.Show(backbuf.Data.Length.ToString());//prints the new buffer size
}
The compression buffer length is, for example, 8294400,
and backbuf.Data.Length is 8326947.
I didn't like the compression suggestions, so here's what I would do.
You don't want to compress a video stream (so MPEG, AVI, etc are out of the question -- these don't have to be real-time) and you don't want to compress individual pictures (since that's just stupid).
Basically what you want to do is detect if things change and send the differences. You're on the right track with that; most video compressors do that. You also want a fast compression/decompression algorithm; especially if you go to more FPS that will become more relevant.
Differences. First off, eliminate all branches in your code, and make sure memory access is sequential (e.g. iterate x in the inner loop). The latter will give you cache locality. As for the differences, I'd probably use a 64-bit XOR; it's easy, branchless and fast.
If you want performance, it's probably better to do this in C++: The current C# implementation doesn't vectorize your code, and that will help you a great deal here.
Do something like this (I'm assuming 32bit pixel format):
for (int y=0; y<height; ++y) // change to PFor if you like
{
ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
for (int x=0; x<width; x += 2)
row2[x] ^= row1[x];
}
Fast compression and decompression usually means simpler compression algorithms. https://code.google.com/p/lz4/ is such an algorithm, and there's a proper .NET port available for it as well. You might want to read up on how it works too; there is a streaming feature in LZ4, and if you can make it handle 2 images instead of 1, that will probably give you a nice compression boost.
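Just to give an idea, a minimal round-trip with the .NET port (the same LZ4.LZ4Codec class used further down; this is a sketch and the buffer names are made up):
// Compress a raw buffer; MaximumOutputLength gives the worst-case output size.
byte[] raw = new byte[1920 * 1080 * 4];
byte[] compressed = new byte[LZ4.LZ4Codec.MaximumOutputLength(raw.Length)];
int compressedSize = LZ4.LZ4Codec.Encode(raw, 0, raw.Length, compressed, 0, compressed.Length);

// Decompress on the other side; here we assume the original length is known.
byte[] restored = new byte[raw.Length];
LZ4.LZ4Codec.Decode(compressed, 0, compressedSize, restored, 0, restored.Length, true);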
All in all, if you're trying to compress white noise, it simply won't work and your frame rate will drop. One way to solve this is to reduce the colors if you have too much 'randomness' in a frame. A measure for randomness is entropy, and there are several ways to get a measure of the entropy of a picture ( https://en.wikipedia.org/wiki/Entropy_(information_theory) ). I'd stick with a very simple one: check the size of the compressed picture -- if it's above a certain limit, reduce the number of bits; if below, increase the number of bits.
Note that increasing and decreasing bits is not done with shifting in this case; you don't need your bits to be removed, you simply need your compression to work better. It's probably just as good to use a simple 'AND' with a bitmask. For example, if you want to drop 2 bits, you can do it like this:
for (int y=0; y<height; ++y) // change to PFor if you like
{
ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
ulong mask = 0xFFFCFCFCFFFCFCFC;
for (int x=0; x<width; x += 2)
row2[x] = (row2[x] ^ row1[x]) & mask;
}
PS: I'm not sure what I would do with the alpha component, I'll leave that up to your experimentation.
Good luck!
The long answer
I had some time to spare, so I just tested this approach. Here's some code to support it all.
This code normally runs at over 130 FPS with a nice constant memory pressure on my laptop, so the bottleneck shouldn't be here anymore. Note that you need LZ4 to get this working, and that LZ4 is aimed at high speed, not high compression ratios. A bit more on that later.
First we need something that we can use to hold all the data we're going to send. I'm not implementing the sockets stuff itself here (although that should be pretty simple using this as a start); I mainly focused on getting the data you need in order to send something over.
// The thing you send over a socket
public class CompressedCaptureScreen
{
public CompressedCaptureScreen(int size)
{
this.Data = new byte[size];
this.Size = 4;
}
public int Size;
public byte[] Data;
}
We also need a class that will hold all the magic:
public class CompressScreenCapture
{
Next, if I'm running high performance code, I make it a habit to preallocate all the buffers first. That'll save you time during the actual algorithmic stuff. 4 buffers of 1080p is about 33 MB, which is fine - so let's allocate that.
public CompressScreenCapture()
{
// Initialize with black screen; get bounds from screen.
this.screenBounds = Screen.PrimaryScreen.Bounds;
// Initialize 2 buffers - 1 for the current and 1 for the previous image
prev = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);
cur = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);
// Clear the 'prev' buffer - this is the initial state
using (Graphics g = Graphics.FromImage(prev))
{
g.Clear(Color.Black);
}
// Compression buffer -- we don't really need this but I'm lazy today.
compressionBuffer = new byte[screenBounds.Width * screenBounds.Height * 4];
// Compressed buffer -- where the data goes that we'll send.
int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
backbuf = new CompressedCaptureScreen(backbufSize);
}
private Rectangle screenBounds;
private Bitmap prev;
private Bitmap cur;
private byte[] compressionBuffer;
private int backbufSize;
private CompressedCaptureScreen backbuf;
private int n = 0;
First thing to do is capture the screen. This is the easy part: simply fill the bitmap of the current screen:
private void Capture()
{
// Fill 'cur' with a screenshot
using (var gfxScreenshot = Graphics.FromImage(cur))
{
gfxScreenshot.CopyFromScreen(screenBounds.X, screenBounds.Y, 0, 0, screenBounds.Size, CopyPixelOperation.SourceCopy);
}
}
As I said, I don't want to compress 'raw' pixels. Instead, I'd much rather compress XOR masks of the previous and the current image. Most of the time this will give you a whole lot of 0's, which is easy to compress:
private unsafe void ApplyXor(BitmapData previous, BitmapData current)
{
byte* prev0 = (byte*)previous.Scan0.ToPointer();
byte* cur0 = (byte*)current.Scan0.ToPointer();
int height = previous.Height;
int width = previous.Width;
int halfwidth = width / 2;
fixed (byte* target = this.compressionBuffer)
{
ulong* dst = (ulong*)target;
for (int y = 0; y < height; ++y)
{
ulong* prevRow = (ulong*)(prev0 + previous.Stride * y);
ulong* curRow = (ulong*)(cur0 + current.Stride * y);
for (int x = 0; x < halfwidth; ++x)
{
*(dst++) = curRow[x] ^ prevRow[x];
}
}
}
}
For the compression algorithm I simply pass the buffer to LZ4 and let it do its magic.
private int Compress()
{
// Grab the backbuf in an attempt to update it with new data
var backbuf = this.backbuf;
backbuf.Size = LZ4.LZ4Codec.Encode(
this.compressionBuffer, 0, this.compressionBuffer.Length,
backbuf.Data, 4, backbuf.Data.Length-4);
Buffer.BlockCopy(BitConverter.GetBytes(backbuf.Size), 0, backbuf.Data, 0, 4);
return backbuf.Size;
}
One thing to note here is that I make it a habit to put everything in my buffer that I need to send over the TCP/IP socket. I don't want to move data around if I can easily avoid it, so I'm simply putting everything that I need on the other side there.
As for the sockets themselves, you can use async TCP sockets here (I would), but if you do, you will need to add an extra buffer.
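For what it's worth, a minimal synchronous sketch of the sending side (an already-connected NetworkStream from System.Net.Sockets is assumed; no error handling):
// The 4-byte length prefix already sits at the start of Data (see Compress()),
// so the receiver reads 4 bytes, then that many compressed bytes.
private void Send(System.Net.Sockets.NetworkStream stream, CompressedCaptureScreen buf)
{
    stream.Write(buf.Data, 0, buf.Size + 4);
}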
The only thing that remains is to glue everything together and put some statistics on the screen:
public void Iterate()
{
Stopwatch sw = Stopwatch.StartNew();
// Capture a screen:
Capture();
TimeSpan timeToCapture = sw.Elapsed;
// Lock both images:
var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
try
{
// Xor screen:
ApplyXor(locked2, locked1);
TimeSpan timeToXor = sw.Elapsed;
// Compress screen:
int length = Compress();
TimeSpan timeToCompress = sw.Elapsed;
if ((++n) % 50 == 0)
{
Console.Write("Iteration: {0:0.00}s, {1:0.00}s, {2:0.00}s " +
"{3} Kb => {4:0.0} FPS \r",
timeToCapture.TotalSeconds, timeToXor.TotalSeconds,
timeToCompress.TotalSeconds, length / 1024,
1.0 / sw.Elapsed.TotalSeconds);
}
// Swap buffers:
var tmp = cur;
cur = prev;
prev = tmp;
}
finally
{
cur.UnlockBits(locked1);
prev.UnlockBits(locked2);
}
}
Note that I reduce Console output to ensure that's not the bottleneck. :-)
Simple improvements
It's a bit wasteful to compress all those 0's, right? It's pretty easy to track the min and max y position that has data using a simple boolean.
ulong tmp = curRow[x] ^ prevRow[x];
*(dst++) = tmp;
hasdata |= tmp != 0;
You also probably don't want to call Compress if you don't have to.
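A minimal sketch of that guard, assuming hasdata is set during the XOR pass as shown above:
// Only compress (and send) when the XOR pass actually found a difference.
if (hasdata)
{
    int length = Compress();
    // ... hand backbuf to the sending code ...
}
// else: nothing changed since the last frame, so skip this iteration entirely.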
After adding this feature you'll get something like this on your screen:
Iteration: 0.00s, 0.01s, 0.01s 1 Kb => 152.0 FPS
Using another compression algorithm might also help. I stuck to LZ4 because it's simple to use, it's blazing fast and compresses pretty well -- still, there are other options that might work better. See http://fastcompression.blogspot.nl/ for a comparison.
If you have a bad connection or if you're streaming video over a remote connection, all this won't work. Best to reduce the pixel values here. That's quite simple: apply a simple 64-bit mask during the xor to both the previous and current picture... You can also try using indexed colors - anyhow, there's a ton of different things you can try here; I just kept it simple because that's probably good enough.
You can also use Parallel.For for the xor loop; personally I didn't really care about that.
A bit more challenging
If you have 1 server that is serving multiple clients, things get a bit more challenging, as they will refresh at different rates. We want the fastest-refreshing client to determine the server speed, not the slowest. :-)
To implement this, the relation between prev and cur has to change. If we simply 'xor' away like above, we'll end up with a completely garbled picture on the slower clients.
To solve that, we don't want to swap prev anymore, as it should hold key frames (that you'll refresh when the compressed data becomes too big) and cur will hold incremental data from the 'xor' results. This means you can basically grab an arbitrary 'xor'red frame and send it over the line - as long as the prev bitmap is recent.
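A rough sketch of that key-frame idea (names and the threshold are illustrative; SwapKeyFrame/SendKeyFrame are hypothetical helpers, not part of the code above):
// XOR against a fixed key frame instead of the previous frame.
ApplyXor(lockedKeyFrame, lockedCurrent);
int length = Compress();

// When the delta gets too big, promote the current frame to the new key frame
// and push a full refresh to the clients.
if (length > keyFrameThreshold)
{
    SwapKeyFrame();   // hypothetical: copy cur into the key frame bitmap
    SendKeyFrame();   // hypothetical: send the full (compressed) key frame
}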
H264 or Equivalent Codec Streaming
There are various compressed streaming codecs available which do almost everything you could do yourself to optimize screen sharing over the network. There are many open source and commercial libraries for streaming.
Screen transfer in Blocks
H264 already does this, but if you want to do it yourself, you have to divide your screen into smaller blocks of 100x100 pixels, compare these blocks with the previous version, and send only the changed blocks over the network.
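A rough sketch of that block comparison, assuming both frames are already available as flat 32-bit pixel arrays (the block size and SendBlock are illustrative):
// Compare the current and previous frame in 100x100 pixel tiles
// and send only the tiles whose contents changed.
const int Block = 100;

void SendChangedBlocks(int[] prev, int[] cur, int width, int height)
{
    for (int by = 0; by < height; by += Block)
        for (int bx = 0; bx < width; bx += Block)
        {
            int maxY = Math.Min(by + Block, height);
            int maxX = Math.Min(bx + Block, width);
            bool changed = false;

            for (int y = by; y < maxY && !changed; y++)
                for (int x = bx; x < maxX; x++)
                    if (prev[y * width + x] != cur[y * width + x]) { changed = true; break; }

            if (changed)
                SendBlock(cur, width, bx, by, maxX - bx, maxY - by); // hypothetical sender
        }
}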
Window Render Information
Microsoft RDP does a lot better: it does not send the screen as a raster image; instead it analyzes the screen and creates screen blocks based on the windows on the screen. It then analyzes the contents of the screen and sends an image only if needed; if it is a text box with some text in it, RDP sends the information to render a text box with the text, font information, and so on. So instead of sending an image, it sends information about what to render.
You can combine all of these techniques into a mixed protocol that sends screen blocks as images together with other rendering information.
Instead of handling data as an array of bytes, you can handle it as an array of integers.
int* p = (int*)((byte*)scan0.ToPointer() + y * stride);
int* p2 = (int*)((byte*)scan02.ToPointer() + y * stride2);

for (int x = 0; x < nWidth; x++)
{
    // always copy the complete pixel when a difference is found
    if (*p2 != 0)
        *p = *p2;
    ++p;
    ++p2;
}
I'm working on an application which will stream the color, depth, and IR video data from the Kinect V2 sensor. Right now I'm just putting together the color video part of the app. I've read through some tutorials and actually got some video data coming into my app; the problem is that the bytes seem to be in the wrong order, which gives me an oddly discolored image (see below).
So, let me explain how I got here. In my code, I first open the sensor and also instantiate a new multi source frame reader. After I've created the reader, I create an event handler called Reader_MultiSourceFrameArrived:
void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
if (proccessing || gotframe) return;
// Get a reference to the multi-frame
var reference = e.FrameReference.AcquireFrame();
// Open color frame
using (ColorFrame frame = reference.ColorFrameReference.AcquireFrame())
{
if (frame != null)
{
proccessing = true;
var description = frame.ColorFrameSource.FrameDescription;
bw2 = description.Width / 2;
bh2 = description.Height / 2;
bpp = (int)description.BytesPerPixel;
if (imgBuffer == null)
{
imgBuffer = new byte[description.BytesPerPixel * description.Width * description.Height];
}
frame.CopyRawFrameDataToArray(imgBuffer);
gotframe = true;
proccessing = false;
}
}
}
Now, every time a frame is received (and not processing) it should copy the frame data into an array called imgBuffer. When my application is ready I then call this routine to convert the array into a Windows Bitmap that I can display on my screen.
if (gotframe)
{
if (theBitmap.Rx != bw2 || theBitmap.Ry != bh2) theBitmap.SetSize(bw2, bh2);
int kk = 0;
for (int j = 0; j < bh2; ++j)
{
for (int i = 0; i < bw2; ++i)
{
kk = (j * bw2 * 2 + i) * 2 * bpp;
theBitmap.pixels[i, bh2 - j - 1].B = imgBuffer[kk];
theBitmap.pixels[i, bh2 - j - 1].G = imgBuffer[kk + 1];
theBitmap.pixels[i, bh2 - j - 1].R = imgBuffer[kk + 2];
theBitmap.pixels[i, bh2 - j - 1].A = 255;
}
}
theBitmap.needupdate = true;
gotframe = false;
}
}
So, after this runs theBitmap now contains the image information needed to draw the image on the screen... however, as seen in the image above - it looks quite strange. The most obvious change is to simply change the order of the pixel B,G,R values when they get assigned to the bitmap in the double for loop (which I tried)... however, this simply results in other strange color images and none provide an accurate color image. Any thoughts where I might be going wrong?
Is this Bgra?
The normal "RGB" in Kinect v2 for C# is BGRA.
Using the Kinect SDK 2.0, you don't need all of those "for" loops.
The function used to fill the bitmap's pixels is this one:
colorFrame.CopyConvertedFrameDataToIntPtr(
this.colorBitmap.BackBuffer,
(uint)(colorFrameDescription.Width * colorFrameDescription.Height * 4),
ColorImageFormat.Bgra);
1) Get the Frame From the kinect, using Reader_ColorFrameArrived (go see ColorSamples - WPF);
2) Create the colorFrameDescription from the ColorFrameSource using Bgra format;
3) Create the bitmap to display;
If you have any problems, please say so. But if you follow the sample, it's actually pretty clear how to do it.
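To make those steps concrete, here's a trimmed sketch in the spirit of that WPF sample (the sensor field and the WriteableBitmap used for display are assumptions here):
// 2) Ask the color source for a BGRA frame description and 3) size the bitmap to it.
FrameDescription desc = sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
WriteableBitmap colorBitmap = new WriteableBitmap(desc.Width, desc.Height, 96.0, 96.0, PixelFormats.Bgra32, null);

// 1) In the frame-arrived handler, copy the converted BGRA data straight into the bitmap.
void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;

        colorBitmap.Lock();
        frame.CopyConvertedFrameDataToIntPtr(
            colorBitmap.BackBuffer,
            (uint)(desc.Width * desc.Height * 4),
            ColorImageFormat.Bgra);
        colorBitmap.AddDirtyRect(new Int32Rect(0, 0, desc.Width, desc.Height));
        colorBitmap.Unlock();
    }
}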
I was stuck on this problem forever. The problem is that almost all the examples you find are WPF examples, but for Windows Forms it's a different story.
frame.CopyRawFrameDataToArray(imgBuffer);
gets you the raw data, which is
ColorImageFormat.Yuy2
By converting it to RGB you should be able to fix your color problem. The transformation from YUY2 to RGB is very expensive; you might want to use a Parallel.For/ForEach loop to maintain your frame rate.
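For reference, a straightforward (unoptimized) YUY2-to-BGRA conversion sketch; the raw buffer packs two pixels into 4 bytes as Y0 U Y1 V, and these are the common BT.601 integer formulas (parallelizing over rows is left to you):
// Converts a YUY2 buffer (2 pixels per 4 bytes) into 32-bit BGRA.
static void Yuy2ToBgra(byte[] yuy2, byte[] bgra, int width, int height)
{
    int src = 0, dst = 0;
    for (int i = 0; i < width * height / 2; i++)
    {
        int y0 = yuy2[src], u = yuy2[src + 1], y1 = yuy2[src + 2], v = yuy2[src + 3];
        src += 4;
        WritePixel(bgra, ref dst, y0, u, v);
        WritePixel(bgra, ref dst, y1, u, v);
    }
}

static void WritePixel(byte[] bgra, ref int dst, int y, int u, int v)
{
    int c = y - 16, d = u - 128, e = v - 128;
    bgra[dst++] = Clamp((298 * c + 516 * d + 128) >> 8);           // B
    bgra[dst++] = Clamp((298 * c - 100 * d - 208 * e + 128) >> 8); // G
    bgra[dst++] = Clamp((298 * c + 409 * e + 128) >> 8);           // R
    bgra[dst++] = 255;                                             // A
}

static byte Clamp(int value)
{
    return (byte)(value < 0 ? 0 : value > 255 ? 255 : value);
}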
I've faced a strange issue in my MonoGame (3.1) Windows Phone 8 app. When the app is deactivated and then activated, all textures become black. This also happens after the lock screen (UserIdleDetectionMode is enabled).
I've checked GraphicsDevice.IsDisposed, GraphicsDevice.IsContentLost and GraphicsDevice.ResourcesLost, but everything looks OK. I've implemented a reload of all my textures on the Activated and Unobscured events, but a full texture reload takes too much time. At the same time, I see MonoGame apps on the Marketplace that easily handle deactivate/activate. Moreover, the same app for Windows Phone 7, written in XNA, restores very quickly. What am I doing wrong with MonoGame?
My app is based on the MonoGame WP8 template.
Update:
I've just found out that all textures loaded via Content.Load(...) are restored very quickly. But all my textures are created by hand: I load a file from TileContainer, unpack it, read its data with ImageTools, create a Texture2D, and set its pixels with the loaded data. JPEG files are also rendered to a RenderTarget2D as BGR565 to save space.
Moreover, I make heavy use of RenderTarget2D for rendering text labels with shadows, runtime sprite compositions, and so on. So it looks like MonoGame just doesn't restore images that were not loaded through Content.Load.
Continuing to investigate...
I just got a response from Tom Spillman in the MonoGame forums; apparently the Content.Load content is restored normally, while other data needs to be reinitialized by the program. What you can do is hook up to the GraphicsDevice.DeviceResetting event to get notified when this reset is taking place.
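A minimal sketch of hooking those events (this assumes you are inside your Game class and have some ReloadCustomTextures method of your own):
protected override void Initialize()
{
    base.Initialize();

    // Content.Load textures come back on their own; everything built by hand
    // (SetData, RenderTarget2D, ...) has to be rebuilt when the device resets.
    GraphicsDevice.DeviceResetting += (s, e) => { /* optionally drop GPU-only resources here */ };
    GraphicsDevice.DeviceReset += (s, e) => ReloadCustomTextures(); // your own reload logic
}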
According to the MonoGame devs, lost textures are a normal situation.
I've implemented a full texture reload in the GraphicsDevice.DeviceReset event. To make it fast, I load from uncompressed XNB files. That's pretty simple, since this format just has the pixel values in it. This is the only solution.
Here's how to read a texture from an uncompressed XNB:
private static Texture2D TextureFromUncompressedXnbStream(GraphicsDevice graphicsDevice, Stream stream)
{
BinaryReader xnbReader = new BinaryReader(stream);
byte cx = xnbReader.ReadByte();
byte cn = xnbReader.ReadByte();
byte cb = xnbReader.ReadByte();
byte platform = xnbReader.ReadByte();
if (cx != 'X' || cn != 'N' || cb != 'B')
return null;
byte version = xnbReader.ReadByte();
byte flags = xnbReader.ReadByte();
bool compressed = (flags & 0x80) != 0;
if (version != 5 && version != 4)
return null;
int xnbLength = xnbReader.ReadInt32();
xnbReader.ReadBytes(0x9D);//skipping processor string
SurfaceFormat surfaceFormat = (SurfaceFormat)xnbReader.ReadInt32();
int width = (xnbReader.ReadInt32());
int height = (xnbReader.ReadInt32());
int levelCount = (xnbReader.ReadInt32());
int levelCountOutput = levelCount;
Texture2D texture = new Texture2D(graphicsDevice, width, height, false, SurfaceFormat.Color);
for (int level = 0; level < levelCount; level++)
{
int levelDataSizeInBytes = (xnbReader.ReadInt32());
byte[] levelData = xnbReader.ReadBytes(levelDataSizeInBytes);
if (level >= levelCountOutput)
continue;
texture.SetData(level, null, levelData, 0, levelData.Length);
}
xnbReader.Dispose();
return texture;
}
I have a compute shader and the C# script that goes with it, used to modify an array of vertices on the Y axis; simple enough to be clear.
But despite the fact that it runs fine, the shader seems to forget the first vertex of my shape (except when that shape is a closed volume?).
Here is the C# class :
Mesh m;
//public bool stopProcess = false; //Useless in this version of exemple
MeshCollider coll;
public ComputeShader csFile; //the compute shader file added the Unity way
Vector3[] arrayToProcess; //An array of vectors i'll use to store data
ComputeBuffer cbf; //the buffer CPU->GPU (An early version with exactly
//the same result had only this one)
ComputeBuffer cbfOut; //the Buffer GPU->CPU
int vertexLength;
void Awake() { //Assigning my stuff
coll = gameObject.GetComponent<MeshCollider>();
m = GetComponent<MeshFilter>().sharedMesh;
vertexLength = m.vertices.Length;
arrayToProcess = m.vertices; //setting the first version of the vertex array (copy of mesh)
}
void Start () {
cbf = new ComputeBuffer(vertexLength,32); //Buffer in
cbfOut = new ComputeBuffer(vertexLength,32); //Buffer out
csFile.SetBuffer(0,"Board",cbf);
csFile.SetBuffer(0,"BoardOut",cbfOut);
}
void Update () {
csFile.SetFloat("time",Time.time);
cbf.SetData(m.vertices);
csFile.Dispatch(0,vertexLength,vertexLength,1); //Dispatching (i think there is my mistake)
cbfOut.GetData(arrayToProcess); //getting back my processed vertices
m.vertices = arrayToProcess; //assigning them to the mesh
//coll.sharedMesh = m; //collider stuff useless in this demo
}
And my compute shader script :
#pragma kernel CSMain
RWStructuredBuffer<float3> Board : register(s[0]);
RWStructuredBuffer<float3> BoardOut : register(s[1]);
float time;
[numthreads(1,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
float valx = (sin((time*4)+Board[id.x].x));
float valz = (cos((time*2)+Board[id.x].z));
Board[id.x].y = (valx + valz)/5;
BoardOut[id.x] = Board[id.x];
}
At the beginning I was reading from and writing to the same buffer, but since I had this issue I tried separate buffers, with no success. I still have the same problem.
Maybe I misunderstood the way compute shaders are supposed to be used (and I know I could use a vertex shader, but I just want to try compute shaders for further improvements).
To complete what I said, I suppose it is related to the way vertices are indexed in the Mesh.vertices array.
I tried a LOT of different block/thread configurations, but nothing seems to solve the issue. Combinations tried:
Block Thread
60,60,1 1,1,1
1,1,1 60,60,3
10,10,3 3,1,1
and some others I do not remember. I think the best configuration should be something with a good balance, like:
Block : VertexCount,1,1 Thread : 3,1,1
About the closed volume: I'm not sure about that, because with a cube (8 vertices) everything seems to move accordingly, but with a shape that has an odd number of vertices, the first one (or the last; I haven't checked yet) does not seem to be processed.
I tried it with many different shapes, but subdivided planes are the most obvious: one corner is always not moving.
EDIT :
After further study I found out that it is simply the compute shader that does not process the last (not the first; I checked) vertices of the mesh. It seems related to the buffer type. I still don't get why RWStructuredBuffer should be an issue or how badly I'm using it; is it reserved for streams? I can't understand the MSDN doc on this one.
EDIT : After resolution
The C# script :
using UnityEngine;
using System.Collections;
public class TreeObject : MonoBehaviour {
Mesh m;
public bool stopProcess = false;
MeshCollider coll;
public ComputeShader csFile;
Vector3[] arrayToProcess;
ComputeBuffer cbf;
ComputeBuffer cbfOut;
int vertexLength;
// Use this for initialization
void Awake() {
coll = gameObject.GetComponent<MeshCollider>();
m = GetComponent<MeshFilter>().mesh;
vertexLength = m.vertices.Length+3; //I add 3 because apparently
//vertexnumber is odd
//arrayToProcess = new Vector3[vertexLength];
arrayToProcess = m.vertices;
}
void Start () {
cbf = new ComputeBuffer(vertexLength,12);
cbfOut = new ComputeBuffer(vertexLength,12);
csFile.SetBuffer(0,"Board",cbf);
csFile.SetBuffer(0,"BoardOut",cbfOut);
}
// Update is called once per frame
void Update () {
csFile.SetFloat("time",Time.time);
cbf.SetData(m.vertices);
csFile.Dispatch(0,vertexLength,1,1);
cbfOut.GetData(arrayToProcess);
m.vertices = arrayToProcess;
coll.sharedMesh = m;
}
}
I had already rolled back to
Blocks: VCount,1,1
before your answer, because it was logical that with VCount*VCount I was processing the vertices "squared" more times than needed.
To complete: you were absolutely right, the stride was obviously causing issues. Could you complete your answer with a link to documentation about the stride parameter? (From anywhere, because the Unity docs are void and MSDN did not help me understand why it should be 12 and not 32, as I thought 32 was the size of a float3.) So documentation is needed, please.
In the meantime I'll try to provide a flexible enough (generic?) version of this to make it stronger, and start adding some nice array-processing functions to my shader...
I'm familiar with compute shaders but have never touched Unity; however, having looked over the documentation for compute shaders in Unity, a couple of things stand out.
The cbf and cbfOut ComputeBuffers are created with a stride of 32 (bytes?). Both your StructuredBuffers contain float3s which have a stride of 12 bytes, not 32. Where has 32 come from?
When you dispatch your compute shader you're requesting a two-dimensional dispatch (vertexLength,vertexLength, 1) but you're operating on a 1D array of float3s. You will end up with a race condition where many different threads think they're responsible for updating each element of the array. Although awful for performance, if you want a thread group size of [numthreads(1,1,1)] then you should dispatch (vertexLength, 1, 1) numbers of waves/wavefronts when calling Dispatch (ie, Dispatch (60,1,1) with numThreads(1,1,1)).
For best/better performance the number of threads in your thread group / wave should at least be a multiple of 64 for best efficiency on AMD hardware. You then need only dispatch ceil(numVertices/64) wavefronts and then simply insert some logic into the shader to ensure id.x is not out of bounds for any given thread.
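Put together, a corrected setup along those lines might look like this on the C# side (this assumes the kernel declares [numthreads(64,1,1)] and guards with "if (id.x >= vertexCount) return;", vertexCount being a value you pass in yourself):
// float3 is 3 * 4 bytes, so the stride is 12, not 32.
cbf    = new ComputeBuffer(vertexLength, sizeof(float) * 3);
cbfOut = new ComputeBuffer(vertexLength, sizeof(float) * 3);

csFile.SetBuffer(0, "Board", cbf);
csFile.SetBuffer(0, "BoardOut", cbfOut);
csFile.SetInt("vertexCount", vertexLength); // used by the bounds check in the kernel

// One thread per vertex: ceil(vertexLength / 64) groups of 64 threads each.
int groups = (vertexLength + 63) / 64;
csFile.Dispatch(0, groups, 1, 1);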
EDIT:
The documentation for the ComputeBuffer constructor is here: Unity ComputeBuffer Documentation
While it doesn't explicitly say "stride" is in bytes, it's the only reasonable assumption.
What I am looking to do sounds really simple, but nowhere on the Internet have I found a way to do this in .NET, nor a third-party component that does it (without spending thousands on completely unnecessary features).
Here goes:
I have a jpeg of a floor tile (actual photo) that I create a checkerboard pattern with.
In .NET, it is easy to rotate and stitch photos together and save the final image as a JPEG.
Next, I want to take that final picture and make it appear as if the "tiles" are lying on the floor of a generic "room scene". Basically, adding a 3D perspective to make it appear as if it is actually in the room scene.
Here's a website that is doing something similar with carpeting; however, I need to do this in a WinForms application:
Flor Website
Basically, I need to create a 3D perspective of a JPEG, then save it as a new JPEG (then I can put an overlay of the generic room scene on top).
Does anyone have any idea where to get a third-party .NET image processing module that can do this seemingly simple task?
It is not so simple because you need a 3D transformation, which is more complicated and computationally expensive than a simple 2D transformation such as rotation, scaling or shearing. For you to have an idea of the difference in the math, 2D transformations require 2 by 2 matrices, whereas a projection transformation (which is more complicated than other 3D transforms) requires a 4 by 4 matrix...
What you need is some 3D rendering engine in which you can draw polygons (in a perspective view) and then cover them with a texture (like a carpet). For .NET 2.0, I'd recommend using SlimDX, which is a port of DirectX that would allow you to render polygons, but there is some learning curve. If you are using WPF (.NET 3.0 and up), there is a built-in 3D canvas that allows you to draw textured polygons in perspective. That might be easier/better to learn than SlimDX for your purposes. I'm sure that there is a way to redirect the output of the 3D canvas towards a JPEG...
You might simplify the problem a lot if you don't require great performance and if you restrict the orientation of the texture (eg. always a horizontal floor or always a vertical wall). If so, you could probably render it yourself with a simple drawing loop in .Net 2.0.
If you just want a plain floor, your code would look like this. WARNING: obtaining your desired results will take some significant time and refinement, especially if you don't know the math very well. But on the other hand, it is always fun to play with code of this type... (:
Find some sample images below.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;
namespace floorDrawer
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
ResizeRedraw = DoubleBuffered = true;
Width = 800;
Height = 600;
Paint += new PaintEventHandler(Form1_Paint);
}
void Form1_Paint(object sender, PaintEventArgs e)
{
// a few parameters that control the projection transform
// these are the parameters that you can modify to change
// the output
double cz = 10; // distortion
double m = 1000; // magnification, usually around 1000 (the pixel width of the monitor)
double y0 = -100; // floor height
string texturePath = @"c:\pj\Hydrangeas.jpg"; // or @"c:\pj\Chrysanthemum.jpg";
// screen size
int height = ClientSize.Height;
int width = ClientSize.Width;
// center of screen
double cx = width / 2;
double cy = height / 2;
// render destination
var dst = new Bitmap(width, height);
// source texture
var src = Bitmap.FromFile(texturePath) as Bitmap;
// texture dimensions
int tw = src.Width;
int th = src.Height;
for (int y = 0; y < height; y++)
for (int x = 0; x < width; x++)
{
double v = m * y0 / (y - cy) - cz;
double u = (x - cx) * (v + cz) / m;
int uu = ((int)u % tw + tw) % tw;
int vv = ((int)v % th + th) % th;
// The following .SetPixel() and .GetPixel() are painfully slow
// You can replace this whole loop with an equivalent implementation
// using pointers inside unsafe{} code to make it much faster.
// Note that by casting u and v into integers, we are performing
// a nearest pixel interpolation... It's sloppy but effective.
dst.SetPixel(x, y, src.GetPixel(uu, vv));
}
// draw result on the form
e.Graphics.DrawImage(dst, 0, 0);
}
}
}