I'm sorry about the vague title, but I'm not really sure how to ask this without being very specific. If you suggest a title which is more clear, I'll change it as soon as I can.
Anyway, I don't think I can ask my question very succinctly without first providing a little background information. In a 2D space, I am creating "acres", which contain "tiles".
[One Acre with 64 Tiles]
For the sake of clarity, we'll assume that in this specific instance, there are 12 acres, four in the first row, four in the second, and four in the third. Each acre has 64 tiles in it, in an eight by eight grid.
[Twelve Acres, each with 64 Tiles]
I am generating a texture whose width and height are the number of acres in each direction multiplied by the number of tiles per acre. In our example, the texture would be 32 pixels wide (4 acres per row × 8 tiles per acre) and 24 pixels tall (3 acres per column × 8 tiles per acre). The texture is then filled with Perlin noise, which I would like to use to colour each tile.
[Single Acre, with 64 Tiles, next to the Perlin image generated for it (scaled up). This has a slight random colour variation applied to each tile.]
I would like to generate one image for all of the acres, and read from it each time a new acre is created, but therein lies the problem, and the subject of my question. How do I get the offset, so that each adjacent acre continues the pattern?
[What I want (to get this, I just created a single larger tile)]
The method I'm currently using doesn't seem to work, however, and ends up creating something like the following.
[Strange Result]
Following is the code which I'm currently using to find the (incorrect, I assume) offset. The link directs to a Gist, where the Perlin generation function and the acre/tile generation functions are pasted.
// Offset of this acre's origin within the shared Perlin texture
int xOffset = ( parentAcreXIndex * desiredWidth );
int yOffset = ( parentAcreYIndex * desiredHeight );

// Sample the noise for this tile and shift the green channel by it
new Color ( 0.000f, 0.502f + ( parentWorld.worldPerlin.GetPixel ( xOffset + ( desiredWidth - tileXIndex ), yOffset + ( desiredHeight - tileYIndex )).grayscale * 0.3f ), 0.000f, 1 );
Full class (links to GitHub's Gist); the above line is at line 100.
I don't really know what else to say; my mind is a bit "foggy" from trying to figure this out, so please forgive me if I've left something important out. Do let me know, and I'll update my post with the required information.
Also, I'm sorry about this question; it must be pretty hard to understand. I'm going to read over this a few times after I publish it, to see if I can improve the wording.
Thank you for your time!
Michael
Edit
Thank you for taking a look at this! It turns out the problem was that the plane I was using for visualization was actually upside down. I'll make sure to check simple things like that in the future, sorry for the confusion! I have left the question up, because I was given enough points here to post images, and when I tried to delete it, the points were revoked. When I earn more points, I will come back to delete this. Thanks!
You seem to be asking this:
If I have a grid of pixels, which are grouped into 'acres' (8x8 tiles each), how do I map from a given tile (given as the row and column within an acre) to the overall pixel?
Acres begin at every 8th pixel, so for a given acre (acreX, acreY):
acreOriginInTextureX = acreX * 8;
acreOriginInTextureY = acreY * 8;
So a given tile (tileX, tileY) within an acre will be:
tilePosInTextureX = acreOriginInTextureX + tileX
= acreX * 8 + tileX
tilePosInTextureY = acreY * 8 + tileY
Really this is:
tilePosInTextureX = acreX * tilesPerAcreX + tileX
... and same for Y.
NB: I'm assuming zero indexing everywhere. If not, you'll need to subtract 1 from acreX, acreY, tileX, tileY, but not tilesPerAcreX or Y.
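In C#, the whole mapping is two multiply-adds; a minimal sketch (the constant and variable names are mine, only the arithmetic comes from the formulas above):

// All indices are zero-based.
const int TilesPerAcre = 8;

int tilePosInTextureX = acreX * TilesPerAcre + tileX;
int tilePosInTextureY = acreY * TilesPerAcre + tileY;

// Example: tile (3, 5) in acre (2, 1) maps to pixel (2*8 + 3, 1*8 + 5) = (19, 13).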
Related
I have a project on the MonoGame platform. The purpose of the project is to calculate the view factor of geometry placed in the scene, using an orthographic projection. At a basic level, I put a basic cube in the scene and a camera across from the cube. As I look at the cube through the camera, I am required to count the number of pixels of the object seen from that perspective under the orthographic projection.

I already have a solution, but it is very slow. In my solution, I count the number of pixels with a certain color and then divide that number by the total number of pixels on the screen. I have heard of a technique that involves using OcclusionQuery, but I guess I would have to do some shader programming to use that technique, of which I don't have a clue. Can you suggest another technique that is easy to implement and faster than what I currently do, or explain how OcclusionQuery works? Here, for example, I count the total number of grey pixels and then divide it by the total screen area.
Here is my code:
private void CalculateViewFactor(Color[] data)
{
    int objectPixelCount = 0;

    // The first pixel is assumed to be background; remember its color.
    Color background = data[0];

    // Count every pixel whose color differs from the background.
    // Note: the original used && here, which only counts pixels that
    // differ in *all three* channels; || is what the description intends.
    foreach (Color item in data)
        if (item.R != background.R || item.G != background.G || item.B != background.B)
            objectPixelCount++;

    Console.WriteLine(objectPixelCount);
    Console.WriteLine(data.Length);
    Console.WriteLine((float)objectPixelCount / data.Length);
}
Because the color of the first pixel on the screen is also the color of the background, I take the RGB values of the first pixel and compare them to all the other pixels on the screen, counting the number of pixels whose color differs from the first pixel's.
But since I know this method is pretty slow, I want to adapt OcclusionQuery into my code. If you could help me, I would be grateful.
This is pretty tricky to do right, and I can only suggest an "alternative" approach, not necessarily a more performant or better-designed one.
In case you don't really need to know the exact number of drawn pixels, you can approximate it. There is a technique called Monte Carlo Integration.
Start off by creating N points on the screen with random coordinates. Check and count the colors at these points. Divide the number of points that have your object's color by the total number of tested points (that is, N). What you get is an approximate ratio of the pixels your object occupies on the final screen. If you now multiply this ratio by the total number of pixels on the screen (that is, WidthPx * HeightPx), you get an approximate number of pixels occupied by the object. A minimal sketch follows the list below.
Advantages:
Select a bigger N for a more accurate result, a smaller N for better performance
The algorithm is simple, and harder to screw up
Disadvantages:
It's random and never deterministic (you'll get a different result every time)
It's approximate and never exact
You'll need to generate 2 * N random values (two for each test point), and generating random values is a relatively expensive operation
I'm sure that later you'll want to draw textures/shading on the screen, and then this technique won't work, as you'll no longer be able to distinguish your object's pixels from the others. You can still maintain a smaller unseen buffer where you draw the same objects without any shading, each object in its own unique color, and apply the Monte Carlo algorithm to that instead; but of course, that will cost computing resources.
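Here is how that sampling step might look, as a minimal sketch. It assumes the same Color[] back-buffer array as in the question, with the background color at index 0; the method name and parameters are my own illustration:

private float ApproximateObjectPixels(Color[] data, int width, int height, int n)
{
    var rng = new Random();
    Color background = data[0]; // first pixel assumed to be background

    int hits = 0;
    for (int i = 0; i < n; i++)
    {
        // Pick a random test point and check whether it is on the object.
        int x = rng.Next(width);
        int y = rng.Next(height);
        if (data[y * width + x] != background)
            hits++;
    }

    // Ratio of object samples, scaled up to the full screen.
    return (float)hits / n * (width * height);
}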
I am trying to write an algorithm (in C#) that will stitch two or more unrelated heightmaps together so that there is no visible seam between the maps. Basically, I want to mimic the functionality found on this page:
http://www.bundysoft.com/wiki/doku.php?id=tutorials:l3dt:stitching_heightmaps
(You can just look at the pictures to get the gist of what I'm talking about)
I also want to be able to take a single heightmap and alter it so it can be tiled, in order to create an endless world (All of this is for use in Unity3d). However, if I can stitch multiple heightmaps together, I should be able to easily modify the algorithm to act on a single heightmap, so I am not worried about this part.
Any kind of guidance would be appreciated, as I have searched and searched for a solution without success. Just a simple nudge in the right direction would be greatly appreciated! I understand that many image manipulation techniques can be applied to heightmaps, but I have been unable to find an image processing algorithm that produces the results I'm looking for. For instance, image stitching appears to only work for images that have overlapping fields of view, which is not the case with unrelated heightmaps.
Would utilizing a FFT low pass filter in some way work, or would that only be useful in generating a single tileable heightmap?
Because the algorithm is to be used in Unity3d, any C# code will have to be confined to .NET 3.5, as I believe that's the latest version Unity uses.
Thanks for any help!
Okay, it seems I was on the right track with my previous attempts at solving this problem. My initial attempt at stitching the heightmaps together involved the following steps for each point on the heightmap:
1) Find the average of a point on the heightmap and its opposite point. The opposite point is simply the first point reflected across either the x axis (if stitching horizontal edges) or the z axis (for the vertical edges).
2) Find the new height for the point using the following formula:
newHeight = oldHeight + (average - oldHeight)*((maxDistance-distance)/maxDistance);
Where distance is the distance from the point on the heightmap to the nearest horizontal or vertical edge (depending on which edge you want to stitch). Any point with a distance less than maxDistance (which is an adjustable value that affects how much of the terrain is altered) is adjusted based on this formula.
That was the old formula, and while it produced really nice results for most of the terrain, it was creating noticeable lines in the areas between the region of altered heightmap points and the region of unaltered heightmap points. I realized almost immediately that this was occurring because the slope of the altered regions was too steep in comparison to the unaltered regions, thus creating a noticeable contrast between the two. Unfortunately, I went about solving this issue the wrong way, looking for solutions on how to blur or smooth the contrasting regions together to remove the line.
After very little success with smoothing techniques, I decided to try and reduce the slope of the altered region, in the hope that it would better blend with the slope of the unaltered region. I am happy to report that this has improved my stitching algorithm greatly, removing 99% of the lines reported above.
The main culprit from the old formula was this part:
(maxDistance-distance)/maxDistance
which was producing a value between 0 and 1 linearly based on the distance of the point to the nearest edge. As the distance between the heightmap points and the edge increased, the heightmap points would utilize less and less of the average (as defined above), and shift more and more towards their original values. This linear interpolation was the cause of the too-steep slope, but luckily I found a built-in method in the Mathf class of Unity's API that allows for smooth cubic interpolation. This is the SmoothStep method.
Using this method (I believe a similar method can be found in the XNA framework), the change in how much of the average is used in determining a heightmap value is steep at middle distances, but tapers off as the distance approaches maxDistance, creating a gentler slope that better blends with the slope of the unaltered region. The new formula looks something like this:
// Weight runs from 1 at the edge (distance = 0) down to 0 at maxDistance.

// Using Mathf - Unity only?
float weight = Mathf.SmoothStep(1f, 0f, distance/maxDistance);

// Using XNA
float weight = MathHelper.SmoothStep(1f, 0f, distance/maxDistance);

// If you can't use either of the two methods above
float input = distance/maxDistance;
float weight = 1f - (3f*input*input - 2f*input*input*input);

// Then calculate the new height using this weight
newHeight = oldHeight + (average - oldHeight)*weight;
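For reference, here is the whole per-edge pass as a minimal sketch, under my own assumptions: a square float[,] heightmap being made to tile horizontally, the opposite point taken from the far edge, and maxDistance at most half the map size. The names are illustrative, not from my actual code.

// Blend the left and right edge regions toward their mutual average so the
// map tiles seamlessly in x. At x = 0 both edges meet at the same height.
void StitchHorizontal(float[,] heights, int size, int maxDistance)
{
    for (int z = 0; z < size; z++)
    {
        for (int x = 0; x < maxDistance; x++)
        {
            float left = heights[x, z];
            float right = heights[size - 1 - x, z]; // the opposite point
            float average = (left + right) * 0.5f;

            // 1 at the edge, falling smoothly to 0 at maxDistance.
            float weight = Mathf.SmoothStep(1f, 0f, (float)x / maxDistance);

            heights[x, z] = left + (average - left) * weight;
            heights[size - 1 - x, z] = right + (average - right) * weight;
        }
    }
}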
There may be even better interpolation methods that produce better stitching. I will certainly update this question if I find such a method, so anyone else looking to do heightmap stitching can find the information they need. Kudos to rincewound for being on the right track with linear interpolation!
What is done in the images you posted looks a lot like simple linear interpolation to me.
So basically: You take two images (Left, Right) and define a stitching region. For linear interpolation you could take the leftmost pixel of the left image (in the stitching region) and the rightmost pixel of the right image (also in the stitching region). Then you fill the space in between with interpolated values.
Take this example - I'm using a single line here to show the idea:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
Lets say our overlap is 4 pixels wide:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
^  ^  ^  ^ overlap/stitching region.
The leftmost value of the left image would be 10
The rightmost value of the right image would be 1.
Now we interpolate linearly between 10 and 1 across the four-pixel region (two intermediate values); our new stitching region looks as follows
stitch = [10, 07, 04, 01]
We end up with the following stitched line:
line = [11,11,11,10,07,04,01,02,02,02]
If you apply this to two complete images you should get a result similar to what you posted before.
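In C#, that single-line interpolation might look like this sketch (my own illustration; array layout as in the example above):

// Stitch two rows: keep the parts outside the overlap and linearly
// interpolate across the overlap region.
float[] StitchLine(float[] left, float[] right, int overlap)
{
    int keptLeft = left.Length - overlap;
    var line = new float[left.Length + right.Length - overlap];

    float a = left[keptLeft];     // leftmost value in the region (10 above)
    float b = right[overlap - 1]; // rightmost value in the region (1 above)

    for (int i = 0; i < keptLeft; i++)
        line[i] = left[i];

    for (int i = 0; i < overlap; i++)
        line[keptLeft + i] = a + (b - a) * i / (overlap - 1);

    for (int i = overlap; i < right.Length; i++)
        line[keptLeft + i] = right[i];

    return line;
}

Running it on the example rows with an overlap of 4 reproduces the stitched line above.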
I'm making a platformer with a tile-based map (like a lot of people). I'm just beginning in video game development, so it's a little hard. I want to learn by myself, but I'm stuck on this problem.
My maps are made with a list like this :
mapList[x][y] = tile
With this list, I can loop on all the Tiles and draw them.
What I want to do is to "loop" (repeat) the map: when the character reaches the right (or left) limit, the map repeats. I don't understand how to do this; I've searched all the forums and questions, but I found nothing :(
I don't know if I'm making myself clear, but English is not my best language, and I'm sorry for this :p
Thanks in advance for trying to help me or just for reading my issue.
When you have a grid that is WxH cells, the valid range for X is 0..W-1.
So, as a first approach:
int nextX = (X+1) % W; // wraps around to 0
but you'll also need something for prevX (X-1) and maybe for X+d where d can be positive or negative.
You don't want to mess with the modulo of negative numbers, so
int MoveX(int d) { return (X+W+d) % W; } // safe as long as d >= -W
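If d can be more negative than -W, a fully general wrap (my own addition, not from the answer above) normalizes twice:

// Wraps any integer offset into 0..w-1, however large or negative.
int WrapX(int x, int w)
{
    return ((x % w) + w) % w;
}

// Example: WrapX(-1, 10) == 9, WrapX(25, 10) == 5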
I'm measuring some system performance data to store it in a database. From those data points I'm drawing line graphs over time. By their nature, those data points are a bit noisy, i.e., every single point deviates at least a bit from the local mean value. When drawing the line graph straight from one point to the next, it produces jagged graphs. At a large time scale like > 10 data points per pixel, this noise is compressed into a wide jagged line area that is, say, 20px high instead of 1px as at smaller scales.
I've read about line smoothing, anti-aliasing, simplifying and all these things. But everything I've found seems to be about something else.
I don't need anti-aliasing, .NET already does that for me when drawing the line on the screen.
I don't want simplification. I need the extreme values to remain visible, at least most of them.
I think it goes in the direction of spline curves, but I couldn't find many example images to evaluate whether the described thing is what I want. I did find a highly scientific book at Google Books, though, full of half-page-long formulas, which I didn't feel like reading through just now...
To give you an example, just look at Linux/GNOME's system monitor application. It draws the recent CPU/memory/network usage with a smoothed line. This may be a bit oversimplified, but I'd give it a try and see if I can tweak it.
I'd prefer C# code, but algorithms or code in other languages are fine, too, as long as I can port them to C# without external references.
You can do some data smoothing. Instead of using the real data, apply a simple smoothing algorithm that keeps the peaks, like a Savitzky-Golay filter.
You can get the coefficients here.
The easiest to do is:
Take the top coefficients from the website I linked to:
// For np = 5 data points
var h = 35.0f;                                      // normalization factor
var coeff = new float[] { 17, 12, -3 };             // coefficients from the site
var easyCoeff = new float[] { -3, 12, 17, 12, -3 }; // it's symmetrical
var center = 2;                                     // index of the center of the easyCoeff array

// Now, for every point of your data, you calculate a smoothed point:
smoothed[x] =
    ((data[x - 2] * easyCoeff[center - 2]) +
     (data[x - 1] * easyCoeff[center - 1]) +
     (data[x - 0] * easyCoeff[center - 0]) +
     (data[x + 1] * easyCoeff[center + 1]) +
     (data[x + 2] * easyCoeff[center + 2])) / h;
The first 2 and last 2 points you cannot smooth when using 5 points.
If you want your data to be more "smoothed", you can experiment with coefficients for larger numbers of data points.
Now you can draw a line through your "smoothed" data. The larger your np (number of points), the smoother your data. But you also lose peak accuracy, though not as much as when simply averaging points together.
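Wrapped into a loop, a minimal sketch (my own wrapper around the kernel above; edge points are copied unsmoothed, per the note about the first and last two points):

// Apply the 5-point Savitzky-Golay kernel to a whole series.
float[] SmoothSavitzkyGolay5(float[] data)
{
    float[] kernel = { -3f, 12f, 17f, 12f, -3f };
    const float h = 35f;

    var smoothed = new float[data.Length];
    for (int x = 0; x < data.Length; x++)
    {
        if (x < 2 || x > data.Length - 3)
        {
            smoothed[x] = data[x]; // edges: not enough neighbors, copy as-is
            continue;
        }

        float sum = 0f;
        for (int k = -2; k <= 2; k++)
            sum += data[x + k] * kernel[k + 2];
        smoothed[x] = sum / h;
    }
    return smoothed;
}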
You cannot fix this in the graphics code. If your data is noisy then the graph is going to be noisy as well, no matter what kind of line smoothing algorithm you use. You'll need to filter the data first. Create a second data set with points that are interpolated from the original data. A Least Squares fit is a common technique. Averaging is simple to implement but tends to hide extremes.
I think what you are looking for is a routine to provide 'splines'. Here is a link describing splines:
http://en.wikipedia.org/wiki/Spline_(mathematics)
If that is the case I don't have any recommendations for a spline library, but an initial google search turned up a bunch.
Sorry for no code, but hopefully knowing the terminology will aid you in your search.
Bob
Reduce the number of data points using MIN/MAX/AVG before you display them. It'll look nicer, and it'll be faster.
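A sketch of that reduction, assuming raw float samples bucketed per on-screen pixel (the names are illustrative): drawing a vertical min-max bar plus the average per pixel keeps the extremes visible while shrinking the data.

// Collapse one bucket of samples into (min, max, avg) before plotting.
void ReduceBucket(float[] samples, int start, int count,
                  out float min, out float max, out float avg)
{
    min = float.MaxValue;
    max = float.MinValue;
    float sum = 0f;

    for (int i = start; i < start + count; i++)
    {
        float s = samples[i];
        if (s < min) min = s;
        if (s > max) max = s;
        sum += s;
    }
    avg = sum / count;
}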
Graphs of network traffic often use a weighted average. You can sample once per second into a circular list of length 10 and for the graph, at each sample, graph the average of the samples.
If 10 isn't enough you can store many more. You don't need to recalculate the average from scratch, either:
new_average = (old_average*10 - replaced_sample + new_sample)/10
If you don't want to store all 10, however, you can approximate with this:
new_average = old_average*9/10 + new_sample/10
Lots of routers use this to save on storage. This ramps toward the current traffic rate exponentially.
If you do implement this, do something like this:
new_average = old_average*min(9,number_of_samples)/10 + new_sample/10
number_of_samples++
to avoid the initial ramp-up. You should also adjust the 9/10, 1/10 ratio to actually reflect the time period of each sample, because your timer won't fire exactly once per second.
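As a concrete sketch, here is the approximate version as a tiny C# class. It is a slight variation on the pseudocode above: the new sample's weight is (10 - w)/10 rather than a fixed 1/10, so the two weights always sum to one during the warm-up (field and method names are mine):

// Exponential moving average with a warm-up ramp.
class SmoothedSample
{
    private float average;
    private int sampleCount;

    public float Add(float newSample)
    {
        // Old-average weight ramps from 0/10 up to 9/10 over the
        // first ten samples, avoiding the initial climb from zero.
        int w = Math.Min(9, sampleCount);
        average = average * w / 10f + newSample * (10 - w) / 10f;
        sampleCount++;
        return average;
    }
}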
I am using XNA to build a project where I can draw "graffiti" on my wall using an LCD projector and a monochrome camera that is filtered to see only hand held laser dot pointers. I want to use any number of laser pointers -- don't really care about differentiating them at this point.
The wall is 10' x 10', and the camera is only 640x480 so I'm attempting to use sub-pixel measurement using a spline curve as outlined here: tpub.com
The camera runs at 120fps (8-bit), so my question to you all is: what is the fastest way to find that subpixel laser dot center? Currently I'm using a brute-force 2D search to find the brightest pixel on the image (0 - 254) before doing the spline interpolation. That method is not very fast, and each frame takes longer to process than the interval at which frames arrive.
Edit: To clarify, in the end my camera data is represented by a 2D array of bytes indicating pixel brightness.
What I'd like to do is use an XNA shader to crunch the image for me. Is that practical? From what I understand, there really isn't a way to keep persistent variables in a Pixel Shader such as running totals, averages, etc.
But for argument's sake, let's say I found the brightest pixels using brute force, then stored them and their neighboring pixels for the spline curve into X number of vertices using texcoords. Is it practical then to use HLSL to compute a spline curve using texcoords?
I am also open to suggestions outside of my XNA box, be it DX10/DX11, maybe some sort of FPGA, etc. I just don't really have much experience with crunching data in this way. I figure if they can do something like this on a Wii-Mote using 2 AA batteries, then I'm probably going about this the wrong way.
Any ideas?
If by brute-forcing you mean looking at every pixel independently, it is basically the only way of doing it. You will have to scan through all the image's pixels, no matter what you want to do with the image. Although, you might not need to find the brightest pixels: you can filter the image by color (e.g., if you're using a red laser). This is easily done using an HSV-encoded image. If you are looking for faster algorithms, try OpenCV. It's been optimized again and again for image processing, and you can use it in C# via a wrapper:
http://www.codeproject.com/KB/cs/Intel_OpenCV.aspx
OpenCV can also help you easily find the point centers and track each point.
Is there a reason you are using a 120fps camera? You know the human eye can only see about 30fps, right? I'm guessing it's to follow very fast laser movements... You might want to consider bringing it down, because real-time processing at 120fps will be very hard to achieve.
Running through 640*480 bytes to find the highest byte should run within a millisecond, even on slow processors. No need to take the route of shaders.
I would advise optimizing your loop.
For instance, this is really slow (because it does extra index arithmetic with every array lookup):
byte highest = 0;
int foundX = -1, foundY = -1;
for (int y = 0; y < 480; y++)
{
    for (int x = 0; x < 640; x++)
    {
        if (myBytes[x][y] > highest)
        {
            highest = myBytes[x][y];
            foundX = x;
            foundY = y;
        }
    }
}
this is much faster:
byte[] myBytes = new byte[640 * 480];
// fill it with your image

byte highest = 0;
int found = -1, foundX = -1, foundY = -1;
int len = 640 * 480;
for (int i = 0; i < len; i++)
{
    if (myBytes[i] > highest)
    {
        highest = myBytes[i];
        found = i;
    }
}
if (found != -1)
{
    foundX = found % 640; // column from the flat index
    foundY = found / 640; // row from the flat index
}
This is off the top of my head so sorry for errors ;^)
You're dealing with some pretty complex maths if you want sub-pixel accuracy. I think this paper is something to consider. Unfortunately, you'll have to pay to see it using that site. If you've got access to a suitable library, they may be able to get hold of it for you.
The link in the original post suggested doing 1000 spline calculations for each axis - it treated x and y independently, which is OK for circular images but is a bit off if the image is a skewed ellipse. You could use the following to get a reasonable estimate:
xc = sum(xn * f(xn)) / sum(f(xn))
where xc is the mean, xn is a point along the x-axis, and f(xn) is the value at the point xn. So for this:
*
* *
* *
* *
* *
* *
* * *
* * * *
* * * *
* * * * * *
------------------
2 3 4 5 6 7
gives:
sum (xn * f(xn)) = 2*1 + 3*3 + 4*9 + 5*10 + 6*4 + 7*1 = 128
sum (f(xn)) = 1 + 3 + 9 + 10 + 4 + 1 = 28
xc = 128 / 28 = 4.57
and repeat for the y-axis.
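In C#, the centroid over a small window around the brightest pixel might look like this sketch (assuming the flat byte[] layout from the loop-optimization answer above; the window radius is my choice):

// Weighted centroid (center of mass) of brightness in a (2r+1)^2 window
// around (cx, cy). Returns sub-pixel coordinates.
void Centroid(byte[] img, int width, int height, int cx, int cy, int r,
              out float xc, out float yc)
{
    float sum = 0f, sumX = 0f, sumY = 0f;

    for (int y = Math.Max(0, cy - r); y <= Math.Min(height - 1, cy + r); y++)
    {
        for (int x = Math.Max(0, cx - r); x <= Math.Min(width - 1, cx + r); x++)
        {
            float f = img[y * width + x];
            sum += f;
            sumX += x * f;
            sumY += y * f;
        }
    }

    // xc = sum(xn * f(xn)) / sum(f(xn)), and likewise for y.
    xc = sumX / sum;
    yc = sumY / sum;
}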
Brute force is the only real way; however, your idea of using a shader is good. You'd be offloading the brute-force check from the CPU, which can only look at a small number of pixels simultaneously (roughly 1 per core), to the GPU, which likely has 100+ dumb cores (pipelines) that can compare pixels simultaneously (your algorithm may need to be modified a bit to work well with the one-instruction-many-cores arrangement of a GPU).
The biggest issue I see is whether or not you can move that data to the GPU fast enough.
Another optimization to consider: if you're drawing, then the current location of the pointer is probably close to the last location of the pointer. Remember the last recorded position of the pointer between frames and only scan a region close to that position... say a 1'x1' area. Only if the pointer isn't found in that area should you scan the whole surface.
Obviously, there will be a tradeoff between how quickly your program can scan and how quickly you'll be able to move the pointer before the camera "loses" it and has to fall back to the slow, full-image scan. A little experimentation will probably reveal the optimum value.
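A sketch of that windowed search, reusing the flat-array scan from the loop-optimization answer (the threshold parameter and names are my own assumptions):

// Scan only a square window around the last known dot position. Returns
// the flat index of the brightest pixel, or -1 if nothing in the window
// was bright enough (dot lost), signalling a full-frame scan.
int FindDotNear(byte[] img, int width, int height,
                int lastX, int lastY, int radius, byte threshold)
{
    int x0 = Math.Max(0, lastX - radius), x1 = Math.Min(width - 1, lastX + radius);
    int y0 = Math.Max(0, lastY - radius), y1 = Math.Min(height - 1, lastY + radius);

    byte highest = 0;
    int found = -1;

    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
        {
            byte b = img[y * width + x];
            if (b > highest) { highest = b; found = y * width + x; }
        }

    return highest >= threshold ? found : -1;
}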
Cool project, by the way.
Put the camera slightly out of focus and bitblt against a neutral sample. You can quickly scan rows for non-zero values. Also, if you are at 8 bits and pick up 4 bytes at a time, you can process the image faster. As others pointed out, you might reduce the frame rate. If you have less fidelity than the resulting image, there isn't much point in the high scan rate.
(The slightly out-of-focus camera will help pick up just the brightest points and reduce false positives if you have a busy surface... assuming, of course, you are not shooting a smooth/flat surface.)
Start with a black output buffer. Forget about subpixel for now. Every frame, every pixel, do this:
outbuff=max(outbuff,inbuff);
Do subpixel filtering to a third "clean" buffer when you're done with the image. Or do a chunk or a line of the screen at a time in real time. Advantage: real-time "rough" view of the drawing, cleaned up as you go.
When you convert from the rough output buffer to the "clean" third buffer, you can clear the rough to black. This lets you keep drawing forever without slowing down.
By drawing the "clean" over top the "rough," maybe in a slightly different color, you'll have the best of both worlds.
This is similar to what paint programs do--if you draw really fast, you see a rough version, then the paint program "cleans up" the image when it has time.
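The accumulation step as a tiny sketch (assuming byte-per-pixel buffers like those earlier in the thread):

// Keep the brightest value ever seen at each pixel; the laser's path
// persists in outBuff even after the dot moves on.
void Accumulate(byte[] inBuff, byte[] outBuff)
{
    for (int i = 0; i < inBuff.Length; i++)
        if (inBuff[i] > outBuff[i])
            outBuff[i] = inBuff[i];
}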
Some comments on the algorithm:
I've seen a lot of cheats in this arena. I've played Sonic on a Sega Genesis emulator that upsamples, and it has some pretty wild algorithms that work very well and are very fast.
You actually have some advantages you can gain, because you might know the brightness and the radius of the dot.
You might just look at each pixel and its 8 neighbors and let those 9 pixels "vote" according to their brightness for where the sub-pixel center lies.
Other thoughts
Your hand is not that accurate when you control a laser pointer. Try getting all the dots every 10 frames or so, identifying which beams are which (based on previous motion, and accounting for new dots, turned-off lasers, and dots that have entered or left the visual field), then just drawing a high-resolution curve. Don't worry about sub-pixel accuracy in the input--just draw the curve into the high-res output.
Use a Catmull-Rom spline, which goes through all control points.
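For reference, evaluating one segment of a uniform Catmull-Rom spline looks like this sketch (the standard formula; Vector2 is XNA's, matching the question's setup -- XNA also ships this built in as Vector2.CatmullRom):

// Point on the Catmull-Rom segment between p1 and p2, with p0 and p3 as
// outer control points; t runs from 0 (at p1) to 1 (at p2).
Vector2 CatmullRom(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5f * (
        2f * p1 +
        (p2 - p0) * t +
        (2f * p0 - 5f * p1 + 4f * p2 - p3) * t2 +
        (3f * p1 - p0 - 3f * p2 + p3) * t3);
}

The curve passes through p1 and p2 exactly, which is why it suits drawing through recorded dot positions.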