I get three IntPtrs to the arrays of the RGB channels from an external library.
At the moment I merge the three arrays into one and create an ImageSource from the new array.
But the images can be really huge (currently up to 8000 x 4000 px), so converting data that is already sitting in memory takes too long.
Is there a way to use these pointers to show the image on a canvas without copying? I.e. a derived class of ImageSource with a custom OnRender method, or something else?
I haven't found anything related to my problem.
Update:
My current code looks like this:
int unmapBytes = Math.Abs(stride) - (width * 3);
byte* _ptrR = (byte*)ptrR;
byte* _ptrG = (byte*)ptrG;
byte* _ptrB = (byte*)ptrB;

BitmapSource bmpsrc = null;
App.Current.Dispatcher.Invoke(() =>
{
    bmpsrc = BitmapSource.Create(width,
                                 height,
                                 96,
                                 96,
                                 PixelFormats.Bgr24,
                                 null,
                                 new byte[bytes],
                                 stride);
});

BitmapBuffer bitmapBuffer = new BitmapBuffer(bmpsrc);
byte* buffer = (byte*)bitmapBuffer.BufferPointer;

Parallel.For(0, bytes / 3 - height, (offset) =>
{
    int i = offset * 3 + ((offset + 1) / width) * unmapBytes;
    *(buffer + i)     = *(_ptrB + offset);
    *(buffer + i + 1) = *(_ptrG + offset);
    *(buffer + i + 2) = *(_ptrR + offset);
});

return bmpsrc;
WPF image sources are ultimately textures residing on the GPU, so they have to be in very specific formats. You won't get by with your three separate arrays in this world.
However, 8000x4000 is only 32 megapixels (times the color bytes, so roughly 96 MB at 24 bpp); that's nothing to copy around in RAM. If you really profiled your slowdown down to this copy, I'd wager something else is wrong (using List<> or similar growable collections instead of preallocating the whole buffer, redoing the computation more than once, etc.).
As one optimization tip off the top of my head: don't use the naive implementation with three pointers advancing at the same time; process one array at a time so it stays in your L1 cache.
The right answer is:
Get rid of expensive computations inside the loop. In this case that is the division: roughly speaking, any computation that doesn't map to a cheap CPU instruction is expensive in a hot loop.
Second, a Parallel.For loop can increase speed, but only if each iteration has a large enough amount of work; otherwise the scheduling overhead dominates.
So I changed my code to use a Parallel.For loop over the lines, with an inner for loop over the pixels in each line.
Now I can convert an 8000x4000 24-bit RGB image in 32 ms (on my system, roughly 1 megapixel per millisecond).
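To make the restructuring concrete, here is a minimal sketch of that loop shape (in Python for illustration; the real code is C#, and the array names here are hypothetical). The row base offset is computed once per row, so the per-pixel division from the original code disappears, and in the C# version each iteration of the outer loop becomes one Parallel.For task:

```python
def interleave_bgr(r, g, b, width, height, stride):
    """Merge three flat channel arrays into one BGR buffer with row padding.

    One outer iteration per row (a Parallel.For task in the C# version);
    the row base offset is hoisted out, so the inner loop has no division.
    """
    buf = bytearray(height * stride)
    for y in range(height):
        row = y * stride    # computed once per row, not per pixel
        src = y * width
        for x in range(width):
            i = row + 3 * x
            buf[i]     = b[src + x]   # BGR order to match PixelFormats.Bgr24
            buf[i + 1] = g[src + x]
            buf[i + 2] = r[src + x]
    return buf
```
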
For the future: everyone whose question gets voted down wants to know why. If you don't know the answer, or only have nonsense to offer, please don't post.
I am drawing a line on a graph from numbers read from a text file. There is a number on each line of the file which corresponds to the X co-ordinate, while the Y co-ordinate is the line it is on.
The requirements have now changed to include "special events": if the number on the line is followed by the word "special", a spike should appear, like the image below.
Currently the only way I can find is to use a separate line for each spike; however, there could be a large number of these special events, so this needs to be modular. It seems an inefficient and clumsy way to program it.
Is it possible to add the spikes to the same graph line? Or is it possible to use just one additional line and have it broken (invisible), showing only where the spikes are meant to be seen?
I have looked at using bar graphs, but due to other items on the graph I cannot.
The DataPoints of a Line Chart are connected, so it is not possible to really break it apart. However, each segment leading to a DataPoint can have its own color, and that includes Color.Transparent, which lends itself to a simple trick.
Without adding extra Series or Annotations, your two questions can be solved like this:
To simply add the 'spikes' you show in the 2nd graph, all you need to do is insert two suitable data points, the second being identical to the point the spike is connected to.
To add an unconnected line, you need to 'jump' to its beginning by adding one extra point with a transparent color.
Here are two example methods:
void addSpike(Series s, int index, double spikeWidth)
{
    DataPoint dp = s.Points[index];
    DataPoint dp1 = new DataPoint(dp.XValue + spikeWidth, dp.YValues[0]);
    s.Points.Insert(index + 1, dp1);
    s.Points.Insert(index + 2, dp);
}

void addLine(Series s, int index, double spikeDist, double spikeWidth)
{
    DataPoint dp = s.Points[index];
    DataPoint dp1 = new DataPoint(dp.XValue + spikeDist, dp.YValues[0]);
    DataPoint dp2 = new DataPoint(dp.XValue + spikeWidth, dp.YValues[0]);
    DataPoint dp0 = dp.Clone();
    dp1.Color = Color.Transparent;
    dp2.Color = dp.Color;
    dp2.BorderWidth = 2;  // optional
    dp0.Color = Color.Transparent;
    s.Points.Insert(index + 1, dp1);
    s.Points.Insert(index + 2, dp2);
    s.Points.Insert(index + 3, dp0);
}
You can call them like this:
addSpike(chart1.Series[0], 3, 50d);
addLine(chart1.Series[0], 6, 30d, 80d);
Note that they add 2 or 3 DataPoints to the Points collection!
Of course you can set the Color and width (aka BorderWidth) of the extra lines as you wish, and also include them in the parameter list.
If you want to keep the original points collection unchanged, you can also simply create one 'spikes series' and add the spike points there. The trick is the same: 'jump' to the new points with a transparent line!
I've spent a few hours today researching how random terrain generation tends to be done, and after reading about the plasma fractal (midpoint displacement and diamond-square algorithms) I decided to have a go at implementing one. My result is actually not terrible, but I have these horrible square/line/grid-type artefacts that I just cannot seem to get rid of!
When rendered as a gray scale image my height map looks something like:
height map http://sphotos-d.ak.fbcdn.net/hphotos-ak-ash3/535816_10151739010123327_225111175_n.jpg
Obviously there is a fair amount of code involved, so I will try to post only what is relevant. I've not posted the code that turns the height map into a texture, for example; but don't worry, I have already tried just filling my height array with a smooth gradient and the texture comes out fine :)
I begin by setting the four corners of the map to random values between 0 and 1 and then start the recursive displacement algo:
public void GenerateTerrainLayer()
{
    //set the four corners of the map to random values
    TerrainData[0, 0] = (float)RandomGenerator.NextDouble();
    TerrainData[GenSize, 0] = (float)RandomGenerator.NextDouble();
    TerrainData[0, GenSize] = (float)RandomGenerator.NextDouble();
    TerrainData[GenSize, GenSize] = (float)RandomGenerator.NextDouble();

    //begin midpoint displacement algorithm...
    MidPointDisplace(new Vector2_I(0, 0), new Vector2_I(GenSize, 0), new Vector2_I(0, GenSize), new Vector2_I(GenSize, GenSize));
}
TerrainData is simply a 2D array of floats*. Vector2_I is just my own integer vector class. The last four functions are: MidPointDisplace, the recursive function; CalculateTerrainPointData, which averages 2 data values and adds some noise; CalculateTerrainPointData2, which averages 4 data values, adds some noise and uses a slightly higher scale value (it is only used for center points); and finally my noise function, which at the moment is just random noise rather than a real noise function like Perlin. They look like this:
private void MidPointDisplace(Vector2_I topleft, Vector2_I topright, Vector2_I bottomleft, Vector2_I bottomright)
{
    //check the size of the square we're working on; if it's smaller than a certain amount, stop - we've done enough
    if (topright.X - topleft.X < DisplacementMaxLOD)
    {
        return;
    }

    //calculate the positions of all the middle points for the square passed to the function
    Vector2_I MidLeft, MidRight, MidTop, MidBottom, Center;
    MidLeft.X = topleft.X;
    MidLeft.Y = topleft.Y + ((bottomleft.Y - topleft.Y) / 2);
    MidRight.X = topright.X;
    MidRight.Y = topright.Y + ((bottomright.Y - topright.Y) / 2);
    MidTop.X = topleft.X + ((topright.X - topleft.X) / 2);
    MidTop.Y = topleft.Y;
    MidBottom.X = bottomleft.X + ((bottomright.X - bottomleft.X) / 2);
    MidBottom.Y = bottomleft.Y;
    Center.X = MidTop.X;
    Center.Y = MidLeft.Y;

    //collect the existing data from the corners of the area passed to the algorithm
    float TopLeftDat, TopRightDat, BottomLeftDat, BottomRightDat;
    TopLeftDat = GetTerrainData(topleft.X, topleft.Y);
    TopRightDat = GetTerrainData(topright.X, topright.Y);
    BottomLeftDat = GetTerrainData(bottomleft.X, bottomleft.Y);
    BottomRightDat = GetTerrainData(bottomright.X, bottomright.Y);

    //average the data and insert it for the midpoints, and the center
    SetTerrainData(MidLeft.X, MidLeft.Y, CalculateTerrainPointData(TopLeftDat, BottomLeftDat, MidLeft.X, MidLeft.Y));
    SetTerrainData(MidRight.X, MidRight.Y, CalculateTerrainPointData(TopRightDat, BottomRightDat, MidRight.X, MidRight.Y));
    SetTerrainData(MidTop.X, MidTop.Y, CalculateTerrainPointData(TopLeftDat, TopRightDat, MidTop.X, MidTop.Y));
    SetTerrainData(MidBottom.X, MidBottom.Y, CalculateTerrainPointData(BottomLeftDat, BottomRightDat, MidBottom.X, MidBottom.Y));
    SetTerrainData(Center.X, Center.Y, CalculateTerrainPointData2(TopLeftDat, TopRightDat, BottomLeftDat, BottomRightDat, Center.X, Center.Y));
    debug_displacement_iterations++;

    //and recursively fire off new calls to the function to do the smaller squares
    Rectangle NewTopLeft = new Rectangle(topleft.X, topleft.Y, Center.X - topleft.X, Center.Y - topleft.Y);
    Rectangle NewTopRight = new Rectangle(Center.X, topright.Y, topright.X - Center.X, Center.Y - topright.Y);
    Rectangle NewBottomLeft = new Rectangle(bottomleft.X, Center.Y, Center.X - bottomleft.X, bottomleft.Y - Center.Y);
    Rectangle NewBottomRight = new Rectangle(Center.X, Center.Y, bottomright.X - Center.X, bottomright.Y - Center.Y);
    MidPointDisplace(new Vector2_I(NewTopLeft.Left, NewTopLeft.Top), new Vector2_I(NewTopLeft.Right, NewTopLeft.Top), new Vector2_I(NewTopLeft.Left, NewTopLeft.Bottom), new Vector2_I(NewTopLeft.Right, NewTopLeft.Bottom));
    MidPointDisplace(new Vector2_I(NewTopRight.Left, NewTopRight.Top), new Vector2_I(NewTopRight.Right, NewTopRight.Top), new Vector2_I(NewTopRight.Left, NewTopRight.Bottom), new Vector2_I(NewTopRight.Right, NewTopRight.Bottom));
    MidPointDisplace(new Vector2_I(NewBottomLeft.Left, NewBottomLeft.Top), new Vector2_I(NewBottomLeft.Right, NewBottomLeft.Top), new Vector2_I(NewBottomLeft.Left, NewBottomLeft.Bottom), new Vector2_I(NewBottomLeft.Right, NewBottomLeft.Bottom));
    MidPointDisplace(new Vector2_I(NewBottomRight.Left, NewBottomRight.Top), new Vector2_I(NewBottomRight.Right, NewBottomRight.Top), new Vector2_I(NewBottomRight.Left, NewBottomRight.Bottom), new Vector2_I(NewBottomRight.Right, NewBottomRight.Bottom));
}

//helper function returning a data value averaged from two inputs, with a noise value added for randomness; the result is clamped to keep it in range
private float CalculateTerrainPointData(float DataA, float DataB, int NoiseX, int NoiseY)
{
    return MathHelper.Clamp(((DataA + DataB) / 2.0f) + NoiseFunction(NoiseX, NoiseY), 0.0f, 1.0f) * 1.0f;
}

//helper function returning a data value averaged from four inputs, with a noise value added for randomness; the result is clamped to keep it in range
private float CalculateTerrainPointData2(float DataA, float DataB, float DataC, float DataD, int NoiseX, int NoiseY)
{
    return MathHelper.Clamp(((DataA + DataB + DataC + DataD) / 4.0f) + NoiseFunction(NoiseX, NoiseY), 0.0f, 1.0f) * 1.5f;
}

private float NoiseFunction(int x, int y)
{
    return (float)(RandomGenerator.NextDouble() - 0.5) * 0.5f;
}
Ok, thanks for taking the time to look - hopefully someone knows where this grid-like pattern is coming from :)
*edit - accidentally wrote ints, corrected to floats
I identified 3 problems in your code (2 of which are related):
You don't scale down the randomness in each step. There must be a reduction of the random amplitude at each level, otherwise you get white(-ish) noise. Choose a factor (0.5-0.7 worked fine for my purposes) and multiply the amplitude by it on each recursion, scaling the generated random number by that amplitude.
You swapped the diamond and square steps. First the diamonds, then the squares; the other way round is impossible (see the next point).
Your square step uses only points in one direction. This is probably what causes the rectangular structures you are talking about. The square step must average the values on all four sides, which means it depends on the points generated by the diamond step - and not only the diamond point of the rectangle you are currently looking at, but also those of the neighbouring rectangles. For values outside the map you can either wrap, use a fixed value, or average only 3 values.
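A minimal, hypothetical Python sketch of a correctly ordered diamond-square pass incorporating all three fixes (grid size, seed and roughness factor are arbitrary choices, not taken from the question's code):

```python
import random

def diamond_square(n, roughness=0.6, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap via diamond-square.

    Diamond step runs before the square step, squares average up to four
    neighbours (out-of-range points are simply skipped), and the random
    amplitude is scaled down by `roughness` on every iteration.
    """
    size = 2 ** n + 1
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    for cy in (0, size - 1):                     # random corner seeds
        for cx in (0, size - 1):
            h[cy][cx] = rng.random()
    step, amp = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: centre of each square, from its four corners
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + (rng.random() - 0.5) * amp
        # square step: edge midpoints, averaging up to four diamond neighbours
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                vals = []
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        vals.append(h[ny][nx])
                h[y][x] = sum(vals) / len(vals) + (rng.random() - 0.5) * amp
        step, amp = half, amp * roughness        # shrink step AND amplitude
    return h
```
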
I see a problem in your CalculateTerrainPointData implementation: you're not scaling down the result of NoiseFunction with each iteration.
See this description of the Midpoint Displacement algorithm:
Start with a single horizontal line segment.
Repeat for a sufficiently large number of times:
Repeat over each line segment in the scene:
Find the midpoint of the line segment.
Displace the midpoint in Y by a random amount.
Reduce the range for random numbers.
A quick way to do this in your code without changing too much: add a scale parameter to MidPointDisplace (defaulting to 1.0f) and CalculateTerrainPointData; use it in CalculateTerrainPointData to multiply the result of NoiseFunction; and reduce it on each recursive call, MidPointDisplace(..., 0.5f * scale).
I'm not sure, though, whether that is the only cause of your image looking wrong or there are other problems as well.
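To make the amplitude-halving concrete, here is a hypothetical 1-D version of the recursion the description above outlines (the names are illustrative, not the question's):

```python
import random

def midpoint_displace(heights, lo, hi, scale=1.0):
    """1-D midpoint displacement: the random amplitude `scale` is halved
    on every recursive call, matching the 'reduce the range' step."""
    if hi - lo < 2:
        return
    mid = (lo + hi) // 2
    heights[mid] = ((heights[lo] + heights[hi]) / 2.0
                    + (random.random() - 0.5) * scale)
    midpoint_displace(heights, lo, mid, scale * 0.5)
    midpoint_displace(heights, mid, hi, scale * 0.5)
```
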
According to Wikipedia's summary of midpoint displacement, only the average for the centermost point gets noise added to it - try adding noise only in CalculateTerrainPointData2 and removing the noise from CalculateTerrainPointData.
I need to detect a spiral-shaped spring and count its coil turns.
I have tried the following:
Image<Bgr, Byte> ProcessImage(Image<Bgr, Byte> img)
{
    Image<Bgr, Byte> imgClone = img.Clone();
    Bgr bgrRed = new Bgr(System.Drawing.Color.Red);

    #region Algorithm 1
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone.PyrUp();
    imgClone.PyrDown();
    imgClone._EqualizeHist();
    imgClone._Dilate(20);
    imgClone._EqualizeHist();
    imgClone._Erode(10);

    Image<Gray, Byte> imgCloneGray = new Image<Gray, byte>(imgClone.Width, imgClone.Height);
    CvInvoke.cvCvtColor(imgClone, imgCloneGray, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_BGR2GRAY);
    imgCloneGray = imgCloneGray.Canny(c_thresh, c_threshLink); //, (int)c_threshSize);
    Contour<System.Drawing.Point> pts = imgCloneGray.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
    CvInvoke.cvCvtColor(imgCloneGray, imgCloneYcc, Emgu.CV.CvEnum.COLOR_CONVERSION.CV_GRAY2BGR);

    if (null != pts)
    {
        imgClone.Draw(pts, bgrRed, 2);
        imgClone.Draw(pts.BoundingRectangle, bgrRed, 2);
    }
    #endregion

    return imgClone;
}
I am somehow able to detect the spring, but how do I get the count? I am looking for algorithms.
I am currently not looking for speed optimization.
This is similar to counting fingers. The spring's spiral is very thin, which makes it hard to capture with contours. What else can be done? http://www.luna-arts.de/others/misc/HandsNew.zip
You have a good final binarization there, but it looks too restricted to this single case. I would do a relatively simple, but probably more robust, preprocessing step to allow a good binarization. Mathematical Morphology has a transform called h-dome (and the closely related h-maxima/h-minima), which removes irrelevant minima/maxima by suppressing those of height < h. This operation might not be readily available in your image processing library, but it is not hard to implement. To binarize the preprocessed image I opted for Otsu's method, since it is automatic and statistically optimal.
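For readers without such a library, here is a dependency-light NumPy sketch of the h-maxima transform (what the Matlab code below computes with imhmax): the morphological reconstruction of (f - h) under f by iterated geodesic dilation. The 3x3 structuring element and the naive fixed-point iteration are simplifications; production implementations use faster reconstruction algorithms.

```python
import numpy as np

def grey_dilate3(a):
    # 3x3 grey-scale dilation, implemented as a max over shifted views
    p = np.pad(a, 1, mode='edge')
    return np.max([p[y:y + a.shape[0], x:x + a.shape[1]]
                   for y in range(3) for x in range(3)], axis=0)

def h_maxima(f, h):
    """Flatten maxima whose height above their surroundings is < h.

    Reconstruction of (f - h) under f: dilate the marker, clip it to f,
    repeat until the marker stops changing (a fixed point is reached).
    """
    f = np.asarray(f, dtype=float)
    marker = f - h
    while True:
        d = np.minimum(grey_dilate3(marker), f)
        if np.array_equal(d, marker):
            return d
        marker = d
```

A tall peak survives with its height reduced by h, while a shallow peak is merged into the background, which is exactly what makes the subsequent Otsu binarization robust.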
Here is the input image after h-dome transformations, and the binary image:
Now, to count the number of "spiral turns" I did something very simple: I split the spirals so I can count them as connected components. To split them I did a single morphological opening with a vertical line, followed by a single dilation by an elementary square. This produces the following image:
Counting the components gives 15. The 13 coils that are not too close together were each counted correctly; the groups at the left and at the right were each counted as a single component.
The full Matlab code used for these steps:
f = rgb2gray(imread('http://i.stack.imgur.com/i7x7L.jpg'));
% For this image, the next two lines are optional, as they lead to
% basically the same binary image.
f1 = imhmax(f, 30);
f2 = imhmin(f1, 30);
bin1 = ~im2bw(f2, graythresh(f2));
bin2 = bwmorph(imopen(bin1, strel('line', 15, 90)), 'dilate');
% count the resulting connected components
[~, n] = bwlabel(bin2);
I am looking for a way to extract information from a chromatogram produced by a GC or HPLC. A chromatogram looks like this:
I am not really into image processing/analysis, so I'm looking for a tool/algorithm to extract the peak heights (and, if possible, the area under each peak) from those chromatograms. Solutions can be in either Python or C#.
Thanks in advance.
I've written some quick python code that will extract chromatogram (or any single-valued) data from an image file.
It has the following requirements:
Image is clean (no text or other data).
Curve is single-valued, i.e. a curve pixel width of one (it will still work without this, but it will always take the uppermost value).
Scales are linear.
It is very simple: it just iterates through each column of the image and takes the first black pixel as the data point. It uses PIL. These data points are initially in the image co-ordinate system, so they need to be rescaled to the data co-ordinate system. If all your images share the same axes this is straightforward; otherwise it needs to be done manually, on a per-image basis (automating that would be more involved).
The image below shows the region I extracted from your image for processing (the non-pink region; I removed the text). For re-scaling we just take the white box region in the data co-ordinate system: x_range = 4.4 - 0.55, x_offset = 0.55, y_range = 23000 - 2500, and y_offset = 2500.
Here is the extracted data replotted with pyplot:
Here is the code:
import Image
import numpy as np

def get_data(im, x_range, x_offset, y_range, y_offset):
    x_data = np.array([])
    y_data = np.array([])
    width, height = im.size
    im = im.convert('1')
    for x in xrange(width):
        for y in xrange(height):
            if im.getpixel((x, y)) == 0:
                x_data = np.append(x_data, x)
                y_data = np.append(y_data, height - y)
                break
    x_data = (x_data / width) * x_range + x_offset
    y_data = (y_data / height) * y_range + y_offset
    return x_data, y_data

im = Image.open('clean_data_2.png')
x_data, y_data = get_data(im, 4.4 - 0.55, 0.55, 23000 - 2500, 2500)

from pylab import *
plot(x_data, y_data)
grid(True)
savefig('new_data.png')
show()
Once you have your data as numpy arrays, there are many options for finding peaks and the corresponding areas under them (see this discussion for some approaches). Noise is a large concern, so a general approach is to convolve the data to smooth the noise out (or to threshold, if your peaks are sharp), then differentiate to find the peaks. To find the area under a peak you can integrate numerically across the peak region.
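As a hypothetical illustration of the smooth-then-differentiate idea (the window size is an arbitrary choice):

```python
import numpy as np

def smooth_and_diff(y, window=5):
    """Moving-average smoothing followed by a first difference.

    Peaks show up where the returned derivative changes sign from
    positive to negative; smoothing first keeps noise from producing
    spurious sign changes.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(y, kernel, mode='same')
    return np.diff(smoothed)
```
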
I've made a couple of assumptions and written some simple code (below) to illustrate a possible approach. I've thresholded the data so that only peaks above 5000 survive; then we iterate through the data finding peaks, using the trapezoidal rule, np.trapz, to find the area under each. Where peaks overlap, the areas are split at the overlap point (I doubt this is standard...). Also, this code will only recognize peaks that are local maxima (shoulders will not be detected). I've graphed the results, writing the area value for each peak at the corresponding peak position:
def find_peak(start, grad):
    for index, gr in enumerate(grad[start:]):
        if gr < 0:
            return index + start

def find_end(peak, grad):
    for index, gr in enumerate(grad[peak:]):
        if gr >= 0:
            return index + peak + 1

def find_peaks(grad):
    peaks = []
    i = 0
    while i < len(grad[:-1]):
        if grad[i] > 0:
            start = i
            peak_index = find_peak(start, grad)
            end = find_end(peak_index, grad)
            area = np.trapz(y_data[start:end], x_data[start:end])
            peaks.append((x_data[peak_index], y_data[peak_index], area))
            i = end - 1
        else:
            i += 1
    return peaks

y_data = np.where(y_data > 5000, y_data, 0)
grad = np.diff(y_data)
peaks = find_peaks(grad)

from pylab import *
plot(x_data, y_data)
for peak in peaks:
    text(peak[0], 1.01 * peak[1], '%d' % int(peak[2]))
grid(True)
show()
Whatever approach you take at this point requires assumptions about your data (which I am not really in a position to make, although I made a few above!) - for instance, how do you deal with overlapping peaks? I am sure there are standard approaches in chromatography, so you really should check those out first. Hope this helps!
When I use this code I get the following image.
The code is the same as above (with slight modifications):
from PIL import Image
import numpy as np

def get_data(im, x_range, x_offset, y_range, y_offset):
    x_data = np.array([])
    y_data = np.array([])
    width, height = im.size
    im = im.convert('1')
    for x in range(width):
        for y in range(height):
            if im.getpixel((x, y)) == 0:
                x_data = np.append(x_data, x)
                y_data = np.append(y_data, height - y)
                break
    x_data = (x_data / width) * x_range + x_offset
    y_data = (y_data / height) * y_range + y_offset
    return x_data, y_data

im = Image.open(r'C:\Python\HPLC.png')
x_data, y_data = get_data(im, 4.4 - 0.55, 0.55, 23000 - 2500, 2500)

from pylab import *
plot(x_data, y_data)
grid(True)
savefig('new_data.png')
show()
I am not quite sure what the problem might be.