I am looking for a way to extract information from a chromatogram produced by a GC or HPLC. A chromatogram looks like this:
I am not really into image processing/analysis, so I'm looking for a tool/algorithm to extract the height of a peak (and, if possible, the area under it) from those chromatograms. Solutions can be either in Python or in C#.
Thanks in advance.
I've written some quick python code that will extract chromatogram (or any single-valued) data from an image file.
It has the following requirements:
Image is clean (no text or other data).
Curve is single-valued, i.e. the curve is one pixel wide (it will still work without this, but it will always take the upper value).
Scales are linear.
It is very simple: it just iterates through each column of the image and takes the first black pixel as the data point. It uses PIL. These data points are initially in the image co-ordinate system, so they need to be rescaled to the data co-ordinate system. If all your images share the same axes, this is straightforward; otherwise it needs to be done manually on a per-image basis (automating that would be more involved).
The image below shows the region of your image I extracted for processing (the non-pink region; I removed the text), so for re-scaling we just take the white box region in the data co-ordinate system: x_range = 4.4 - 0.55, x_offset = 0.55, y_range = 23000 - 2500, and y_offset = 2500.
Here is the extracted data replotted with pyplot:
Here is the code:
from PIL import Image
import numpy as np

def get_data(im, x_range, x_offset, y_range, y_offset):
    x_data = np.array([])
    y_data = np.array([])
    width, height = im.size
    im = im.convert('1')  # convert to 1-bit black and white
    for x in range(width):
        for y in range(height):
            # take the first black pixel in each column as the data point
            if im.getpixel((x, y)) == 0:
                x_data = np.append(x_data, x)
                y_data = np.append(y_data, height - y)  # flip y: image origin is top-left
                break
    # rescale from image co-ordinates to data co-ordinates
    x_data = (x_data / width) * x_range + x_offset
    y_data = (y_data / height) * y_range + y_offset
    return x_data, y_data

im = Image.open('clean_data_2.png')
x_data, y_data = get_data(im, 4.4 - 0.55, 0.55, 23000 - 2500, 2500)

from pylab import *
plot(x_data, y_data)
grid(True)
savefig('new_data.png')
show()
Once you have your data as numpy arrays, there are many options you can use to find peaks and the corresponding areas under them (see this discussion for some approaches). Noise is a large concern, so a general approach would be to convolve the data to smooth the noise out (or you could threshold if your peaks are sharp) then differentiate to find peaks. To find areas under peaks you can do numerical integration across the peak region.
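For instance, a minimal smoothing-and-peak-finding sketch (my own illustration, separate from the code below; the 15-sample window is an arbitrary choice):

import numpy as np

# moving-average smoothing by convolution with a normalized box window
window = np.ones(15) / 15.0
y_smooth = np.convolve(y_data, window, mode='same')

# peaks are where the first difference changes sign from positive to negative
grad = np.diff(y_smooth)
peak_indices = np.where((grad[:-1] > 0) & (grad[1:] <= 0))[0] + 1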
I've made a couple of assumptions and written some simple code (below) to illustrate a possible approach. I've thresholded the data so only peaks above 5000 survive, then we iterate through the data finding the peaks, using the trapezoidal rule, np.trapz, to find the area under each peak. Where peaks overlap, the areas are split at the overlap point (I doubt this is standard..). Also, this code will only recognize peaks that are local maxima (shoulders will not be detected). I've graphed the results, writing the area value for each peak at the corresponding peak position:
def find_peak(start, grad):
    # scan forward until the gradient turns negative: that's the peak
    for index, gr in enumerate(grad[start:]):
        if gr < 0:
            return index + start

def find_end(peak, grad):
    # scan forward until the gradient becomes non-negative again: end of the peak
    for index, gr in enumerate(grad[peak:]):
        if gr >= 0:
            return index + peak + 1

def find_peaks(grad):
    peaks = []
    i = 0
    while i < len(grad[:-1]):
        if grad[i] > 0:
            start = i
            peak_index = find_peak(start, grad)
            end = find_end(peak_index, grad)
            # trapezoidal rule over the peak region
            area = np.trapz(y_data[start:end], x_data[start:end])
            peaks.append((x_data[peak_index], y_data[peak_index], area))
            i = end - 1
        else:
            i += 1
    return peaks

y_data = np.where(y_data > 5000, y_data, 0)  # threshold: only peaks above 5000 survive
grad = np.diff(y_data)
peaks = find_peaks(grad)

from pylab import *
plot(x_data, y_data)
for peak in peaks:
    text(peak[0], 1.01 * peak[1], '%d' % int(peak[2]))
grid(True)
show()
Whatever approach you take at this point requires assumptions about your data (which I am not really in a position to make, although I made a few above!): how do you deal with overlapping peaks? etc. I am sure there are standard approaches in chromatography, so you really need to check that out first. Hope this helps!
When I use this code I get the following image
The code is the same as above (with slight modifications)
from PIL import Image
import numpy as np

def get_data(im, x_range, x_offset, y_range, y_offset):
    x_data = np.array([])
    y_data = np.array([])
    width, height = im.size
    im = im.convert('1')
    for x in range(width):
        for y in range(height):
            if im.getpixel((x, y)) == 0:
                x_data = np.append(x_data, x)
                y_data = np.append(y_data, height - y)
                break
    x_data = (x_data / width) * x_range + x_offset
    y_data = (y_data / height) * y_range + y_offset
    return x_data, y_data

im = Image.open(r'C:\Python\HPLC.png')  # raw string so the backslashes are not treated as escapes
x_data, y_data = get_data(im, 4.4 - 0.55, 0.55, 23000 - 2500, 2500)

from pylab import *
plot(x_data, y_data)
grid(True)
savefig('new_data.png')
show()
I am not quite sure what the problem might be.
Related
I'd like to implement a variation of a rectangle packing algorithm in C#. In my case the rectangles have a width and height and a "desired" position in a 2D plane (on the screen). They must, however, not overlap. I want the algorithm to find the positions of the rectangles that minimize the distances from their desired positions. I am aware that the order in which the rectangles are placed plays a role, but I can't even find a performant algorithm for a fixed or random order. Anyone got an idea or references?
More formal definition of the problem here
I implemented #tiliavirga's suggestion and it works quite well.
Some notes:
I made the repulsive force proportional to only the square root of the overlapping area because otherwise, the first few iterations had huge repulsive forces blowing the constellation apart. (On the other hand, large forces lead to quick termination, which could be important; see below.)
I reduced the attractive force towards 0 over time, because otherwise the algorithm oscillates in some cases: overlapping rectangles are pushed apart, then pulled together in the next iteration, then pushed apart again, and so on.
The algorithm can take very long, depending on the parameters: (1) how quickly the attractive force weakens, (2) how large the motion of the rectangles is in each iteration, and (3) the total overlapping area that is tolerated, terminating the algorithm. In time-critical applications, e.g. in games where this computation is done every frame, these parameters should be adjusted to yield a quick termination with a not-so-optimal solution.
All in all, a good enough solution for me. Python code below:
DATA STRUCTURE:
class Rect(object):
    def __init__(self, centerX, centerY, width, height):
        self.centerX = centerX
        self.centerY = centerY
        self.desired_centerX = centerX
        self.desired_centerY = centerY
        self.left = centerX - width / 2
        self.right = centerX + width / 2
        self.bottom = centerY - height / 2
        self.top = centerY + height / 2
        self.width = width
        self.height = height

    def move(self, x, y):
        self.centerX += x
        self.centerY += y
        self.left += x
        self.right += x
        self.bottom += y
        self.top += y
UTILITY:
import itertools
import numpy as np

def normalize(vector):
    length = np.linalg.norm(vector)
    # define the normalization of the zero vector like this, because we need to move rectangles
    # somewhere when they are perfectly centered on each other
    if length == 0:
        return np.random.rand(vector.shape[0])
    else:
        return vector / length

def isOverlapping(r1, r2):
    # we define that a rect doesn't overlap with itself
    if r1 is r2:
        return False
    if r1.left > r2.right or r1.right < r2.left or r1.bottom > r2.top or r1.top < r2.bottom:
        return False
    return True

def getOverlappingArea(r1, r2):
    if not isOverlapping(r1, r2):
        return 0
    else:
        # width of the x-overlap times height of the y-overlap
        return (min(r1.right, r2.right) - max(r1.left, r2.left)) * \
               (min(r1.top, r2.top) - max(r1.bottom, r2.bottom))

# pointing from "r1" to "r2"
def getScaledPushingForce(r1, r2):
    overlappingArea = getOverlappingArea(r1, r2)
    if overlappingArea < 0:
        raise ValueError("Something went wrong, negative overlapping area calculated!")
    if overlappingArea == 0:
        return np.array([0, 0])
    return np.sqrt(overlappingArea) * normalize(
        np.array([r2.centerX - r1.centerX, r2.centerY - r1.centerY]))
PARAMETERS:
# the strength of the pulling force towards the desired position decays to ease termination
# higher value = slower decay
# faster decay means faster termination but worse results
pullingForceHalfTime = 10
# the overlapping area which is considered small enough to stop the algorithm
# (recommended to assign according to the number and size of the rectangles)
acceptableOverlap = 2 * len(rects)
# the scaling of the total forces that moves the rectangles
# larger steps mean faster termination but possibly worse results
# (recommended 1/20 <= forceScaling <= 1/2; the smaller pullingStrength is, the lower forceScaling should also be,
# e.g. forceScaling = 1/20 * pullingStrength)
forceScaling = 1 / 10
ALGORITHM:
# calculates pulling and pushing forces and moves the rectangles a bit in the direction
# of the combination of these forces in every iteration;
# stops when the overlapping area is sufficiently small
def unstack():
    i = 1
    # iterate until break
    while True:
        # pulling forces towards the desired position,
        # weakened over the course of the iterations (depending on i), since no overlap is the stronger constraint
        pulling_forces = [np.array([r.desired_centerX - r.centerX, r.desired_centerY - r.centerY]) * \
                          np.power(0.5, i / pullingForceHalfTime) for r in rects]
        # pushing forces resulting from overlapping rectangles:
        # the force for a pair of overlapping rectangles has the direction of the vector connecting
        # their centers, and its magnitude is proportional to the square root of the area of the overlap
        pushing_forces = [np.sum([getScaledPushingForce(r_, r) for r_ in rects], axis=0) for r in rects]
        total_forces = np.sum([pulling_forces, pushing_forces], axis=0) * forceScaling
        # move the rectangles by a portion of the total forces (smaller steps => more iterations but better results)
        for j in range(len(rects)):
            rects[j].move(total_forces[j][0], total_forces[j][1])
        # stop iterating when the total overlapping area is sufficiently small
        if np.sum(np.square([getOverlappingArea(r[0], r[1]) for r in itertools.combinations(rects, 2)])) <= acceptableOverlap:
            break
        i += 1
    # print results
    finalDistancesFromDesired = [np.array([r.desired_centerX - r.centerX, r.desired_centerY - r.centerY]) for r in rects]
    print("Total distances to desired positions: " + str(np.sum(np.linalg.norm(finalDistancesFromDesired, axis=1))))
and an example run through:
Example
Basically I want to take a fixed straight line across the device's point of view and determine if anything intercepts it, but in my example I want to make the "laser line" configurable with regard to its distance from the top of the field of view.
Now it's easy enough to get the depth data at a given pixel point simply by doing this.
var depthInMM = DepthImagePixel.Depth;
and it's also easy to simply say I want to focus on the 100th line of pixels from the top by doing something like this.
for (int i = 0; i < this._DepthPixels.Length; ++i) // _DepthPixels.Length is obviously 307200 for 640x480
{
    if (i >= 64000 && i <= 64640) // hundredth row of pixels from the top
    {
        // Draw line or whatever
    }
}
Which ends up with something like this.
BUT, for example, I might want to have the line intercept at 50 cm from the top of the field of view at 3 meters depth. Now obviously I understand that as the depth increases, so does the area represented, but I cannot find any reference for, or work out myself, how to calculate this relationship.
So, how can one calculate the coordinate space represented at a given depth using the Kinect sensor? Any help sincerely appreciated.
EDIT:
So if I understand correctly, this can be implemented in C# as follows:
double d = 2; //2 meters depth
double y = 100; //100 pixels from top
double vres = 480; //480 pixels vertical resolution
double vfov = 43; //43 degrees vertical field of view of Kinect
double x = (2 * Math.Sin(Math.PI * vfov / 360) * d * y) / vres;
//x = 0.30541768893691434
//x = 100 pixels down is 30.5 cm from top field of view at 2 meters depth
X = (2 * sin(PI * VFOV / 360) * D * Y) / VRES

X:    distance of your line from the top of the image in meters
D:    distance - orthogonal to the image plane - of your line from the camera in meters
Y:    distance of your line from the top of the image in pixels
VRES: vertical resolution of the image in pixels
VFOV: vertical field of view of the camera in degrees
I am currently working on a project in which I am required to write software that compares two images made up of the same area and draws a box around the differences. I wrote the program in C# .NET in a few hours but soon realized it was INCREDIBLY expensive to run. Here are the steps I implemented:
Created a Pixel class that stores the x,y coordinates of each pixel, and a PixelRectangle class that stores a list of pixels along with width, height, x and y properties.
Looped through every pixel of each image, comparing the colours of each pair of corresponding pixels. If the colours differed, I created a new Pixel object with the x,y coordinates of that pixel and added it to a pixelDifference list.
Next I wrote a method that recursively checks each pixel in the pixelDifference list to create PixelRectangle objects that only contain pixels directly next to each other. (Pretty sure this bad boy is causing the majority of the destruction, as it gave me a stack overflow error.)
I then worked out the x,y coordinates and dimensions of each rectangle based on the pixels stored in its PixelRectangle object's list, and drew a rectangle over the original image to show where the differences were.
My questions are: Am I going about this the correct way? Would a quad tree hold any value for this project? If you could give me the basic steps on how something like this is normally achieved I would be grateful. Thanks in advance.
Dave.
looks like you want to implement blob detection. my suggestion is not to reinvent the wheel and just use openCVSharp or emgu to do this. google 'blob detection' & opencv
if you want to do it yourself, here's my 2 cents' worth:
first of all, let's clarify what you want to do. really two separate things:
1. compute the difference between two images (i am assuming they are the same dimensions)
2. draw a box around 'areas' that are 'different' as measured by 1. the questions here are what is an 'area' and what is considered 'different'.
my suggestion for each step:
(my assumption is both images are grey scale. if not, compute the sum of colours for each pixel to get a grey value)
1) cycle through all pixels in both images and subtract them. set a threshold on the absolute difference to determine if their difference is sufficient to represent an actual change in the scene (as opposed to sensor noise etc. if the images are from a camera). then store the result in a third image: 0 for no difference, 255 for a difference. if done right this should be REALLY fast. however, in C# you must use pointers to get decent performance. here's an example of how to do this (note: code not tested!!):
/// <summary>
/// computes difference between two images and stores result in a third image
/// input images must be of same dimension and colour depth
/// </summary>
/// <param name="imageA">first image</param>
/// <param name="imageB">second image</param>
/// <param name="imageDiff">output 0 if same, 255 if different</param>
/// <param name="width">width of images</param>
/// <param name="height">height of images</param>
/// <param name="channels">number of colour channels for the input images</param>
/// <param name="threshold">minimum absolute difference considered a real change</param>
unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff, int width, int height, int channels, int threshold)
{
    int ch = channels;
    fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
    {
        if (ch > 1) // this is a colour image (assuming for RGB ch == 3 and RGBA ch == 4)
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width * ch;
                byte* pB = piB + r * width * ch;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    // assuming three colour channels. if channels is larger ignore extra (as it's likely alpha)
                    int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                    int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                    if (Math.Abs(LA - LB) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
        else // single grey scale channel
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width;
                byte* pB = piB + r * width;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    if (Math.Abs(pA[c] - pB[c]) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
    }
}
2)
not sure what you mean by area here. several solutions depending on what you mean, from simplest to hardest:
a) colour each difference pixel red in your output
b) assuming you only have one area of difference (unlikely), compute the bounding box of all 255 pixels in your output image. this can be done using a simple max / min for both x and y positions over all 255 pixels. single pass through the image and should be very fast (see the sketch after this list).
c) if you have lots of different areas that change, compute the "connected components", that is, collections of pixels that are connected to each other. of course this only works on a binary image (i.e. on or off, or 0 and 255 as in our case). you can implement this in c# and i have done this before, but i won't do it for you here. it's a bit involved. algorithms are out there. again, opencv or google connected components.
once you have a list of CCs, draw a box around each. done.
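as a rough illustration of step (b), here's a minimal sketch of mine (it assumes the diff image is available as a 2-D numpy array of 0/255 values rather than the byte[] above):

import numpy as np

def bounding_box(diff):
    # row and column indices of all 255 (difference) pixels
    ys, xs = np.nonzero(diff == 255)
    if xs.size == 0:
        return None  # no differences at all
    return xs.min(), ys.min(), xs.max(), ys.max()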
You're pretty much going about it the right way. Step 3 shouldn't be causing a StackOverflow exception if it's implemented correctly so I'd take a closer look at that method.
What's most likely happening is that your recursive check of each member of PixelDifference is running infinitely. Make sure you keep track of which Pixels have been checked. Once you check a Pixel it no longer needs to be considered when checking neighbouring Pixels. Before checking any neighbouring pixel make sure it hasn't already been checked itself.
As an alternative to keeping track of which Pixels have been checked you can remove an item from PixelDifference once it has been checked. Of course, this may require a change in the way you implement your algorithm since removing an element from a List while checking it can bring a whole new set of issues.
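An iterative formulation sidesteps deep recursion entirely. Here is a minimal sketch of that idea (my own illustration, using an explicit stack over a set of difference coordinates; 4-connectivity is an assumption):

def group_adjacent(diff_pixels):
    # diff_pixels: a set of (x, y) tuples marking differing pixels
    remaining = set(diff_pixels)
    groups = []
    while remaining:
        stack = [remaining.pop()]  # seed a new connected group
        group = []
        while stack:
            x, y = stack.pop()
            group.append((x, y))
            # visit the 4-connected neighbours that are still unchecked
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in remaining:
                    remaining.remove(nb)  # mark as checked immediately
                    stack.append(nb)
        groups.append(group)
    return groups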
There's a much simpler way of finding the difference of two images.
So if you have two images
Image<Gray, Byte> A;
Image<Gray, Byte> B;
You can get their differences fast by
A - B
Of course, images don't store negative values, so to get the differences in cases where pixels in image B are greater than in image A:
B - A
Combining these together
(A - B) + (B - A)
This is ok, but we can do even better.
This can be evaluated using Fourier transforms.
// DFTA, DFTB, AB and BA are assumed to be pre-allocated Image<Gray, Single> buffers of the same size as A and B
CvInvoke.cvDFT(A.Convert<Gray, Single>().Ptr, DFTA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(B.Convert<Gray, Single>().Ptr, DFTB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT((DFTB - DFTA).Convert<Gray, Single>().Ptr, AB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
CvInvoke.cvDFT((DFTA - DFTB).Ptr, BA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
I find that the results from this method are much better.
You can make a binary image out of this, i.e. threshold the image so that pixels with no change store 0 while pixels with changes store 255.
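In plain numpy terms, (A - B) + (B - A) followed by thresholding amounts to something like this sketch (my own; it assumes A and B are greyscale arrays of equal shape, and the threshold of 30 is arbitrary):

import numpy as np

def binary_diff(a, b, threshold=30):
    # widen the type so the subtraction can go negative
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
    # 255 where the change exceeds the threshold, 0 elsewhere
    return np.where(diff > threshold, 255, 0).astype(np.uint8)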
Now as far as the second part of the problem goes, I suppose there's a simple, crude solution:
Partition the image into rectangular regions. Perhaps there's no need to go as far as using quad trees. Say, an 8x8 grid... (For different results, you can experiment with different grid sizes.)
Then use the convex hull function within these regions. These convex hulls can be turned into rectangles by finding the min and max x and y coordinates of their vertices.
Should be fast and simple.
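A minimal sketch of that grid idea (mine; it skips the explicit convex hull, since the min/max over a cell's difference pixels gives the same rectangle as the min/max over the hull's vertices):

import numpy as np

def grid_boxes(mask, grid=8):
    # mask: 2-D array of 0/255 values; returns one bounding box per non-empty grid cell
    h, w = mask.shape
    boxes = []
    for gy in range(grid):
        for gx in range(grid):
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            ys, xs = np.nonzero(mask[y0:y1, x0:x1] == 255)
            if xs.size:
                boxes.append((x0 + xs.min(), y0 + ys.min(), x0 + xs.max(), y0 + ys.max()))
    return boxes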
I am a beginner in DICOM development. I need to draw a localizer line on a DICOM image. Does anyone have any good ideas?
David Brabant already pointed you in the right direction (if you want to work with DICOM you should definitely read and treasure dclunie's medical image FAQ). Let's see if I can elaborate on it and make it easier for you to implement.
I assume you have a tool/library to extract tags from a DICOM file (Offis' DCMTK?). For the sake of exemplification I'll refer to a CT scan (many slices, i.e. many images) and a scout image onto which you want to display localizer lines. Each DICOM image, including your CT slices and your scout, contains full information about its location in space, in these two tags:
Group,Elem   VR  Value                         Name of the tag
---------------------------------------------------------------------
(0020,0032)  DS  [-249.51172\-417.51172\-821]  # ImagePositionPatient
                   X0         Y0         Z0
(0020,0037)  DS  [1\0\0\0\1\0]                 # ImageOrientationPatient
                  A B C D E F
ImagePositionPatient has the global coordinates in mm of the first pixel transmitted (the top left-hand corner pixel, to be clear) expressed as (x,y,z). I marked them X0, Y0, Z0. ImageOrientationPatient contains two vectors, both of three components, specifying the direction cosines of the first row of pixels and first column of pixels of the image. Understanding direction cosines doesn't hurt (see e.g. http://mathworld.wolfram.com/DirectionCosine.html), but the method suggested by dclunie works directly with them, so for now let's just say they give you the orientation in space of the image plane. I marked them A-F to make formulas easier.
Now, in the code given by dclunie (I believe it's intended to be C, but it's so simple it should work just as well in Java, C#, awk, Vala, Octave, etc.) the conventions are the following:
src_* = refers to the source image, i.e. the CT slice
dst_* = refers to the destination image, i.e. the scout
*_pos_x, *_pos_y, *_pos_z = the X0, Y0, Z0 above
*_row_dircos_x, *_row_dircos_y, *_row_dircos_z = the A, B, C above
*_col_dircos_x, *_col_dircos_y, *_col_dircos_z = the D, E, F above
After setting the right values just apply these:
dst_nrm_dircos_x = dst_row_dircos_y * dst_col_dircos_z
                 - dst_row_dircos_z * dst_col_dircos_y;
dst_nrm_dircos_y = dst_row_dircos_z * dst_col_dircos_x
                 - dst_row_dircos_x * dst_col_dircos_z;
dst_nrm_dircos_z = dst_row_dircos_x * dst_col_dircos_y
                 - dst_row_dircos_y * dst_col_dircos_x;

src_pos_x -= dst_pos_x;
src_pos_y -= dst_pos_y;
src_pos_z -= dst_pos_z;

dst_pos_x = dst_row_dircos_x * src_pos_x
          + dst_row_dircos_y * src_pos_y
          + dst_row_dircos_z * src_pos_z;
dst_pos_y = dst_col_dircos_x * src_pos_x
          + dst_col_dircos_y * src_pos_y
          + dst_col_dircos_z * src_pos_z;
dst_pos_z = dst_nrm_dircos_x * src_pos_x
          + dst_nrm_dircos_y * src_pos_y
          + dst_nrm_dircos_z * src_pos_z;
Or, if you have some fancy matrix class, you can build this matrix and multiply it with your point coordinates.
    [ dst_row_dircos_x  dst_row_dircos_y  dst_row_dircos_z  -dst_pos_x ]
M = [ dst_col_dircos_x  dst_col_dircos_y  dst_col_dircos_z  -dst_pos_y ]
    [ dst_nrm_dircos_x  dst_nrm_dircos_y  dst_nrm_dircos_z  -dst_pos_z ]
    [ 0                 0                 0                  1         ]
That would be like this:
Scout_Point(x,y,z,1) = M * CT_Point(x,y,z,1)
Having said all that, which points of the CT should we convert to create a line on the scout? For this, too, dclunie already suggests a general solution:
"My approach is to project the square that is the bounding box of the source image (i.e. lines joining the TLHC, TRHC,BRHC and BLHC of the slice)."
If you project the four corner points of the CT slice, you'll get a line for CT slices perpendicular to the scout, and a trapezoid for non-perpendicular slices. Now, if your CT slice is aligned with the coordinate axes (i.e. ImageOrientationPatient = [1\0\0\0\1\0]), the four points are trivial: you compute the width/height of the image in mm using the number of rows/columns and the pixel spacing along the x/y directions, and sum things up appropriately. If you want to implement the generic case, then you need a little trigonometry... or maybe not. Maybe it's time you read the definition of the direction cosines, if you haven't yet.
I'll try to put you on track. E.g. working on the TRHC, you know where the voxel is in the image plane:
# Pixel location of the TRHC
x_pixel = number_of_columns-1 # Counting from 0
y_pixel = 0
z_pixel = 0 # We're on a plane!
The PixelSpacing values in DICOM are referred to the image plane, so you can simply multiply x and y by those values to get their position in mm, while z is 0 (both in pixels and in mm). I am talking about these values:
(0028,0011) US 512                   #  2, 1  Columns
(0028,0010) US 512                   #  2, 1  Rows
(0028,0030) DS [0.9765625\0.9765625] # 20, 2  PixelSpacing
The matrix M above is a generic transformation from global to image coordinates, given the direction cosines. What you need now is something that does the inverse job (image to global) on the source images (the CT slices). I'll let you go and dig into the geometry books to be sure, but I think it should be something like this (the rotation part is transposed, the translation has no sign change, and of course we use the src_* values):
     [ src_row_dircos_x  src_col_dircos_x  src_nrm_dircos_x  src_pos_x ]
M2 = [ src_row_dircos_y  src_col_dircos_y  src_nrm_dircos_y  src_pos_y ]
     [ src_row_dircos_z  src_col_dircos_z  src_nrm_dircos_z  src_pos_z ]
     [ 0                 0                 0                 1         ]
Convert the points of the CT slice (e.g. the four corners) to millimeters and then apply M2 to get them in global coordinates. Then you can feed them to the procedure reported by dclunie. Cross-check my maths before using it e.g. for patient diagnostics! ;-)
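To tie the pieces together, here is a small Python sketch of my own of the image-to-global step (the tag values are the example ones quoted above; cross-check it just like the rest):

import numpy as np

# source (CT slice) geometry, example values from the tags above
src_pos = np.array([-249.51172, -417.51172, -821.0])  # ImagePositionPatient (0020,0032)
src_row = np.array([1.0, 0.0, 0.0])                   # first three values of ImageOrientationPatient (0020,0037)
src_col = np.array([0.0, 1.0, 0.0])                   # last three values of ImageOrientationPatient
spacing = (0.9765625, 0.9765625)                      # PixelSpacing (0028,0030)
rows = cols = 512

# build M2: the rotation columns are the direction cosines, the translation is src_pos
src_nrm = np.cross(src_row, src_col)
M2 = np.eye(4)
M2[:3, 0], M2[:3, 1], M2[:3, 2], M2[:3, 3] = src_row, src_col, src_nrm, src_pos

# TRHC of the slice in in-plane millimeters (z = 0), as a homogeneous point
trhc_mm = np.array([(cols - 1) * spacing[0], 0.0, 0.0, 1.0])
trhc_global = M2 @ trhc_mm  # global patient coordinates of the corner
print(trhc_global)

# the matrix M built from the dst_* values would then map trhc_global onto the scout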
Hope this helps you understand dclunie's method better. Cheers
Let's say I have a 3D Cartesian grid. Let's also assume that there are one or more log spirals emanating from the origin on the horizontal plane.
If I then have a point in the grid, I want to test whether that point is in one of the spirals. I actually want to test whether it is within a certain range of a spiral, but determining if it is on the spiral is a good start.
So I guess the question has a couple of parts.
How to generate the arms from parameters (direction, tightness)
How to tell if a point in the grid is in one of the spiral arms
Any ideas? I have been googling all day and don't feel I am any closer to a solution than when I started.
Here is a bit more information that might help:
I don't actually need to render the spirals. I want to set the pitch and rotation and then pass a point to a method that can tell me if the point I passed is within the spiral (within a given range of any point on the spiral). Based on the value returned (true or false), my program will decide whether or not something exists at that point in space.
How to parametrically define the log spirals (pitch and rotation and ??)
Test if a point (x, y, z) is within a given range of any point on the spiral.
Note: Both of the above would be just on the horizontal plane
These are two functions defining an anti-clockwise spiral:
PolarPlot[{
Exp[(t + 10)/100],
Exp[t/100]},
{t, 0, 100 Pi}]
Output:
These are two functions defining a clockwise spiral:
PolarPlot[{
- Exp[(t + 10)/100],
- Exp[t/100]},
{t, 0, 100 Pi}]
Output:
Cartesian coordinates
The conversion Cartesian <-> Polar is
(1)  Ro = Sqrt[x^2 + y^2]
     t  = ArcTan[y/x]
(2)  x = Ro Cos[t]
     y = Ro Sin[t]
So, if you have a point in Cartesian coords (x,y), you transform it to the equivalent polar coordinates using (1). Then you use the formula for the spiral function (any of the four mentioned above the plots, or similar ones), putting in the value for t and obtaining Ro. The last step is to compare this Ro with the one we got from the coordinate conversion. If they are equal, the point is on the spiral.
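A small Python sketch of that test (my own; it checks the spiral Ro = Exp[t/100] from the plots above, scanning the turns t + 2 Pi k and allowing a tolerance):

import numpy as np

def on_spiral(x, y, tol=0.05, max_turns=50):
    # (1): convert the Cartesian point to polar coordinates
    ro = np.hypot(x, y)
    t = np.arctan2(y, x)
    # the same direction recurs every full turn, so try t + 2*pi*k for each branch
    for k in range(-max_turns, max_turns):
        if abs(np.exp((t + 2 * np.pi * k) / 100.0) - ro) <= tol:
            return True
    return False

# a point generated at t = 25 lies on the spiral
print(on_spiral(np.exp(0.25) * np.cos(25), np.exp(0.25) * np.sin(25)))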
Edit: Answering your comment
For a log spiral it is almost the same, but with multiple spirals you need to take care that the logs don't go negative. That's why I used exponentials ...
Example:
PolarPlot[{
Log[t],
If[t > 3, Log[ t - 2], 0],
If[t > 5, Log[ t - 4], 0]
}, {t, 1, 10}]
Output:
Not sure this is what you want, but you can invert the log function (or "any" other, for that matter).
Say you have ln A = B; to get A from B you do A = e^B.
So you take your point and pass it as B, and you'll get A. Then you just need to check whether that A (within a certain +- range) is among the values you first passed to ln to generate the spiral.
I think this might work...
Unfortunately, you will need to know some mathematical notation anyway - this is a good read about the logarithmic spiral.
http://en.wikipedia.org/wiki/Logarithmic_spiral
we will only need the top 4 equations.
For your question 1
- to control the tightness, you tune the parameter 'a' as in the wiki page.
- to control the direction, you offset theta by a certain amount.
For your question 2
In floating point arithmetic you will never get absolute precision, which means there will be no point falling exactly on the spiral. On the screen, however, you will know which pixels get rendered, and you can test whether you are hitting a point that is rendered.
To render a curve, you usually render it as a sequence of line segments, short enough so that overall it looks like a curve. If you want to know whether a point lies within certain distance from the spiral, you can render the curve (on a off-screen buffer if you wish) by having thicker lines.
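A rough sketch of that off-screen approach (mine; it rasterizes r = a*exp(b*theta) into a boolean mask with a given line thickness, so point tests become array lookups):

import numpy as np

def spiral_mask(size=512, a=1.0, b=0.1, thickness=3):
    mask = np.zeros((size, size), dtype=bool)
    cx = cy = size // 2
    half = thickness // 2
    theta = 0.0
    while True:
        r = a * np.exp(b * theta)
        if r > size / 2:
            break
        x = int(round(cx + r * np.cos(theta)))
        y = int(round(cy + r * np.sin(theta)))
        # stamp a small square instead of a single pixel to get a thick curve
        mask[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1] = True
        theta += 0.5 / max(r, 1.0)  # roughly half-pixel arc-length steps
    return mask

mask = spiral_mask()
print(mask[256, 260])  # is this pixel within the thick spiral?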
Here is some C++ code drawing any spiral passing through the mouse position (sorry for my English):
int cx = pWin->vue.right / 2;
int cy = pWin->vue.bottom / 2;
double theta_mouse = atan2((double)(pWin->y_mouse - cy), (double)(pWin->x_mouse - cx));
double square_d_mouse = (double)(pWin->y_mouse - cy) * (double)(pWin->y_mouse - cy) +
                        (double)(pWin->x_mouse - cx) * (double)(pWin->x_mouse - cx);
double d_mouse = sqrt(square_d_mouse);
double theta_t = log(d_mouse / 3.0) / log(1.19);
int x = cx + (3 * cos(theta_mouse));
int y = cy + (3 * sin(theta_mouse));
MoveToEx(hdc, x, y, NULL);
for (double theta = 0.0; theta < PI2 * 5.0; theta += 0.1)
{
    double d = pow(1.19, theta) * 3.0;
    x = cx + (d * cos(theta - theta_t + theta_mouse));
    y = cy + (d * sin(theta - theta_t + theta_mouse));
    LineTo(hdc, x, y);
}
OK, now the parameters of the spiral are 1.19 (slope) and 3.0 (radius at center).
Just compare the points where theta is a multiple of 2 PI = PI2 = 6.283185307179586476925286766559:
if any point is near a non-rotated spiral like
x = cx + (d * cos(theta));
y = cy + (d * sin(theta));
then your mouse is ON the spiral... I searched this tonight and I googled your past question.