I am having trouble implementing this in my current pathfinding algorithm.
Currently I have Dijkstra's algorithm written and it works as it should, but I need to go a step further and add a limit (range). I can explain better with an image:
Let's say I have a range of 80 and I want to go from A to E. My current algorithm works as it should, so it results in A->B->E.
However, I need to travel only on paths with weight no more than the range of 80, which means that A->B->E is no longer an option, but A->C->D->B->E is (considering that the range/limit resets at every stop).
So far, I have implemented a bool named Possible which, for a single leg of the path (e.g. A->B), returns whether that leg is possible given my limit/range.
My main problem is that I do not know where/how to start. My only idea was to see where Possible is false (A->B on the total route A->B->E) and run the algorithm from A to E again while excluding the B stop/vertex.
Is this a good approach? As far as I understand it, that would double my big-O complexity.
I see two ways of doing this
Create a new graph G' that contains only the edges with weight < 80, and look for the shortest path there. The reduction takes O(V+E) time and O(V+E) additional memory.
Alternatively, change Dijkstra's algorithm to ignore edges with weight > 80: simply skip them when assigning values to neighboring vertices. The complexity and memory usage stay the same in this case.
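A minimal sketch of that second option, assuming an adjacency-list representation and .NET 6's PriorityQueue; the names (adj, range) and the tuple layout are illustrative assumptions, not from the question:

using System;
using System.Collections.Generic;

class RangeLimitedDijkstra
{
    // adj[u] = list of (To, W) edges leaving vertex u.
    static double[] Shortest(List<List<(int To, double W)>> adj,
                             int source, double range)
    {
        int n = adj.Count;
        var dist = new double[n];
        Array.Fill(dist, double.PositiveInfinity);
        dist[source] = 0;
        var pq = new PriorityQueue<int, double>();
        pq.Enqueue(source, 0);

        while (pq.TryDequeue(out int u, out double d))
        {
            if (d > dist[u]) continue; // stale queue entry
            foreach (var (v, w) in adj[u])
            {
                if (w > range) continue; // the only change: skip over-range edges
                if (dist[u] + w < dist[v])
                {
                    dist[v] = dist[u] + w;
                    pq.Enqueue(v, dist[v]);
                }
            }
        }
        return dist;
    }
}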
Create a temporary version of your graph, and set all weights above the threshold to infinity. Then run the ordinary Dijkstra algorithm on it.
Complexity will increase or not, depending on your version of the algorithm:
if you have O(V^2) then it will increase to O(E + V^2)
if you have the O(ElogV) version then it will increase to O(E + ElogV)
if you have the O(E + VlogV) version it will remain the same
As noted by ArsenMkrt, you can just as well remove these edges, which makes even more sense but makes the complexity a bit worse. Modifying the algorithm to simply skip those edges, as he suggested in his answer, seems to be the best option though.
I wonder if there's any described algorithm that can convert a network range into an approximate area (an isochrone map) showing the reach of some feature (in my problem this feature is a road network).
Example: I have something like the image beneath:
It's a simple network (showing where I can arrive from the start point within X minutes or within Y kilometers). I have information on all the nodes and links. Now I need to create an isochrone map that shows the approximate range of where I can arrive.
Problems:
Convex hull - no good, because the approximation is far too general,
I can create buffers on the roads - then I get a polygon that shows the range, but I also get holes where roads connect into loops.
What I need to obtain is something like this:
I've found some potentially useful information HERE, but it only offers some ideas of how it could be done. If anyone has any concept, please help me solve my problem.
Interesting problem. To get better answers, you might want to define exactly what this area showing the range (the isochrone map) will be used for. For example, is it illustrative? If you define what kind of approximation you want, it could help you solve the problem.
Now here are some ideas.
1) Find all the cycles in the graph (see link), then eliminate edges that are shared between two cycles. Finally, take the convex hull of the remaining cycles; this, together with all the roads (so that the outliers that do not form cycles are included), will give a good approximation of an isochrone map.
2) A simpler solution is to define a thickness around each point of every road, inversely proportional to how long it takes to arrive at that point from the starting point, i.e. the longer it takes to arrive at a point, the thinner. You can then scale the thickness of all points until all holes are filled, which gives an approximate isochrone map. One possible way of implementing this is to run an algorithm that takes all possible routes simultaneously from the starting point, branching off at every new intersection, while tracking how long it took to arrive at each point. During its execution, every previously discovered route is thickened at each instant; at the end you scale this thickness so as to fill all the holes.
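A rough sketch of idea 2, assuming NetTopologySuite (the .NET port of the JTS library mentioned further down); the segment/travel-time inputs and the linear thickness fall-off are illustrative assumptions:

using System.Collections.Generic;
using NetTopologySuite.Geometries;
using NetTopologySuite.Operation.Union;

class IsochroneSketch
{
    // segments[i] is a piece of road; travelTimes[i] is how long it takes
    // to reach that piece from the starting point (assumed inputs).
    static Geometry BuildArea(IList<LineString> segments,
                              IList<double> travelTimes,
                              double maxWidth, double timeLimit)
    {
        var buffers = new List<Geometry>();
        for (int i = 0; i < segments.Count; i++)
        {
            // Thickness falls off as the travel time approaches the limit.
            double width = maxWidth * (1.0 - travelTimes[i] / timeLimit);
            if (width <= 0) continue; // out of range, contributes nothing
            buffers.Add(segments[i].Buffer(width));
        }
        // Union the individual buffers into one shape; increase maxWidth
        // until the holes between neighboring roads disappear.
        return CascadedPolygonUnion.Union(buffers);
    }
}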
Hopefully this will be of some help. Good luck.
I have solved the problem (it's not very fast or robust, but it has to be enough for now).
I generated my possible routes using the A* (A-Star) algorithm.
I used @Artur Gower's idea from point one to eliminate cycles and simplify my geometry.
Later I decided to generate 2 types of geometries (1st - like in the image, 2nd - simple buffers). Steps 1 and 2 above (route generation and cycle elimination) are shared; they differ from step 3 on:
1st one:
3. Remove the remaining unnecessary points using the Douglas-Peucker algorithm (very fast!).
4. In the end, apply a Concave Hull algorithm (a.k.a. Alpha Shapes or Non-Convex Hull).
2nd one:
3. Apply a buffer to the existing geometry and take the exterior ring (the JTS library made that really easy :)).
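For illustration, a minimal sketch of that buffer-and-exterior-ring step, again assuming NetTopologySuite; the cast to a single Polygon is an assumption that the buffered routes form one connected area:

using NetTopologySuite.Geometries;

class BufferSketch
{
    static LineString OuterBoundary(Geometry routes, double distance)
    {
        var buffered = routes.Buffer(distance); // typically a Polygon
        var polygon = (Polygon)buffered;        // assumes one connected blob
        return polygon.ExteriorRing;            // outer boundary, holes dropped
    }
}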
Imagine I want to, say, compute the first one million terms of the Fibonacci sequence using the GPU. (I realize this will exceed the precision limit of a 32-bit data type - it's just used as an example.)
Given a GPU with 40 shaders/stream processors, and cheating by using a reference book, I can break the million terms up into 40 blocks of 25,000 terms each, and seed each shader with the two start values:
unit 0: 1,1 (which then calculates 2,3,5,8,blah blah blah)
unit 1: 25,000th term
unit 2: 50,000th term
...
How, if possible, could I go about ensuring that pixels are processed in order? Say the first few pixels in the input texture have these values (RGBA, for simplicity):
0,0,0,1 // initial condition
0,0,0,1 // initial condition
0,0,0,2
0,0,0,3
0,0,0,5
...
How can I ensure that I don't try to calculate the 5th term before the first four are ready?
I realize this could be done in multiple passes by setting a "ready" bit whenever a value is calculated, but that seems incredibly inefficient and sort of eliminates the benefit of performing this type of calculation on the GPU.
OpenCL/CUDA/etc probably provide nice ways to do this, but I'm trying (for my own edification) to get this to work with XNA/HLSL.
Links or examples are appreciated.
Update/Simplification
Is it possible to write a shader that uses values from one pixel to influence the values of a neighboring pixel?
You cannot determine the order in which the pixels are processed. If you could, that would break the massive pixel throughput of the shader pipelines. What you can do is calculate the Fibonacci sequence using the non-recursive, closed-form formula.
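For illustration, here is that closed-form (Binet) approach in plain C#; since each term depends only on n, every shader unit could evaluate its own terms independently, with no ordering constraints. Note that double precision only holds up to roughly the 70th term:

using System;

class FibClosedForm
{
    // F(n) = round(phi^n / sqrt(5)), valid while the result fits in a double.
    static ulong Fib(int n)
    {
        double sqrt5 = Math.Sqrt(5.0);
        double phi = (1.0 + sqrt5) / 2.0; // golden ratio
        return (ulong)Math.Round(Math.Pow(phi, n) / sqrt5);
    }
}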
In your question, you are actually trying to serialize the shader units to run one after another. You could use the CPU right away, and it would be much faster.
By the way, multiple passes aren't as slow as you might think, but they won't help you in your case: you cannot calculate any next value without knowing the previous ones, which kills any parallelization.
I have very little data for my analysis, so I want to produce more data through interpolation.
My dataset contains 23 independent attributes and 1 dependent attribute. How can interpolation be done on this?
EDIT:
My main problem is the shortage of data; I have to increase the size of my dataset. The attributes are categorical, e.g. attribute A may be low, medium, or high. Is interpolation the right approach for this or not?
This is a mathematical problem, but there is too little information in the question to answer it properly. Depending on the distribution of your real data, you may try to find a function that it follows. You could also try to interpolate the data using an artificial neural network, but that would be complex. The thing is that to find interpolations you need to analyze the data you already have, and that defeats the purpose. There is probably more to this problem than is explained here. What is the nature of the data? Can you place it in an n-dimensional space? What do you expect to get from the analysis?
Roughly speaking, to interpolate an array:
double[] data = LoadData();
double requestedIndex = 1.25; // set to the index you want - e.g. 1.25 to interpolate between values at data[1] and data[2]
int previousIndex = (int)requestedIndex; // in example, would be 1
int nextIndex = previousIndex + 1; // in example, would be 2
double factor = requestedIndex - (double)previousIndex; // in example, would be 0.25
// in example, this would give 75% of data[1] plus 25% of data[2]
double result = (data[previousIndex] * (1.0 - factor)) + (data[nextIndex] * factor);
This is really pseudo-code; it doesn't perform range-checking, assumes your data is in an object or array with an indexer, and so on.
Hope that helps to get you started - any questions please post a comment.
If the 23 independent variables are sampled on a hyper-grid (regularly spaced), then you can partition the space into hyper-cubes and linearly interpolate the dependent value from the hyper-cube vertex closest to the origin, along the vectors defined from that vertex along the hyper-cube edges pointing away from the origin. In general, for a given partitioning, you project the interpolation point onto each such vector, which gives you a new 'coordinate' in that particular space. The new value is then computed by multiplying each coordinate by the corresponding difference in the dependent variable, summing the results, and adding the dependent value at the local origin. For hyper-cubes this projection is straightforward (you simply subtract the position of the vertex closest to the origin).
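A small sketch of that scheme for a regular grid with unit spacing; the delegate f, standing in for a lookup of the dependent value at an integer grid vertex, is an illustrative assumption:

using System;

class HyperCubeInterp
{
    // x: the interpolation point in grid coordinates (unit spacing assumed).
    // f: returns the dependent value at an integer grid vertex.
    static double Interpolate(double[] x, Func<int[], double> f)
    {
        int n = x.Length;
        var v0 = new int[n];
        var t = new double[n];
        for (int i = 0; i < n; i++)
        {
            v0[i] = (int)Math.Floor(x[i]); // hyper-cube vertex nearest the origin
            t[i] = x[i] - v0[i];           // projected coordinate along edge i
        }
        double result = f(v0); // dependent value at the local origin
        for (int i = 0; i < n; i++)
        {
            var vi = (int[])v0.Clone();
            vi[i] += 1;                       // step along edge i
            result += t[i] * (f(vi) - f(v0)); // linear term along that edge
        }
        return result;
    }
}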
If your samples are not uniformly spaced, then the problem is much more challenging, as you would need to choose an appropriate partitioning to perform linear interpolation. In principle, Delaunay triangulation generalizes to N dimensions, but it's not easy to do, and the resulting geometric objects are much harder to understand and interpolate over than a simple hyper-cube.
One thing you might consider is if your data set is naturally amenable to projection so that you can reduce the number of dimensions. For instance, if two of your independent variables dominate, you can collapse the problem to 2-dimensions, which is much easier to solve. Another thing you might consider is taking the sampling points and arranging them in a matrix. You can perform an SVD decomposition and look at the singular values. If there are a few dominant singular values, you can use this to perform a projection to the hyper-plane defined by those basis vectors and reduce the dimensions for your interpolation. Basically, if your data is spread in a particular set of dimensions, you can use those dominating dimensions to perform your interpolation, since you don't really have much information in the other dimensions anyway.
I agree with the other commentators, however, that your premise may be off. You generally don't want to interpolate to perform analysis, as you're just choosing to interpolate your data in different ways and the choice of interpolation biases the analysis. It only makes sense if you have a compelling reason to believe that a particular interpolation is physically consistent and you simply need additional points for a particular algorithm.
May I suggest Cubic Spline Interpolation
http://www.coastrd.com/basic-cubic-spline-interpolation
Unless you have a very specific need, this is easy to implement and calculates splines well.
Have a look at the regression methods presented in The Elements of Statistical Learning; most of them can be tested in R. There are plenty of models that can be used: linear regression, local models, and so on.
I know there are quite a few questions out there on generating combinations of elements, but I think this one has a certain twist that makes it worth a new question:
For a pet project of mine I have to pre-compute a lot of state to improve the runtime behavior of the application later. One of the steps I struggle with is this:
Given N tuples of two integers (let's call them points from here on, although they aren't points in my use case; they are roughly X/Y related, though), I need to compute all valid combinations for a given rule.
The rule might be something like
"Every point included excludes every other point with the same X coordinate"
"Every point included excludes every other point with an odd X coordinate"
I hope and expect that this fact leads to an improvement in the selection process, but my math skills are just being resurrected as I type, and I'm unable to come up with an elegant algorithm.
The set of points (N) starts small but soon outgrows 64 (which rules out the "use a long as a bitmask" solutions).
I'm doing this in C#, but solutions in any language are fine as long as they explain the underlying idea.
Thanks.
Update in response to Vlad's answer:
Maybe my idea to generalize the question was a bad one. My rules above were invented on the fly and are just placeholders. One realistic rule would look like this:
"Every point included excludes every other point in the triangle above the chosen point"
By that rule, choosing (2,1) would exclude
(2,2) - directly above
(1,3) (2,3) (3,3) - next line
and so on
So the rules are fixed, not general. They are unfortunately more complex than the X/Y samples I initially gave.
How about "the x coordinate of every point included is the exact sum of some subset of the y coordinates of the other included points". If you can come up with a fast algorithm for that simply-stated constraint problem then you will become very famous indeed.
My point being that the problem as stated is so vague as to admit NP-complete or NP-hard instances. Constraint optimization problems are incredibly hard; if you cannot put extremely tight bounds on the problem, then it very rapidly becomes impossible for machines to analyze in polynomial time.
For some special rule types your task seems to be simple. For example, for your example rule #1 you need to choose a subset of all possible values of X, and then for each value from the subset assign an arbitrary Y.
For generic rules I doubt that it's possible to build an efficient algorithm without any AI.
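As a concrete illustration of the special case above (rule #1): group the points by X; every valid combination then picks at most one point from each group. A hedged sketch, with the tuple representation assumed for illustration:

using System.Collections.Generic;
using System.Linq;

class Rule1Combinations
{
    static IEnumerable<List<(int X, int Y)>> Valid(IEnumerable<(int X, int Y)> pts)
    {
        // One group per X value; a valid set takes 0 or 1 point per group.
        var groups = pts.GroupBy(p => p.X).Select(g => g.ToList()).ToList();
        return Expand(groups, 0, new List<(int X, int Y)>());
    }

    static IEnumerable<List<(int X, int Y)>> Expand(
        List<List<(int X, int Y)>> groups, int i, List<(int X, int Y)> acc)
    {
        if (i == groups.Count)
        {
            yield return new List<(int X, int Y)>(acc); // snapshot of one combination
            yield break;
        }
        // Option 1: take no point with this X value.
        foreach (var r in Expand(groups, i + 1, acc)) yield return r;
        // Option 2: take exactly one point from this group.
        foreach (var p in groups[i])
        {
            acc.Add(p);
            foreach (var r in Expand(groups, i + 1, acc)) yield return r;
            acc.RemoveAt(acc.Count - 1); // backtrack
        }
    }
}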
My understanding of the problem is: given a method bool property( Point x ) const, find all points in the set for which property() is true. Is that reasonable?
The brute-force approach is to run all the points through property() and store the ones that return true. The time complexity of this would be O(N), where (a) N is the total number of points and (b) the property() method is O(1). I guess you are looking for improvements over O(N). Is that right?
For certain kinds of properties it is possible to improve on O(N), provided a suitable data structure is used to store the points and suitable pre-computation (e.g. sorting) is done. However, this may not be true for an arbitrary property.
I have a directed graph with weighted edges (weights are all positive).
Now, I'm looking for an efficient algorithm or code (specifically, C#) to find the longest path between two given vertices.
This is exactly equivalent to a shortest-path problem with all weights negated. To use that, you need to verify that there are no negative-weight cycles (which in your original graph is equivalent to verifying that there are no positive-weight cycles). The best bet is to take the additive inverse of the weights, run Bellman-Ford, and then take the additive inverse of the result.
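A minimal sketch of that approach, assuming a simple edge-list representation (the Edge record and all names here are illustrative, not an established API):

using System;
using System.Collections.Generic;

class LongestPathViaBellmanFord
{
    record Edge(int From, int To, double Weight);

    static double LongestPath(int vertexCount, List<Edge> edges, int source, int target)
    {
        var dist = new double[vertexCount];
        Array.Fill(dist, double.PositiveInfinity);
        dist[source] = 0;

        // Standard Bellman-Ford relaxation, run on the negated weights.
        for (int i = 0; i < vertexCount - 1; i++)
            foreach (var e in edges)
                if (dist[e.From] - e.Weight < dist[e.To])
                    dist[e.To] = dist[e.From] - e.Weight;

        // One extra pass: any further improvement means a positive-weight
        // cycle in the original graph, so no longest path exists.
        foreach (var e in edges)
            if (dist[e.From] - e.Weight < dist[e.To])
                throw new InvalidOperationException("positive-weight cycle reachable");

        return -dist[target]; // negate back to get the longest distance
    }
}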
David Berger's answer is correct, unless you mean a simple path, where each vertex can occur at most once, in which case Bellman-Ford will not give the longest path. Since you say the weights are positive, it's not possible for a longest path to exist when the graph has a cycle (reachable from the source), unless you mean simple path. The longest simple path problem is NP-complete. See Wikipedia.
So, let's assume you mean a directed acyclic graph (DAG). In linear time, you can compute the longest path to each vertex v from the start vertex s, given that you already know the longest path from s to u for every u with a direct edge u->v. This is easy: do a depth-first search on your directed graph and compute the longest paths for the vertices in reverse order of finishing them. You can also detect back edges while you DFS using 3-color marking (vertices that are open but not yet finished are gray). See Wikipedia again for more information. Longest/shortest path finding on a DAG is sometimes called the Viterbi algorithm (even though it was originally given for a specific type of DAG).
I'd attempt the linear time dynamic programming solution first. If you do have cycles, then Bellman-Ford won't solve your problem anyway.
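A rough sketch of that linear-time DAG solution, with the adjacency-list layout assumed for illustration; it computes longest distances from s by relaxing edges in reverse DFS finishing order and throws on a back edge (cycle):

using System;
using System.Collections.Generic;

class DagLongestPath
{
    // adj[u] = list of (To, W) edges u -> To with weight W.
    static double[] LongestFrom(List<List<(int To, double W)>> adj, int s)
    {
        int n = adj.Count;
        var order = new List<int>();
        var state = new int[n]; // 0 = white, 1 = gray (open), 2 = black (finished)

        void Dfs(int u)
        {
            state[u] = 1;
            foreach (var (v, _) in adj[u])
            {
                if (state[v] == 1) throw new InvalidOperationException("back edge: graph has a cycle");
                if (state[v] == 0) Dfs(v);
            }
            state[u] = 2;
            order.Add(u); // finishing order
        }
        Dfs(s);
        order.Reverse(); // reverse finishing order = topological order

        var dist = new double[n];
        Array.Fill(dist, double.NegativeInfinity); // unreachable by default
        dist[s] = 0;
        foreach (var u in order)
            if (!double.IsNegativeInfinity(dist[u]))
                foreach (var (v, w) in adj[u])
                    dist[v] = Math.Max(dist[v], dist[u] + w);
        return dist;
    }
}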
Please refer to the QuickGraph project as it provides .NET data structures implementing graphs, and also provides algorithms to operate on such data structures. I'm certain the algorithm you are looking for is implemented in the library.
Just in case it helps anyone (I was looking for this for a while but couldn't find it): I used QuickGraph to solve a problem where I had to find the longest path that also complies with a certain rule. It is not very elegant, as I resorted to a bit of brute force once I got the first result, but here it is.
https://github.com/ndsrf/random/blob/master/LongestSkiPath/LongestSkiPath/SkiingResolver.cs#L129-L161
To get the longest path, I use a shortest-path algorithm with the edge weights negated (multiplied by -1). Then, to find subsequent longest paths, I start removing edges from that longest path to see if I manage to get a "better" (based on the conditions of the problem) longest path.