How to improve performance of TChart in real-time plotting? - C#

I am developing an application that has at minimum one TChart with 4 FastLine series in it. The maximum number of TCharts is 16, depending on a few criteria, and each FastLine contains a different number of samples in different cases. Here is the problem: if the sample count is below roughly 200-250, I can see the graph being plotted in real time. As the sample count increases, the plotting delay grows very long. So if we have 1000 samples per FastLine, that is 4000 samples in the chart in total, and there can be up to 16 charts like this. I noticed that the delay depends heavily on the number of samples each FastLine contains and on the number of FastLines in the chart.
I have already made the change regarding AutoRepaint = false: I have Chart1.AutoRepaint = false and Series1.AutoRepaint = false (and the same for series 2, 3 and 4).
Each time I add a value to a FastLine I have to call Chart1.Refresh() manually, which again takes a lot of time because it refreshes all 4 FastLines in the chart.
The delay may also be related to Series1.Add(), but I am not sure.
Is there anything I can do to avoid the delays?
Here is the code I am using:
public void PlotActualValuesUpToSampleNumber(int SampleNumber)
{
    int DataPoint;
    Chart1.AutoRepaint = false;
    for (DataPoint = LastActualSamplePlotted + 1; DataPoint <= SampleNumber; DataPoint++)
    {
        if (Imp.ThisSampleContainsFault[ChannelNumber, DataPoint])
        {
            Chart1.Panel.Gradient.Visible = false;
            Chart1.Panel.Color = Imp.ChartBackgroundColorIfFault;
        }
        Series4.Add(Imp.ActualValue[ChannelNumber, DataPoint], "", Color.Yellow);
        LastActualSamplePlotted++;
    }
    Chart1.Refresh();
    Chart1.AutoRepaint = true;
}
I already referred to these links.
http://www.teechart.net/reference/articles/VCLRealtime.htm
http://www.teechart.net/support/viewtopic.php?p=47388
http://www.teechart.net/support/viewtopic.php?t=5127
http://stackoverflow.com/questions/11977423/performance-issue-with-tchart
but no success.

Performance is mainly affected by the amount of data the chart has to handle. Different code solutions and environments may also be pretty influential here. So my suggestions are:
Have you tried injecting data arrays directly into the series, as in the second example Sandra posted here? This is the same principle as in the VCL Real-time Charting article (a rough sketch of this approach follows below).
Have you tried the Direct2D version of TeeChart? You can find a white paper regarding its performance here.
I'd strongly recommend checking the examples in the sections below in the Features Demo available in TeeChart's program group:
*All Features\Welcome !\Chart styles\Standard\Fast Line*
All Features\Welcome !\Speed
If you still don't get the results you expected please send us a simple example project we can run "as-is" to reproduce the problem here.
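As a rough, untested sketch of the array-injection idea from the first suggestion (the chart and series names are illustrative, and it assumes the Add(double[]) overload and AutoRepaint property exposed by current TeeChart.NET versions):

```csharp
// Sketch only: add a whole block of samples in one call and repaint once per block,
// instead of calling Add() and Refresh() once per sample.
private void AppendBlock(Steema.TeeChart.TChart chart,
                         Steema.TeeChart.Styles.FastLine series,
                         double[] newValues)
{
    chart.AutoRepaint = false;   // suspend painting while the data is injected
    series.Add(newValues);       // one bulk call instead of newValues.Length calls
    chart.AutoRepaint = true;
    chart.Refresh();             // repaint once per block of samples
}
```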

Related

Smoothing the results of a series server-side to return fewer data points (.NET)

I need to show a chart from data returned from an API.
This API could potentially return millions of results, but doing so would tax the server heavily.
Thus, I'm looking for a way to return fewer results while still showing the trend in the chart. Basically, I'm looking to "smooth" the line of the graph by showing only the relevant points.
Is there a .NET library that could help with this? Or perhaps a "smoothing" function that takes a limit on the number of points to return?
What would be your target number of results? One approach would be to just take a sampling of the points: for every 10 points you have, return 1, for instance. In that case, you could use LINQ to accomplish this: Sampling a list with linq
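A minimal sketch of that every-Nth-point idea (the class and method names are illustrative, not from the linked answer):

```csharp
using System.Collections.Generic;
using System.Linq;

static class Downsampling
{
    // Keep one point out of every 'every' points (every 10th by default).
    public static IEnumerable<double> EveryNth(IEnumerable<double> points, int every = 10)
    {
        return points.Where((point, index) => index % every == 0);
    }
}
```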
This doesn't address the "showing only relevant points" part of your question, though. That's a little harder to solve programmatically. What does "relevant" mean in your data? Exceeding a certain deviation?
So maybe a moving average of your data would work: take 10 points at a time, average them, and return 1 point. Like this example: Smoothing data from a sensor
With either of those approaches, you can trade off accuracy and 'smoothness' by varying the "10" in the above examples. The higher the number, the "smoother" your result.
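A corresponding sketch of the block-average idea (again, names are illustrative; the block size of 10 is the "10" referred to above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Smoothing
{
    // Average each block of 'blockSize' points into a single point.
    public static IEnumerable<double> BlockAverage(IReadOnlyList<double> points, int blockSize = 10)
    {
        for (int i = 0; i < points.Count; i += blockSize)
        {
            int count = Math.Min(blockSize, points.Count - i);
            yield return points.Skip(i).Take(count).Average();
        }
    }
}
```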

Spikes when offsetting with C# clipper

I am getting quite a few spikes when offsetting polygons with the Clipper library. This is unfortunately not acceptable for my use case, and I have no idea how to get rid of them. I have tried all the JoinType settings but could not achieve anything. Any help would be greatly appreciated.
My application layers a model and calculates the outline polygons. It then also has to offset the outlines. Layers with a lot of curves in them tend to get one or more spikes each, such as this:
Now this does not seem too bad, but once it happens on a lot of layers the model ends up like this:
It is important to note that without offsetting the outlines I get none of these spikes.
Here is a file containing the input polygons:
http://sdrv.ms/H7ysUC
Here is a file containing the output polygons:
http://sdrv.ms/1fLoZjT
The parameters for the operation were an offset with the jtRound JoinType and the default limit. The delta was -25000. I have also tried all the other JoinTypes with limits ranging from 0 to 1000, but they all created the exact same spike. The other JoinTypes did, however, produce some additional strange effects.
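For reference, the offset call described above would look roughly like this (a sketch only: it uses the ClipperOffset class from Clipper 6.x, while older releases expose a static Clipper.OffsetPolygons() helper instead, and outlinePolygons stands in for the input polygons from the linked file):

```csharp
using System.Collections.Generic;
using ClipperLib;

// outlinePolygons: the input polygons (in the question these come from the linked file).
var outlinePolygons = new List<List<IntPoint>>();

// Shrink the layer outlines by 25000 internal units using round joins.
var offsetter = new ClipperOffset();
offsetter.AddPaths(outlinePolygons, JoinType.jtRound, EndType.etClosedPolygon);

var shrunk = new List<List<IntPoint>>();
offsetter.Execute(ref shrunk, -25000.0); // negative delta offsets inwards
```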
OK, I can confirm there's a bug. It happens when adjacent polygon edges are almost collinear.
Here's the fix (which hasn't been heavily tested yet), at about line 4220 in clipper.cs:
void OffsetPoint(JoinType jointype)
{
    m_sinA = (normals[m_k].X * normals[m_j].Y - normals[m_j].X * normals[m_k].Y);
    if (Math.Abs(m_sinA) < 0.00005) return; //ADD THIS LINE (todo - check this!)
    else if (m_sinA > 1.0) m_sinA = 1.0;
    else if (m_sinA < -1.0) m_sinA = -1.0;
Note: 0.00005 is just a value that's close enough to zero to remove the spike in your supplied sample, but it may need to be readjusted with further testing.

A few doubts about charts in C# .NET

I have a few questions about charts in .NET with C#. I'm currently working on an app that will display data coming from a serial port in real time.
I'm using a simple chart from the .NET toolbox with a spline series, and there are two main features I want to achieve.
The first one: I'd like the incoming data points to be displayed on the right part of the chart and to move to the left as new data arrives. I don't know how to describe it properly; I hope you get the main idea, since I can't post images yet.
I've achieved this kind of behaviour by reloading all the data points each time I refresh the chart: at first they are all empty, and as data arrives I put a bigger and bigger queue of values at the end.
To clarify, the code looks like this:
chartAccelerationX.Series["XSeries"].Points.Clear();
dataPointTable = dataPointQueue.ToArray();
for (int i = 0; i < 1000; i++)
{
    DataPoint dataPointX = new DataPoint();
    if (1000 - dataPointTable.Length < i)
    {
        // Rightmost slots hold the most recent samples from the queue.
        dataPointX.SetValueXY(i + 1, dataPointTable[i - 1000 + dataPointTable.Length].x);
    }
    else
    {
        // Leftmost slots stay empty until enough samples have arrived.
        dataPointX.IsEmpty = true;
    }
    chartAccelerationX.Series["XSeries"].Points.Add(dataPointX);
}
chartAccelerationX.Update();
It works pretty well, but as I said, I'm creating all 1000 data points every time I update the chart (and I do that every 100 ms), so it's probably disastrous in terms of performance (I'll need about 6-8 charts in total), and it's limited to an exact number of data points (1000 here).
Isn't there an easier way to achieve something like this, with the number of data points growing dynamically, together with the second feature I wanted: an auto-matching scroll bar showing, for example, only 50 records at once?
I've been using a scroll bar as well, but it was just set to show 50 records out of 1000, and right now I can scroll through all of those empty data points.
I can probably make the scroll bar appear only after a certain number of data points are on the chart and update it every time data is added, but maybe there's an easier way?
I hope you understood what I was trying to say, I'm still working on my English.
If you can use WPF, there is http://dynamicdatadisplay.codeplex.com/ (and, for the older WPF version, http://dynamicdatadisplay.codeplex.com/wikipage?title=D3v1), which is pretty good. I've used it for some fairly high-speed data sampling before.
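If you stay with the WinForms Chart control from the question, a common alternative to clearing and re-adding all 1000 points on every tick is to append the newest sample and trim the oldest. A minimal sketch (the series name "XSeries" and the 1000-point window come from the question; the rest is illustrative):

```csharp
using System.Windows.Forms.DataVisualization.Charting;

// Append one new sample and keep only the most recent maxPoints samples.
private void AppendSample(Chart chart, double value, int maxPoints = 1000)
{
    DataPointCollection points = chart.Series["XSeries"].Points;
    points.AddY(value);                 // the newest sample appears on the right
    while (points.Count > maxPoints)
        points.RemoveAt(0);             // drop the oldest sample on the left
    chart.ResetAutoValues();            // recalculate the automatic axis ranges
    chart.Update();
}
```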

C# combobox disable item alternative?

I am trying to write a kiosk program for my print center at a school, for students to select a size and media type and then have it generate a price. I am currently doing this with radio buttons, which works fine, but we are adding many more options and a drop-down list box would be more appropriate. Also, my code for calculating the price is out of control. I would like to set this up so that calculating the price is easy. The setup is something like this:
File1 - Paper Size (9 options) - Media Type (18 options)
File2, File 3, etc.
I was going to store this in a 3D array filename[]papersize[]media[] for processing the price.
The problem is that not all media types are available in all sizes. I see that you cannot disable items in the list. If you pick one size, I can selectively populate the other drop-down with or without items, but that changes my index numbers. If I could disable items, I could keep the same indexes and make an easy loop for processing prices. As it is now, I would have to manually specify a loop for each paper size, since the media types end up at different indexes.
I hope this makes sense; I am not really a programmer, I am just trying to make something simple to improve our workflow and accuracy at the print center. I can provide a screenshot of the old program and a mockup of my new one if it would help. Can anyone think of a more elegant solution for this?
Thanks!!
EDIT
Yikes... OK, this is turning out to be more difficult than I expected. Thanks everyone for your input; it is much appreciated. I did not really expect any responses and there were a lot, thank you. I attempted the table idea mentioned below, but I am not exactly sure how to implement it; I will comment on that post with what I tried. Let me provide more detail in case someone else has another idea.
For example of what I am trying to do:
Size = 8.5x11 has media = matte, double sided matte, luster, gloss, acetate, resume
Size = 11x17 has media = same minus acetate and resume
and so on, up to a 60" roll with backlit media and all sorts of stuff.
Price for 8.5x11 is 1.50 regardless of paper, and then each paper has its own price.
Price for 11x17 is 3.00, and each paper has its own price, which is higher than its 8.5x11 counterpart.
8.5x11 matte = .25
11x17 matte = .50
8.5x11 matte total = 1.75
11x17 matte total = 3.50
I am trying to do this in as few steps as possible. Currently I have radio buttons, which take up a lot of screen real estate, and do things like: when 11x17_1.Checked, acetate_1.Disable, etc. Also, for calculating the price I have hundreds of lines of code doing things like:
if (8.5x11_1.Checked)
{
    if (matte_1.Checked)
        price = 8.5x11matte_1;
    if (luster_1.Checked)
        price = 8.5x11luster_1;
    ...etc.
}
Rolls require more data (height) to be processed, as we charge by linear inch for those. Currently, for each file I just have a height box that they are required to fill out. I could just put a height field next to each file in my new version. Then, if they select a roll, the height box must be filled out, which requires more IFs... of which I currently have hundreds. Any thoughts on a more elegant way to do this?
I just don't have the programming background to simplify this, but I know this can probably be done in about 10 lines of code using an array and drop-down lists.
It has been a long time since I used arrays, but I was thinking of something like:
Selection[file_1][combobox_size.Index][combobox_media.Index]
I think I would have to manually define each array value since the prices are arbitrary?
[0][0][0] = 1.75
[0][0][1] = 1.75
[0][0][2] = 2.00
And so on.
My WinForm would have, let's say, 12 rows for them to enter a file name and then pick from the drop-down lists. If filename != null, then I will process a price for the file and its selections.
So if file 1 was 11x17 gloss, my array entry would be something like:
[0][1][3], which I would have pre-defined with a value of $4.00, for example.
If it is a roll then I would just multiply by the required height box.
Is this logic sound or grossly inefficient?
EDIT #3
OK, almost there, I think. Sadly, I was unable to figure out the other solutions offered by the community, but I wrote a get_index function that looks like this:
public static int get_index(string index)
{
    if (index == "Matte")
        return (1);
    ....
    if (index == "Luster")
        return (3);
    ....
    else
        return (0);
}
In my main program I define prices like this:
for (int x = 0; x < filenum; x++)
{
    pricegrid[x, 0, 0] = 1.75; // 8.5x11 Resume
    pricegrid[x, 0, 1] = 1.75; // 8.5x11 Matte
    pricegrid[x, 0, 2] = 1.75; // 8.5x11 Double Sided Matte
    pricegrid[x, 0, 3] = 2.35; // 8.5x11 Luster
    .....
}
Then, to calculate the price, I am doing something like this, calling that get_index function:
private void calculate_price()
{
    getindex[0] = get_index(media1.SelectedItem.ToString());
    ....
}
You should populate your ComboBox dynamically, as you do now.
Instead of using SelectedIndex you can use SelectedValue, which does not depend on the number of elements.
See for example: Using ValueMember in ComboBox
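A minimal sketch of that suggestion (the MediaOption class and the comboMedia and availableMedia names are illustrative, not from the post): each item carries a stable Id, so the price lookup no longer depends on which items happen to be visible.

```csharp
using System.Collections.Generic;
using System.Windows.Forms;

public class MediaOption
{
    public int Id { get; set; }       // stable key used for the price lookup
    public string Name { get; set; }
}

public static class MediaBinding
{
    public static void BindMedia(ComboBox comboMedia, List<MediaOption> availableMedia)
    {
        comboMedia.DataSource = availableMedia;   // only the media valid for the chosen size
        comboMedia.DisplayMember = "Name";
        comboMedia.ValueMember = "Id";
    }
}

// The price lookup then keys off the stable Id rather than the visible index:
// int mediaId = (int)comboMedia.SelectedValue;
```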
You shouldn't use one three-dimensional array. You will need 3 tables for that. Store all your types in a database.
The 3 tables are:
1. paper
2. media
3. papermedia
So you populate the first dropdown with paper sizes.
Then, when an item is selected in paper sizes, you run a query to populate the media dropdown (joining through the intersection table). This way you will only show media that is available for that paper size. Or you can do it the other way around.
Does this answer your question?
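To make the three-table idea concrete, here is a small in-memory sketch using LINQ (a real database would be queried the same way; all names and prices here are illustrative, and selectedPaperId would come from the paper-size dropdown):

```csharp
using System.Linq;

static class PaperMediaLookup
{
    static void Example()
    {
        // media and the papermedia intersection table (which also carries the price).
        var media = new[] { new { MediaId = 1, Name = "Matte" }, new { MediaId = 2, Name = "Acetate" } };
        var paperMedia = new[]
        {
            new { PaperId = 1, MediaId = 1, Price = 1.75 },
            new { PaperId = 1, MediaId = 2, Price = 2.00 },
            new { PaperId = 2, MediaId = 1, Price = 3.50 },   // 11x17 simply has no acetate row
        };

        // Only the media valid for the selected paper size, joined through the intersection table.
        int selectedPaperId = 2;
        var mediaForSize = from pm in paperMedia
                           join m in media on pm.MediaId equals m.MediaId
                           where pm.PaperId == selectedPaperId
                           select new { m.MediaId, m.Name, pm.Price };
    }
}
```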
There are several ways of doing it.
You could create a class FileWithDetail that would store
[class File / class PaperSize / class MediaType]
Initialize your list at the start of your app:
List<FileWithDetail> LstFileWithDetail = new List<FileWithDetail>() { ... };
Then, with LINQ to Objects, you can easily query the object and bind it to your comboboxes according to the selected value:
var LstPaperSize = from p in LstFileWithDetail
                   where p.FileName == SelectedFileName
                   select ...
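A minimal sketch of what that class and query could look like (the property names and the comboSize control are illustrative assumptions, not from the original answer):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

public class FileWithDetail
{
    public string FileName { get; set; }
    public string PaperSize { get; set; }
    public string MediaType { get; set; }
    public decimal Price { get; set; }
}

public static class FileDetailBinding
{
    // Bind the paper-size dropdown to the sizes recorded for the selected file.
    public static void BindSizes(ComboBox comboSize, List<FileWithDetail> files, string selectedFileName)
    {
        var sizesForFile = (from f in files
                            where f.FileName == selectedFileName
                            select f.PaperSize).Distinct().ToList();
        comboSize.DataSource = sizesForFile;
    }
}
```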

Optimizing a Recursive Function for Very Large Lists .Net

I have built an application that is used to simulate the number of products that a company can produce in different "modes" per month. This simulation is used to aid in finding the optimal series of modes to run in for a month to best meet the projected sales forecast for the month. This application had been working well, until recently, when the plant was modified to run in additional modes. It is now possible to run in 16 modes. For a month with 22 work days this yields 9,364,199,760 possible combinations, up from 8 modes in the past, which would have yielded a mere 1,560,780 possible combinations. The PC that runs this application is on the old side and cannot handle the number of calculations before an out-of-memory exception is thrown. In fact, the entire application cannot support more than 15 modes because it uses integers to track the number of combinations, which exceeds the upper limit of an integer. Barring that issue, I need to do what I can to reduce the memory utilization of the application and optimize it to run as efficiently as possible, even if it cannot achieve the stated goal of 16 modes. I was considering writing the data to disk rather than storing the list in memory, but before I take on that overhead, I would like to get people's opinion on the method to see if there is any room for optimization there.
EDIT
Based on suggestions from a few people to consider something more academic than merely calculating every possible answer, below is a brief explanation of how the optimal run (combination of modes) is chosen.
Currently the computer determines every possible way that the plant can run for the number of work days in the month. For example, 3 modes for a maximum of 2 work days would result in the combinations (where the number represents the mode chosen) (1,1), (1,2), (1,3), (2,2), (2,3), (3,3). In each mode a product is produced at a different rate; for example, in mode 1, product x may produce at 50 units per hour, while product y produces at 30 units per hour and product z produces at 0 units per hour. Each combination is then multiplied by work hours and production rates. The run that produces numbers most closely matching the forecasted value for each product for the month is chosen. However, because in some months the plant does not meet the forecasted value for a product, the algorithm increases the priority of that product for the next month to ensure that by the end of the year the product has met its forecast. Since warehouse space is tight, it is also important that products are not overproduced too much.
Thank you
private List<List<int>> _modeIterations = new List<List<int>>();

private void CalculateCombinations(int modes, int workDays, string combinationValues)
{
    List<int> _tempList = new List<int>();
    if (modes == 1)
    {
        combinationValues += Convert.ToString(workDays);
        string[] _combinations = combinationValues.Split(',');
        foreach (string _number in _combinations)
        {
            _tempList.Add(Convert.ToInt32(_number));
        }
        _modeIterations.Add(_tempList);
    }
    else
    {
        for (int i = workDays + 1; --i >= 0; )
        {
            CalculateCombinations(modes - 1, workDays - i, combinationValues + i + ",");
        }
    }
}
This kind of optimization problem is difficult but extremely well studied. You should probably read up on it in the literature rather than trying to re-invent the wheel. The keywords you want to look for are "operations research" and "combinatorial optimization problem".
It is well known in the study of optimization problems that finding the optimal solution to a problem is almost always computationally infeasible as the problem grows large, as you have discovered for yourself. However, it is frequently the case that finding a solution guaranteed to be within a certain percentage of the optimal solution is feasible. You should probably concentrate on finding approximate solutions. (After all, your sales targets are already just educated guesses, so finding the optimal solution was always going to be impossible; you haven't got complete information.)
What I would do is start by reading the wikipedia page on the Knapsack Problem:
http://en.wikipedia.org/wiki/Knapsack_problem
This is the problem of "I've got a whole bunch of items of different values and different weights, I can carry 50 pounds in my knapsack, what is the largest possible value I can carry while meeting my weight goal?"
This isn't exactly your problem, but clearly it is related -- you've got a certain amount of "value" to maximize, and a limited number of slots to pack that value into. If you can start to understand how people find near-optimal solutions to the knapsack problem, you can apply that to your specific problem.
You could process each permutation as soon as you have generated it, instead of collecting them all in a list first:
public delegate void Processor(List<int> args);

private void CalculateCombinations(int modes, int workDays, string combinationValues, Processor processor)
{
    if (modes == 1)
    {
        List<int> _tempList = new List<int>();
        combinationValues += Convert.ToString(workDays);
        string[] _combinations = combinationValues.Split(',');
        foreach (string _number in _combinations)
        {
            _tempList.Add(Convert.ToInt32(_number));
        }
        processor.Invoke(_tempList);
    }
    else
    {
        for (int i = workDays + 1; --i >= 0; )
        {
            CalculateCombinations(modes - 1, workDays - i, combinationValues + i + ",", processor);
        }
    }
}
I am assuming here that your current pattern of work is something along the lines of:
CalculateCombinations(initial_value_1, initial_value_2, initial_value_3);
foreach (List<int> list in _modeIterations)
{
    ... process the list ...
}
With the direct-process approach, this would be:
private void ProcessPermutation(List<int> args)
{
    ... process ...
}

... somewhere else ...
CalculateCombinations(initial_value_1, initial_value_2, initial_value_3, ProcessPermutation);
I would also suggest that you try to prune the search tree as early as possible; if you can already tell that certain combinations of the arguments will never yield something that can be processed, you should catch those during generation and avoid the recursion altogether, where possible.
In newer versions of C#, generating the combinations using an iterator (yield) function might let you retain the original structure of your code. I haven't really used this feature yet, so I cannot comment on it.
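For what it's worth, a rough sketch of that yield-based version (untested and not from the original answer) could enumerate one day allocation at a time instead of materialising the whole list:

```csharp
using System.Collections.Generic;

static class CombinationGenerator
{
    // Enumerate every way of splitting 'workDays' days across 'modes' modes,
    // yielding one allocation at a time so only one array is alive at once.
    public static IEnumerable<int[]> Combinations(int modes, int workDays)
    {
        if (modes == 1)
        {
            yield return new[] { workDays };   // the last mode gets all the remaining days
            yield break;
        }
        for (int i = workDays; i >= 0; i--)
        {
            foreach (int[] rest in Combinations(modes - 1, workDays - i))
            {
                var allocation = new int[rest.Length + 1];
                allocation[0] = i;
                rest.CopyTo(allocation, 1);
                yield return allocation;
            }
        }
    }
}

// Usage: foreach (int[] allocation in CombinationGenerator.Combinations(16, 22)) { /* score it, keep only the best */ }
```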
The problem lies more in the brute-force approach than in the code itself. It's possible that brute force is the only way to approach the problem, but I doubt it. Chess, for example, cannot be solved by brute force, yet computers play it quite well using heuristics to discard the less promising lines and focus on the good ones. Maybe you should take a similar approach.
On the other hand, we need to know how each "mode" is evaluated in order to suggest any heuristics. In your code you're only computing all possible combinations, which will not scale anyway if the modes go up to 32... even if you store them on disk.
if (modes == 1)
{
    List<int> _tempList = new List<int>();
    combinationValues += Convert.ToString(workDays);
    string[] _combinations = combinationValues.Split(',');
    foreach (string _number in _combinations)
    {
        _tempList.Add(Convert.ToInt32(_number));
    }
    processor.Invoke(_tempList);
}
Everything in this block of code is executed over and over again, so no line in it should use memory without freeing it. The most obvious place to avoid memory craziness is to write combinationValues out to disk as it is processed (i.e. use a FileStream, not a string). I think that in general, doing string concatenation the way you are doing it here is bad, since every concatenation results in memory sadness. At least use a StringBuilder (see Back to Basics, which discusses the same issue in terms of C). There may be other places with issues, though. The simplest way to figure out why you are getting an out-of-memory error may be to use a memory profiler (download link from download.microsoft.com).
By the way, my tendency with code like this is to have a single global List object that gets Clear()ed, rather than a temporary one that is created over and over again.
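Combining those two points, one way to avoid both the string handling and the repeated temporary lists is to carry a single shared List<int> down the recursion and backtrack. This is only a sketch (it reuses the Processor delegate from the earlier answer, and the callback must copy the list if it wants to keep a permutation):

```csharp
// Shared-list variant: no strings are built or split, and no per-call List allocations.
private static void CalculateCombinations(int modes, int workDays, List<int> current, Processor processor)
{
    if (modes == 1)
    {
        current.Add(workDays);          // the last mode gets the remaining days
        processor(current);             // processed in place; copy it if it must be kept
        current.RemoveAt(current.Count - 1);
        return;
    }
    for (int i = workDays; i >= 0; i--)
    {
        current.Add(i);
        CalculateCombinations(modes - 1, workDays - i, current, processor);
        current.RemoveAt(current.Count - 1);   // backtrack before trying the next value
    }
}
```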
I would replace the List objects with my own class that uses preallocated arrays to hold the ints. I'm not really sure about this right now, but I believe that each integer in a List is boxed, which means much more memory is used than with a simple array of ints.
Edit: On the other hand, it seems I was mistaken: Which one is more efficient: List<int> or int[]
