I have a real-time FIFO chart built with SciChart (pretty much taken from their published example). As it renders, it starts out completely zoomed in very close, and as the line is drawn, it zooms out to accommodate the full size of the line.
<s:SciChartSurface.XAxis>
    <s:NumericAxis x:Name="axisX" MinHeight="50" AutoRange="Always" AxisTitle="{Binding Path=XAxisTitle}" DrawMinorGridLines="False" DrawMinorTicks="False" TextFormatting="0.##">
        <s:NumericAxis.GrowBy>
            <s:DoubleRange Max="0.1" Min="0.1" />
        </s:NumericAxis.GrowBy>
    </s:NumericAxis>
</s:SciChartSurface.XAxis>
However, what I would like is for it to begin zoomed out by a certain amount already - e.g. the X axis would already be displaying 0-10, and as the line is drawn it proceeds across the screen, only zooming if the line happens to get bigger than the space provided.
I've tried setting VisibleRangeLimit, but while this does allow me to define the range of the chart area, the zoom doesn't kick in when the curve gets too big (so it literally goes "off the chart").
How can this be accomplished?
The reason for this is that the FIFO example in SciChart WPF uses XAxis AutoRange set to Always to scale the axis to fit the data. When the example starts, even if the FIFO buffer has a capacity of 10,000 points, it contains no data, hence the axis is scaled small to accommodate what data there is.
There are two ways around this:

1. Pre-fill your FIFO DataSeries with X = xValue, Y = double.NaN. Given enough values, the chart will think it has to draw all these points, so the XAxis will scale accordingly.
2. Take control of XAxis.VisibleRange yourself (do not use AutoRange). In this case, you need to set XAxis.VisibleRange to a window sized to accommodate N points, and as you update the data, update the window.

Both techniques are sketched in the code after this list.
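Here is a minimal sketch of both, assuming the standard SciChart WPF API (XyDataSeries, FifoCapacity, DoubleRange); "axisX" refers to the axis declared in the XAML above:

// Technique (1): pre-fill the FIFO series so AutoRange sees a "full" buffer.
// NaN points draw nothing but still occupy X range.
var series = new XyDataSeries<double, double> { FifoCapacity = 1000 };
for (int i = 0; i < 1000; i++)
{
    series.Append(i * 0.01, double.NaN);
}

// Technique (2): drive the window yourself (AutoRange="Never" on the axis)
// and call something like this after appending each new point:
void UpdateVisibleWindow(double latestX, double windowSize)
{
    axisX.VisibleRange = new DoubleRange(latestX - windowSize, latestX);
}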
The FAQ 'How to create a StripChart in SciChart' demonstrates technique (2): how to update the VisibleRange of the XAxis to achieve scrolling behaviour.
Disclosure: I am the tech lead of the SciChart WPF team.
I want to measure the amount of empty space on a slide (in order to overcome slide overcrowding) in a PowerPoint add-in. Having access to each shape on a slide, I was planning to calculate the area each shape takes and then subtract it from the total area available. I was wondering if this is the most efficient method, or if I could use something else, e.g. image processing techniques.
Unless you know that the slide background will always be a plain/solid color, I don't think image processing techniques would help, and they would probably necessitate exporting each slide as an image, which would be more time-consuming than stepping through the shapes on each slide.
Summing the area occupied by each shape and comparing it to the overall slide size would be a good rough answer. To do a better job, you'd want to account for overlapping shapes; two squares, one atop the other, would only occupy the area of one of them, assuming they're the same size. You may also want to consider the shapes on each slide's layout, and you'd want to test placeholder shapes to see if they're empty or not; they occupy space in editing views, but if empty, won't appear in printouts or slide shows.
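If you go the shape-summing route, here is a hypothetical sketch of the overlap-aware calculation, assuming you can collect each shape's Left/Top/Width/Height from the interop API into a RectangleF (every shape is approximated by its bounding box):

using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Linq;

static float OccupiedArea(IEnumerable<RectangleF> shapeBounds)
{
    using (var region = new Region())
    {
        region.MakeEmpty();
        foreach (var bounds in shapeBounds)
            region.Union(bounds);   // overlapping shapes merge rather than double-count

        // GetRegionScans decomposes the region into non-overlapping rectangles.
        using (var identity = new Matrix())
            return region.GetRegionScans(identity).Sum(r => r.Width * r.Height);
    }
}

The empty-space fraction is then 1 - OccupiedArea(...) / (slideWidth * slideHeight).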
I'm trying to draw something in a System.Windows.Media.DrawingVisual, but I need to draw things in millimeter units. How can I do that?
In WPF, you can't even draw something in pixel units without at least some extra effort. WPF uses "device independent units" where each unit is 1/96th of an inch. Even that is only a theoretical relationship, as it depends on the display device correctly reporting its resolution, which in turn is dependent on the display, its configuration, and what the user has set e.g. in the "large fonts" setting (i.e. in the screen resolution settings, clicking the link that reads "Make text and other items larger or smaller").
All of these affect WPF's interpretation of the available display resolution information, which in turn affects how WPF chooses to render its "device independent" 1/96th-of-an-inch units.
The bottom line is that the link commenter Sheridan offered really is the closest you can come to displaying in millimeters, barring a lot of extra work and help from the user. By scaling your input units, intended as millimeters, by the factor provided (i.e. 96 / 25.4; in the expression you can see the 25.4 that converts millimeters to inches, then the 96 that converts inches to 1/96ths of an inch), you can convert your millimeters into the 96-DPI units that WPF uses natively.
Assuming the display is configured correctly (an optimistic assumption, but it does happen :) ), this will result in reasonably accurate presentation on the screen according to your desired millimeter-based dimensions.
Note that you can accomplish this scaling through the use of a transform on your rendered UI elements. The easiest thing to do would be to set the LayoutTransform property of the outermost container object where you want the millimeter-based rendering. Then you can just lay out your objects in that container using millimeter values for their location and size, and WPF will use the transform to present the container and the rendered objects within it at the scale you want.
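As a minimal sketch, assuming "container" is the outermost panel (say, a Canvas) you want to lay out in millimeters:

// Scale the container so 1 layout unit = 1 mm.
// 96 device-independent units per inch / 25.4 mm per inch ≈ 3.78 units per mm.
const double UnitsPerMm = 96.0 / 25.4;
container.LayoutTransform = new System.Windows.Media.ScaleTransform(UnitsPerMm, UnitsPerMm);

// Children can now be positioned and sized in millimeters,
// e.g. a 210 x 297 mm "A4" outline:
var page = new System.Windows.Shapes.Rectangle
{
    Width = 210,
    Height = 297,
    Stroke = System.Windows.Media.Brushes.Black,
    StrokeThickness = 0.5   // also interpreted in millimeters
};
container.Children.Add(page);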
ZedGraph super users, I have done my best working with ZedGraph to make an analog signal chart. I need some help with the finishing touches.
Currently the graph "grows" from left to right as new data comes in (no real-world data yet, just a random point every timer tick). If the data grows too large for the default zooming options, the data points are "compressed" on the graph, the right-hand side shows more white space, and the X axis scales larger.
What I would like is for the data to grow from right to left (basically "flip" the graph over the Y axis, so +x is to the left and -x is to the right). I would also like a "sliding window" so the graph shows only the newest data from the source (basically a 5-second sliding window).
Does ZedGraph have the ability to implement either of these features by default?
Otherwise I plan to negate all of the time stamps on the data (and I guess never show the X axis?) so that the data "grows" from right to left as it comes in. For the sliding window, I was going to keep only (5 sec / timeBetweenData) data points, removing the rest from the LineItem representing the signal and storing them (in case I want to show them to the user again). But if I don't have to do that, it would be nice.
So to fix this I did end up negating all of the time stamps so that the new data would "enter" the graph on the left and move to the right as it got older.
Then to create the sliding window effect I removed the oldest point in the graph for every point that I added.
Another unforeseen issue was that the GraphPane by default adds some padding before the minimum and after the maximum points. To fix this, I found the minimum and maximum points in my curve and pinned the axis scales to those values, as sketched below.
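A rough sketch of the whole approach, assuming the standard ZedGraph API (GraphPane, IPointListEdit) and a curve backed by a PointPairList; "secondsBetweenSamples" and "windowSeconds" (5.0 here) are assumed names:

private void OnNewSample(double timestamp, double value)
{
    GraphPane pane = zedGraphControl1.GraphPane;
    IPointListEdit points = (IPointListEdit)pane.CurveList[0].Points;

    // Negate the timestamp so the newest data enters on the left.
    points.Add(-timestamp, value);

    // Sliding window: keep only the newest windowSeconds of data.
    int maxPoints = (int)(windowSeconds / secondsBetweenSamples);
    while (points.Count > maxPoints)
        points.RemoveAt(0);   // index 0 holds the oldest sample

    // Pin the scale to the data extents to remove the default padding.
    pane.XAxis.Scale.Min = -timestamp;                   // newest (leftmost)
    pane.XAxis.Scale.Max = -(timestamp - windowSeconds); // oldest shown (rightmost)

    zedGraphControl1.AxisChange();
    zedGraphControl1.Invalidate();
}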
I have a chart that displays several lines showing signal strengths over a frequency band.
Each chart is composed of one 'area' and four 'series'. On the parent form there are several graphs like the one shown below. All of them are created dynamically and will have different widths.
What I am trying to do is add a tooltip or annotation (or something) when the mouse hovers over a specific area of the chart as shown in the mockup below:
If the mouse moved to the other side of the chart a different channel number and frequency would be shown in a box surrounding that area of the chart.
It doesn't have to be exactly as shown in the mockup although an outline would be preferred in order to show the user how wide the channel is regardless of the waveform shown in that area at the time. For example, the waveform shown above might only be 8MHz wide but channel 1 itself might have an allocation that is 10MHz wide (the device varies its bandwidth based on its offered load.)
The X axis is MHz and a channel is defined in terms of MHz so it would be ideal to define the outline in terms of the X axis instead of pixels.
Also, note that this is a realtime chart that is updated up to 10 times per second, so it would be best if the information did not have to be recalculated every time new data arrived.
I was able to combine a couple of items to make the following solution:
[Animated GIF of the finished result, captured with LICEcap]
The highlight is a rectangle filled in the 'OnPaint' method of the chart control.
The text is a simple TextAnnotation that is applied during the mousemove event.
It took quite a bit of coordinate conversion to get all the pieces in the right spot - especially the text. I needed to convert between pixels, position and value.
The first conversion was to pixels, in order to center the text using MeasureString. I then converted the pixel location back to an X axis value, and then to position, since the annotation requires 'position' coordinates. There is no function to convert directly from pixels to position, but there is pixels-to-value and value-to-position, which is the way I went.
I don't claim this to be the best or even a proper way to do it but it works. If anyone else has a better solution or a way to improve my code please post.
Here's my code for positioning the text:
// Pixel X of the channel's center frequency (the * 1000 converts the stored
// frequency to the axis units).
double temp = chart1.ChartAreas[0].AxisX.ValueToPixelPosition(Convert.ToDouble(ce.sChannelFrequency) * 1000);

// Measure the label so it can be centered on that pixel position.
using (System.Drawing.Graphics graphics = System.Drawing.Graphics.FromImage(new Bitmap(1, 1)))
{
    SizeF size = graphics.MeasureString(freq.Text, new Font("eurostile", 13, FontStyle.Bold, GraphicsUnit.Pixel));
    temp -= (size.Width / 2 + 10);
}
if (temp < 0) temp = 0;

// Annotations use 'position' coordinates, so go pixels -> value -> position.
temp = chart1.ChartAreas[0].AxisX.PixelPositionToValue(temp);
freq.X = chart1.ChartAreas[0].AxisX.ValueToPosition(temp);
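For completeness, here is a hypothetical sketch of the highlight half (the rectangle filled during painting). ValueToPixelPosition is only reliable while the chart is rendering, so this uses the chart's PostPaint event; "channelStartMHz" and "channelEndMHz" are assumed fields:

private void chart1_PostPaint(object sender, ChartPaintEventArgs e)
{
    if (!(e.ChartElement is ChartArea)) return;   // paint once, over the chart area
    ChartArea area = chart1.ChartAreas[0];

    // Convert the channel edges from axis values (MHz) to pixels so the
    // outline tracks the channel width regardless of the waveform under it.
    float left = (float)area.AxisX.ValueToPixelPosition(channelStartMHz);
    float right = (float)area.AxisX.ValueToPixelPosition(channelEndMHz);

    // Chart-area bounds in pixels, for the rectangle's vertical extent.
    RectangleF areaRect = e.ChartGraphics.GetAbsoluteRectangle(area.Position.ToRectangleF());

    using (var brush = new SolidBrush(Color.FromArgb(60, Color.Yellow)))
        e.ChartGraphics.Graphics.FillRectangle(brush, left, areaRect.Top, right - left, areaRect.Height);
}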
I do NOT want the system trying to scale my drawing, I want to do it entirely on my own as any attempt to squeeze/stretch the graphics will produce ugly results. The problem is that as the image gets bigger I want to add more detail rather than have it simply scale up.
Right now I'm looking at two sets of stripes. One is black/white, the other is black/white/white. The pen width is set to 1.
When the line is drawn horizontally it's correct. The same logic drawing vertical lines appears to be doing some antialiasing, bleeding the black onto the nearby white. The black/white/white doesn't look as good as the horizontal, the black/white looks more like medium++ gray/medium-- gray.
The same code is generating the coordinates in all cases, the transform logic is simply selecting what offset to apply where as I am only supporting orientations on the cardinals. Since there's no floating point involved I can't be looking at precision issues.
How do I get the system to leave my graphics alone???
(Yeah, I realize this won't work at very high resolution and eventually I'll have to scale up the lines. Over any reasonable on-screen zoom factor this won't matter, for printer use I'll have to play with it and see where I need to scale. The basic problem is that I'm trying to shoehorn things into too few pixels without just making blobs.)
Edit: There is no scaling going on. I'm generating a bitmap the exact size of the target window. All lines are drawn at integer coordinates. The recommendation of setting SmoothingMode to None changes the situation: Now the black/white/white draws as a very clear gray/gray/white and the black/white draws as a solid gray box. Now that this is cleaned up I can see some individual vertical lines that were supposed to be black are actually doing the same thing of drawing as 2-pixel gray bars. It's like all my vertical lines are off by 1/2 pixel--yet every drawing command gets only integers.
Edit again: I've learned more about the problem. The image is being drawn correctly but trashed when displayed to the screen. (Saving it to disk and viewing it on the very same monitor shows it drawn correctly.)
You really should let the system manage it for you. You have described a certain behavior that is specific to the hardware you are using. Given different hardware, the problem may not exist at all, or it may exist horizontally but not vertically, or may only exist at much smaller or much larger resolutions, etc. etc.
The basic problem you described sounds like the vertical lines are being drawn "between" vertical stacks of pixels, which is causing the system to draw an anti-aliased line. The alternative to anti-aliasing the line is to shift it. The problem with that is the lines will "jitter" or "jerk" if the image is moved around, animated, or scaled or transformed in any other way. Generally, jerk is MUCH less desirable than anti-aliasing because it is more distracting.
You should be able to turn off anti-aliasing using the SmoothingMode enum, or you could try to handle positioning yourself. Either way, you are trading anti-aliasing for jittery, jerky rendering during any movement or transformation.
Have a look at System.Drawing.Drawing2D.SmoothingMode. Setting it to 'Default' or 'None' should turn off anti-aliasing when doing line drawing. If you're talking about scaling an image without anti-aliasing effects, have a look at InterpolationMode. Specifically, you might wish to set it to 'NearestNeighbor', which will keep your rectangular blocks perfectly crisp. Note that you will see some odd effects if you scale your image by anything other than whole numbers.
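A small sketch of those settings, assuming "bitmap" is the Bitmap you render into:

using System.Drawing;
using System.Drawing.Drawing2D;

using (Graphics g = Graphics.FromImage(bitmap))
{
    g.SmoothingMode = SmoothingMode.None;                     // no anti-aliasing on lines
    g.InterpolationMode = InterpolationMode.NearestNeighbor;  // crisp blocks when scaling images
    g.DrawLine(Pens.Black, 5, 0, 5, 100);                     // a one-pixel vertical stripe
}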
Perhaps you need to align your lines on half-pixel coordinates? A one-pixel line drawn at, say, x = 5 is centered on that coordinate, which means it spans from x = 4.5 to x = 5.5. If you want it to span from x = 4 to x = 5, you'd need to set its coordinate to x = 4.5.
GDI+ has a PixelOffsetMode property (http://msdn.microsoft.com/en-us/library/system.drawing.graphics.pixeloffsetmode.aspx) that allows you to control this behavior.
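For example, PixelOffsetMode.Half shifts sampling by half a pixel, so lines drawn at integer coordinates land on pixel centers instead of straddling pixel boundaries:

g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;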
Sounds like you need to change your application to tell the system it is DPI aware so scaling doesn't occur. Here's an article on doing that: http://msdn.microsoft.com/en-us/library/ms701681%28VS.85%29.aspx
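The article covers the manifest route, which is the recommended one. As a quick sketch, the same flag can be set in code by calling SetProcessDPIAware via P/Invoke before any window is created:

using System.Runtime.InteropServices;

static class DpiHelper
{
    // Marks the process as DPI-aware so Windows doesn't bitmap-stretch it.
    [DllImport("user32.dll")]
    public static extern bool SetProcessDPIAware();
}

// Call DpiHelper.SetProcessDPIAware() at the very start of Main(),
// before any windows are created.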