Wrong scaling on Korean PCs - C#

The Situation
We sell a Windows Forms application to customers all over the world.
We have installed it in several countries in Europe and America without problems.
Last week we installed our software in South Korea and noticed some strange behaviour...
The problem occurs only on the customer's office PCs, but on all of them.
Some run Windows 7 Professional K, some Windows XP.
The customer bought a new PC with Windows 7 Ultimate preinstalled.
On this PC, there is no problem.
The Function
All elements in our application derive from a "parent user control" that offers special functions.
One of these functions is autosizing and positioning.
When the parent changes size, this function is called on all children.
When our application starts, we store the ClientSize:
InitializeComponent();
this.m_actSize = this.ClientSize;
Whenever the size of the application changes, we calculate the scaling factor and raise an event with it:
void myFormSizeChanged(object sender, EventArgs e)
{
    this.m_xFactor = (float)this.ClientSize.Width / (float)this.m_actSize.Width;
    this.m_yFactor = (float)this.ClientSize.Height / (float)this.m_actSize.Height;
    if (this.m_MyOnResize != null)
        this.m_MyOnResize(this.m_xFactor, this.m_yFactor);
}
Now, each child that subscribed, performs automatic resizing and positioning:
void MyParentUserControl_MyOnResize(float v_xFactor, float v_yFactor)
{
    this.Location = new Point((int)(this.m_actLocation.X * v_xFactor), (int)(this.m_actLocation.Y * v_yFactor));
    this.Size = new Size((int)(this.m_actSize.Width * v_xFactor), (int)(this.m_actSize.Height * v_yFactor));
}
The Problem
When our application starts on the customer's PCs in South Korea, the width is about 20% too small.
That means there is an area of plain grey background on the right side.
The height is about 10% too large.
That means the items located at the bottom of our application end up off the screen.
The Fix
First, we thought the problem came from the Windows DPI setting.
When I set my laptop to 125%, it looked similar.
But the customer's PCs are all set to 100%...
Then we thought about the screen resolution.
They all have different ones, some the same as my laptop...
They all have different graphics adapters...
They all have .NET 4.5.1...
The only thing that solved the problem was a strange one:
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.BackColor = System.Drawing.SystemColors.ScrollBar;
this.ClientSize = new System.Drawing.Size(1016, 734);
In the Designer file, we manually changed the ClientSize from (1016, 734) to about (900, 800).
This made it look right on most customer PCs, but not on all of them.
The Question
What can be the real solution for this problem?
Where could it come from?

Do you have the same issues on the same computers if you use AutoScaleMode.Dpi or AutoScaleMode.None instead of AutoScaleMode.Font on each containing control?
If that solves your problem, here is why I think your issue may be related to using AutoScaleMode.Font.
At a high level, according to MSDN, the effect of AutoScaleMode.Font is that the control will "scale relative to the dimensions of the font the classes are using, which is typically the system font." (Emphasis mine.)
I dug into the System.Windows.Forms.ContainerControl source code a bit. The method PerformAutoScale is automatically called during a control's OnLayout event. If AutoScaleMode is set to Font, then GetFontAutoScaleDimensions is called indirectly by OnLayout. The comments in GetFontAutoScaleDimensions explain how AutoScaleMode.Font is implemented:
// We clone the Windows scaling function here as closely as
// possible. They use textmetric for height, and textmetric
// for width of fixed width fonts. For variable width fonts
// they use GetTextExtentPoint32 and pass in a long a-Z string.
// We must do the same here if our dialogs are to scale in a
// similar fashion.
So, the method takes a "long" string, sends it out to GDI and asks, "what are the dimensions of this string?" Notably, this method takes into consideration the control's font "which is typically the system font."
Did you know that the Korean alphabet (Hangul) is not represented in Arial? (I didn't until I researched this answer!) It makes perfect sense that your system font (something like Tahoma or Arial) is different from that of your clients in South Korea. It also makes sense that two different fonts will display the same string of characters with a different height and width. So, I bet the issues in question occur on workstations whose system font differs from yours.
So, if you do some testing and find that AutoScaleMode.Font really is the culprit, then you have a few options:
Don't use AutoScaleMode.Font.
Explicitly set the font of all containing controls. This ensures
that the font of the ContainerControl does not default to the
computer's system font.
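As a rough sketch of that second option, you could pin the container's font in the constructor so the Font-based auto-scaling measures a known typeface rather than whatever system font the workstation happens to use (the font name and size here are just illustrative, not something from the question):

```csharp
public partial class MyForm : Form
{
    public MyForm()
    {
        // Pin the font before layout happens, so AutoScaleMode.Font
        // measures this typeface, not the machine's (possibly Korean)
        // system font.
        this.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25f);
        InitializeComponent();
    }
}
```

You would repeat this for every ContainerControl in the application, since each container scales itself independently.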
No matter what you do, ensure all of your containers use the same AutoScaleMode setting. Mixing and matching will lead to headaches.
Good Luck!

Related

Pre-generate Graphics object to improve printing performance

I have an application that prints invoices. I'd like to be able to pre-generate the invoices in a background task/process so I can reduce the downtime required to send the document to the printer when prompted by the user or other automation events. I'm looking for something like this...
Graphics _g;

// background task would call this method
void GenerateInvoice(Invoice i)
{
    _g = ???? // ????
    _g.DrawImage...
    _g.DrawString....
}

// user action, or automation event, would call this method...
void PrintInvoice()
{
    if (_g == null)
        throw new DocumentNotPreparedException();
    PrintDocument pd = new PrintDocument();
    pd.PrinterSettings.PrinterName = "My Fast Printer";
    pd.PrintPage += PrintHandler;
    pd.Print();
}

void PrintHandler(object o, PrintPageEventArgs e)
{
    // ????
    e.Graphics = _g;
}
Any suggestions on what needs to be done in and around the '???' sections?
I'd like to be able to pre-generate the invoices in a background task/process so I can reduce the downtime required to send the document to the printer
First step is to make sure you know what the source of the "downtime" is. It would be unusual for the bottleneck to exist in your own program's rendering code. Most often, a major source of printer slowness is either in the print driver itself (e.g. a driver with a lot of code and data that has to be paged in to handle the job), or dealing with a printer that requires client-side rasterization of the page images (which requires lots of memory to support the high-resolution bitmaps needed, which in turn can be slow on some machines, and of course greatly increases the time spent sending those rasterized images to the printer, over whatever connection you're using).
If and when you've determined it's your own code that's slow, and after you've also determined that your own code is fundamentally as efficient as you can make it, then you might consider pre-rendering as a way of improving the user experience. You have two main options here: rendering into a bitmap, and rendering into a metafile.
Personally, I would recommend the latter. A metafile will preserve your original rendering commands, providing a resolution-independent and memory-efficient representation of your printing data. This would be particularly valuable if your output consists primarily of line-drawings and text output.
If you render into a bitmap instead, you will want to make sure you allocate a bitmap at least the same resolution as that being supported by the printer for your print job. Otherwise, you will lose significant image quality in the process and your printouts will not look very good. Note though that if you go this route, you run the risk of incurring the same sort of memory-related slowdown that would theoretically be an issue when dealing with the printer driver directly.
Finally, in terms of choosing between the two techniques, one scenario in which the bitmap approach might be preferable to the metafile approach is if your print job output consists primarily of a large number of bitmaps which are already at or near the resolution supported by the printer. In this case, flattening those bitmaps into a single page-sized bitmap could actually reduce the memory footprint. Drawing them into a metafile would require each individual bitmap to be stored in the metafile, and if the total size of those bitmaps is larger than the single page-sized bitmap, that would of course use even more memory. Flattening them into a single bitmap would allow you to avoid having a large number of individual, large bitmaps in memory all at once.
But really, the above is mostly theoretical. You're suggesting adding a great level of complexity to your printing code, in order to address a problem that is most likely not one you can solve in the first place, because the problem most likely does not lie in your own code at all. You should make sure you've examined very carefully the reason for slow printing, before heading down this path.
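If you do go the pre-rendering route, a minimal sketch of the metafile approach might look like the following (the invoice contents and the class shape are placeholders, not the asker's actual code; note that PrintPageEventArgs.Graphics is read-only, so you draw the recorded metafile onto it rather than assigning it):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Drawing.Printing;

class InvoicePrinter
{
    Metafile _invoice;

    // Background task: record the drawing commands into an in-memory metafile.
    public void GenerateInvoice(/* Invoice i */)
    {
        using (Graphics refGfx = Graphics.FromHwnd(IntPtr.Zero))
        {
            IntPtr hdc = refGfx.GetHdc();
            _invoice = new Metafile(hdc, EmfType.EmfPlusDual);
            refGfx.ReleaseHdc(hdc);
        }
        using (Graphics g = Graphics.FromImage(_invoice))
        {
            // The actual invoice rendering goes here.
            g.DrawString("Invoice", SystemFonts.DefaultFont, Brushes.Black, 10f, 10f);
        }
    }

    // Print handler: replay the recorded commands onto the printer's Graphics.
    public void PrintHandler(object o, PrintPageEventArgs e)
    {
        if (_invoice == null)
            throw new InvalidOperationException("Invoice not prepared.");
        e.Graphics.DrawImage(_invoice, e.MarginBounds);
    }
}
```

Because the metafile stores drawing commands rather than pixels, the replay at print time stays resolution-independent and cheap in memory.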

Fast desktop image capture

I am trying to develop a basic screen sharing and collaboration app in C#. I am currently working on capturing the screen, finding areas of the screen that have changed and subsequently need to be transmitted to the end client.
I am having a problem in that the overall frame rate of the screen capture is too low. I have a fairly good algorithm for finding areas of the screen that have changed. Given a byte array of pixels on the screen it calculates areas that have changed in 2-4ms, however the overall frame rate I am getting is 15-18 fps (i.e. taking somewhere around 60ms per frame). The bottleneck is capturing the data on the screen as a byte array which is taking around 35-50ms. I have tried a couple of different techniques and can't push the fps past 20.
At first I tried something like this:
var _bmp = new Bitmap(_screenSectionToMonitor.Width, _screenSectionToMonitor.Height);
var _gfx = Graphics.FromImage(_bmp);
_gfx.CopyFromScreen(_screenSectionToMonitor.X, _screenSectionToMonitor.Y, 0, 0, new Size(_screenSectionToMonitor.Width, _screenSectionToMonitor.Height), CopyPixelOperation.SourceCopy);
var data = _bmp.LockBits(new Rectangle(0, 0, _screenSectionToMonitor.Width, _screenSectionToMonitor.Height), ImageLockMode.ReadOnly, _bmp.PixelFormat);
var ptr = data.Scan0;
Marshal.Copy(ptr, _screenshot, 0, _screenSectionToMonitor.Height * _screenSectionToMonitor.Width * _bytesPerPixel);
_bmp.UnlockBits(data);
This is too slow, taking around 45ms just to run the code above for a single 1080p screen. That makes the overall frame rate too slow to be smooth, so I then tried using DirectX as per the example here:
http://www.codeproject.com/Articles/274461/Very-fast-screen-capture-using-DirectX-in-Csharp
However, this didn't really net any results. It marginally increased the speed of the screen capture, but it was still much too slow (taking around 25-40ms), and the small increase wasn't worth the overhead of the extra DLLs, code, etc.
After googling around a bit I couldn't really find any better solutions, so my question is: what is the best way to capture the pixels currently displayed on the screen? An ideal solution would:
Capture the screen as an array of bytes in RGBA
Work on older Windows platforms (e.g. Windows XP and above)
Work with multiple displays
Use existing system libraries rather than 3rd-party DLLs
All these points are negotiable for a solution that returns a decent overall framerate, in the region of 5-10ms for the actual capture so the framerate can be 40-60fps.
Alternatively, if there is no solution that matches the above, am I taking the wrong path to calculating screen changes? Is there a better way to calculate areas of the screen that have changed?
Perhaps you can access the screen buffers at a lower level and hook directly into the layers and regions Windows uses as part of its screen updates. It sounds like you are after the raw display changes, and Windows already has to keep track of this data. Just offering a direction to pursue while you find someone more knowledgeable.
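As one concrete lower-level direction (not necessarily faster - Graphics.CopyFromScreen itself wraps BitBlt, so measure before committing to it), here is a hedged sketch of capturing directly via GDI BitBlt through P/Invoke, which at least avoids some managed-wrapper overhead per frame:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class ScreenGrab
{
    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdcDest, int x, int y,
        int cx, int cy, IntPtr hdcSrc, int x1, int y1, int rop);
    const int SRCCOPY = 0x00CC0020;

    // Capture one screen rectangle into a 32bpp bitmap via raw GDI.
    public static Bitmap Capture(Rectangle area)
    {
        var bmp = new Bitmap(area.Width, area.Height, PixelFormat.Format32bppArgb);
        IntPtr screenDc = GetDC(IntPtr.Zero); // DC for the whole desktop
        using (Graphics g = Graphics.FromImage(bmp))
        {
            IntPtr destDc = g.GetHdc();
            BitBlt(destDc, 0, 0, area.Width, area.Height, screenDc, area.X, area.Y, SRCCOPY);
            g.ReleaseHdc(destDc);
        }
        ReleaseDC(IntPtr.Zero, screenDc);
        return bmp;
    }
}
```

Reusing one bitmap and DC pair across frames instead of allocating per frame is where most of the per-call cost can be trimmed.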

Drawing a graph in a Windows Forms application using the dot.exe process - possible out of memory error

I call a drawing function that I wrote on the data that I need for drawing a graph.
The drawing function works like this: First it creates a text file. It's basically a .dot file, meaning that Graphviz / dot.exe knows how to handle it. The generated file looks something like this:
graph{
    resolution=1000;
    1[
        label = ""
        pos = "552,552!"
        width = 0.002
        height = 0.002
        fixedsize = true
        fontsize = 8
        color = red
        penwidth = 0.1, color = black, shape = box, width = 0.07, height = 0.07, label = ""
    ]
    74[
        label = ""
        pos = "450,552!"
        width = 0.002
        height = 0.002
        fixedsize = true
        fontsize = 8
        color = red
        shape = point
    ]
    (...)
    1 -- 74[penwidth = 0.099, color="red"]
    74 -- 40[penwidth = 0.099, color="red"]
    40 -- 32[penwidth = 0.099, color="red"]
    32 -- 18[penwidth = 0.099, color="red"]
    (...)
}
After it generates the file, the function calls the dot.exe process with the following flags:
ProcessStartInfo startInfo = new ProcessStartInfo("dot.exe");
startInfo.Arguments = "-Kneato -Goverlap=prism -Tpng " + fileName + ".txt -o " + fileName + ".png";
I've tried using different flags, image formats etc., but none of that solves my problem.
My application basically consists of an interface with a few buttons and two PictureBoxes. Clicking on one of the buttons causes the "important part of the program" to execute.
The "important part" takes some time to execute, so I used a BackgroundWorker for that. What happens over there (in the backgroundWorker1_DoWork function) is:
Some things get calculated and my drawing function gets called twice on the resulting data. It creates two images and "puts them" into the PictureBoxes.
And it works just fine for most data, but for some it doesn't. On some of the data, no pictures get shown in the PictureBoxes. When I check the folder where the text files and the images should have been created, I see that only the text file and the resulting picture meant for the first PictureBox were created... but not even that one is shown. My conclusion is that something makes the whole BackgroundWorker process stop, probably some kind of error in the dot.exe process.
Now, every time the process gets called, a console window appears for a split second. Some useful data might be displayed there, but I don't know how to read it.
There's a previous, slightly different version of my application, which also fails on the same data the current version fails on.
In the old version, however, I'm able to read the console output (probably because the whole program crashes), and it says something along the lines of:
Graph is too large for cairo renderer bitmaps.
Scaling by 0.4 to fit dot: failure to create cairo surface: out of memory.
I get this error mostly for larger graphs, but not only for larger graphs. Some larger graphs work just fine, and some smaller ones don't. And none of them are particularly large anyway: the largest have approximately 80 nodes. I thought it might have something to do with resolution or something like that, but whatever parameter I change, the thing still doesn't work.
Does anyone have an idea on what I should try? Do you need any extra information about my problem?
Edit: Also, changing the size using the -G attribute doesn't help. In fact, whatever I do I always get the exact same error, meaning that the scaling factor mentioned in the error doesn't change.
Turns out my problem was not Graphviz-specific or even graph-specific - I was testing my (metaheuristic) algorithm on test examples I found online. When reading the optimal-solution files line by line and splitting them into words, the empty string "" would sometimes get recognized as a word.
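The usual fix for that is to drop empty entries when splitting; a minimal illustration (the actual parsing code isn't shown in the question):

```csharp
using System;

// Repeated separators produce "" entries with a naive split.
string line = "1  74   40";
string[] naive = line.Split(' ');
string[] clean = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
Console.WriteLine(naive.Length); // 6 - includes three empty strings
Console.WriteLine(clean.Length); // 3 - "1", "74", "40"
```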

Why doesn't new FontFamily("Invalid font") throw an exception?

Why does the following code not throw an exception?
FontFamily font = new FontFamily("bla bla bla");
I need to know whether a specific font (as a combination of FontFamily, FontStyle, FontWeight, ...) exists on my current OS. How can I do that?
This is by design. Programs frequently ask for fonts that are not present on the machine, especially in a country far flung from the programmer's domicile. The font mapper produces an alternative. Font substitution is in general very common. You are looking at Arial right now if you are on a Windows machine. But I can paste 你好世界 into this post and you'll see it render accurately, even though Arial doesn't have glyphs for Chinese characters.
So hint number one is to not actually worry about what fonts are available. The Windows api has EnumFontFamiliesEx() to enumerate available font families. But that's not exposed in WPF, some friction with OpenType there, a font standard that's rather poorly integrated with Windows. Another shadow cast when Adobe gets involved with anything Microsoft does, it seems.
Some confusion in the comments about Winforms' FontFamily class. Which is actually usable in this case, its GetFamilies() method returns an array of available families. But only TrueType, not OpenType fonts.
You can use the System.Drawing.Text.InstalledFontCollection class:
http://msdn.microsoft.com/en-us/library/system.drawing.text.installedfontcollection.aspx
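A small sketch of how that class might be used (the helper name is mine, not from the API):

```csharp
using System;
using System.Drawing.Text;
using System.Linq;

static bool IsFontInstalled(string familyName)
{
    // InstalledFontCollection enumerates the font families GDI+ can see.
    using (var fonts = new InstalledFontCollection())
    {
        return fonts.Families.Any(f =>
            f.Name.Equals(familyName, StringComparison.OrdinalIgnoreCase));
    }
}

// IsFontInstalled("Arial")        -> true on most Windows machines
// IsFontInstalled("bla bla bla")  -> false
```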
WPF has a framework-specific method, Fonts.SystemFontFamilies:
http://msdn.microsoft.com/en-us/library/system.windows.media.fonts.systemfontfamilies.aspx
To answer the question of why it isn't throwing an exception, according to FontFamily Constructor on MSDN the exception wasn't added until framework version 3.5.
I suspect that you are targeting version 3.0 or below.
Cheers!
You can browse the fonts available on the system using the Fonts.SystemFontFamilies collection; use some LINQ to match on whatever conditions you need:
// true
bool exists = (from f in Fonts.SystemFontFamilies where f.Source.Equals("Arial") select f).Any();
// false
exists = (from f in Fonts.SystemFontFamilies where f.Source.Equals("blahblah") select f).Any();

Graphics.DrawImage creates different image data on x86 and x64

Hey there!
Here is my setting:
I've got a C# application that extracts features from a series of images. Due to the size of the dataset (several thousand images) it is heavily parallelized; that's why we have a high-end machine with an SSD running Windows 7 x64 (.NET 4 runtime) to do the heavy lifting. I'm developing it on a Windows XP SP3 x86 machine under Visual Studio 2008 (.NET 3.5) with Windows Forms - no chance to move to WPF, by the way.
Edit3:
It's weird, but I think I finally found out what's going on. It seems to be the codec for the image format that yields different results on the two machines! I don't know exactly what is going on there, but the decoder on the XP machine produces saner results than the Win7 one. Sadly the better version is still on the x86 XP system :(. I guess the only solution is changing the input image format to something lossless like PNG or BMP (stupid me for not thinking about the file format in the first place :)).
Edit2:
Thank you for your efforts. I think I will stick to implementing a converter on my own, it's not exactly what I wanted but I have to solve it somehow :). If anybody is reading this who has some ideas for me please let me know.
Edit:
In the comments I was recommended a third-party lib for this. I think I didn't make myself clear enough: I don't really want to use the DrawImage approach anyway - it's just a flawed quick hack to get an actually working new Bitmap(tmp, ... myPixelFormat) that would hopefully use some interpolation. What I want to achieve is solely to convert the incoming image to a common PixelFormat with some standard interpolation.
My problem is as follows. Some of the source images are in Indexed8bpp jpg format that don't get along very well with the WinForms imaging stuff. Therefore in my image loading logic there is a check for indexed images that will convert the image to my applications default format (e.g. Format16bpp) like that:
Image GetImageByPath(string path)
{
    Image result = null;
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        Image tmp = Image.FromStream(fs); // Here goes the same image ...
        if (tmp.PixelFormat == PixelFormat.Format1bppIndexed ||
            tmp.PixelFormat == PixelFormat.Format4bppIndexed ||
            tmp.PixelFormat == PixelFormat.Format8bppIndexed ||
            tmp.PixelFormat == PixelFormat.Indexed)
        {
            // Creating a Bitmap container in the application's default format
            result = new Bitmap(tmp.Width, tmp.Height, DefConf.DefaultPixelFormat);
            Graphics g = Graphics.FromImage(result);
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            // We need not scale anything in here
            Rectangle drawRect = new Rectangle(0, 0, tmp.Width, tmp.Height);
            // (*) Here is where the strange thing happens - I know I could use
            // DrawImageUnscaled - that isn't working either
            g.DrawImage(tmp, drawRect, drawRect, GraphicsUnit.Pixel);
            g.Dispose();
        }
        else
        {
            result = new Bitmap(tmp); // Just copying the input stream
        }
        tmp.Dispose();
    }
    // (**) At this stage the x86 XP memory image differs from the
    // x64 Win7 image despite having the same settings
    // on the very same image o.O
    result.GetPixel(0, 0).B; // x86: 102, x64: 102
    result.GetPixel(1, 0).B; // x86: 104, x64: 102
    result.GetPixel(2, 0).B; // x86: 83, x64: 85
    result.GetPixel(3, 0).B; // x86: 117, x64: 121
    ...
    return result;
}
I tracked the problem down to (*). I think the InterpolationMode has something to do with it, but no matter which one I choose, the results at (**) differ on the two systems anyway. I've been investigating the test image data with some stupid copy&paste lines, to be sure it's not an issue of accessing the data in the wrong way.
The images all together look like this Electron Backscatter Diffraction Pattern. The actual color values differ subtly but they carry a lot of information - the interpolation even enhances it. It looks like the composition algorithm on the x86 machine uses the InterpolationMode property whereas the x64 thingy just spreads the palette values out without taking any interpolation into account.
I never noticed any difference between the output of the two machines until the day I implemented a histogram view on the data in my application. On the x86 machine it is balanced, as one would expect from looking at the images. The x64 machine, on the other hand, gives some kind of sparse bar diagram, an indication of indexed image data. It even affects the overall output of the whole application: the output differs on both machines for the same data, and that's not a good thing.
To me it looks like a bug in the x64 implementation, but that's just me :-). I just want the images on the x64 machine to have the same values as the x86 ones.
If anybody has an idea I'd be very pleased. I've been searching for similar behavior on the net for ages but resistance seems futile :)
Oh look out ... a whale!
If you want to make sure that this is always done the same way, you'll have to write your own code to handle it. Fortunately, it's not too difficult.
Your 8bpp image has a palette that contains the actual color values. You need to read that palette and convert the color values (which, if I remember correctly, are 24 bits) to 16-bit color values. You're going to lose information in the conversion, but you're already losing information in your current conversion. At least this way, you'll lose it in a predictable way.
Put the converted color values (there won't be more than 256 of them) into an array that you can use for lookup. Then ...
Create your destination bitmap and call LockBits to get a pointer to the actual bitmap data. Call LockBits to get a pointer to the bitmap data of the source bitmap. Then, for each pixel:
read the source bitmap pixel (8 bits)
get the color value (16 bits) from your converted color array
store the color value in the destination bitmap
You could do this with GetPixel and SetPixel, but it would be very very slow.
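A hedged sketch of that lookup-table conversion (the target format RGB565 and the method shape are my assumptions, and it needs to be compiled with /unsafe; stride handling is the part that usually bites):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

static Bitmap ConvertIndexed8To16(Bitmap src)
{
    // 1. Convert the 24-bit palette entries to 16-bit RGB565 once, up front.
    Color[] palette = src.Palette.Entries;
    ushort[] lut = new ushort[palette.Length];
    for (int i = 0; i < palette.Length; i++)
    {
        Color c = palette[i];
        lut[i] = (ushort)(((c.R >> 3) << 11) | ((c.G >> 2) << 5) | (c.B >> 3));
    }

    // 2. Lock both bitmaps and copy pixel by pixel through the lookup table.
    var dst = new Bitmap(src.Width, src.Height, PixelFormat.Format16bppRgb565);
    var rect = new Rectangle(0, 0, src.Width, src.Height);
    BitmapData sd = src.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed);
    BitmapData dd = dst.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format16bppRgb565);
    unsafe
    {
        for (int y = 0; y < src.Height; y++)
        {
            byte* srow = (byte*)sd.Scan0 + y * sd.Stride;
            ushort* drow = (ushort*)((byte*)dd.Scan0 + y * dd.Stride);
            for (int x = 0; x < src.Width; x++)
                drow[x] = lut[srow[x]]; // one palette lookup per pixel
        }
    }
    src.UnlockBits(sd);
    dst.UnlockBits(dd);
    return dst;
}
```

Because the conversion never consults GDI+'s drawing pipeline, the result is the same on x86 and x64 by construction.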
I vaguely seem to recall that .NET graphics classes rely on GDI+. If that's still the case today, then there's no point in trying your app on different 64 bit systems with different video drivers. Your best bet would be to either do the interpolation using raw GDI operations (P/Invoke) or write your own pixel interpolation routine in software. Neither option is particularly attractive.
You really should use OpenCV for image handling like that, it's available in C# here: OpenCVSharp.
I use a standard method for creating the Graphics object, and with these settings it outperforms x86. Measure performance on release runs, not debug. Also check "Optimize code" on the project properties Build tab. Visual Studio 2017, Framework 4.7.1:
public static Graphics CreateGraphics(Image i)
{
    Graphics g = Graphics.FromImage(i);
    g.CompositingMode = CompositingMode.SourceOver;
    g.CompositingQuality = CompositingQuality.HighSpeed;
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.SmoothingMode = SmoothingMode.HighSpeed;
    return g;
}
