How can I get the first visible (top) and last visible (bottom) line numbers for the Scintilla component in C#? For example, if I scroll the text and can see lines 5-41 (no folding; these are simply the lines the component is showing at the moment, and you have to scroll to reach the rest), how do I get those numbers programmatically?
If you ever want to find out how to do something with Scintilla, your first stop should always be the core Scintilla Documentation. It is comprehensive, and usually kept fully up to date.
The correct way to do what you want is to use the SCI_GETFIRSTVISIBLELINE message to get the first line, and then use the SCI_LINESONSCREEN message to calculate the last line.
There are probably Scintilla.NET wrapper methods for those messages. But the Scintilla.NET documentation seems very poor, and doesn't provide a complete description of its API - although I suppose you could always use the SendMessageDirect method (which is documented) to send the messages directly if you can't guess what the wrapper method is called.
For ScintillaNET 2 it would be:
scintilla.Lines.FirstVisibleIndex
scintilla.Lines.VisibleCount
In ScintillaNET 3 the names were refactored to match core Scintilla more closely:
scintilla.FirstVisibleLine
scintilla.LinesOnScreen
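Assuming no word wrap and no folding (both properties work in display lines), you can combine them roughly like this (a minimal sketch for ScintillaNET 3):
// Compute the first and last visible document lines (zero-based).
int firstLine = scintilla.FirstVisibleLine;
int lastLine = firstLine + scintilla.LinesOnScreen - 1;
// Clamp in case the document is shorter than one screenful.
lastLine = Math.Min(lastLine, scintilla.Lines.Count - 1);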
I'm currently working on a form with a bunch of textboxes that have quite specific requirements. For example, one textbox contains a cadastral number and should look like ##:##:#######:~ (the last group of digits varies in length), and it would also be nice to see the pattern before you even type anything (if I recall correctly, that's called a mask). Also, per the requirements, the first two digits should always be 24, so the end result should look something like this: 24:##:#######:~. Another example is a numeric textbox with units and spaces between groups of digits (e.g. 1 000 000 m2). In short, this textbox and the others have both static elements (which the user should not be able to edit) and dynamic ones (which makes masked textboxes and similar approaches quite hard to deal with).
So, I've tried different things:
Using a MaskedTextBox from the toolkit package, which turned out badly because it handled the last part with the variable range of digits poorly, and also because pressing a key in the middle of the static mask just pushed the caret along without actually adding anything to the text;
Using converters proved quite challenging at first but gave remarkable results: thanks to a custom converter it handles the variable-length part well, but it was difficult to manage things when the user deleted text, because the static parts are integrated into the converter itself;
Using StringFormat on the textbox's bound Text property was almost useless; although it handled the static part quite well, in the end I couldn't make it work.
Intuition tells me a combination of a custom converter (handling the dynamic part) and StringFormat (handling the uneditable static part) should do the job. Maybe the number of requirements is just too much for a simple textbox. One thing is also bugging me: there could be a suitable existing general solution that I am not aware of (at first I didn't even know converters were a thing).
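For context, the custom converter I experimented with looked roughly like this (a simplified sketch; the real handling of the variable last group is messier):
using System;
using System.Globalization;
using System.Linq;
using System.Windows.Data;

// Simplified sketch: the bound value holds only the user-entered digits;
// the converter adds the fixed "24" prefix and the ":" separators for display.
public class CadastralNumberConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var digits = (value as string) ?? string.Empty;
        var district = digits.Length >= 2 ? digits.Substring(0, 2) : digits;
        var rest = digits.Length > 2 ? digits.Substring(2) : string.Empty;
        var block = rest.Length > 7 ? rest.Substring(0, 7) : rest;
        var tail = rest.Length > 7 ? rest.Substring(7) : string.Empty;
        return $"24:{district}:{block}:{tail}";
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Keep only the digits and drop the fixed "24" prefix again.
        var digits = new string(((value as string) ?? string.Empty).Where(char.IsDigit).ToArray());
        return digits.StartsWith("24") ? digits.Substring(2) : digits;
    }
}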
Now the question: how would you generally approach this problem? Are there any existing solutions that would work with a bit of tweaking?
Using NetTopologySuite in C#, I'm getting a 'found non-noded intersection' exception when determining the difference between two specific geometries.
These geometries are the result of using several routines like CascadedPolygonUnion.Union, Intersection, and Difference.
At some point, we have a MultiPolygon from which we want to cut out another geometry (Polygon):
We use this code to try and cut off the 'red' polygon:
Geometry difference = multiPolygon.Difference(geometryToRemove);
But then we get a NetTopologySuite.Geometries.TopologyException with the message:
found non-noded intersection between LINESTRING (240173.28029999882 493556.2806000002, 240173.28177031482 493556.28131837514) and LINESTRING (240173.28176154062 493556.2813140882, 240173.28176153247 493556.2813140842) [ (240173.28176153894, 493556.2813140874) ]
I asked this question in the NetTopologySuite Discussion forum as well, because we are close to a release date and I was hoping someone could give some extra insight (or ideas for a workaround), as this looks like a bug in the library since the polygons themselves seem valid.
The data regarding the polygons can be found here - we use the 'RDNew' data to perform the Difference action, but I also added the WGS84 versions of these polygons to be able to view them in tools like geojson.io.
Thanks to one of the maintainers of the library I got the answer.
Basically, I needed to upgrade to version 2.2 (which I already did at first to see if this would resolve the problem).
Second, I needed to configure the application to use the 'NextGen' overlay generator introduced in version 2.2, which is not turned on by default.
To use the 'Next Gen' overlay generator, add the following code at some startup point in your application:
var curInstance = NetTopologySuite.NtsGeometryServices.Instance;
NetTopologySuite.NtsGeometryServices.Instance = new NetTopologySuite.NtsGeometryServices(
    curInstance.DefaultCoordinateSequenceFactory,
    curInstance.DefaultPrecisionModel,
    curInstance.DefaultSRID,
    GeometryOverlay.NG, // RH: use 'Next Gen' overlay generator
    curInstance.CoordinateEqualityComparer);
I use the current instance of NtsGeometryServices to get and reuse the current default instances of the other configurable parts.
But you're free to create new instances of the required parts (as mentioned in the original post at https://github.com/NetTopologySuite/NetTopologySuite/discussions/530#discussioncomment-888410 ).
There are also possibilities to use both overlay generators next to each other (also mentioned in the original post), but I never tried this as we will be using the 'NextGen' version for the entire application.
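For completeness: as far as I understand the 2.2 API (I haven't verified this myself, so treat the exact class and signature as an assumption), you can also run a single operation with the 'NextGen' overlay without changing the global default, via OverlayNGRobust:
using NetTopologySuite.Geometries;
using NetTopologySuite.Operation.Overlay;
using NetTopologySuite.Operation.OverlayNG;

// Run just this difference with the 'Next Gen' overlay, leaving the global default untouched.
Geometry difference = OverlayNGRobust.Overlay(multiPolygon, geometryToRemove, SpatialFunction.Difference);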
I'm retrieving the names of all the classes in the current runtime in .NET, for the purpose of identifying object and function declarations in source code that's given as input to my program, but templates seem a bit off; for example, I've got this among the output:
hashset`1+elementcount[t]
hashset`1+slot[t]
hashset`1+enumerator[t]
which I obtain more or less from this simple code (there's some similar code that gets the referenced assemblies, but it essentially does the following for each of those instead of the executing assembly):
foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
{
    types.Add(t.ToString().Split('.').Last().ToLower());
}
For now I can of course just Split() the strings to get the first part, before the ` mark, but I was wondering if anyone knows exactly what this might be. The three lines above are consecutive, and a couple more entries for HashSet in my results also have this 1+something thing, so I'm positive it's the actual HashSet class. (note: I'm turning everything to lowercase currently but disabling it doesn't seem to change anything.)
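For reference, the workaround currently looks roughly like this (a sketch that strips the arity suffix and skips the nested helper types such as ElementCount and Slot):
foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
{
    // Skip nested types (the "+Something" part, e.g. HashSet`1+Enumerator).
    if (t.IsNested)
        continue;

    // Type.Name is the simple name; for generic types it still ends in `1, `2, ...
    string name = t.Name;
    int backtick = name.IndexOf('`');
    if (backtick >= 0)
        name = name.Substring(0, backtick);

    types.Add(name.ToLower());
}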
So... does anyone know what this notation is? I'm not sure how to google it; copy-pasting some lines into Google and enclosing them in quotes returns either no results or very random threads that don't actually contain the line. Thanks in advance.
Let's assume that we're scanning test-like documents with checkboxes / empty circles (for signing / striking / ticking). What would be the proper way to check whether an already-cropped checkbox/circle is checked/signed/struck/ticked?
If we force test users to fully mark the area, just knowing the position of the checkbox/circle and counting the number of non-white pixels would be enough (would it?), but how should we test that the checkbox/circle is ticked or checked (X)?
This is going to be part of a project in C#, so code or even ready-made libraries for .NET / C/C++ would be appreciated.
Sorry for the shortness of this answer, but you could have an OCR system run on the area within the checkbox.
If it returns nothing then you know it's not checked.
If it returns something, then compare it against a large whitelist of possibilities and then flag uncertainties.
You could use the error handling that #dan proposed as well.
What makes this more robust than just taking an average is that you can determine with high certainty that it's not checked: because we're looking for a mark that is in some minimal way recognizable, we know that if there isn't anything there then it's definitely not checked. All you have to do then is find a good whitelist of characters and marks that could be used as checks (and think outside of the box: the OCR system may return an 'a' for a squiggle, but that is still a positive response). And to clarify, the problem with just taking an average is that any increase in darkness in the checkbox yields a positive result, which isn't always correct: if someone puts a mark and then erases it, you're still going to have an increase in darkness within the box.
Lastly, I'll add that there are a lot of OCR systems out there now that are pretty advanced. I doubt you'd have much trouble finding one where you could provide additional training data sets that match your cases better than random characters.
The algorithm would go something like this:
Find each checkbox (I understand you already have that)
Calculate the average color of all pixels in it
If it is above a certain threshold, it is marked, if below, it is unmarked
However, you should add some checks:
Are multiple above the threshold? -> Let a human check it, the student could have first ticked something and then changed it to another field.
Are none above the threshold? -> Let a human verify that really none have been checked.
I guess the important part about this answer is:
If the algorithm is unsure, flag it for manual processing.
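To make the thresholding concrete, here is a minimal sketch using System.Drawing (the threshold value is an arbitrary placeholder that you'd calibrate against real scans):
using System.Drawing;

// Returns true if the cropped checkbox image is dark enough to count as marked.
static bool IsMarked(Bitmap checkbox, double darknessThreshold = 0.10)
{
    double totalDarkness = 0;
    for (int y = 0; y < checkbox.Height; y++)
    {
        for (int x = 0; x < checkbox.Width; x++)
        {
            // GetBrightness() returns 0 (black) .. 1 (white); invert so dark ink counts up.
            totalDarkness += 1.0 - checkbox.GetPixel(x, y).GetBrightness();
        }
    }
    double averageDarkness = totalDarkness / (checkbox.Width * checkbox.Height);
    return averageDarkness > darknessThreshold;
}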
Most of the high performance products that offer checkbox recognition use some kind of bell distribution curve to work out the likelihood of a box actually being checked: too much 'data' and there's a good chance the user changed their mind and has scribbled-out this box; too few and it could be the 'tail' left by a user ticking a box below and not lifting the pen before crossing the next box region.
I'd suggest you apply additional logic to deal with more than one box being allowed (e.g. do you own a car / do you also own a bike) as well as the situation where only one box can be correct (e.g. are you male or female). This should help your app filter out the more obvious errors.
I am trying to extract a human from a video source, so that I can use his image later. I need to extract only the human body and ignore the environment. The good thing is that the background is static. I have tried AForge with the CustomFrameDifferenceDetector filter, which compares the current frame to the static background image and extracts the pixels that differ (difference > threshold). It works well, but there is a problem when skin or part of the clothing has a color similar to the background. In those cases the filter ignores these parts and the result has various holes in the body. Simply decreasing the threshold doesn't solve the problem, since body shadows and other noise increase (even with noise suppression).
Do you know of any known solution to this problem? Or is it still unsolved problem?
It's a hard issue to solve (and one of the reasons Microsoft's Kinect doesn't rely on visible light alone, and why blue/green screening is still so popular). I'd try to remove the holes (you should be able to predict where the body has to be). If you've got the processing power, use different thresholds and merge the results. You could also try to filter newly separated images (e.g. add the current frame to the last frame and normalize the result). This way you could track shapes you're losing for one frame a lot more consistently.
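If it helps, the hole filling could look roughly like this with AForge's morphology filters (a sketch; I'm assuming the motion mask produced by CustomFrameDifferenceDetector is available as a Bitmap called motionMask):
using System.Drawing;
using AForge.Imaging.Filters;

// Morphological closing (dilation followed by erosion) fills small holes
// in the foreground mask without growing its outline much.
var closing = new Closing();
Bitmap filledMask = closing.Apply(motionMask);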
A different approach: use the detected shape/region only for detecting the position of the body. I.e. ignore its specific shape and use a premade shape above/around the estimated body position. This most likely won't work if you'd like some kind of bluescreen-like behaviour, but it might also help with closing holes.
Alturos.Yolo does exactly what you are looking for.
Yolo learns from annotated images how to detect the objects you are looking for. First you need to install the project, along with a set of pre-trained model data, using the NuGet Package Manager. In your case the YOLOv2-tiny model should suffice:
Install-Package Alturos.Yolo
Install-Package Alturos.YoloV2TinyVocData
Once installed, you can use it like this to detect a human in your image:
using (var yoloWrapper = new YoloWrapper("yolov2-tiny-voc.cfg", "yolov2-tiny-voc.weights", "voc.names"))
{
    var items = yoloWrapper.Detect(@"your_image.jpg");
    //if (items[0].Type == "Person") { ... }
}
The items array will contain information about all the objects found. You can check there if it's a human you are looking at, using the Type property.
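For example, to keep only the people (if I remember correctly the VOC class name is lower-case "person", and each item also exposes a Confidence value you can filter on):
var people = items.Where(item => item.Type == "person" && item.Confidence > 0.5);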