When TransformToAncestor is called, what matrices are used to build up the resulting GeneralTransform? When I step into the PDB, all I see is a TransformField with a signature like:
private static readonly UncommonField<Transform> TransformField = new UncommonField<Transform>();
being used in the resulting GeneralTransform.
Nothing beats Reflector. Especially while it's still free ;)
The code is hairy enough but can be followed. Basically it walks up the visual tree and groups the transforms, but the whole thing is much more complicated than that and I never really had an interest in digging that deep into it. Look into Visual.TrySimpleTransformToAncestor for the gory details.
To answer the question: UIElement is never used explicitly, of course; the transform is retrieved through an Effect (the UncommonField you mentioned), so I'm guessing that transforms in general are applied as effects, hence you can get them through this shortcut from anywhere. But that's just infrastructure and implementation details, and I'm most likely wrong :)
One thing that I’ve always found inconvenient about C++ programming is its lack of a good map/dictionary/hash table/associative array container class. C#, Java and Objective-C all have decent such classes, namely Dictionary<>, Hashtable and NSDictionary, that mostly work straight out of the box with all data types. But both STL’s std::map and Boost’s boost::unordered_map are extremely clumsy and overly verbose, even for mundane everyday tasks. I was wondering if there is a C++ equivalent, in some open source library perhaps, that more closely resembles the syntax and functionality of the aforementioned platforms.
Of the three, C#’s Dictionary<> is my favorite as it is strongly-typed, with very short syntax and very versatile. So something like that would be perfect. I’m not sure if this is possible at all. If not, I’d like to know the reasons why. Here are my main pain-points regarding Boost’s and STL’s implementations and what I’d like instead:
First of all, performance is not an issue here. Memory allocations, virtual function calls, O(n) complexity - it doesn't matter. Every-day dictionaries have only a few entries anyway. Ease of use is paramount.
The syntax should generally be like that of an array. This means a versatile operator[], like:
dictionary[key] = value; //insertion and update
dictionary[key] = NULL; //deletion of an element
if(dictionary[key]) //check if the element exists. No default-constructed value should be inserted!
Java and Objective-C don’t have operator overloading so this is impossible with them. C# has it and makes an excellent use of it. Can’t C++ do the same?
Both values and keys can be of custom user-defined types or primitive types (ints, floats, etc.).
When storing user-defined objects, they should be referenced by a shared_ptr. I’m using Boost, so this will be crucial in preventing memory leaks. The other three platforms are either garbage-collected (C#/Java) or offer the choice between manual memory management, reference counting and garbage collection (Objective-C). Boost does a good job of implementing reference counting, so it should be possible. That’s exactly what Objective-C’s NSDictionary does under the hood with ARC turned on.
When storing user-defined objects, they should be compared based on memory address by default. Very important: No hash functions, operator==, operator<, common base classes, etc. should be required for user-defined objects. Requiring those things could be OK to explicitly change the comparison from memory address to something else, for example by-value comparisons for strings. But most of the time we just want memory address comparisons.
When storing primitive data types, they should be compared by value. Whether they are wrapped/boxed in some internal object should be irrelevant to the user. Again, performance doesn’t matter.
Checking if a value exists with a given key should be possible with if(dictionary[key]). This SHOULD NOT insert a default constructed value object, as it does in Boost and STL.
Should be strongly-typed for both keys and values. So no void*. Also, no common base classes should be required for both keys and values, as this would be too intrusive and would make 3rd party classes harder to store in the map.
Unique keys. No null values allowed (null values cause deletion).
The keys should be accessible as a vector or array and traversable by index. Iterators require too much typing. That is, we should be able to write:
for(int i = 0; i < dictionary.getKeys().getCount(); i++){
shared_ptr<Value> value = dictionary[dictionary.getKeys()[i]];
}
Having to write gargantuan for-loops full of iterator declarations muddles the clarity of the source code. And typedefs for iterators are no good either, as they only increase the complexity by having to “Go To Definition” for every new one you encounter, especially when reading someone else’s code.
I guess I could add some more points to the list, but I’ll just stop here. Do you know of any library with a map class that satisfies at least most of those points? I’d be very grateful for any constructive feedback.
I will address your points one by one. In many ways, what you are asking is unrealistic for C++ (just like asking in Java for a map supporting custom operators is unrealistic).
Here are my main pain-points regarding Boost’s and STL’s implementations and what I’d like instead:
1.First of all, performance is not an issue here. Memory allocations, virtual function calls, O(n) complexity - it doesn't matter. Every-day dictionaries have only a few entries anyway. Ease of use is paramount.
Performance is always "not an issue" until it is an issue, and when it is, it can sink your project. That is why, even when performance is not an issue, it is good to keep it in mind. No self-respecting library will expose a concept implementation (such as a map/dictionary) with an API specification like "performance is not an issue". If one does (though I think none should), either it is implemented efficiently anyway, or it belongs to a library you should probably stay away from.
2.The syntax should generally be like that of an array. This means a versatile operator[], like:
dictionary[key] = value; //insertion and update
This is already implemented by C++'s std::map.
dictionary[key] = NULL; //deletion of an element
This is unrealistic for C++. It could be implemented by accessing values through a custom reference wrapper and making that wrapper assignable from nullptr. The problem is that in C++, a pointer is a data type in its own right.
That is, what would this mean?
IdealMap<int, my_obj*> pointer_map;
pointer_map[1] = new my_obj{};
pointer_map[1] = nullptr;
Does the last line set pointer_map[1] to NULL, or does it ensure that any access to element 1 from this point on will throw a "not found" exception?
You could alternately write an implementation like this:
IdealMap<int, my_obj*> pointer_map;
pointer_map[1] = new my_obj{};
pointer_map[1] = IdealMap<int, my_obj*>::novalue;
Where novalue would be a special constant, conceptually representing "none".
if(dictionary[key]) //check if the element exists.
This, again, is not a good idea to implement in a generic C++ map. Consider this map:
IdealMap<int, bool> bool_map;
bool_map[0] = true;
if(bool_map[0]) {...}
Are you checking here that an element exists at position zero, or that the element is true?
No default-constructed value should be inserted!
This is trivial to implement yourself, by encapsulating a map in a custom class.
The rest of your points read more like a shopping list. Sorry, nobody has written a dictionary class matching it (that is, no, there is no library whose dictionary works exactly the way you would like).
Having to write gargantuan for-loops full of iterator declarations muddles the clarity of the source code. And typedefs for iterators are no good either, as they only increase the complexity by having to “Go To Definition” for every new one you encounter, especially when reading someone else’s code.
Writing gargantuan for-loops is not something you ever have to do. If you have that, your problem is lack of refactoring in your code base (IMHO), not a map.
I've never done Bison or Wisent before.
How can I get started?
My real goal is to produce a working Wisent/Semantic grammar for C#, to allow C# to be edited in emacs with code-completion and all the other CEDET goodies. (For those who don't know, Wisent is an emacs-lisp port of GNU Bison, which is included in CEDET. A wisent, apparently, is a European bison. And Bison, I take it, is a play on words deriving from YACC. And CEDET is a Collection of Emacs Development Tools. All caught up? I'm not going to try to define emacs.)
Microsoft provides the BNF grammar for C#, including all the LINQ extensions, in the language reference document. I was able to translate that into a .wy file that compiles successfully with semantic-grammar-create-package.
But the compiled grammar doesn't "work". In some cases the grammar "finds" enum declarations, but not class declarations. Why? I don't know. I haven't been able to get it to recognize attributes.
I'm not finding the "debugging" of the grammar to be very easy.
I thought I'd take a step back and try to produce a wisent grammar for a vastly simpler language, a toy language with only a few keywords. Just to sort of gain some experience. Even that is proving a challenge.
I've seen the .info documents on the grammar framework and wisent, but... those things still aren't really clarifying for me how the stuff actually works.
So
Q1: any tips on debugging a wisent grammar in emacs? Is there a way to run a "lint-like" thing on the grammar to find out if there are unused rules, dead-ends stuff like that? What about being able to watch the parser in action? Anything like that?
Q2: Any tips on coming up to speed on bison/wisent in general? What I'm thinking of is a tool that will allow me to gain some insight into how the rules work, something that provides some transparency, instead of the "it didn't work" experience I'm getting now with Wisent.
Q3: Rather than continue to fight this, should I give up and become an organic farmer?
ps: I know about the existing C# grammar in the contrib directory of CEDET/semantic. That thing works, but ... It doesn't support the latest C# spec, including LINQ, partial classes and methods, yield, anonymous methods, object initializers, and so on. Also it mostly punts on parsing a bunch of the C# code. It sniffs out the classes and methods, and then bails out. Even foreach loops aren't done quite right. It's good as far as it goes, but I'd like to see it be better. What I'm trying to do is make it current, and also extend it to parse more of the C# code.
You may want to look at the calc example in the semantic/wisent directory. It is quite simple, and also shows how to use the %left and %right features. It will "execute" the code instead of converting it into tags. Other simple grammars include the 'dot' parser in cogre and the srecode parser in srecode.
For wisent debugging, there is a verbosity flag in the menu, though to be honest I haven't tried it. There is also wisent-debug-on-entry, which lets you select an action that will cause the Emacs debugger to stop in that action so you can see what the values are.
The older "bovine" parser has a debug mode that allows you to step through the rules, but it was never ported to wisent. That is a feature I have sorely missed as I write wisent parsers.
Regarding Q1:
First, make sure that the wisent parser is actually used:
(fetch-overload 'semantic-parse-stream)
should return wisent-parse-stream.
Run the following elisp-snippet:
(easy-menu-add-item semantic-mode-map '(menu-bar cedet-menu)
                    ["Wisent-Debug" wisent-debug-toggle :style toggle :selected (wisent-debug-active)])

(defun wisent-debug-active ()
  "Return non-nil if wisent debugging is active."
  (assoc 'wisent-parse-action-debug (ad-get-advice-info-field 'wisent-parse-action 'after)))

(defun wisent-debug-toggle ()
  "Install debugging of wisent-parser"
  (interactive)
  (if (wisent-debug-active)
      (ad-unadvise 'wisent-parse-action)
    (defadvice wisent-parse-action (after wisent-parse-action-debug activate)
      (princ (format "\ntoken:%S;\nactionList:%S;\nreturn:%S\n"
                     (eval i)
                     (eval al)
                     (eval ad-return-value))
             (get-buffer-create "*wisent-debug*"))))
  (let ((fileName (locate-file "semantic/wisent/wisent" load-path '(".el" ".el.gz")))
        fct found)
    (if fileName
        (with-current-buffer (find-file-noselect fileName)
          (goto-char (point-max))
          (while (progn
                   (backward-list)
                   (setq fct (sexp-at-point))
                   (null
                    (or
                     (bobp)
                     (and
                      (listp fct)
                      (eq 'defun (car fct))
                      (setq found (eq 'wisent-parse (cadr fct))))))))
          (if found
              (eval fct)
            (error "Did not find wisent-parse.")))
      (error "Source file for semantic/wisent/wisent not found."))))
It creates a new entry Wisent-Debug in the Development menu. Clicking this entry toggles debugging of the wisent parser. The next time you reparse a buffer with the wisent parser, it outputs debug information to the buffer *wisent-debug*. That buffer is not shown automatically, but you can find it via the buffer menu.
To avoid flooding *wisent-debug*, you should disable "Reparse when idle".
From time to time you should clear the buffer *wisent-debug* with erase-buffer.
If I do this I get a System.StackOverflowException:
private string abc = "";
public string Abc
{
get
{
return Abc; // Note the mistaken capitalization
}
}
I understand why -- the property is referencing itself, leading to an infinite loop. (See previous questions here and here).
What I'm wondering (and what I didn't see answered in those previous questions) is why doesn't the C# compiler catch this mistake? It checks for some other kinds of circular reference (classes inheriting from themselves, etc.), right? Is it just that this mistake wasn't common enough to be worth checking for? Or is there some situation I'm not thinking of, when you'd want a property to actually reference itself in this way?
You can see the "official" reason in the last comment here.
Posted by Microsoft on 14/11/2008 at 19:52

Thanks for the suggestion for Visual Studio!

You are right that we could easily detect property recursion, but we can't guarantee that there is nothing useful being accomplished by the recursion. The body of the property could set other fields on your object which change the behavior of the next recursion, could change its behavior based on user input from the console, or could even behave differently based on random values. In these cases, a self-recursive property could indeed terminate the recursion, but we have no way to determine if that's the case at compile-time (without solving the halting problem!).

For the reasons above (and the breaking change it would take to disallow this), we wouldn't be able to prohibit self-recursive properties.

Alex Turner
Program Manager
Visual C# Compiler
Another point in addition to Alex's explanation is that we try to give warnings for code which does something that you probably didn't intend, such that you could accidentally ship with the bug.
In this particular case, how much time would the warning actually save you? A single test run. You'll find this bug the moment you test the code, because it always immediately crashes and dies horribly. The warning wouldn't actually buy you much of a benefit here. The likelihood that there is some subtle bug in a recursive property evaluation is low.
By contrast, we do give a warning if you do something like this:
int customerId;
...
this.customerId = this.customerId;
There's no horrible crash-and-die, and the code is valid code; it assigns a value to a field. But since this is nonsensical code, you probably didn't mean to do it. Since it's not going to die horribly, we give a warning that there's something here that you probably didn't intend and might not otherwise discover via a crash.
Property referring to itself does not always lead to infinite recursion and stack overflow. For example, this works fine:
int count = 0;
public string Abc
{
    get
    {
        count++;
        if (count < 3) return Abc; // recurses, but only a bounded number of times
        return "Foo";
    }
}
The above is a dummy example, but I'm sure one could come up with useful recursive code along similar lines. The compiler cannot determine whether infinite recursion will happen (halting problem).
Generating a warning in the simple case would be helpful.
They probably considered that it would unnecessarily complicate the compiler without any real gain.
You will discover this typo easily the first time you call this property.
First of all, you'll get a warning about the unused field abc.
Second, there is nothing inherently wrong with recursion, provided it's not endless. For example, the code might adjust some inner variables and then call the same getter recursively. There is, however, no easy way for the compiler to prove whether some recursion is endless or not (the task is at least NP-hard). The compiler could catch some easy cases, but then consumers would be surprised that the more complicated cases get through the compiler's checks.
The other cases that it checks for (except recursive constructors) are invalid IL.
In addition, all of those cases (even recursive constructors) are guaranteed to fail.
However, it is possible, albeit unlikely, to intentionally create a useful recursive property (using if statements).
Some time ago I had to address a certain C# design problem while implementing a JavaScript code-generation framework. One of the solutions I came up with was using the “using” keyword in a totally different (hackish, if you please) way. I used it as syntactic sugar (well, originally it is that anyway) for building a hierarchical code structure. Something that looked like this:
CodeBuilder cb = new CodeBuilder();
using(cb.Function("foo"))
{
// Generate some function code
cb.Add(someStatement);
cb.Add(someOtherStatement);
using(cb.While(someCondition))
{
cb.Add(someLoopStatement);
// Generate some more code
}
}
It works because the Function and While methods return an IDisposable object that, upon disposal, tells the builder to close the current scope. Such a thing can be helpful for any tree-like structure that needs to be hard-coded.
Do you think such “hacks” are justified? After all, you could say that in C++, for example, many features such as templates and operator overloading get over-abused, and this behavior is encouraged by many (look at Boost, for example). On the other hand, you could say that many modern languages discourage such abuse and give you specific, much more restricted features.
My example is, of course, somewhat esoteric, but real. So what do you think about the specific hack and of the whole issue? Have you encountered similar dilemmas? How much abuse can you tolerate?
I think this is something that has blown over from languages like Ruby that have much more extensive mechanisms to let you create languages within your language (google for "dsl" or "domain specific languages" if you want to know more). C# is less flexible in this respect.
I think creating DSLs in this way is a good thing. It makes for more readable code, and using blocks can be a useful part of a DSL in C#. In this case, though, I think there are better alternatives: the use of using here strays a bit too far from its original purpose, which can confuse the reader. I like Anton Gogolev's solution better, for example.
Offtopic, but just take a look at how pretty this becomes with lambdas:
var codeBuilder = new CodeBuilder();
codeBuilder.DefineFunction("Foo", x =>
{
    codeBuilder.While(condition, y =>
    {
        // loop body
    });
});
It would be better if the disposable object returned from cb.Function(name) were the object on which the statements are added. It's fine for this function builder to pass the calls through internally to private/internal methods on the CodeBuilder, as long as the sequence is clear to public consumers.
So long as the Dispose implementation makes the following code cause a runtime error:
CodeBuilder cb = new CodeBuilder();
var function = cb.Function("foo");
using(function)
{
    // Generate some function code
    function.Add(someStatement);
}
function.Add(something); // this should throw
Then the behaviour is intuitive and relatively reasonable, and the correct usage (below) is encouraged while misuse is prevented:
CodeBuilder cb = new CodeBuilder();
using(var function = cb.Function("foo"))
{
// Generate some function code
function.Add(someStatement);
}
I have to ask why you are using your own classes rather than the provided CodeDomProvider implementations, though. (There are good reasons for this, notably that the current implementation lacks many of the C# 3.0 features.) But since you don't mention it yourself...
Edit: I would second Anton's suggestion to use lambdas. The readability is much improved (and you have the option of using expression trees).
If you go by the strictest definitions of IDisposable then this is an abuse. It's meant to be used as a method for releasing native resources in a deterministic fashion by a managed object.
The use of IDisposable has evolved to essentially mean "any object which should have a deterministic lifetime". I'm not saying this is right or wrong, but that's how many APIs and users are choosing to use IDisposable. Given that definition, it's not an abuse.
I wouldn't consider it terribly bad abuse, but I also wouldn't consider it good form because of the cognitive wall you're building for your maintenance developers. The using statement implies a certain class of lifetime management. This is fine in its usual uses and in slightly customized ones (like #heeen's reference to an RAII analogue), but those situations still keep the spirit of the using statement intact.
In your particular case, I might argue that a more functional approach like #Anton Gogolev's would be more in the spirit of the language as well as maintainable.
As to your primary question, I think each such hack must ultimately stand on its own merits as the "best" solution for a particular language in a particular situation. The definition of best is subjective, of course, but there are definitely times (especially when the external constraints of budgets and schedules are thrown into the mix) where a slightly more hackish approach is the only reasonable answer.
I often "abuse" using blocks. I think they provide a great way of defining scope. I have a whole series of objects that I use for capture and restoring state (e.g. of Combo boxes or the mouse pointer) during operations that may change the state. I also use them for creating and dropping database connections.
E.g.:
using(_cursorStack.ChangeCursor(System.Windows.Forms.Cursors.WaitCursor))
{
...
}
I wouldn't call it abuse. Looks more like a fancied up RAII technique to me. People have been using these for things like monitors.
How can I set Resharper to wrap, say, the generated equality members with regions when selected from the Alt+Insert menu?
Thanks
There is usually a "wrap in regions" option towards the bottom of the dialog box, but not for this one. I would submit that to JetBrains as a request. For the time being, you'll have to select the generated methods and use Ctrl+E, U, 5 (the surround-with shortcut) to get the expected result.
It doesn't really answer your question, but I just can't resist trying to convince you NOT to use regions. Why would you want to do that? The obvious disadvantages of regions are:
they don't compile, so you can never know if the name of the region really describes what is inside
regions are often used to hide rubbish code. The thinking here is: you can't see the rubbish bits, so it is as if they didn't exist. But guess what, they still exist...
regions are just textual, they don't have any semantic meaning. That means that the code inside the region can change the state of another region - which doesn't help to figure out what is happening in the class at all
if you structure your code correctly, it should be obvious what it is doing anyway
I believe using regions makes sense pretty much only for automatically generated parts, e.g. WinForms designer stuff. In most (all?) other cases it is much better to refactor the code, extract some extra classes or methods, etc. to make it clear.
You can highlight the text you are interested in wrapping, use the Visual Studio keyboard shortcut Ctrl+K, S, and select #region from the menu.