Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I've written an (open source) C#/.NET library that contains a handful of strings that may be displayed to the user. Thus, it would be good to have them translatable.
I've worked at a couple of companies now and they always solved this problem via .resx files. However, as companies, they a) know exactly which languages their applications will be translated into and b) have the resources (manpower, money) to have all of their strings translated.
As an open source author I neither want to limit the translation of my library to a certain set of languages nor do I have the resources to provide any translation at all.
So, ideally I would only provide the English "translation" for all my strings, and users of my library would have some way of translating these strings into their desired languages without any code changes to my library.
To my (limited) understanding, when using .resx files the default language (English) is compiled directly into the assembly/DLL, whereas other languages are provided as satellite assemblies. So, in theory, users of my library could provide the satellite assemblies for their desired languages themselves.
Would this work for open source libraries (and if yes, how)? Or are there other, better (recommended) ways of how to deal with this problem?
(Ideally the solution should work with .NET Core.)
Having users of your library provide translations is not uncommon or unreasonable, I guess. At work we do the same with a commercial library where we also don't have the resources to provide all languages out of the box.
Translation still works with satellite assemblies; the only complicated part is getting the resource names right (they use the default namespace of the project plus any folder names, unless you provide a custom name in the project file) so that they are picked up correctly at runtime.
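To make that concrete, here is a minimal sketch of how a library can expose its strings so that user-supplied satellite assemblies are picked up at runtime. The namespace "MyLibrary" and the "Strings" resource base name are placeholders for illustration, not anything from the question:

```csharp
using System;
using System.Globalization;
using System.Resources;

// Minimal sketch, assuming the library's default namespace is "MyLibrary" and a
// neutral-language Strings.resx (base name "MyLibrary.Strings") is compiled
// into the main assembly. Both names are placeholders.
public static class LibraryStrings
{
    private static readonly ResourceManager Resources =
        new ResourceManager("MyLibrary.Strings", typeof(LibraryStrings).Assembly);

    public static string Get(string name)
    {
        try
        {
            // ResourceManager probes for a satellite assembly matching
            // CurrentUICulture (e.g. a user-provided de/MyLibrary.resources.dll)
            // and falls back to the neutral English resources baked into the
            // main assembly.
            return Resources.GetString(name, CultureInfo.CurrentUICulture) ?? name;
        }
        catch (MissingManifestResourceException)
        {
            return name; // last-resort fallback: return the key itself
        }
    }
}
```

Users of the library can then translate it by dropping a culture-specific `MyLibrary.resources.dll` into the matching culture subfolder, with no code changes to the library itself.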
You could use JSON to solve your translation problem.
Slay the Spire is a really fun rogue-like deck-building game, and to translate the game into various languages the developers came to the community with guidelines and files (which are basically JSON files). Of course, I don't know the ins and outs of how they did that exactly, but it seems you could use the same approach for your library: check the computer's locale (or use any other mechanism) to get the user's main language and pull in the right JSON file (if it exists) before the program starts up.
See the game's TRANSLATOR_README and its example for the French translation.
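A minimal sketch of that idea in C#: ship built-in English strings, and let users drop in a culture-named JSON file that overrides them. The "i18n" folder name, file naming scheme, and keys are all made-up assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Text.Json;

// Hypothetical sketch: the "i18n" folder, the file naming scheme, and the
// keys are illustrative assumptions, not part of any real library.
public static class JsonTranslations
{
    // The built-in English strings double as the fallback "translation".
    private static readonly Dictionary<string, string> Fallback =
        new Dictionary<string, string>
        {
            { "greeting", "Hello" },
            { "farewell", "Goodbye" },
        };

    private static Dictionary<string, string> _current =
        new Dictionary<string, string>(Fallback);

    // Call once at startup: looks for e.g. i18n/fr.json next to the app
    // and overrides the built-in strings where the file provides them.
    public static void Load(CultureInfo culture)
    {
        string path = Path.Combine("i18n", culture.TwoLetterISOLanguageName + ".json");
        var merged = new Dictionary<string, string>(Fallback);
        if (File.Exists(path))
        {
            var overrides = JsonSerializer.Deserialize<Dictionary<string, string>>(
                File.ReadAllText(path));
            foreach (var pair in overrides)
                merged[pair.Key] = pair.Value;
        }
        _current = merged;
    }

    public static string Get(string key) =>
        _current.TryGetValue(key, out string s) ? s : Fallback[key];
}
```

Keys missing from a community file silently fall back to English, so partial translations still work.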
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I've heard a few times that Resource Dictionaries can slowly but surely build up and become a drag on the performance of an app (especially as merged dictionaries begin to reference other merged dictionaries and it all clumps together into an unintended matrix of resources).
With this in mind, should styles be assimilated into C# as custom controls that intelligently carry out what the style(s) would have set a given control's properties to, selected by an internally defined Enum="Example" instead of Style="{StaticResource Example}"?
And if so, at what point/level of ResourceDictionary 'severity' (for lack of a better word) should this be done?
Additionally, how much attention should XAML even be given over C# if it turns out C# is more efficient at runtime?
Should XAML simply be used as an area to place minimalistic tags ultimately defined, styled, given properties and controlled by C#?
In my opinion and experience, resources are a very good tool provided by the WPF engine. As for the performance question: consider two huge tools, Visual Studio and Blend. Both are built in WPF and their UI elements use dynamic resources heavily, yet there are no performance issues with either tool. So, to answer your question correctly: use the right technology in the right place. Resources give you great flexibility when you want to modify something like a theme or the visual appearance. That said, to your point, you do need to be careful with their usage and try to keep resources in check; include only the required resources in each page.
So, conclusion:
1. No, do not make a practice of converting everything into control and not use resources at all.
2. Yes, you need to make a concerted effort with respect to resources to keep application performance optimized.
If you're referring to memory bloat due to repeatedly adding the same resources via transitive dependencies, check out the various implementations of SharedResourceDictionary floating about on the web. This can reduce working set and reduce startup time (in my experience) but you should take care to avoid memory leaks as most just store a static map from URI string to ResourceDictionary.
If you're making a more general question about whether resource dictionaries are useful or not, then yes, they are very useful and even essential for many kinds of common XAML patterns (such as StaticResource).
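For reference, the pattern those SharedResourceDictionary implementations follow is roughly this (a sketch of the common shape found in blog posts, not any specific library's code; it requires the WPF assemblies):

```csharp
using System;
using System.Collections.Generic;
using System.Windows;

// Sketch of the "SharedResourceDictionary" pattern mentioned above.
public class SharedResourceDictionary : ResourceDictionary
{
    // Static cache: each URI is loaded once per process instead of once per
    // merge. This is where the working-set and startup savings come from,
    // and also the memory-leak caveat: entries live for the process lifetime.
    private static readonly Dictionary<Uri, ResourceDictionary> Cache =
        new Dictionary<Uri, ResourceDictionary>();

    private Uri _source;

    public new Uri Source
    {
        get { return _source; }
        set
        {
            _source = value;
            ResourceDictionary dict;
            if (!Cache.TryGetValue(value, out dict))
            {
                dict = new ResourceDictionary { Source = value };
                Cache[value] = dict;
            }
            MergedDictionaries.Add(dict);
        }
    }
}
```

In XAML you would then merge `SharedResourceDictionary` instead of `ResourceDictionary` wherever the same URI is pulled in transitively.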
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am preparing for my finals and came across this question:
Learn about the reflection mechanisms of Java, C#, and Prolog, all of which allow a program to inspect and reason about its own symbol table at run time. How complete are these mechanisms? (For example, can a program inspect symbols that aren't currently in scope?) What is reflection good for? What uses should be considered good or bad programming practice?
Why is this question asked in terms of symbol tables? Can I write the same solution that I would write in terms of classes and objects, as in this SO question:
What is reflection and why is it useful?
I think of reflection as the basic tool for doing metaprogramming.
This turns out to be a declarative way to solve (a certain kind of) problem.
Sometimes, instead of building a solution directly, it can be useful to write something that allows you to describe the problem space; that is, to see if your problem can be restated in a more practical language.
You see, we treat languages as components of algorithms, like data. Then we can exchange components between languages.
Practically, an interesting example of Java/Prolog reflection is JPL.
Some time ago I found C# reflection useful, and performant. Combined with the Emit API, it allows you to produce compiled code.
Prolog uses reflection in seamless ways: for instance, DCGs are really a 'simple' rewrite of declared rules.
I've started a project that I hope will take me to Prolog controlling a Qt interface; of course, Qt's reflection plays a fundamental role there.
Edit: About your question on symbol tables: 'symbol' is an extremely general term, and all languages have a concept of symbols, (maybe) aggregated differently. That's the core of languages. So the question is perfectly well posed in very general terms, just to check your understanding of these basic language concepts.
The "symbol table" is just an internal concept that is needed for "reflection" to do what it does: give a program the ability to examine itself at runtime and do something dynamically with that information. (Be aware of the difference between introspection and reflection.)
So if you understand what reflection is good for, how it is implemented in your target platform (Java, C# etc.), and what might be the limitations, you should be able to answer all those questions I suppose.
Think about the symbol table as just an "implementation detail" of a platform/runtime. According to the question above I don't think they expect you to know exactly how this is implemented.
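For the C# part, here is a small self-contained example of the kind of inspection the exam question is after, including reading a symbol (a private field) that isn't in scope for outside code. The type and member names are made up for illustration:

```csharp
using System;
using System.Reflection;

public class Sample
{
    private int _hidden = 42;                          // not "in scope" for callers
    public string Greet(string name) { return "Hello, " + name; }
}

public static class ReflectionDemo
{
    // Enumerate the public instance methods a type declares itself.
    public static string[] PublicMethodNames(Type t)
    {
        MethodInfo[] methods = t.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);
        string[] names = new string[methods.Length];
        for (int i = 0; i < methods.Length; i++)
            names[i] = methods[i].Name;
        return names;
    }

    // Reflection can reach symbols the language's scoping rules hide, which
    // speaks to the "completeness" part of the exam question.
    public static object ReadPrivateField(object obj, string fieldName)
    {
        FieldInfo field = obj.GetType().GetField(
            fieldName, BindingFlags.NonPublic | BindingFlags.Instance);
        return field.GetValue(obj);
    }
}
```

The same queries map onto `java.lang.reflect` in Java and onto `clause/2` and friends in Prolog.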
I'd suggest you read the following pages to get an idea of reflection in the corresponding languages:
JAVA
C#
Prolog - Search for 'Reflection'
After reading those you should see the similarities between the mechanisms.
To be honest, I've never worked with reflection in Prolog, but the docs should guide you through.
The symbol table is used by the reflection mechanisms to look things up.
See here for a description of symbol tables.
Those resources should give you an idea of how to answer your questions.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I'm looking for a set of classes (preferably in the .net framework) that will parse C# code and return a list of functions with parameters, classes with their methods, properties etc. Ideally it would provide all that's needed to build my own intellisense.
I have a feeling something like this should be in the .net framework, given all the reflection stuff they offer, but if not then an open source alternative is good enough.
What I'm trying to build is basically something like Snippet Compiler, but with a twist. I'm trying to figure out how to get the code dom first.
I tried googling for this but I'm not sure what the correct term for this is so I came up empty.
Edit: Since I'm looking to use this for intellisense-like processing, actually compiling the code won't work since it will most likely be incomplete. Sorry I should have mentioned that first.
While .NET's CodeDom namespace provides the basic API for code language parsers, the parsers themselves are not implemented. Visual Studio does this through its own language services, which are not available in the redistributable framework.
You could either...
Compile the code then use reflection on the resulting assembly
Look at something like the Mono C# compiler which creates these syntax trees. It won't be a high-level API like CodeDom but maybe you can work with it.
There may be something on CodePlex or a similar site.
UPDATE
See this related post: Parser for C#
If you need it to work on incomplete code, or code with errors in it, then I believe you're pretty much on your own (that is, you won't be able to use the CSharpCodeCompiler class or anything like that).
There are tools like ReSharper that do their own parsing, but that's proprietary. You might be able to start with the Mono compiler, but in my experience, writing a parser that works on incomplete code is a whole different ballgame from writing one that's just supposed to spit out errors on incomplete code.
If you just need the names of classes and methods (metadata, basically) then you might be able to do the parsing "by hand", but I guess it depends on how accurate you need the results to be.
The Mono project's GMCS compiler contains a pretty reusable parser for C# 4.0. And it is relatively easy to write your own parser that will suit your specific needs. For example, you can reuse this: http://antlrcsharp.codeplex.com/
Have a look at CSharpCodeCompiler in the Microsoft.CSharp namespace. You can compile using CSharpCodeCompiler and access the resulting assembly via CompilerResults.CompiledAssembly. From that assembly you can get the types, and from each type you can get all the property and method information using reflection.
The performance will be pretty average, as you will need to compile all the source code whenever something changes. I am not aware of any methods that will let you incrementally compile snippets of code.
Have you tried using the Microsoft.CSharp.CSharpCodeProvider class? This is a full C# code provider that supports CodeDom. You would simply need to call .Parse() on a text stream, and you get a CodeCompileUnit back.
var codeStream = new StringReader(code);
var codeProvider = new CSharpCodeProvider();
var compileUnit = codeProvider.Parse(codeStream);
// compileUnit contains your code dom
Well, seeing as the above does not work (I just tested it), the following article might be of interest. I bookmarked it a good long time ago, so I believe it only supports C# 2.0, but it might still be worth it:
Generate Code-DOMs directly from C# or VB.NET
It might be a bit late for Blindy, but I recently released a C# parser that would be perfect for this sort of thing, as it's designed to handle code fragments and retains comments:
C# Parser and CodeDOM
It handles C# 4.0 and also the new 'async' feature. It's commercial, but is a small fraction of the cost of other commercial compilers.
I really think few people realize just how difficult parsing C# has become, especially if you need to resolve symbolic references properly (which is usually required, unless maybe you're just doing formatting). Just try to read and fully understand the Type Inference section of the 500+ page language specification. Then, meditate on the fact that the spec is not actually fully correct (as mentioned by Eric Lippert himself).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I am interested in contributing something to Mono, whether it is documentation or whatever. As a first step, I downloaded the source tree to go through the code. However, I thought that if someone had already spent enough time understanding the project structure, that would help everyone here. Can anyone point me to where the project structure is well explained?
NOTE: This is not a duplicate of https://stackoverflow.com/questions/1655090/mono-source-code-walkthrough-tutorial; the answer to that question doesn't meet my expectations.
You should have checked out (subversion checkout URLs here):
trunk/libgdiplus
This is a library used by System.Drawing.
trunk/mono
This is what we call the Mono runtime. Contains mainly C source code. Under this directory you can find:
data/: a few configuration files for the different versions (1.x, 2.x, ...).
msvc*/: Visual Studio solution files to build the Mono runtime.
libgc/: the Boehm Garbage Collector sources.
mono/: Mono runtime sources.
mini/: JIT source code
metadata/: these are almost all the functions used by the Mono runtime (marshaling, thread pool, socket I/O, file I/O, console I/O, application domains, GC, performance counters,...). It's more or less one C file each.
util/: miscellaneous functions.
io-layer/: Win32 I/O emulation functions.
trunk/mcs
This is where the C# compiler, the class libraries, class libraries tests and other tools are.
class/: one folder per assembly. Each of them contains the source code for the assembly, split into directories named after namespaces (i.e., System/System.Configuration and so on), and usually a Test directory too. The only naming exception is mscorlib, whose corresponding folder is called corlib.
For example, if you want to see the source code for System.Net.HttpWebRequest, which is in the System.dll assembly, you go to trunk/mcs/class/System/System.Net, and there should be a file named HttpWebRequest.cs containing the code you're looking for.
mcs/: the sources for the C# compilers (mcs, gmcs, smcs, dmcs...)
tools/: these are a bunch of tools used for development (sn, wsdl,...), documentation (monodoc), etc. Most of the tools names match the MS ones.
There are a lot more directories around, but those are where you should look for the C and C# code. Also, I suggested trunk for the checkout, since you will get the most up-to-date sources that way.
Update: Mono resides now in github and mcs has been integrated into the mono repository.
Gonzalo provided a good overview of the different modules.
Since you also mentioned wanting to contribute to documentation, you'll want a few more pieces of information.
First, Documentation is stored in XML files within mcs/class/[assembly]/Documentation/, e.g. mcs/class/corlib/Documentation. The intent is to support multiple human languages (though only English is currently being worked on), so within Documentation is a language directory, usually en. Within en there are ns-*.xml files, e.g. mcs/class/corlib/Documentation/en/ns-System.xml contains documentation for the System namespace. Also within en are "dotted namespace" directories, and within those are XML files, one per type, for example mcs/class/corlib/Documentation/en/System.Collections.Generic/IEnumerable`1.xml.
This is also outlined within the mdoc(5) documentation, in the FILE/DIRECTORY STRUCTURE section.
Once you've found the documentation, you need to know the XML format, which is also described in the mdoc(5) documentation, in the NamespaceName/TypeName.xml File Format section. The XML dialect used is a variant of the ECMA 335 XML documentation, changed to have one file per type (instead of all types within a single monolithic file). This is also a superset of C# XML documentation (see Annex E. Documentation Comments, page 487).
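As a rough sketch of that dialect (abbreviated, with most member details elided; see the mdoc(5) man page for the authoritative format), a per-type file like IEnumerable`1.xml looks something like this:

```xml
<Type Name="IEnumerable`1" FullName="System.Collections.Generic.IEnumerable`1">
  <AssemblyInfo>
    <AssemblyName>mscorlib</AssemblyName>
  </AssemblyInfo>
  <Docs>
    <summary>Exposes the enumerator, which supports a simple iteration
      over a collection of a specified type.</summary>
    <remarks>To be added.</remarks>
  </Docs>
  <Members>
    <Member MemberName="GetEnumerator">
      <Docs>
        <summary>Returns an enumerator that iterates through the collection.</summary>
      </Docs>
    </Member>
  </Members>
</Type>
```

The "To be added." placeholders are what you replace when contributing documentation.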
Finally, there's the question of adding new types/members to the mcs/class/[assembly]/Documentation directory. If you have Mono built, you can use the doc-update Makefile target. This will run the appropriate assembly through mdoc(1) and update the appropriate files within the Documentation directory.
If you have any other documentation questions, don't hesitate to ask on the mono-docs-list mailing list.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I have created an application which needs to be handed over to the support group in the next month.
The application is fairly small (2 months of development) and consists of two client-side applications and a database; it's written in C# for the Windows platform.
I have a broad idea of what to include in a support document, but I haven't needed to make very many support documents so far in my career and I want a solid list of items to include.
I guess my goal is to make the lives of everyone in the support group easier and as stress free as possible.
So I guess my questions are:
What should a support document absolutely contain?
What additional things have you put in support documents to make them extra useful?
What other activities can be done before hand-over to make all our lives easier?
Having been on both sides of this process professionally, I can say that the following should be mandatory:
the documentation of the code (javadoc, doxygen, etc)
details on build process
where to get current source
how to file bugs (they will happen)
route to provide patches either to the source or to customers
how it works (simple, but often overlooked)
user-customizable portions (e.g. if there is a scripting component)
primary contacts for each component, aka escalation path
encouragement for feedback from Support as to what else they want to see
I'm sure lots of other things can be added, but these are the top priority in my mind.
Functional Specification (If you have one)
User Manual. Create one if you don't have one
Technical Manual, Containing
Deployment Diagram
Software used
Configuration and build details
Details of server IPs and admin/Oracle/WebSphere passwords
Testing Document
Overview document giving out:
Where all documents are kept
Version control repository and its project/user details
Application usernames/passwords
Any support SQL/tools etc. created by the development team, for analysis, loading data, etc.
Include Screenshots of operations and output.
Prefer an "online, easily updatable" doc (wiki-like) instead of paper or PDF.
If online, make it searchable and cross-linked.
A user manual is a neat thing (pictures, descriptions, and so on)
A rundown of the different features within the application
That's what I'm thinking off the top of my head, if this is "only" for support staff and not further development.