Hello
I need to create a template which is "dynamic", and I'll explain what I mean by "dynamic":
I need a template that is rendered into text files (C++ code, to be exact).
The user will be able to change some things in the generated files.
After a while, a process is run to update the generated files; I need to be able to "spot" where the template regions were and update them accordingly.
My Effort
Currently, I use a "T4 Template" to create the initial render,
and in the template, I plant C++-style comments over the regions I need to recognize later.
Separate code then finds those regions and regenerates what should go between those "comment blocks".
The problem is that the code that generates the boilerplate is not the same code that updates the regions, which costs me a lot of headache and buggy features.
It is not very intuitive to write, and the users (the ones who use the generated code) need to know not to touch the "comment blocks".
Questions
How can I recognize locations/blocks in a generated file without "littering" the file with "comment"/"unimportant" text?
How can I unify the code that generates the "templated blocks" for both "generation" and "update"?
Later on, how can I make it work on non-code files too?
Edit
I guess I wasn't clear about what I am doing.
I am writing a tool in C# that generates C++ code.
Also, T4 is just what I used; any tool/library can be used, although C# libraries are preferred.
Any idea will be highly appreciated,
Thanks.
Now, I believe your question is totally ["open" and "opinion based"] on one side, and ["why is this code not working" without showing the code] on the other side, but I want to try pointing out some problems with the idea of "improvement" you have now.
Q2: How can I unify the code that generates the "templated blocks" for both "generation" and "update"?
I'm strongly convinced that you should not, at least not now. Here's why:
'generate' and 'update' are happening in different directions; first is t4template->content, second is content->t4template
those two directions form different functionality
at least one of these directions requires complex logic not present in the other one
'generate' is based on T4 Engine, while 'update' will probably not be able to use it at all
..and probably many other reasons, but that's enough
Q3: Later on, how can I make it work on non-code files too?
T4 Engine has no idea that what you generate now is a C++ file. T4 works only on the layer of "text files". If the process you have works now, you should already be able to "generate" any text file.
The "update" part is a bit more tricky, because it depends on how you implemented it. If you assumed/used any correlation to C++ syntax, you've got a problem. (Guess why T4 Templates are called a 'text templating engine', agnostic to the actual generated code language.) If you kept it clean and worked as if on a free-form text file, then you're already safe to work on, well, free-form text files.
Q1: How can I recognize locations/blocks in a generated file without "littering" the file with "comment"/"unimportant" text?
Well, basically, you can't and/or shouldn't. Consider the smart idea of keeping a hidden database that remembers text locations for every file. For every comment that you would put in the file, you put a row in the database, saying file: BAR\FOO.CPP | FROM: line 120 char 1 | TO: line 131 char 15 | XXX: yyy | ZZZ: aaa. That's almost no different from having comments in the file; all information is preserved, and the file is clean now, right?
Nope. And that's because you want to detect what has changed. Let's take a highly contrived example; here's the generated file with such invisible markers, managed by the database. Each # character denotes a marker, be it start/stop/metainfo, it doesn't matter:
class FooBar : public #BaseClass#
{
public:
#void Blargh(Whizz& output);#
#int GetAge() const;#
private:
int #shoeSize#;
#
};
Those # are of course invisible; it's just information held elsewhere, and the user sees a clean file. Now, he edits it to this:
class FooBar : public BaseClass
{
public:
template<T>
void Yeeek(T& output);
int GetAge() const;
private:
int shoeSize;
};
Please note how "template" was added and the method renamed to "Yeeek". There were some markers out there; I didn't show them intentionally, just look at the "template<T>" line. What if it was accidentally placed a line or a byte too far or too early, so one marker too many was skipped or included? Now the detector and updater may accidentally skip "template<T>", and will be totally happy to just rename the method. This is not a problem with the detector or updater. This is a problem of the markers not being visible, so the user was not able to see where he should place his edit.
That's probably the most important point. But, let's see something more algorithmic/technical. Let's try an even simpler edit. User edits the file to:
class FooBarize : publ#ic BaseCl#ass
{
int goat;
# string cheese; #
p#ublic: #
void Blargh(Whizz& output);
i#nt GetA#ge() const;
p#rivate:
int shoeSize;
};
I overlaid those invisible markers from 'the external database of markers' back onto this edited file. What has happened? Simple. The user added two more lines in an odd place (he doesn't see the markers, right?), and the database remembers the old places (i.e. 'line:char', but it could be 'byte', or really whatever). Now of course, the database may (and should!) also remember the old shape of the file, so it can see that, e.g., the first # was after ":public", and the process can try to map it onto the new file.. but then you already have a highly complex problem, and this edit was trivial. Of course, you can require the user to enter some information on how to update the markers.. but hey, he doesn't see them, so how can he do it? And since we wanted to hide the markers from him, we probably don't want to ask him about updating them either..
How about editing the file to:
struct FooBar : One,Two,Three,Four
{
void OhNoes();
};
I didn't care to overlay the markers, because it's utter nonsense. Now, how do we map it back to the template? Is OhNoes mappable to GetAge (const removed) or to Blargh (parameters removed)? How should the template base class be updated? Which one of the new bases is the true base? Or maybe all of them? Neither you nor I can decide, even with our combined human intelligence, let alone an automated process.
Of course, you can leave it as a corner case; you can emit an error and inform the user that their edit went too far and is unanalyzable, and so on. But the complexity of reverse-mapping a change back to the model text is still there.
What I want to show you by these contrived examples is that if you want to detect changes and map them back to the original template, you should keep these markers in the generated content. Having these markers in the code allows you to quickly and reliably detect:
which sections changed? (-> content between markers has changed)
which sections were offset by edits? (-> markers are now at a different position than before)
were any sections deleted? (-> both markers and content between removed)
(..)
It also allows the user to see which parts are special, so he can place his edits in a reasonable way, which in turn lets you ignore and not support more corner cases than in the "invisible markers" case.
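For what it's worth, the detection side stays almost trivially simple with visible markers. Here's a minimal C# sketch, assuming marker comments of the form // <gen:Name> ... // </gen:Name> (the marker syntax is my assumption, not something from the question):
using System.Text.RegularExpressions;

static class RegionUpdater
{
    // Matches "// <gen:Name>" ... "// </gen:Name>", including the markers.
    static readonly Regex Region = new Regex(
        @"// <gen:(?<name>\w+)>\r?\n(?<body>.*?)// </gen:\k<name>>",
        RegexOptions.Singleline);

    // Regenerates the body of the named region, keeping the markers intact.
    public static string UpdateRegion(string content, string name, string newBody)
    {
        return Region.Replace(content, m =>
            m.Groups["name"].Value == name
                ? "// <gen:" + name + ">\n" + newBody + "\n// </gen:" + name + ">"
                : m.Value);
    }
}
Everything the list above needs (changed content, moved markers, deleted regions) falls out of comparing the matches against what the generator last emitted.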
Finally, let's take a real-world example which you already know: the T4 Template. All those ugly <%!#!#^$^!%# littering your precious template text. Couldn't they be removed? Couldn't they be kept in a separate file that describes the transformation? Or at least at the beginning or end of the file? Yes, they could. But it would make editing a real pain - we're back to the 'invisible markers' problem: every edit you make to the content might require you to manually update the locations of some invisible markers.
Keep the markers in the generated content.
Keep your users aware of the generation and detection and special regions.
If it's too complex for them, change the users to a more technical group, or train your userbase to be more technical. Or prevent them from editing the file. Give them some partial access so they can edit a part of the file, as an excerpt, not as a whole file. Limit their editing power to the absolute minimum. Maybe it will allow you to limit the number of visible markers, maybe even down to zero, maybe at the cost of splitting and downsizing the editable fragments.
I think you are going about it the wrong way. You have an XY problem here. Allowing your users to modify only part of the generated file and then trying to detect that part is a lot of headache, as you have seen.
Instead, the better solution is to leave the generated file completely unmodifiable and have some configuration available. For instance you can have a config file where users can add their own data members, initializers for them, etc.
This way you have a clear separation of the parts of your system.
The modifications done by the users are now trivially carried to the next iteration and you can easily always re-generate the output.
+------------------+
| Input: Template  | ------\
+------------------+        \
                             |
+------------------+         |  Generator code   +-------------------------+
| Input: Config    | --------+------------------>| Output: Generated code  |
+------------------+         |                   +-------------------------+
                             |
+------------------+         |
| Input: Config    | -------/
+------------------+
This system can be used to generate non-code files as well.
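A minimal sketch of this pipeline in C# (the file names and the one-member-per-line config format are hypothetical): user additions live in a config file, and the output is regenerated wholesale, so there is nothing to detect or merge.
using System.IO;
using System.Linq;

class Generator
{
    static void Main()
    {
        // User-editable config: one member declaration per line, e.g. "int shoeSize".
        var userMembers = File.ReadAllLines("FooBar.members.cfg");

        var code =
            "class FooBar : public BaseClass\n" +
            "{\n" +
            "public:\n" +
            "    void Blargh(Whizz& output);\n" +
            "    int GetAge() const;\n" +
            "private:\n" +
            string.Concat(userMembers.Select(m => "    " + m + ";\n")) +
            "};\n";

        // Safe to overwrite: users never edit the generated file directly.
        File.WriteAllText("FooBar.h", code);
    }
}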
Related
I have a CSV file with 412,000 strings that I would like to store locally so that I can deploy to Android and iOS. The game must then be able to look through these strings to check if there's a match based on user input.
The only viable solution that I can see would be SQLite. I haven't come across a very good SQLite solution for Unity yet.
Is there a built-in solution in Unity that I am overlooking?
The solution has to work locally. No HTTP calls.
400,000 strings is absolutely trivial.
Just put them in a dictionary (list, whatever is relevant and that you prefer).
It's a total non-issue.
It's likely you would just load them from a text file, easy as pie.
public TextAsset theTextFile;
(Just drag the file onto the slot in the Inspector, like any texture or similar.)
You can then very easily read that file as, say, JSON. (Just use JsonUtility. You can find numerous examples of this on SO and elsewhere.) For example,
Blah bb = JsonUtility.FromJson<Blah>(theTextFile.text);
yourDict = bb.fieldname.ToDictionary(i => i.tag, i => i);
Note that you mention "memory" and so on. It's totally irrelevant; the data you are talking about is a fraction of the size of any tiny image. It's a non-issue, and you don't have to think about it. The hardware/software system will handle it.
P.S. ...
If you literally want to use CSV, it's totally easy. I suggest you ask a new question giving the details of your file and so on, so you can get an exact answer.
Note that you'd just use a HashSet rather than a Dictionary. It's even easier.
It's just something like:
var wordList = theTextFile.text.Split('\n');
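Putting the pieces together, a minimal sketch (assuming one word per line in the file; the class and field names are placeholders):
using System.Collections.Generic;
using UnityEngine;

public class WordLookup : MonoBehaviour
{
    // Drag your text file onto this field in the Inspector.
    public TextAsset theTextFile;

    HashSet<string> words;

    void Awake()
    {
        words = new HashSet<string>();
        // Trim() strips the stray '\r' left by Windows line endings.
        foreach (var line in theTextFile.text.Split('\n'))
            words.Add(line.Trim());
    }

    // True if the user's input matches one of the stored strings.
    public bool IsMatch(string userInput)
    {
        return words.Contains(userInput.Trim());
    }
}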
You can google many examples!
https://stackoverflow.com/a/9791488/294884
http://answers.unity.com/answers/397537/view.html
How can I get the first visible (top) and last visible (bottom) line numbers for the Scintilla component in C#? For example, if I scroll the text and I am able to see lines 5-41 (no folding; it is the range of lines which are shown by the component at the moment; for the rest, you have to scroll to them), how do I get those numbers programmatically?
If you ever want to find out how to do something with Scintilla, your first stop should always be the core Scintilla Documentation. It is comprehensive, and usually kept fully up to date.
The correct way to do what you want is to use the SCI_GETFIRSTVISIBLELINE message to get the first line, and then use the SCI_LINESONSCREEN message to calculate the last line.
There are probably Scintilla.NET wrapper methods for those messages. But the Scintilla.NET documentation seems very poor, and doesn't provide a complete description of its API - although I suppose you could always use the SendMessageDirect method (which is documented) to send the messages directly if you can't guess what the wrapper method is called.
For ScintillaNET 2 it would be:
scintilla.Lines.FirstVisibleIndex
scintilla.Lines.VisibleCount
In ScintillaNET 3 the names were refactored to be more like core Scintilla:
scintilla.FirstVisibleLine
scintilla.LinesOnScreen
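So, assuming a ScintillaNET 3+ control named scintilla, getting both numbers is a two-liner (a sketch; note the indices are zero-based):
// First visible line at the top of the view (zero-based).
int firstVisible = scintilla.FirstVisibleLine;

// Last visible line at the bottom: the count includes partially
// visible lines, hence the minus one.
int lastVisible = firstVisible + scintilla.LinesOnScreen - 1;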
I need to estimate the localization effort for a legacy project. I'm looking for a tool that I could point at a directory, and it would:
Parse all *.cs files in the directory structure
Extract all C# string literals from the code
Count total number of occurrences of the strings
Do you know any tool that could do that? Writing it would be simple, but if some time can be saved, then why not save it?
Use ILDASM to decompile your .DLL / .EXE.
I just use the option to dump all, and you get an .il file with a "User Strings" section:
User Strings
-------------------------------------------------------
70000001 : (14) L"Starting up..."
7000001f : (12) L"progressBar1"
70000039 : (21) L"$this.BackgroundImage"
70000065 : (10) L"$this.Icon"
7000007b : ( 6) L"Splash"
Now, if you want to know how many times a certain string is used, search for "ldstr" like this:
IL_003c: /* 72 | (70)000001 */ ldstr "Starting up..." /* 70000001 */
I think this will be a lot easier to parse than C# source.
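If you want to automate the counting, here is a small sketch that tallies ldstr lines in the dumped .il file (the regex is deliberately simplistic and won't handle escaped quotes):
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class LdstrCounter
{
    static void Main(string[] args)
    {
        // args[0]: the .il file produced by "ildasm /out=dump.il Your.dll"
        var ldstr = new Regex(@"ldstr\s+""(.*)""");

        var counts = File.ReadLines(args[0])
            .Select(line => ldstr.Match(line))
            .Where(m => m.Success)
            .GroupBy(m => m.Groups[1].Value)
            .OrderByDescending(g => g.Count());

        foreach (var group in counts)
            Console.WriteLine("{0,5} x \"{1}\"", group.Count(), group.Key);
    }
}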
Doing a quick search, I found the following tool that may or may not be useful to you.
http://www.devincook.com/goldparser/
I also found another SO user who was trying to do something similar.
Regex to parse C# source code to find all strings
Well, if you have hardcoded strings, you need to know what your i18n effort is first (un-hardcoding them could be quite painful). Another issue: you need to count translatable words, not distinct strings; that is the input for translation providers. And even though a string might seem duplicated, it could be translated in a different way depending on the context, so you don't need to care about "distinct", you just have to count all words... That's how localization works, in my experience.
In most common development, you should keep your strings external to your program source code. In your case, could you spare the effort to extract the strings into a resource file?
If so, then you can make use of the default localization solution in .NET, i.e.
resource.resx,
resource.fr.resx,
resource.es.resx
store strings for different locales.
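For example, here is a minimal sketch of reading a localized string (the resource base name and the key are hypothetical); .NET falls back from resource.fr.resx to resource.resx automatically:
using System.Globalization;
using System.Resources;
using System.Threading;

class Demo
{
    static void Main()
    {
        // Switch the UI culture; lookups now prefer resource.fr.resx.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr");

        var rm = new ResourceManager("MyApp.resource", typeof(Demo).Assembly);
        System.Console.WriteLine(rm.GetString("WelcomeMessage"));
    }
}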
Update:
The actual implementation depends on your project architecture/technology. Resource files aren't the best way to do this, but they are the easiest, and the recommended way in .NET.
Like in this article
A few more tutorials
I ask here about XNA and not on its official forums because people from my country are not allowed to sign in to the new XNA website.
Well, these are my questions:
I want to use some 2D images I create in Paint Shop Pro/Photoshop/Paint, but for some reason I need to use a web-safe palette and similar settings for them to be displayed correctly (I use transparency).
Could anyone please explain how I can use transparency and other settings (while creating and saving the image) so that XNA (4.0) displays it correctly?
By the way, it might be that I just need someone to explain how to set the GraphicsDevice's settings to work with a transparency layer/channel.
I really do try to do things as I am supposed to (in Microsoft's view), and thus I use the content pipeline for ALL of my content loading (including class initialization data files).
I use .txt files for storing my class initialization data, and I edit them with good old Notepad(++ :P).
Now, the problem is that all I managed to do is load the .txt file as one really long string instead of creating a new instance of my GameDataFile class.
Because of that, I was forced to do it in two steps:
Step 1:
string tempStrData = content.Load<string>("data/filename").Replace("\r", "");
/* Loads a string from a file (the string is the whole file!) */
Step 2:
GameDataFile gameDataFile = new GameDataFile(tempStrData.Split('\n'));
/* Sends the string to my GameDataFile class constructor, which knows how to handle that string and break it into its data elements (ints, strings, vectors, etc...) */
I want to upgrade it to be of the following form:
GameDataFile gameDataFile = content.Load<GameDataFile>("data/fileName");
I think I should do this using a custom content pipeline processor. Am I right, and how should I achieve that?
P.S. Please don't make me use public members, as I always set them to private, and I strictly forbid myself from using the C#-only get & set methods.
Thanks In Advance, Tal A.
For your first question, set the blendstate to AlphaBlend when you begin your SpriteBatch:
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null);
I save my images as PNGs in Photoshop, which allows transparency.
Edit: Unless you're referring to 3D textures. If so, I'll have to revise my answer.
Edit: As for question 2, this example on App Hub shows how to do it.
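To sketch the processor part (an outline only, based on the question's GameDataFile constructor taking the split lines; you would still pair it with a ContentImporter that reads the .txt into a string, and a ContentTypeWriter/ContentTypeReader pair so the result can be serialized to .xnb):
using Microsoft.Xna.Framework.Content.Pipeline;

[ContentProcessor(DisplayName = "GameDataFile Processor")]
public class GameDataFileProcessor : ContentProcessor<string, GameDataFile>
{
    public override GameDataFile Process(string input, ContentProcessorContext context)
    {
        // The same normalization the question currently does in two manual steps.
        return new GameDataFile(input.Replace("\r", "").Split('\n'));
    }
}
With that in place, content.Load<GameDataFile>("data/fileName") works as the question asks.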
I have developed a large business portal. I just realized I need my website in another language. I have researched the available solutions, like:
Use a third-party control on my website. (Doesn't fit in my design. Not useful from an SEO point of view. Don't want to show third-party brand names.)
Create resource files for each language. (A lot of work is required to restructure pages to use text from resource files. And what about data entered by the user, like a business description?)
Are there any other options available?
I was thinking of a solution like this: when a page is created on the server side, I could translate it before sending it back to the client. Is there any way I can do that? (To translate everything, including data added from databases or through code, and without affecting the design.)
If you really need to translate your application, it's going to take a lot of hard, tedious work. There is no magic bullet.
The first thing you need to do is convert the plain text in your markup to asp:Localize controls. By using the Localize control, you can leave your existing <span> tags in place and just replace the text inside them. There's really no way around this. Visual Studio's search and replace supports regular expression matching that may help you with this, or you can use Resharper (see below).
One approach would be to download the open source shopping application nopCommerce and see how they handle their localization. They store their strings in a database and have a UI for editing languages. A similar approach may work well for you.
Alternatively, if you want to use Resource Files, there are two tools that I would recommend using in addition to Visual Studio: Resharper 5 (Localization Features screencast) and Zeta Resource Editor. These are the steps I would take to accomplish it using this method:
Use the "Generate Local Resource" tool in visual studio for each page
Use Resharper's "Move HTML to resource" on the text in your markup to make them into Localize controls.
Use Resharper to search out any localizable strings in your code behind and move them to the resource file as well.
Use the Globalization Rules of Code Analysis / FXCop to help find any additional problems you might face formatting numbers, dates, etc.
Once all text is in the resx files, use Zeta Resource Editor to load up all of your resx files, add new languages, and export for translation (or auto translate if you're brave enough).
I've used this approach on a site translated into 8 languages (and growing) with dozens of pages (and growing). However, this is not a user-editable site; the pages are solely controlled by the programmers.
A large switch case? Use a dictionary/hashtable (a separate instance for each language); it is much, much more efficient and faster.
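To illustrate, a minimal sketch (all the names and sample entries here are made up): one dictionary per language, and the lookup stays O(1) no matter how many strings you add:
using System.Collections.Generic;

static class Translations
{
    // One table per language; a miss falls back to the original text.
    static readonly Dictionary<string, Dictionary<string, string>> ByLanguage =
        new Dictionary<string, Dictionary<string, string>>
        {
            {
                "es", new Dictionary<string, string>
                {
                    { "Welcome", "Bienvenido" },
                    { "Log out", "Cerrar sesión" },
                }
            },
        };

    public static string TranslateInto(this string text, string language)
    {
        Dictionary<string, string> table;
        string translated;
        if (ByLanguage.TryGetValue(language, out table) &&
            table.TryGetValue(text, out translated))
            return translated;
        return text;
    }
}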
To convert the page to Arabic or another language:
Go to:
1. The page design
2. Tools
3. Generate Local Resource
4. You obtain an "App_LocalResources" folder including "filename.aspx.resx"
5. Copy the file and change the name to "filename.aspx.ar.resx" to convert the page to Arabic (or another language).
Hope this is helpful :)
I found a good solution; see http://www.nopcommerce.com/p/1784/nopcommerce-translator.aspx
This project is open source and the source repository is here: https://github.com/Marjani/NopCommerce-Translator
Good luck.
Without installing any third-party tools, APIs, or DLLs, I am able to use App_LocalResources. I still use Google Translate for the words and sentences to be translated and copy and paste them into the file, as you can see in one of the screenshots below (or you can have a human translator type them in manually). In your project folder (using MS Visual Studio as the editor), add an App_LocalResources folder and create the English and other-language .resx files. In my case, it's a Spanish (es-ES) translation. See the screenshot below.
Next, in your aspx, add the meta tags (meta:resourcekey) that will match those in App_LocalResources: one file for English and another for Spanish. See the screenshots below:
Spanish: (filename.aspx.es-ES.resx)
English: (filename.aspx.resx)
.
Then create a link on your masterpage file with a querystring that will switch the page translation and will be available on all pages:
<%--ENGLISH/SPANISH VERSION BUTTON--%>
<asp:HyperLink ID="eng_ver" runat="server" Text="English" Font-Underline="false"></asp:HyperLink> |
<asp:HyperLink ID="spa_ver" runat="server" Text="EspaƱol" Font-Underline="false"></asp:HyperLink>
<%--ENGLISH/SPANISH VERSION BUTTON--%>
.
On your masterpage code-behind, create dynamic links for the HyperLink controls:
////LOCALIZATION
string thispage = Request.Url.AbsolutePath;
eng_ver.NavigateUrl = thispage;
spa_ver.NavigateUrl = thispage + "?ver=es-ES";
////LOCALIZATION
.
Now, in your pages' code-behind, you can set a session variable to make all links and redirects stick to the desired translation by always appending a query string to URLs.
On PageLoad:
// LOCALIZATION
// Dynamic query string; append Session["add2url"] to URLs.
protected void Page_Load(object sender, EventArgs e)
{
    if (Session["version"] != null)
    {
        Session["add2url"] = "?ver=" + Session["version"]; // Spanish version
    }
    else
    {
        Session["add2url"] = ""; // English as default
    }
}
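One piece the snippet above doesn't show is where Session["version"] gets set, or how the page is told to use the .es-ES.resx file. A plausible way to wire both up (my assumption, not part of the original answer) is to override InitializeCulture on the page:
protected override void InitializeCulture()
{
    // Remember the language chosen via the ?ver= query string.
    string ver = Request.QueryString["ver"];
    if (!string.IsNullOrEmpty(ver))
        Session["version"] = ver;

    if (Session["version"] != null)
    {
        // Makes ASP.NET pick filename.aspx.es-ES.resx for this request.
        UICulture = (string)Session["version"];
        Culture = (string)Session["version"];
    }

    base.InitializeCulture();
}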
.
On Click Events sample:
protected void btnBack_Click(object sender, EventArgs e)
{
Session["FileName.aspx"] = null;
Response.Redirect("FileName.aspx" + Session["add2url"]);
}
I hope my descriptions were easy enough.
If you don't want to write more code, and if Google Translate is feasible for you, then you can try the Google Translate element. Check the code below.
<script src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>
<script>
function googleTranslateElementInit() {
$.when(
new google.translate.TranslateElement({pageLanguage: 'en', includedLanguages: 'en',
layout: google.translate.TranslateElement.FloatPosition.TOP_LEFT}, 'google_translate_element')
).done(function(){
var select = document.getElementsByClassName('goog-te-combo')[0];
select.selectedIndex = 1;
select.addEventListener('click', function () {
select.dispatchEvent(new Event('change'));
});
select.click();
});
}
$(window).on('load', function() {
var select = document.getElementsByClassName('goog-te-combo')[0];
select.click();
var selected = document.getElementsByClassName('goog-te-gadget')[0];
selected.hidden = true;
});
</script>
Also, add the code below inside the <body> tag:
<div id="google_translate_element"></div>
It will certainly be more work to create resource files for each language - but this is the option I would opt for, as it gives you the opportunity to be more accurate. If you do it this way you can have the text translated, manually, by someone that speaks the language (there are many companies out there that offer this kind of service).
Automatic translation systems are often good for giving a general impression of what something in another language means, but I would never use them when trying to portray a professional image, as often what they output just doesn't make sense. Nothing screams 'unprofessional!' like text that just doesn't make sense because it's been automatically translated.
I would take the resource file route over the translation option because the meaning of words in a language can be very contextual and even one mistake could undermine your site's credibility.
As you suggest, Visual Studio can generate the meta resource-file keys for most controls containing text, but it may leave you having to do the rest manually. Still, I don't see an easier, more reliable solution.
I don't think localization is an easy thing to automate anyway: text held in the database often requires schema changes to allow for multiple languages, and web HTML often needs restructuring to deal with truncated or wrapped label and button text when, for example, you've translated into German.
Other considerations:
Culture settings - financial delimiters, date formats.
Right-to-left - some languages, like Arabic, are written right to left, meaning that the pages require rethinking of control positioning, images, etc.
Good luck whatever you go with.
I ended up doing it the hard way:
I wrote an extension method on the string class called TranslateInto
On the Page's PreRender method I grab all controls recursively based on their type (the types that would have text)
Foreach through them and call text.TranslateInto(SupportedLanguages.CurrentLanguage)
In my TranslateInto method I have a ridiculously large switch statement with every string displayed to the user and its associated translation.
It's not very pretty, but it worked.
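For reference, the recursive grab-and-translate step might look roughly like this (a sketch; the control types handled and the SupportedLanguages class are assumptions based on the description above):
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class MyPage : Page
{
    protected override void OnPreRender(System.EventArgs e)
    {
        base.OnPreRender(e);
        TranslateControls(this);
    }

    static void TranslateControls(Control root)
    {
        foreach (Control child in root.Controls)
        {
            var label = child as Label;
            if (label != null)
                label.Text = label.Text.TranslateInto(SupportedLanguages.CurrentLanguage);

            var button = child as Button;
            if (button != null)
                button.Text = button.Text.TranslateInto(SupportedLanguages.CurrentLanguage);

            TranslateControls(child); // recurse into child controls
        }
    }
}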
We work with a Translation CAT tool (Computer Assisted Translation) called MemoQ that allows us to translate the text while leaving all the tags and coding in place. This is very helpful when the order of words change when you translate from one language to another.
It is also very useful because it allows us to work with translators from around the world, without the need for them to have any technical expertise. It also allows us to have the translation proof read by a second translator.
We use this translation environment to translate html, xml, InDesign, Word, etc.
I think you should try Google Translate.
http://translate.google.com/translate_tools
Very easy and very effective.
HTH