C# code that generates JavaScript on the fly

Is it OK to generate code like this on the fly? Or is this a major code smell? How can this be made better?
I'm new to web development, but I'm stumbling across this all the time and I don't really understand why.
// Create a JS function that applies foo to each group of controls
foreach (KeyValuePair<string, Dictionary<Control, string>> pair in maps)
{
    js.Append(pair.Key);
    js.Append("=function(grid){Prm.remove_endRequest(");
    js.Append(pair.Key);
    js.Append(");if(grid && grid._element)"); // ... blah blah blah
}
page.ClientScript.RegisterClientScriptBlock(page.GetType(), key + "Ajax",
    js.ToString(), true);

I don't see it as a smell until you start doing it all over the place.
Consider these changes, however, that will help out in the future:
1. Write data-driven JS functions, and only dynamically generate the data that they need. This way, all your JS can be tucked away in a fast static file, and your server only sends the data. This is a better design change than (2) and (3), and it really isn't that hard. Just think of the data that your current code generator needs, serve that data instead of the JS code, then wrap your JS code in "factory functions" that accept that data as input (see the sketch after this list).
2. Use templates for the JS code just as you use templates for HTML. This way you don't have to munge around in C# flow-control code when you really just want to change some variable names. I would suggest naming template files after the view they assist: if you have Home.aspx, then perhaps you will have JS code templates Home_DoCrazyGridThing.js and Home_DoOtherCrazyThing.js. You can write a simple template engine or use one of the many existing ones.
3. Create a thin layer over the code generation so that it's obvious to the maintainer what you're doing. That is, have a JSCodeGenerator class with varying levels of intelligence (it understands the language, or just lets you dump strings into it, or interfaces with the template engine from (2)).
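Here is a minimal sketch of the data-driven approach from point 1. It assumes a hypothetical client-side factory function named setupGrids defined in a static .js file, and that the static JS only needs the control client IDs; adapt the data shape to whatever your generated code actually uses (requires System.Linq and System.Web.Extensions):
// Collect only the data the static JS needs (here: one list of client IDs per key).
var data = new Dictionary<string, List<string>>();
foreach (KeyValuePair<string, Dictionary<Control, string>> pair in maps)
{
    data[pair.Key] = pair.Value.Keys.Select(c => c.ClientID).ToList();
}

// Serialize the data and hand it to the static factory function.
string json = new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(data);
page.ClientScript.RegisterStartupScript(page.GetType(), key + "AjaxData",
    "setupGrids(" + json + ");", true);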


How to format/style ///<summary> in Web API 2

Maybe this isn't even possible, but it seems silly that I can't figure it out (nor can I find anything conclusive after searching).
With an MVC/C# Web API 2 project, your controllers can be documented using something like:
///<summary>
///This is something really cool that you should use. I want <b>this bold</b>.
///</summary>
[HttpPost]
public MyResponse MyMethod(SomeInput input)
{
....
}
When the API runs, the project automatically builds the help site, and I can see the above endpoint/method and its description (the <summary> text), but I have yet to figure out how to do any sort of styling on the summary. It appears that the HTML tags get stripped from the help page's output. Notice in my example above, I have "this bold". I'm not so much concerned about bold, but more interested in being able to use unordered lists (<ul>) and other basic HTML tags to do some real basic formatting.
Is this even possible?
Is there a trick to it?
Is there some other markup/formatting I should be using?
Note - The actual endpoint that I'm trying to document at the moment happens to be a MIME multipart form, and the framework won't document those out of the box. To get around this, I've created some helper methods in HelpPageConfigurationExtensions (to determine if the current endpoint view is one that requires custom documentation), changes in HelpPageApiModel.cshtml to determine if it should show the stock documentation or the custom docs, a helper library that contains the custom doc information, and a series of helper functions that use some reflection to rapidly build HTML tables for the rest of the help page's documentation (e.g. the request and response objects). I'm mentioning this because maybe I just need to further extend my custom doc library to include (hard-code) the summary value, and then in the view I can just @Html.Raw it, as opposed to trying to get the actual method's <summary> to output with formatting.
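If that route is taken, the view-side fallback might look roughly like the sketch below. This is only an illustration of the idea, not the HelpPage sample's API: CustomDocs and GetHtmlOrNull are hypothetical names standing in for the custom doc library, while @Html.Raw and Model.ApiDescription are standard pieces of the help page view.
@* In HelpPageApiModel.cshtml: prefer hand-authored HTML when the custom doc library has an entry *@
@{
    // Hypothetical helper; returns null when no custom documentation exists for this endpoint.
    string customHtml = CustomDocs.GetHtmlOrNull(Model.ApiDescription.ID);
}
@if (customHtml != null)
{
    @Html.Raw(customHtml)
}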
Thoughts?
Thanks!

Parsing C# code for contextually aware semantic highlighting

I'm working on a semantic highlighting plugin for VS. Here you can see a web Example.
The goal:
Acquiring all variables and creating different Classifications for every one of them.
The problem:
Getting the variables from the code without writing a C# lexer.
My current approach uses an ITagger. I use an ITagAggregator to get the tags of all the spans that get passed to the ITagger. Then I filter those and keep only the spans with the "identifier" classification, which includes variables, method names, class names, usings and properties.
public class Classifier : ITagger<ClassificationTag> {
    // ITagger<T> also requires this event (raised when tags change); the wiring is omitted here.
    public event EventHandler<SnapshotSpanEventArgs> TagsChanged;

    public IEnumerable<ITagSpan<ClassificationTag>> GetTags(NormalizedSnapshotSpanCollection spans) {
        ITextSnapshot snapshot = spans[0].Snapshot;
        var tags = _aggregator.GetTags(spans).Where((span) => span.Tag.ClassificationType.Classification.Equals("identifier")).ToArray();
        foreach(var classifiedSpan in tags) {
            foreach(SnapshotSpan span in classifiedSpan.Span.GetSpans(snapshot)) {
                // generate classification based on variable name
                yield return new TagSpan<ClassificationTag>(span, new ClassificationTag(_classification));
            }
        }
    }
}
It would be a lot easier to use the built-in C# lexer to get a list of all variables bundled with their metadata. Is this data available for plugin development? If not, is there an alternative way I could acquire it?
The problem: Getting the variables from the code without writing a C# lexer.
Roslyn can do this: https://roslyn.codeplex.com/
There's even a Syntax Visualizer sample that might interest you. I also found an example using Roslyn to create a Syntax Highlighter.
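To give a flavour of what Roslyn provides without a hand-written lexer, here is a small, self-contained sketch (not the plugin code) that walks a syntax tree and prints every declared variable with its exact span; it assumes the Microsoft.CodeAnalysis.CSharp NuGet package is referenced:
using System;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class IdentifierWalker : CSharpSyntaxWalker
{
    public override void VisitVariableDeclarator(VariableDeclaratorSyntax node)
    {
        // node.Identifier is the declared variable's token, including its position in the source.
        Console.WriteLine("{0} @ {1}", node.Identifier.Text, node.Identifier.Span);
        base.VisitVariableDeclarator(node);
    }
}

class Demo
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { int count = 0; string name = \"x\"; } }");
        new IdentifierWalker().Visit(tree.GetRoot());
    }
}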
Visual Studio exposes that information as a code model.
Here is an example of how you can access a class, find an attribute on the class, and parse the attribute's arguments:
Accessing attribute info from DTE
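Roughly, the DTE route looks like the sketch below. This is a simplified illustration rather than the linked example's code: it only inspects classes declared at file scope (a recursive walk through namespaces is omitted), and error handling is left out.
using EnvDTE;

static void DumpClassAttributes(DTE dte)
{
    ProjectItem item = dte.ActiveDocument.ProjectItem;
    foreach (CodeElement element in item.FileCodeModel.CodeElements)
    {
        if (element.Kind != vsCMElement.vsCMElementClass) continue;

        var codeClass = (CodeClass)element;
        foreach (CodeAttribute attribute in codeClass.Attributes)
        {
            // attribute.Value holds the raw argument text of the attribute usage.
            System.Console.WriteLine(codeClass.Name + ": " + attribute.Name + "(" + attribute.Value + ")");
        }
    }
}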
Here is more information about code models:
http://msdn.microsoft.com/en-us/library/ms228763.aspx
Here's also an automation object model chart that I've used quite a few times: http://msdn.microsoft.com/en-us/library/za2b25t3.aspx
As said, Roslyn is indeed also a possible option. Here is an example for VS2015 using Roslyn: https://github.com/tomasr/roslyn-colorizer/blob/master/RoslynColorizer/RoslynColorizer.cs
For building language tools, it may be better to use a parser generator for C#. The GOLD parsing system is one such toolkit; it can handle LALR grammars, and it has a .NET component-based engine that you can use in your project and integrate with any IDE. You can also find grammars for various programming languages, including C#.

Generate a C# object based on an XML file?

This may be way out in left field, crazy, but I just need to ask before I go on implementing this massive set of classes.
Basically, I'm writing a binary message parser that decodes a certain military message format into an object. The problem is that there are literally hundreds of different message types and they share almost nothing in common with each other. So the way I'm planning to implement this is to create hundreds of different objects.
However, even though the message attributes share nothing in common, the method for decoding them is fairly straightforward and follows a pattern. So I'm planning to write a code generator to generate all the objects and the decode logic for each message type.
What would be really sweet is if there was some way to dynamically create an object based on some schema. It doesn't necessarily have to be XML, but XML is pretty easy to work with.
Is this possible in C#?
I would like the interface to look something like this:
var decodedMessage = MessageDecoder.Decode(byteArray);
Where the MessageDecoder figures out what type of message it is and then returns the appropriate object. It will probably return an interface which implements a MessageType Property or something like that.
Basically what I'm wondering is if there is a way to have one object called Message, which implements a MessageType property. Then, depending on the MessageType, the Message object transforms into whatever type of message it is, so I don't have to spend the time creating all of these message types.
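For reference, the shape being described might look something like the following sketch (names are illustrative, and the body is deliberately left unimplemented):
// A common interface plus a single decoder entry point.
public enum MessageType { /* one member per message format */ }

public interface IMessage
{
    MessageType MessageType { get; }
}

public static class MessageDecoder
{
    public static IMessage Decode(byte[] data)
    {
        // Read the header, determine the concrete message type, then decode the
        // payload into the matching (generated) class.
        throw new System.NotImplementedException();
    }
}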
ExpandoObject, which lets you dynamically add fields to an object.
A good starting point is here.
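For instance, a minimal ExpandoObject sketch (the field names here are purely illustrative):
using System;
using System.Collections.Generic;
using System.Dynamic;

class Demo
{
    static void Main()
    {
        // Members are attached at runtime, so one "message" object can carry
        // whatever fields the decoded type happens to need.
        dynamic message = new ExpandoObject();
        message.MessageType = "TrackUpdate";
        message.Latitude = 51.5;
        message.Longitude = -0.12;

        // The same object is also an IDictionary<string, object>, which is handy
        // when the field names come from a schema at runtime.
        var bag = (IDictionary<string, object>)message;
        bag["Speed"] = 420;

        Console.WriteLine(message.MessageType + ": " + bag["Speed"]);
    }
}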
Is xsd.exe what you are looking for? It can take an XML file or a schema and generate the C# classes. One problem that you might encounter, though, is that some of the military message formats are VERY obtuse. You could end up with some very large code files.
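For reference, the invocation is along these lines (the schema file name is illustrative); it writes the generated classes to a .cs file next to the schema:
xsd.exe MessageSchema.xsd /classes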
Look at T4 templates. They let you write code to generate code, they are integrated into the IDE, and they are quite easy really.
EDIT: There is no way to do what you are after with var, because var requires the right-hand side of the assignment to be statically typed (at compile time). I suppose that you could dynamically generate that statement, then compile and run it, but that's a very painful approach.
If you have XSDs for all of the message types, then you can use xsd.exe as @jle suggests. If not, then I am curious about the following:
// Let's assume this works
var decodedMessage = MessageDecoder.Decode(byteArray);
// Now what? I don't know what properties there are on decodedMessage, so I can't do anything with it.

Parsing an Auto-Generated .NET Date Object with JavaScript/jQuery

There are some posts on this, but not an answer to this specific question.
The server is returning this: "/Date(1304146800000)/"
I would like to not change the server-side code at all and instead parse the date that is included in the .Net generated JSON object. This doesn't seem that hard because it looks like it is almost there. Yet there doesn't seem to be a quick fix, at least in these forums.
From previous posts it sounds like this can be done using REGEX but REGEX and I are old enemies that coldly stare at each other across the bar.
Is this the only way? If so, can someone point me to a REGEX reference that is appropriate to this task?
Regards,
Guido
The link from Robert is good, but we should strive to answer the question here, not just post links. The value inside /Date(...)/ is the number of milliseconds since the Unix epoch, so it can be passed straight to the Date constructor.
Here's a quick function that does what you need: http://jsfiddle.net/Aaa6r/
function deserializeDotNetDate(dateStr) {
    var matches = /\/Date\((\d*)\)\//.exec(dateStr);
    if (!matches) {
        return null;
    }
    return new Date(parseInt(matches[1], 10));
}
deserializeDotNetDate("/Date(1304146800000)/");
Since you're using jQuery I've extended its $.parseJSON() functionality so it's able to do this conversion for you automatically and transparently.
It doesn't convert only .NET dates but ISO dates as well. ISO dates are supported by the native JSON converters in all major browsers, but they only work one way, because the JSON spec doesn't support a date data type.
Read all the details (don't want to copy blog post content here because it would be too much) in my blog post and get the code as well. The idea is still the same: change jQuery's default $.parseJSON() behaviour so it can detect .Net and ISO dates and converts them automatically when parsing JSON data. This way you don't have to traverse your parsed objects and convert dates manually.
How is it used?
$.parseJSON(yourJSONstring, true);
See the additional parameter? This makes sure that all your existing code works as expected without any change. But if you do provide the additional parameter and set it to true, it will detect dates and convert them accordingly.
Why is this solution better than manual conversion? (suggested by Juan)
Because you lower the risk of the human factor: forgetting to convert some variable in your object tree (objects can be deep and wide).
Because your code is in development, and if you change some server-side part that returns JSON to the client (rename variables, add new ones, remove existing ones, etc.), you have to think of these manual conversions on the client side as well. If you do it automatically, you don't have to think (or do anything) about it.
Those are the top two reasons off the top of my head.
When overriding jQuery functionality feels wrong
When you don't want to actually override the existing $.parseJSON() functionality, you can minimally change the code and rename the extension to something like $.parseJSONwithdates(), and then always use your own function when parsing JSON. But you may have a problem when you set your Ajax calls to dataType: "json", which automatically calls the original parser. If you use this setting, you will have to override jQuery's existing functionality.
The good thing is also that you don't change the original jQuery library code file. You put this extension in a separate file and use it at will. Some pages may use it, others may not. But it's wise to use it everywhere; otherwise you have the same human-factor problem of forgetting to include the extension. Just include your extension in some global JavaScript file (or master page/template) you may be using.

C# Factory Pattern

I am building a search application that has indexed several different data sources. When a query is performed against the search engine index, each search result specifies which data source it came from. I have built a factory pattern that I use to display a different template for each type of search result, but I've realized that this pattern will become more difficult to manage as more and more data sources are indexed by the search engine (i.e. a new code template has to be created for each new data source).
I created the following structure for my factory based off of an article by Granville Barnett over at DotNetSlackers.com
(Factory pattern diagram: http://img11.imageshack.us/img11/8382/factoryi.jpg)
In order to make this search application easier to maintain, my thought was to create a set of database tables that can be used to define individual template types that my factory pattern could reference in order to determine which template to construct. I figured that I'd need a lookup table that would be used to specify the type of template to build based on the search result's data source. I'd then need a table (or tables) to specify which fields to display for that template type. I'd also need a table (or additional columns within the template table) that would be used to define how to render each field (i.e. Hyperlink, Label, CssClass, etc.).
Does anyone have any examples of a pattern like this? Please let me know.
Thanks,
-Robert
I would offer that this proposed solution is no less maintainable than simply associating a data source to the code template, as you currently have now. In fact, I would even go so far as to say you're going to lose flexibility by pushing the template schema and rendering information to a database, which will make your application harder to maintain.
For example, let's suppose you have these data sources with attributes (if I'm understanding this correctly):
Document { Author, DateModified }
Picture { Size, Caption, Image }
Song { Artist, Length, AlbumCover }
You then may have one of each of these data sources in your search results. Each element is rendered differently (Picture may be rendered with a preview image anchored to the left, or Song could display the album cover, etc.)
Let's just look at the rendering under your proposed design. You're going to query the database for the renderings and then adjust some HTML you are emitting, say because you want a green background for Documents and a blue one for Pictures. For the sake of argument, let's say you realize that you really need three background colors for Songs, two for Pictures, and one for Documents. Now, you're looking at a database schema change, which is promoted and pushed out, in addition to changing the parameterized template you're applying the rendering values to.
Let's say further you decide that the Document result needs a drop-down control, the Picture needs a few buttons, and Songs need a sound player control. Now, each template per data source changes drastically, so you're right back where you started, except now you have a database layer thrown in.
This is how the design breaks, because you've now lost the flexibility to define different templates per data source. The other thing you lose is having your templates versioned in source control.
I would look at how you can re-use common elements/controls in your emitted views, but keep the mapping between the template and the data source in the factory, and keep the templates as separate files per data source. Look at maintaining the rendering via CSS or similar configuration settings. To make it easier to maintain, consider exporting the mappings as a simple XML file. To deploy a new data source, you simply add a mapping, create the appropriate template and CSS file, and drop them into the expected locations.
Response to comments below:
I meant a simple switch statement should suffice:
switch (resultType)
{
    case ResultType.Song:
        factory = new SongResultFactory();
        template = factory.CreateTemplate();
        break;

    // ... one case per data source.
}
Where you have the logic to output a given template. If you want something more compact than a long switch statement, you can create the mappings in a dictionary, like this:
IDictionary<ResultType, ResultFactory> templateMap = new Dictionary<ResultType, ResultFactory>();
templateMap.Add(ResultType.Song, new SongResultFactory());
// ... and so on for all mappings.
Then, instead of a switch statement, you can do this one-liner:
template = templateMap[resultType].CreateTemplate();
My main argument was that at some point you still have to maintain the mappings - either in the database, a big switch statement, or this IDictionary instance that needs to be initialized.
You can take it further and store the mappings in a simple XML file that's read in:
<TemplateMap>
    <Mapping ResultType="Song" ResultFactoryType="SongResultFactory" />
    <!-- ... -->
</TemplateMap>
And use reflection et al. to populate the IDictionary. You're still maintaining the mappings, but now in an XML file, which might be easier to deploy.
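A hedged sketch of that XML-plus-reflection idea, assuming the element and attribute names from the sample above, that ResultType and ResultFactory are the types used in the earlier snippets, and that the factory types can be resolved by Type.GetType (i.e. namespace-qualified names in the calling assembly):
using System;
using System.Collections.Generic;
using System.Xml.Linq;

static class TemplateMapLoader
{
    public static IDictionary<ResultType, ResultFactory> Load(string path)
    {
        var map = new Dictionary<ResultType, ResultFactory>();
        foreach (var mapping in XDocument.Load(path).Root.Elements("Mapping"))
        {
            var resultType = (ResultType)Enum.Parse(
                typeof(ResultType), (string)mapping.Attribute("ResultType"));

            // Type.GetType searches the calling assembly and mscorlib; adjust the
            // names in the XML (or the lookup) if the factories live elsewhere.
            var factoryType = Type.GetType((string)mapping.Attribute("ResultFactoryType"));
            map[resultType] = (ResultFactory)Activator.CreateInstance(factoryType);
        }
        return map;
    }
}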
