I saw this article at the link below and I wonder if you could help me with some questions I have.
http://msdn.microsoft.com/en-us/library/ee330223(v=office.12).aspx
In our current project, which we didn't develop but have to maintain, we are facing some issues. It looks like when the other firm originally developed the content types, they were done properly using XML definitions, and the list templates and list instances were also created in an organized way.
However, at some point after the content types and lists were already running in production, some changes had to be made (adding new fields to existing content types, changing translations of the display name or group name, changing properties like Required, ShowInNewForm, ShowInEditForm, etc.).
Across the internet I have found that many people have problems with unghosted content types, meaning the content type is detached from its XML definition. As far as I know, this happens when somebody modifies the child content type or list through the UI.
I am trying to collect a list of best practices for managing content types after they are deployed:
1. How to add a new field to an existing content type?
For this we have used UpgradeActions with AddFieldRef.
2. How to remove an existing field from a content type?
We haven't needed this yet, but I have seen that there is also a RemoveFieldRef element which can be used inside UpgradeActions.
3. How to reorder fields in a content type?
We do this in code, in a custom upgrade action.
4. How to change a translation of an existing field?
We do this in code, in a custom upgrade action.
5. How to change properties like ShowInDisplayForm, ShowInNewForm, Hidden, Required, etc.?
We do this in code, in a custom upgrade action (a rough sketch of such an action follows below the list).
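For points 3 to 5, our custom upgrade actions look roughly like this (a simplified sketch; the feature receiver is assumed to be web-scoped, and the content type, field and action names are made-up placeholders):

```csharp
using System.Collections.Generic;
using Microsoft.SharePoint;

public class ContentTypeUpgradeReceiver : SPFeatureReceiver
{
    // Runs once for every <CustomUpgradeAction> declared in the feature's UpgradeActions.
    public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
        string upgradeActionName, IDictionary<string, string> parameters)
    {
        if (upgradeActionName != "UpdateMyContentType")      // placeholder action name
            return;

        // Web-scoped feature assumed; a site-scoped feature would use
        // ((SPSite)properties.Feature.Parent).RootWeb instead.
        SPWeb web = (SPWeb)properties.Feature.Parent;
        SPContentType contentType = web.ContentTypes["My Content Type"];   // placeholder name

        // (4) change the display name shown for a field on this content type
        SPFieldLink fieldLink = contentType.FieldLinks["MyFieldInternalName"];   // placeholder field
        fieldLink.DisplayName = "New display name";

        // (5) change field link properties (Hidden, ReadOnly, ... work the same way)
        fieldLink.Required = true;

        // (3) reorder the fields by internal name
        contentType.FieldLinks.Reorder(new[] { "Title", "MyFieldInternalName" });

        // Push the changes down to all child content types and lists; without "true"
        // the lists keep their old copy, which is exactly the symptom we were seeing.
        contentType.Update(true);
    }
}
```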
I wonder if my list above, especially points 3, 4 and 5, can be called best practices, or if I am missing something or doing something wrong. Why do I ask? A few weeks ago we had a lot of problems: changes made via code were not being pushed down (we were not seeing them in the lists). After reading for many hours, I saw that this can happen when the list content type's link to its parent content type definition is broken.
I found that this link can be re-established using SQL, but of course that is not supported.
http://www.olavaukan.com/2010/10/content-types-can-be-unghosted-too/
http://soerennielsen.wordpress.com/2007/09/08/convert-%E2%80%9Dvirtual-content-types%E2%80%9D-to-physical/
Maybe somebody can guide me in the right direction?
I already have a custom-built CMS in which we define promo cards that are consumed and displayed by another site. Until now it simply shows all the promo cards declared in the custom CMS. The new requirement is to introduce rulesets that can be assigned to these cards in the CMS, so the consuming site validates them against the respective object properties in the consuming app and only shows a card if the ruleset passes validation.
I have gone through a couple of libraries out there that provide an option for defining dynamic rulesets against dynamic objects.
This one looks promising
https://github.com/microsoft/RulesEngine
It provides an option to define rulesets dynamically, but I am not sure whether the path I am going down is the right one and whether it will be future proof. I would also like to know the best practices for implementing this kind of dynamic access-check behaviour.
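To make the idea concrete, this is roughly how I picture defining and evaluating such a ruleset with that library (the card/user types and rule expressions are placeholders, and the exact class names may differ between versions; older releases call the Workflow class WorkflowRules):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using RulesEngine.Models;

// Placeholder types the rules are evaluated against
public class PromoCard   { public string Region { get; set; } public bool IsActive  { get; set; } }
public class UserContext { public string Region { get; set; } public bool IsPremium { get; set; } }

public static class PromoCardRules
{
    public static async Task<bool> ShouldShowAsync(PromoCard card, UserContext user)
    {
        // The ruleset is plain data; in our case it would be defined in the CMS and stored with the card.
        var workflow = new Workflow
        {
            WorkflowName = "PromoCardVisibility",
            Rules = new List<Rule>
            {
                new Rule
                {
                    RuleName = "ActiveAndSameRegion",
                    Expression = "card.IsActive == true && card.Region == user.Region"
                },
                new Rule
                {
                    RuleName = "PremiumUsersOnly",
                    Expression = "user.IsPremium == true"
                }
            }
        };

        var engine = new RulesEngine.RulesEngine(new[] { workflow });
        var results = await engine.ExecuteAllRulesAsync("PromoCardVisibility",
            new RuleParameter("card", card), new RuleParameter("user", user));

        // Show the card only if every rule in the ruleset passes
        return results.All(r => r.IsSuccess);
    }
}
```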
Any assistance would be greatly appreciated. Thanks, and I look forward to your suggestions and advice on the above.
I do not know if this helps, but the NRules engine might help you achieve a dynamic, property-based rule engine:
https://github.com/NRules/NRules/wiki/Getting-Started
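To give a flavour of what that looks like, here is a minimal sketch based on the Getting Started page; the PromoCard/Viewer types and the rule itself are invented for illustration:

```csharp
using NRules;
using NRules.Fluent;
using NRules.Fluent.Dsl;

// Invented fact types for the example
public class PromoCard { public int MinimumAge { get; set; } public bool Visible { get; set; } }
public class Viewer    { public int Age { get; set; } }

// A rule matches facts inserted into the session and runs its action when the conditions hold
public class ShowCardToAdultsRule : Rule
{
    public override void Define()
    {
        PromoCard card = null;
        Viewer viewer = null;

        When()
            .Match(() => card)
            .Match(() => viewer, v => v.Age >= card.MinimumAge);

        Then()
            .Do(ctx => MarkVisible(card));
    }

    private static void MarkVisible(PromoCard card) { card.Visible = true; }
}

public static class RuleRunner
{
    public static void Run()
    {
        // Load all rules from the assembly and compile them into a session factory
        var repository = new RuleRepository();
        repository.Load(x => x.From(typeof(ShowCardToAdultsRule).Assembly));
        ISessionFactory factory = repository.Compile();

        // Insert facts and fire the matching rules
        ISession session = factory.CreateSession();
        var card = new PromoCard { MinimumAge = 18 };
        session.Insert(card);
        session.Insert(new Viewer { Age = 30 });
        session.Fire();   // the rule's action sets card.Visible to true
    }
}
```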
For context, using C# inside the Unity3D Editor.
I have more and more often started using enums to loosely couple things to settings.
For example, I am setting up an item and I want to give it a visual from a pool of defined visuals. A visual is basically a class that contains a sprite, a color, and a model, attached to an integer unique ID. From these unique IDs I generate an enum, and it takes some effort to verify that each ID is actually unique and to catch the edge cases around that.
The benefit of doing this is that the enum is all that has to be stored on the item to link it to the visual. At runtime a dictionary is created to look up the enum and then request the stored visual to be loaded/used. This loosely couples the visuals to the item, so loading the item list does not automatically load all of the visual assets associated with the items. That automatic loading is Unity's default behavior, and it is really annoying: it slows down the game and consumes a massive amount of RAM.
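Concretely, the pattern looks roughly like this (all names are made up, and the lazy Resources.Load is just a stand-in for whatever loading mechanism is actually used):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Generated from the unique IDs; this value is all an item stores.
public enum VisualId
{
    None = 0,
    Sword = 1,
    Shield = 2,
}

public class ItemVisual
{
    public string SpritePath;   // referenced by path, so nothing is loaded up front
    public Color Tint;

    private Sprite _sprite;
    public Sprite Sprite
    {
        get
        {
            if (_sprite == null)
                _sprite = Resources.Load<Sprite>(SpritePath);   // loaded only when requested
            return _sprite;
        }
    }
}

public static class VisualLookup
{
    // Built once at runtime; maps the enum stored on an item to its visual definition.
    private static readonly Dictionary<VisualId, ItemVisual> Visuals =
        new Dictionary<VisualId, ItemVisual>
        {
            { VisualId.Sword,  new ItemVisual { SpritePath = "Visuals/Sword",  Tint = Color.white } },
            { VisualId.Shield, new ItemVisual { SpritePath = "Visuals/Shield", Tint = Color.gray  } },
        };

    public static ItemVisual Get(VisualId id)
    {
        ItemVisual visual;
        // Safety catch: an item may still carry a value that was removed from the enum.
        return Visuals.TryGetValue(id, out visual) ? visual : null;
    }
}
```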
As a result we have a lot of those enums for various purposes and a lot of lookup stuff happening. And currently we are having no big problems with it.
However, the enums and the editing/generation of those enums are error prone, in the sense that when values are removed, the items (and any other interested parties) are none the wiser. This then has to be either tested before a build, or caught by a safety check/error at runtime.
My question is: is this a blatant abuse of enums? And if so, what would be a better way of approaching this problem of loose coupling?
If it is not, what would be a better way to set up and manage these enums safely, so that alarm bells go off if anything using the enum now has an invalid value, or if a value's meaning changes? I imagine that is hardly possible and would require code all over the place to "self check" on recompile.
Or does this all boil down to team discipline: managing the values well and knowing what the enums mean and represent? In that case it would never be designer friendly unless I wrote a custom editor for each and every one of these.
Thanks for any insights you might be able to provide.
If I understand you correctly, you're trying to associate each item with one of multiple static visuals? If this is the case you can simply write each visual as a static readonly object inside the visuals class. In your "item" objects you can then make a field called e.g. "visual" and set this to reference the right visual.
I don't know what makes the visuals load, but if the constructor does, then I believe they will load when the visual class is first used at runtime.
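Something along these lines (just a sketch; the members of Visual are invented to mirror your description):

```csharp
using UnityEngine;

public class Visual
{
    public readonly string SpritePath;
    public readonly Color Tint;

    private Visual(string spritePath, Color tint)
    {
        SpritePath = spritePath;
        Tint = tint;
    }

    // One static readonly instance per defined visual; these are created
    // the first time the Visual class is used at runtime.
    public static readonly Visual Sword  = new Visual("Visuals/Sword",  Color.white);
    public static readonly Visual Shield = new Visual("Visuals/Shield", Color.gray);
}

public class Item
{
    public string Name;
    public Visual Visual;   // plain reference, e.g. item.Visual = Visual.Sword
}
```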
After playing around with ASP.NET MVC for some time I have decided to actually use it in a project. One of the issues that came up is that the frontend site might have different validation rules for a given model than the admin panel.
I am aware of the MetadataType attribute, but since we have more than one context this does not work for us out of the box.
In order to solve this I implemented a custom ModelMetadataProvider that redirects the default ModelMetadataProvider to a different type based on the request's execution context. This works pretty well for displaying the needed UI.
The part of this solution I do not like is that I ended up reading the call stack from my custom model metadata provider to determine whether a given call is for model binding. When I did not do that, I would get "Object does not match target type" during the call to TryUpdateModel from the controller, since the model binder was trying to use properties from type A to set values on an instance of type B.
Is reading the call stack such a bad idea for production?
Is there a way to replicate the MetadataTypeAttribute behavior selectively without using attributes?
Thanks in advance,
John
This is one of those instances where you wish the ASP.NET MVC Team hadn't sealed a class - I'm sure they had their reasons. I was going to suggest simply creating your own attribute, derived from MetadataTypeAttribute.
One way to go about this is to take the source of the attribute and write your own:
http://dotnetinside.com/framework/v4.0.30319/framework/v4.0.30319/System.ComponentModel.DataAnnotations/MetadataTypeAttribute
Although, of course, this makes your code less maintainable.
To the best of my knowledge, you are already making the right decision with a ModelMetadataProvider as your solution. I'm a little nervous for you regarding analysing the call stack, though: change a location or move something to an area (you get my drift) and it would be very easy to break the code with a build-time decision that isn't found until runtime or beyond QA.
You haven't said how the context is determined, but I would personally tackle that by adding an enum property to the class itself (finite possibilities and design-time breakage) listing the possible contexts. During spin-up of the class, populate it, ready for the provider's execution, which then passes through the correct metadata type based on the value of the enum.
Many ways to skin this cat, but something that is going to break on build will serve you best, IMHO.
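One way to read that suggestion into a provider (a sketch only; the context enum, the registration dictionary and how the current context is resolved from the request are all assumptions, not a known-good implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

// Finite, design-time list of possible contexts
public enum ValidationContext
{
    Frontend,
    Admin
}

public class ContextAwareMetadataProvider : DataAnnotationsModelMetadataProvider
{
    // (model type, context) -> metadata "buddy" type; filled at application start,
    // e.g. Register(typeof(Product), ValidationContext.Admin, typeof(AdminProductMetadata))
    private static readonly Dictionary<Tuple<Type, ValidationContext>, Type> BuddyTypes =
        new Dictionary<Tuple<Type, ValidationContext>, Type>();

    public static void Register(Type modelType, ValidationContext context, Type buddyType)
    {
        BuddyTypes[Tuple.Create(modelType, context)] = buddyType;
    }

    protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes,
        Type containerType, Func<object> modelAccessor, Type modelType, string propertyName)
    {
        Type buddyType;
        if (containerType != null && propertyName != null &&
            BuddyTypes.TryGetValue(Tuple.Create(containerType, ResolveCurrentContext()), out buddyType))
        {
            var buddyProperty = buddyType.GetProperty(propertyName);
            if (buddyProperty != null)
            {
                // Merge in the attributes declared on the buddy type's matching property
                attributes = attributes.Concat(Attribute.GetCustomAttributes(buddyProperty, true));
            }
        }

        return base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);
    }

    private static ValidationContext ResolveCurrentContext()
    {
        // Placeholder: decide the context from the request (area, route, whatever fits);
        // the point is that it breaks at design time rather than relying on the call stack.
        var httpContext = HttpContext.Current;
        bool isAdmin = httpContext != null &&
                       httpContext.Request.Path.StartsWith("/admin", StringComparison.OrdinalIgnoreCase);
        return isAdmin ? ValidationContext.Admin : ValidationContext.Frontend;
    }
}

// Registered once at startup: ModelMetadataProviders.Current = new ContextAwareMetadataProvider();
```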
Unless you are using MVC 6 you may find ModelMetadata Fluent Configuration useful.
Some nice examples of how to use it can be found here and here.
What is really important is that it is just code which is completely under your control. Thus, once you have different contexts, you may decide to define different configurations, or you may play a bit harder and make (a set of) different registrations for different contexts.
What really helps is "decorating" (the term is used on purpose!) properties of a base class; at least nothing seems to stop you from doing it.
EDIT: Model Metadata shouldn't be confused with WCF RIA Services Contrib.
I'm working on an ASP.NET web application that uses a lot of JavaScript on the client side to allow the user to do things like drag-drop reordering of lists, looking up items to add to the list (like the suggestions in the Google search bar), deleting items from the list, etc.
I have a JavaScript "class" that I use to store each of the list items on the client side, as well as information about what action the user has performed on the item (add, edit, delete, move). The only time the page is posted to the server is when the user is done; right before the page is submitted, I serialize all the information about the changes into JSON and store it in hidden fields on the page.
What I'm looking for is some general advice about how to build out my classes in C#. I think it might be nice to have a class in C# that matches the JavaScript one, so I can just deserialize the JSON to instances of that class. It seems a bit strange, though, to have classes on the server side that directly duplicate the JavaScript classes and only exist to support the JavaScript UI implementation.
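For example, the C# side would end up being little more than this (property names invented to match the kind of data the JavaScript class holds):

```csharp
using System.Collections.Generic;
using System.Web.Script.Serialization;

// Mirrors the JavaScript list item "class" purely so the posted JSON can be deserialized
public class ListItemChange
{
    public int Id { get; set; }
    public string Text { get; set; }
    public int Position { get; set; }
    public string Action { get; set; }   // "add", "edit", "delete" or "move"
}

public static class ListChangeParser
{
    // Called with the JSON taken from the hidden field on postback
    public static List<ListItemChange> Parse(string json)
    {
        var serializer = new JavaScriptSerializer();
        return serializer.Deserialize<List<ListItemChange>>(json);
    }
}
```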
This is kind of an abstract question. I'm just looking for some guidance from others who have done similar things in terms of maintaining matching client- and server-side object models.
Makes perfect sense. If I were confronting this problem, I would consider using a single definitive description of the data type or class, and then generating code from that description.
The description might be a JavaScript source file; you could build a parser that generates the appropriate C# code from that JS. Or it could be a C# source file, and you do the converse.
You might find more utility in describing it in RelaxNG, and then building (or finding) a generator for both C# and Javascript. In this case the RelaxNG schema would be checked into source code control, and the generated artifacts would not.
EDIT: Also there is a nascent spec called WADL, which I think would help in this regard as well. I haven't evaluated WADL. Peripherally, I am aware that it hasn't taken the world by storm, but I don't know why that is the case. There's a question on SO regarding that.
EDIT2: Given the lack of tools (WADL is apparently stillborn), if I were you I might try this tactical approach:
Use the [DataContract] attributes on your C# types and treat those as definitive.
Build a tool that slurps in your C# type from a compiled assembly and instantiates it by running the JSON serializer over a sample JSON document, which serves as a sort of de facto "object model definition". The tool should somehow verify that the instantiated type round-trips into equivalent JSON, maybe with a checksum or CRC on the result.
Run that tool as part of your build process.
To make this happen, you'd have to check that "sample JSON document" into source control and make sure it is the form you use in the various JS code in your app. Since JavaScript is dynamic, you might also need a type verifier or similar that runs as part of jslint or some other build-time verification step and checks your JavaScript source to see that it is using your "standard" object model definitions.
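A rough sketch of that round-trip check (the type and sample file are placeholders, and a real tool would compare parsed structures rather than normalised strings):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

public static class ContractRoundTripCheck
{
    // Deserialize the checked-in sample JSON into the [DataContract] type,
    // serialize it back out, and fail the build if the two forms have drifted apart.
    public static bool RoundTrips<T>(string sampleJsonPath)
    {
        string original = File.ReadAllText(sampleJsonPath);
        var serializer = new DataContractJsonSerializer(typeof(T));

        T instance;
        using (var input = new MemoryStream(Encoding.UTF8.GetBytes(original)))
        {
            instance = (T)serializer.ReadObject(input);
        }

        string rewritten;
        using (var output = new MemoryStream())
        {
            serializer.WriteObject(output, instance);
            rewritten = Encoding.UTF8.GetString(output.ToArray());
        }

        // A checksum or CRC could be used here instead of a direct comparison
        return string.Equals(Normalize(original), Normalize(rewritten), StringComparison.Ordinal);
    }

    private static string Normalize(string json)
    {
        // Naive whitespace stripping, just to make the comparison less brittle
        return json.Replace(" ", "").Replace("\r", "").Replace("\n", "").Replace("\t", "");
    }
}
```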
I'm still new to C#, and I'm working on a project where we also use WPF and the WPF DataGrid Toolkit (see CodePlex), which hasn't yet been released into the framework. Due to the nature of the project, it is 100% certain that we will be altering some controls of the assembly later.
My co-workers have decided to redefine every control from the DataGrid within a namespace of our project, inheriting from the specific control in the assembly's namespace.
So instead of using:
clr-namespace:Microsoft.Windows.Controls.Primitives;assembly=WPFToolkit
clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit
We'll be using our own xxx.Controls and xxx.Controls.Primitives Namespaces.
This way, it would be pretty easy to alter the inherited controls.
Somehow, I have a bad feeling about this solution, but I'm still inexperienced and cannot tell whether such an approach is legitimate, or whether there is a better solution to our requirement (altering the controls later without changing too much code in multiple files).
It would be nice if you could share your opinion on this approach.
What kind of alterations are you talking about? It seems like a waste of time to pre-emptively derive from each of the classes before you know what you'll actually need to change.
When you do need to change something, it shouldn't be too hard to create the derived class at that point, and fix up any references - which may only be for some instances rather than all of them. Yes, it may mean check-ins involving quite a few files - but if you're using a sensible source control system it will be an atomic change, and it will be obvious why it's changing.
With your current approach, there's no immediate "these are the controls we've had to change" - if you do it in a "just-in-time" manner, you'll be able to tell just by looking at what derived controls you've actually had to create.
I agree with you. The alterations, or rather the changes, can be of any kind: behavior and so on. And the changes should be made just in time.
Unfortunately, that is not my decision. Some stubborn people are at work :)
But what interests me is whether a completely different approach to the whole idea exists.
Say I've got a DataGrid, the project evolves, and now I have to make some drastic changes to the validation behavior of DataGrid rows.
This could also apply to a lot of controls.
The problem with our project is that we have a rather complex data access layer, which not only provides data but actually controls it. This means data isn't read, modified, deleted or appended without going through logic provided by the data access layer.
For example, the DataGrid doesn't directly delete rows; instead, we override the delete behavior and ask the data access layer to delete them (a sketch of what that override looks like is below). With binding, this works pretty well for now.
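The delete override looks roughly like this (a sketch: the data access layer calls are stand-ins, and I'm assuming the Toolkit DataGrid exposes the same protected OnExecutedDelete hook as the later framework version):

```csharp
using System.Windows.Input;
using Microsoft.Windows.Controls;   // WPF Toolkit DataGrid namespace

// Stand-in for the real data access layer described above
public static class DataAccessLayer
{
    public static bool CanDelete(object item) { return true; }
    public static void Delete(object item) { /* actual delete logic lives here */ }
}

public class DalBoundDataGrid : DataGrid
{
    protected override void OnExecutedDelete(ExecutedRoutedEventArgs e)
    {
        // Ask the data access layer first; it owns the delete logic.
        foreach (object item in SelectedItems)
        {
            if (!DataAccessLayer.CanDelete(item))
                return;   // veto the whole delete
        }

        foreach (object item in SelectedItems)
        {
            DataAccessLayer.Delete(item);
        }

        // Only now let the grid remove the rows from its items collection as usual.
        base.OnExecutedDelete(e);
    }
}
```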
This kind of scenario will apply to a lot of other things in the future, regarding CRUD operations, validation and so on.