I am designing a LightSwitch 2012 application to manage requests, and I want to be able to update the request state with the same reusable code across all of my screens. For example, a user can change a request state on the approval screen, the fulfillment screen, and so on, each via a button. Currently, I have a method in each screen's .cs file where I need to update the request, using the partial void <ScreenCommand>_Execute() methods. I am trying to change this so I can update the code in one place instead of everywhere, and so I don't have to copy the methods into every new screen that gets a button.
Normally I would put this in Application.cs or somewhere else with global access, but there I don't have access to the same DataWorkspace object, so I pass in this.DataWorkspace from the screen, which gives access to the SaveChanges() method. However, this seems a little smelly. Is there a better way to deal with this, or a better place to put reusable commands that you want to be able to assign to buttons on multiple screens? Currently I have to be really careful about saving dirty data, and I still have to wire everything up manually. I also don't know whether the code runs in the proper context if it lives in Application.cs.
To clarify, yes, I do want this to run client side, so I can trigger emails from the users' Outlook inboxes and the like.
What you're trying to do is simply good programming practice, putting code that's required in more than one place in a location where it can be called from each of those places, yet maintained in just one place. It's just a matter of getting used to the way you have to do things in LightSwitch.
You can add code in a module (or a static class in C#) in the UserCode folder of the Client project. That's part of the reason the folder exists, as a place to put "user code". To do this, switch to File View, then right-click the UserCode folder to add your module/class. Add your method in the newly created module/class. You can create as many of these as you like (if you like to keep code separated), or you can add other methods to the same module/class, it's up to you.
However, I wouldn't pass the data workspace as a parameter to the reusable method that you create. I wouldn't pass the entity object either, just the values that you need to calculate the required state. The actual invoking of the data workspace's SaveChanges method should remain in the screen's code. Think of the screen as a "unit of work".
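For example, the shared helper in the UserCode folder could look something like this (the class name, method name, and state values are just placeholders for whatever your real state logic is):

```csharp
// Hypothetical helper class placed in the Client project's UserCode folder.
// It works only on plain values, so it has no dependency on any screen,
// entity, or data workspace.
public static class RequestStateHelper
{
    public static string CalculateNewState(string currentState, bool isApproved)
    {
        // Placeholder rule: promote a pending request when it is approved.
        if (isApproved && currentState == "Pending")
            return "Approved";

        return currentState;
    }
}
```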
In each button's Execute method (in your various screens), call your method with values from the entity being manipulated in the screen and take the result. Assign the returned value to the entity's State property (if that's what you have), then call the screen's Save method (or use the screen's Close method, passing true for the SaveChanges parameter). There's no need to call the data workspace's SaveChanges method, and you're doing things the "LightSwitch way".
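Putting the two together, a button's Execute method might end up looking roughly like this (the command name "ApproveRequest" and the screen property "Request" are placeholders for your own names):

```csharp
// Hypothetical screen code; only plain values go to the shared helper,
// and saving stays with the screen.
partial void ApproveRequest_Execute()
{
    this.Request.State =
        RequestStateHelper.CalculateNewState(this.Request.State, true);

    // Save through the screen, not the data workspace.
    this.Save();
    // Or close the screen and save in one step:
    // this.Close(true);
}
```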
Another benefit of doing it this way is that your code can now be unit tested, as it's no longer dependent on any entity.
I hope that all makes sense to you.
Is it possible to make a variable or a List of items accessible to the whole project?
My program selects an object in one view and I want to be able to access/change it in another one.
I know this isn't the best workaround and it would be better to use the MVVM pattern for this, but it seems like a big effort to implement that properly just for this simple use case of one or two variables/lists.
Sharing data can be done in multiple ways.
One interesting way could be to cache the data; have a look at this for example: https://learn.microsoft.com/en-us/dotnet/desktop/wpf/advanced/walkthrough-caching-application-data-in-a-wpf-application?view=netframeworkdesktop-4.8
I would recommend against using any global variables, and I would also recommend against static variables, as you might open yourself up to sharing data between users, for example.
In this approach, when you need the data, you check whether it is in the cache; if not, you load it from wherever your source is (db, file, API, whatever) and cache it, and then you simply read it from the cache wherever and whenever you need it.
If you need to update it, make sure you write the update to whatever storage mechanism you have and then reload the cache. This is a good way to keep things in sync when updates are needed without complicating the application, its testing, or its maintenance.
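A rough sketch of that check-then-load pattern using System.Runtime.Caching.MemoryCache (the cache key, item type, and LoadItemsFromSource method are all placeholders for your own data source):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class ItemCache
{
    private const string CacheKey = "SharedItems"; // placeholder key

    public static List<string> GetItems()
    {
        // Return the cached copy if we already have one.
        var cached = MemoryCache.Default.Get(CacheKey) as List<string>;
        if (cached != null)
            return cached;

        // Otherwise load from the real source (db, file, API, ...) and cache it.
        var items = LoadItemsFromSource();
        MemoryCache.Default.Set(CacheKey, items,
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(30) });
        return items;
    }

    public static void Invalidate()
    {
        // After an update is written to the underlying store, drop the cached
        // copy so the next read reloads fresh data.
        MemoryCache.Default.Remove(CacheKey);
    }

    private static List<string> LoadItemsFromSource()
    {
        // Placeholder for whatever your real data source is.
        return new List<string> { "example" };
    }
}
```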
I'm wondering whether this is possible. We want a function in our .NET code to run when a value in a specific table is updated. This could be upon a record insert or update. Is this possible?
If not, is there an alternative process?
You need to ask yourself a couple of questions first.
How much of your business logic, if any, do you want at the db level?
Obviously a db trigger could do this (perform some action when a value is changed, even if it's only a very specific value).
I've seen some systems that are db-trigger heavy. Their 'logic' resides deeply within, and is highly coupled to, the db platform. There are some advantages to that, but most people would probably say the disadvantages are too great (coupling, lack of encapsulation/reusability).
Depending on what you are doing and your leanings you could:
1. Make sure all DAO/BusinessFunction objects call your 'event' object.function to do what you want when a certain value change occurs.
2. Use a trigger to call your 'event' object.function when a certain value change occurs.
3. Have the trigger do everything itself.
I personally would lean towards Option 2 where you have a minimal trigger (which simply fires the event call to your object.function) so you don't deeply couple your db to your business logic.
Option 1 is fine, but may be a bit of a hassle unless you have a very narrow set of BF/DAO's that talk to this db table.field you want to watch.
Option 3 is imho the worst choice as you couple logic to your db and reduce its accessibility to your business logic layer.
Given that, here is some information toward accomplishing this via Option 2:
Start with this example from MSDN: http://msdn.microsoft.com/en-us/library/938d9dz2.aspx. It shows how to have a trigger run and call a CLR object in a project.
Effectively, in your project, you create a trigger and have it call your class.
Notice the line: [SqlTrigger(Name="UserNameAudit", Target="Users", Event="FOR INSERT")]
This defines when the code fires, then within the code, you can check your constraint, then fire the rest of the method (or not), or call another object.method as needed.
The primary difference between a plain db trigger and this approach is that this gives you access to all the objects in your project when they are deployed together.
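A trimmed-down sketch in the spirit of that MSDN sample (the trigger name, target table, watched column ordinal, and what you do per row are all placeholders):

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class Triggers
{
    // Fires after an UPDATE on the (hypothetical) Requests table and simply
    // delegates to your own business-logic object instead of doing the work here.
    [SqlTrigger(Name = "RequestStateWatcher", Target = "Requests", Event = "FOR UPDATE")]
    public static void RequestStateWatcher()
    {
        SqlTriggerContext context = SqlContext.TriggerContext;

        // Only react when the column we care about was actually updated;
        // ordinal 2 is a placeholder for your watched column.
        if (context.TriggerAction == TriggerAction.Update && context.IsUpdatedColumn(2))
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                using (SqlCommand command = new SqlCommand(
                    "SELECT Id, State FROM INSERTED", connection))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Call your 'event' object.function here, e.g. queue a
                        // notification or write to an audit/event table.
                    }
                }
            }
        }
    }
}
```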
I have never tried this but it is possible. You can write a CLR assembly and call that from your table trigger.
You can see an example here.
But you should post your actual problem; you may find a better workaround.
I'm wondering whether it's somehow possible to save Form state in C# after the application closes. I tried using a List: whenever I create an instance of a Form, that instance is added to the List, and it stays there until it's deleted. It works fine, and I can view, edit, and delete the saved forms, until the application is closed. So, considering that Form isn't serializable, is there any chance to save the List somehow and load it later?
The Control and Form classes are not serializable. There's a very good reason for that: many of their property values are heavily dependent on the execution state of the program. Like Handle, very important but always different. UICues depends on whether the user pressed the Alt key. Even simple things like Location and Size depend on video adapter settings and user preferences.
You would not want to serialize these properties. What you want to preserve is the data that was used to initialize the controls. Which of course depends entirely on your program; there is no commonality at all. It is therefore up to you to create a class that stores the state of your UI. You can make it serializable as needed and pick your preferred way to implement serialization; there are many ways to do so. Strictly separating the view from the model in your code is normally very important to have a decent shot at making this work.
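As a rough illustration of that separation, a plain serializable class plus a small load/save helper might look like this (the properties and the XML format are just examples; use whatever serializer and fields fit your program):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Plain data class holding only the values needed to rebuild a form;
// the properties are placeholders for whatever your forms actually need.
public class FormState
{
    public string Title { get; set; }
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

public static class FormStateStore
{
    private static readonly XmlSerializer Serializer =
        new XmlSerializer(typeof(List<FormState>));

    public static void Save(List<FormState> states, string path)
    {
        using (var stream = File.Create(path))
            Serializer.Serialize(stream, states);
    }

    public static List<FormState> Load(string path)
    {
        if (!File.Exists(path))
            return new List<FormState>();

        using (var stream = File.OpenRead(path))
            return (List<FormState>)Serializer.Deserialize(stream);
    }
}
```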
I'm currently developing an application using ASP.NET MVC, and now I need to create an interface (web page) that will allow users to pick and choose, from a set of different objects, the ones they'd like to use as the building blocks for constructing a more complex object.
My question is meant to be generic, but to provide an actual example, let's say the application will allow users to design pieces of furniture, like wardrobes, kitchen cabinets, etc. So, I've created C# classes representing the basic building blocks of furniture design, like basic shapes (pieces of wood that added together form a box, etc.), doors, doorknobs, drawers, and so on. Each of these classes has some common properties (width, height, length) and some specific properties, but all descend from a base class called FurnitureItem, so there are ways for them to be 'connected' together and interchanged. For instance, there are different types of doors that can be used in a wardrobe, like SimpleDoor, SlidingDoor, and so on. The user designing the furniture would have to choose which type of Door object to apply to the current piece. Also, there are other items, like dividing panels, shelves, drawers, etc. The resulting model would of course be a completely customized, modularly designed wardrobe or kitchen cabinet, for example.
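To give a rough idea, the hierarchy looks something like this (simplified; the names and properties are just representative):

```csharp
using System.Collections.Generic;

// Simplified sketch of the class hierarchy described above.
public abstract class FurnitureItem
{
    public double Width { get; set; }
    public double Height { get; set; }
    public double Length { get; set; }

    // Child items connected to this one (panels, shelves, doors, ...).
    public List<FurnitureItem> Children { get; } = new List<FurnitureItem>();
}

public class SimpleDoor : FurnitureItem
{
    public string HingeSide { get; set; }
}

public class SlidingDoor : FurnitureItem
{
    public int TrackCount { get; set; }
}
```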
The problem is that while I can easily instantiate all the objects I need and connect them together in C#, forming a complete furniture item, I need to provide a way for users to do it through a web interface. That means they would probably have a toolbox or toolbar of some sort and select (maybe drag and drop) items onto a design panel in the web interface... but in the browser I can't have my C# class implementation, and if I post the selected item to the server (either a form post or using Ajax), I need to reconstruct the whole collection of objects the user previously chose, so I can fit the newly added item, calculate its dimensions, etc., and then finally return the complete modified set of objects...
I'm trying to think of different ways of caching or persisting these objects while the user is still designing (adding and deleting items), since there may be many round trips to the server, because the proper calculation of dimensions (width, height, etc. of contained objects) is done on the server by methods of my C# classes. It would be nice maybe to store the objects for the current piece of furniture being designed in a session or cache object per user... even then, I need to be able to provide some kind of ID for the object being added and the one it is being added to, in a parent/owner kind of way, so I can properly identify the object instance back on the server where the new instance will be connected.
I know it's somewhat confusing... but I hope this gives an idea of the problem I'm facing. In other words, I need to keep a set of interconnected objects on the server, because they are responsible for calculations and applying some constraints, while allowing users to manipulate each of these objects and how they are connected, adding and deleting them, through a web interface. At the end, the whole thing can be persisted in a database. Ideally I even want to give the user a visual representation or feedback, so they can see what they are designing as they go along...
Finally, the question is more about what approach I should take to this problem. Are C# classes enough on the server (encapsulating the calculations and maybe generating each object's own graphical representation back to the client)? Will I need to create similar classes in JavaScript to allow a slicker user experience? Will it be easier if I manage to keep the objects alive in a session or cache object between requests? Or should I just instantiate all the objects that form the whole piece of furniture again on each user interaction (for calculation)? In that case, would I have to post all the objects and all the already customized properties every time?
Any thoughts or ideas on how to best approach this problem are greatly appreciated...
Thanks!
From the way you've described it, here is what I'm envisioning:
It sounds like you do want a slick-looking UI, so yes, you'll want to divide your logic into two sets: a client-side set for building and a server-side set for validation. I would go heavy on the JavaScript so that the user can happily build their widget disconnected, and then validate everything once it's posted to the server.
Saving to a session opens a whole can of web-farm worms. If these widgets can be recreated in less than a minute (once the user has decided what they like), I would avoid saving partials altogether. If it's absolutely necessary, though, I would save them to the database.
If the number of objects to construct a widget is reasonable, it could all come down at once. But if there are hundreds of types of 'doors' you're going to want to consider asynchronous calls to load them, with possible paging/sorting.
I'm confused about your last part about instantiating/posting all the objects that form the whole piece of furniture. This shouldn't be necessary. I imagine the user would do his construction on the client, and then pass up a single widget object to the server for validation.
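As a rough sketch of what that single validation round trip might look like on the server (the controller, DTO shapes, and constraint are all hypothetical):

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Placeholder DTOs for the posted design; real properties would mirror
// whatever the client-side builder sends up.
public class FurnitureItemDto
{
    public string ItemType { get; set; }   // e.g. "SimpleDoor", "Shelf"
    public double Width { get; set; }
    public double Height { get; set; }
}

public class WardrobeDesignDto
{
    public double Width { get; set; }
    public double Height { get; set; }
    public List<FurnitureItemDto> Items { get; set; }
}

public class DesignController : Controller
{
    // The client builds the design locally and posts the whole graph once;
    // the server re-runs the real calculations and constraints here.
    [HttpPost]
    public ActionResult Validate(WardrobeDesignDto design)
    {
        var errors = new List<string>();
        foreach (var item in design.Items)
        {
            // Hypothetical constraint: no item may be wider than its parent.
            if (item.Width > design.Width)
                errors.Add(item.ItemType + " is wider than the wardrobe.");
        }
        return Json(new { valid = errors.Count == 0, errors });
    }
}
```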
That's what I'm thinking anyway... by the way, hello StackOverflow, this is my first post.
You might want to take a look at Backbone.js for this kind of project. It allows you to create client-side models, collections, views and controllers that would be well suited to your problem domain. It includes built in Ajax code for loading/saving those models/collections to/from the server.
As far as storing objects before the complete object is sent to the server, you could utilize localStorage, and store your object data as a JSON string.
I am using a single global DataContext object for my entire application. The application should work in a network environment where multiple instances of it work simultaneously with a shared SQL database.
Changes made to the database by one application instance are not reflected in the other instances until I call the DataContext.Refresh method. The problem is that this call is time-consuming, and I cannot change my code back to using different DataContext objects for different operations.
What should I do to always keep the datacontext object in each application updated?
The RefreshMode enum is the correct bit; it's just a matter of deciding when to use it and using the DataContext correctly. Think of the DataContext as a unit of work, and flag the refresh mode when you're preparing a submit (like KeepChanges or something). That way, the user's info is pushed (or bubbles up on conflict) and it's automagically updated with the freshest stuff in the database.
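A rough sketch of that unit-of-work style with LINQ to SQL (the context, table, and entity names are placeholders): resolve conflicts with KeepChanges at submit time instead of trying to keep one global context fresh.

```csharp
using System.Data.Linq;
using System.Linq;

// Sketch only: "MyDataContext", "Requests", and the State column are placeholders.
public static class RequestRepository
{
    public static void UpdateState(int requestId, string newState)
    {
        using (var db = new MyDataContext())
        {
            var request = db.Requests.Single(r => r.Id == requestId);
            request.State = newState;

            try
            {
                // Try to submit; detect rows changed by other users.
                db.SubmitChanges(ConflictMode.ContinueOnConflict);
            }
            catch (ChangeConflictException)
            {
                // Keep this user's changes, merge in the latest database values
                // for everything else, then retry the submit.
                db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
                db.SubmitChanges();
            }
        }
    }
}
```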
I think everyone else has rightly pointed out the wrongness of a global DataContext. You'd either have to set a refresh timer or give the user a button to refresh if you wanted to update their display more frequently. I don't know of another way.