I'm trying to understand whether it is correct to use a class when entering data into a form. I'm not sure whether my understanding is right: I don't know if I should be creating a new object when entering the data, or if my data should be stored in an array, a file, etc.
This is just for my own understanding; I'm learning for myself, so I don't have a specific example of what I'm trying to achieve. I'm trying to understand classes and when to use them. Currently I'm experimenting with C# forms in Visual Studio.
Let's say I want to enter information into a form for all of the members of my fishing club, i.e. name, address, contact info, etc. Should I be creating a new object of a class that contains methods for getting/setting these variables, or should I be storing these details somewhere else: in an array, or written to a text file/Excel file, for example? As I'm new, I think I'm struggling with the concept of classes and where to use them.
I expect that to use a class for this purpose I would have to learn to create class instances at run time, in order to create multiple instances of the class, whereas to enter the data into an array I would just need to initialize the size of the array.
I'm a bit lost so any help would be very much appreciated.
Should you store those fields in objects? Yes
Should those objects be stored in a collection (like an array)? Yes
Would persisting that collection to a file make sense? Yes
Basically, you are asking to choose an alternative among things that aren't really comparable, much less mutually exclusive. As the comments suggest, make sure to read up on object oriented programming, but here's a quick explanation of each of the above:
Objects provide a structured way to store information (giving each field a name and a type, for starters) and a way for the language/compiler to enforce that structure. Storing all the related data for an entity in an object is a great idea; you almost never want a bunch of loose variables for that purpose.
Arrays, or more generally collections, are groups of "things". You could have stored all that information in an array instead of an object (and then, for multiple records, had "parallel" arrays, but this is a very messy and amateurish technique). A collection of your objects, however, is a totally reasonable thing to have (you don't want object1, object2, object3 variables in your code).
Files are important because both objects and collections (which are, in and of themselves, objects) are stored in memory; they will go away when the application closes. Files (and similar mechanisms, like databases) give you a way to persist them (hence such techniques being referred to as a "persistence" layer). You can then load the persisted data into a new instance of your program (say, if the user restarts the computer).
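To make that concrete, here's a minimal sketch tying the three ideas together: an object per club member, a collection of those objects, and a file they get persisted to. All names are made up, and JSON is just one convenient format (System.Text.Json here; Newtonsoft.Json works the same way on older frameworks):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;  // assumes a .NET version that ships System.Text.Json

// One object per club member.
public class Member
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}

public class MemberStore
{
    private const string FilePath = "members.json";  // illustrative path

    // The collection grows as the user submits the form; no fixed size needed.
    public List<Member> Members { get; } = new List<Member>();

    public void Add(Member member) => Members.Add(member);

    // Persist the whole collection so it survives application restarts.
    public void Save() =>
        File.WriteAllText(FilePath, JsonSerializer.Serialize(Members));

    public void Load()
    {
        if (!File.Exists(FilePath)) return;
        Members.Clear();
        Members.AddRange(
            JsonSerializer.Deserialize<List<Member>>(File.ReadAllText(FilePath)));
    }
}
```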
Side note: Should I make methods for getting/setting these variables? No.
Just use properties (i.e., public string Address { get; set; }). They provide a backing field (at least with an auto-property like the above) and are syntactic sugar for get and set methods (which you can override with a "full" property).
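For instance, the auto-property and the equivalent written-out "full" property look like this (hypothetical members):

```csharp
public class Member
{
    // Auto-property: the compiler generates the hidden backing field.
    public string Address { get; set; }

    // "Full" property: the same idea written out, useful when you need
    // extra logic (validation, change notification, etc.) in get/set.
    private string _phone;
    public string Phone
    {
        get { return _phone; }
        set { _phone = value; }
    }
}
```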
As a bonus, if you're using WPF you can automate the population of properties from the UI with data binding, but that's a question for another day :).
Related
Is it possible to make a variable or a List of items accessible to the whole project?
My program selects an object in one view, and I want to access/change it in another one.
I know this is not the best workaround and it would be better to use an MVVM pattern for this, but it seems like a big effort to implement it properly just for the simple use case of one or two variables/lists.
Sharing data can be done in multiple ways.
One interesting way could be to cache the data; have a look at this, for example: https://learn.microsoft.com/en-us/dotnet/desktop/wpf/advanced/walkthrough-caching-application-data-in-a-wpf-application?view=netframeworkdesktop-4.8
I would recommend against using global variables, and against static variables as well, as you might open yourself up to sharing data between users, for example.
In this approach, when you need the data, you check whether you have it in the cache; if not, you load it from wherever it lives (DB, file, API, whatever your source is), and then you simply read it from the cache wherever and whenever you require it.
If you need to update it, you make sure you write the update to whatever storage mechanism you have and then reload the cache. This is a good way to keep things in sync when updates are needed, without complicating the application, its testing, or its maintenance.
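As a rough sketch of that flow, assuming System.Runtime.Caching.MemoryCache as the cache (the item type and LoadFromSource are placeholders for your own data and source):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class ItemCache
{
    private readonly MemoryCache _cache = MemoryCache.Default;
    private const string Key = "items";

    public IList<string> GetItems()
    {
        // 1. Check the cache first.
        if (_cache.Get(Key) is IList<string> cached)
            return cached;

        // 2. Not there: load from the real source, then cache it.
        IList<string> loaded = LoadFromSource();
        _cache.Set(Key, loaded,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30) });
        return loaded;
    }

    // 3. After writing an update to the source of truth, drop the cached copy
    //    so the next read reloads fresh data.
    public void Invalidate() => _cache.Remove(Key);

    private IList<string> LoadFromSource()
    {
        return new List<string>();  // placeholder for your DB/file/API call
    }
}
```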
I'm building a workflow for some forms, that would route to users for approval. I have an abstract 'FormBase' class which stores a LinkedList of 'Approver' objects and has some helpers that modify the approver list, move the form to the next person and so on. These helpers are all 'virtual' so that a given form can override them with custom behaviours.
From this 'base' will come instances of various forms, each of which will differ in the data it contains. It'll be the normal set of lists, strings, and numeric values you'd find in a normal form. So I might have
class MaterialForm : FormBase
class CustomerForm : FormBase
etc, and a new instance is created when a user creates and submits a form.
I'd like to persist the field details in EF6 (or 5), in a flexible way, so I could create more forms deriving from FormBase without too much fiddling. Ideally I'd like all behaviour specific to that form type to live in the MaterialForm derived class. I figure I can't persist this derived class to EF (unless I'm wrong!). I'm considering:
a) JSON-ifying the field details and storing them as a string in the class, which gets stored to EF. I'd probably do the same for the approver list, and each time I need to modify the list I'd pull it out, modify it, and push it back.
b) Including a 'FormData' property in my abstract class, then including a derived version of that in each concrete implementation (e.g. MaterialFormData, CustomerFormData). However, the abstract class didn't seem to like my use of a derived type in this way. It's also unclear how the DbSets would be set up in this case, as you'd probably need a new table for each type.
I feel I'm misunderstanding something fundamental about how the 'real' classes relate to those stored in EF. What would you recommend as an architecture for this case?
When it comes to Entity Framework you have three supported models for inheritance: Table Per Type (TPT), Table Per Hierarchy (TPH), and Table Per Concrete Class (TPC).
The use of TPC is generally avoided, and choosing between the other two comes down to several factors including performance and flexibility. There is a great article outlining the differences here.
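As a minimal sketch (not your actual model), this is roughly what the hierarchy looks like in EF6. With a single DbSet for the base class and no extra configuration, EF6 defaults to TPH: one table with a discriminator column.

```csharp
using System.Collections.Generic;
using System.Data.Entity;

public abstract class FormBase
{
    public int Id { get; set; }
    public virtual ICollection<Approver> Approvers { get; set; }
}

public class MaterialForm : FormBase
{
    public string MaterialCode { get; set; }   // hypothetical field
}

public class CustomerForm : FormBase
{
    public string CustomerName { get; set; }   // hypothetical field
}

public class Approver
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

public class WorkflowContext : DbContext
{
    // Querying context.Forms.OfType<MaterialForm>() returns only that subtype.
    public DbSet<FormBase> Forms { get; set; }

    // To switch to Table Per Type instead, map each subtype to its own table:
    // protected override void OnModelCreating(DbModelBuilder modelBuilder)
    // {
    //     modelBuilder.Entity<MaterialForm>().ToTable("MaterialForms");
    //     modelBuilder.Entity<CustomerForm>().ToTable("CustomerForms");
    // }
}
```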
I'd also recommend reading this and this for more information and examples on how these patterns work.
However, in your example it sounds like the issue is at the 'design' stage, in terms of coming up with a suitable 'model' for your application. My advice is that if you feel the class structure you have come up with cannot accurately represent the model you are working with, then either the structure needs to change, your model is massively complex, or you are constrained by an outside system (a database schema that you have no way of changing, for example).
In this case, have you considered doing a class diagram? Even if you use the EF designer, try to visualize the model, as that's usually the best way of determining where improvements can be made, and it also gives you a good start on the design (especially if you're going code first).
Try that, and post it back here if necessary. We can then help redesign the model if required. My feeling is that there's nearly always a good way of representing your requirement from a decent OO perspective, and it's best to explore that before looking at more elaborate options. The key here is to avoid creating new class types dynamically if at all possible.
I'm using memcache behind a web app to minimize the hits to our SQL database. I'm storing C# objects into this cache by marking them with SerializableAttribute. We make heavy use of dependency injection via Ninject in our app.
Some of these objects are large, and I'd like to break them up. However, they come from a single stored procedure call (i.e. one stored procedure call gets cooked into the full object graph), and I'd like to be able to break these objects up and lazy-load specific subgraphs from the cache separately rather than load the entire object graph into memory all at once.
What are some patterns that would help me accomplish this?
As far as patterns go, I'd say the one large complex object that's built from a single stored procedure is suspect. I'm not sure if your caching is a requirement or just the current state of its implementation.
The pattern I'm used to is a type of repository pattern, using operations that fill specific contracts. Those operations house one or more data sources that call stored procedures in the database, which are used to build ONE of those sub-graphs you speak of. With that said, if you're going to lazy-load data from a database, then I can only assume that many of the object's members are not used much of the time, which furthers my point: break that object up.
A couple things about it:
It can be chatty if the entire object is being used regularly
It is fully injectable via the Operations
The datasources contain the reader for the specific object, thus only performing ONE task (SOLID)
Can be modified to use Entity Framework, without too much fuss
Can be designed to implement an interface, making it more reusable
Will require you to break up that proc into smaller, chewable pieces, which will likely only benefit you in the long run.
A complex object like that really shouldn't exist if only parts of it are going to be used. Instead, consider segregating those objects. However, it really depends on how this object is being used.
UPDATE:
Using your cache as the repository, I would probably approach it like this: store the legacy object as-is, but in your operations use it to build more relevant DTOs that are returned to the client.
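Something along these lines, with all type names hypothetical: the big cached object stays as it is, and each operation projects out only the sub-graph the caller needs.

```csharp
using System;

public class LegacyCustomerGraph            // the big object from the stored proc
{
    public string Name { get; set; }
    public string[] OrderNumbers { get; set; }
    public string[] AddressLines { get; set; }
}

public class CustomerSummaryDto             // a small slice returned to the client
{
    public string Name { get; set; }
    public int OrderCount { get; set; }
}

public interface ICache
{
    T Get<T>(string key) where T : class;
}

public class GetCustomerSummaryOperation
{
    private readonly ICache _cache;
    public GetCustomerSummaryOperation(ICache cache) { _cache = cache; }  // injectable

    public CustomerSummaryDto Execute(int customerId)
    {
        var graph = _cache.Get<LegacyCustomerGraph>("customer:" + customerId);
        if (graph == null) throw new InvalidOperationException("not cached");

        // Only the fields this caller needs are copied out of the large graph.
        return new CustomerSummaryDto
        {
            Name = graph.Name,
            OrderCount = graph.OrderNumbers?.Length ?? 0
        };
    }
}
```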
I know NHibernate does lazy loading by replacing objects with proxy objects. Then in the proxy object there is some kind of check that triggers loading of the real object the first time you try to access it.
I'm not sure of any design patterns that would cover that, but you could look at the NHibernate source code.
A downside of using proxy objects is that you have to be careful with inheritance and type checks, as you could be checking the type of the proxy and not the actual object.
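A minimal sketch of the virtual-proxy idea (hypothetical types, not NHibernate's actual implementation): the proxy exposes the same interface as the real object and only loads it on first access.

```csharp
using System;

public interface ICustomerProfile
{
    string Name { get; }
}

public class CustomerProfile : ICustomerProfile
{
    public string Name { get; set; }
}

public class CustomerProfileProxy : ICustomerProfile
{
    private readonly int _id;
    private readonly Func<int, CustomerProfile> _loader;  // e.g. a cache or DB lookup
    private CustomerProfile _real;

    public CustomerProfileProxy(int id, Func<int, CustomerProfile> loader)
    {
        _id = id;
        _loader = loader;
    }

    public string Name
    {
        get
        {
            // Load the real object lazily, the first time any member is touched.
            if (_real == null)
                _real = _loader(_id);
            return _real.Name;
        }
    }
}
```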
I'm currently developing an application using ASP.NET MVC, and now I need to create an interface (web page) that will allow the users to pick and choose, from a set of different objects, the ones they'd like to use as the building blocks for constructing a more complex object.
My question is meant to be generic, but to provide an actual example, let's say the application will allow users to design pieces of furniture, like wardrobes, kitchen cabinets, etc. So, I've created C# classes representing the basic building blocks of furniture design, like basic shapes (pieces of wood that added together form a box, etc.), doors, doorknobs, drawers, and so on. Each of these classes has some common properties (width, height, length) and some specific properties, but all descend from a base class called FurnitureItem, so there are ways for them to be 'connected' together and interchanged. For instance, there are different types of doors that can be used in a wardrobe, like SimpleDoor, SlidingDoor, and so on. The user designing the furniture would have to choose which type of Door object to apply to the current furniture. Also, there are other items, like dividing panels, shelves, drawers, etc. The resulting model of course would be a completely customized, modularly designed wardrobe or kitchen cabinet, for example.
The problem is that while I can easily instantiate all the objects that I need and connect them together using C#, forming a complete furniture item, I need to provide a way for users to do it through a web interface. That means they would probably have a toolbox or toolbar of some sort and select (maybe drag and drop) items onto a design panel in the web interface. So, in the browser I cannot have my C# class implementation, and if I post the selected item to the server (either a form post or using Ajax), I need to reconstruct the whole collection of objects that were previously chosen by the user, so I can fit the newly added item, calculate its dimensions, etc., and then finally return the complete modified set of objects.
I'm trying to think of different ways of caching or persisting these objects while the user is still designing (adding and deleting items), since there may be many round trips to the server, because the proper calculation of dimensions (width, height, etc. of contained objects) is done on the server by methods of my C# classes. It might be nice to store the objects for the current furniture being designed in a session or cache object per user. Even then, I need to be able to provide some type of ID for the object being added and the one it's being added to, in a parent/owner kind of way, so I can properly identify the object instance back on the server where the new instance will be connected.
I know it's somewhat confusing, but I hope this gives an idea of the problem I'm facing. In other words, I need to keep a set of interconnected objects on the server because they are responsible for calculations and applying some constraints, while allowing the users to manipulate each of these objects and how they are connected, adding and deleting them, through a web interface. So at the end, the whole thing can be persisted in a database. Ideally I even want to give users a visual representation or feedback, so they can see what they are designing as they go along.
Finally, the question is really about what approach I should take to this problem. Are C# classes enough on the server (encapsulating calculation and maybe generating each item's own graphical representation back to the client)? Will I need to create similar classes in JavaScript to allow a slicker user experience? Will it be easier if I manage to keep the objects alive in a session or cache object between requests? Or should I just instantiate all the objects that form the whole piece of furniture again on each user interaction (for calculation)? In that case, would I have to post all the objects and all the already-customized properties every time?
Any thoughts or ideas on how to best approach this problem are greatly appreciated...
Thanks!
From the way you've described it, here is what I'm envisioning:
It sounds like you do want a slick-looking UI, so yes, you'll want to divide your logic into two sets: a client-side set for building and a server-side set for validation. I would go heavy on the JavaScript so that the user can happily build their widget disconnected, and then validate everything once it's posted to the server.
Saving to a session opens a whole can of web-farm worms. If these widgets can be recreated in less than a minute (once the user has decided what they like), I would avoid saving partials altogether. If it's absolutely necessary though, I would save them to the database.
If the number of objects to construct a widget is reasonable, it could all come down at once. But if there are hundreds of types of 'doors' you're going to want to consider asynchronous calls to load them, with possible paging/sorting.
I'm confused about your last part about instantiating/posting all objects that form the whole furniture. This shouldn't be necessary. I imagine the user would do his construction on his client, and then pass up a single widget object to the server for validation.
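Something like this hypothetical controller action is all the round trip needs to be; the names (FurnitureDesign, FurnitureItemDto, Validate) are placeholders, not an existing API:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// The client builds the whole piece of furniture as JSON, posts it once, and
// the server rebuilds the object graph via model binding and runs the
// validation/dimension logic there.
public class FurnitureItemDto
{
    public string Type { get; set; }          // e.g. "SlidingDoor", "Shelf"
    public double Width { get; set; }
    public double Height { get; set; }
    public double Length { get; set; }
    public List<FurnitureItemDto> Children { get; set; }
}

public class FurnitureDesign
{
    public string Name { get; set; }
    public List<FurnitureItemDto> Items { get; set; }
}

public class DesignController : Controller
{
    [HttpPost]
    public ActionResult Validate(FurnitureDesign design)
    {
        // Server-side rules live here: recompute dimensions, check constraints,
        // then send the corrected/validated design back to the client.
        // var result = new FurnitureValidator().Check(design);  // hypothetical
        return Json(new { ok = true, design });
    }
}
```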
That's what I'm thinking anyway... by the way, hello StackOverflow, this is my first post.
You might want to take a look at Backbone.js for this kind of project. It allows you to create client-side models, collections, views and controllers that would be well suited to your problem domain. It includes built in Ajax code for loading/saving those models/collections to/from the server.
As far as storing objects before the complete object is sent to the server goes, you could utilize localStorage and store your object data as a JSON string.
I have an idea for how to solve this problem, but I wanted to know if there's something easier and more extensible to my problem.
The program I'm working on has two basic forms of data: images, and the information associated with those images. The information associated with the images has been previously stored in a JET database of extreme simplicity (four tables) which turned out to be both slow and incomplete in the stored fields. We're moving to a new implementation of data storage. Given the simplicity of the data structures involved, I was thinking that a database was overkill.
Each image will have information of its own (capture parameters), will be part of a group of images which are interrelated (taken in the same thirty-minute period, say), and then part of a larger group altogether (taken of the same person). Right now, I'm storing people in a dictionary with a unique identifier. Each person then has a List of the different groups of pictures, and each picture group has a List of pictures. All of these classes are serializable, and I'm just serializing and deserializing the dictionary. Fairly straightforward stuff. Images are stored separately, so that the dictionary doesn't become astronomical in size.
The problem is: what happens when I need to add new information fields? Is there an easy way to setup these data structures to account for potential future revisions? In the past, the way I'd handle this in C was to create a serializable struct with lots of empty bytes (at least a k) for future extensibility, with one of the bytes in the struct indicating the version. Then, when the program read the struct, it would know which deserialization to use based on a massive switch statement (and old versions could read new data, because extraneous data would just go into fields which are ignored).
Does such a scheme exist in C#? Like, if I have a class that's a group of String and Int objects, and then I add another String object to the struct, how can I deserialize an object from disk, and then add the string to it? Do I need to resign myself to having multiple versions of the data classes, and a factory which takes a deserialization stream and handles deserialization based on some version information stored in a base class? Or is a class like Dictionary ideal for storing this kind of information, as it will deserialize all the fields on disk automatically, and if there are new fields added in, I can just catch exceptions and substitute in blank Strings and Ints for those values?
If I go with the dictionary approach, is there a speed hit associated with file read/writes as well as parameter retrieval times? I figure that if there's just fields in a class, then field retrieval is instant, but in a dictionary, there's some small overhead associated with that class.
Thanks!
SQLite is what you want. It's a fast, embeddable, single-file database that has bindings for most languages.
With regards to extensibility, you can store your models with default attributes, and then have a separate table for attribute extensions for future changes.
A year or two down the road, if the code is still in use, you'll be happy that 1) other developers won't have to learn a customized code structure to maintain the code, 2) you can export, view, and modify the data with standard database tools (there's an ODBC driver for SQLite files, plus various query tools), and 3) you'll be able to scale up to a full database server with minimal code changes.
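A minimal sketch of what that looks like from C#, assuming the Microsoft.Data.Sqlite provider (table and column names are illustrative only):

```csharp
using Microsoft.Data.Sqlite;  // one of several SQLite providers for .NET

class PhotoStore
{
    public static void Demo()
    {
        // A single file on disk is the whole database.
        using (var conn = new SqliteConnection("Data Source=photos.db"))
        {
            conn.Open();

            var create = conn.CreateCommand();
            create.CommandText =
                @"CREATE TABLE IF NOT EXISTS Picture (
                      PictureId INTEGER PRIMARY KEY,
                      FilePath  TEXT NOT NULL,
                      Exposure  REAL);";
            create.ExecuteNonQuery();

            var insert = conn.CreateCommand();
            insert.CommandText =
                "INSERT INTO Picture (FilePath, Exposure) VALUES ($path, $exp);";
            insert.Parameters.AddWithValue("$path", @"images\0001.png");
            insert.Parameters.AddWithValue("$exp", 0.02);
            insert.ExecuteNonQuery();
        }
    }
}
```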
Just a wee word of warning: SQLite, Protocol Buffers, mmap, et al. are all very good, but you should prototype and test each implementation and make sure that you're not going to hit the same perf issues or different bottlenecks.
The simplest option may be to upsize to SQL Server (Express) (you may be surprised at the perf gain) and fix whatever's missing from the present database design. Then, if perf is still an issue, start investigating these other technologies.
My brain is fried at the moment, so I'm not sure I can advise for or against a database, but if you're looking for version-agnostic serialization, you'd be a fool to not at least check into Protocol Buffers.
Here's a quick list of implementations I know about for C#/.NET:
protobuf-net
Proto#
jskeet's dotnet-protobufs
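With protobuf-net, for example, fields are identified by number, so new optional fields can be added later and old data still loads (missing fields just come back as defaults). A minimal sketch with hypothetical types:

```csharp
using System.IO;
using ProtoBuf;  // protobuf-net

[ProtoContract]
public class CaptureInfo
{
    [ProtoMember(1)] public string CameraModel { get; set; }
    [ProtoMember(2)] public int Iso { get; set; }
    // Added in a later version of the app; older files simply lack tag 3.
    [ProtoMember(3)] public string LensName { get; set; }
}

class Program
{
    static void Main()
    {
        var info = new CaptureInfo { CameraModel = "X100", Iso = 400 };

        // Serialize to disk.
        using (var file = File.Create("capture.bin"))
            Serializer.Serialize(file, info);

        // Deserialize; works even if the file was written by an older
        // version of the class that had fewer fields.
        using (var file = File.OpenRead("capture.bin"))
        {
            CaptureInfo roundTripped = Serializer.Deserialize<CaptureInfo>(file);
        }
    }
}
```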
There's a database schema, for which I can't remember the name, that can handle this sort of situation. You basically have two tables. One table stores the variable name, and the other stores the variable value. If you want to group the variables, then add a third table that will have a one to many relationship with the variable name table. This setup has the advantage of letting you keep adding different variables without having to keep changing your database schema. Saved my bacon quite a few times when dealing with departments that change their mind frequently (like Marketing).
The only drawback is that the variable value table will need to store the actual value as a string column (varchar or nvarchar actually). Then you have to deal with the hassle of converting the values back to their native representations. I currently maintain something like this. The variable table currently has around 800 million rows. It's still fairly fast, as I can still retrieve certain variations of values in under one second.
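The conversion chore looks something like this tiny hypothetical helper (the type-code convention is made up; any discriminator column would do):

```csharp
using System;
using System.Globalization;

public static class AttributeValueConverter
{
    // Convert the stored string back to a native .NET value based on a
    // type code kept alongside it in the variable-name table.
    public static object ToNative(string typeCode, string raw)
    {
        switch (typeCode)
        {
            case "int":      return int.Parse(raw, CultureInfo.InvariantCulture);
            case "decimal":  return decimal.Parse(raw, CultureInfo.InvariantCulture);
            case "datetime": return DateTime.Parse(raw, CultureInfo.InvariantCulture);
            case "bool":     return bool.Parse(raw);
            default:         return raw;  // leave unknown types as strings
        }
    }
}
```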
I'm no C# programmer but I like the mmap() call and saw there is a project doing such a thing for C#.
See Mmap
Structured files perform very well if tailored for a specific application, but they are difficult to manage and are hardly a reusable code resource. A better solution is a virtual-memory-like implementation:
Up to 4 gigabytes of information can be managed.
Space can be optimized to real data size.
All the data can be viewed as a single array and accessed with read/write operations.
No need to define a structure before storing; just use and store.
Can be cached.
Is highly reusable.
So go with SQLite for the following reasons:
1. You don't need to read/write the entire database from disk every time
2. Much easier to add to even if you don't leave enough placeholders at the beginning
3. Easier to search based on anything you want
4. Easier to change data in ways beyond what the application was designed for
Problems with the Dictionary approach:
1. Unless you make a smart dictionary, you need to read/write the entire database every time (and unless you carefully design the data structure, it will be very hard to maintain backwards compatibility)
----- a) and if you did not leave enough placeholders, bye-bye
2. It appears as if you'd have to do a linear search through all the photos in order to search on one of the capture attributes
3. Can a picture be in more than one group? Can a picture be under more than one person? Can two people be in the same group? With dictionaries these things can get hairy....
With a database table, if you get a new attribute you can just say ALTER TABLE Picture ADD <Attribute> <DataType>. Then, as long as you don't make a rule saying the attribute has to have a value, you can still load and save older versions, while newer versions can use the new attributes.
Also you don't need to save the picture in the database. You could just store the path to the picture in the database. Then when the app needs the picture, just load it from a disk file. This keeps the database size smaller. Also the extra seek time to get the disk file will most likely be insignificant compared to the time to load the image.
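A small sketch of that idea (GetPicturePathFromDatabase is a placeholder for your actual query):

```csharp
using System.Drawing;  // System.Drawing.Common on newer .NET

class PictureLoader
{
    // The database row holds only the file path; the image itself is loaded
    // from disk on demand.
    public Image Load(int pictureId)
    {
        string path = GetPicturePathFromDatabase(pictureId);
        return Image.FromFile(path);
    }

    private string GetPicturePathFromDatabase(int pictureId)
    {
        // Placeholder: SELECT FilePath FROM Picture WHERE PictureID = @id
        return @"images\0001.png";
    }
}
```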
Your table should probably be something like:
Picture(PictureID, GroupID?, FilePath, CaptureParameter1, CaptureParameter2, etc.)
If you want more flexibility you could make a table
CaptureParameter(PictureID, ParameterName, ParameterValue) ... I would advise against this because it is a lot less efficient than just putting them in one table (not to mention the queries to retrieve/search the Capture Parameters would be more complicated).
Person(PersonID, Any Person Attributes like Name/Etc.)
Group(GroupID, Group Name, PersonID?)
PersonGroup?(PersonID, GroupID)
PictureGroup?(GroupID, PictureID)