I am trying to build an application that loads sports data (teams, players, games, etc.) from an API and saves it to a database (with an update once a day) for subsequent analysis (in C#, but I think the language is not that important). The application consists of two parts: one for the GUI and one for working with the database and the API.
My question is: is it a good idea to use the same model classes, e.g. for teams or players, for both the API and the database? I know that one class should have one responsibility, but separate classes seem to me like a little overhead and even bring complications. But maybe I am wrong, because I don't have enough experience with architecture design.
Thanks for any answers.
If the two classes would really be the same class serving both parts, you can instead create a shared DLL that holds the class: add an additional project to your solution, write the class you need there, and reference that project from both of the others.
This makes the DLL part of both applications without writing the code twice.
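If you go that route, a minimal sketch of what the shared project might contain (the SportsData.Shared namespace and Player class are just illustrative names):

    // SportsData.Shared: a class library project referenced by both
    // the GUI project and the database/API project.
    namespace SportsData.Shared
    {
        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public int TeamId { get; set; }
        }
    }

Both of the other projects then compile against the same SportsData.Shared.dll, so the model exists in exactly one place.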
Related
I would like to know the best way to organize utility DLLs.
For example, I could have one project that holds all the utility classes the company has implemented: a class for working with strings, a class for working with files, and so on. I mean a generic DLL with tools that I can use in many projects; a generic myCompany.Utils.dll, for example.
The other way is to have many DLLs, one for each type of work. For example, I could have a myCompany.Utils.Files, another myCompany.Utils.Strings, etc.
With the first option I would have only one DLL, but if two people need to add or fix something, only one can work at a time, because when one person compiles the new DLL, the other person loses their work.
If I have many DLLs, one for each kind of work, it is less likely that two people will need to modify the same DLL, because each person can be responsible for one of them. However, the problem with this approach is that when I deploy the application, I end up with a lot of DLLs in the program directory.
So I would like to know the best practice for creating DLLs.
Thanks.
From your question it is clear that you are not using a versioning system. Try checking out something like Tortoise SVN; then you will have no problems with several people working on the same piece of software.
Regarding DLLs, I would go with multiple DLLs, each containing only a specific type of utility methods. It makes deployment simpler. If you did the opposite and kept a single DLL for all your utility methods, you would need to redeploy it every time anything in it changed: change the code responsible for working with files, and you have to ship the whole DLL, unrelated code included. With multiple DLLs, you only redeploy the one that has really changed.
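As a rough sketch of that layout (the project and member names are made up), each utility family gets its own class library project and therefore its own DLL:

    // Project myCompany.Utils.Files -> myCompany.Utils.Files.dll
    namespace myCompany.Utils.Files
    {
        public static class FileTools
        {
            // A change here only forces a redeploy of the Files DLL.
            public static string ReadAllText(string path)
            {
                return System.IO.File.ReadAllText(path);
            }
        }
    }

    // Project myCompany.Utils.Strings -> myCompany.Utils.Strings.dll
    namespace myCompany.Utils.Strings
    {
        public static class StringTools
        {
            public static string Reverse(string s)
            {
                char[] chars = s.ToCharArray();
                System.Array.Reverse(chars);
                return new string(chars);
            }
        }
    }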
Basically it's going to depend on the number of classes, interfaces and delegates that your library contains.
Imagine you have 3,000 classes in your "Company.Shared.dll" and you're developing a web application, and 600 of those 3,000 classes are for mobile development. What's the chance of using them in your web application? Zero.
So why would you deploy a 3,000-class assembly for web application development if you only need the web-related classes? Such a library is also larger than a web-specific one would be, since it contains code for a lot of things that would never run in web development.
For that reason you'd have a web-specific library called Company.Shared.Web.dll and one common to all development scenarios called Company.Shared.dll.
You can apply the same logic to other cases and scenarios.
Apart from the versioning system (which should be a must as soon as more than one developer works on a project), it's really crazy that your organization allows everyone to change the base library (or libraries) on which every other project depends. This will evolve into a mess very quickly.
In my shop only one or two people are allowed to change anything there, and these are the most skilled and valuable colleagues.
As for the subdivision of functionality in the library, I am not concerned about the single big DLL. It's true that we need to redistribute everything even when we change a little bit of code (and when your code is mature and well tested this happens very rarely), but the cost of keeping track of which DLL shipped for this project or for that project outweighs the cost of the single DLL.
I recently started transforming all the tables from our Oracle production database into models so I can start using an ORM. I chose Castle ActiveRecord and can really start to see its potential in making my life easier. Since many of the applications I work with use the same tables, I figure it would be nice to create a separate library.
My thinking is that if I can successfully separate out the database work, table relationships and querying, then I can reuse them to my heart's content from project to project. I know, for the most part, how to create new entities, link them and query what I need based on what is mapped. As of now I have a very simple class library. I could then include generic functions that could be used to query a lookup table and return id-value pairs to populate a dropdown, for example.
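Something along these lines is what I have in mind for the lookup helper (just a sketch; the Lookup entity and column names are hypothetical, and the exact attribute usage may vary by ActiveRecord version):

    using System.Collections.Generic;
    using Castle.ActiveRecord;

    // Hypothetical entity mapped to a LOOKUP table.
    [ActiveRecord("LOOKUP")]
    public class Lookup : ActiveRecordBase<Lookup>
    {
        [PrimaryKey("ID")]
        public int Id { get; set; }

        [Property("VALUE")]
        public string Value { get; set; }
    }

    public static class LookupHelper
    {
        // Returns id-value pairs ready to bind to a dropdown.
        public static IDictionary<int, string> GetPairs()
        {
            Dictionary<int, string> pairs = new Dictionary<int, string>();
            foreach (Lookup row in Lookup.FindAll())
            {
                pairs[row.Id] = row.Value;
            }
            return pairs;
        }
    }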
Could you please give me some tips and/or personal experiences to achieve this? This will be my first time attempting to create a reusable library of any sort.
Thank you.
I would:
Keep it simple.
Document its public interfaces/methods heavily, especially since you'll use it in multiple projects.
Keep it in source control, which you should be doing anyway, so all projects can easily get updates.
WCF is a popular way to achieve this. Basically you make a bunch of WCF web services that provide access to the data access functions.
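For instance, the data access functions could sit behind a contract like this (the type and operation names are invented for illustration):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class TeamDto
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }

    [ServiceContract]
    public interface ITeamService
    {
        // Each data access function becomes an operation on the service.
        [OperationContract]
        TeamDto GetTeam(int teamId);

        [OperationContract]
        TeamDto[] GetAllTeams();
    }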
This is an ideal place to use interfaces. Store all of your interfaces in a very small, isolated DLL, and distribute that. Your consumers can then deal with their own implementations (if I'm understanding you correctly). You could also deploy a standalone component that holds just your data structures.
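A bare-bones sketch of such a contracts-only assembly (all names hypothetical): it holds only interfaces and simple data structures, so it rarely changes and is cheap to distribute.

    // Company.Data.Contracts.dll: interfaces and data structures only.
    namespace Company.Data.Contracts
    {
        public class PlayerRecord
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Consumers code against this; implementations live elsewhere.
        public interface IPlayerRepository
        {
            PlayerRecord GetById(int id);
            void Save(PlayerRecord player);
        }
    }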
Over time, the code base I maintain has grown exponentially. We have a variety of different utility classes, webparts, event receivers, console applications, and more.
Typically, each web part lives in a separate DLL (one solution and one project per web part). Our utility classes have also been largely separated out into their own DLLs (this includes any specialized list access classes, which get grouped with their beans together in a DLL). This has led to a large number of solutions, which has become more difficult to maintain (upgrading each solution to Visual Studio 2008, or simply trying to trace the maze of DLL references).
With my discovery of the SharePoint Guidance, I'm re-evaluating our current code structure. For example, it looks like they recommend combining all of your specialized list access classes into a Repository (we've done completely the opposite so far by splitting them into DLLs based on what "solution" the code is for).
Questions: How should I be organizing my code? How do you decide what goes into a solution vs project vs folder or what goes in a namespace? One solution per web part?
I usually organize my code by functionality. Let's say I've got an extranet project and some code for some intranet web parts: I separate it out into an Extranet and an Intranet project, and separate the different classes of code (event receivers, timer jobs, web parts, etc.) into different namespaces.
That way, I can deploy (sub)sets of functionality to different farms if I want to, and when editing code I have everything that depends on one another in the same place :)
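Roughly like this (the Company.* project and namespace names are made up):

    // Project Company.Intranet -> Company.Intranet.dll, deployable on its own.
    namespace Company.Intranet.WebParts { /* web part classes */ }
    namespace Company.Intranet.EventReceivers { /* event receiver classes */ }
    namespace Company.Intranet.TimerJobs { /* timer job classes */ }

    // Project Company.Extranet -> Company.Extranet.dll, same internal layout.
    namespace Company.Extranet.WebParts { /* web part classes */ }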
When and where do you decide to split a large Visual Studio project into multiple smaller projects? When it can be reusable? When the project is too big? (But how big is too big?)
And when you do split the project, do you:
group by database tables
group by similar functionality
other...?
Pros of many projects:
Easier to isolate code for unit testing. I like to isolate code that has a dependency on a big external server, for example: code that talks to the SMTP server gets its own assembly, code that talks to the database gets its own assembly, code that talks to the web server, and code that is pure business logic like validations (see the sketch after these lists).
Pros of few projects:
Visual Studio goes faster.
Some developers just don't get your vision about dividing up responsibilities and will start putting classes everywhere, so you end up with the pain of extra projects and the benefits of putting everything into one project.
Each project has a configuration, and when you make a decision about project configuration, you often have to make the same change everywhere, such as setting or changing the strong name key.
Pros of many Solutions
You hit the practical limit on the number of projects per solution later.
Only the stuff in your current solution gets compiled every time you hit F5.
If the project isn't expected to change in the life of your application, why re-compile it over and over? Call it done and move it to its own solution.
Cons of many Solutions
It's up to you to work out the dependencies between solutions and manually compile the dependencies first. This leads to complicated build scripts.
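To make the isolation point from the first list concrete, here is a minimal sketch (all names hypothetical): the business logic depends only on a small interface, so the assembly that actually talks to the SMTP server never has to be loaded in a unit test.

    // In the business logic assembly: no SMTP dependency, trivial to unit test.
    public interface IMailSender
    {
        void Send(string to, string subject, string body);
    }

    public class WelcomeService
    {
        private readonly IMailSender _mail;

        public WelcomeService(IMailSender mail)
        {
            _mail = mail;
        }

        public void SendWelcome(string email)
        {
            _mail.Send(email, "Welcome", "Thanks for signing up.");
        }
    }

    // In a separate assembly: the only code that touches the real server.
    public class SmtpMailSender : IMailSender
    {
        public void Send(string to, string subject, string body)
        {
            System.Net.Mail.SmtpClient client =
                new System.Net.Mail.SmtpClient("mail.example.com");
            client.Send(new System.Net.Mail.MailMessage(
                "app@example.com", to, subject, body));
        }
    }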
Projects should be cohesive: the logic should be related and should accomplish a similar goal.
This answer will depend on the size of the product you are supporting. In general we organize our projects along domain and logic, and we divide those even further. The more you divide, the more organized you must be, or you are going to hit the dreaded circular dependency issue.
When I do choose to break up a project, it is when it grows too large or when two areas become too similar.
When complexity is rising, I do not split by tables; I generally split by functionality.
Reusability is another excellent occasion to reduce lines of code, as well as to introduce a new project. However, be careful how many "utility" libraries you introduce, because they do have an impact on readability and understandability.
I do not think there is a line in the sand that says if you hit 3k SLOC you have too much. It is all contextual.
I always have several projects (and therefore a solution), instead of one project with all of my source in it.
In some cases it is unavoidable, because you are using an open source library and want to be able to debug it. But more pragmatically, I typically have my applications provide functionality via plugins. This allows me to change the behavior, or offer user-selectable behavior, at runtime. In the non-plugin case, it allows you to update one portion of your program without updating everything. There are also cases where you can ship the main application alone and only download the modules/assemblies when they are needed.
One other reason is that you can create smaller test apps to exercise an assembly, rather than building a very large solution and potentially requiring a user to perform several (irrelevant) GUI operations before even reaching the part you want to test. And this isn't just a testing concern: maybe you have less-savvy users in your organization who only want to be presented with the bits that concern them.
When the overall purpose of the project remains the same but the number of classes is becoming large, I tend to create folders and namespaces to better group functionality within the project. Classes that are coupled to each other go in the same folder/namespace, so that if I need to understand a given class, the related classes are nearby in the Solution Explorer. I usually only create new projects if I realize that a particular piece of functionality is very different in purpose, or if there is a common dependency between existing projects.
I usually wind up with a few relatively small framework projects that define interfaces for loose coupling between the other projects, plus larger projects for the different types of concrete functionality. That is always at least one project for the UI and one project for logic and data (often split into two projects if the data layer becomes very large in its own right).
I move code to a new project if it has general functionality (theoretically) usable by other projects too. If the project is large because it represents a complex problem, then namespaces provide a great way to bring order to the code. Here you can, for example, introduce a (sub-)namespace for each SQL table, etc.
I am designing an HR system (desktop-based) for a mid-size organization. The thing is, I have all the tables designed and was planning on using the ORM designer in VS2008 to generate the entity classes (this is the first time I have worked with an ORM; in fact, this is my first "big" project). I wanted to build the app with 3 layers (one of the programmers at the company suggested not 3 but 4 or 5 layers), but after reading quite a lot of blog entries and a lot of questions here, I've realized that this is not easy to do with LINQ to SQL, because of how the DataContext works and how difficult it is to pass objects between layers using LINQ to SQL.
Probably I'll just use the entity classes generated by the VS2008 ORM designer and add any validation and business logic in partial classes. But that would be 2 layers, would it not? The app will be used by about 10 users, so I don't think the 2-layer approach is a big issue for now.
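For example, if the designer generates an Employee entity (a hypothetical name here), I understand the validation could live in a partial class beside it; LINQ to SQL calls the generated OnValidate hook during SubmitChanges:

    using System;
    using System.Data.Linq;

    // The other half of this class is in the designer-generated file,
    // which declares the OnValidate partial method.
    public partial class Employee
    {
        // Called by the DataContext before the change is persisted.
        partial void OnValidate(ChangeAction action)
        {
            if (action == ChangeAction.Insert || action == ChangeAction.Update)
            {
                // Name is assumed to be a mapped column on this entity.
                if (String.IsNullOrEmpty(this.Name))
                    throw new InvalidOperationException("Employee name is required.");
            }
        }
    }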
In the future, a web-based front-end will be developed so candidates can apply for jobs online. I want to make it as scalable as possible. But the truth is I don't have a lot of time to waste making a decision; time's running out, hehe.
Having said all that, should I just use the entities generated by the VS2008 ORM?
So any suggestion or idea would be greatly appreciated. Thanks.
You're chewing over quite a lot with your line of questioning here. (Is there a concrete question hidden in there somewhere?)
With layers, I assume you mean physical boundaries, i.e. application, app/SOA/WCF server, data layer that lives on the SOA server, and a database somewhere.
Designing for the future might seem like a good idea, but DO make sure that there WILL be a need for all those layers somewhere down the line. Essentially, you do not need a WCF/SOA-based approach if you're not exposing your application over the internet at some point. A web front-end can solve the same problem in many cases.
I'm not saying you will not need those layers at all, but you might not. If you really do, seams are your friend. You need to make "cut points" where you can define your boundaries. I commonly use the repository pattern to decouple data access methodologies, with plain objects (POCOs) and interfaces that are persisted via technologies such as NHibernate. Using POCOs also makes it MUCH easier to transfer those objects over the wire at a later point, either standalone or as part of messages.
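A bare sketch of that seam (the names are illustrative): callers see only the POCO and the repository interface, while the NHibernate-backed implementation stays behind it and can later be replaced or moved behind a service.

    // Plain object with no persistence dependencies; members are virtual
    // so NHibernate can proxy the class for lazy loading.
    public class Candidate
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // The seam: the application depends on this, not on NHibernate.
    public interface ICandidateRepository
    {
        Candidate GetById(int id);
        void Save(Candidate candidate);
    }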
Creating service interfaces for those calls can solidify your boundaries. When you are ready to move across machine/physical boundaries, you simply realize those boundaries in the service implementations.
It sure sounds like a dangerous way to go: creating the tables first, then the domain, and finally the GUI.
I must admit I am no ORM expert, but the generated classes I've seen look more like data objects than domain classes. I would say you need another layer to stop all the logic from ending up in the GUI.