ActiveRecord is normally too limiting. However, I'm in a difficult situation in terms of the views held by each member of the team regarding ORMs. We currently use a very basic ActiveRecord implementation which, I say with regret, is written mostly by hand with some basic code generation. I would like to build a new DAL for us that avoids the limitations of ActiveRecord, so something more DDD. The points I am battling against, however, are these (they are both old-school developers, and I am quite young):
Team Lead Developer
Is in favour of stored procedures, but isn't consistent... some just return a table, e.g. SELECT * FROM Company, and some return joins, e.g. SELECT C.*, O.OtherTableValue FROM Company C... (very frustrating)
Doesn't really know the benefits of some of the latest ORM tools
Won't commit to any tool as it's "too restrictive", and if there are issues, what do you do?
DBA
Doesn't like dynamic SQL
Doesn't like SELECT *
I'm not saying the above are off limits; it's more a matter of convincing them otherwise. I believe we could massively improve our efficiency with the use of an ORM, but it's very difficult to convince them. If I could offer proof in some of these areas I might be able to win them over, even by implementing it under the covers without them knowing initially and then letting them see the benefits.
What advice can you give to help my situation? I believe many developers come across this and cannot choose which architecture they would like to use.
I think your Team Lead needs to either commit to consistency or spend some time on an ORM research project to make a recommendation to use. In other words, an inconsistent and set-in-ways Team Lead shouldn't be in that role.
Secondly, I tend to agree with your DBA for a number of reasons. As long as he/she is flexible enough to know that there are occasions where dynamic SQL will solve the problem much better.
For your specific situation I'd say ask your DBA to lay down the law that stored procs are to be used every time unless justification is provided on a case-by-case basis and enforce this through policy and monitoring. This will address the consistency issue. Perhaps then, with that policy in hand, an ORM looks more enticing than having to hand-code everything to the Team Lead.
The lead developer should listen to the DBA. "SELECT *" is bad. And from the bullet points about the lead developer it sounds like you've got a familiar uphill battle. Sometimes the path of least resistance in that situation is to implement something using an ORM (such as NHibernate) and schedule some kind of demonstration to the team.
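If it helps the demonstration, a minimal NHibernate sketch might look like the following (hedged: it assumes hibernate.cfg.xml and a mapping for a Company entity already exist; the entity name comes from the question):

    using NHibernate;
    using NHibernate.Cfg;

    // NHibernate entities need virtual members so the proxy can intercept them.
    public class Company
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public static class OrmDemo
    {
        public static void Run()
        {
            ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();

            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                // No hand-written SQL and no SELECT *: NHibernate issues a
                // parameterized query for exactly the mapped columns.
                Company company = session.Get<Company>(42);
                company.Name = "Renamed Ltd"; // tracked change, flushed on commit
                tx.Commit();
            }
        }
    }

The point to demonstrate is that the persistence plumbing disappears while the generated SQL stays consistent, which speaks to both the DBA's and the Team Lead's complaints.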
Encourage their input, especially from the lead developer and the DBA and any other naysayers. They might have legitimate gripes that can actually be solved with the tool as you learn more about it. On the other hand, they might just be dead set against it for no good logical reason. If that's the case, it's best to find that out, because it could very well mean that there is no winning this argument against them, because they're not really debating.
What are their objections to using an ORM? If they are not just being obstinate and stubborn, then knowing specifically what they object to lets you address those concerns. Like the others, I think your DBA is correct. But if he is concerned about SQL injection from dynamic SQL, ORMs alleviate that issue somewhat. SELECT * should be grounds for a firing.
I'm of the mind that you should just use something, LINQ to SQL or SubSonic or NHibernate, on something small. Show that development can be faster and cleaner using an ORM.
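To make the "faster and cleaner" claim concrete in such a spike, a LINQ to SQL sketch could look like this (hedged: the table and column names are assumptions based on the Company example above):

    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    [Table(Name = "dbo.Company")]
    public class Company
    {
        [Column(IsPrimaryKey = true)] public int Id { get; set; }
        [Column] public string Name { get; set; }
    }

    public static class OrmSpike
    {
        public static void Run(string connectionString)
        {
            using (var db = new DataContext(connectionString))
            {
                // Translates to a parameterized SELECT of only the mapped
                // columns - no hand-written SQL, no SELECT *.
                var companies = db.GetTable<Company>()
                                  .Where(c => c.Name.StartsWith("A"))
                                  .ToList();
            }
        }
    }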
I'm writing a complex control system for several machines. I'm using C# for convenience, since no true real-time behaviour is required, just fast response.
My question is regarding the sampling of sensors in the physical controlled systems: I want to perform actions depending on their values (e.g. if the temperature drops below X do Action A, if the pressure is higher than Y do action B).
There's the simple loop-with-value-querying approach, and there's the option to implement a clock that periodically (say, every 0.01 seconds) checks the values of about 50 different sensors.
Any finer, more efficient, smarter, more OOP-ish approaches?
Thanks!
Well, your question is very broad, and IMO it is impossible to answer it in the way you expect. The only advice I can give is to learn about object-oriented design, which, in my case, happens to be a continuous process that might never end.
There is some literature I can definitely recommend if you want to do so:
Agile Principles, Patterns and Practices in C# - How can I design Software in an object oriented manner? (probably exactly what you need)
Clean Code - what does code look like that is easily understandable, maintainable and extendable?
Clean Coder - this focuses on the daily issues of developing software: how to work in a team of developers, how to deal with managers, etc. (sounds boring at first, but I was surprised, and it helped me a ton)
All the books are by Robert C. Martin.
Learn about the SOLID principles, why they are important, and how to apply them correctly. Dive into the GoF design patterns, which solve common problems every developer stumbles upon sooner or later. Both of those points are explained in the first book I recommended.
I know this is not what you expected, but given the little detail and the broadness of your question, this is all I can give.
Regarding the polling, the only thing that comes to my mind would be to implement INotifyPropertyChanged and subscribe to the PropertyChanged event if that is possible. This would be an Observer-like approach.
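A minimal sketch of that idea (hedged: Sensor, DoActionA and ReadTemperature are hypothetical names; the polling loop still writes the samples, but the threshold logic moves into decoupled subscribers):

    using System;
    using System.ComponentModel;

    public class Sensor : INotifyPropertyChanged
    {
        private double _value;

        public Sensor(string name) { Name = name; }

        public string Name { get; private set; }

        public double Value
        {
            get { return _value; }
            set
            {
                if (_value == value) return; // notify only on an actual change
                _value = value;
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs("Value"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }

    // Usage: subscribers encode the rules, the sampling clock just sets Value.
    // var temp = new Sensor("Temperature");
    // temp.PropertyChanged += (s, e) => { if (temp.Value < 10.0) DoActionA(); };
    // temp.Value = ReadTemperature(); // hypothetical hardware read, e.g. every 10 ms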
I am trying to follow TDD principles in all my code base. The frontend (MVC) and backend parts are split; the frontend uses its own model objects, while the backend uses database objects, which are then saved to a document database (RavenDb).
This requires a lot of conversion, say from CustomerModel to CustomerData. These are created independently of each other, so the structures might not match. For example, CustomerModel might be flat while CustomerData has a nested object, ContactDetails.
Currently, we implement two methods, say ConvertCustomerModelToCustomerData and ConvertCustomerDataToCustomerModel. These are very similar, but inverses of each other. Apart from this, both methods are also unit-tested. Hence, similar code exists in four places: once for each conversion and once for each unit test.
This is a big headache to maintain, and does not seem right to me. I've tried AutoMapper; however, I found it quite rigid. Also, I could not find a way to unit-test the mapping.
Any ideas would be greatly appreciated.
I think that having well-defined boundaries and anti-corruption layers (see this and this) like the ones you have built is a great way to avoid headaches; bug hunting in a highly coupled application is far worse.
These layers are, for sure, boring to maintain, but I bet that dealing with them is a simple, no-brainer activity.
If you find yourself modifying your entities often (and so having many tests to update), maybe they are not well defined yet, or they have too wide a scope. Why do you need to update them? What's the trigger?
Then, AutoMapper can help you a lot; I agree with the other comments.
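As a hedged sketch of how that could collapse the four code sites (assuming a reasonably recent AutoMapper version, where ReverseMap also handles the unflattening), consider:

    using AutoMapper;

    // One map plus ReverseMap replaces both hand-written Convert... methods;
    // ContactDetails.Email <-> ContactDetailsEmail follows AutoMapper's
    // flattening convention.
    var config = new MapperConfiguration(cfg =>
        cfg.CreateMap<CustomerData, CustomerModel>().ReverseMap());

    // One test instead of many: this throws if any destination member is unmapped.
    config.AssertConfigurationIsValid();

    var mapper = config.CreateMapper();
    CustomerModel model = mapper.Map<CustomerModel>(
        new CustomerData { Name = "Acme", ContactDetails = new ContactDetails { Email = "a@b.c" } });

    // Types mirroring the question: a flat model and a nested data object.
    public class ContactDetails { public string Email { get; set; } }
    public class CustomerData  { public string Name { get; set; } public ContactDetails ContactDetails { get; set; } }
    public class CustomerModel { public string Name { get; set; } public string ContactDetailsEmail { get; set; } }

AssertConfigurationIsValid is the unit-test hook: a single test calling it replaces the per-property assertions in both directions.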
Anyway, without seeing the code it's difficult for me to judge and maybe offer any kind of advice, so feel free to consider this just my humble opinion :)
We are trying to get ReSharper introduced to our company but it would have to be for all developers. Management want us to justify the cost with a business case.
I am unsure how to go about getting proof that ReSharper will benefit the business. What kind of statistics can you get from it?
I am unsure how to go about getting proof that ReSharper will benefit the business.
If they asked for a business case, they're not asking for proof, just some kind of fact-based estimate of the likely return on their investment.
So, for example:
A license costs (say) $250 per developer, a developer costs (say) $50,000 per year.
A developer with Resharper costs 0.5% more than a developer without Resharper.
That gives you a basic financial model - if you get more than a 0.5% productivity gain, then it's worth it; if you get less, it isn't. Some corporates apply a minimum return on investment (ROI) factor - if the factor is 1.2, then you would have to show a 0.6% benefit to get approval. The factor is very unlikely to be more than 3.
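Expressed as a quick calculation (all figures are the illustrative ones above, not real data):

    using System;

    const double licensePerDev  = 250;     // one-off license cost
    const double devCostPerYear = 50_000;  // fully loaded developer cost

    double breakEven     = licensePerDev / devCostPerYear; // 0.005 -> 0.5% gain needed
    double withRoiFactor = breakEven * 1.2;                // 0.006 -> 0.6% with factor 1.2

    Console.WriteLine($"Break-even gain: {breakEven:P1}, with ROI factor 1.2: {withRoiFactor:P1}");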
You could tweak that model - depreciate the license over 3 years, include the procurement costs, changing cost of capital, etc., but a simple, conservative model is likely to have the broadest appeal.
Then all you need is some evidence that you get more than a 0.5% productivity improvement. You could run a benchmark, or a pilot with a small number of developers. Pick some typical tasks and time them with and without ReSharper. There is a 30-day trial version available, so you could run the pilot before you have to purchase.
The PDF on the ReSharper home page claims a 35% productivity increase - you can take that with a pinch of salt, but unless it's exaggerated by a factor of 70, it's still a worthwhile investment. The number of recommendations on the web, and the developers claiming to buy it with their own money, suggest that it isn't a wild exaggeration.
When you present the business case, you might like to illustrate that percentage as a dollar value too.
Developers only spend part of their day in their IDE, so you should probably adjust the expected returns downwards to account for that. The real proportion is probably between 20% and 80%, but the lower end of that range might not be a politically acceptable number to present. What you're interested in is the proportion of a developer's output that is affected by the investment.
I don't have any connection with JetBrains - and I'm answering a question about how to make a business case, not selling licenses! The anecdotal evidence from where I work is that the developers who have used ReSharper have only good things to say about it. In some very specific cases it has saved weeks or months by automating mechanical tasks that have to be applied over a lot of files. The rest of the time it's hard to measure, but since the developers use it all the time, they must be getting some real value out of it.
There's a quality argument too - you could measure this as a productivity increase, or a cost saving, or just an additional argument - depending on how quality issues are perceived at management level in your company.
ReSharper does not track itself in any way that would provide useful statistics. Also, I'm pretty sure no college, company, or consultancy has any sort of meaningful hard data; this is just too complex. I suppose you could measure the time savings from (A) code insertion, (B) refactoring quickly, and (C) getting it right the first time because ReSharper didn't make the mistakes a human would. Just these savings pay for ReSharper soon enough. For a $300 license, all ReSharper has to do is save you 5-6 hours per developer. That's hours.
But the real benefits of ReSharper are impossible to measure:
Since good structure is now as easy to make as bad structure, you do it right!
Your designs are better, because you spend your time thinking about design rather than coding cruft.
Whatever your level, you learn from ReSharper. The refactorings available are those demanded by top-level developers. By using them, you learn these good practices.
Mistakes are more forgiving. If you structure your code poorly, it's easy to fix. I find myself more daring and willing to try new things, because there is less risk. This has resulted in some great code.
I'm afraid your powers-that-be will need to trust you, or trust the testimonials on the web-site, or trust a consultant, or experience ReSharper for themselves. If your managers are not themselves quality developers, you're going to have an uphill battle. I wish you luck.
I bought ReSharper with my own money a few months ago, because I knew the best developers used it (or coderush). And best means they create more maintainable solutions for less time/money. It has surpassed my expectations. Getting code out there quicker and being able to refactor quicker is what I expected. All well and good. What I did not expect was how this would increase the time I had to make the right development decisions and do the right things at the most efficient time. Before there was just not enough time to do things right; now there is.
So it's impossible to tell management whether ReSharper pays for itself 20 times over, or 100, or 500, but I think 20 should be enough.
I know business managers love them some numbers, but the best business case is anecdotal:
It makes developers happy.
True, it does increase productivity, but that's hard to prove. Making developers happy should be enough, since happy developers are more productive. You might want to point out that the static code analysis is built in to it, therefore nudging developers toward writing better code, gently training them to code cleanly.
No statistics, but here's a very good blog article arguing the case for Resharper. Some coworkers and I used some of these justifications to get it bought for us.
EDIT
Changed the link to point to the internet archive version
Basically, it's a tool to reduce development time:
Visualize more problems immediately
Improved coding speed by showing warnings (from info up to error) and allowing developers to fix them by a simple Ctrl + Space
Enforce naming conventions (customizable)
Way better refactoring: not only fewer bugs, but also more operations available; refactoring improves velocity (no refactoring leads to slower and slower development)
Way faster code navigation (meaning opening the desired file location):
camelCase find file/class/symbol, by Ctrl+[Shift]+T
Find where a piece is used in all the source code
Developers can learn something: the auto-correction suggestions usually take into account refactoring tricks and the latest .NET features. It's not just an MS Word-style spell corrector; it will even tell you how you could say the same thing better.
Note: Technically, it can be installed on a single machine. If installed on the machine of the lead dev or project manager, (s)he can review code much faster. Refactoring and integration are some important tasks of a lead dev.
On the downside, I don't believe the advertised gain; that figure assumes a poor baseline development process and an idealistic payoff. What I can tell you is that it made my life better as a developer.
The best business case for ReSharper has to do more with the ability for it, when coupled with StyleCop Add-In (free), to allow a small team of developers to quickly create consistent, coherent, standards-based, maintainable code. Until it was introduced in our organization we had nothing but numerous stylistic approaches, not to mention the defects, bugs and other problems ReSharper helped us identify and correct. It is quite simply the best VS Add-In I've ever encountered.
As an aside, you should pick up GhostDoc (free VS Add-In) as well. It makes documenting your code much easier as well. These two tools together are invaluable.
An issue you may run into is that Management may not so much be looking for a justification for ReSharper, but justification for those things that ReSharper does: refactoring, code cleanup, increased ability to navigate code, unit test support.
If you've got Management that needs to justify something like ReSharper, then they may not yet have "justified" modern software development practices, either.
While I cannot, even from my own organization, provide direct metrics, the tool provides a wealth of assistance and hints for developers.
It will also, when properly used, help an organization produce more consistent code that follows the organization's coding standards.
It will also highlight new features in newer .NET frameworks and gently show developers how they can be applied to their code.
The tool is fantastic in getting rid of some code smells.
Aside from that, once developers become more proficient in its use, it has a great number of navigation features that let them quickly zip through code.
See the ReSharper Benefits For You and Your Business document for a small ROI analysis. Unfortunately it is not backed by any hard data and boils down to the assumption that developer productivity increases by 35 percent when using ReSharper, but it sums up all the arguments for using a productivity solution like ReSharper.
If management just wants a set of numbers put in front of them, I have knocked up a basic app that should give an indication of the potential ROI to be had from purchasing a tool like ReSharper. Even if you don't accept the 35% claim of productivity improvement, a 1% improvement still brings an ROI.
Since yesterday I have been analyzing one of our projects with NDepend (free for most of its features), and the more I use it, the more I doubt the real value of this type of software (code-analysis software).
Let me explain. The tool builds a report about the health of the system and ranks classes by every metric. I thought it would be a good starting point for modifications, but most of the top results are there only because the class has over 100 lines (we have big headers and we use VS comment styles), so it's not a big deal... Then the Afferent Coupling (CA) level is always too high, and this is almost always true for interfaces, which we use a lot... so at the moment I don't see anything wrong, but NDepend doesn't seem to like it (if you have suggestions for improving that, tell me, because I don't see the need). It's the same thing for the metric called NOC (Number of Children): most of my interfaces score too high...
For the moment, the only really useful metric is cyclomatic complexity...
My question is: do you find it worthwhile to analyse code with an automatic code analyser like NDepend? If yes, how do you filter out all the information I mentioned that doesn't really reflect the real health of the system?
Actually, metrics are just one feature of NDepend. Did you try VisualNDepend, which lets you analyze your project in much more depth than the report? Reading your comment, I am almost sure you didn't play with the NDepend UI (standalone or integrated into Visual Studio), which is the best way to filter data about your code base.
I am one of the developers of NDepend and we use it a lot to analyze our own code. Basically we write our own quality rules with Code Rules over LINQ Queries (CQLinq). These rules automatically make sure that we don't have regression on our design. Here you'll find the list of around 200 default code rules.
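For a flavour of what such a rule looks like, here is a minimal CQLinq sketch (the threshold is arbitrary; the properties are NDepend's documented defaults):

    // Flag overly complex methods rather than relying on raw line counts,
    // which big headers and comments can skew.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 20
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }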
Here are some unique features of NDepend and not related to code metrics:
Write CQLinq rules to make sure we don't have architectural flaws, such as dependency cycles between our components, the UI using the DB directly, or the DB entangled with the business objects.
Make sure we don't have problems with test code coverage (for example, a CQLinq rule can ensure that if a class is supposed to be 100% covered, it remains 100% covered in the future).
Enforce side-effects-free code (immutable class/pure methods)
Use the ability to compare two analyses to review code changes since the last release, before doing a new release. More specifically, I enjoy using NDepend to find methods that have been added or refactored since the last release and are not yet 100% covered by tests.
Achieve optimal encapsulation for all our members and types (like knowing which internal methods can be declared private). This is also related to dead-code detection, which NDepend supports as well.
For a complete list of features of NDepend, see here.
I don't necessarily see NDepend results as "good" or "bad" in software engineering; there's always a good reason why an application is designed the way it is. I see it as a report that can help me point out issues with my design, but I have the final word when it comes to deciding whether a method needs to be refactored or is good the way I designed it. In general, don't get too caught up trying to answer whether it's worth it or not - it definitely is. Instead, I would suggest you carefully review the results. This will help you view your design from another perspective, and there may be occasions where you decide that the way you designed it is the best way to achieve your application's goals.
I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead.
I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options.
I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks!
-Eric Sipple
Since .NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more.
As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck.
We've moved away from datasets and built our own ORM objects loosely based on CSLA. You can get the same job done with either a DataSet or LINQ or ORM but re-using it is (we've found) a lot easier. 'Less code make more happy'.
I was fed up with DataSets in .NET 1.1 - at least they've since optimised them so that they no longer slow down exponentially for large sets.
It was always a rather bloated model - I haven't seen many apps that use most of its features.
SqlDataReader was good, but I used to wrap it in an IEnumerable<T>, where T was some typed representation of my data row.
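Roughly like this (a hedged sketch; the Customer type and the query are illustrative):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerReader
    {
        // Wraps the forward-only SqlDataReader in a lazy IEnumerable<T> via
        // yield return, so callers can foreach (or LINQ) over typed rows
        // without ever touching ADO.NET types themselves.
        public static IEnumerable<Customer> ReadAll(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Id, Name FROM Customer", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        yield return new Customer
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1)
                        };
                    }
                }
            }
        }
    }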
LINQ is a far better replacement, in my opinion.
I've been using the Data Transfer Object pattern (originally from the Java world, I believe), with a SqlDataReader to populate collections of DTOs from the data layer for use in other layers of the application.
The DTOs themselves are very lightweight and simple classes composed of properties with gets/sets. They can be easily serialized/deserialized, and used for databinding, making them pretty well suited to most of my development needs.
I'm a huge fan of SubSonic. A well-written batch/CMD file can generate an entire object model for your database in minutes; you can compile it into its own DLL and use it as needed. Wonderful model, wonderful tool. The site makes it sound like an ASP.NET deal, but generally speaking it works wonderfully just about anywhere if you're not trying to use its UI framework (which I'm moderately disappointed in) or its application-level auto-generation tools.
For the record, here is a version of the command I use to work with it (so that you don't have to fight it too hard initially):
sonic.exe generate /server [servername] /db [dbname] /out [outputPathForCSfiles] /generatedNamespace [myNamespace] /useSPs true /removeUnderscores true
That does it every time. Then build the DLL off that directory - this is part of an NAnt project, fired off by CruiseControl.NET - and away we go. I'm using it in WinForms, ASP.NET, even some command-line utilities. This generates the fewest dependencies and the greatest "portability" (between related projects, e.g.).
Note
The above is now well over a year old. While I still hold great fondness in my heart for SubSonic, I have moved on to LINQ to SQL when I have the luxury of working in .NET 3.5. In .NET 2.0, I still use SubSonic. So my new official advice is platform-version dependent: on .NET 3+, go with the accepted answer; on .NET 2.0, go with SubSonic.
I have used typed and untyped DataSets, DataViewManagers, DataViews, DataTables, DataRows, DataRowViews, and just about everything else you can do with the stack since it first came out, in multiple enterprise projects. It took me a while to get used to how all of it worked. I have written custom components that leverage the stack, as ADO.NET did not quite give me what I really needed. One such component compares DataSets and then updates backend stores. I really know how all of these items work, and those who have seen what I have done are very impressed that I managed to get beyond the feeling that it was only useful for demos.
I use ADO.NET binding in WinForms, and I also use the code in console apps. I most recently teamed up with another developer to create a custom ORM to use against a crazy data model that we were given by contractors, which looked nothing like our normal data stores.
I searched today for replacement to ADO.NET and I do not see anything that I should seriously try to learn to replace what I currently use.
DataSets are great for demos.
I wouldn't know what to do with one if you made me use it.
I use ObservableCollection. Then again, I'm in the client app space, WPF and Silverlight, so passing a DataSet or DataTable through a service is... gross.
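A minimal sketch of why it fits that space (Customer is a hypothetical POCO):

    using System;
    using System.Collections.ObjectModel;

    var customers = new ObservableCollection<Customer>();

    // CollectionChanged fires on Add/Remove/etc., which is exactly what
    // WPF/Silverlight bindings listen to - no DataTable required.
    customers.CollectionChanged += (s, e) => Console.WriteLine(e.Action);
    customers.Add(new Customer { Name = "Acme" }); // a bound UI would refresh here

    public class Customer { public string Name { get; set; } }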
DataReaders are fast, since they are a forward-only stream of the result set.
I use them extensively but I don't make use of any of the "advanced" features that Microsoft was really pushing when the framework first came out. I'm basically just using them as Lists of Hashtables, which I find perfectly useful.
I have not seen good results when people have tried to make complex typed DataSets, or tried to actually set up the foreign key relationships between tables with DataSets.
Of course, I am one of the weird ones that actually prefers a DataRow to an entity object instance.
Pre-LINQ, I used a DataReader to fill Lists of my own custom domain objects; post-LINQ, I have been using L2S to fill L2S entities, or L2S to fill domain objects.
Once I get a bit more time to investigate, I suspect that Entity Framework objects will be my new favourite solution!
Selecting a modern, stable, and actively supported ORM tool is probably the single biggest productivity boost just about any project of moderate size and complexity can get. If you're concluding that you absolutely, absolutely, absolutely have to write your own DAL and ORM, you're probably doing it wrong (or you're using the world's most obscure database).
If you're doing raw DataSets and rows and whatnot, spend a day trying an ORM and you'll be amazed at how much more productive you can be without all the drudgery of mapping columns to fields, or all the time spent filling SqlCommand objects, and all the other hoop-jumping we all once went through.
I love me some SubSonic, though for smaller-scale projects, along with demos/prototypes, I find LINQ to SQL pretty damn useful too. I hate EF with a passion, though. :P
I've used typed DataSets for several projects. They model the database well, enforce constraints on the client side, and in general are a solid data access technology, especially with the changes in .NET 2.0 with TableAdapters.
Typed DataSets get a bad rap from people who like to use emotive words like "bloated" to describe them. I'll grant that I like using a good O/R mapper more than DataSets; it just "feels" better to use objects and collections instead of typed DataTables, DataRows, etc. But what I've found is that if, for whatever reason, you can't or don't want to use an O/R mapper, typed DataSets are a solid choice that is easy enough to use and will get you 90% of the benefits of an O/R mapper.
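To make the client-side constraint point concrete, here is a runnable sketch with a plain DataTable (a typed DataSet generates this plumbing, plus typed row classes, for you):

    using System;
    using System.Data;

    var customers = new DataTable("Customers");
    var id = customers.Columns.Add("Id", typeof(int));
    var name = customers.Columns.Add("Name", typeof(string));
    name.AllowDBNull = false;
    customers.PrimaryKey = new[] { id };

    customers.Rows.Add(1, "Acme");

    try
    {
        customers.Rows.Add(1, "Duplicate"); // violates the primary key
    }
    catch (ConstraintException e)
    {
        // Caught on the client, before any database round trip.
        Console.WriteLine(e.Message);
    }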
EDIT:
Some here suggest that DataReaders are the "fast" alternative. But if you use Reflector to look at the internals of a DataAdapter (which DataTables are filled by), you'll see that it uses...a DataReader. Typed DataSets may have a larger memory footprint than other options, but I've yet to see the application where this makes a tangible difference.
Use the best tool for the job. Don't make your decision on the basis of emotive words like "gross" or "bloated" which have no factual basis.
I just build my business objects from scratch, and almost never use the DataTable, and especially not the DataSet, anymore, except to initially populate the business objects. The advantages of building your own are testability, type safety and IntelliSense, extensibility (try adding to a DataSet), and readability (unless you enjoy reading things like Convert.ToDecimal(dt.Rows[i]["blah"].ToString())).
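In sketch form (hedged: Order and its columns are illustrative), the DataRow fiddling happens in exactly one place, and everything downstream gets IntelliSense and compile-time checking:

    using System.Data;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }

        // The only place that touches ADO.NET; Field<T> needs
        // System.Data.DataSetExtensions on older frameworks.
        public static Order FromRow(DataRow row)
        {
            return new Order
            {
                Id = row.Field<int>("Id"),
                Total = row.Field<decimal>("Total")
            };
        }
    }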
If I were smarter I'd also use an ORM and 3rd party DI framework, but just haven't yet felt the need for those. I'm doing lots of smaller size projects or additions to larger projects.
I NEVER use DataSets. They are big, heavyweight objects only usable (as someone pointed out here) for "demoware". There are lots of great alternatives shown here.