I got into an argument with a co-worker about the use of LINQ to Objects (IEnumerable, not IQueryable) in our C# code. I was using LINQ, and he said that we shouldn't be using an external vendor's (Microsoft) code in our code, but that we should wrap it ourselves in our own layer of abstraction.
Now I understand this methodology for use where you've got a no-name third-party dll that may go out of business next week, or when you're dealing with database calls (i.e. returning a common data provider, rather than a SQL Server- or Oracle-specific one), but in my mind the LINQ syntax is too pretty/elegant/readable for Microsoft to abandon in the next 10 years. It's about as likely to be dropped as the String.Format("Hello {0}", firstName) functionality.
I could give up arguing and implement our own LINQ library that calls the standard LINQ methods under the covers, but isn't this overdoing it? Besides, I could only wrap the extension methods; I have no idea how I would wrap this:
from e in employees
select new { e.Name, e.Id };
What would your argument be, for or against using LINQ to objects (the IEnumerable extension methods)?
Your friend is wrong. LINQ was the flagship feature of C# 3.0, and will not be leaving the language. There's always a chance MS will cease to support C# (though I severely doubt that), but as long as there's a C#, there will be LINQ.
Also, consider in which assembly the LINQ-to-objects extension methods are housed: System.Core.
He is completely wrong.
That argument only applies when dealing with fungible components, such as database platforms.
Microsoft is extremely careful to avoid making breaking changes in .Net; there is no way that they would drop LINQ.
To answer your other points, query comprehension syntax (from x in y) is a compiler feature which is transformed into method calls.
If you write your own methods, the compiler will happily use them.
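To see that translation concretely, here is a minimal sketch (the Employee type is invented for illustration): the query expression and the hand-written Select call produce identical results, because the former is compiled into the latter. If you defined your own Select extension method with a compatible signature, query syntax would bind to it instead.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Employee { public string Name; public int Id; }

class Demo
{
    public static bool QueriesMatch()
    {
        var employees = new List<Employee>
        {
            new Employee { Name = "Ann", Id = 1 },
            new Employee { Name = "Bob", Id = 2 }
        };

        // Query syntax...
        var q1 = from e in employees
                 select new { e.Name, e.Id };

        // ...is compiled into exactly this method-syntax call:
        var q2 = employees.Select(e => new { e.Name, e.Id });

        // Anonymous types with the same members compare by value.
        return q1.SequenceEqual(q2);
    }

    static void Main() => Console.WriteLine(QueriesMatch());
}
```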
There are already third-party implementations of the LINQ to Objects methods, such as LINQBridge (for use with .NET 2.0, which predates LINQ) or EduLINQ (written for educational purposes).
I was using LINQ, and he said that we shouldn't be using an external vendor's (Microsoft) code in our code, but that we should wrap it ourselves in our own layer of abstraction.
It's a core part of the language (i.e., part of the C# language specification). It'll be around as long as C# is around. Changes to it would be breaking changes, and would be a massive cost to Microsoft's customers. They are not going to do this.
Wrapping a layer dependency like ADO.NET or a third-party library: idea worth considering.
Wrapping LINQ to Objects (core part of framework and game-changer): bad idea.
Maybe your co-worker simply feels that this is more correct:
var result = employees.Select(e => new { e.Name, e.Id });
Regardless, I think your co-worker's worries are completely unfounded.
I'm now porting a library that uses expressions to a .NET Core application, and I've run into a problem: all my logic is based on LambdaExpression.CompileToMethod, which is simply missing in .NET Core. Here is some sample code:
public static MethodInfo CompileToInstanceMethod(this LambdaExpression expression, TypeBuilder tb, string methodName, MethodAttributes attributes)
{
...
var method = tb.DefineMethod($"<{proxy.Name}>__StaticProxy", MethodAttributes.Private | MethodAttributes.Static, proxy.ReturnType, paramTypes);
expression.CompileToMethod(method);
...
}
Is it possible to rewrite this somehow to make it possible to generate methods using Expressions? I can already do it with Emit, but it's quite complex and I'd like to avoid it in favor of high-level Expressions.
I tried to use var method = expression.Compile().GetMethodInfo(); but in this case I get an error:
System.InvalidOperationException : Unable to import a global method or field from a different module.
I know that I can emit IL manually, but I need to convert an Expression into a MethodInfo bound to a specific TypeBuilder, rather than building a DynamicMethod from it myself.
It is not an ideal solution, but it is worth considering if you don't want to write everything from scratch:
If you look at the CompileToMethod implementation, you will see that under the hood it uses the internal LambdaCompiler class.
If you dig even deeper, you will see that LambdaCompiler uses System.Reflection.Emit to convert lambdas into MethodInfo.
System.Reflection.Emit is supported by .NET Core.
Taking this into account, my suggestion is to try to reuse the LambdaCompiler source code. You can find it here.
The biggest problems with this solution are that:
LambdaCompiler is spread among many files so it may be cumbersome to find what is needed to compile it.
LambdaCompiler may use some API which is not supported by .NET Core at all.
A few additional comments:
If you want to check which API is supported by which platform use .NET API Catalog.
If you want to see differences between .NET standard versions use this site.
Disclaimer: I am the author of the library.
I have created an Expression Compiler, which has an API similar to that of LINQ Expressions, with slight changes. https://github.com/yantrajs/yantra/wiki/Expression-Compiler
var a = YExpression.Parameter(typeof(int));
var b = YExpression.Parameter(typeof(int));
var exp = YExpression.Lambda<Func<int,int,int>>("add",
YExpression.Binary(a, YOperator.Add, b),
new YParameterExpression[] { a, b });
var fx = exp.CompileToStaticMethod(methodBuilder);
Assert.AreEqual(1, fx(1, 0));
Assert.AreEqual(3, fx(1, 2));
This library is part of a JavaScript compiler we have created. We are actively developing it, and we have added support for generators and async/await in JavaScript, so instead of using the Expression Compiler, you can create debuggable JavaScript code and run C# code from it easily.
I ran into the same issue when porting some code to netstandard. My solution was to compile the lambda to a Func using the Compile method, store the Func in a static field that I added to my dynamic type, then in my dynamic method I simply load and call the Func from that static field. This allows me to create the lambda using the LINQ Expression APIs instead of reflection emit (which would have been painful), but still have my dynamic type implement an interface (which was another requirement for my scenario).
Feels like a bit of a hack, but it works, and is probably easier than trying to recreate the CompileToMethod functionality via LambdaCompiler.
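A minimal sketch of that trick, assuming .NET Core (all type, field, and method names here are invented for illustration): compile the lambda with Compile(), stash the delegate in a public static field on the dynamic type, and emit a method body that just loads the field and invokes it.

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;
using System.Reflection.Emit;

class Demo
{
    public static int Run()
    {
        // The lambda we would have passed to CompileToMethod.
        Expression<Func<int, int>> expr = x => x * 2;
        Func<int, int> compiled = expr.Compile();

        var asm = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("Dyn"), AssemblyBuilderAccess.Run);
        var mod = asm.DefineDynamicModule("DynMod");
        var tb = mod.DefineType("Wrapper", TypeAttributes.Public);

        // Static field that will hold the compiled delegate.
        var field = tb.DefineField("_impl", typeof(Func<int, int>),
            FieldAttributes.Public | FieldAttributes.Static);

        // public static int Double(int x) => _impl(x);
        var mb = tb.DefineMethod("Double",
            MethodAttributes.Public | MethodAttributes.Static,
            typeof(int), new[] { typeof(int) });
        var il = mb.GetILGenerator();
        il.Emit(OpCodes.Ldsfld, field);
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Callvirt, typeof(Func<int, int>).GetMethod("Invoke"));
        il.Emit(OpCodes.Ret);

        var type = tb.CreateTypeInfo().AsType();
        // Store the delegate in the baked type's static field.
        type.GetField("_impl").SetValue(null, compiled);

        return (int)type.GetMethod("Double").Invoke(null, new object[] { 21 });
    }

    static void Main() => Console.WriteLine(Run()); // 42
}
```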
Attempting to get LambdaCompiler working on .NET Core
Building on Michal Komorowski's answer, I decided to give porting LambdaCompiler to .NET Core a try. You can find my effort here (GitHub link). The fact that the class is spread over multiple files is honestly one of the smallest problems here. A much bigger problem is that it relies on internal parts in the .NET Core codebase.
Quoting myself from the GitHub repo above:
Unfortunately, it is non-trivial because of (at least) the following issues:
AppDomain.CurrentDomain.DefineDynamicAssembly is unavailable in .NET Core; AssemblyBuilder.DefineDynamicAssembly replaces it. This SO answer describes how it can be used.
Assembly.DefineVersionInfoResource is unavailable.
Reliance on internal methods and properties, for example BlockExpression.ExpressionCount, BlockExpression.GetExpression, BinaryExpression.IsLiftedLogical etc.
For the reasons given above, trying to make this work as a standalone package is unlikely to be fruitful. The only realistic way to get this working would be to include it in .NET Core proper.
This, however, is problematic for other reasons. I believe licensing is one of the stumbling blocks here. Some of this code was probably written as part of the DLR effort, which is itself Apache-licensed. As soon as you have contributions from 3rd-party contributors in the code base, relicensing it (to MIT, like the rest of the .NET Core codebase) becomes more or less impossible.
Other options
I think your best bet at the moment, depending on the use case, is one of these two:
Use the DLR (Dynamic Language Runtime), available from NuGet (Apache 2.0-licensed). This is the runtime that powers IronPython, which is, to the best of my knowledge, the only actively maintained DLR-powered language out there. (Both IronRuby and IronJS seem to be effectively abandoned.) The DLR lets you define lambda expressions using Microsoft.Scripting.Ast.LambdaBuilder; this doesn't seem to be directly used by IronPython, though. There is also the Microsoft.Scripting.Interpreter.LightCompiler class, which seems quite interesting.
The DLR unfortunately seems to be quite poorly documented. I think there is a wiki referred to by the CodePlex site, but it's offline (it can probably be retrieved by downloading the archive from CodePlex, though).
Use Roslyn to compile the (dynamic) code for you. This probably has a bit of a learning curve as well; I am myself not very familiar with it yet unfortunately.
This seems to have quite a lot of curated links, tutorials etc: https://github.com/ironcev/awesome-roslyn. I would recommend this as a starting point. If you're specifically interested in building methods dynamically, these also seem worth reading:
https://gunnarpeipman.com/using-roslyn-to-build-object-to-object-mapper/
http://www.tugberkugurlu.com/archive/compiling-c-sharp-code-into-memory-and-executing-it-with-roslyn
Here are some other general Roslyn reading links. Most of these are focused on analyzing C# code (which is one of the use cases for Roslyn), but Roslyn can be used to compile C# code to IL as well.
The .NET Compiler Platform SDK: https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/
Get started with syntax analysis: https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/get-started/syntax-analysis
Tutorial: Write your first analyzer and code fix: https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tutorials/how-to-write-csharp-analyzer-code-fix
There is also the third option, which is probably uninteresting for most of us:
Use System.Reflection.Emit directly, to generate the IL instructions. This is the approach used by e.g. the F# compiler.
So, I have this new project. My company uses the SalesForce.com cloud to store information about day-to-day operations. My job is to write a new application that will, among other things, more seamlessly integrate the CRUD operations of this data with existing in-house application functionality.
The heart of the Salesforce WSDL API is a set of "query()" web methods that take the query command as a string. The syntax of the query is SQL-ish, but not quite (they call it SOQL). I'm not a fan of "magic strings", so I'd like to use LINQ in the codebase, and translate the IQueryable into the SOQL query I need in the wrapper for the service. It's certainly possible (L2E, L2Sql), but I'd like to know if there's a shortcut, 'cause if I say it'll take more than a day or two to roll my own, I'll be "encouraged" to find another method (most likely one named method per general-use query, which was the approach used in older apps). If I succeed in making a general-purpose SOQL parser, we can use it in several other upcoming apps, and I'll be a hero. Even if I make a simple one that only supports certain query structures, it'll go a long way by allowing me to proceed with the current project in a LINQ-y way, and I can expand on it in my free time.
Here are the options as I see it:
Look harder for an existing Linq2SOQL provider (My Google-fu is failing me here, or else there simply isn't one; the only .NET wrapper only mentions Linq as a nice-to-have).
Build an expression tree parser. It needs to support, at least, the Select and Where method calls, and needs to either parse lambdas or manipulate their method bodies to get the operations and projections needed. This seems to be a rather massive task, but like I said, it's certainly possible.
Wrap the service in Linq2Sql or a similar existing Linq provider that will allow me to extract a close-enough query string, polish it up and pass it to the service. There must be dozens out there (though none that just drop in, AFAIK).
Call Expression.ToString() (or Expression.DebugView), and manipulate that string to create the SOQL query. It'll be brittle, it'll be ugly (behind the scenes), and it will support only what I'm explicitly looking for, but it will provide a rudimentary translation that will allow me to move on.
What do you guys think? Is building a Linq parser more than a two-day task for one guy? Would a bodged-up solution involving an existing Linq provider possibly do it? Would it be terrible to chop up the expression string and construct my query that way?
EDIT: Thanks to Kirk for the grounding. I took some more looks at what I'd be required to do for even a basic SOQL parser, and it's just beyond the scope of getting working application code written on any feasible schedule. For instance, I'd have to build a select list from either the Select() method lambda or a default one from all known columns on my WSDL object, a task I hadn't even thought of (I was focusing more on the Where parsing). I'm sure there are many other "unknown unknowns" that could turn this into a pretty big deal. I found several links that show the basics of writing a LINQ provider, and though they all try to make it simple, it's just not going to be feasible time-wise right now. I'll structure my repository, for now, using named methods that encapsulate named queries (a constant class of formattable query strings should reduce the amount of head-scratching in maintenance). Not perfect, but far more feasible. If and when a Linq2SOQL provider gets off the ground, either in-house or open-source, we can refactor.
For others looking for Linq provider references, here are those helpful links I found:
Building a Linq Provider
Walkthrough: Creating an IQueryable LINQ Provider
Linq: Building an IQueryable provider series - in 17 parts! <-- this one, though long and involved, has a lot of real in-depth explanations and is good for "first-timers".
Let's take them one at a time:
Look harder for an existing Linq2SOQL provider (My Google-fu is failing me here, or else there simply isn't one; the only .NET wrapper only mentions Linq as a nice-to-have).
Yeah, I doubt one exists already, but hopefully you can find one.
Build an expression tree parser. It needs to support, at least, the Select and Where method calls, and needs to either parse lambdas or manipulate their method bodies to get the operations and projections needed. This seems to be a rather massive task, but like I said, it's certainly possible.
This is absolutely the way to go if you really are serious about this in the long-run.
Wrap the service in Linq2Sql or a similar existing Linq provider that will allow me to extract a close-enough query string, polish it up and pass it to the service. There must be dozens out there (though none that just drop in, AFAIK).
What do you mean by "drop in"? You can easily get the SQL straight from L2S.
Call Expression.ToString() (or Expression.DebugView), and manipulate that string to create the SOQL query. It'll be brittle, it'll be ugly (behind the scenes), and it will support only what I'm explicitly looking for, but it will provide a rudimentary translation that will allow me to move on.
I would strongly discourage you from this approach, as it will be at least as difficult as parsing the expression trees properly. If anything, in order to use this, you'd have to first put the parsed strings into a proper object model -- i.e. the existing expression trees you're starting with.
Really, you should think about building a query provider and doing this right. I think two days is a bit of a stretch though to get something working in even a primitive sense, though it might be possible. IMO, you should research it some at home and toy around with it so you have some familiarity with the basic pieces and parts. Then you might barely be able to get some usable queries going after two days.
Honestly though, fully implementing this kind of project is really in the realm of several weeks, if not months -- not days.
If that's too much work, you might consider option 3. I'm no expert on SOQL, so no idea what kind of work would be involved transforming ordinary SQL queries into SOQL queries. If you think it's rather algorithmic and reliable, that might be the way to go.
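For a feel of what the expression-tree route involves, here is a toy walker that translates a tiny subset of predicates into a SOQL-ish WHERE clause. Everything here (the Account type, the supported node set) is invented for illustration; a real provider has to handle projections, null semantics, method calls, captured variables, and much more, which is exactly why the realistic estimate runs to weeks.

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical Salesforce object for illustration.
class Account { public string Name; public int Employees; }

static class SoqlWhere
{
    // Translates binary comparisons over members and constants only.
    public static string Translate<T>(Expression<Func<T, bool>> predicate)
        => Visit(predicate.Body);

    static string Visit(Expression e)
    {
        switch (e)
        {
            case BinaryExpression b:
                return $"{Visit(b.Left)} {Op(b.NodeType)} {Visit(b.Right)}";
            case MemberExpression m when m.Expression is ParameterExpression:
                return m.Member.Name;
            case ConstantExpression c:
                return c.Value is string s ? $"'{s}'" : c.Value.ToString();
            default:
                throw new NotSupportedException(e.NodeType.ToString());
        }
    }

    static string Op(ExpressionType t)
    {
        switch (t)
        {
            case ExpressionType.Equal: return "=";
            case ExpressionType.NotEqual: return "!=";
            case ExpressionType.GreaterThan: return ">";
            case ExpressionType.LessThan: return "<";
            case ExpressionType.AndAlso: return "AND";
            default: throw new NotSupportedException(t.ToString());
        }
    }
}

class Demo
{
    static void Main()
    {
        string clause = SoqlWhere.Translate<Account>(a => a.Employees > 10);
        Console.WriteLine("SELECT Name FROM Account WHERE " + clause);
    }
}
```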
I am currently developing an application where you can create "programs" with it without writing source code, just click&play if you like.
Now the question is how do I generate an executable program from my data model. There are many possibilities but I am not sure which one is the best for me. I need to generate assemblies with classes and namespace and everything which can be part of the application.
CodeDOM class: I heard of lots of limitations and bugs of this class. I need to create attributes on method parameters and return values. Is this supported?
Create C# source code programmatically and then call CompileAssemblyFromFile on it: This would work since I can generate any code I want and C# supports most CLR features. But wouldn't this be slow?
Use the reflection ILGenerator class: I think with this I can generate every possible .NET code. But I think this is much more complicated and error prone than the other approaches?
Are there other possible solutions?
EDIT:
The tool is general-purpose, for developing applications; it is not restricted to a specific domain. I don't know if it can be considered a visual programming language. The user can create classes, methods, method calls, and all kinds of expressions. It won't be very limiting, because you should be able to do most things that are allowed in real programming languages.
At the moment lots of things must still be written by the user as text, but the goal, in the end, is that nearly everything can be clicked together.
You may find it rewarding to look at the Dynamic Language Runtime, which is more or less designed for creating high-level languages based on .NET.
It's perhaps also worth looking at some of the previous Stack Overflow threads on Domain Specific Languages which contain some useful links to tools for working with DSLs, which sounds a little like what you are planning although I'm still not absolutely clear from the question what exactly your aim is.
Most things "click and play" should be simple enough just to stick some pre-defined building-block objects together (probably using interfaces on the boundaries). Meaning: you might not need to do dynamic code generation - just "fake it". For example, using property-bag objects (like DataTable etc, although that isn't my first choice) for values, etc.
Another option for dynamic evaluation is the Expression class; especially in .NET 4.0, this is hugely versatile, and allows compilation to a delegate.
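A small sketch of that option: build an expression tree at runtime and compile it to a delegate with Compile(). The particular expression shown is just an example.

```csharp
using System;
using System.Linq.Expressions;

class Demo
{
    // Builds (a, b) => a * b at runtime rather than writing it in source.
    public static Func<int, int, int> BuildMultiply()
    {
        var a = Expression.Parameter(typeof(int), "a");
        var b = Expression.Parameter(typeof(int), "b");
        var body = Expression.Multiply(a, b);
        var lambda = Expression.Lambda<Func<int, int, int>>(body, a, b);

        // Compilation produces an ordinary, fast delegate.
        return lambda.Compile();
    }

    static void Main()
    {
        var multiply = BuildMultiply();
        Console.WriteLine(multiply(6, 7)); // 42
    }
}
```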
Do the C# source generation and don't care about speed until it matters. The C# compiler is quite quick.
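On the classic .NET Framework, this approach is only a few lines with CodeDOM. A sketch, assuming .NET Framework (CompileAssemblyFromSource is not supported on .NET Core, where you would use Roslyn instead); the generated class and method names are arbitrary:

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class Demo
{
    public static int CompileAndAdd(int x, int y)
    {
        // The "data model" rendered as C# source, however you build it.
        string source = @"
            public static class Generated
            {
                public static int Add(int a, int b) { return a + b; }
            }";

        var options = new CompilerParameters { GenerateInMemory = true };
        using (var provider = new CSharpCodeProvider())
        {
            CompilerResults results =
                provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("compilation failed");

            var type = results.CompiledAssembly.GetType("Generated");
            var add = type.GetMethod("Add");
            return (int)add.Invoke(null, new object[] { x, y });
        }
    }

    static void Main() => Console.WriteLine(CompileAndAdd(2, 3)); // 5
}
```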
When I wrote a dynamic code generator, I relied heavily on System.Reflection.Emit.
Basically, you programmatically create dynamic assemblies and add new types to them. These types are constructed using the Emit constructs (properties, events, fields, etc.). When it comes to implementing methods, you'll have to use an ILGenerator to pump out MSIL op-codes into your method. That sounds super scary, but you can use a couple of tools to help:
A pre-built sample implementation
ILDasm to inspect the op-codes of the sample implementation.
It depends on your requirements; CodeDOM would certainly be the best fit for a "program" stored in a "data model".
However, it's unlikely that option 2 will be measurably slower in comparison with any other approach.
I would echo others in that 1) the compiler is quick, and 2) "Click and Play" things should be simple enough so that no single widget added to a pile of widgets can make it an illegal pile.
Good luck. I'm skeptical that you can achieve point (2) for anything but really toy-level programs.
As a fairly junior developer, I'm running into a problem that highlights my lack of experience and the holes in my knowledge. Please excuse me if the preamble here is too long.
I find myself on a project that involves my needing to learn a number of new (to me) technologies, including LINQ (to OBJECTS and to XML for purposes of this project) among others. Everything I've read to this point suggests that to utilize LINQ I'll need to fully understand the following (Delegates, Anonymous Methods and Lambda Expressions).
OK, so now comes the fun. I've CONSUMED delegates in the past as I have worked with the .NET event model, but the majority of the details have been hidden from me (thanks Microsoft!). I understand that on a basic level, delegate instances are pointers to methods (a gross over-simplification, I know).
I understand that an anonymous method is essentially an in-line unnamed method generally (if not exclusively) created as a target for a delegate.
I also understand that lambdas are used in varying ways to simplify syntax and can be used to point a simple anonymous method to a delegate.
Pardon me if any of my descriptions are WAY off here; this is the basic level to which I understand these topics.
So, the challenge:
Can anyone tell me if, at least on a basic level, my understanding of these items is even close? I'm not looking for complex esoteric minutiae, just the basics (for now).
To what degree do I need to truly understand these concepts before applying LINQ in a project to reasonable effect? I want to understand it fully and am willing to spend the time. I simply may not HAVE the time to fully grok all of this stuff before I need to produce some work.
Can anyone point me to some good articles that explain these subjects and apply them to "real world" examples so that I can get my head around the basics of the topics and application of them? What I mean by real world, is how might I use this in the context of "Customers and Invoices" rather than abstract "Vectors and Shapes" or "Animals and Cows". The scenario can be somewhat contrived for demonstration purposes, but hopefully not strictly academic. I have found a number of examples on-line and in books, but few seem to be "Plain English" explanations.
Thank you all in advance for your patience, time and expertise.
Where can I find a good in-depth guide to C# 3?
1) Your knowledge so far seems OK. Lambda expressions are turned into anonymous methods or System.Linq.Expressions.Expressions, depending on context. Since you aren't using a database technology, you don't need to understand expressions (all your lambdas will become anonymous methods). You didn't list extension methods, but those are very important (and easy) to understand. Make sure you see how to apply an extension method to an interface, as all the functionality in LINQ comes from System.Linq.Enumerable, a collection of extension methods on IEnumerable<T>.
2) You don't need a deep understanding of lambdas.
The arrow syntax ( => ) was the biggest hurdle for me. The arrow separates the signature and the body of the lambda expression.
Always remember: LINQ methods are not executed until the results are enumerated.
Watch out for using loop variables in a lambda. This is a side effect of deferred execution that is particularly tricky to track down.
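Both points show up in this short sketch: Where does nothing when it is called, so by the time the queries are enumerated, the shared loop variable already has its final value.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    public static int[] CountsSharedVariable()
    {
        var numbers = new[] { 1, 2, 3, 4 };
        var queries = new List<IEnumerable<int>>();

        for (int i = 1; i <= 2; i++)
            queries.Add(numbers.Where(n => n > i)); // captures the VARIABLE i

        // Nothing has executed yet; i is 3 by the time we enumerate.
        return queries.Select(q => q.Count()).ToArray(); // { 1, 1 }, not { 3, 2 }
    }

    public static int[] CountsCopiedVariable()
    {
        var numbers = new[] { 1, 2, 3, 4 };
        var queries = new List<IEnumerable<int>>();

        for (int i = 1; i <= 2; i++)
        {
            int copy = i; // fresh local per iteration fixes the capture
            queries.Add(numbers.Where(n => n > copy));
        }

        return queries.Select(q => q.Count()).ToArray(); // { 3, 2 }
    }

    static void Main()
    {
        Console.WriteLine(string.Join(",", CountsSharedVariable()));
        Console.WriteLine(string.Join(",", CountsCopiedVariable()));
    }
}
```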
3) Sure, Here are some of my answers that show linq method calls - some with xml.
List splitting
Simple Xml existence search
Xml projection - shape change
1) Those descriptions sound pretty accurate to me. Sometimes anonymous methods and lambda expressions will need to create a new type to put the target of the delegate in, so they can act as closures.
2/3) I would read up a bit until you're happy with delegates, anonymous methods and lambda expressions. I dedicate a chapter to the delegate-related changes in each of C# 2.0 and C# 3.0 in C# in Depth, although of course other books go into detail too. I have an article as well, if that helps.
As for examples - delegates are used for many different purposes. They're all different ways of looking at the same functionality, but they can feel very different:
Providing the code to call when you start a new thread
Reacting to UI events
Providing the filter, selection, ordering etc for a LINQ query
Providing a callback for when an asynchronous operation has finished
If you have any specific situations you'd like an example of, that would be easier to answer.
EDIT: I should point out that it's good news that you're only working with LINQ to Objects and LINQ to XML at the moment, as that means you don't need to understand expression trees yet. (They're cool, but one step at a time...) LINQ to XML is really just an XML API which works nicely with LINQ - from what I remember, the only times you'll use delegates with LINQ to XML are when you're actually calling into LINQ to Objects. (That's very nice to do, admittedly - but it means you can reuse what you've already learned.)
As you've already got C# in Depth, chapters 10 and 11 provide quite a few examples of using lambda expressions (and query expressions which are translated into lambda expressions) in LINQ. Chapter 5 has a few different examples of delegate use.
Read this...
http://linqinaction.net/
...and all your questions will be answered!
I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead.
I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options.
I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks!
-Eric Sipple
Since .NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more.
As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck.
We've moved away from datasets and built our own ORM objects loosely based on CSLA. You can get the same job done with either a DataSet or LINQ or ORM but re-using it is (we've found) a lot easier. 'Less code make more happy'.
I was fed up with DataSets in .NET 1.1; at least they've since been optimised so that they no longer slow down exponentially for large sets.
It was always a rather bloated model - I haven't seen many apps that use most of its features.
SqlDataReader was good, but I used to wrap it in an IEnumerable<T> where the T was some typed representation of my data row.
Linq is a far better replacement in my opinion.
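That wrapper amounts to a short iterator block. A sketch (the Person type and column layout are invented, and a DataTableReader stands in for a SqlDataReader so the example runs without a database):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

// Hypothetical typed representation of a data row.
class Person { public int Id; public string Name; }

static class ReaderExtensions
{
    // Lazily projects each row of the reader into a typed object.
    public static IEnumerable<T> Map<T>(this IDataReader reader,
                                        Func<IDataRecord, T> map)
    {
        using (reader)
        {
            while (reader.Read())
                yield return map(reader);
        }
    }
}

class Demo
{
    public static List<Person> LoadSample()
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Ann");
        table.Rows.Add(2, "Bob");

        return table.CreateDataReader()
            .Map(r => new Person { Id = r.GetInt32(0), Name = r.GetString(1) })
            .ToList();
    }

    static void Main()
    {
        var people = LoadSample();
        Console.WriteLine(people.Count);   // 2
        Console.WriteLine(people[1].Name); // Bob
    }
}
```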
I've been using the Data Transfer Objects pattern (originally from the Java world, I believe), with a SqlDataReader to populate collections of DTOs from the data layer for use in other layers of the application.
The DTOs themselves are very lightweight and simple classes composed of properties with gets/sets. They can be easily serialized/deserialized, and used for databinding, making them pretty well suited to most of my development needs.
I'm a huge fan of SubSonic. A well-written batch/CMD file can generate an entire object model for your database in minutes; you can compile it into its own DLL and use it as needed. Wonderful model, wonderful tool. The site makes it sound like an ASP.NET deal, but generally speaking it works wonderfully just about anywhere if you're not trying to use its UI framework (which I'm moderately disappointed in) or its application-level auto-generation tools.
For the record, here is a version of the command I use to work with it (so that you don't have to fight it too hard initially):
sonic.exe generate /server [servername] /db [dbname] /out [outputPathForCSfiles] /generatedNamespace [myNamespace] /useSPs true /removeUnderscores true
That does it every time ... Then build the DLL off that directory -- this is part of an NAnt project, fired off by CruiseControl.NET -- and away we go. I'm using that in WinForms, ASP.NET, even some command-line utils. This generates the fewest dependencies and the greatest "portability" (between related projects, EG).
Note
The above is now well over a year old. While I still hold great fondness in my heart for SubSonic, I have moved on to LINQ-to-SQL when I have the luxury of working in .NET 3.5. In .NET 2.0, I still use SubSonic. So my new official advice is platform version-dependent. In case of .NET 3+, go with the accepted answer. In case of .NET 2.0, go with SubSonic.
I have used typed and untyped DataSets, DataViewManagers, DataViews, DataTables, DataRows, DataRowViews, and just about anything else you can do with the stack since it first came out, in multiple enterprise projects. It took me a while to get used to how all of it worked. I have written custom components that leverage the stack, as ADO.NET did not quite give me what I really needed. One such component compares DataSets and then updates backend stores. I really know how all of these items work, and those that have seen what I have done have been impressed that I managed to get beyond the feeling that the stack is only useful for demos.
I use ADO.NET binding in Winforms and I also use the code in console apps. I most recently have teamed with another developer to create a custom ORM that we used against a crazy datamodel that we were given from contractors that looked nothing like our normal data stores.
I searched today for a replacement for ADO.NET, and I do not see anything that I should seriously try to learn to replace what I currently use.
DataSets are great for demos.
I wouldn't know what to do with one if you made me use it.
I use ObservableCollection.
Then again, I'm in the client app space (WPF and Silverlight), so passing a DataSet or DataTable through a service is... gross.
DataReaders are fast, since they are a forward only stream of the result set.
I use them extensively but I don't make use of any of the "advanced" features that Microsoft was really pushing when the framework first came out. I'm basically just using them as Lists of Hashtables, which I find perfectly useful.
I have not seen good results when people have tried to make complex typed DataSets, or tried to actually set up the foreign key relationships between tables with DataSets.
Of course, I am one of the weird ones that actually prefers a DataRow to an entity object instance.
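Used that "lists of hashtables" way, a reader loop is only a few lines. A sketch (a DataTableReader stands in for a real SqlDataReader so it runs without a database):

```csharp
using System;
using System.Collections.Generic;
using System.Data;

class Demo
{
    // Materializes a forward-only reader into a list of name/value maps.
    public static List<Dictionary<string, object>> ToRows(IDataReader reader)
    {
        var rows = new List<Dictionary<string, object>>();
        using (reader)
        {
            while (reader.Read())
            {
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.GetValue(i);
                rows.Add(row);
            }
        }
        return rows;
    }

    public static List<Dictionary<string, object>> LoadSample()
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Ann");
        return ToRows(table.CreateDataReader());
    }

    static void Main()
    {
        var rows = LoadSample();
        Console.WriteLine(rows.Count);      // 1
        Console.WriteLine(rows[0]["Name"]); // Ann
    }
}
```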
Pre linq I used DataReader to fill List of my own custom domain objects, but post linq I have been using L2S to fill L2S entities, or L2S to fill domain objects.
Once I get a bit more time to investigate I suspect that Entity Framework objects will be my new favourite solution!
Selecting a modern, stable, and actively supported ORM tool is probably the single biggest boost to productivity just about any project of moderate size and complexity can get. If you're concluding that you absolutely, absolutely, absolutely have to write your own DAL and ORM, you're probably doing it wrong (or you're using the world's most obscure database).
If you're doing raw datasets and rows and what not, spend the day to try an ORM and you'll be amazed at how much more productive you can be w/o all the drudgery of mapping columns to fields or all the time filling Sql command objects and all the other hoop jumping we all once went through.
I love me some Subsonic, though for smaller scale projects along with demos/prototypes, I find Linq to Sql pretty damn useful too. I hate EF with a passion though. :P
I've used typed DataSets for several projects. They model the database well, enforce constraints on the client side, and in general are a solid data access technology, especially with the changes in .NET 2.0 with TableAdapters.
Typed DataSets get a bad rap from people who like to use emotive words like "bloated" to describe them. I'll grant that I like using a good O/R mapper more than using DataSets; it just "feels" better to use objects and collections instead of typed DataTables, DataRows, etc. But what I've found is that if for whatever reason you can't or don't want to use an O/R mapper, typed DataSets are a good solid choice that are easy enough to use and will get you 90% of the benefits of an O/R mapper.
EDIT:
Some here suggest that DataReaders are the "fast" alternative. But if you use Reflector to look at the internals of a DataAdapter (which DataTables are filled by), you'll see that it uses...a DataReader. Typed DataSets may have a larger memory footprint than other options, but I've yet to see the application where this makes a tangible difference.
Use the best tool for the job. Don't make your decision on the basis of emotive words like "gross" or "bloated" which have no factual basis.
I just build my business objects from scratch, and almost never use the DataTable and especially not the DataSet anymore, except to initially populate the business objects. The advantages to building your own are testability, type safety and IntelliSense, extensibility (try adding to a DataSet) and readability (unless you enjoy reading things like Convert.ToDecimal(dt.Rows[i]["blah"].ToString())).
If I were smarter I'd also use an ORM and 3rd party DI framework, but just haven't yet felt the need for those. I'm doing lots of smaller size projects or additions to larger projects.
I NEVER use DataSets. They are big, heavyweight objects only usable (as someone pointed out here) for "demoware". There are lots of great alternatives shown here.