We have a Web API library that calls into a Business/Service library (where our business logic lives), which in turn calls a data access library (the repository).
We use this type of data transfer object all over the place. It has a "Payers" property whose value we may have to filter (that is, manipulate). I have implemented that check as shown below, but it feels dirty to me, since I'm calling the same function all over the place. I have thought about either:
Using an attribute filter to handle this, or
Making the RequestData a property on the class and doing the filtering in the constructor.
Any additional thoughts or design patterns that could make this design cleaner?
public class Example
{
private MyRepository _repo = new MyRepository();
private void FilterRequestData(RequestData data)
{
//will call into another class that may or may not alter RequestData.Payers
}
public List<ReturnData> GetMyDataExample1(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample1(data);
}
public List<ReturnData> GetMyDataExample2(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample2(data);
}
public List<ReturnData> GetMyDataExample3(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample3(data);
}
}
public class RequestData
{
public List<string> Payers { get; set; }
}
One way of dealing with repeated code like that is to use a strategy pattern with a Func (and potentially some generics, depending on your specific case). You could refactor this into separate classes and so on, but the basic idea looks like this:
public class MyRepository
{
internal List<ReturnData> GetMyDataExample1(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample2(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample3(RequestData arg) { return new List<ReturnData>(); }
}
public class ReturnData { }
public class Example
{
private MyRepository _repo = new MyRepository();
private List<ReturnData> FilterRequestDataAndExecute(RequestData data, Func<RequestData, List<ReturnData>> action)
{
// call into another class that may or may not alter RequestData.Payers
// and then execute the actual code, potentially with some standardized exception management around it
// or logging or anything else really that would otherwise be repeated
return action(data);
}
public List<ReturnData> GetMyDataExample1(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample1);
}
public List<ReturnData> GetMyDataExample2(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample2);
}
public List<ReturnData> GetMyDataExample3(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample3);
}
}
public class RequestData
{
public List<string> Payers { get; set; }
}
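A nice side effect of this refactoring is that each additional query becomes a one-liner, because the filtering (plus any logging or exception management) stays centralized. For instance, if the repository gained a hypothetical GetMyDataExample4 method, the service side would just be:
public List<ReturnData> GetMyDataExample4(RequestData data)
{
    // One line per new query; the shared filtering pipeline is applied automatically.
    return FilterRequestDataAndExecute(data, _repo.GetMyDataExample4);
}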
This sort of thinking naturally leads to aspect oriented programming.
It's specifically designed to handle cross-cutting concerns (e.g. here, your filter function cuts across your query logic.)
As #dnickless suggests, you can do this in an ad-hoc way by refactoring your calls to remove the duplicated code.
More general solutions exist, such as PostSharp, which gives you a slightly cleaner way of structuring code along aspects. It is proprietary, but I believe the free tier gives you enough to investigate an example like this. At the very least it's interesting to see how it would look in PostSharp, and whether you think it improves the code at all! (It makes heavy use of attributes, which extends your first suggestion.)
(N.B. I'm not practically suggesting installing another library for a simple case like this, but highlighting how these types of problems might be examined in general.)
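For a flavor, here is a rough, untested sketch using PostSharp's OnMethodBoundaryAspect; PayerFilter is a hypothetical stand-in for whatever class actually filters Payers, and details may vary by PostSharp version:
using System;
using System.Collections.Generic;
using PostSharp.Aspects;

[Serializable]
public class FilterPayersAttribute : OnMethodBoundaryAspect
{
    // Runs before the body of any method decorated with [FilterPayers].
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Assumes RequestData is the first parameter of the decorated method.
        var data = args.Arguments.Count > 0 ? args.Arguments[0] as RequestData : null;
        if (data != null)
        {
            PayerFilter.Apply(data); // hypothetical shared filtering class
        }
    }
}

public class Example
{
    private MyRepository _repo = new MyRepository();

    [FilterPayers]
    public List<ReturnData> GetMyDataExample1(RequestData data)
    {
        return _repo.GetMyDataExample1(data);
    }
}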
This is stripped down from a more complex situation.
The goal is to construct several instances of class SubAction, each of which uses an action to alter how it uses its internal data.
Consider:
public class SubAction
{
private Action<SubAction> _DoIt;
public SubAction(Action<SubAction> doIt)
{
_DoIt = doIt;
}
public void DoIt()
{
_DoIt(this);
}
public static Action<SubAction> GetAction1 => (it) => it.DoSomething(it._Data.Value1);
public static Action<SubAction> GetAction2 => (it) => it.DoSomething(it._Data.Value2);
private void DoSomething(string value)
{
// ...
}
// This gets set by code not shown.
protected Data _Data;
}
public class Data
{
public string Value1;
public string Value2;
}
public class SubActionTests
{
static SubActionTests()
{
var actions = new List<SubAction>
{
new SubAction(SubAction.GetAction1),
new SubAction(SubAction.GetAction2),
};
// ... code not shown that calls a method to update each instance's _Data...
foreach (var subAction in actions)
{
subAction.DoIt();
}
}
}
This works, but it seems cumbersome. Specifically:
public Action<SubAction> _DoIt { get; set; }
...
static public Action<SubAction> GetAction1 => (it) => it.DoSomething(it._Data.Value1);
...
new SubAction(SubAction.GetAction1)
If I set DoIt AFTER constructing the object, this could simply be:
public Action DoIt { get; set; }
...
public Action GetAction1 => () => DoSomething(_Data.Value1);
...
var it = new SubAction();
it.DoIt = it.GetAction1;
Which has simpler action declarations:
The actions don't need the <SubAction> type parameter.
The GetAction1, GetAction2, ... declarations are much simpler.
But instance initialization is more verbose, because access to the instance ("it") is needed to set DoIt.
Unfortunately it isn't possible to refer to "it" inside an object initializer, so there doesn't seem to be any way to have BOTH the simpler initialization syntax AND the simpler action-declaration syntax.
Am I overlooking some solution?
ALTERNATIVE: factory method
NOTE: This could be approached quite differently, by using an enum to select between the different actions. But that is a different sort of complication; I'm looking for a way to describe these Actions themselves more succinctly.
Specifically, I'm aware there could be a factory method that takes an enum, to hide the complexity:
public enum WhichAction
{
Action1,
Action2
}
...
public static SubAction CreateSubAction(WhichAction which)
{
var it = new SubAction();
switch (which)
{
case WhichAction.Action1:
it.DoIt = it.GetAction1;
break;
case WhichAction.Action2:
it.DoIt = it.GetAction2;
break;
}
return it;
}
The downside of this is that each added action requires editing in multiple places.
ALTERNATIVE: sub-classes
Another alternative is to create multiple sub-classes.
That is what I was doing originally, but it was even more verbose: multiple lines for each new action.
It also felt like "overkill".
After all, the approach I've got isn't terrible; it's a single line for each new GetAction. It just felt like each of those lines "ought" to be much simpler.
Sadly, from what I understand, I don't think you can make the complexity disappear. You probably need to choose an approach from the ones you suggested (or even other solutions like using a strategy pattern).
Advice
When confronted with a design choice like this, I suggest you optimize for the consumer's side of things. In other words, design your classes to make them simple to use.
In your scenario, that would mean opting for your initial solution or the more complex solutions (factory method, sub-classes, strategy pattern, etc.).
The problem with the second solution is that your object can be in a limbo state when initializing it.
var it = new SubAction();
// Before you set DoIt, the object is not fully initialized.
it.DoIt = it.GetAction1;
Consumers can also forget to set DoIt. When possible, you should probably avoid designs that allow such mistakes.
I'm still curious whether there are syntax alternatives that would streamline what I showed, so I'll accept an answer that shows a simpler syntax; but it turns out that in my situation I can easily avoid the need for those actions.
Discussing with a colleague, they pointed out that my current actions all have a similar pattern: get a string, pass it to SubAction.DoSomething.
Therefore I can simplify those actions down to a property that gets the appropriate string:
public abstract string CurrentValue { get; }
...
public virtual void DoIt()
{
DoSomething(CurrentValue);
}
Given the above, subclasses become so simple they no longer feel like "overkill":
public class SubAction1 : SubAction
{
protected override string CurrentValue => _Data.Value1;
}
...
// usage
new SubAction1()
That is straightforward and highly readable, and trivial to extend when additional conditions are needed.
There will be more complicated situations that do need to override DoSomething. In those, the "real work" dwarfs what I've shown, so it's appropriate to subclass anyway.
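Putting those pieces together, a minimal sketch of the refactored hierarchy (all names from the question; how _Data gets populated is still elided):
using System.Collections.Generic;

public abstract class SubAction
{
    // This gets set by code not shown, as in the original.
    protected Data _Data;

    // Each subclass states which piece of data it operates on.
    protected abstract string CurrentValue { get; }

    public virtual void DoIt()
    {
        DoSomething(CurrentValue);
    }

    protected void DoSomething(string value)
    {
        // ...
    }
}

public class SubAction1 : SubAction
{
    protected override string CurrentValue => _Data.Value1;
}

public class SubAction2 : SubAction
{
    protected override string CurrentValue => _Data.Value2;
}

// usage
var actions = new List<SubAction> { new SubAction1(), new SubAction2() };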
I am refactoring some existing code.
We have a list of investors with amounts assigned to each. The total of amounts should be equal to another total, but sometimes there are a couple of cents of difference, so we use different algorithms to assign these differences to each investor.
The current code is something like this:
public void Round(IList<Investors> investors, Enum algorithm, [here goes a list of many parameters]) {
// some checks and logic here - OMITTED FOR BREVITY
// pick method given algorithm Enum
if (algorithm == Enum.Algorithm1) {
SomeStaticClass.Algorithm1(investors, remainders, someParameter1, someParameter2, someParameter3, someParameter4);
} else if (algorithm == Enum.Algorithm2) {
SomeStaticClass.Algorithm2(investors, remainders, someParameter3);
}
}
So far we only have two algorithms, and I have to implement a third one. I was given the opportunity to refactor both existing implementations, as well as to write some generic code so this works for future algorithms, which may be custom to each client.
My first thought was "ok, this is a strategy pattern". But the problem I see is that the algorithms receive different parameter lists (beyond the first two parameters they share), and future algorithms can receive different parameter lists as well. The only things in "common" are the investor list and the remainders.
How can I design this so I have a cleaner interface?
I thought of:
Establishing an interface with ALL possible parameters, and sharing it among all implementations.
Using an object with all possible parameters as properties, and making that generic object part of the interface. I would have 3 parameters: the list of investors, the remainders object, and a "parameters" object. But in this case I have a similar problem: how to instantiate the object and which properties to fill depends on the algorithm (unless I set all of them). I would have to use a factory (or something) to instantiate it, using all the parameters in the interface, am I right? I would just be moving the "too many parameters" problem to that factory, or whatever it ends up being.
Using a dynamic object instead of a statically typed object, which still presents the same instantiation problems as before.
I also thought of using the Visitor pattern, but as I understand it, that would apply if I had different algorithms for different entities, like another class of investors, so I don't think it is the right approach.
So far the one that convinces me the most is the second option, although I am still a bit reticent about it.
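Roughly, that second option might look like this (the type and property names here are just placeholders):
using System;
using System.Collections.Generic;

public class RoundingParameters
{
    public int SomeParameter1 { get; set; }
    public DateTime SomeParameter2 { get; set; }
    // ... one property per parameter that any algorithm might need
}

public interface IRoundingAlgorithm
{
    // Every algorithm shares this signature and reads only the
    // properties it actually cares about.
    void Round(IList<Investors> investors, Remainders remainders, RoundingParameters parameters);
}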
Any ideas?
Thanks
Strategy has different implementations, and it is straightforward when all of the concrete strategies require the same type signature. But when concrete implementations start asking for different data from the Context, we have to gracefully take a step back by relaxing encapsulation ("breaking encapsulation" is a known drawback of Strategy): we can pass the Context to the strategies either in the method signature or in the constructor, depending on how much is needed.
By using interfaces and breaking big object trees into smaller containments, we can restrict access to most of the Context's state.
The following code demonstrates passing the Context through the method parameter.
public class Context {
private String name;
private int id;
private double salary;
Strategy strategy;
void contextInterface(){
strategy.algorithmInterface(this);
}
public String getName() {
return name;
}
public int getId() {
return id;
}
public double getSalary() {
return salary;
}
}
public interface Strategy {
// WE CAN NOT DECIDE COMMON SIGNATURE HERE
// AS ALL IMPLEMENTATIONS REQUIRE DIFF PARAMS
void algorithmInterface(Context context);
}
public class StrategyA implements Strategy{
@Override
public void algorithmInterface(Context context) {
// OBSERVE HERE BREAKING OF ENCAPSULATION
// BY OPERATING ON SOMEBODY ELSE'S DATA
context.getName();
context.getId();
}
}
public class StrategyB implements Strategy{
@Override
public void algorithmInterface(Context context) {
// OBSERVE HERE BREAKING OF ENCAPSULATION
// BY OPERATING ON SOMEBODY ELSE'S DATA
context.getSalary();
context.getId();
}
}
Okay, I might be going in the wrong direction... but it seems kind of weird that you're passing in the arguments for all the algorithms plus an identifier for which algorithm to actually use. Shouldn't the Round() function ideally just get what it needs to operate?
I'm imagining the function that invokes Round() to look something like:
if (something)
algToUse = Enum.Algorithm1;
else
if (otherthing)
algToUse = Enum.Algorithm2;
else
algToUse = Enum.Algorithm3;
Round(investors, remainder, algToUse, dayOfMonth, lunarCycle, numberOfGoblinsFound, etc);
... what if, instead, you did something like this:
public abstract class RoundingAlgorithm
{
public abstract void PerformRounding(IList<Investors> investors, int remainders);
}
public class RoundingRandomly : RoundingAlgorithm
{
private int someNum;
private DateTime anotherParam;
public RoundingRandomly(int someNum, DateTime anotherParam)
{
this.someNum = someNum;
this.anotherParam = anotherParam;
}
public override void PerformRounding(IList<Investors> investors, int remainder)
{
// ... code ...
}
}
// ... and other subclasses of RoundingAlgorithm
// ... later on:
public void Round(IList<Investors> investors, RoundingAlgorithm roundingMethodToUse)
{
// ...your other code (checks, etc)...
roundingMethodToUse.PerformRounding(investors, remainders);
}
... and then your earlier function simply looks like:
RoundingAlgorithm roundingMethod;
if (something)
roundingMethod = new RoundingByStreetNum(1, "asdf", DateTime.Now);
else
if (otherthing)
roundingMethod = new RoundingWithPrejudice(null);
else
roundingMethod = new RoundingDefault(1000);
Round(investors, roundingMethod);
... basically, instead of populating that Enum value, just create a RoundingAlgorithm object and pass that in to Round() instead.
The general reason I want to do this is:
class MovieApiController : ApiController
{
public string CurrentUser {get;set;}
// ...
public string Index()
{
return Resources.GetText("Color");
}
}
class Resources
{
public static string GetText(string id)
{
var caller = ??? as MovieApiController;
if (caller != null && caller.CurrentUser == "Bob")
{
return "Red";
}
else
{
return "Blue";
}
}
}
I don't need this to be 100% dependable. It seems like the callstack should have this information, but StackFrame doesn't seem to expose any information about the specific object on which each frame executes.
It is generally a bad idea for a method to try to "sniff" its surroundings, and produce different results based on who is making the call.
A better approach is to make your Resources class aware of whatever it needs to know in order to make its decision, and configure it in a place where all relevant information is known, for example
class MovieApiController : ApiController {
private string currentUser;
private Resources resources;
public string CurrentUser {
get {
return currentUser;
}
set {
currentUser = value;
resources = new Resources(currentUser);
}
}
// ...
public string Index() {
return resources.GetText("Color");
}
}
class Resources {
private string currentUser;
public Resources(string currentUser) {
this.currentUser = currentUser;
}
public string GetText(string id) {
if (currentUser == "Bob") {
return "Red";
} else {
return "Blue";
}
}
}
CurrentUser should be available at HttpContext.Current.User and you can leave your controller out of the resource class.
It seems like the callstack should have this information,
Why? The call stack indicates what methods were called to get where you are; it does not have any information about instances.
Rethink your parameters by deciding what information the method needs to do its job. Reaching outside of the class (e.g. by inspecting the call stack or taking advantage of statics like HttpContext.Current) limits the re-usability of your code.
From what you've shown, all you need is the current user name (you don't even show where you use the id value). If you want to return different things based on what's passed in, then maybe you need separate methods?
As a side note, the optimizer has a great deal of latitude in reorganizing code to make it more efficient, so there are no guarantees that the call stack even contains what you think it should from the source code.
Short answer - you can't, short of creating a custom controller factory that stores the current controller as a property of the current HttpContext, and even that could prove unpredictable.
But it's really not good for a class to behave differently by attempting to inspect its caller. When a method depends on another class it needs to get the correct behavior by depending on the right class, calling the right method, and passing the right parameters.
So in this case you could:
Have a parameter that you pass to GetText that tells it what it needs to know in order to return the correct string.
Create a more specific version of the Resources class that does what you need.
Or declare
public interface IResources
{
string GetText(string id);
}
And have multiple classes that implement IResources, use dependency injection to provide the correct implementation to this controller. Ideally that's the best scenario. MovieApiController doesn't know anything about the implementation of IResources. It just knows that there's an instance of IResources that will do what it needs. And the Resource class doesn't know anything about what is calling it. It behaves the same no matter what calls it.
That would look like this:
public class MovieApiController : ApiController
{
private readonly IResources _resources;
public MovieApiController(IResources resources)
{
_resources = resources;
}
public string Index()
{
return _resources.GetText("Color");
}
}
Notice how the controller doesn't know anything about the Resources class. It just knows that it has something that implements IResources and it uses it.
If you're using ASP.NET Core then dependency injection is built in. (There's some good reading in there on the general concept.) If you're using anything older then you can still add it in.
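For example, with ASP.NET Core's built-in container, the wiring is a single registration in Startup.ConfigureServices (UserAwareResources is a hypothetical class implementing IResources):
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // Whenever a constructor asks for IResources, the container
    // supplies this implementation (hypothetical class).
    services.AddScoped<IResources, UserAwareResources>();
}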
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-dependency-injection - This has a picture that is worth 1000 words for describing the concept.
http://www.c-sharpcorner.com/UploadFile/dacca2/implement-ioc-using-unity-in-mvc-5/
Some of these recommend understanding "inversion of control" first. You might find it easier to just implement something according to the example without trying to understand it first. The understanding comes when you see what it does.
I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped in a SQLite database. I've already got all the non-trivial code written that communicates between the .NET objects and the database.
I've got a base interface as follows:
public interface IBackendObject
{
void Read(int id);
void Refresh();
void Save();
void Delete();
}
This is the basic CRUD operations on any object. I've then implemented a base class that encapsulates much of the functionality.
public abstract class ABackendObject : IBackendObject
{
protected ABackendObject() { } // constructor used to instantiate new objects
protected ABackendObject(int id) { Read(id); } // constructor used to load object
public void Read(int id) { ... } // implemented here is the DB code
}
Now, finally, I have my concrete child objects, each of which have their own tables in the database:
public class ChildObject : ABackendObject
{
public ChildObject() : base() { }
public ChildObject(int id) : base(id) { }
}
This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.
I now want to make this slightly more efficient. For example, in the following code:
public void SomeFunction1()
{
ChildObject obj = new ChildObject(1);
obj.Property1 = "blah!";
obj.Save();
}
public void SomeFunction2()
{
ChildObject obj = new ChildObject(1);
obj.Property2 = "blah!";
obj.Save();
}
In this case, I'll be constructing two completely new in-memory instantiations, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want to achieve is a way for both of these instantiations to point to the same memory location; I don't think that will be possible if I'm using the "new" keyword, so I was looking for hints as to how to proceed.
Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.
Thanks!
If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.
Instead of using a constructor, build a factory method that checks the cache:
public abstract class ABackendObject<T> where T : class
{
public T LoadFromDB(int id) {
T obj = this.CheckCache(id);
if (obj == null)
{
obj = this.Read(id); // Load the object
this.SaveToCache(id, obj);
}
return obj;
}
}
If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.
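A rough fill-in of the pieces that sketch leaves out; the dictionary-backed CheckCache/SaveToCache bodies and the changed Read signature are assumptions:
using System.Collections.Generic;

public abstract class ABackendObject<T> where T : class
{
    // Each closed generic type (e.g. ABackendObject<ChildObject>)
    // gets its own static dictionary, so the cache stays per-type.
    private static readonly Dictionary<int, T> _cache = new Dictionary<int, T>();

    protected T CheckCache(int id)
    {
        T obj;
        _cache.TryGetValue(id, out obj);
        return obj;
    }

    protected void SaveToCache(int id, T obj)
    {
        _cache[id] = obj;
    }

    // Returns the loaded object instead of void, per the sketch above.
    protected abstract T Read(int id);
}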
What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
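A minimal sketch of that factory, where the static dictionary is the "simple static hash of index => object":
using System.Collections.Generic;

public class ChildObject : ABackendObject
{
    private static readonly Dictionary<int, ChildObject> _instances = new Dictionary<int, ChildObject>();

    private ChildObject(int id) : base(id) { } // private: force use of Create

    public static ChildObject Create(int index)
    {
        ChildObject obj;
        if (!_instances.TryGetValue(index, out obj))
        {
            obj = new ChildObject(index); // the base constructor loads from the DB
            _instances[index] = obj;
        }
        return obj;
    }
}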
If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
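As a sketch of that approach (the cache key format here is made up), MemoryCache gives you eviction policies for free:
using System;
using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default;
var obj = cache["ChildObject:1"] as ChildObject;
if (obj == null)
{
    obj = new ChildObject(1);
    // Evict entries that haven't been touched for 10 minutes.
    cache.Add("ChildObject:1", obj, new CacheItemPolicy
    {
        SlidingExpiration = TimeSpan.FromMinutes(10)
    });
}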
Sounds perfect for a reference count like this...
#region Begin/End Update
int refcount = 0;
ChildObject record;
protected ChildObject ActiveRecord
{
get
{
return record;
}
set
{
record = value;
}
}
public void BeginUpdate()
{
if (refcount == 0)
{
ActiveRecord = new ChildObject(1);
}
Interlocked.Increment(ref refcount);
}
public void EndUpdate()
{
int count = Interlocked.Decrement(ref refcount);
if (count == 0)
{
ActiveRecord.Save();
}
}
#endregion
#region operations
public void SomeFunction1()
{
BeginUpdate();
try
{
ActiveRecord.Property1 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction2()
{
BeginUpdate();
try
{
ActiveRecord.Property2 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction3() // combines both updates so Save runs once, at the outermost EndUpdate
{
BeginUpdate();
try
{
SomeFunction1();
SomeFunction2();
}
finally
{
EndUpdate();
}
}
#endregion
I think you're on the right track, more or less. You can either create a factory that creates your child objects (and can track "live" instances), or you can keep track of instances that have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think the first approach would be less likely to cause errors.
One additional option would be to use an existing Object-Relational Mapping package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route, you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which you go with there could be a significant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use the API.
So I've been dealing with several APIs recently provided by different software vendors for their products. Sometimes things are lacking, sometimes I just want to make the code more readable, and I'm trying to avoid a ton of static methods where they don't belong to "get what I need" from the APIs. Thus, I've found myself writing quite a few extension methods.
However, because there are many methods, and in the interest of keeping "my" methods separate from those of the API objects in terms of code readability, I came up with this little tidbit:
public class MyThirdPartyApiExtensionClass {
public static MyThirdPartyApiExtensionClass MTPAEC(this ThirdPartyApiClass value) {
return new MyThirdPartyApiExtensionClass(value);
}
private ThirdPartyApiClass value;
public MyThirdPartyApiExtensionClass(ThirdPartyApiClass extendee) {
value = extendee;
}
public string SomeMethod() {
string foo = value.SomeCrappyMethodTheProviderGaveUs();
//do some stuff that the third party api can't do that we need
return foo;
}
public int SomeOtherMethod() {
int bar = value.WowThisAPISucks(null);
//more stuff
return bar;
}
}
Then I can do things like:
string awesome = instanceOfApiObject.MTPAEC().SomeMethod();
and I have a clean separation of my stuff from theirs.
Now my question is.. does this seem like a good practice, improving code readability... or is this a bad idea? Are there any harmful consequences to doing this?
Disclaimer:
The code above is just to demonstrate the concept. Obviously there is better sanity checking and usefulness in the real thing.
I suppose the same level of separation could simply be done like this:
public static class MyThirdPartyApiExtensionClass {
public static ThirdPartyApiClass MTPAEC(this ThirdPartyApiClass value) {
return value;
}
public static string SomeMethod(this ThirdPartyApiClass value) {
string foo = value.SomeCrappyMethodTheProviderGaveUs();
//do some stuff that the third party api can't do that we need
return foo;
}
public static int SomeOtherMethod(this ThirdPartyApiClass value) {
int bar = value.WowThisAPISucks(null);
//more stuff
return bar;
}
}
To answer your direct question, I think that going through extra trouble to separate your functionality from the basic functionality is a bad code smell. Don't worry about keeping your code separate from their code from a usage perspective. First, it makes things that much harder to find, since there are now two places to look for the same functionality; second, the syntax makes it look like your extensions are operating on the MTPAEC property rather than on the core object (which is what they really operate on).
My suggestion is to use actual extension methods, which give you that separation without the additional constructor.
public static class ApiExtension
{
public static string SomeMethod(this ThirdPartyApiClass value)
{
string foo = value.SomeCrappyMethodTheProviderGaveUs();
//do some stuff that the third party api can't do that we need
return foo;
}
}
used by
var mine = new ThirdPartyApiClass();
mine.SomeMethod();
C# will do the rest.
Looking at your suggestion above, I think you'll have to split the two classes out: one providing the entry points via the extension mechanism, and another providing each group of logic.
If you need yours to stand out at a glance, use a naming convention to make them look unique; hovering and IntelliSense will tell you it is an extension method anyway.
If you just want the separation of content like you have, then you'll need two classes:
public static class ApiExtender
{
public static MtpaecExtensionWrapper MTPAEC(this ThirdPartyApiClass value)
{
return new MtpaecExtensionWrapper(value);
}
}
public class MtpaecExtensionWrapper
{
private ThirdPartyApiClass wrapped;
public MtpaecExtensionWrapper(ThirdPartyApiClass wrapped)
{
this.wrapped = wrapped;
}
public string SomeMethod()
{
string foo = this.wrapped.SomeCrappyMethodTheProviderGaveUs();
//do some stuff that the third party api can't do that we need
return foo;
}
}
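Usage then reads the way you wanted, with the parentheses making the extension call explicit:
var api = new ThirdPartyApiClass();
string awesome = api.MTPAEC().SomeMethod();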
When dealing with an API that you cannot modify, extension methods are a reasonable way to extend the API's expressiveness while staying relatively decoupled, IMHO.
The biggest issue with extension methods is that their availability is implicitly inferred from namespace inclusion (the using statements at the top of the file). As a result, if you simply forget to include a namespace, you can end up scratching your head wondering why they're not available.
I would also add that extension methods are not an obvious construct, and as a result developers don't commonly expect or anticipate them. However, with the increased use of LINQ, I would imagine more and more developers are getting comfortable with them.
My opinion is that you're adding an extra level of indirection needlessly. You want the extension methods to be available on the original objects... so why not put them there? Intellisense will let you know that the objects are extensions, for the rare case that you actually care.