We want a function in our .NET code to run when a value in a specific table changes, whether via a record insert or an update. Is this possible?
If not, is there an alternative process?
You need to ask yourself a couple of questions, starting with: how much of your business logic, if any, do you want at the db level?
Obviously a db trigger could do this (perform some action when a value is changed, even if only a very specific value).
I've seen some systems that are db trigger heavy. Their 'logic' resides deep in, and highly coupled to, the db platform. There are some advantages to that, but most people would probably say the disadvantages are too great (coupling, lack of encapsulation/reusability).
Depending on what you are doing and your leanings, you could:
Option 1: Make sure all DAO/BusinessFunction objects call your 'event' object.function to do what you want when a certain value change occurs.
Option 2: Use a trigger to call your 'event' object.function when a certain value change occurs.
Option 3: Have your trigger do everything.
I personally would lean towards Option 2 where you have a minimal trigger (which simply fires the event call to your object.function) so you don't deeply couple your db to your business logic.
Option 1 is fine, but may be a bit of a hassle unless you have a very narrow set of BFs/DAOs that talk to the db table.field you want to watch.
Option 3 is imho the worst choice as you couple logic to your db and reduce its accessibility to your business logic layer.
Given that, here is some information toward accomplishing this via Option 2:
Take this example from MSDN: http://msdn.microsoft.com/en-us/library/938d9dz2.aspx.
It shows how to have a trigger run and call a CLR object in a project.
Effectively, in your project, you create a trigger and have it call your class.
Notice the line: [SqlTrigger(Name="UserNameAudit", Target="Users", Event="FOR INSERT")]
This defines when the code fires; within the code, you can check your constraint and then run the rest of the method (or not), or call another object.method as needed.
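For illustration, here is a minimal sketch of that pattern. The table name, column ordinal, and MyBusinessEvents class are hypothetical stand-ins for your own objects, not anything from the MSDN sample:

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    // Hypothetical business-logic entry point the trigger hands off to.
    public static class MyBusinessEvents
    {
        public static void OnValueChanged(int id, decimal newValue) { /* your logic */ }
    }

    public class Triggers
    {
        [SqlTrigger(Name = "WatchValueChange", Target = "Accounts", Event = "FOR INSERT, UPDATE")]
        public static void WatchValueChange()
        {
            SqlTriggerContext ctx = SqlContext.TriggerContext;

            // For updates, bail out unless the column we care about
            // (ordinal 1 here) was actually touched.
            if (ctx.TriggerAction == TriggerAction.Update && !ctx.IsUpdatedColumn(1))
                return;

            // The context connection gives access to the INSERTED pseudo-table.
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                using (SqlCommand cmd = new SqlCommand("SELECT Id, Balance FROM INSERTED", conn))
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        MyBusinessEvents.OnValueChanged(reader.GetInt32(0), reader.GetDecimal(1));
                }
            }
        }
    }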
The primary difference between going directly to the db and adding a trigger there is that this approach gives you access to all the objects in your project when they are deployed together.
I have never tried this but it is possible. You can write a CLR assembly and call that from your table trigger.
You can see an example here.
But you should post your actual problem; you may find a better workaround.
I am designing a LightSwitch 2012 application to manage requests, and I want to be able to update the request state with the same reusable code across all of my screens. For example, a user can change a request state in the approval screen, the fulfillment screen, etc., each invoked by a button. Currently, I have a method in each of the .cs files where I need to update the request, using the partial void <ScreenCommand>_Execute() methods. I am trying to change this so I can update the code in one place instead of everywhere, and I also don't want to have to copy the methods to every new screen I add a button to.
Now, normally I would put this in Application.cs or somewhere else with global access, but there I don't have access to the same DataWorkspace object. Currently I pass in the screen's this.DataWorkspace object, which allows access to the SaveChanges() method; however, this seems a little smelly. Is there a better way to deal with this, or a better place to put reusable commands that you want to be able to assign to buttons on multiple screens? Currently I have to be really careful about saving dirty data, and I still have to wire everything up manually. I also don't know if the code runs in the proper context if it's in the Application.cs file.
To clarify, yes, I do want this to run client side, so I can trigger emails from their outlook inboxes and the like.
What you're trying to do is simply good programming practice, putting code that's required in more than one place in a location where it can be called from each of those places, yet maintained in just one place. It's just a matter of getting used to the way you have to do things in LightSwitch.
You can add code in a module (or a static class in C#) in the UserCode folder of the Client project. That's part of the reason the folder exists, as a place to put "user code". To do this, switch to File View, then right-click the UserCode folder to add your module/class. Add your method in the newly created module/class. You can create as many of these as you like (if you like to keep code separated), or you can add other methods to the same module/class, it's up to you.
However, I wouldn't go passing the data workspace as a parameter to the reusable method that you create. I wouldn't even pass the entity object either, just the values that you need to calculate the required state. But the actual invoking of the data workspace's SaveChanges method should remain in the screen's code. Think of the screen as a "unit of work".
In each button's Execute method (in your various screens), you call your method with values from the entity being manipulated in the screen and return the result. Assign the calculated return value to the entity's State property (if that's what you have), then call the screen's Save method (or use the screen's Close method, passing true for the SaveChanges parameter). There's no need to call the data workspace's SaveChanges method, and you're doing things the "LightSwitch way" by doing it this way.
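As a rough sketch (the class, method, entity, and property names here are just illustrative; only the partial Execute method and the screen's Save method are LightSwitch conventions):

    // In the Client project's UserCode folder -- a plain static class.
    public static class RequestStateHelper
    {
        // Takes only the values it needs and returns the new state;
        // no entity or screen dependency, so it is easy to unit test.
        public static string CalculateNewState(string currentState, bool approved)
        {
            if (currentState == "Pending" && approved)
                return "Approved";
            return currentState;
        }
    }

    // In any screen with a button (Request and its State property are
    // placeholders for your own entity):
    partial void ApproveRequest_Execute()
    {
        this.Request.State = RequestStateHelper.CalculateNewState(this.Request.State, true);
        this.Save(); // the screen stays the unit of work
    }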
Another benefit of doing it this way is that your code can now be unit tested, as it's no longer dependent on any entity.
I hope that all makes sense to you.
I have changed my MVC app database implementation from database first to code first. The only thing left is to move the triggers, and I do not know where to put them. Should I put them in the Repository and implement them in the add/edit/delete methods? Or maybe there is a more appropriate place to execute trigger logic depending on repository actions. Please let me know any ideas for trigger implementation in the Code First approach.
Can't offer loads of help here as I've never had to do it, but you could look at overriding DbContext.SaveChanges depending on how many triggers you're talking about...
Update: Take a look at this question for a better example.
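As a hedged sketch of the SaveChanges idea, assuming Entity Framework code first (the Order entity and LastModified property are made up for illustration):

    using System;
    using System.Data.Entity;
    using System.Linq;

    public class Order
    {
        public int Id { get; set; }
        public DateTime LastModified { get; set; }
    }

    public class AppContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }

        public override int SaveChanges()
        {
            // Do here what the INSERT/UPDATE trigger used to do,
            // just before the changes hit the database.
            foreach (var entry in ChangeTracker.Entries<Order>()
                     .Where(e => e.State == EntityState.Added
                              || e.State == EntityState.Modified))
            {
                entry.Entity.LastModified = DateTime.UtcNow;
            }
            return base.SaveChanges();
        }
    }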
I am writing a new application and I am in the design phase. A few of the db tables require to have an InsertedDate field which will display when the record was inserted.
I am thinking of setting the Default Value to GetDate()
Is there an advantage to doing this in the application over setting the default value in the database?
I think it's better to set the Default Value to GetDate() in SQL Server rather than in your application. You can use it to order the table by insertion time. Setting it from the application looks like overhead, unless you want to specify some particular date with the insert, which I believe defeats the purpose of it.
If you ever need to manually INSERT a record into the database, you'll need to remember to set this field if you're setting the default value in your application to avoid NULL references.
Personally, I prefer to set default values in the database where possible, but others might have differing opinions on this.
If you do it in your application, you can unit test it. In the projects I've been on, especially when using an ORM, we do all default operations in the code.
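For example, here is a minimal sketch of doing the default in code so it can be unit tested; the clock abstraction is just one common way and all names are illustrative:

    using System;

    public interface IClock
    {
        DateTime UtcNow { get; }
    }

    public class SystemClock : IClock
    {
        public DateTime UtcNow { get { return DateTime.UtcNow; } }
    }

    public class Record
    {
        public DateTime InsertedDate { get; private set; }

        // The application, not the database, supplies the default,
        // and a fake IClock makes the behaviour testable.
        public Record(IClock clock)
        {
            InsertedDate = clock.UtcNow;
        }
    }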
When designing, I always put a lot of importance on separation of concern, which for me, in the context of "database functionality vs application functionality", boils down to the question: "Who owns the data?". I have always been of the opinion that my code owns my data - never the database. The database is simply the container for data. This is similar to saying that I own my clothes, rather than my dresser owning my clothes. My dresser performs an important function, making my clothes available in an organized fashion, but I am always the agent putting clothes in the dresser, and I am responsible for their organization.
I'm sure many will have a problem with this analogy, saying that modern databases are much more powerful than my dresser, but in my experience the more functionality I put in the database layer, the more confusing projects get and the more blurred the line between data and functionality (e.g. database stored procedures, etc). Admittedly, yours is a simple example of this concept, but once a precedent is set, anything goes.
Another thing I'd like to address is the ease-of-use factor. I reject the idea that because a particular implementation is convenient (such as avoiding nulls, different server times, etc.) I should choose it. To me, choosing such implementations is equivalent to saying: "It's alright if my code doesn't work. I'll avoid using my code rather than fixing it and making it robust."
I'm sure there are many cases, perhaps at extreme scale or due to other business requirements, when database-layer functionality is not only warranted but necessary, but my experience tells me the more you can keep your functionality in your code, the cleaner, simpler, and more robust your application will be.
OK guys, my next question seems to be very widely asked and generic. For instance, I have an accounts table in my db. On the client (a desktop WinForms app) I have the functionality to add a new account; let's say in the UI it's a couple of textboxes and one button.
Another requirement is account uniqueness, so I can't add two identical accounts. My question is: should I check for the account's existence on the client (making some query and looking at the result), or make a stored procedure for adding a new account and check the account's existence there? As for me, it's better to just make a stored proc, where I can make any needed checks and, after all the checks, add the new account. But there are pros and cons to that way. For example, it will be very difficult to manage the language of the messages that the stored proc should produce.
POST EDIT
I already have the database constraints, etc. The issue is how to handle the situation where the user is adding an account that already exists.
POST EDIT 2
The account uniqueness is exposed as just a tiny, simple example of business logic. My question is more about handling complicated business logic in that accounts domain.
So, how can I manage this misunderstanding?
I believe that my question is basic and has a proven solution. My tools are C#, .NET Framework 2.0. Thanks in advance, guys!
If the application is to be multi-user (i.e. not just a single desktop app with a single user, but a centralised DB with the app acting as a client, maybe on many workstations), then it is not safe to rely on the client (app) to check for things such as uniqueness, existence, free numbers, etc., as there is a distinct possibility of change happening between calls (unless read locking is used, but this often becomes more of an issue than a help!).
You do of course have the ability to precheck and then recheck (pre at app level, re at DB level), but this would generate extra DB traffic, so it depends on whether that is a problem for you.
When I write SPROCs that will return to an app, I always use the same framework - I include parameters for a return code and message and always populate them. Then I can use standard routines to call them and even add in the parameters automatically. I can then either display the message directly on failure, or use the return code to localize it as required (or automate a response). I know some DBs (like SQL Server) will return Return_Code parameters, but I implement my own so I can leave the inbuilt ones for serious system-based errors and unexpected failures. This also allows me to have my own numbering system for return codes (i.e. grouping them to match Enums in the code and/or grouping by severity).
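A rough C# sketch of such a standard calling routine; the @ReturnCode/@Message parameter names are just my convention here, not anything built in:

    using System.Data;
    using System.Data.SqlClient;

    public static class ProcCaller
    {
        // Every SPROC in this convention exposes @ReturnCode and @Message
        // output parameters, so one routine can call them all.
        public static int Exec(string connString, string procName,
                               out string message, params SqlParameter[] args)
        {
            using (SqlConnection conn = new SqlConnection(connString))
            using (SqlCommand cmd = new SqlCommand(procName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddRange(args);

                SqlParameter code = cmd.Parameters.Add("@ReturnCode", SqlDbType.Int);
                code.Direction = ParameterDirection.Output;
                SqlParameter msg = cmd.Parameters.Add("@Message", SqlDbType.NVarChar, 512);
                msg.Direction = ParameterDirection.Output;

                conn.Open();
                cmd.ExecuteNonQuery();

                message = msg.Value as string;
                return (int)code.Value;
            }
        }
    }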
On web apps I have also used a different concept at times. For example, sometimes a request is made for a new account but multiple pages are required (a profile, for example). Here I often use a header table that generates a hidden user ID against the requested unique username, a timestamp, and some way of recognising the user (IP address, etc.). If after x hours it is not used, the header table deletes the row, freeing up the number (depending on the DB, the number may never become usable again - this doesn't really matter, as it is just used to keep the user data unique until the application is submitted) and the username. If completed correctly, the records are simply copied across to the proper active tables.
//Edit - To Add:
"Good point. But account uniqueness is just a very tiny, simple example. What about more complex business-logic requirements for accounts? For example, if I implement it just in client code (in the WinForms app) it will work, but if I want another kind of app (say a console version or a website) to work with these accounts, I would have to do all this logic again in the new app! So, I'm looking for some method to keep the data right from both sides (the server db side and the client side)." – kseen yesterday
If the requirement is ever for multi-use, then it is best to separate it. Putting it into a separate Class Library Project allows the DLL to be used by your WinForm, console program, service, etc. Although I would still prefer rock-face validation (DB level), as it is the closest point in time to any action and the least likely to be gazumped.
The usual way is to separate into three projects: a Display Layer [DL] (your WinForm project/console/service/etc.), a Business Application Layer [BAL] (which holds all the business rules and calls to the DAL - it knows nothing about the display medium nor about the database technology), and finally the Data Access Layer [DAL] (this has all the database calls - it can be very basic, with a method for insert/update/select/delete at SQL and SPROC level, and maybe some classes for passing data back and forth). The DL references only the BAL, which references the DAL. The DAL can be swapped out for each technology (say, changing from SQL Server to MySQL) without affecting the rest of the application, and business rules can be changed and set in the BAL with no effect on the DAL (the DL may be affected if new methods are added or display requirements change due to data changes, etc.). This framework can then be used again and again across all your apps, and it is easy to make quite drastic changes to (like DB topology).
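A bare-bones sketch of that split (all names invented for illustration):

    // DAL: the only layer that knows about the database technology.
    public interface IAccountDal
    {
        bool AccountExists(string accountCode);
        void InsertAccount(string accountCode, string name);
    }

    // BAL: business rules only; knows nothing about WinForms or SQL Server.
    public class AccountBal
    {
        private readonly IAccountDal _dal;
        public AccountBal(IAccountDal dal) { _dal = dal; }

        public string AddAccount(string accountCode, string name)
        {
            if (_dal.AccountExists(accountCode))
                return "Account already exists"; // rule lives here, shared by every DL
            _dal.InsertAccount(accountCode, name);
            return null; // success
        }
    }

    // DL (WinForms, console, website) references only the BAL:
    //   AccountBal bal = new AccountBal(new SqlAccountDal());
    //   string error = bal.AddAccount(code, name);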
This type of logic is usually kept in code for easier maintenance (which includes testing). However, if this is just a personal throwaway application, do what is simplest for you. If it's something that is going to grow, it's better to put good practices in place now, to ease maintenance/change later.
I'd have an AccountsRepository class (for example) with an AddAccount method that did the insert/called the stored procedure. Using database constraints (as HaLaBi mentioned), it would fail when trying to insert a duplicate. You would then determine in code how to handle this issue (passing a message back to the UI that it couldn't add). This would allow you to put tests around all of it. The only change you make in the db is to add the constraint.
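A hedged sketch of that repository; error numbers 2627/2601 are SQL Server's duplicate-key codes, and the table and column names are made up:

    using System.Data.SqlClient;

    public class AccountsRepository
    {
        private readonly string _connString;
        public AccountsRepository(string connString) { _connString = connString; }

        // Returns null on success, or a UI-friendly message on a duplicate.
        public string AddAccount(string code, string name)
        {
            try
            {
                using (SqlConnection conn = new SqlConnection(_connString))
                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Accounts (Code, Name) VALUES (@code, @name)", conn))
                {
                    cmd.Parameters.AddWithValue("@code", code);
                    cmd.Parameters.AddWithValue("@name", name);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
                return null;
            }
            catch (SqlException ex)
            {
                // The UNIQUE constraint is the real guard; the code just
                // translates the violation into a message for the UI.
                if (ex.Number == 2627 || ex.Number == 2601)
                    return "An account with this code already exists.";
                throw;
            }
        }
    }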
Just my 2 cents on a Thursday morning (before my cup of green tea). :)
I think the answer - like many - is "it depends".
For sure it is a good thing to push logic as deeply as possible towards the database. This prevents bad data no matter how the user tries to get it in there.
This, in simple terms, results in applications that TRY - FAIL - RECOVER when attempting an invalid transaction. You need to check each call (stored proc, or triggered insert, etc.) and IF something bad happens, recover from that condition. Usually something like telling the user an issue occurred, resetting the form or something, and letting them try again.
I think, at a minimum, this needs to happen.
But, in addition, to make a really nice experience for the user, the app should also preemptively check certain data conditions ahead of time, and simply prevent the user from making bad inserts in the first place.
This is of course harder, and sometimes means double coding of business rules (once in the app, and once in the DB constraints), but it can make for a dramatically better user experience.
The solution is more of being methodical than technical:
Implement - "Defensive Programming" & "Design by Contract"
If the chances of a business rule changing over time are very low, then apply the constraint at the database level
Create a "validation or rules & aggregation layer (or class)" that will manage such conditions/constraints for entity and/or specific property
A much smarter way to do this would be to make a user-control for the entity and/or specific property (in your case the "Account-Code"), which would internally use the "validation or rules & aggregation layer (or class)"
This will allow you to ensure a "systematic-way-of-development" or a more "scalable & maintainable" application-architecture
If your application is a website then along with placing the validation on the client-side it is always better to have validation even in the business-layer or C# code as well
Whenever a validation fails you could implement & use a "custom-error-message" library, to ensure message content is standard across the application
If the errors are raised from database itself (i.e., from stored-procedures), you could use the same "custom-error-message" class for converting the SQL Exception to the fixed or standardized message format
I know that this is all a bit much, but it will always be good for the future.
Hope this helps.
As you should not depend on a specific Storage Provider (DB [mysql, mssql, ...], flat file, xml, binary, cloud, ...) in a professional project, all constraints should be checked in the business logic (model).
The model shouldn't have to know anything about the storage provider.
Uncle Bob said something about architecture and databases: http://blog.8thlight.com/uncle-bob/2011/11/22/Clean-Architecture.html
Suppose I have some application A with a database. Now I want to add another application B, which should keep track of the database changes of application A. Application B should do some calculations, when data has changed. There is no direct communication between both applications. Both can only see the database.
The basic problem is: Some data changes in the database. How can I trigger some C# code doing some work upon these changes?
To give some stimulus for answers, I'll mention some approaches which I am currently considering:

1. Make application B poll for changes in the tables of interest. Advantage: simple approach. Disadvantage: lots of traffic, especially when many tables are involved.

2. Introduce triggers, which will fire on certain events. When they fire, they should write an entry into an "event table"; application B only needs to poll that "event table". Advantage: less traffic. Disadvantage: logic is placed into the database in the form of triggers. (It's not a question of the "evilness" of triggers; it's a design question, which makes it a disadvantage.)

3. Get rid of the polling approach and use the SqlDependency class to get notified of changes. Advantage: (maybe?) less traffic than the polling approach. Disadvantage: not database independent. (I am aware of OracleDependency in ODP.NET, but what about the other databases?)
Which approach is more favorable? Maybe I have missed some major (dis)advantage in the mentioned approaches? Maybe there are some other approaches I haven't thought of?
Edit 1: Database independence is a factor for the ... let's call them ... "sales people". I can use SqlDependency or OracleDependency. For DB2 or other databases I can fall back to the polling approach. It's just a question of cost and benefit, which I want at least to think about so I can discuss it.
I'd go with #1. It's not actually as much traffic as you might think. If your data doesn't change frequently, you can be pessimistic about it and only fetch something that gives you a yay or nay about table changes.
If you design your schema with polling in mind you may not really incur that much of a hit per poll.
If you're only adding records, not changing them, then checking the highest id might be enough on a particular table.
If you're updating records as well, you can add a timestamp column and index it, then look for the maximum timestamp.
And you can send an uber-query that polls multiple tables (efficiently) and returns the list of changed tables.
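A small sketch of that kind of cheap poll, assuming an indexed timestamp column (table and column names are hypothetical):

    using System;
    using System.Data.SqlClient;

    public static class ChangePoller
    {
        // Remember the highest timestamp seen so far; anything newer
        // means the table has changed since the last poll.
        public static bool HasTableChanged(SqlConnection conn, ref DateTime lastSeen)
        {
            using (SqlCommand cmd = new SqlCommand(
                "SELECT MAX(ModifiedAt) FROM dbo.Accounts", conn))
            {
                object result = cmd.ExecuteScalar();
                if (result == null || result == DBNull.Value)
                    return false;

                DateTime latest = (DateTime)result;
                if (latest > lastSeen)
                {
                    lastSeen = latest;
                    return true;
                }
                return false;
            }
        }
    }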
Nothing in this answer is particularly clever, I'm just trying to show that #1 may not be quite as bad as it at first seems.
I would go with solution #1 (polling), because avoiding dependencies and direct connections between separate apps can help reduce complexity and problems.
I think you have covered the approaches I have thought of, there's no absolute "best" way to do it, what matters are your requirements and priorities.
Personally I like the elegance of the SqlDependency class; and what does database independence matter in the real world for most applications anyway? But if it's a priority for you to have database independence then you can't use that.
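For reference, a minimal SqlDependency sketch; the query and table are placeholders, and it requires Service Broker to be enabled on the database:

    using System;
    using System.Data.SqlClient;

    public class ChangeWatcher
    {
        private readonly string _connString;

        public ChangeWatcher(string connString)
        {
            _connString = connString;
            SqlDependency.Start(_connString); // once per application
        }

        public void Watch()
        {
            using (SqlConnection conn = new SqlConnection(_connString))
            using (SqlCommand cmd = new SqlCommand(
                // Notification queries need an explicit column list and
                // two-part table names (no SELECT *).
                "SELECT Id, State FROM dbo.Requests", conn))
            {
                SqlDependency dep = new SqlDependency(cmd);
                dep.OnChange += delegate(object sender, SqlNotificationEventArgs e)
                {
                    Console.WriteLine("Change detected: " + e.Info);
                    Watch(); // notifications fire only once, so re-subscribe
                };
                conn.Open();
                cmd.ExecuteReader().Dispose(); // executing registers the subscription
            }
        }
    }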
Polling is my second favourite because it keeps the database clean from triggers and application logic; it really isn't a bad option anyway because as you say it's simple. If application B can wait a few minutes at a time before "noticing" database changes, it would be a good option.
So my answer is: it depends. :)
Good luck!
Do you really care about database independence?
Would it really be that hard to create a different mechanism for each database type, where all of them share the same public interface?
I am aware of OracleDependency in ODP.NET, but what about the other databases?
SQL Server has something like that, but I've never used it.
You could make your own MySqlDependency class, implementing the same pattern as SqlDependency, or a SqlDependencyForOracle (based on polling).
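Something like this hedged sketch, where each database gets its own implementation behind one interface (all names invented):

    using System;
    using System.Timers;

    // One public interface; SQL Server could wrap SqlDependency behind it,
    // while other databases fall back to a polling implementation.
    public interface IDbChangeNotifier
    {
        event EventHandler TableChanged;
        void Start(string tableName);
    }

    public class PollingChangeNotifier : IDbChangeNotifier
    {
        public event EventHandler TableChanged;
        private Timer _timer;

        public void Start(string tableName)
        {
            _timer = new Timer(5000); // poll every 5 seconds
            _timer.Elapsed += delegate
            {
                // In a real implementation, run the cheap "did anything
                // change?" query here and raise the event only when it did.
                EventHandler handler = TableChanged;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            };
            _timer.Start();
        }
    }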
You can use an SQL trigger inside a SQL CLR Database Project and run your code in that project, see: https://msdn.microsoft.com/en-us/library/938d9dz2.aspx
Or, from a trigger inside the SQL CLR Database Project, you could make a request to the application that you actually want to act on the trigger.