I have a method which handles a set of records. The method returns true/false after processing: if all the records are processed (doing some db updates), it will return true. Now, suppose that after processing one record some exception is thrown. Should I write result = false in the catch block (result is returned at the end of the method) and allow processing of the other records to continue?
Continuing to add data to the dbase when adding one record failed is almost always wrong. Records are very frequently related. They represent a set of transactions on a bank account. Or a batch of orders from a customer. Adding these with one of them missing is always a problem.
Not only do you give your client a huge problem coming up with a new batch that contains the single corrected record, you make it far too easy to allow somebody to just ignore the error. The kind of error that doesn't get discovered or causes problems until much later. Invariably with a huge cost associated with correcting the error.
When an error occurs, reject the entire batch. Keep the dbase in a proper state by using transactions. Use, say, SqlTransaction and call BeginTransaction() when you start. Call Commit() when everything worked, call Rollback() in your catch clause.
Your client can now go back to the sub-system that generates the records, make the correction and re-run your program. Your dbase will always contain a proper copy of that sub-system's data. And errors cannot be ignored.
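A minimal sketch of that all-or-nothing pattern, assuming a hypothetical Records table, a records collection and a connectionString, none of which come from the question:

// requires using System.Data.SqlClient;
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        try
        {
            foreach (var record in records)
            {
                // hypothetical per-record update, enlisted in the transaction
                using (var command = new SqlCommand(
                    "UPDATE Records SET Value = @value WHERE Id = @id",
                    connection, transaction))
                {
                    command.Parameters.AddWithValue("@id", record.Id);
                    command.Parameters.AddWithValue("@value", record.Value);
                    command.ExecuteNonQuery();
                }
            }
            transaction.Commit();   // everything worked
            return true;
        }
        catch
        {
            transaction.Rollback(); // reject the entire batch
            return false;
        }
    }
}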
How you handle this with exceptions will be determined very much by what you want to happen in the event of something going wrong. You could, as you say, just write result = false in your catch blocks, but this means you are simply saying to the calling function "Hey - some records were not processed - live with it...". That might be enough for you - it depends what you're trying to do.
At the very least though, I would want to also write the details of the exceptions away to a log. And if you don't have a method somewhere that takes an exception and writes away to a log, it's time to write one (or use a third party solution...)
Otherwise you are losing information that could be useful in determining why things failed...
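If you don't already have such a helper, a very small sketch might look like this (the log file path and formatting are placeholders; in practice a logging library is usually the better choice):

using System;
using System.IO;

public static class ExceptionLogger
{
    public static void Log(Exception ex)
    {
        // Placeholder path; append the message and stack trace with a timestamp.
        File.AppendAllText(@"C:\logs\app.log",
            string.Format("{0:u} {1}{2}{3}{2}",
                DateTime.UtcNow, ex.Message, Environment.NewLine, ex.StackTrace));
    }
}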
Whether you process those records you can or throw everything out in the event of a problem is a design question that only you can answer - we don't have the context...
I think it could be something like this:

int count = 0;
foreach (var item in list)
{
    try
    {
        // update DB
        ++count;
    }
    catch (Exception ex)
    {
        // log exception
    }
}

// only after the whole list has been processed do we know whether every record succeeded
if (count == list.Count)
    return true;
else
    return false;
Another way:

bool result = true;
foreach (var item in list)
{
    try
    {
        // update DB
    }
    catch (Exception ex)
    {
        // log exception
        result = false;
    }
}

return result;
I'm trying to execute a basic transactional operation that contains two operations
Get the length of a set: SCARD MySet
Pop the entire set with the given length: SPOP MySet len
I know it is possible to use smembers and del consecutively. But what I want to achieve is to get the output of the first operation and use it in the second operation and do it in a transaction. Here is what I tried so far:
var transaction = this.cacheClient.Db1.Database.CreateTransaction();
var itemLength = transaction.SetLengthAsync(key).ContinueWith(async lengthTask =>
{
    var length = await lengthTask;
    try
    {
        // here I want to pass the length argument
        return await transaction.SetPopAsync(key, length); // probably here transaction is already committed
        // so it never passes this line and no exceptions thrown.
    }
    catch (Exception ex)
    {
        throw;
    }
});
await transaction.ExecuteAsync();
Also, I tried the same thing with CreateBatch and got the same result. I'm currently using the workaround I mentioned above. I know it is also possible to evaluate a Lua script, but I want to know whether it is possible with transactions or whether I am doing something terribly wrong.
The nature of redis is that you cannot read data during multi/exec - you only get results when the exec runs, which means it isn't possible to use those results inside the multi. What you are attempting is kinda doomed. There are two ways of doing what you want here:
Speculatively read what you need, then perform a multi/exec (transaction) block using that knowledge as a constraint, which SE.Redis will enforce inside a WATCH block; this is really complex and hard to get right, quite honestly
Use Lua, meaning: ScriptEvaluate[Async], where you can do everything you want in a series of operations that execute contiguously on the server without competing with other connections
Option 2 is almost always the right way to do this, ever since it became possible.
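For reference, a rough sketch of option 2 with StackExchange.Redis, assuming db is the same IDatabase instance used in the question (SPOP with a count also needs a server version that supports it):

var db = this.cacheClient.Db1.Database;

// SCARD + SPOP run atomically on the server inside one Lua script,
// so the length read by the first call can feed the second.
const string script = @"
    local len = redis.call('SCARD', KEYS[1])
    return redis.call('SPOP', KEYS[1], len)";

RedisResult result = await db.ScriptEvaluateAsync(script, new RedisKey[] { key });
RedisValue[] poppedMembers = (RedisValue[])result;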
I am calling a stored procedure that inserts data into a SQL Server database from C#. I have a number of constraints on the table, such as a unique column, etc. At present I have the following code:
try
{
    // insert data
}
catch (SqlException ex)
{
    if (ex.Message.ToLower().Contains("duplicate key"))
    {
        if (ex.Message.ToLower().Contains("url"))
        {
            return 1;
        }
        if (ex.Message.ToLower().Contains("email"))
        {
            return 2;
        }
    }
    return 3;
}
Is it better practice to check whether the column is unique etc. before inserting the data in C#, or in the stored procedure, or to let an exception occur and handle it as above? I am not a fan of the above but am looking for best practice in this area.
I view database constraints as a last resort kind of thing. (I.e. by all means they should be present in your schema as a backup way of maintaining data integrity.) But I'd say the data should really be valid before you try to save it in the database. If for no other reason, then because providing feedback about invalid input is a UI concern, and a data validity error really shouldn't bubble up and down the entire tier stack every single time.
Furthermore, there are many sorts of assertions you want to make about the shape of your data that can't be expressed using constraints easily. (E.g. state transitions of an order: "An order can only go to SHIPPED from PAID", or more complex scenarios.) That is, you'd need to resort to procedural-language based checks, ones that duplicate even more of your business logic, then have those report some sort of error code as well, and include yet more complexity in your app just for the sake of doing all your data validation in the schema definition.
Validation is inherently hard to place in an app since it concerns both the UI and is coupled to the model schema, but I err on the side of doing it near the UI.
I see two questions here, and here's my take...
Are database constraints good? For large systems they're indispensable. Most large systems have more than one front end, and not always in compatible languages where middle-tier or UI data-checking logic can be shared. They may also have batch processes in Transact-SQL or PL/SQL only. It's fine to duplicate the checking on the front end, but in a multi-user app the only way to truly check uniqueness is to insert the record and see what the database says. The same goes for foreign key constraints - you don't truly know until you try to insert/update/delete.
Should exceptions be allowed to throw, or should return values be substituted? Here's the code from the question:
try
{
    // insert data
}
catch (SqlException ex)
{
    if (ex.Message.ToLower().Contains("duplicate key"))
    {
        if (ex.Message.ToLower().Contains("url"))
        {
            return 1; // Sure, that's one good way to do it
        }
        if (ex.Message.ToLower().Contains("email"))
        {
            return 2; // Sure, that's one good way to do it
        }
    }
    return 3; // EVIL! Or at least quasi-evil :)
}
If you can guarantee that the calling program will actually act based on the return value, I think the return 1 and return 2 are best left to your judgement. I prefer to rethrow a custom exception for cases like this (for example DuplicateEmailException) but that's just me - the return values will do the trick too. After all, consumer classes can ignore exceptions just as easily as they can ignore return values.
I'm against the return 3. This means there was an unexpected exception (database down, bad connection, whatever). Here you have an unspecified error, and the only diagnostic information you have is this: "3". Imagine posting a question on SO that says I tried to insert a row but the system said '3'. Please advise. It would be closed within seconds :)
If you don't know how to handle an exception in the data class, there's no way a consumer of the data class can handle it. At this point you're pretty much hosed so I say log the error, then exit as gracefully as possible with an "Unexpected error" message.
I know I ranted a bit about the unexpected exception, but I've handled too many support incidents where the programmer just squelched database exceptions, and when something unexpected came up the app either failed silently or failed downstream, leaving zero diagnostic information. Very naughty.
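For what it's worth, a sketch of the rethrow-a-custom-exception approach mentioned above. DuplicateEmailException, DuplicateUrlException and the index names are invented for illustration; 2601/2627 are SQL Server's duplicate-key error numbers, which are generally more robust to check than the message text:

try
{
    // insert data
}
catch (SqlException ex)
{
    if (ex.Number == 2601 || ex.Number == 2627) // unique index / unique constraint violation
    {
        if (ex.Message.Contains("IX_Users_Email"))  // hypothetical index name
            throw new DuplicateEmailException("Email already exists.", ex);
        if (ex.Message.Contains("IX_Users_Url"))    // hypothetical index name
            throw new DuplicateUrlException("Url already exists.", ex);
    }
    throw; // unexpected: let it bubble up and get logged, don't return "3"
}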
I would prefer a stored procedure that checks for potential violations before just throwing the data at SQL Server and letting the constraint bubble up an error. The reasons for this are performance-related:
Performance impact of different error handling techniques
Checking for potential constraint violations before entering SQL Server TRY and CATCH logic
Some people will advocate that constraints at the database layer are unnecessary since your program can do everything. The reason I wouldn't rely solely on your C# program to detect duplicates is that people will find ways to affect the data without going through your C# program. You may introduce other programs later. You may have people writing their own scripts or interacting with the database directly. Do you really want to leave the table unprotected because they don't honor your business rules? And I don't think the C# program should just throw data at the table and hope for the best, either.
If your business rules change, do you really want to have to re-compile your app (or all of multiple apps)? I guess that depends on how well-protected your database is and how likely/often your business rules are to change.
I did something like this:
public class SqlExceptionHelper
{
    public SqlExceptionHelper(SqlException sqlException)
    {
        // Do Nothing.
    }

    public static string GetSqlDescription(SqlException sqlException)
    {
        switch (sqlException.Number)
        {
            case 21:
                return "Fatal Error Occurred: Error Code 21.";
            case 53:
                return "Error in Establishing a Database Connection: 53.";
            default:
                return "Unexpected Error: " + sqlException.Message;
        }
    }
}
This allows it to be reusable, and it will allow you to get the error codes from SQL Server.
Then just implement:
public class SiteHandler : ISiteHandler
{
    public string InsertDataToDatabase(Handler siteInfo)
    {
        try
        {
            // Open Database Connection, Run Commands, Some additional Checks.
            return "Success"; // placeholder for whatever the success path returns
        }
        catch (SqlException exception)
        {
            return SqlExceptionHelper.GetSqlDescription(exception);
        }
    }
}
Then it is providing some specific errors for common occurrences. But as mentioned above, you really should ensure that you've tested your data before you input it into your database, so that no mismatched constraints surface or exist.
Hope it points you in a good direction.
Depends on what you're trying to do. Some things to think about:
Where do you want to handle your error? I would recommend as close to the data as possible.
Who do you want to know about the error? Does your user need to know that 'you've already used that ID'...?
etc.
Also -- constraints can be good -- I don't 100% agree with millimoose's answer on that point. I mean, I do agree in the "it should be this way / better performance" ideal, but practically speaking, if you don't have control over your developers / QC, and especially when it comes to enforcing rules that could blow your database up (or otherwise break dependent objects like reports, etc.) if a duplicate key were to turn up somewhere, you need some barrier against (for example) the duplicate key entry.
I am writing an API that connects to a service which either returns a simple "Success" message or one of over 100 different flavors of failure.
Originally I thought to write the method that sends a request to this service such that if it succeeded the method returns nothing, but if it fails for whatever reason, it throws an exception.
I didn't mind this design very much, but on the other hand just today I was reading Joshua Bloch's "How to Design a Good API and Why it Matters", where he says "Throw Exceptions to indicate Exceptional Conditions...Don't force client to use exceptions for control flow." (and "Conversely, don't fail silently.")
On the other-other hand, I noticed that the HttpWebRequest I am using seems to throw an exception when the request fails, rather than returning a Response containing a "500 Internal Server Error" message.
What is the best pattern for reporting errors in this case? If I throw an exception on every failed request, am I in for massive pain at some point in the future?
Edit: Thank you very kindly for the responses so far. Some elaboration:
it's a DLL that will be given to the clients to reference in their application.
an analogous example of the usage would be ChargeCreditCard(CreditCardInfo i) - obviously when the ChargeCreditCard() method fails it's a huge deal; I'm just not 100% sure whether I should stop the presses or pass that responsibility on to the client.
Edit the Second:
Basically I'm not entirely convinced which of these two methods to use:
try {
    ChargeCreditCard(cardNumber, expDate, hugeAmountOMoney);
} catch (ChargeFailException e) {
    // client handles error depending on type of failure as specified by specific type of exception
}

or

var status = TryChargeCreditCard(cardNumber, expDate, hugeAmountOMoney);
if (!status.wasSuccessful) {
    // client handles error depending on type of failure as specified in status
}
e.g. when a user tries to charge a credit card, is the card being declined really an exceptional circumstance? Am I going down too far in the rabbit hole by asking this question in the first place?
Here's a short list of things to consider. While not comprehensive, I believe these things can help you write better code. Bottom line: Don't necessarily perceive exception handling as evil. Instead, when writing them, ask yourself: How well do I really understand the problem I am solving? More often than not, this will help you become a better developer.
Will other developers be able to read this? Can it be reasonably understood by the average developer? Example: ServiceConnectionException vs. a confusing ServiceDisconnectedConnectionStatusException
In the case of throwing an exception, how exceptional is the circumstance? What does the caller have to do in order to implement the method?
Is this exception fatal? Can anything really be done with this exception if it is caught? Threads aborting, out of memory.. you can't do anything useful. Don't catch it.
Is the exception confusing? Let's say you have a method called Car GetCarFromBigString(string desc) that takes a string and returns a Car object. If the majority use-case for that method is to generate a Car object from that string, don't throw an exception when a Car couldn't be determined from the string. Instead, write a method like bool TryGetCarFromBigString(string desc, out Car) (a quick sketch of that pattern follows this list).
Can this be easily prevented? Can I check something, let's say the size of an array or a variable being null?
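A small sketch of that Try pattern; the Car type and the parsing step are placeholders, not code from the question:

public class Car { /* placeholder */ }

public static bool TryGetCarFromBigString(string desc, out Car car)
{
    car = null;
    if (string.IsNullOrWhiteSpace(desc))
        return false; // easily prevented, no exception needed

    // ...parse desc and build the Car here...
    car = new Car();
    return true;
}

// The caller decides what failure means in its own context:
string input = "some description";
Car car;
if (TryGetCarFromBigString(input, out car))
{
    // use car
}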
For code readability's sake, let's potentially take a look at your context.
bool IsServiceAlive()
{
    bool connected = false; //bool is always initialized to false, but for readability in this context
    try
    {
        //Some check
        Service.Connect();
        connected = true;
    }
    catch (CouldNotConnectToSomeServiceException)
    {
        //Do what you need to do
    }
    return connected;
}

//or

void IsServiceAlive()
{
    try
    {
        //Some check
        Service.Connect();
    }
    catch (CouldNotConnectToSomeServiceException)
    {
        //Do what you need to do
        throw;
    }
}

static void Main(string[] args)
{
    //sample 1
    if (IsServiceAlive())
    {
        //do something
    }

    //sample 2
    try
    {
        if (IsServiceAlive())
        {
            //do something
        }
    }
    catch (CouldNotConnectToSomeServiceException)
    {
        //handle here
    }

    //sample 3
    try
    {
        IsServiceAlive();
        //work
    }
    catch (CouldNotConnectToSomeServiceException)
    {
        //handle here
    }
}
You can see above, that catching the CouldNotConnectToSomeServiceException in sample 3 doesn't necessarily yield any better readability if the context is simply a binary test. However, both work. But is it really necessary? Is your program hosed if you can't connect? How critical is it really? These are all factors you will need to take in to account. It's hard to tell since we don't have access to all of your code.
Let's take a look at some other options that most likely lead to problems.
//how will the code look when you have to do 50 string comparisons? Not pretty or scalable.
public class ServiceConnectionStatus
{
    public string Description { get; set; }
}

and

//how will your code look after adding 50 more of these?
public enum ServiceConnectionStatus
{
    Success,
    Failure,
    LightningStormAtDataCenter,
    UniverseExploded
}
I think you need to consider a few things in your design:
1) How will the API be accessed? If you are exposing it over web services, then throwing exceptions is probably not a good idea. If the API is in a DLL that you are providing for people to reference in their applications, then exceptions may be ok.
2) How much additional data needs to travel with the return value in order to make the failure response useful for the API consumer? If you need to provide usable information in your failure message (i.e. user id and login) as opposed to a string with that information embedded, then you could utilize either custom exceptions or an "ErrorEncountered" class that contains the error code and other usable information. If you just need to pass a code back, then an ENum indicating either success (0) or failure (any non-zero value) may be appropriate.
3) Forgot this in the original response: exceptions are expensive in the .NET framework. If your API will be called once in a while, this doesn't need to factor in. However, if the API is called for every web page that is served in a high-traffic site, for example, you definitely do not want to be throwing exceptions to indicate a request failure.
So the short answer, is that it really does depend on the exact circumstances.
I really like the "Throw Exceptions to indicate Exceptional Conditions" idea. They must have that name for a reason.
In a regular application, you would use File.Exists() prior to a File.Open() to prevent an exception from being thrown. Expected errors as exceptions are hard to handle.
In a client-server environment though, you may want to prevent having to send two requests and create a FileOpenResponse class to send both status and data (such as a file handle, in this case).
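As a rough illustration of that response-object idea applied to the question's scenario (every name here is invented, not part of any existing API):

public enum ChargeStatus
{
    Success,
    Declined,
    InvalidCard,
    GatewayUnavailable
    // ...one member per failure flavor you actually care to distinguish
}

public class ChargeResponse
{
    public ChargeStatus Status { get; set; }
    public string TransactionId { get; set; }  // populated on success
    public string FailureDetail { get; set; }  // populated on failure
    public bool WasSuccessful
    {
        get { return Status == ChargeStatus.Success; }
    }
}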
I have two methods, one called straight after another, which both throw the exact same 2 exceptions (IF an erroneous condition occurs, not stating that I'm getting exceptions).
For this, should I write separate try and catch blocks, with the one statement in each try block, and catch both exceptions (both of which I can handle, as I checked the MSDN class library reference and there is something I can do, e.g. re-open the SqlConnection or run a query rather than a stored proc which does not exist)? So code like this:
try
{
    obj.Open();
}
catch (SqlException)
{
    // Take action here.
}
catch (InvalidOperationException)
{
    // Take action here.
}
And likewise for the other method I call straight after. This seems like a very messy way of coding. The other way is to code with the exception variable (that is omitted here, as I am using AOP to log the exception details via a class-level attribute). Doing this could aid me in finding out which method caused an exception and then taking action accordingly. Is this the best approach, or is there another best practice altogether?
I also assume that, as only these two exceptions are thrown, I do not need to catch Exception, as that would be for an exception I cannot handle (causes way out of my control).
Thanks
You shouldn't catch an exception unless you can handle it in a sensible way and recover from the error. With that in mind, you should either choose not to catch these exceptions, or else you should catch them and do something useful and continue.
Assuming that you are trying to do the latter (handle the error and continue), does it really make sense to do the same thing no matter which of the two statements fails? Assume you have this:
try {
    f1();
    f2();
} catch (FooException) {
    // Recover from error and continue
}
f3();
In this case if f1() fails and you recover from the error, f2() will never be executed - it goes straight to f3(). Is that really what you want? Maybe it is sometimes... but not usually.
More likely, after the error from f1() you either want to quit completely with an error or to recover and then go on to execute f2(). If so then you would need two separate try/catch blocks.
If you're not interested in recovery but just logging the exceptions then the simplest way is to let them propagate and catch them at a higher level (but before your program crashes or becomes unusable) and log the message and stack trace. This ensures that you will log all exceptions and saves you having to insert try/catch and logging code in every method that could throw.
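A tiny sketch of that last idea: one catch-and-log point near the top of the call stack instead of try/catch plus logging in every data method (RunApplication and Log are placeholders):

static void Main()
{
    try
    {
        RunApplication(); // hypothetical entry point that calls down into the data layer
    }
    catch (Exception ex)
    {
        // Last-chance handler: record everything before exiting.
        Log(ex.Message);
        Log(ex.StackTrace);
    }
}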
You're right that you shouldn't be catching Exception.
Generally you need as many catch clauses as you have different recovery approaches. If the recovery behavior is different for these two methods, I see nothing wrong with using a try/catch for each.
Especially consider whether you'd run the second method after successfully recovering the first. If so, you definitely don't want to put the second method in the same try block where it will be skipped by the exception.
You could create a method that accepts an Action parameter:
void trySomething(Action mightThrow)
{
    try
    {
        mightThrow();
    }
    catch (SqlException)
    {
    }
    catch (InvalidOperationException)
    {
    }
}
Then you can get the name of the method that threw by mightThrow.Method.Name.
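For example, passing the methods as method groups (obj.Open comes from the question; obj.RunQuery is just a stand-in for the second call):

trySomething(obj.Open);
trySomething(obj.RunQuery); // hypothetical second method

// Inside trySomething, mightThrow.Method.Name would then be "Open" or "RunQuery".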
I have some linq2sql stuff which updates some rows.
Then when I submit I do this:
try
{
    database.SubmitChanges();
}
catch (ChangeConflictException)
{
    database.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
    database.SubmitChanges();
}
Now the second submit (the one in the catch) is throwing yet again a ChangeConflictException.
How is that possible? And if it is possible, how would one need to do the query? (I cannot put yet another try/catch around that one; when would I stop?)
I want only the changed values to be put in the database.
EDIT:
let me rephrase the intent of the question: when I say ResolveAll(KeepChanges), I would think that I am saying: "I don't care, just use my values". Instead, it throws yet again the same exception.
I was surprised by this behaviour since the examples on MSDN don't have a second try catch around the second SubmitChanges
So how many times can these exceptions be thrown (as many times as there are columns?), and can I avoid them altogether somehow (after saying ResolveAll)?
EDIT:
Last edit before I start the bounty:
I've made it into a neat loop as suggested by one of the commenters. But it doesn't matter how many times I retry: the moment it starts throwing exceptions, it will never do it without an exception. So either it works the first time, or it won't work at all.
Now my linq update has some 20 or 50 rows in it which need updating (I work with batches to speed things up).
Is every ResolveAll only fixing one issue in one column in one row, or is it smart enough to fix everything it encountered?
To recap: the values I just changed (only 1 or 2 columns) are the ones which need to go into the database no matter what. How can I do this using LINQ, or should I really resort to opening a SqlConnection for this? (If so, why LINQ in the first place?)
My code up till now:
int retry;
for (retry = 0; retry < 10; retry++)
{
    try
    {
        database.SubmitChanges();
        // submit succeeded... break loop
        break;
    }
    catch (ChangeConflictException)
    {
        database.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
        if (retry > 0)
        {
            Thread.Sleep(retry * 10); // introduce some wait, to see if this helps
        }
    }
}
EDIT: Found it!
Thanks to the link to the blog in the accepted answer I now cycle through all the conflicts and log them to see what is causing this.
And I'm glad I did, since as it turned out that one of the DB fields contains a trigger which updates something else in certain conditions. So I could resolve as much as I liked, every time the trigger would fire again, causing the next conflict.
This trigger I was obviously not aware of, since my DB admin put it in place to track something or other. Triggers can be a great tool, but if you are not aware of them they can cause major headaches!
It may not be picking up on all your conflicts when you call SubmitChanges() (and therefore not resolving them all), because the default behavior is to stop when it reaches the first one.
Try changing
database.SubmitChanges();
to
database.SubmitChanges(ConflictMode.ContinueOnConflict);
See http://arun-ts.blogspot.com/2009/08/linq-to-sql-concurrency-conflicts.html for more info. He also nests two levels of try/catch in his code sample for ResolveAll(), so the second time SubmitChanges() is tried, he's able to log any exception before exiting. That seems like a reasonable model to follow.
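A sketch of that logging idea using the LINQ to SQL conflict API, which is also how you would spot something like the trigger mentioned in the question's final edit (the Console target is a placeholder for whatever logging you use):

try
{
    database.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    // Log what actually conflicted before resolving.
    foreach (ObjectChangeConflict conflict in database.ChangeConflicts)
    {
        foreach (MemberChangeConflict member in conflict.MemberConflicts)
        {
            Console.WriteLine("{0}: original={1}, database={2}, current={3}",
                member.Member.Name, member.OriginalValue,
                member.DatabaseValue, member.CurrentValue);
        }
    }

    database.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
    database.SubmitChanges(); // may still throw; log again rather than swallow it
}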
If this is something you need to keep retrying, then an obvious reordering of the code is:
bool success = false;
while (!success)
{
    try
    {
        database.SubmitChanges();
        success = true;
    }
    catch (ChangeConflictException)
    {
        database.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
    }
}
I don't know much about databases, so I'll stay away from theorising about what your actual problem is, but maybe this will fix it :/