I have a simple table:
IPAddress (PK, string)
Requests (int)
It's a flood limiter. Every minute the table's data is deleted. On every page request, the Requests count is incremented for the given IPAddress.
It works great, and our website performance has increased significantly, as we do suffer some accidental/intentional effective DDoSes due to the nature of our product and website.
The only problem is, when an IP does send thousands of requests a minute to our website for whatever reason, we get these errors popping up:
Violation of PRIMARY KEY constraint 'PK_v2SiteIPRequests'. Cannot insert duplicate key in object 'dbo.v2SiteIPRequests'. The duplicate key value is ([IP_ADDRESS]). The statement has been terminated.
The code that makes the insert is:
/// <summary>
/// Call every time a page view is requested
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
            db.v2SiteIPRequests.InsertOnSubmit(n);
            db.SubmitChanges();
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();
            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
What's the best way to handle this exception, and why is it being thrown? Is a try catch best here?
If you get two requests simultaneously, the following happens:
Request one: is it in the database?
Request two: is it in the database?
Request one: No, not yet
Request two: No, not yet
Request one: INSERT
Request two: INSERT
Request one: WORKS
Request two: FAILS (already inserted a split second before)
There is nothing you can do here but catch the exception and handle it gracefully, maybe by using simple "try again" logic.
You've got a few race conditions there, especially when there are concurrent connections.
You may need to change your approach: always store each request, then query whether there are more in the timeframe than permitted, and take whatever action you need.
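Another option is to make the upsert atomic on the SQL Server side so the race cannot happen at all. A rough sketch using LINQ to SQL's ExecuteCommand (the UPDATE-then-INSERT pattern with UPDLOCK/HOLDLOCK is standard T-SQL; the method below is illustrative, not from the original post):
private static void DoRequestAtomic(string ipAddress)
{
    using (var db = new MainContext())
    {
        // UPDATE first; if no row matched, INSERT. The UPDLOCK/HOLDLOCK hints
        // inside one transaction serialize concurrent callers on the same key,
        // so the INSERT cannot race with another connection's INSERT.
        db.ExecuteCommand(@"
            SET XACT_ABORT ON;
            BEGIN TRAN;
            UPDATE dbo.v2SiteIPRequests WITH (UPDLOCK, HOLDLOCK)
            SET Requests = Requests + 1
            WHERE IPAddress = {0};
            IF @@ROWCOUNT = 0
                INSERT INTO dbo.v2SiteIPRequests (IPAddress, Requests) VALUES ({0}, 1);
            COMMIT;",
            ipAddress);

        // Read back the current count to decide on a ban.
        if (db.v2SiteIPRequests.Single(c => c.IPAddress == ipAddress).Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
        {
            BanIP(ipAddress);
        }
    }
}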
Here's the solution based on suggestions. It's ugly but works as far as I can tell.
/// <summary>
/// Call every time a page view is requested
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            // Catch insert race condition for PK violation. Especially susceptible when being hammered by requests from 1 IP
            try
            {
                var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
                db.v2SiteIPRequests.InsertOnSubmit(n);
                db.SubmitChanges();
            }
            catch (Exception e)
            {
                try
                {
                    // Can't reuse original context as it caches
                    using (var db2 = new MainContext())
                    {
                        var rec2 = db2.v2SiteIPRequests.Single(c => c.IPAddress == ipAddress);
                        rec2.Requests++;
                        db2.SubmitChanges();
                        if (rec2.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
                        {
                            BanIP(ipAddress);
                        }
                    }
                }
                catch (Exception ee)
                {
                    // Shouldn't reach here
                    Error.Functions.NewError(ee);
                }
            }
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();
            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
I have an application which runs multiple threads to insert data into a SQL Server 2017 database table using EF Core 5.
The C# code for inserting the domain model entities using EF Core 5 is as follows:
using (var ctx = this.dbContextFactory.CreateDbContext())
{
    //ctx.Database.AutoTransactionsEnabled = false;
    foreach (var rootEntity in request.RootEntities)
    {
        ctx.ChangeTracker.TrackGraph(rootEntity, node =>
        {
            if ((request.EntityTypes != null && request.EntityTypes.Contains(node.Entry.Entity.GetType()))
                || rootEntity == node.Entry.Entity)
            {
                if (node.Entry.IsKeySet)
                    node.Entry.State = Microsoft.EntityFrameworkCore.EntityState.Modified;
                else
                    node.Entry.State = Microsoft.EntityFrameworkCore.EntityState.Added;
            }
        });
    }
    await ctx.SaveChangesAsync(cancellationToken);
}
Each thread is responsible for instantiating its own DbContext instance, hence the use of dbContextFactory.
Some example SQL generated for the INSERT (MERGE) is as follows:
SET NOCOUNT ON;
DECLARE @inserted0 TABLE ([OrderId] bigint, [_Position] [int]);
MERGE [dbo].[Orders] USING (
VALUES (@p0, 0),
(@p1, 1),
(@p2, 2),
...
(@p43, 41)) AS i ([SomeColumn], _Position) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([SomeColumn])
VALUES (i.[SomeColumn])
OUTPUT INSERTED.[OrderId], i._Position
INTO @inserted0;
SELECT [t].[OrderId] FROM [dbo].[Orders] t
INNER JOIN @inserted0 i ON ([t].[OrderId] = [i].[OrderId])
ORDER BY [i].[_Position];
As these threads frequently run at the same time I get the following SQL exception:
Transaction (Process ID 99) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
EF Core implicitly sets the isolation level to READ COMMITTED.
Using SQL Profiler, I traced the transaction deadlock to this pair of statements (deadlock graph not reproduced here).
My concerns:
Frustratingly, the SQL generated by EF Core includes two statements: a MERGE, then a SELECT. I do not understand the purpose of the SELECT, given that the primary key identities are already available from the @inserted0 table variable. Given this answer, the MERGE statement in isolation would be sufficient to make this atomic.
I believe it is this SELECT which is causing the transaction deadlock.
I tried to resolve the problem by using READ COMMITTED SNAPSHOT to avoid the conflict with the primary key lookup; however, I still got the same error, even though this isolation level should avoid locks and use row versioning instead.
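(For reference, this is the standard T-SQL for switching a database to READ COMMITTED SNAPSHOT; the database name is a placeholder:)
-- WITH ROLLBACK IMMEDIATE kicks out other sessions so the switch can complete.
ALTER DATABASE [MyDatabase] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;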
My attempt at solving the problem:
The only way I could find to solve this problem was to explicitly prevent a transaction being started by EF Core, hence the following code:
ctx.Database.AutoTransactionsEnabled = false;
I have tested this numerous times and haven't received a transaction deadlock. Given that the logic merely inserts new records, I believe this is safe.
Does anyone have any advice to fixing this problem?
Thanks for your time.
We had the same issues with INSERT (MERGE) statements on multiple threads. We didn't want to enable the EnableRetryOnFailure() option for all transactions, so we wrote the following DbContext extension method.
public static async Task<TResult> SaveWithRetryAsync<TResult>(this DbContext context,
    Func<Task<TResult>> bulkInsertOperation,
    Func<TResult, Task<bool>> verifyBulkOperationSucceeded,
    IsolationLevel isolationLevel = IsolationLevel.Unspecified,
    int retryLimit = 6,
    int maxRetryDelayInSeconds = 30)
{
    var existingTransaction = context.Database.CurrentTransaction?.GetDbTransaction();
    if (existingTransaction != null)
        throw new InvalidOperationException($"Cannot run {nameof(SaveWithRetryAsync)} inside a transaction");
    if (context.ChangeTracker.HasChanges())
    {
        throw new InvalidOperationException(
            "DbContext should be saved before running this action to revert only the changes of this action in case of a concurrency conflict.");
    }

    const int sqlErrorNrOnDuplicatePrimaryKey = 2627;
    const int sqlErrorNrOnSnapshotIsolation = 3960;
    const int sqlErrorDeadlock = 1205;
    int[] sqlErrorsToRetry = { sqlErrorNrOnDuplicatePrimaryKey, sqlErrorNrOnSnapshotIsolation, sqlErrorDeadlock };

    var retryState = new SaveWithRetryState<TResult>(bulkInsertOperation);

    // Use EF Core's connection resiliency feature for retrying (see https://learn.microsoft.com/en-us/ef/core/miscellaneous/connection-resiliency)
    // Usually the IExecutionStrategy is configured via DbContextOptionsBuilder.UseSqlServer(..., options.EnableRetryOnFailure()).
    // In ASP.NET, the DbContext is configured in Startup.cs and we don't want this retry behaviour everywhere for each db operation.
    var executionStrategyDependencies = context.Database.GetService<ExecutionStrategyDependencies>();
    var retryStrategy = new CustomSqlServerRetryingExecutionStrategy(executionStrategyDependencies, retryLimit, TimeSpan.FromSeconds(maxRetryDelayInSeconds), sqlErrorsToRetry);

    try
    {
        var result = await retryStrategy.ExecuteInTransactionAsync(
            retryState,
            async (state, cancelToken) =>
            {
                try
                {
                    var r = await state.Action();
                    await context.SaveChangesAsync(false, cancelToken);
                    if (state.FirstException != null)
                    {
                        Log.Logger.Warning(
                            $"Action passed to {nameof(SaveWithRetryAsync)} failed {state.NumberOfRetries} times " +
                            $"(retry limit={retryLimit}, ThreadId={Thread.CurrentThread.ManagedThreadId}).\nFirst exception was: {state.FirstException}");
                    }
                    state.Result = r;
                    return r;
                }
                catch (Exception ex)
                {
                    context.RevertChanges();
                    state.NumberOfRetries++;
                    state.FirstException ??= ex;
                    state.LastException = ex;
                    throw;
                }
            },
            (state, cancelToken) => verifyBulkOperationSucceeded(retryState.Result),
            context.GetSupportedIsolationLevel(isolationLevel));

        context.ChangeTracker.AcceptAllChanges();
        return result;
    }
    catch (Exception ex)
    {
        throw new InvalidOperationException(
            $"DB Transaction in {nameof(SaveWithRetryAsync)} failed. " +
            $"Tried {retryState.NumberOfRetries} times (retry limit={retryLimit}, ThreadId={Thread.CurrentThread.ManagedThreadId}).\n" +
            $"First exception was: {retryState.FirstException}.\nLast exception was: {retryState.LastException}",
            ex);
    }
}
With the following CustomSqlServerRetryingExecutionStrategy:
public class CustomSqlServerRetryingExecutionStrategy : SqlServerRetryingExecutionStrategy
{
    public CustomSqlServerRetryingExecutionStrategy(ExecutionStrategyDependencies executionStrategyDependencies, int retryLimit, TimeSpan fromSeconds, int[] sqlErrorsToRetry)
        : base(executionStrategyDependencies, retryLimit, fromSeconds, sqlErrorsToRetry)
    {
    }

    protected override bool ShouldRetryOn(Exception exception)
    {
        // SqlServerRetryingExecutionStrategy does not check the base exception, maybe a bug in EF Core?!
        return base.ShouldRetryOn(exception) || base.ShouldRetryOn(exception.GetBaseException());
    }
}
Helper class to save the current (retry) state:
private class SaveWithRetryState<T>
{
    public SaveWithRetryState(Func<Task<T>> action)
    {
        Action = action;
    }

    public Exception FirstException { get; set; }
    public Exception LastException { get; set; }
    public int NumberOfRetries { get; set; }
    public Func<Task<T>> Action { get; }
    public T Result { get; set; }
}
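Note: SaveWithRetryAsync above also calls two helpers that were not included in the post, RevertChanges() and GetSupportedIsolationLevel(). A minimal sketch of what RevertChanges might look like (an assumption on my part, not the original author's code):
public static void RevertChanges(this DbContext context)
{
    // Detach added entries and restore original values on modified/deleted ones,
    // so that a retry starts from a clean change tracker.
    foreach (var entry in context.ChangeTracker.Entries().ToList())
    {
        switch (entry.State)
        {
            case EntityState.Added:
                entry.State = EntityState.Detached;
                break;
            case EntityState.Modified:
            case EntityState.Deleted:
                entry.CurrentValues.SetValues(entry.OriginalValues);
                entry.State = EntityState.Unchanged;
                break;
        }
    }
}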
Now the extension method can be used as follows. The code will retry adding the bulk up to five times.
await _context.SaveWithRetryAsync(
    // method to insert the bulk
    async () =>
    {
        var listOfAddedItems = new List<string>();
        foreach (var item in bulkImport)
        {
            listOfAddedItems.Add(item.Guid);
            await _context.Import.AddAsync(item);
        }
        return listOfAddedItems;
    },
    // method to check if the bulk insert was successful
    listOfAddedItems =>
    {
        if (listOfAddedItems == null)
            return Task.FromResult(true);
        return _context.Import.AsNoTracking().AnyAsync(x => x.Guid == listOfAddedItems.First());
    },
    IsolationLevel.Snapshot,
    5,    // retry limit
    100); // max retry delay in seconds
For background information on why this can happen, have a look at this discussion: https://github.com/dotnet/efcore/issues/21899
I have a piece of code where multiple threads consume IDs from a ConcurrentBag<string> and share a single API key, like the following:
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs
var apiKey = ctx.ApiKey.FirstOrDefault();

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            // the idea is that only one thread should access the DB record to update it, not multiple ones
            using (var ctx = new MyEntities())
            {
                var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
                findApi.Expired = DateTime.Now.AddHours(1);
                findApi.FailedCalls += 1;
                ctx.SaveChanges(); // persist the update
            }
        }
    }
});
So in a situation like this, if I have a list of 10 IDs and one key that is being used for the API calls, once the key reaches the hourly call limit, I will catch the exception from the API and then flag the key not to be used for the next hour.
However, in the code I have pasted above, all 10 threads will access the record from the DB and count the failed calls 10 times instead of only once.
So my question is: how do I prevent all of the threads from updating the DB record, and instead allow only one thread to access the DB and update the record (increment the failed calls by 1)?
It looks like you only need to update apiKey.RecordId once if an error occurred, why not just track the fact that an error occurred and update once at the end? e.g.
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs
var apiKey = ctx.ApiKey.FirstOrDefault();
var expired = false;

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            expired = true;
        }
    }
});

if (expired)
{
    // this way only one update hits the DB record, not one per thread
    using (var ctx = new MyEntities())
    {
        var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
        findApi.Expired = DateTime.Now.AddHours(1);
        findApi.FailedCalls += 1;
        ctx.SaveChanges(); // persist the update
    }
}
You are in a parallel loop, therefore the most likely behaviour is that each of the 10 threads fires, tries to connect to your API with the expired key, and then fails, throwing the exception.
There are a couple of reasonable solutions to this:
Check the key before you use it
Can you take the first run through the loop out of sequence? For example:
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();

bool expired = true;
try {
    // Perform API call for the first id
    expired = false;
}
catch (Exception ex) {
    // log to database once
}

// Or grab another, newer key?
if (!expired)
{
    Parallel.ForEach(ids.Skip(1), id =>
    {
        // Perform API Calls
    });
}
This would work reasonably well if the key was likely to have expired before you use it but will remain active while you use it.
Hold on to the failures
If the key is possibly valid when you start but could expire while you are using it, you might want to capture that failure and then log it once at the end.
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();

// Assume the key hasn't expired - don't set to false within the loops
bool expired = false;

Parallel.ForEach(ids, id =>
{
    try {
        // Perform API calls
    }
    catch (Exception e) {
        if (e.Message == "Expired") {
            // Doesn't matter if many threads set this to true.
            expired = true;
        }
    }
});

if (expired) {
    // Log to database once.
}
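If you do want the first failing thread to flag the key in the database immediately, instead of waiting for the loop to finish, an Interlocked flag guarantees exactly one thread performs the update. A sketch of that alternative (assuming the same ids/apiKey/MyEntities setup as in the question; this is not from either answer above):
int expiredFlag = 0; // 0 = not handled yet, 1 = one thread has claimed the update

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired"
            && Interlocked.CompareExchange(ref expiredFlag, 1, 0) == 0)
        {
            // Only the first thread to flip the flag reaches this block.
            using (var ctx = new MyEntities())
            {
                var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
                findApi.Expired = DateTime.Now.AddHours(1);
                findApi.FailedCalls += 1;
                ctx.SaveChanges();
            }
        }
    }
});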
I'm running the below code to update some records based on a bank transaction history file that is sent to us each morning. It's pretty basic stuff but, for some reason, when I hit the end, dbContext.GetChangeSet() reports "0" for all actions.
public void ProcessBatchFile(string fileName)
{
    List<string[]> failed = new List<string[]>();
    int recCount = 0;
    DateTime dtStart = DateTime.Now;

    using (ePermitsDataContext dbContext = new ePermitsDataContext())
    {
        try
        {
            // A transaction must be begun before any data is read.
            dbContext.BeginTransaction();
            dbContext.ObjectTrackingEnabled = true;

            // Load all the records for this batch file.
            var batchRecords = (from b in dbContext.AmegyDailyFiles
                                where b.FileName == fileName
                                    && b.BatchProcessed == false
                                    && (b.FailReason == null || b.FailReason.Trim().Length < 1)
                                select b);

            // Loop through the loaded records
            int paymentID;
            foreach (var r in batchRecords)
            {
                paymentID = 0;
                try
                {
                    // We have to 'parse' the primary key, since it's stored as a string value with leading zero's.
                    if (!int.TryParse(r.TransAct.TrimStart('0'), out paymentID))
                        throw new Exception("TransAct value is not a valid integer: " + r.TransAct);

                    // Store the parsed, Int32 value in the original record and read the "real" record from the database.
                    r.OrderPaymentID = paymentID;
                    var orderPayment = this.GetOrderPayment(dbContext, paymentID);

                    // If we haven't processed this payment "Payment Received", do it now.
                    if (string.IsNullOrWhiteSpace(orderPayment.AuthorizationCode))
                        this.PaymentReceived(orderPayment, r.AuthorizationNumber);

                    // Update the PaymentTypeDetailID (type of Credit Card--all other types will return NULL).
                    var paymentTypeDetail = dbContext.PaymentTypes.FirstOrDefault(w => w.PaymentType1 == r.PayType);
                    orderPayment.PaymentTypeDetailID = (paymentTypeDetail != null ? (int?)paymentTypeDetail.PaymentTypeID : null);

                    // Mark the batch record as processed.
                    r.BatchProcessed = true;
                    r.BatchProcessedDateTime = DateTime.Now;
                    dbContext.SubmitChanges();
                }
                catch (Exception ex)
                {
                    // If there's a problem, just record the error message and add it to the "failed" list for logging and notification.
                    if (paymentID > 0)
                        r.OrderPaymentID = paymentID;
                    r.BatchProcessed = false;
                    r.BatchProcessedDateTime = null;
                    r.FailReason = ex.Message;
                    failed.Add(new string[] { r.TransAct, ex.Message });
                    dbContext.SubmitChanges();
                }
                recCount++;
            }
            dbContext.CommitTransaction();
        }
        // Any transaction will already be committed, if the process completed successfully. I just want to make
        // absolutely certain that there's no chance of leaving a transaction open.
        finally { dbContext.RollbackTransaction(); }
    }

    TimeSpan procTime = DateTime.Now.Subtract(dtStart);

    // Send an email notification that the processor completed.
    System.Text.StringBuilder sb = new System.Text.StringBuilder();
    sb.AppendFormat("<p>Processed {0} batch records from batch file '{1}'.</p>", recCount, fileName);
    if (failed.Count > 0)
    {
        sb.AppendFormat("<p>The following {0} records failed:</p>", failed.Count);
        sb.Append("<ul>");
        for (int i = 0; i < failed.Count; i++)
            sb.AppendFormat("<li>{0}: {1}</li>", failed[i][0], failed[i][1]);
        sb.Append("</ul>");
    }
    sb.AppendFormat("<p>Time taken: {0}:{1}:{2}.{3}</p>", procTime.Hours, procTime.Minutes, procTime.Seconds, procTime.Milliseconds);
    EMailHelper.SendAdminEmailNotification("Batch Processing Complete", sb.ToString(), true);
}
The dbContext.BeginTransaction() method is something I added to the DataContext just to make it easy to use explicit transactions. I'm fairly confident that this isn't the problem, since it's used extensively elsewhere in the application. Our database design makes it necessary to use explicit transactions for a few, specific operations, and the call to "PaymentReceived" happens to be one of them.
I have stepped through the code and confirmed that the Rollback() method on the transaction itself is not being called, and I have also checked dbContext.GetChangeSet() before the call to CommitTransaction() happens, with the same result.
I have included the BeginTransaction(), CommitTransaction() and RollbackTransaction() method bodies below, just for clarity.
/// <summary>
/// Begins a new explicit transaction on this context. This is useful if you need to perform a call to SubmitChanges multiple times due to "circular" foreign key linkage, but still want to maintain an atomic write.
/// </summary>
public void BeginTransaction()
{
    if (this.HasOpenTransaction)
        return;
    if (this.Connection.State != System.Data.ConnectionState.Open)
        this.Connection.Open();
    System.Data.Common.DbTransaction trans = this.Connection.BeginTransaction();
    this.Transaction = trans;
    this._openTrans = true;
}

/// <summary>
/// Commits the current transaction (if active) and submits all changes on this context.
/// </summary>
public void CommitTransaction()
{
    this.SubmitChanges();
    if (this.Transaction != null)
        this.Transaction.Commit();
    this._openTrans = false;
    this.RollbackTransaction(); // Since the transaction has already been committed, this just disposes and decouples the transaction object itself.
}

/// <summary>
/// Disposes and removes an existing transaction on this context. This is useful if you want to use the context again after an explicit transaction has been used.
/// </summary>
public void RollbackTransaction()
{
    // Kill/Rollback the transaction, as necessary.
    try
    {
        if (this.Transaction != null)
        {
            if (this._openTrans)
                this.Transaction.Rollback();
            this.Transaction.Dispose();
            this.Transaction = null;
        }
        this._openTrans = false;
    }
    catch (ObjectDisposedException) { } // If this gets called after the object is disposed, we don't want to let it throw exceptions.
    catch { throw; }
}
I just found the problem: my DBA didn't put a primary key on the table when he created it for me, so LINQ to SQL did not generate any of the "PropertyChanged" event/handler code in the entity class, which is why the DataContext was not aware that changes were being made. Apparently, if your table has no primary key, LINQ to SQL won't track any changes to that table. That makes sense, but it would be nice if there were some kind of notification to that effect. I'm sure my DBA didn't think about it, because this table is just a way of "tracking" which line items from the text file have been processed and doesn't directly relate to any other tables.
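If you want to catch this situation early, the LINQ to SQL mapping model exposes enough metadata to assert at startup that every mapped table has an identity (primary key) member. A small sketch of that check (the helper itself is my addition, not from the original post; it uses System.Data.Linq.Mapping and LINQ):
// Fails fast if any mapped entity has no primary key, which would silently
// disable change tracking for that table.
static void AssertAllTablesHavePrimaryKeys(System.Data.Linq.Mapping.MetaModel mapping)
{
    var untracked = mapping.GetTables()
        .Where(t => !t.RowType.IdentityMembers.Any())
        .Select(t => t.TableName)
        .ToList();

    if (untracked.Any())
        throw new InvalidOperationException(
            "Tables without a primary key (changes will NOT be tracked): " + string.Join(", ", untracked));
}

// Usage: AssertAllTablesHavePrimaryKeys(new ePermitsDataContext().Mapping);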
I've got a table in the database:

USERID   MONEY
------   -----
1        500
The money value can be changed only by the logged-in user who owns the account. I've got a function like:
bool buy(int moneyToSpend)
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    if (moneyRow.MONEY < moneyToSpend)
        return false;

    //code for placing order
    moneyRow.MONEY -= moneyToSpend;
    return true;
}
I know that MVC sessions are always synchronous, so there will never be two simultaneous calls to this function in one user session. But what if the user logs in to the site twice from different browsers? Will it still be a single-threaded session, or can I get two concurrent requests to this function?
And if there will be concurrency, then how should I handle it with EF? Normally in ADO I would use SQL Server's "BEGIN TRANSACTION" for this type of situation, but I have no idea how to do that with EF.
Thank you for your time!
I would suggest you use a RowVersion column to handle concurrent requests.
Good reference here: http://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
// in UserMoney.cs
[Timestamp]
public byte[] RowVersion { get; set; }

// in model builder
modelBuilder.Entity<UserMoney>().Property(p => p.RowVersion).IsConcurrencyToken();

// The update logic
public bool Buy(int moneyToSpend, byte[] rowVersion)
{
    try
    {
        var moneyRow = db.UserMoney.Find(loggedinUserID);
        if (moneyRow.MONEY < moneyToSpend)
        {
            return false;
        }

        //code for placing order
        moneyRow.MONEY -= moneyToSpend;
        db.SaveChanges(); // the concurrency check happens here
        return true;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        var entry = ex.Entries.Single();
        var submittedUserMoney = (UserMoney)entry.Entity;
        var databaseValue = entry.GetDatabaseValues();
        if (databaseValue == null)
        {
            // this entry no longer exists in db
        }
        else
        {
            // this entry exists and has a newer version in db
            var userMoneyInDb = (UserMoney)databaseValue.ToObject();
        }
        return false;
    }
    catch (RetryLimitExceededException)
    {
        // probably put some logs here
        return false;
    }
}
I do not think this would be a major problem for you since, as far as I know, SQL Server will not allow asynchronous commits to the same user's row from the same thread; it has to finish one process before moving on to the next. But you can try something like this:
using (var db = new YourContext())
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    moneyRow.MONEY -= moneyToSpend;

    bool saveFailed;
    do
    {
        saveFailed = false;
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            saveFailed = true;
            // Update original values from the database
            var entry = ex.Entries.Single();
            entry.OriginalValues.SetValues(entry.GetDatabaseValues());
        }
    } while (saveFailed);
}
More can be found here: Optimistic Concurrency Patterns
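Another option, instead of optimistic concurrency, is to push the check and the decrement into a single atomic UPDATE, so two concurrent requests can never both succeed against the same balance. A sketch assuming EF 6's Database.ExecuteSqlCommand and the table from the question:
bool Buy(int moneyToSpend)
{
    // One atomic statement: the WHERE clause both locates the row and
    // verifies the balance, so two concurrent calls cannot both pass the check.
    int rowsAffected = db.Database.ExecuteSqlCommand(
        "UPDATE UserMoney SET MONEY = MONEY - @p0 WHERE USERID = @p1 AND MONEY >= @p0",
        moneyToSpend, loggedinUserID);

    return rowsAffected == 1;
}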
I have a real time app that tracks assets around a number of sites across the country. As part of this solution I have 8 client apps that update a central server.
My question is that sometimes the apps lose connection to the central server, and I am wondering what the best way to deal with this is. I know I could just increase the max send/receive times to deal with the timeout, BUT I also want a graceful solution for when the connection to the server is down.
For example, I'm calling my services like this:
using (var statusRepository = new StatusRepositoryClient.StatusRepositoryClient())
{
    statusId = statusRepository.GetIdByName(licencePlateSeen.CameraId.ToString());
}
I was thinking of adding a try/catch so...
using (var statusRepository = new StatusRepositoryClient.StatusRepositoryClient())
{
    try
    {
        statusId = statusRepository.GetIdByName(licencePlateSeen.CameraId.ToString());
    }
    catch (TimeoutException timeout)
    {
        LogMessage(timeout);
    }
    catch (CommunicationException comm)
    {
        LogMessage(comm);
    }
}
Dealing with it this way doesn't allow me to rerun the code without a ton of repeated code. Anyone got any suggestions?
EDIT: Looking into Sixto Saez's and user24601's answers, having an overall solution is better than dealing with timeouts at the individual-exception level, BUT... I was thinking that the below would solve my problem (though it would add a TON of extra error-handling code):
void Method(int statusId)
{
    var statusRepository = new StatusRepositoryClient.StatusRepositoryClient();
    try
    {
        IsServerUp();
        statusId = statusRepository.GetIdByName(licencePlateSeen.CameraId.ToString());
        statusRepository.Close();
    }
    catch (Exception ex)
    {
        statusRepository.Abort();
        if (ex is TimeoutException || ex is CommunicationException)
        {
            LogMessage(ex);
            Method(statusId);
        }
        else
        {
            throw new Exception(ex.Message + ex.InnerException);
        }
    }
}
bool IsServerUp()
{
    var x = new Ping();
    var reply = x.Send(IPAddress.Parse("127.0.0.1"));
    if (reply == null)
    {
        IsServerUp();
    }
    else
    {
        if (reply.Status != IPStatus.Success)
        {
            IsServerUp();
        }
    }
    return true;
}
For starters, I think your WCF error handling is wrong. It should look like this:
var statusRepository = new StatusRepositoryClient.StatusRepositoryClient();
try
{
    statusId = statusRepository.GetIdByName(licencePlateSeen.CameraId.ToString());
    statusRepository.Close();
}
catch (Exception e)
{
    statusRepository.Abort();
    LogMessage(e);
    throw; //I would do this to let user know.
}
I would also re-throw the error to let the user know about the problem.
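To address the "rerun the code without a ton of repeated code" part of the question: the Close/Abort handling and a bounded retry can be pulled into one small generic helper. This is only a sketch; the retry count and delay are arbitrary, and StatusRepositoryClient is the client type from the question:
static T CallWithRetry<T>(Func<StatusRepositoryClient.StatusRepositoryClient, T> call, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        var client = new StatusRepositoryClient.StatusRepositoryClient();
        try
        {
            T result = call(client);
            client.Close();
            return result;
        }
        catch (Exception ex)
        {
            client.Abort(); // never Close() a faulted channel
            if (!(ex is TimeoutException || ex is CommunicationException))
                throw;
            LogMessage(ex);
            if (attempt >= maxAttempts)
                throw; // give up and let the caller know
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // crude back-off
        }
    }
}

// Usage:
// statusId = CallWithRetry(c => c.GetIdByName(licencePlateSeen.CameraId.ToString()));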
Prior to designing your exception handling, one important decision to make is whether you want guaranteed delivery of each message the client sends or is it OK for the service to "lose" some. For guaranteed delivery, the best built-in solution is the netMsmqBinding assuming the client can be configured to support it. Otherwise, there is a lightweight reliable messaging capability built into WCF. You'll be going down a rabbit hole if you try to handle message delivery purely through exception handling... :)
I have a two-pronged approach to verifying the server is up:
1) I have set up a 'PING' to the server every 5 seconds. The server responds with a 'PONG' and a load rating (low, medium, high, so the client can adjust its load on the server). If the client EVER doesn't receive a pong it assumes the server is down (since this is very low stress on the server - just listen and respond).
2) Random timeouts like the one you are catching are logged in a ConnectionMonitor class along with all successful connections. A single one of these calls timing out is not enough to consider the server down since some may be very processor heavy, or may just take a very long time. However, a high enough percentage of these will cause the application to go into server timeout.
I also didn't want to throw up a message for every single connection timeout, because it was happening too frequently to people who use poorer servers (or just some computer lying in their lab as a server). Most of my calls can be missed once or twice, but missing 5 or 6 calls clearly warrants interrupting the user.
When a server-timeout state happens, I throw up a little dialog box explaining what's happening to the user.
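A minimal sketch of the ConnectionMonitor idea described above (the class name, window size, and threshold are illustrative, not from the real app):
// Tracks recent call outcomes and flags a server-timeout state when the
// failure rate over a sliding window gets too high.
class ConnectionMonitor
{
    private readonly Queue<bool> _recentCalls = new Queue<bool>(); // true = success
    private readonly int _windowSize = 20;             // how many calls to remember
    private readonly double _failureThreshold = 0.25;  // 25% failures => server down
    private readonly object _lock = new object();

    public void Record(bool success)
    {
        lock (_lock)
        {
            _recentCalls.Enqueue(success);
            while (_recentCalls.Count > _windowSize)
                _recentCalls.Dequeue();
        }
    }

    public bool ServerTimedOut
    {
        get
        {
            lock (_lock)
            {
                if (_recentCalls.Count == 0)
                    return false;
                int failures = _recentCalls.Count(ok => !ok); // needs System.Linq
                return (double)failures / _recentCalls.Count >= _failureThreshold;
            }
        }
    }
}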
Hi, please see my solution below. Also, please note that the below code has not been compiled, so it may have some logic and typing errors.
bool IsServerUp()
{
    var x = new Ping();
    var reply = x.Send(IPAddress.Parse("127.0.0.1"));
    if (reply == null) return false;
    return reply.Status == IPStatus.Success;
}
int? GetStatusId()
{
    try
    {
        using (var statusRepository = new StatusRepositoryClient.StatusRepositoryClient())
        {
            return statusRepository.GetIdByName(licencePlateSeen.CameraId.ToString());
        }
    }
    catch (TimeoutException te)
    {
        //Log that a TimeoutException occurred
        return null;
    }
}
void GetStatus()
{
    try
    {
        TimeSpan sleepTime = new TimeSpan(0, 0, 5);
        int maxRetries = 10;

        while (!IsServerUp())
        {
            System.Threading.Thread.Sleep(sleepTime);
        }

        int? statusId = null;
        int retryCount = 0;
        while (!statusId.HasValue)
        {
            statusId = GetStatusId();
            retryCount++;
            if (retryCount > maxRetries)
                throw new ApplicationException(String.Format("{0} Maximum Retries reached in order to get StatusId", maxRetries));
            System.Threading.Thread.Sleep(sleepTime);
        }
    }
    catch (Exception ex)
    {
        //Log that an exception occurred
    }
}