I've got a table in the database:
USERID | MONEY
-------|------
1      | 500
The money value can be changed only by the logged-in user that owns the account. I've got a function like:
bool buy(int moneyToSpend)
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    if (moneyRow.MONEY < moneyToSpend)
        return false;

    // code for placing order
    moneyRow.MONEY -= moneyToSpend;
    return true;
}
I know that MVC sessions are always synchronous, so there will never be 2 simultaneous calls to this function in one user session. But what if a user logs in to the site 2 times from different browsers? Will it still be a single-threaded session, or can I get 2 concurrent requests to this function?
And if there is concurrency, how should I handle it with EF? Normally in ADO I would use MSSQL's "BEGIN WORK" for this type of situation, but I have no idea how to do it with EF.
Thank you for your time!
I would suggest using a RowVersion column to handle concurrent requests.
Good reference here: http://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
// in UserMoney.cs
[Timestamp]
public byte[] RowVersion { get; set; }
// in model builder
modelBuilder.Entity<UserMoney>().Property(p => p.RowVersion).IsConcurrencyToken();
// The update logic
public bool Buy(int moneyToSpend, byte[] rowVersion)
{
    try
    {
        var moneyRow = db.UserMoney.Find(loggedinUserID);
        if (moneyRow.MONEY < moneyToSpend)
        {
            return false;
        }

        // code for placing order
        moneyRow.MONEY -= moneyToSpend;
        db.SaveChanges(); // the concurrency exception is raised here on a version conflict
        return true;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        var entry = ex.Entries.Single();
        var submittedUserMoney = (UserMoney) entry.Entity;
        var databaseValues = entry.GetDatabaseValues();
        if (databaseValues == null)
        {
            // this row no longer exists in the db
        }
        else
        {
            // this row still exists but has a newer version in the db
            var userMoneyInDb = (UserMoney) databaseValues.ToObject();
        }
        return false;
    }
    catch (RetryLimitExceededException)
    {
        // probably put some logs here
        return false;
    }
}
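Since the asker also mentioned MSSQL's "BEGIN WORK": EF 6 exposes explicit transactions as well, via Database.BeginTransaction(). A minimal sketch (not the answer's recommended approach, and reusing the same hypothetical db context and UserMoney table from above) could look like:

```csharp
// Sketch: explicit transaction in EF 6, so that placing the order and
// deducting the balance commit or roll back together.
public bool BuyWithTransaction(int moneyToSpend)
{
    using (var tx = db.Database.BeginTransaction())
    {
        try
        {
            var moneyRow = db.UserMoney.Find(loggedinUserID);
            if (moneyRow.MONEY < moneyToSpend)
            {
                return false;
            }

            // code for placing order
            moneyRow.MONEY -= moneyToSpend;
            db.SaveChanges();
            tx.Commit(); // both changes become visible atomically
            return true;
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}
```

Note that a transaction alone does not replace the RowVersion check; combining both gives the strongest guarantee.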
I do not think this will be a major problem for you, since as far as I know MSSQL will not allow asynchronous data commits to the same row from the same thread: it has to finish one operation before moving to the next one. But you can try something like this:
using (var db = new YourContext())
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    moneyRow.MONEY -= moneyToSpend;
    bool saveFailed;
    do
    {
        saveFailed = false;
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            saveFailed = true;

            // Update original values from the database
            var entry = ex.Entries.Single();
            entry.OriginalValues.SetValues(entry.GetDatabaseValues());
        }
    } while (saveFailed);
}
More can be found here: Optimistic Concurrency Patterns
Related
I have a piece of code where multiple threads share an API key while iterating over a ConcurrentBag<string> of IDs, like the following:
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs
var apiKey = ctx.ApiKey.FirstOrDefault();

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            // the idea is that only one thread should access the DB record to update it, not multiple ones
            using (var ctx = new MyEntities())
            {
                var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
                findApi.Expired = DateTime.Now.AddHours(1);
                findApi.FailedCalls += 1;
            }
        }
    }
});
So in a situation like this, if I have a list of 10 IDs and 1 key being used for the API calls, once the key reaches its hourly call limit I catch the exception from the API and flag the key not to be used for the next hour.
However, in the code I have pasted above, all 10 threads access the record from the DB and count the failed call 10 times, instead of only once.
So my question is: how do I prevent all of the threads from updating the DB record, and instead allow only one thread to access the DB and update the record (increment failed calls by 1)?
It looks like you only need to update the record once if an error occurred, so why not just track the fact that an error occurred and update once at the end? E.g.:
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs
var apiKey = ctx.ApiKey.FirstOrDefault();
var expired = false;

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            expired = true;
        }
    }
});

if (expired)
{
    // only one thread (the main one, after the loop) accesses the DB record
    using (var ctx = new MyEntities())
    {
        var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
        findApi.Expired = DateTime.Now.AddHours(1);
        findApi.FailedCalls += 1;
        ctx.SaveChanges(); // persist the change
    }
}
You are in a parallel loop, therefore the most likely behaviour is that each of the 10 threads is going to fire, try to connect to your API with the expired key, and then fail, throwing the exception.
There are a couple of reasonable solutions to this:
Check the key before you use it
Can you take the first run through the loop out of sequence? For example:
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();

bool expired = true;
try
{
    // Perform API call for the first id
    expired = false;
}
catch (Exception ex)
{
    // log to database once
}

// Or grab another, newer key?
if (!expired)
{
    Parallel.ForEach(ids.Skip(1), id =>
    {
        // Perform API calls
    });
}
This would work reasonably well if the key is likely to have expired before you use it, but will stay active while you use it.
Hold on to the failures
If the key is possibly valid when you start but could expire while you are using it you might want to try capturing that failure and then logging at the end.
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();

// Assume the key hasn't expired - don't set to false within the loops
bool expired = false;

Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception e)
    {
        if (e.Message == "Expired")
        {
            // Doesn't matter if many threads set this to true.
            expired = true;
        }
    }
});

if (expired)
{
    // Log to database once.
}
I'm puzzled as to why this code is not working. It should save changes to the database after the loops; when I place the SaveChanges call inside the loop it saves the records, but outside the loop it doesn't save anything. It's only about 300 to 1000 records.
static bool lisReady = false;
static bool sacclReady = false;

static void Main(string[] args)
{
    Logger("Starting services");
    ConnectDBLis().Wait();
    ConnectDBSaccl().Wait();
    Thread.Sleep(1000);
    if (lisReady & sacclReady)
    {
        // start
        Logger("Services ready");
        StartExport().Wait();
    }
}
static async Task<bool> StartExport()
{
    lis lisdb = new lis();
    nrlsaccl saccldb = new nrlsaccl();
    var getTestOrders = await lisdb.test_orders.ToListAsync();
    Logger("Services starting");

    foreach (var tO in getTestOrders.Where(x => x.entry_datetime.Value.Year == 2016))
    {
        foreach (var tr in tO.test_results)
        {
            foreach (var tL in tr.test_result_logs)
            {
                results_availability postResults = new results_availability
                {
                    first_name = tO.patient_orders.patient.first_name,
                    middle_name = tO.patient_orders.patient.middle_name,
                    last_name = tO.patient_orders.patient.last_name,
                    birthdate = tO.patient_orders.patient.birthdate,
                };

                if (postResults.id == 0)
                {
                    saccldb.results_availability.Add(postResults);
                }
                else
                {
                    saccldb.Entry(postResults).State = EntityState.Modified;
                }
            }
        }
    }

    await saccldb.SaveChangesAsync();
    return true;
}
Edit:
So I limited the records to 100 and SaveChanges works; 3000 records at once does not work. Any solutions?
This code doesn't completely resolve your issue; these are some considerations for your problem.
Note: this works for me when adding 1200 records and making 300 modifications.
static async Task<bool> StartExport()
{
    using (var db = new Entities())
    {
        var appraisals = await db.Appraisals.ToListAsync();
        db.Database.CommandTimeout = 300;

        // Disabling auto detect changes will bring some performance tweaks
        db.Configuration.AutoDetectChangesEnabled = false;
        foreach (var appraisal in appraisals.Where(g => g.Id > 1))
        {
            if (appraisal.Id == 10)
            {
                appraisal.AppraisalName = "New name";
                db.Entry(appraisal).State = EntityState.Added;
            }
            else
            {
                appraisal.AppraisalName = "Modified name";
                db.Entry(appraisal).State = EntityState.Modified;
            }
        }
        db.Configuration.AutoDetectChangesEnabled = true;

        if (await db.SaveChangesAsync() > 1)
            return true;
        else
            return false;
    }
}
You could use db.Database.CommandTimeout = 300; to increase the timeout of your connection.
Entity Framework 6 provides AddRange(). This will insert the items in one shot; it disables AutoDetectChangesEnabled internally and inserts the entities.
In your case you don't want to mark the entities as Modified; EF already tracks them well. Entity Framework - Why explicitly set entity state to modified?
The purpose of change tracking is to detect that you have changed a value on an attached entity and put it into the Modified state. Setting the state manually is important in the case of detached entities (entities loaded without change tracking, or created outside of the current context).
Here we have all entities attached to the context itself.
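To illustrate the detached-entity case where setting the state manually actually is required, here is a minimal sketch, reusing the hypothetical Appraisal entity and Entities context from the snippet above:

```csharp
// Sketch: updating a detached entity. This object was never loaded by
// this context, so EF is not tracking it and would otherwise ignore it.
using (var db = new Entities())
{
    var detached = new Appraisal { Id = 10, AppraisalName = "Renamed" };

    db.Appraisals.Attach(detached);                  // attach without marking changes
    db.Entry(detached).State = EntityState.Modified; // tell EF to issue an UPDATE
    db.SaveChanges();
}
```

For entities loaded through the same context, as in the question, the change tracker handles this automatically and none of the above is needed.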
Use
saccldb.SaveChanges()
Simply because the async nature of await saccldb.SaveChangesAsync() causes your thread to continue and exit the function before the save to the db completes. In your case it returns true.
I would suggest not using any async operations on a console application unless it has a user interface that you would want to keep going.
I'm using Entity Framework 5 with a MySQL database and just want to toggle a row attribute "user_loginstatus" between 0 and 1. The first time I log in via the client it updates just fine, but on subsequent attempts it does nothing, with no exception.
I log in like this:
public async void LoginExecute()
{
    // Checking Connection before etc...
    if (await _dataService.IsLoginDataValidTask(UserObj.Username, md5))
    {
        Trace.WriteLine("LoginCommand Execute: Eingeloggt");
        UserObj = await _dataService.GetUserDataTask(UserObj.Username);
        await _dataService.SetUserStatusTask(UserObj.Id, 1);
        await _dataService.WriteLog(UserObj.Id, "login", "Programm", GetLocalAdress());
        Messenger.Default.Send(UserObj);
        Messenger.Default.Send(new NotificationMessage("GoToMenuPage"));
    }
    else
    {
        // Error Stuff...
    }
}
SetUserStatus Method in DataService Class
public Task SetUserStatusTask(int id, int status)
{
    return Task.Factory.StartNew(() =>
    {
        try
        {
            var user = _entities.users.Find(id);
            user.user_loginstatus = status;
            _entities.SaveChanges();
        }
        catch (Exception ex)
        {
            Trace.WriteLine("DataService SetUserStatusTask: " + ex.Message);
        }
    });
}
GetUserData Method in DataService Class
public Task<User> GetUserDataTask(string username)
{
    return Task.Factory.StartNew(() =>
    {
        try
        {
            var user = from us in _entities.users
                       where us.user_name.Equals(username)
                       select new User
                       {
                           Id = us.user_id,
                           Username = us.user_name,
                           FirstName = us.user_firstname,
                           LastName = us.user_lastname,
                           Gender = us.user_gender,
                           Email = us.user_mail,
                           Group = us.user_usergroup,
                           Avatar = us.user_avatar,
                           LoginStatus = 1
                       };
            return user.FirstOrDefault();
        }
        catch (Exception ex)
        {
            Trace.WriteLine("DataService GetUserDataTask: " + ex);
            return null;
        }
    });
}
So "users" is my table from the database and "User" / "UserObj" my custom Object.
With the Messenger (from MVVM Light) I just set via MainViewModel the Views, reset the unused ViewModels (ViewModel = new VieModel(...); or ViewModel = null;) and pass the current / logged in User Object.
With the same strategy I log out like this:
public ICommand LogoutCommand
{
    get
    {
        return new RelayCommand(async () =>
        {
            await _dataService.SetUserStatusTask(CurrentUser.Id, 0);
            if (CurrentUser.Id > 0 && IsLoggedIn)
                await _dataService.WriteLog(CurrentUser.Id, "logout", "Programm", GetLocalAdress());
            IsLoggedIn = false;
            CurrentUser = new User();
            Messenger.Default.Send(new NotificationMessage("GoToLoginPage"));
        });
    }
}
So I can log in with my running client as often as I want, but user_loginstatus is only set to 1 and back to 0 for the first login/logout cycle; when I log out and log back in with the same user, it won't change anymore. When I log in (still the same running client) with another user, it again sets user_loginstatus to 1 and back to 0 the first time, and then only again after I restart my client.
What could I do wrong?
This is just basically from my comment regarding the original question:
I had similar problems several times. Usually it is based on the fact that the entity you modified can't be validated properly, and your dbContext fails without a proper exception because it still holds on to the faulty entity. If this is the case, you could circumvent the problem by using scoped contexts and embedding your data access operations in a using statement.
Alternatively you could try to explicitly tell EF that the entity has changes e.g.:
_entities.Entry(user).State = EntityState.Modified;
Regarding your other question:
In theory you shouldn't have to tell EF explicitly that the entity's values have changed; change tracking should do that automatically. The only exception I can think of is when you try to modify an entity that is explicitly no longer tracked. When you call _entities.Find(id), it will look in the context for an object with the matching primary key value and load it. Since you already modified this object before, the context will simply return the old object you modified when you set the login status the first time.
This "old" object is probably not tracked anymore, and you have to tell EF explicitly that it has changed by changing its state from attached to modified.
In LoginExecute() you have UserObj, but in LogoutCommand() you have CurrentUser. Is that OK?
I have a simple table:
IPAddress (PK, string)
Requests (int)
It's a flood limiter. Every minute the table's data is deleted. On every page request, the Requests count is incremented for the given IPAddress.
It works great, and our website performance has increased significantly as we do suffer some accidental/intentional effective DDOSes due to the nature of our product and website.
The only problem is, when an IP does send thousands of requests a minute to our website for whatever reason, we get these errors popping up:
Violation of PRIMARY KEY constraint 'PK_v2SiteIPRequests'. Cannot insert duplicate key in object 'dbo.v2SiteIPRequests'. The duplicate key value is ([IP_ADDRESS]). The statement has been terminated.
The code that makes the insert is:
/// <summary>
/// Call every time a page view is requested
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
            db.v2SiteIPRequests.InsertOnSubmit(n);
            db.SubmitChanges();
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();

            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
What's the best way to handle this exception, and why is it being thrown? Is a try catch best here?
If you get two requests simultaneously, the following happens:
Request one: is it in the database?
Request two: is it in the database?
Request one: No, not yet
Request two: No, not yet
Request one: INSERT
Request two: INSERT
Request one: WORKS
Request two: FAILS (already inserted a split second before)
There is nothing you can do here but catch the exception and handle it gracefully. Maybe by using a simple "try again" logic.
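A minimal sketch of such "try again" logic, wrapped around the existing DoRequest method, might look like the following. It assumes the PK violation surfaces as a SqlException (error number 2627 is SQL Server's duplicate-key violation) and uses a C# 6 exception filter; the wrapper name is hypothetical:

```csharp
// Sketch: retry DoRequest a bounded number of times when the insert
// loses the race against a concurrent request from the same IP.
private static void DoRequestWithRetry(string ipAddress)
{
    const int maxAttempts = 3;
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            DoRequest(ipAddress);
            return;
        }
        catch (SqlException ex) when (ex.Number == 2627)
        {
            // Another request inserted the row first; on the next attempt
            // the lookup finds the row and takes the increment branch.
            if (attempt == maxAttempts) throw;
        }
    }
}
```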
You've got a few race conditions there, especially when there are concurrent connections.
You may need to change approach, and always store each request, and then query if there are more in the timeframe than permitted and take whatever action you need
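That alternative could be sketched like this, with a hypothetical per-request log table (RequestLog with an identity primary key, plus IPAddress and RequestedAt columns, none of which are in the original schema). Because each request gets its own row, inserts can never collide on the primary key:

```csharp
// Sketch of the store-every-request approach: insert one row per hit,
// then count the hits inside the rate-limit window.
private static void DoRequestLogged(string ipAddress)
{
    using (var db = new MainContext())
    {
        db.RequestLogs.InsertOnSubmit(new RequestLog
        {
            IPAddress = ipAddress,
            RequestedAt = DateTime.UtcNow
        });
        db.SubmitChanges();

        var windowStart = DateTime.UtcNow.AddMinutes(-1);
        var recent = db.RequestLogs.Count(r =>
            r.IPAddress == ipAddress && r.RequestedAt >= windowStart);

        if (recent >= Settings.MAX_REQUESTS_IN_INTERVAL)
        {
            BanIP(ipAddress);
        }
    }
}
```

Old rows can then be purged by timestamp instead of truncating the table every minute.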
Here's the solution based on suggestions. It's ugly but works as far as I can tell.
/// <summary>
/// Call every time a page view is requested
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            // Catch insert race condition for PK violation. Especially susceptible when being hammered by requests from 1 IP
            try
            {
                var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
                db.v2SiteIPRequests.InsertOnSubmit(n);
                db.SubmitChanges();
            }
            catch (Exception e)
            {
                try
                {
                    // Can't reuse original context as it caches
                    using (var db2 = new MainContext())
                    {
                        var rec2 = db2.v2SiteIPRequests.Single(c => c.IPAddress == ipAddress);
                        rec2.Requests++;
                        db2.SubmitChanges();
                        if (rec2.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
                        {
                            BanIP(ipAddress);
                        }
                    }
                }
                catch (Exception ee)
                {
                    // Shouldn't reach here
                    Error.Functions.NewError(ee);
                }
            }
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();

            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
We've been building an application which has 2 parts.
Server Side: A WCF service, Client Side: A WPF app following MVVM patterns
We also use Self-Tracking Entities to get the database work done, but we're struggling.
Here's an example code:
public bool UpdateUser(User userToUpdate)
{
    using (DBContext _context = new DBContext())
    {
        try
        {
            userToUpdate.MarkAsModified();
            _context.Users.ApplyChanges(userToUpdate);
            _context.SaveChanges();
            return true;
        }
        catch (Exception ex)
        {
            // LOGS etc.
            return false;
        }
    }
}
So when I call this function from the client, it gives us this exception:
AcceptChanges cannot continue because the object's key values conflict
with another object in the ObjectStateManager. Make sure that the key
values are unique before calling AcceptChanges.
"User" entity has one "many-to-1..0 (UserType)" and five "0..1-to-many" associations.
And that "UserType" has a "many-to-many (Modules)" association.
When we send a User instance to this function, UserType is included with its Modules.
If you can guide me through solving this problem, that'd be great.
Thank you.
This is how I resolved the issue, as a reference for others having the same problem:
public bool UpdateUser(User userToUpdate)
{
    using (DBContext _context = new DBContext())
    {
        try
        {
            User outUser = _context.Users.Single(x => x.UserId == userToUpdate.UserId);
            outUser = userToUpdate;
            _context.ApplyCurrentValues("Users", outUser);
            _context.SaveChanges();
            return true;
        }
        catch (Exception ex)
        {
            // LOGS etc.
            return false;
        }
    }
}