How to efficiently edit data in a database? - C#

The method is supposed to receive data from a server, check whether new tokens have been added, and, if there are any, add them to the database. If a token already exists, update its status instead of adding a new row to the table. This is the code I've written so far:
IEnumerable<Token> serverTokens = JsonConvert.DeserializeObject<IEnumerable<Token>>(server.GetTokens().Content);
IEnumerable<Token> dbTokens = _tokenService.GetAllTokens();

foreach (var token in serverTokens)
{
    var dbToken = dbTokens.Where(x => x.Symbol == token.Symbol).FirstOrDefault();
    if (dbToken != null)
    {
        Token editedToken = dbToken;
        editedToken.UpdatedOn = DateTime.Now;
        editedToken.Active = token.Active;
        _tokenService.AddToken(editedToken);
    }
    else
    {
        token.UpdatedOn = DateTime.Now;
        _tokenService.AddToken(token);
    }
}
dbContext.SaveChanges();
The AddToken method is just a simple AddOrUpdate operation.
public void AddToken(Token token)
{
    _dbContext.Tokens.AddOrUpdate(token);
    //_dbContext.SaveChanges();
}
Now, this code does what it's supposed to; however, it's extremely slow. How would I go about optimizing it?

dbTokens.Where(x => x.Symbol == token.Symbol) is an IEnumerable, so the underlying query is re-executed each time you call it in the loop.
Materialize it into a list once, before the loop:
List<Token> dbTokens = _tokenService.GetAllTokens().ToList();
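Beyond that, two more things help, sketched below under the assumption that Symbol is unique and that _tokenService wraps the same dbContext: index the database tokens in a Dictionary so each lookup is O(1) instead of a linear scan, and mutate the already-tracked entity instead of calling AddOrUpdate, which does its own find-or-add work on every call. SaveChanges still runs once at the end, so all changes go out in a single flush:

// Build an O(1) lookup keyed by Symbol (assumes Symbol is unique).
Dictionary<string, Token> dbTokens = _tokenService.GetAllTokens().ToDictionary(x => x.Symbol);

foreach (var token in serverTokens)
{
    if (dbTokens.TryGetValue(token.Symbol, out var dbToken))
    {
        // The entity is already tracked by the context; mutating it is enough.
        dbToken.UpdatedOn = DateTime.Now;
        dbToken.Active = token.Active;
    }
    else
    {
        token.UpdatedOn = DateTime.Now;
        _tokenService.AddToken(token);
    }
}
dbContext.SaveChanges(); // one flush for all adds and edits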

ReplaceOneAsync() immediately after InsertOneAsync() not always working, even when journaled

On a single-instance MongoDB server, even with the client's write concern set to journaled, one in every couple of thousand documents isn't replaceable immediately after inserting.
I was under the impression that once journaled, documents are immediately available for querying.
The code below inserts a document, then updates the DateModified property of the document and tries to update the document based on the document's Id and the old value of that property.
public class MyDocument
{
    public BsonObjectId Id { get; set; }
    public DateTime DateModified { get; set; }
}

static void Main(string[] args)
{
    var r = Task.Run(MainAsync);
    Console.WriteLine("Inserting documents... Press any key to exit.");
    Console.ReadKey(intercept: true);
}
private static async Task MainAsync()
{
    var client = new MongoClient("mongodb://localhost:27017");
    var database = client.GetDatabase("updateInsertedDocuments");
    var concern = new WriteConcern(journal: true);
    var collection = database.GetCollection<MyDocument>("docs").WithWriteConcern(concern);
    int errorCount = 0;
    int totalCount = 0;
    do
    {
        totalCount++;

        // Create and insert the document
        var document = new MyDocument
        {
            DateModified = DateTime.Now,
        };
        await collection.InsertOneAsync(document);

        // Save and update the modified date
        var oldDateModified = document.DateModified;
        document.DateModified = DateTime.Now;

        // Try to update the document by Id and the earlier DateModified
        var result = await collection.ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, document);

        if (result.ModifiedCount == 0)
        {
            Console.WriteLine($"Error {++errorCount}/{totalCount}: doc {document.Id} did not have DateModified {oldDateModified.ToString("yyyy-MM-dd HH:mm:ss.ffffff")}");
            await DoesItExist(collection, document, oldDateModified);
        }
    }
    while (true);
}
The code inserts at a rate of around 250 documents per second. One in around 1,000-15,000 calls to ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, ...) fails, returning a ModifiedCount of 0. The failure rate depends on whether we run a Debug or Release build and whether the debugger is attached: more speed means more errors.
The code shown represents something that I can't really easily change. Of course I'd rather perform a series of Update.Set() calls, but that's not really an option right now. The InsertOneAsync() followed by a ReplaceOneAsync() is abstracted by some kind of repository pattern that updates entities by reference. The non-async counterparts of the methods display the same behavior.
A simple Thread.Sleep(100) between inserting and replacing mitigates the problem.
When the replace fails and we wait a while before attempting to query the document again (code below), it is found every time.
private static async Task DoesItExist(IMongoCollection<MyDocument> collection, MyDocument document, DateTime oldDateModified)
{
    Thread.Sleep(500);
    var fromDatabaseCursor = await collection.FindAsync(d => d.Id == document.Id && d.DateModified == oldDateModified);
    var fromDatabaseDoc = await fromDatabaseCursor.FirstOrDefaultAsync();
    if (fromDatabaseDoc != null)
    {
        Console.WriteLine("But it was found!");
    }
    else
    {
        Console.WriteLine("And wasn't found!");
    }
}
Versions on which this occurs:
MongoDB Community Server 3.4.0, 3.4.1, 3.4.3, 3.4.4 and 3.4.10, all on WiredTiger storage engine
Server runs on Windows, other OSes as well
C# Mongo Driver 2.3.0 and 2.4.4
Is this an issue in MongoDB, or are we doing (or assuming) something wrong?
Or, the actual end goal, how can I ensure an insert is immediately retrievable by an update?
ReplaceOneAsync reports a ModifiedCount of 0 if the replacement document is identical to the stored one, because nothing actually changed.
It looks to me like, if your test executes fast enough, successive calls to DateTime.Now can return the same value, so it is possible that you are passing the exact same document to InsertOneAsync and ReplaceOneAsync.
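A quick way to confirm this from the result object is to compare MatchedCount with ModifiedCount: the filter can match the stored document even when the replacement is byte-identical, in which case nothing gets modified. DateTime.Now also has coarse resolution (roughly 10-15 ms on Windows), so at ~250 inserts per second two calls returning the same value is plausible. A minimal sketch under those assumptions:

var result = await collection.ReplaceOneAsync(
    d => d.Id == document.Id && d.DateModified == oldDateModified, document);

// MatchedCount tells you the filter found the document;
// ModifiedCount stays 0 when the replacement was identical to what is stored.
if (result.MatchedCount == 1 && result.ModifiedCount == 0)
{
    Console.WriteLine("Matched but not modified: DateTime.Now likely returned the same value twice.");
}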

Dynamics CRM SDK: Execute Multiple Requests for Bulk Update of around 5000 Records

I have written a function to update the Default Price List for all the Active Products in CRM 2013 Online.
//The method takes an IOrganizationService and the total number of records to be updated as input
private void UpdateMultipleProducts(IOrganizationService service, int batchSize, EntityCollection UpdateProductsCollection, Guid PriceListGuid)
{
    //To execute the request we have to reference Microsoft.Xrm.Sdk from the latest SDK
    ExecuteMultipleRequest req = new ExecuteMultipleRequest();
    req.Requests = new OrganizationRequestCollection();
    req.Settings = new ExecuteMultipleSettings();
    req.Settings.ContinueOnError = true;
    req.Settings.ReturnResponses = true;
    try
    {
        foreach (var entity in UpdateProductsCollection.Entities)
        {
            UpdateRequest updateRequest = new UpdateRequest { Target = entity };
            entity.Attributes["pricelevelid"] = new EntityReference("pricelevel", PriceListGuid);
            req.Requests.Add(updateRequest);
        }
        var res = service.Execute(req) as ExecuteMultipleResponse; //Execute the collection of requests
    }
    //If the batch size exceeds 1000, a fault is thrown. In the catch block, divide the records into batch-sized chunks and retry.
    catch (FaultException<OrganizationServiceFault> fault)
    {
        if (fault.Detail.ErrorDetails.Contains("MaxBatchSize"))
        {
            var allowedBatchSize = Convert.ToInt32(fault.Detail.ErrorDetails["MaxBatchSize"]);
            int remainingCreates = batchSize;
            while (remainingCreates > 0)
            {
                var recordsToCreate = Math.Min(remainingCreates, allowedBatchSize);
                UpdateMultipleProducts(service, recordsToCreate, UpdateProductsCollection, PriceListGuid);
                remainingCreates -= recordsToCreate;
            }
        }
    }
}
Code description: There are around 5000 active product records in the system, and I am updating the Default Price List for all of them using the code above.
But I am missing something here: it updated only 438 records. It loops through the while statement correctly, but it does not update all of the records.
What should the batchSize be when we run this function for the first time?
Can anyone help me here?
Thank you,
Mittal.
You pass recordsToCreate into the recursive call as batchSize, but batchSize is never used to limit how many requests go into the batch; the foreach always enqueues the entire UpdateProductsCollection, so each recursive call re-submits everything and you just re-enter that while loop every time.
Also, I'm not sure how you are doing the rest of your error handling, but you need to update your catch block so that it doesn't swallow FaultExceptions that don't contain a MaxBatchSize value. Right now, a FaultException about anything other than batch size is silently ignored:
catch (FaultException<OrganizationServiceFault> fault)
{
    if (fault.Detail.ErrorDetails.Contains("MaxBatchSize"))
    {
        var allowedBatchSize = Convert.ToInt32(fault.Detail.ErrorDetails["MaxBatchSize"]);
        int remainingCreates = batchSize;
        while (remainingCreates > 0)
        {
            var recordsToCreate = Math.Min(remainingCreates, allowedBatchSize);
            UpdateMultipleProducts(service, recordsToCreate, UpdateProductsCollection, PriceListGuid);
            remainingCreates -= recordsToCreate;
        }
    }
    else throw; // rethrow faults that aren't about batch size
}
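To make the recursion actually shrink the work, each call has to operate on its own slice of the collection. The following is a minimal sketch of that idea (a rewritten helper, not the original method), assuming the entities can be partitioned with LINQ's Skip/Take (System.Linq):

private void UpdateProductsInBatches(IOrganizationService service, int allowedBatchSize, EntityCollection updateProductsCollection, Guid priceListGuid)
{
    var entities = updateProductsCollection.Entities;
    for (int offset = 0; offset < entities.Count; offset += allowedBatchSize)
    {
        var req = new ExecuteMultipleRequest
        {
            Requests = new OrganizationRequestCollection(),
            Settings = new ExecuteMultipleSettings { ContinueOnError = true, ReturnResponses = true }
        };
        // Only this batch's entities go into the request.
        foreach (var entity in entities.Skip(offset).Take(allowedBatchSize))
        {
            entity.Attributes["pricelevelid"] = new EntityReference("pricelevel", priceListGuid);
            req.Requests.Add(new UpdateRequest { Target = entity });
        }
        var res = (ExecuteMultipleResponse)service.Execute(req);
    }
}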
Instead of reactive handling, I prefer proactive handling of MaxBatchSize; this works when you already know what the maximum batch size is.
The following is sample code: while adding OrganizationRequests to the collection I keep count of the batch, and when it reaches the limit I call Execute and reset the collection to take a fresh batch.
foreach (DataRow dr in statusTable.Rows)
{
    Entity updEntity = new Entity("ABZ_NBA");
    updEntity["ABZ_NBAid"] = query.ToList().Where(a => a.NotificationNumber == dr["QNMUM"].ToString()).FirstOrDefault().TroubleTicketId;
    //updEntity["ABZ_makerfccall"] = false;
    updEntity["ABZ_rfccall"] = null;
    updEntity[cNBAttribute.Key] = dr["test"];
    req.Requests.Add(new UpdateRequest() { Target = updEntity });

    if (req.Requests.Count == 1000)
    {
        responseWithResults = (ExecuteMultipleResponse)_orgSvc.Execute(req);
        req.Requests = new OrganizationRequestCollection();
    }
}
if (req.Requests.Count > 0)
{
    responseWithResults = (ExecuteMultipleResponse)_orgSvc.Execute(req);
}
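One caveat with both snippets: with ContinueOnError set, a batch can partially succeed, so it is worth scanning each ExecuteMultipleResponse for per-record faults instead of assuming every update landed. A short sketch against the responseWithResults variable from the snippet above:

// Each response item carries the index of the request it answers plus either a response or a fault.
foreach (var item in responseWithResults.Responses)
{
    if (item.Fault != null)
    {
        Console.WriteLine($"Request {item.RequestIndex} failed: {item.Fault.Message}");
    }
}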

Parse.com - if key exists, update

Currently, I'm sending some data to Parse.com. All works well; however, I would like to add a row if it's a new user, or update the existing row if it's a returning user.
So what I need to do is check whether the current Facebook ID (the key I'm using) shows up anywhere in the fbid column, and update that row if it does.
How can I check if the key exists in the column?
Also, I'm using C#/Unity.
static void sendToParse()
{
    ParseObject currentUser = new ParseObject("Game");
    currentUser["name"] = fbname;
    currentUser["email"] = fbemail;
    currentUser["fbid"] = FB.UserId;
    Task saveTask = currentUser.SaveAsync();
    Debug.LogError("Sent to Parse");
}
Okay, I figured it out.
First, I check whether any Facebook ID in the table matches the current ID, then get the number of matches.
public static void getObjectID()
{
    var query = ParseObject.GetQuery("IdealStunts")
        .WhereEqualTo("fbid", FB.UserId);
    query.FirstAsync().ContinueWith(t =>
    {
        ParseObject obj = t.Result;
        objectID = obj.ObjectId;
        Debug.LogError(objectID);
    });
}
If there is a key matching the current Facebook ID, don't do anything; if there isn't, add a new user.
public static void sendToParse()
{
    if (count != 0)
    {
        Debug.LogError("Already exists");
    }
    else
    {
        ParseObject currentUser = new ParseObject("IdealStunts");
        currentUser["name"] = fbname;
        currentUser["email"] = fbemail;
        currentUser["fbid"] = FB.UserId;
        Task saveTask = currentUser.SaveAsync();
        Debug.LogError("New User");
    }
}
You will have to wrap sendToParse in a StartCoroutine, so getObjectID has time to look through the table; a sketch of that is below.
It may be a crappy implementation, but it works.
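A hypothetical version of that coroutine, started with StartCoroutine(SyncToParse()). The CountAsync call and the count field are assumptions pieced together from the snippets above, not code from the original post:

// Wait for the lookup to finish before deciding whether to insert.
IEnumerator SyncToParse()
{
    var countTask = ParseObject.GetQuery("IdealStunts")
        .WhereEqualTo("fbid", FB.UserId)
        .CountAsync();
    while (!countTask.IsCompleted)
        yield return null; // let Unity keep running while the query completes
    count = countTask.Result;
    sendToParse();
}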
What you need to do is create a query for the fbid. If the query returns an object, you update it; if not, you create a new one.
I'm not proficient with C#, but here is an example in Objective-C:
PFQuery *query = [PFQuery queryWithClassName:@"YourClass"]; // Name of your class in Parse
query.cachePolicy = kPFCachePolicyNetworkOnly;
[query whereKey:@"fbid" equalTo:theFBid]; // Variable containing the fb id
NSArray *users = [query findObjects];
self.currentFacebookUser = [users lastObject]; // Array should contain only 1 object
if (self.currentFacebookUser) { // Might have to test for NULL, but probably not
    // Update the object and save it
} else {
    // Create a new object
}
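For reference, the same find-or-create pattern in C# with the Parse .NET/Unity SDK might look like the sketch below; the "Game" class name is taken from the question, and the exact calls should be checked against your SDK version:

var query = ParseObject.GetQuery("Game").WhereEqualTo("fbid", FB.UserId);
query.FirstOrDefaultAsync().ContinueWith(t =>
{
    // FirstOrDefaultAsync yields null when nothing matches, so fall back to a new object.
    ParseObject user = t.Result ?? new ParseObject("Game");
    user["name"] = fbname;
    user["email"] = fbemail;
    user["fbid"] = FB.UserId;
    user.SaveAsync(); // fire-and-forget save, mirroring the question's style
});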

Out of Memory at line XXXX

Can anyone help me resolve the out-of-memory error on my ASP page? I'm using LINQ to SQL. After adding several rows of data (more than 10) to the grid, an out-of-memory error occurs. Attached below is my add function.
public ServiceDetail checkservicedetailid()
{
    string ServiceName = ViewState["Tab"].ToString();
    ServiceDetail checkservicedetailid = ServiceDetails_worker.get(a => a.ServiceName == ServiceName && a.MarginAnalysisID == checkmarginanalysisid().MarginAnalysisID).SingleOrDefault();
    return checkservicedetailid;
}

public IEnumerable<ServiceDetail> get(Expression<Func<ServiceDetail, Boolean>> express)
{
    return ServiceDetailsDB.ServiceDetails.Where(express);
}
protected void btnSaveEmptyOC_Click(object sender, EventArgs e)
{
    try
    {
        if (checkservicedetailid() != null)
        {
            CashExpense tblCashExpenses = new CashExpense();
            Guid CashExpensesID = Guid.NewGuid();
            tblCashExpenses.CashExpensesID = CashExpensesID;
            tblCashExpenses.ServiceDetailsID = checkservicedetailid().ServiceDetailsID;
            tblCashExpenses.Description = txtDescriptionEmptyOC.Text;
            tblCashExpenses.Quantity = Decimal.Parse(txtQTYEmptyOC.Text);
            tblCashExpenses.UnitCost = Decimal.Parse(txtUnitCostEmptyOC.Text);
            tblCashExpenses.CreatedBy = User.Identity.Name;
            tblCashExpenses.DateCreated = DateTime.Now;
            tblCashExpenses.CashExpensesTypeID = "OTHER";
            CashExpenses_worker.insert(tblCashExpenses);
            CashExpenses_worker.submit();

            //Clear items after saving
            txtDescriptionEmptyOC.Text = "";
            txtQTYEmptyOC.Text = "";
            txtUnitCostEmptyOC.Text = "";
            ValidationMessage.ShowValidationMessage(MessageCenter.CashExpenseMaintenace.InsertOC2, "SaveEmptyOC", this.Page);
            MyAuditProvider.Insert(this.GetType().ToString(), ViewState["MarginAnalysisID"].ToString(), MessageCenter.Mode.ADD, MessageCenter.CashExpenseMaintenace.InsertOC2, Page.Request, User);
            divOtherCost.Visible = false;
            grd_othercost.Visible = true;
            btnaddothercost.Visible = true;
        }
        else
        {
            //Displays a message on the validation summary (Service Id does not exist)
            ValidationMessage.ShowValidationMessage(MessageCenter.CashExpenseMaintenace.SaveServiceDetailOC, "SaveEmptyOC", this.Page);
        }
    }
    catch
    {
        //Displays a message on the validation summary (Error on saving)
        ValidationMessage.ShowValidationMessage(MessageCenter.CashExpenseMaintenace.InsertOCError, "SaveEmptyOC", this.Page);
    }
    finally
    {
        //Rebinds the grid
        populategrd_othercost();
    }
}
I'm guessing from your code here:
ServiceDetail checkservicedetailid = ServiceDetails_worker.get(
    a => a.ServiceName == ServiceName &&
         a.MarginAnalysisID == checkmarginanalysisid().MarginAnalysisID
).SingleOrDefault();
that .get() is taking a Func<SomeType, bool>, and you are doing something like:
var row = dbCtx.SomeTable.Where(predicate);
(please correct me here if I'm incorrect)
This, however, is using LINQ-to-Objects, meaning: it is loading every row from the table to the client and testing locally. That'll hurt memory, especially if a different db-context is created for each row. Additionally, the checkmarginanalysisid() call is being executed per row, when presumably it doesn't change between rows.
You should be testing this with an Expression<Func<SomeType, bool>> which would be translated to TSQL and executed at the server. You may also need to remove untranslatable methods, i.e.
var marginAnalysisId = checkmarginanalysisid().MarginAnalysisID;
ServiceDetail checkservicedetailid = ServiceDetails_worker.get(
    a => a.ServiceName == ServiceName &&
         a.MarginAnalysisID == marginAnalysisId
).SingleOrDefault();
where that is get(Expression<Func<SomeType, bool>>).
I tried all of the solutions given to me by my peers, as well as the solutions provided here, from GC.Collect to disposing the LINQ DataContext after use, etc.; however, the error kept occurring. I then tried removing the UpdatePanel. I'd read a site showing how badly UpdatePanel handles data, especially when a function runs repeatedly. And poof! The memory problem is gone!

SPListItem Save conflict while Update

(1) var list1 = web.GetList("/lists/list1");
(2) var item1 = list1.GetItemById(10001);
(3) ...
Set a breakpoint here, open the item with ID = 10001 for editing in the UI, change the 'Title' field, and save it. Then run the code that follows:
(4) item1[SPBuiltInFieldId.Title] = "some text";
(5) item1.Update();
Line (5) throws a save conflict exception.
How can I lock the item for editing at line (3)? Or is there another approach to avoid the conflict?
You have to check the SPListItem manually:

try
{
    var item = list.GetItemById(3);
    item["MyField"] = "FooBar";
    item.Update();
}
catch (SPException conflictEx)
{
    // handle conflict by re-evaluating the SPListItem
    var item = list.GetItemById(3);
    // ..
}
I don't know any other mechanism atm.
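A common way to finish that catch block is to re-read the item and re-apply the change, optionally inside a bounded retry loop. A minimal sketch, assuming last-write-wins is acceptable (the retry count is an arbitrary choice, not from the original answer):

const int maxRetries = 3;
for (int attempt = 0; attempt < maxRetries; attempt++)
{
    try
    {
        var item = list.GetItemById(3); // re-read the latest version of the item
        item["MyField"] = "FooBar";     // re-apply the change on top of it
        item.Update();
        break;                          // success, stop retrying
    }
    catch (SPException)
    {
        if (attempt == maxRetries - 1)
            throw;                      // give up after the last attempt
    }
}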
Per the following URL, create a new SPWeb object for each list modification, otherwise we'll get Save Conflict exceptions:
http://platinumdogs.me/2010/01/21/sharepoint-calling-splist-update-causes-save-conflict-spexception/

using (var thisWeb = featSite.OpenWeb(featWeb.ID))
{
    try
    {
        var listUpdate = false;
        var theList = thisWeb.Lists[att.Value];
        // change list configuration
        // .....
        // commit List modifications
        if (listUpdate)
            theList.Update();
    }
    catch
    {
        // log the event and rethrow
        throw;
    }
}
Another approach is using LINQ to SharePoint; LINQ to SharePoint offers a conflict resolution mechanism.
SharePoint's LINQ provider checks for concurrent changes when you try to save the changes you've made using the SubmitChanges method.
When a conflict has been found, a ChangeConflictException is thrown:
foreach (var notebook in spSite.Notebooks)
{
    notebook.IsTopNotebook = true;
}
try
{
    spSite.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException ex)
{
    foreach (ObjectChangeConflict occ in spSite.ChangeConflicts)
    {
        if (((Notebook)occ.Object).Memory > 16)
        {
            foreach (MemberChangeConflict field in occ.MemberConflicts)
            {
                if (field.Member.Name == "IsTopNotebook")
                {
                    field.Resolve(RefreshMode.KeepCurrentValues);
                }
                else
                {
                    field.Resolve(RefreshMode.OverwriteCurrentValues);
                }
            }
        }
        else
        {
            occ.Resolve(RefreshMode.KeepCurrentValues);
        }
    }
    spSite.SubmitChanges();
}
