How to check that InsertOne(document) inserted the document into the database? - C#

Since collection.InsertOne(document) returns void, how do I know for sure that the document was written to the database? I have a function that needs to run immediately after the document is written to the database.
How can I check that without running a new query?

"Since collection.InsertOne(document) returns void" - is wrong, see db.collection.insertOne():
Returns: A document containing:
A boolean acknowledged as true if the operation ran with write concern or false if write concern was disabled.
A field insertedId with the _id value of the inserted document.
So, run
ret = db.collection.insertOne({your document})
print(ret.acknowledged);
or
print(ret.insertedId);
to get the _id of the inserted document directly.
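In the C# driver the same information is surfaced differently: InsertOne throws on a failed acknowledged write, and the driver generates the _id and assigns it to your local document before sending it, so you can read it back without another query. A minimal sketch (connection string and names are placeholders):
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("test").GetCollection<BsonDocument>("docs");

var doc = new BsonDocument { { "name", "example" } };
collection.InsertOne(doc);      // throws (e.g. MongoWriteException) if an acknowledged write fails
Console.WriteLine(doc["_id"]);  // the _id was generated by the driver and is available locally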

The write concern can be configured on either the connection string or the MongoClientSettings, which are both passed to the MongoClient object on creation.
var client = new MongoClient(new MongoClientSettings
{
    WriteConcern = WriteConcern.W1
});
More information on write concern can be found in the MongoDB documentation: https://docs.mongodb.com/manual/reference/write-concern/
If the document is not saved, the C# driver will throw an exception (MongoWriteException).
Also, if you have a write concern at Acknowledged or above, you'll also get back the Id of the document you've just saved.
var client = new MongoClient(new MongoClientSettings
{
    WriteConcern = WriteConcern.W1
});
var db = client.GetDatabase("test");
var orders = db.GetCollection<Order>("orders");

var newOrder = new Order { Name = $"Order-{Guid.NewGuid()}" };
await orders.InsertOneAsync(newOrder);

Console.WriteLine($"Order Id: {newOrder.Id}");
// Output
// Order Id: 5f058d599f1f033f3507c368

public class Order
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}
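If you want to handle a failed write explicitly instead of letting the exception bubble up, a minimal sketch reusing the orders collection and newOrder from above (the duplicate-key case is just one illustrative failure):
try
{
    await orders.InsertOneAsync(newOrder);
    // Reaching this line with an acknowledged write concern means the server accepted the write.
    Console.WriteLine($"Inserted order {newOrder.Id}");
}
catch (MongoWriteException ex)
{
    // e.g. a duplicate key or another server-side write error
    Console.WriteLine($"Insert failed: {ex.WriteError?.Message}");
}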

Related

Idempotency for BigQuery load jobs using Google.Cloud.BigQuery.V2

You can create a CSV load job to load data from a CSV file in Google Cloud Storage by using the BigQueryClient in Google.Cloud.BigQuery.V2, which has a CreateLoadJob method.
How can you guarantee idempotency with this API, so that if, say, the network dropped before you got a response and you kicked off a retry, you would not end up with the same data being loaded into BigQuery multiple times?
Example API usage
private void LoadCsv(string sourceUri, string tableId, string timePartitionField)
{
    var tableReference = new TableReference()
    {
        DatasetId = _dataSetId,
        ProjectId = _projectId,
        TableId = tableId
    };

    var options = new CreateLoadJobOptions
    {
        WriteDisposition = WriteDisposition.WriteAppend,
        CreateDisposition = CreateDisposition.CreateNever,
        SkipLeadingRows = 1,
        SourceFormat = FileFormat.Csv,
        TimePartitioning = new TimePartitioning
        {
            Type = _partitionByDayType,
            Field = timePartitionField
        }
    };

    BigQueryJob loadJob = _bigQueryClient.CreateLoadJob(sourceUri: sourceUri,
                                                        destination: tableReference,
                                                        schema: null,
                                                        options: options);
    loadJob.PollUntilCompletedAsync().Wait();

    if (loadJob.Status.Errors == null || !loadJob.Status.Errors.Any())
    {
        //Log success
        return;
    }
    //Log error
}
You can achieve idempotency by generating your own job ID based on, for example, the file location you loaded and the target table:
// using System.Security.Cryptography; using System.Text;
var jobId = "my_load_job_" + BitConverter.ToString(
        MD5.Create().ComputeHash(Encoding.UTF8.GetBytes(sourceUri + _projectId + _dataSetId + tableId)))
    .Replace("-", string.Empty)
    .ToLowerInvariant();

var options = new CreateLoadJobOptions
{
    WriteDisposition = WriteDisposition.WriteAppend,
    CreateDisposition = CreateDisposition.CreateNever,
    SkipLeadingRows = 1,
    JobId = jobId, // add this
    SourceFormat = FileFormat.Csv,
    TimePartitioning = new TimePartitioning
    {
        Type = _partitionByDayType,
        Field = timePartitionField
    }
};
In this case, if you try to resubmit a job with the same job ID, you get an error.
You can also easily regenerate this job ID later to check on the job in case polling failed.
There are two places you could end up losing the response:
When creating the job to start with
When polling for completion
The first one is relatively tricky to recover from without a job ID; you could list all the jobs in the project and try to find one that looks like the one you'd otherwise create.
However, the C# client library generates a job ID so that it can retry, or you can specify your own job ID via CreateLoadJobOptions.
The second failure mode is much simpler: keep the returned BigQueryJob so you can retry the polling if that fails. (You could store the job name so that you can recover even if your process dies while waiting for it to complete, for example.)
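If the process dies while polling, the job ID alone (whether driver-generated or one you set via CreateLoadJobOptions) is enough to pick the job back up later. A rough sketch, assuming the same _bigQueryClient field as the question and that the job ID was persisted somewhere; if GetJob fails because no such job exists, it should be safe to resubmit the load with the same ID:
// Look the job up again by the stored ID, then resume waiting for it to finish.
BigQueryJob job = _bigQueryClient.GetJob(jobId);
job = job.PollUntilCompleted();

if (job.Status.Errors == null || !job.Status.Errors.Any())
{
    // Log success - the load either completed earlier or has just finished.
}
else
{
    // Log error
}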

ReplaceOneAsync() immediately after InsertOneAsync() not always working, even when journaled

On a single-instance MongoDB server, even with the write concern on the client set to journaled, one in every couple of thousand documents isn't replaceable immediately after inserting.
I was under the impression that once journaled, documents are immediately available for querying.
The code below inserts a document, then updates the DateModified property of the document and tries to update the document based on the document's Id and the old value of that property.
public class MyDocument
{
    public BsonObjectId Id { get; set; }
    public DateTime DateModified { get; set; }
}

static void Main(string[] args)
{
    var r = Task.Run(MainAsync);
    Console.WriteLine("Inserting documents... Press any key to exit.");
    Console.ReadKey(intercept: true);
}

private static async Task MainAsync()
{
    var client = new MongoClient("mongodb://localhost:27017");
    var database = client.GetDatabase("updateInsertedDocuments");
    var concern = new WriteConcern(journal: true);
    var collection = database.GetCollection<MyDocument>("docs").WithWriteConcern(concern);
    int errorCount = 0;
    int totalCount = 0;
    do
    {
        totalCount++;

        // Create and insert the document
        var document = new MyDocument
        {
            DateModified = DateTime.Now,
        };
        await collection.InsertOneAsync(document);

        // Save and update the modified date
        var oldDateModified = document.DateModified;
        document.DateModified = DateTime.Now;

        // Try to update the document by Id and the earlier DateModified
        var result = await collection.ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, document);

        if (result.ModifiedCount == 0)
        {
            Console.WriteLine($"Error {++errorCount}/{totalCount}: doc {document.Id} did not have DateModified {oldDateModified.ToString("yyyy-MM-dd HH:mm:ss.ffffff")}");
            await DoesItExist(collection, document, oldDateModified);
        }
    }
    while (true);
}
The code inserts at a rate of around 250 documents per second. One in around 1,000-15,000 calls to ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, ...) fails, as it returns a ModifiedCount of 0. The failure rate depends on whether we run a Debug or Release build and with debugger attached or not: more speed means more errors.
The code shown represents something that I can't really easily change. Of course I'd rather perform a series of Update.Set() calls, but that's not really an option right now. The InsertOneAsync() followed by a ReplaceOneAsync() is abstracted by some kind of repository pattern that updates entities by reference. The non-async counterparts of the methods display the same behavior.
A simple Thread.Sleep(100) between inserting and replacing mitigates the problem.
When the query fails, and we wait a while and then attempt to query the document again in the code below, it'll be found every time.
private static async Task DoesItExist(IMongoCollection<MyDocument> collection, MyDocument document, DateTime oldDateModified)
{
    Thread.Sleep(500);
    var fromDatabaseCursor = await collection.FindAsync(d => d.Id == document.Id && d.DateModified == oldDateModified);
    var fromDatabaseDoc = await fromDatabaseCursor.FirstOrDefaultAsync();
    if (fromDatabaseDoc != null)
    {
        Console.WriteLine("But it was found!");
    }
    else
    {
        Console.WriteLine("And wasn't found!");
    }
}
Versions on which this occurs:
MongoDB Community Server 3.4.0, 3.4.1, 3.4.3, 3.4.4 and 3.4.10, all on WiredTiger storage engine
Server runs on Windows, other OSes as well
C# Mongo Driver 2.3.0 and 2.4.4
Is this an issue in MongoDB, or are we doing (or assuming) something wrong?
Or, the actual end goal, how can I ensure an insert is immediately retrievable by an update?
ReplaceOneAsync reports a ModifiedCount of 0 if the new document is identical to the old one (because nothing was actually changed).
It looks to me like, if your test executes fast enough, the various calls to DateTime.Now could return the same value, so it is possible that you are passing the exact same document to InsertOneAsync and ReplaceOneAsync.
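One way to tell "nothing matched" apart from "matched but nothing changed" is to check MatchedCount alongside ModifiedCount; a small sketch (not part of the original answer):
var result = await collection.ReplaceOneAsync(
    d => d.Id == document.Id && d.DateModified == oldDateModified,
    document);

if (result.MatchedCount == 1 && result.ModifiedCount == 0)
{
    // The filter found the document, but the replacement was identical -
    // e.g. both DateTime.Now calls returned the same timestamp.
}
else if (result.MatchedCount == 0)
{
    // The filter genuinely matched nothing.
}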

C# Async Function To Replace Mongo Document Does Not Work

I am currently running into an issue where I am unable to replace a Mongo document that is supposed to be updated by the following function:
public async void updateSelectedDocument(BsonDocument document)
{
    var collection = mongoClient.GetDatabase("db").GetCollection<BsonDocument>("collection");
    var id = document.GetElement("_id").ToString().Remove(0, 4);
    var filter = Builders<BsonDocument>.Filter.Eq("_id", id);
    var result = await collection.ReplaceOneAsync(filter, document, new UpdateOptions { IsUpsert = true });
}
There is no error to go off of, and none of the variables are null. My connection to the Mongo instance and collection works, because I can perform a find and an insert. Please let me know if you need additional information.
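For what it's worth (this is an observation, not an answer from the thread): GetElement("_id").ToString() turns the stored ObjectId into a string, so the filter compares a string against an ObjectId and never matches; with IsUpsert = true the call then silently inserts a new document instead of replacing one. A sketch that filters on the original value and returns a Task so errors can surface:
public async Task UpdateSelectedDocumentAsync(BsonDocument document)
{
    var collection = mongoClient.GetDatabase("db").GetCollection<BsonDocument>("collection");

    // Compare against the stored value (an ObjectId), not its string representation.
    var filter = Builders<BsonDocument>.Filter.Eq("_id", document["_id"]);

    var result = await collection.ReplaceOneAsync(filter, document,
        new ReplaceOptions { IsUpsert = true }); // older drivers take UpdateOptions here

    Console.WriteLine($"Matched: {result.MatchedCount}, Modified: {result.ModifiedCount}");
}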

Parse.com - if key exists, update

Currently, I'm sending some data to Parse.com. All works well; however, I would like to add a row if it's a new user, or update the existing row if it's a returning user.
So what I need to do is check whether the current Facebook ID (the key I'm using) shows up anywhere in the fbid column, and update the row if it does.
How can I check if the key exists in the column?
Also, I'm using C#/Unity.
static void sendToParse()
{
    ParseObject currentUser = new ParseObject("Game");
    currentUser["name"] = fbname;
    currentUser["email"] = fbemail;
    currentUser["fbid"] = FB.UserId;
    Task saveTask = currentUser.SaveAsync();
    Debug.LogError("Sent to Parse");
}
Okay, I figured it out.
First, I check whether there is any Facebook ID in the table that matches the current ID, then get the number of matches.
public static void getObjectID()
{
    var query = ParseObject.GetQuery("IdealStunts")
        .WhereEqualTo("fbid", FB.UserId);
    query.FirstAsync().ContinueWith(t =>
    {
        ParseObject obj = t.Result;
        objectID = obj.ObjectId;
        Debug.LogError(objectID);
    });
}
If there is a key matching the current Facebook ID, don't do anything. If there isn't, just add a new user.
public static void sendToParse()
{
    if (count != 0) // count: number of matching rows found by the lookup (its assignment isn't shown above)
    {
        Debug.LogError("Already exists");
    }
    else
    {
        ParseObject currentUser = new ParseObject("IdealStunts");
        currentUser["name"] = fbname;
        currentUser["email"] = fbemail;
        currentUser["fbid"] = FB.UserId;
        Task saveTask = currentUser.SaveAsync();
        Debug.LogError("New User");
    }
}
You will have to do a StartCoroutine for sendToParse, so getObjectID has time to look through the table.
It may be a crappy implementation, but it works.
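The snippets above never show where count gets its value; presumably something like CountAsync fills it in before sendToParse runs. A hedged sketch of that missing piece (the method name is an assumption):
public static void getMatchCount()
{
    ParseObject.GetQuery("IdealStunts")
        .WhereEqualTo("fbid", FB.UserId)
        .CountAsync()
        .ContinueWith(t =>
        {
            count = t.Result; // number of rows with this fbid, read later by sendToParse()
        });
}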
What you need to do is create a query for the fbid. If the query returns an object, you update it; if not, you create a new one.
I'm not proficient with C#, but here is an example in Objective-C:
PFQuery *query = [PFQuery queryWithClassName:@"Yourclass"]; // Name of your class in Parse
query.cachePolicy = kPFCachePolicyNetworkOnly;
[query whereKey:@"fbid" equalTo:theFBid]; // Variable containing the fb id
NSArray *users = [query findObjects];
self.currentFacebookUser = [users lastObject]; // Array should contain only 1 object
if (self.currentFacebookUser) { // Might have to test for NULL, but probably not
    // Update the object and save it
} else {
    // Create a new object
}
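A rough C# equivalent of the same find-then-update-or-create flow, using the Parse .NET/Unity SDK and the class and field names from the question (a sketch, not something from the original answer):
public static void SaveOrUpdateUser()
{
    var query = ParseObject.GetQuery("Game").WhereEqualTo("fbid", FB.UserId);
    query.FirstOrDefaultAsync().ContinueWith(t =>
    {
        ParseObject user = t.Result ?? new ParseObject("Game"); // update if found, otherwise create
        user["name"] = fbname;
        user["email"] = fbemail;
        user["fbid"] = FB.UserId;
        return user.SaveAsync();
    });
}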

NetSuite custom record search through suiteTalk using C#

We are having an issue with searching a custom record through SuiteTalk. Below is a sample of what we are calling. The problem lies in setting up the search using the internalId of the record: in our initial development account the internal id of this custom record is 482, but when we deployed it through our bundle the record was assigned the internal id of 314. It stands to reason that this internal id is not static across installs, so we wondered which property to use to reference the custom record. When we made the record we assigned its scriptId to be 'customrecord_myCustomRecord', but through SuiteTalk we do not have a scriptId. What is the best way to make this code work in all environments rather than just one? If there is such a way, could you give an example of how it might be used?
Code (C#) that we are attempting to make the call from. We are using the 2013.2 endpoints at this time.
private SearchResult NetSuite_getPackageContentsCustomRecord(string sParentRef)
{
    List<object> PackageSearchResults = new List<object>();
    CustomRecord custRec = new CustomRecord();
    CustomRecordSearch customRecordSearch = new CustomRecordSearch();

    SearchMultiSelectCustomField searchFilter1 = new SearchMultiSelectCustomField();
    searchFilter1.internalId = "customrecord_myCustomRecord_sublist";
    searchFilter1.@operator = SearchMultiSelectFieldOperator.anyOf;
    searchFilter1.operatorSpecified = true;

    ListOrRecordRef lRecordRef = new ListOrRecordRef();
    lRecordRef.internalId = sParentRef;
    searchFilter1.searchValue = new ListOrRecordRef[] { lRecordRef };

    CustomRecordSearchBasic customRecordBasic = new CustomRecordSearchBasic();
    customRecordBasic.recType = new RecordRef();
    customRecordBasic.recType.internalId = "314"; // "482"; //THIS LINE IS GIVING US THE TROUBLE
    //customRecordBasic.recType.name = "customrecord_myCustomRecord";
    customRecordBasic.customFieldList = new SearchCustomField[] { searchFilter1 };

    customRecordSearch.basic = customRecordBasic;

    // Search for the customer entity
    SearchResult results = _service.search(customRecordSearch);
    return results;
}
I searched all over for a solution to avoid hardcoding internalIds; even NetSuite support failed to give me one. Finally I stumbled upon a solution in NetSuite's knowledge base: getCustomizationId.
It returns the internalId, scriptId and name for all custom records (or customRecordTypes in NetSuite terms, which is what made it hard to find).
public string GetCustomizationId(string scriptId)
{
    // Perform getCustomizationId on custom record type
    CustomizationType ct = new CustomizationType();
    ct.getCustomizationTypeSpecified = true;
    ct.getCustomizationType = GetCustomizationType.customRecordType;

    // Retrieve active custom record type IDs. The includeInactives param is set to false.
    GetCustomizationIdResult getCustIdResult = _service.getCustomizationId(ct, false);

    foreach (var customizationRef in getCustIdResult.customizationRefList)
    {
        if (customizationRef.scriptId == scriptId) return customizationRef.internalId;
    }

    return null;
}
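The hardcoded value in the question can then be replaced by a lookup on the scriptId, which is stable across environments; a minimal sketch reusing the method above:
// Resolve the environment-specific internalId from the stable scriptId once,
// then use it in the search from the question.
string recTypeInternalId = GetCustomizationId("customrecord_myCustomRecord");

CustomRecordSearchBasic customRecordBasic = new CustomRecordSearchBasic();
customRecordBasic.recType = new RecordRef();
customRecordBasic.recType.internalId = recTypeInternalId; // no more hardcoded "314"/"482"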
You can also make the internalId an external (configuration) property so that you can change it according to the environment.
The internalId changes only when you first install the bundle into an environment; subsequent deployments to that environment will not change it unless you choose the Add/Rename option during deployment.
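For example (a sketch; the setting name is illustrative, not from the answer):
// using System.Configuration;
// App.config: <appSettings><add key="MyCustomRecordInternalId" value="314" /></appSettings>
customRecordBasic.recType.internalId =
    ConfigurationManager.AppSettings["MyCustomRecordInternalId"];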
