Too fast response - C#

I have an ASP.NET MVC application that:
opens a db transaction
updates the cart status and other things
submits the cart to another web server via an HttpRequest
registers the transmission in the database with its status code
sends a confirmation mail saying the order has been sent
then commits the transaction if no error occurred, otherwise rolls it back.
Normally, after that, the remote server sends another web request to my application, to a controller action that updates the previous transmission and sets an acknowledge field.
My problem is that the remote web server is sometimes very fast and sends the acknowledge status before the transmission insert has been committed to the database, so the update fails. How can I prevent this?
Thanks.

Just split your commit into two stages. The first stage creates and commits the record. Then do the processing, like creating the mail and so on. The second stage performs the real (logical) commit.
using (var db = new Db())
{
    db.Insert(transmission); // create the record
    db.Save();
} // This commits the first stage: the row is now visible to other requests

// Send the email and do the other work here

using (var db = new Db())
{
    var t = db.GetTransmission();
    t.Committed = true;
    db.Save();
} // This performs the logical commit

Can you update the database, commit, and mark the record as inactive or invalid, then take away this mark once you get the acknowledgement status?
I may be misunderstanding what exactly it is you're doing.
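
A minimal sketch of that idea, assuming a hypothetical Transmission entity with a Pending flag and an EF-style context (all names are illustrative): the row is committed early with Pending = true, and the acknowledgment callback only flips the flag.

// Hypothetical acknowledgment action on the MVC side. Because the
// transmission row was committed before the remote server calls back,
// it should already be visible here; if it is not yet, return an error
// so the remote server (or a retry policy) can try again.
public ActionResult Acknowledge(int transmissionId, string statusCode)
{
    using (var db = new AppDbContext())
    {
        var transmission = db.Transmissions.Find(transmissionId);
        if (transmission == null)
            return new HttpStatusCodeResult(404); // row not committed yet: let the caller retry

        transmission.Pending = false;
        transmission.AcknowledgeStatus = statusCode;
        db.SaveChanges();
        return new HttpStatusCodeResult(200);
    }
}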


MassTransit - Can we use Send req and res for validating

As per my understanding of MassTransit:
Publish: sends a message to subscribers.
Send: sends a message in fire-and-forget fashion.
Request: uses the request/reply pattern to send a message and get a response.
In my requirement I need to validate my request before calling the Send method. The validation should happen at the DB level, to check for, say, duplicate records.
I tried to use Publish before my Send method, but Send doesn't wait for the Publish consumer to execute.
My scenario is: if the validation succeeds, proceed with saving the data, i.e. the Send request to save the data.
So should I use the request/response pattern here for doing the validation? I am a newbie to MassTransit and microservices.
MyTestController
{
    if (validation success) // how to validate here?
        Send request to save data.
}
It sounds like you want to validate the data before it is sent out. Something conceptually like this.
class MyTestController
{
    // ..
    public async Task<IActionResult> Post(SomeData data)
    {
        if (DataIsValid(data))
            await _publishEndpoint.Publish(new Message());

        return Ok();
    }
}
You can validate the data (like null checking) like any other code before you publish. Nothing special here, so I'm guessing it's something else.
You want to validate the data using some other data in a database. If it's the same database that the website/API is using, that is also not a special thing, so I'm guessing that is not it either.
You want to, somehow, validate that the data is correct before sending the message, but you need the data of the application the message is going to. That is typically where I see people get tripped up.
Assuming it's something like number three: let's call the sending service "Service A" and the receiving service "Service B". Today it sounds like you are trying to put the validation in "Service A", but it really has to be in "Service B". What I would do is implement a saga in "Service B". The first step would be to take the request (creating an instance of the saga), then validate the data; if it passes validation, the saga can take the next step in the process. That should give you what you want in terms of validation before action (we just need to move it to "Service B").
Now "Service B" can expose the state of the Saga at an endpoint like /saga-instance/42 where the controller takes the 42, digs into the database, grabs the saga data and converts it into an API response. Service A can poll that endpoint to get updated status details.
Ultimately, I hope you see that there are a lot of variables at play, but that there is a path forward. You may have to simply adjust where certain actions are taken.

Reliable/idempotent mailing from Azure Function app

I'm currently evaluating Azure Functions and I'm trying to find a way/pattern to reliably and idempotently send emails (and store them in a db). I have already read a lot about sagas, 2PC, and eventual consistency, but I don't know how to apply these concepts to my situation.
I already have a few business objects stored in a database. Now I would like to add an endpoint that, e.g., sends a project summary based on a template. Therefore I created an HTTP-triggered function and a CreateEmail method. This is its pseudo code:
public static async Task CreateEmail(QueueClient queue, Guid id)
{
    // add the message to the queue, but keep it hidden for 3 min
    var sendReceipt = await queue.SendMessageAsync(id.ToString(), TimeSpan.FromSeconds(180))
        .ConfigureAwait(false);
    // sendReceipt.Value.PopReceipt is now populated, and only this client can
    // operate on the message until the visibility timeout expires
    try
    {
        // Create the mail entity in the db and commit
        CreateEmailEntityAndCommit(id);
    }
    catch (Exception)
    {
        // Delete the SendMail queue message, because an error occurred in the db operations
        queue.DeleteMessage(sendReceipt.Value.MessageId, sendReceipt.Value.PopReceipt);
        throw;
    }
    // Everything is fine. Make the message visible to the email send function
    queue.UpdateMessage(sendReceipt.Value.MessageId, sendReceipt.Value.PopReceipt,
        visibilityTimeout: TimeSpan.Zero);
}
The code does not actually send the mail; it only creates a database entity and queues a message to Azure Queue Storage. Another, queue-triggered function picks up the messages, sends the mail, and updates the status in the db:
public void Run([QueueTrigger("myqueue-items")] string id, ILogger log)
{
    if (CheckEmailStatus() == Status.Sent)
    {
        // Message received twice
        return;
    }
    SendEmail();
    UpdateEmailStatus(Status.Sent); // How do we deal with exceptions here? Email sent successfully, but status not updated...
}
And here is my problem: if anything goes wrong immediately after sending the mail, the status is not updated. When Azure delivers the message again, the mail will be sent again. I guess there is a pattern to avoid such a situation.
Since you are using a Storage queue, you need to handle idempotency/deduplication at the receiving function using some identifier of the entity. For example, you can maintain a cache of ids and look up whether the currently received id already exists; the cache entries can be given a reasonable TTL matching your desired deduplication window.
Note: duplicate detection is available out of the box with Service Bus queues.
Also you might want to look at Durable Functions.
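
A minimal sketch of that cache-based check (names are illustrative). Note that MemoryCache is per-process, so with multiple function instances the email status table itself is the safer lookup:

using System;
using System.Runtime.Caching;

public static class EmailDeduplicator
{
    private static readonly MemoryCache ProcessedIds = MemoryCache.Default;

    // Returns true if this id was already seen within the TTL window.
    public static bool AlreadyProcessed(string id, TimeSpan ttl)
    {
        // AddOrGetExisting returns null when the key was not yet present,
        // i.e. this is the first time we see this id.
        var existing = ProcessedIds.AddOrGetExisting(
            id, DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.Add(ttl));
        return existing != null;
    }
}

In the queue-triggered function, the guard would then be if (EmailDeduplicator.AlreadyProcessed(id, TimeSpan.FromHours(1))) return; before sending the mail.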

Non-simultaneous commits in a distributed transaction involving EntityFramework/SQL Server and NServiceBus/MSMQ

There is a .NET 4.7 Web API application working with SQL Server via Entity Framework and hosting an NServiceBus endpoint with MSMQ transport.
Simplified workflow can be described by a controller action:
[HttpPost]
public async Task<IHttpActionResult> SendDebugCommand()
{
    var sample = new Sample
    {
        State = SampleState.Initial,
    };
    _dataContext.Set<Sample>().Add(sample);
    await _dataContext.SaveChangesAsync();

    sample.State = SampleState.Queueing;

    var options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.ReadCommitted,
    };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options, TransactionScopeAsyncFlowOption.Enabled))
    {
        await _dataContext.SaveChangesAsync();
        await _messageSession.Send(new DebugCommand { SampleId = sample.Id });
        scope.Complete();
    }
    _logger.OnCreated(sample);
    return Ok();
}
And DebugCommand handler, that is sent to the same NServiceBus endpoint:
public async Task Handle(DebugCommand message, IMessageHandlerContext context)
{
    var sample = await _dataContext.Set<Sample>().FindAsync(message.SampleId);
    if (sample == null)
    {
        _logger.OnNotFound(message.SampleId);
        return;
    }
    if (sample.State != SampleState.Queueing)
    {
        _logger.OnUnexpectedState(sample, SampleState.Queueing);
        return;
    }
    // Some work being done
    sample.State = SampleState.Processed;
    await _dataContext.SaveChangesAsync();
    _logger.OnHandled(sample);
}
Sometimes the message handler retrieves the Sample from the DB and its state is still Initial, not Queueing as expected. That means the distributed transaction initiated in the controller action is not yet fully complete. This is also confirmed by timestamps in the log file.
It happens quite rarely, under heavier load; network latency probably plays a part. I couldn't reproduce the problem with a local DB, but it shows up easily with a remote DB.
I checked the DTC configuration, and I verified that escalation to a distributed transaction definitely happens. Also, if scope.Complete() is not called, then neither the DB update nor the message send happens.
When the transaction scope is completed and disposed, I intuitively expect both the DB and MSMQ to be settled before a single further instruction is executed.
I couldn't find definite answers to these questions:
Is this how DTC works? Is it normal for both transaction parties to commit while completion has not yet been reported back to the coordinator?
If yes, does it mean I should handle such events by altering the program's logic?
Am I misusing transactions somehow? What would be the right way?
In addition to the comments mentioned by Evk in Distributed transaction with MSMQ and SQL Server but sometimes getting dirty reads, here's an excerpt from the relevant documentation page about transactions:
A distributed transaction between the queueing system and the persistent storage guarantees atomic commits but guarantees only eventual consistency.
Two additional notes:
NServiceBus uses IsolationLevel.ReadCommitted by default for the transaction used to consume messages. This can be configured, although I'm not sure whether setting it to Serializable on the consumer would really solve the issue here.
In general, it's not advised to share a database between services, as this greatly increases coupling and opens the door to issues like the one you're experiencing. Try to pass the relevant data as part of the message and keep the database an internal storage detail of one service. Especially with web servers, a common pattern is to put all the relevant data into a message and fire it while confirming success to the user (the message won't be lost), while the receiving endpoint stores the data in its database if necessary. Giving more specific recommendations requires more knowledge about your domain and use case; I can recommend the Particular discussion community for design/architecture questions like this.
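
Given the eventual-consistency guarantee quoted above, one pragmatic mitigation in the handler from the question is to throw on the unexpected state instead of returning, so that NServiceBus recoverability retries the message; by a later attempt the controller's commit has usually become visible. A sketch (the exception message is illustrative):

if (sample.State != SampleState.Queueing)
{
    _logger.OnUnexpectedState(sample, SampleState.Queueing);
    // Throwing triggers NServiceBus recoverability (immediate/delayed retries),
    // giving the controller's distributed transaction time to become visible.
    throw new InvalidOperationException(
        $"Sample {message.SampleId} is not in the Queueing state yet; retrying.");
}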

Row by row streaming of data while waiting for response

I have a WCF service and client, developed in C#, which handle data transfer from the SQL Server database on the server to the SQL Server database on the client end. I am facing some issues with the current architecture and am planning to modify it; I have an idea in mind and would like to know if it is possible to achieve, or how best I can modify the architecture to suit my needs.
The server-side database is SQL Server 2008 R2 SP1 and the client-side servers run SQL Server 2000.
Before I state the idea, below is an overview and the current shortcomings of the architecture I am using.
Overview:
The client requests a table's data.
The WCF service queries the server database for all pending data for the requested table. This data is loaded into a DataSet.
The WCF service compresses the DataSet using GZIP compression and converts it to bytes for the client to download.
The client receives the byte stream, un-compresses it, and replicates the data from the DataSet to the physical table on the client database. The data is inserted row by row, since I need the primary key column field returned to the server so that the row can be flagged off as transferred.
Once the client has finished replicating the data, it uploads the successful rows' primary key fields back to the server, and in turn the server updates each field one by one.
The above procedure uses a basic HTTP binding with streamed transfer mode.
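
For reference, that streamed basicHttpBinding setup typically looks something like this when configured in code (the size and timeout values are illustrative):

using System;
using System.ServiceModel;

var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,
    // Allow large payloads; tune these to your actual data volumes.
    MaxReceivedMessageSize = long.MaxValue,
    SendTimeout = TimeSpan.FromMinutes(30),
    ReceiveTimeout = TimeSpan.FromMinutes(30),
};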
Shortcomings:
This works great for small amounts of data, but for bulk data, holding the DataSet in memory while the download is ongoing, and again on the client side while replication is ongoing, has become impossible: the DataSet size sometimes goes up to 4 GB. The server can hold this much data since it has 32 GB of RAM, but on the client side I get a System.OutOfMemoryException, since the client machine has 2 GB of RAM.
There are numerous deadlocks while the select query is running and also during the updates, since I am using read committed as the transaction isolation level.
For bulk data it is very slow and completely hangs the client machine while the DTS is ongoing.
Idea in mind:
Maintain the same service and the row-by-row transfer logic, since I cannot change this due to the sensitivity of the data; but rather than downloading bulk data, I plan to use the sample given in http://code.msdn.microsoft.com/Custom-WCF-Streaming-436861e6.
Thus the new flow will be as:
Upon receiving the download request, the server will open a connection to the DB using snapshot isolation as the transaction level.
Build the row object on the server one row at a time and send it to the client on the requested channel. As the client receives each row object, it is processed, and a success or failure response is sent back to the server via the same method on the same channel, as I need to update the data within the same snapshot transaction.
This way I will reduce bulk objects in memory and rely on SQL Server for the snapshot data, which will be maintained in tempdb once the transaction is initiated.
Challenge:
How can I send the row object and wait for a confirmation before sending the next one, given that the update to the server row has to occur within the same snapshot transaction? If I create another method on the service to perform the flagging off, the snapshots will be different, and this will cause data-integrity issues in case the data changes after the snapshot transaction was initiated.
If this is the wrong approach, then please suggest a better one, as I am open to any suggestions.
If my understanding of snapshot isolation is wrong, please correct me, as I am new to this.
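
For orientation, opening a snapshot transaction in ADO.NET looks roughly like this, assuming the database has snapshot isolation enabled (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON). Table and column names follow the pseudo code in Update 1 below:

using System.Data;
using System.Data.SqlClient;

string connectionString = "<your connection string>";
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tran = conn.BeginTransaction(IsolationLevel.Snapshot))
    {
        using (var cmd = new SqlCommand(
            "Select * from MyTable Where IsNull(ColToCheck,'N') <> 'Y'", conn, tran))
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // Every read here sees the versioned snapshot taken when the
                // transaction began; SQL Server keeps the row versions in tempdb.
                // Note: issuing further commands on this connection while the
                // reader is open requires MARS.
            }
        }
        tran.Commit(); // end the snapshot so the tempdb versions can be cleaned up
    }
}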
Update 1:
I would like to achieve something like this when the client is the one requesting:
// Client invokes this method on the server
public Stream GetData(string sTableName)
{
    // Open the snapshot transaction on the server
    SqlDataReader rdr = operations.InitSnapshotTrans(
        "Select * from " + sTableName + " Where IsNull(ColToCheck,'N') <> 'Y'");
    // Check if there are rows available
    if (rdr.HasRows)
    {
        while (rdr.Read())
        {
            SendObj sendobj = Logic.CreateObject(rdr);
            // Here is where I am stuck:
            // at this point I want to write the object to the stream
            // ...write sendobj to the stream
            // Once the client is done processing, it replies with true for success or false for failure.
            if (returnObj == true)
            {
                operations.updateMovedRecord(rdr);
            }
        }
    }
}
For the server-side sending I have written the code as such (I used a pub/sub model for this):
public void ServerData(string sTableName)
{
    List<SubList> subscribers = Filter.GetClients();
    if (subscribers == null) return;

    Type type = typeof(ITransfer);
    MethodInfo publishMethodInfo = type.GetMethod("ServerData");
    foreach (SubList subscriber in subscribers)
    {
        try
        {
            // Open the snapshot transaction on the server
            SqlDataReader rdr = operations.InitSnapshotTrans(
                "Select * from " + sTableName + " Where IsNull(ColToCheck,'N') <> 'Y'");
            // Check if there are rows available
            if (rdr.HasRows)
            {
                while (rdr.Read())
                {
                    SendObj sendobj = Logic.CreateObject(rdr);
                    bool rtnVal = Convert.ToBoolean(
                        publishMethodInfo.Invoke(subscriber.CallBackId, new object[] { sendobj }));
                    if (rtnVal == true)
                    {
                        operations.updateMovedRecord(rdr);
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
    }
}
Just off the top of my head, this sounds like it might take longer. That may or may not be a concern.
Given the requirement in the challenge (that everything happen in the context of one method call), it sounds like what actually needs to happen is for the server to call a method on the client, sending a record, and then wait for the client to return confirmation. That way, everything that needs to happen happens in the context of a single call (server to client). I don't know if that's feasible in your situation.
Another option might be to use some kind of double-queue system (perhaps with MSMQ?) so that the server and client can maintain an ongoing conversation within a single session.
I assume there's a reason why you can't just divide the data to be downloaded into manageable chunks and repeatedly execute the original process on the chunks. That sounds like the least ambitious option, but you probably would have done it already if it met all your needs.
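
A hedged sketch of the first suggestion as a WCF duplex contract: the server pushes one row at a time over a callback channel and waits for the client's boolean confirmation before flagging the row and sending the next, all within a single service call. This needs a duplex-capable binding (e.g. NetTcpBinding); apart from SendObj, which comes from the question's code, all names are illustrative:

using System.ServiceModel;

// Contract implemented by the client; the server calls it row by row.
public interface ITransferCallback
{
    [OperationContract]
    bool ReceiveRow(SendObj row); // true = replicated OK, false = failed
}

[ServiceContract(CallbackContract = typeof(ITransferCallback))]
public interface ITransferService
{
    [OperationContract]
    void StartTransfer(string tableName);
}

public class TransferService : ITransferService
{
    public void StartTransfer(string tableName)
    {
        var callback = OperationContext.Current.GetCallbackChannel<ITransferCallback>();

        // Inside one snapshot transaction: stream rows one by one, blocking on
        // each callback, and flag a row as moved only after the client confirms
        // it (ReadRowsUnderSnapshot and FlagRowAsMoved are hypothetical helpers).
        // foreach (var row in ReadRowsUnderSnapshot(tableName))
        //     if (callback.ReceiveRow(row))
        //         FlagRowAsMoved(row);
    }
}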

WPF, WCF and Code First - Disconnected mode

Some architecture dilemma:
I'm using WPF as my client side, EF Code First as my data access layer, and WCF to connect the two. My problem is how to re-update the UI after I make some changes to the DB, for example:
The user inserts a new "Person" in the UI (ID=0)
The user saves the "Person" to the DB (ID=10, for example)
When talking about one record it's very simple: I can return the ID and update my UI as well (so the next change to this Person will be considered an "update"). But what about adding more than one Person at once, or updating other properties that were calculated on the server? Should I return the whole graph? Not to mention it's very hard to remap it on the client side.
Before Code First we could use STEs (self-tracking entities), but they have their own problems. Does anyone know of an established Code First approach?
Would be happy to hear your thoughts.
Thanks!
You can send the dateTime of your last client-side update as the request to your WCF service. On the server side you then take all Persons that were updated/added after that dateTime and return them as the result. This way you will get only the modified/added Persons from the server side.
So add a lastUpdate column to your Person entity.
EDIT 1
If you want the server to push updates to the client, rather than the client polling the server for news, you can use the approach known from web programming (long polling):
(1) The client asks the server: "hey, my last update was at 20:00 10.02.2013". The server looks into the DB: "is there news after 20:00 10.02.2013?" If yes:
a) it returns the news to the client.
If there is no news in the DB:
b) it doesn't return null; instead it does Thread.Sleep(someValue), then repeats the query to the DB, asking again whether there is news. This repeats until news appears in the DB. Once it does, the server returns the List<data> updated after that dateTime. After the client gets the data, it goes back to point (1).
So you don't make a lot of requests to the server; you make only one request and wait for the news from the server.
Notice two things:
1) If the client waits too long, the server side will throw an exception (I don't remember the error code, but it's not important now), so you have to catch this exception on the client side and make a new request to the server side. You should also configure on the server side how long it may wait, to minimize the number of requests from the client.
2) You have to run this data updater in a new thread, not in the main thread where the application runs.
Here is how it could look in code (it may not work as-is; I just want to show you the logic):
Server side:
public List<SomeData> Updater(DateTime clientSideLastUpdate)
{
    while (true)
    {
        List<SomeData> news = dbContext.SomeData
            .Where(e => e.UpdateDateTime > clientSideLastUpdate)
            .ToList();
        if (news.Count > 0)
        {
            return news;
        }
        Thread.Sleep(5000); // wait before querying the DB again
    }
}
Client-side:
public static void Updater()
{
    try
    {
        var news = someServiceReference.Updater(clientSideLastUpdate);
        RenewDataInForms(news);
        Updater();
    }
    catch (ServerDiesOrWhatElseException)
    {
        Updater();
    }
}
And somewhere in the code you run this updater in a new thread:
Thread updaterThread = new Thread(Updater);
updaterThread.Start();
Edit 2
If you want to update all entities with one request, not only SomeData, then you have to add a DTO object that contains a List for each entity you want to be updatable. The server side will populate and return this DTO object, as sketched below.
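A minimal sketch of such a DTO, reusing Person from the question and SomeData from above (the extra timestamp is an illustrative convenience):

using System;
using System.Collections.Generic;

// One response carrying everything modified since the client's last update.
public class UpdatesDto
{
    public List<Person> PersonNews { get; set; }
    public List<SomeData> SomeDataNews { get; set; }
    // The client stores this as its new lastUpdate for the next request.
    public DateTime ServerTimeUtc { get; set; }
}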
Hope it helps.
