Concurrency problem in ASP.NET Core async API - c#

I'm developing a cryptocurrency project. When a customer tries to withdraw money, I use the code shown here to ensure that the customer has enough balance to make the withdrawal, and then I pay the desired amount into the customer's wallet.
public async Task EnsureCustomerHasEnoughBalance(decimal withdrawAmount, Guid customerId)
{
    var currentBalance = await _transactionService.GetCustomerBalance(customerId);
    if (currentBalance < withdrawAmount)
        throw new Exception("Insufficient balance");
}
The problem is that if someone calls my async API many times in quick succession, some of the requests will be processed at the same time on different threads. So any customer can exploit my code and withdraw more money than his balance allows.
I've tried using lock, semaphore, etc., but none of them work with async code. Do you have any solution?
Thanks for any help.

I have some experience with this case. To be honest, when running a payment application, with critical actions relating to balance we need to be really careful. In this case I don't use an in-process lock, since we deploy the app on many nodes, so such a lock cannot guarantee atomicity.
In my project, we have two options:
Centralized locking using Redis (Redlock); a C# sketch follows the procedure below.
Using a transaction: we write a stored procedure and wrap the balance check and the update in one transaction.
CREATE PROCEDURE withdraw(IN account_id INT, IN amount INT)
BEGIN
    START TRANSACTION;

    -- Lock the customer's row for the duration of the transaction.
    SELECT balance INTO @available_balance
    FROM accounts
    WHERE id = account_id
    FOR UPDATE;

    IF (@available_balance >= amount) THEN
        UPDATE accounts SET balance = balance - amount WHERE id = account_id;
        SELECT 1;
    ELSE
        SELECT -1;
    END IF;

    COMMIT;
END
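For the first option, here is a minimal C# sketch using the RedLock.net and StackExchange.Redis packages; the Redis endpoint, lock expiry, and resource naming are assumptions, not values from the original setup:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;
using StackExchange.Redis;

var multiplexers = new List<RedLockMultiplexer>
{
    ConnectionMultiplexer.Connect("localhost:6379") // assumed Redis endpoint
};
var redlockFactory = RedLockFactory.Create(multiplexers);

// One lock per customer: withdrawals for the same customer serialize across
// all nodes, while different customers still proceed in parallel.
var resource = $"withdraw:{customerId}";
var expiry = TimeSpan.FromSeconds(30);

using (var redLock = await redlockFactory.CreateLockAsync(resource, expiry))
{
    if (redLock.IsAcquired)
    {
        var currentBalance = await _transactionService.GetCustomerBalance(customerId);
        if (currentBalance < withdrawAmount)
            throw new Exception("Insufficient balance");

        // perform the withdrawal here, while the lock is still held
    }
    // If not acquired, another node holds the lock; fail or retry as appropriate.
}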

Do you use the Nethereum NuGet package? If you use the Nethereum package, it's OK. I used the Nethereum package for withdrawing money and there was no problem. Please check these sites:
https://nethereum.com/
https://docs.nethereum.com/en/latest/nethereum-block-processing-detail/

Multiple processors trying to update the same row in database

I have an Azure function app that triggers when data is enqueued to a service bus queue. In the Azure function, I implemented a charging method. First I get the particular row from the database and check the account points.
SELECT * FROM UsersAccountPoints WHERE UserId = @UserId
Then I update the points.
UPDATE UsersAccountPoints
SET FundsChange = @ChargeAmount, FundsAmount -= @ChargeAmount
WHERE UserId = @UserId
When I run this locally, it runs perfectly fine. But when I deploy it to Azure, the function app starts scaling out, and then parallel processors start updating the same row of the UsersAccountPoints table, giving me confusing results.
Then I tried UPDLOCK.
SELECT *
FROM UsersAccountPoints WITH (UPDLOCK)
WHERE UserId = @UserId
But this also gives me the same result. I use the Azure premium functions plan, a SQL Server database, and Dapper as the ORM. Is there a better way to handle this kind of scenario than UPDLOCK?
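For comparison, a pattern that is often suggested for this kind of race is to fold the check and the debit into a single atomic statement, so there is no window between the SELECT and the UPDATE. A minimal Dapper sketch; the guard condition FundsAmount >= @ChargeAmount is an assumption about the intended rule:

// Check + debit in one statement: concurrent workers cannot both pass the
// guard, because the row is locked for the duration of the UPDATE.
const string sql = @"
    UPDATE UsersAccountPoints
    SET FundsChange = @ChargeAmount, FundsAmount -= @ChargeAmount
    WHERE UserId = @UserId
      AND FundsAmount >= @ChargeAmount;";

var rows = await connection.ExecuteAsync(sql, new { UserId = userId, ChargeAmount = chargeAmount });
if (rows == 0)
{
    // Guard failed: insufficient points (or unknown user).
}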

Hangfire causing locks in SQL Server

We are using Hangfire 1.7.2 within our ASP.NET Web project with SQL Server 2016. We have around 150 sites on our server, each using Hangfire 1.7.2. We noticed that when we upgraded these sites to use Hangfire, the DB server collapsed. Checking the DB logs, we found multiple locking queries. We identified one RPC event, "sys.sp_getapplock;1", in all the blocking sessions. It seems like Hangfire is locking our DB, rendering the whole DB unusable. We noticed almost 670+ locking queries caused by Hangfire.
This could possibly be due to these properties we set up:
SlidingInvisibilityTimeout = TimeSpan.FromMinutes(30),
QueuePollInterval = TimeSpan.FromHours(5)
Each site has around 20 background jobs; a few of them run every minute, while others run every hour, every 6 hours, or once a day.
I have searched the documentation but could not find anything that explains these two properties or how to set them to avoid DB locks.
Looking for some help on this.
EDIT: The following queries are executed every second:
exec sp_executesql N'select count(*) from [HangFire].[Set] with (readcommittedlock, forceseek) where [Key] = @key',N'@key nvarchar(4000)',@key=N'retries'
select distinct(Queue) from [HangFire].JobQueue with (nolock)
exec sp_executesql N'select count(*) from [HangFire].[Set] with (readcommittedlock, forceseek) where [Key] = @key',N'@key nvarchar(4000)',@key=N'retries'
irrespective of the various combinations of timespan values we set. Here is the code of GetHangfireServers we are using:
public static IEnumerable<IDisposable> GetHangfireServers()
{
    // Reference for GlobalConfiguration.Configuration: http://docs.hangfire.io/en/latest/getting-started/index.html
    // Reference for UseSqlServerStorage: http://docs.hangfire.io/en/latest/configuration/using-sql-server.html#configuring-the-polling-interval
    GlobalConfiguration.Configuration
        .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
        .UseSimpleAssemblyNameTypeSerializer()
        .UseRecommendedSerializerSettings()
        .UseSqlServerStorage(
            ConfigurationManager.ConnectionStrings["abc"].ConnectionString,
            new SqlServerStorageOptions
            {
                CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
                SlidingInvisibilityTimeout = TimeSpan.FromMinutes(30),
                QueuePollInterval = TimeSpan.FromHours(5), // Hangfire will poll after 5 hrs to check failed jobs.
                UseRecommendedIsolationLevel = true,
                UsePageLocksOnDequeue = true,
                DisableGlobalLocks = true
            });

    // Reference: https://docs.hangfire.io/en/latest/background-processing/configuring-degree-of-parallelism.html
    var options = new BackgroundJobServerOptions
    {
        WorkerCount = 5
    };
    var server = new BackgroundJobServer(options);
    yield return server;
}
The worker count is set just to 5.
There are just 4 jobs, and even those are completed (SELECT * FROM [HangFire].[State]).
Do you have any idea why Hangfire is hitting the database with so many queries every second?
We faced this issue in one of our projects. The Hangfire dashboard is pretty read-heavy, and it polls the Hangfire DB very frequently to refresh job status.
The best solution that worked for us was to have a dedicated Hangfire database.
That way you isolate the application queries from the Hangfire queries, and your application queries won't be affected by the Hangfire server and dashboard queries.
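A minimal sketch of what that looks like in configuration; the "HangfireDb" connection string name is an assumption:

// Point Hangfire at its own database so its polling and dashboard queries
// do not compete with application queries.
GlobalConfiguration.Configuration
    .UseSqlServerStorage(
        ConfigurationManager.ConnectionStrings["HangfireDb"].ConnectionString);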
There is a newer configuration option called SlidingInvisibilityTimeout when configuring SqlServerStorage; it causes these database locks as part of a newer, non-transactional message-fetching algorithm. It is meant for long-running jobs, where backups of the transaction logs may otherwise error out (as a database transaction stays active for the whole duration of a long-running job).
.UseSqlServerStorage(
"connection_string",
new SqlServerStorageOptions { SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5) });
Our DBA did not like the database locks, so I just removed the SlidingInvisibilityTimeout option to use the old transaction-based message-fetching algorithm, since I didn't have any long-running jobs in my queue.
Whether you enable this option or not depends on your situation. You may want to consider moving your queue database outside of your application database, if it isn't already separate, and enabling the SlidingInvisibilityTimeout option. If your DBA can't live with the locks even when the queue is a separate database, then maybe you could refactor your tasks into many smaller, shorter-lived tasks. Just some ideas.
https://www.hangfire.io/blog/2017/06/16/hangfire-1.6.14.html
SqlServerStorage runs Install.sql, which takes an exclusive application lock on the Hangfire schema:
DECLARE @SchemaLockResult INT;
EXEC @SchemaLockResult = sp_getapplock @Resource = '$(HangFireSchema):SchemaLock',
    @LockMode = 'Exclusive'
From the Hangfire documentation:
"SQL Server objects are installed automatically from the SqlServerStorage constructor by executing statements described in the Install.sql file (which is located under the tools folder in the NuGet package). Which contains the migration script, so new versions of Hangfire with schema changes can be installed seamlessly, without your intervention."
If you don't want to run this script every time, you could set SqlServerStorageOptions.PrepareSchemaIfNecessary to false.
var options = new SqlServerStorageOptions
{
    PrepareSchemaIfNecessary = false
};
var sqlServerStorage = new SqlServerStorage(connectionstring, options);
Instead, run the Install.sql manually using this line:
SqlServerObjectsInstaller.Install(connection);
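A minimal sketch of running the installer once, e.g. from a deployment step; the connection handling here is an assumption, not part of the original answer:

using System.Data.SqlClient;
using Hangfire.SqlServer;

// Run the Hangfire schema migration once at deploy time instead of on
// every application start.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlServerObjectsInstaller.Install(connection);
}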

Send notification on specific date and time

In our app, we store questions with a start date, end date, and result date. We need to send a notification to the app (iPhone and Android) once a question's start date arrives.
Can anybody let me know how we can achieve this?
We don't want to use a polling approach, i.e. checking the question start dates at some fixed interval and sending the notifications then.
I have a URL that sends the notification for a question. I need to call this URL when the question's start date arrives.
Thanks.
Take a look at Quartz:
Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems
Quartz Enterprise Scheduler .NET
You can create a new Quartz job; let's call it QuestionSenderJob. Then your application can schedule a task in the Quartz scheduler; there can be many instances of the same job, each with custom data, in your case the QuestionId.
Additionally, it supports storing job schedules in your SQL database (there are DDL scripts included), so you can create some relations to them if you need to, for a UI for example.
You can find table-creation SQL scripts in the "database/dbtables" directory of the Quartz.NET distribution
Lesson 9: JobStores
This way you leave firing at the right moment to the Quartz engine.
Once you have been through the Quartz.NET basics, see this code snippet I made for your case to schedule a job. Perhaps some modifications will be necessary, though.
IDictionary<string, object> jobData = new Dictionary<string, object> { { "QuestionId", questionId } };
var questionDate = new DateTime(2016, 09, 01);
var questionTriggerName = string.Format("Question{0}_Trigger", questionId);

var questionTrigger = TriggerBuilder.Create()
    .WithIdentity(questionTriggerName, "QuestionSendGroup")
    .StartAt(questionDate)
    .UsingJobData(new Quartz.JobDataMap(jobData))
    .Build();

scheduler.ScheduleJob(questionSenderJob, questionTrigger);
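The snippet assumes a questionSenderJob that has already been built; a minimal sketch of that piece, with the identity names chosen here as assumptions:

// Assumed setup for the snippet above: the job detail the trigger fires.
IJobDetail questionSenderJob = JobBuilder.Create<QuestionSenderJob>()
    .WithIdentity("QuestionSenderJob", "QuestionSendGroup")
    .Build();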
Then, in the job, you will get your questionId through the execution context.
public class QuestionSenderJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        JobDataMap dataMap = context.JobDetail.JobDataMap;
        // Extract question Id and send message
    }
}
What about using the Task Scheduler Managed Wrapper?
You do not want to use polling, but if you write your own class that encapsulates a timer (e.g. System.Threading.Timer) and checks the time each second, that will not take many resources. Depending on how exact you need it to be, you could also check less often, e.g. each minute. Maybe you should reconsider polling.
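A minimal sketch of that idea; SendDueNotifications() is a hypothetical method that looks up questions whose start date has arrived and calls your notification URL:

using System;
using System.Threading;

// Check once a minute; tighten the period if you need more precision.
var timer = new Timer(
    _ => SendDueNotifications(), // hypothetical lookup-and-notify method
    null,
    TimeSpan.Zero,
    TimeSpan.FromMinutes(1));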
If you use a third-party service to manage your push notifications, such as Azure Notification Hubs or Parse.com, they offer an integrated way to schedule push notifications, either by passing in a send date or by letting them run a job periodically. I'm a user of the Azure service and it works very well.
The best implementation I can advise right now is to send the notification from a server.
All you need is a good scheduler that can dispatch the operation.
For me, my server is powered by JavaScript (Node.js), so I use "node-schedule". All I do is:
var schedule = require('node-schedule');

// Fire at minute 1 of every hour
var rule = new schedule.RecurrenceRule();
rule.minute = 1;

schedule.scheduleJob(rule, function () {
    console.log(new Date().toTimeString() + ' Testing Scheduler! Executing at minute 1 of every hour');
    //sendPush()
});

Design Model and Hints For Multithreaded Win Form Application

I am trying to design a multithreaded Windows Forms application that mainly serves to let our clients send emails quickly to their customers (there can be millions of them, as one client is a big telecommunications company), and I need design hints. (I am sorry the question is long.)
I have read a fair number of articles about multithreaded applications. I also read about SmartThreadPool, the .NET ThreadPool, the Task Parallel Library, and other SO questions. But I could not come up with a correct design. My logic is like this:
At the start of the program (the email engine), a timer starts and checks whether there are any email campaigns in the database (Campaigns table) with Status 1 (new campaign).
If there are, the campaign's subscribers are queried from the DB and written to another table (via SqlBulkCopy) called the SubscriberReports table, and the campaign's Status is updated to 2 in the Campaigns table.
The timer also watches for campaigns with Status 2 and calls another method that customizes the campaign for each subscriber and creates a struct holding the subscriber's customized properties.
Thirdly, a SendEmail method is invoked to send the email via SMTP. What I have tried so far is below (I know that ThreadPool is wrong here, and I have a bunch of other mistakes). Can you please suggest how to design such an application? I highly appreciate any help. Thanks a lot for your time.
private void ProcessTimer(object Source, ElapsedEventArgs e)
{
    Campaigns campaign = new Campaigns();
    IEnumerable<Campaigns> campaignsListStatusOne = // Get Campaign Properties to a List
    IEnumerable<Campaigns> campaignsListStatusTwo = // Get Campaign Properties to a List

    foreach (Campaigns _campaign in campaignsListStatusOne)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(CheckNewCampaign), _campaign.CampaignID);
    }
    foreach (Campaigns _campaign in campaignsListStatusTwo)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(CustomizeMail), _campaign.CampaignID);
    }
}

private void CheckNewCampaign(object state)
{
    int campaignID = (int)state;
    DataTable dtCampaignSubscribers = // get subscribers based on Campaign ID
    campaign.UpdateStatus(campaignID, 2);
}

private void CustomizeMail(object state)
{
    int campaignID = (int)state;
    CampaignCustomazition campaignCustomizer;
    IEnumerable<SubscriberReports> reportList = // get subscribers to be sent from Reports table

    foreach (SubscriberReports report in reportList)
    {
        // 3 database-related methods are here
        campaignCustomizer = new CampaignCustomazition(report.CampaignID, report.SubscriberID);
        campaignCustomizer.CustomizeSource(report.CampaignID, report.SubscriberID, out campaignCustomizer.source, out campaignCustomizer.format);
        campaignCustomizer.CustomizeCampaignDetails(report.CampaignID, report.SubscriberID, out campaignCustomizer.subject, out campaignCustomizer.fromName, out campaignCustomizer.fromEmail, out campaignCustomizer.replyEmail);
        campaignCustomizer.CustomizeSubscriberDetails(report.SubscriberID, out campaignCustomizer.email, out campaignCustomizer.fullName);
        ThreadPool.QueueUserWorkItem(new WaitCallback(SendMail), campaignCustomizer);
    }
}

private void SendMail(object state)
{
    // No need to construct a fresh instance first; the queued state is the customizer.
    CampaignCustomazition campaignCustomizer = (CampaignCustomazition)state;
    // send email based on info at campaignCustomizer via SMTP and update DB record if it succeeds.
}
There is little to be gained here by using threading. What threads buy you is more CPU cycles, assuming you have a machine with multiple cores, which is pretty standard these days. But that's not what you need to get the job done quicker. You need more database and email servers, and surely you only have one of each. Your program will burn very few cycles; it is constantly waiting for the database query and the email server to complete their jobs.
The only way to get ahead is to overlap the delays of each. One thread could constantly be waiting for the database engine, the other could constantly be waiting for the email server. That is better than one thread waiting for both.
That's not likely to buy you much either, though; there's a big mismatch between the two. The database engine can give you thousands of email addresses in a second; the email server can only send a few hundred emails in a second. Everything is throttled by how fast the email server works.
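For what it's worth, the overlap idea looks roughly like the sketch below; QueryAddresses and SendEmail are hypothetical stand-ins for the database and SMTP calls:

using System.Collections.Concurrent;
using System.Threading.Tasks;

// Bounded queue, so the fast database reader cannot run arbitrarily far
// ahead of the slow email sender.
var queue = new BlockingCollection<string>(boundedCapacity: 1000);

var reader = Task.Run(() =>
{
    foreach (var address in QueryAddresses()) // hypothetical DB call
        queue.Add(address);
    queue.CompleteAdding();
});

var sender = Task.Run(() =>
{
    foreach (var address in queue.GetConsumingEnumerable())
        SendEmail(address); // hypothetical SMTP call
});

Task.WaitAll(reader, sender);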
Given the low odds of getting ahead, I'd recommend you don't try to get yourself into trouble with threading at all. It has a knack for producing very hard-to-diagnose failures if you don't lock properly. The amount of time you can spend troubleshooting this can greatly exceed the operational gains from moving a wee bit faster.
If you are contemplating threading to avoid freezing a user interface, then that's a reasonable use for threading. Use BackgroundWorker; the MSDN Library has excellent help for it.
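A minimal BackgroundWorker sketch, assuming a WinForms form started from the UI thread; the handler bodies are placeholders:

using System.ComponentModel;

var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread: query the DB and send emails here.
    // Call worker.ReportProgress(percent) to notify the UI.
};
worker.ProgressChanged += (s, e) =>
{
    // Runs on the UI thread: safe to update controls here.
};
worker.RunWorkerCompleted += (s, e) =>
{
    // Runs on the UI thread when DoWork finishes or throws.
};

worker.RunWorkerAsync();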

Bloomberg API request timing out

Having set up a ReferenceDataRequest I send it along to an EventQueue
Service refdata = _session.GetService("//blp/refdata");
Request request = refdata.CreateRequest("ReferenceDataRequest");
// append the appropriate symbol and field data to the request
EventQueue eventQueue = new EventQueue();
Guid guid = Guid.NewGuid();
CorrelationID id = new CorrelationID(guid);
_session.SendRequest(request, eventQueue, id);
long _eventWaitTimeout = 60000;
myEvent = eventQueue.NextEvent(_eventWaitTimeout);
Normally I can grab the message from the queue, but I'm now hitting a situation where, if I make a number of requests in the same run of the app (normally around the tenth), I see a TIMEOUT EventType:
if (myEvent.Type == Event.EventType.TIMEOUT)
    throw new Exception("Timed Out - need to rethink this strategy");
else
    msg = myEvent.GetMessages().First();
These are being made on the same thread, but I'm assuming that there's something somewhere along the line that I'm consuming and not releasing.
Anyone have any clues or advice?
There aren't many references on SO to BLP's API, but hopefully we can start to rectify that situation.
I just wanted to share something, thanks to the code you included in your initial post.
If you make a request for historical intraday data over a long duration (which results in many events generated by the Bloomberg API), do not use the pattern specified in the API documentation, as it may end up making your application very slow to retrieve all the events.
Basically, do not call NextEvent() on a Session object! Use a dedicated EventQueue instead.
Instead of doing this:
var cID = new CorrelationID(1);
session.SendRequest(request, cID);

Event eventObj;
do {
    eventObj = session.NextEvent();
    // ... process the messages on the event ...
} while (eventObj.Type != Event.EventType.RESPONSE);
Do this:
var cID = new CorrelationID(1);
var eventQueue = new EventQueue();
session.SendRequest(request, eventQueue, cID);

Event eventObj;
do {
    eventObj = eventQueue.NextEvent();
    // ... process the messages on the event ...
} while (eventObj.Type != Event.EventType.RESPONSE);
This can result in some performance improvement, though the API is known to not be particularly deterministic...
I didn't really ever get around to solving this question, but we did find a workaround.
Based on a small, apparently throwaway, comment in the Server API documentation, we opted to create a second session. One session is responsible for static requests, the other for real-time. e.g.
_marketDataSession.OpenService("//blp/mktdata");
_staticSession.OpenService("//blp/refdata");
This means one session operates in subscription mode, the other more synchronously; I think it was this duality which was at the root of our problems.
Since making that change, we've not had any problems.
My reading of the docs agrees that you need separate sessions for the "//blp/mktdata" and "//blp/refdata" services.
A client appeared to have a similar problem. I solved it by making hundreds of sessions rather than passing hundreds of requests in one session. Bloomberg may not be too happy with this BFI (brute force and ignorance) approach, as we are sending the field requests for each session, but it works.
Nice to see another person on Stack Overflow enjoying the pain of the Bloomberg API :-)
I'm ashamed to say I use the following pattern (I suspect it was copied from the example code). It seems to work reasonably robustly, but it probably ignores some important messages. But I don't get your timeout problem. It's Java, but all the languages work basically the same.
cid = session.sendRequest(request, null);
while (true) {
    Event event = session.nextEvent();
    MessageIterator msgIter = event.messageIterator();
    while (msgIter.hasNext()) {
        Message msg = msgIter.next();
        if (msg.correlationID() == cid) {
            processMessage(msg, fieldStrings, result);
        }
    }
    if (event.eventType() == Event.EventType.RESPONSE) {
        break;
    }
}
This may work because it consumes all messages off each event.
It sounds like you are making too many requests at once. BB will only process a certain number of requests per connection at any given time. Note that opening more and more connections will not help, because there are limits per subscription as well. If you make a large number of time-consuming requests simultaneously, some may time out.
Also, you should process each request completely (until you receive the RESPONSE message) or cancel it; a partial request that is outstanding is wasting a slot. Since splitting into two sessions seems to have helped you, it sounds like you are also making a lot of subscription requests at the same time. Are you using subscriptions as a way to take snapshots, i.e. subscribe to an instrument, get the initial values, and unsubscribe? If so, you should try to find a different design; this is not the way subscriptions are intended to be used. An outstanding subscription request also uses a request slot, which is why it is best to batch as many subscriptions as possible into a single subscription list instead of making many individual requests. Hope this helps with your use of the API.
By the way, I can't tell from your sample code: while you are blocked on messages from your dedicated event queue, are you also reading from the main event queue (on a separate thread)? You must process all the messages out of the queue, especially if you have outstanding subscriptions; responses can queue up really fast. If you are not processing messages, the session may hit some queue limits, which may be why you are getting timeouts. Also, if you don't read messages, you may be marked a slow consumer and not receive more data until you start consuming the pending messages. The API is async. Event queues are just a way to block on specific requests without having to process all messages from the main queue, in a context where blocking is OK and where it would otherwise be difficult to interrupt the logic flow to process parts asynchronously.
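To illustrate the batching point, a rough C# sketch of a single subscription list; the securities, fields, and the exact Subscription constructor overload are assumptions based on the API's example code, so check them against your API version:

using System.Collections.Generic;
using Bloomberglp.Blpapi;

var fields = new List<string> { "LAST_PRICE", "BID", "ASK" };
var options = new List<string>();

// One batched subscription request instead of many individual ones, so a
// single request covers all the instruments.
var subscriptions = new List<Subscription>
{
    new Subscription("IBM US Equity", fields, options, new CorrelationID(1)),
    new Subscription("MSFT US Equity", fields, options, new CorrelationID(2)),
};
session.Subscribe(subscriptions);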
