How to use redis pipelining (StackExchange.Redis) in C#?

I have tested redis-benchmark on my Linux system and was impressed by the results. But while benchmarking, I used pipelining of 16 commands. Now I am trying to do the same from C#.
My main problem is that I want to log thousands of random data items into Redis, and I can't figure out how to use pipelining for this.
Thanks in advance.

The most explicit way to use pipelining in StackExchange.Redis is to use the CreateBatch API:
var db = conn.GetDatabase();
var batch = db.CreateBatch();
// queue some async operations **without** awaiting them (yet), for example:
var t1 = batch.StringSetAsync("key1", "value1");
var t2 = batch.ListRightPushAsync("log", "some entry");
batch.Execute(); // this sends the queued commands
// now await the things you queued
await Task.WhenAll(t1, t2);
However, note that you can achieve a lot without that, since:
concurrent load from different threads (whether sync or async) is multiplexed, allowing effective sharing of a single connection
the same trick of "issue multiple async operations but don't await them just yet" still works fine even without the batch API; see the sketch below. Using the batch API ensures that the batch is sent as a contiguous block, without work from concurrent threads getting interleaved within the batch (this is similar to, but less strict than, the CreateTransaction() API).
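For instance, a minimal sketch of that non-batch pattern (the key names are illustrative):
var db = conn.GetDatabase();
// issue several operations without awaiting each one individually...
var t1 = db.StringIncrementAsync("counter:a");
var t2 = db.StringIncrementAsync("counter:b");
var t3 = db.ListRightPushAsync("log", "entry");
// ...the multiplexer pipelines them over the shared connection
await Task.WhenAll(t1, t2, t3);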
Note also that in some bulk scenarios you might also want to consider Lua (ScriptEvaluate()); this API is variadic, so it can adapt to arbitrary argument lengths - your Lua simply needs to inspect the sizes of KEYS and ARGV (discussed in the EVAL documentation).
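As a hedged sketch of the Lua route (key and value names are illustrative), a single round trip can set however many keys are passed in, with the script sizing itself from KEYS and ARGV:
var keys = new RedisKey[] { "k1", "k2", "k3" };
var values = new RedisValue[] { "v1", "v2", "v3" };
await db.ScriptEvaluateAsync(
    "for i = 1, #KEYS do redis.call('SET', KEYS[i], ARGV[i]) end",
    keys, values);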

Related

Redis StackExchange batching and transactions

I'm trying to understand the nitty gritty details of Redis StackExchange.
1.
If I create a batch, and update a key and also set an expiration on that key, could that execute out of order when sent to Redis, so that the expiration is set on a non-existing key before the update is made?
e.g.
batch.ListRightPushAsync(myKey, payload.ToByteArray());
batch.KeyExpireAsync(myKey, DateTime.Now.AddDays(1));
Should I use a transaction for this instead?
2.
The API for batches and transactions feels a bit catch-22 to use: you first have to execute it and then await the tasks.
There is a WaitAll void blocking method on the IDatabase interface.
Is there any difference in using this instead of Task.WhenAll?
I assume there has to be, as the clever people who made the lib would not just randomly add blocking operations for no reason.
If I ingest large amounts of telemetry, e.g. logs and metrics, and I want to write this to Redis as performantly as possible:
Do I benefit from first buffering these and then sending them in a batch (or transaction)?
3.
If the StackExchange API throws timeout exceptions while processing such a batch/transaction, does it mean that the data was lost, or just that it took too long waiting but the data will still be written?
In such a case, I assume retries would be harmful, as various operations might or might not have been applied to the data already?
First of all, batches are a way to send a sequence of commands through a StackExchange.Redis multiplexer with the guarantee that no other command from that multiplexer will be sent in between them. Batches do not exist in Redis itself, and even within a batch the server can interleave commands sent by other clients with your own sequence of commands.
Transactions, on the other hand, are handled by Redis itself in an atomic way, and there are multiple commands you can use to deal with them.
If I create a batch, and update a key and also set an expiration on that key, could that execute out of order when sent to Redis, so that the expiration is set on a non-existing key before the update is made?
Nope, the order of commands is preserved.
Should I use a transaction for this instead?
It depends: if you wish to execute your commands in an atomic way then yes, use a transaction instead.
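For example, a minimal sketch using CreateTransaction(), reusing myKey and payload from the question; both commands reach the server as one MULTI/EXEC block:
var tran = db.CreateTransaction();
var pushTask = tran.ListRightPushAsync(myKey, payload.ToByteArray());
var expireTask = tran.KeyExpireAsync(myKey, DateTime.Now.AddDays(1));
bool committed = await tran.ExecuteAsync(); // false if the transaction was aborted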
The API for batches and transactions feels a bit catch-22 to use: you first have to execute it and then await the tasks. There is a WaitAll void blocking method on the IDatabase interface.
Is there any difference in using this instead of Task.WhenAll?
RedisBase.WaitAll() invokes Task.WaitAll() but times out after the configured timeout:
RedisDatabase source
ConnectionMultiplexer source
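Illustratively (this shows usage, not library internals; the keys are made up), the difference in practice is how you wait:
var batch = db.CreateBatch();
var t1 = batch.StringSetAsync("a", "1");
var t2 = batch.StringSetAsync("b", "2");
batch.Execute();
// db.WaitAll(t1, t2);      // synchronous; honours the configured timeout
await Task.WhenAll(t1, t2); // asynchronous; no built-in timeout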
If I ingest large amounts of telemetry, e.g. logs and metrics, and I want to write this to Redis as performantly as possible: do I benefit from first buffering these and then sending them in a batch (or transaction)?
Generally speaking, no: at the end of the day every command ends up in the connection multiplexer, and SE.Redis is very smart about how and when to send data, even using pipelining automatically under the covers.
If the StackExchange API throws timeout exceptions while processing such a batch/transaction, does it mean that the data was lost, or just that it took too long waiting but the data will still be written?
I think both cases are possible and would suggest designing your architecture for failure where it makes sense.
In such a case, I assume retries would be harmful, as various operations might or might not have been applied to the data already?
SE.Redis has a configurable backlog/retry policy with which you can tune the library's behavior in this scenario.

Azure function queueing over a million records to Service Bus does not reach the end

I'm trying to implement a solution that runs periodically (once a week), calls an external api for the metadata of 1,500,000 items ({{domain}}/items), and then figures out, for each of the items, whether it needs to be updated or inserted into the database according to some arbitrary logic.
After several attempts I've ended up implementing a service bus solution with two azure functions (one for enqueueing and the other for dequeuing).
The first azure function triggers periodically and calls the external api for the metadata of the 1.5 million items (a Premium plan); every item is ~1.9 KB:
[FunctionName("EnqueueFooMetadata")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
[ServiceBus("foosmetadata", Connection = "ServiceBusConnection")] IAsyncCollector<FooMetadata> foosMetadataQueue)
{
IEnumerable<FooMetadata> foosMetadata = await _service.GetFoosMetadata();
this._logger.LogTrace($"Start Enqueue {foosMetadata.Count()}");
await Task.Run(() =>
{
Parallel.ForEach(foosMetadata, new ParallelOptions() { }, async (FooMetadata fooMetadata) =>
{
await foosMetadataQueue.AddAsync(fooMetadata);
});
});
this._logger.LogTrace($"Done Enqueue {foosMetadata.Count()}");
}
And on the other side of the service bus there is a function that binds to it:
[FunctionName("DequeueGiataProperties")]
public async Task Run([ServiceBusTrigger("foosmetadata", Connection = "ServiceBusConnection")] FooMetadata foo)
{
var getGiataProperiesResult = await _service.Dequeue(foo);
this._logger.LogTrace($"dequeing item: {foo.Id}, was done successfully.");
}
It works as expected for small amounts of items (when the count of foosMetadata in IEnumerable<FooMetadata> foosMetadata = await _service.GetFoosMetadata(); is about 15,000), and I can see the trace of Done Enqueue..., but for larger amounts of items it always stops somewhere in the middle and I cannot see the trace.
I don't want to divert the suggested answers, but it looks like a timeout issue with the azure function. Any suggestions for handling the big-data issue?
I think you have a number of issues going on there and most of them are in the publisher portion of the code.
Parallel.ForEach is not asynchronous; the compiler allows you to write async code, but Parallel.ForEach is actually a synchronous feature. You're using async lambdas in the Parallel.ForEach, which will have unexpected behaviour.
The second issue is likely a timeout on your Azure function. Depending on the plan, you have at most 5 minutes (on a Consumption Plan) or 20 minutes (on a paid plan) for your function to complete. Calling an API 1.5 million times and expecting it to complete in that time frame is optimistic; it's quite probable that an overhead of even 1/10 of a second per API call is breaking the timing limits.
There are a number of ways to break up the Parallel.ForEach, the main one being to switch to a Task-based parallel mechanism in conjunction with something like a TPL Dataflow ActionBlock.
The timing issue is likely harder to solve given the quantity of API calls you are making, but:
Service Bus supports batching when adding messages, where you can add multiple messages to a queue at once. You mention you are on a premium plan, which allows multiple messages of up to 1 MB in size to be posted to Service Bus at once. This simple change may give you sufficient performance to get all your messages published.
Without a fully working code example and samples of message sizes it's difficult to give definitive answers to the questions you are asking.
I would therefore suggest that you provide a complete working example to help others try and resolve the issues you are facing.
Having 1.5 million items converted to messages in a single function call sounds like the culprit here. The aforementioned Parallel.ForEach along with Task.Run doesn't help either; combined with the batching IAsyncCollector, it's no wonder it gets stalled. Likely the issue here is also the overall size of the messages being sent and the underlying implementation in the Functions SDK. With 60 bytes per item, plus, say, another 40 bytes of overhead on average (headers, system properties, AMQP extras), that would be 150,000,000 bytes, or 143 MB.
What I'd suggest are the following few options:
If possible, reduce the number of items returned from the call.
Otherwise, split the batch into smaller chunks and send those chunks as a few messages. This will also improve reliability, as your HTTP request will end up being converted into a series of messages that will be reliably processed.
Another option is to investigate flushing IAsyncCollector to force it to send smaller batches. If that's not possible, use your own message sender. Finally, as you're using the in-process SDK, you could leverage the preview of the Service Bus Functions extension (Microsoft.Azure.WebJobs.Extensions.ServiceBus), which is almost out of preview and is currently at 5.0.0-beta.5. With this version, you'll be able to use Azure Service Bus's latest SDK with safe batching built in (ServiceBusMessageBatch).
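A hedged sketch of that safe batching with the newer Azure.Messaging.ServiceBus types (the queue name comes from the question; connectionString and the JSON serialization are assumptions):
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("foosmetadata");
ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
foreach (var foo in foosMetadata)
{
    var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(foo));
    if (!batch.TryAddMessage(message))
    {
        await sender.SendMessagesAsync(batch); // batch is full: flush it
        batch.Dispose();
        batch = await sender.CreateMessageBatchAsync();
        if (!batch.TryAddMessage(message))
            throw new InvalidOperationException("Single message exceeds the batch size limit.");
    }
}
if (batch.Count > 0)
    await sender.SendMessagesAsync(batch); // flush the final partial batch
batch.Dispose();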

How do I optimize parallel calls to a service?

I've run into a challenge when needing to pull down data from a service. I'm using the following call to Parallel.ForEach:
Parallel.ForEach(idList, id => GetDetails(id));
GetDetails(id) calls a web service that takes roughly half a second and adds the resulting details to a list.
static void GetDetails(string id)
{
    var details = WebService.GetDetails(Key, Secret, id);
    AllDetails.Add(id, details);
}
The problem is, I know the service can handle more calls, but I can't seem to figure out how to get my process to ramp up more calls, UNLESS I split my list and open the process multiple times. In other words, if I open this app GetDetails.exe 4 times and split the IDs between them, I cut the run time down to 25% of the original. This tells me the capacity is there, but I am unsure how to achieve it without launching the console app multiple times.
Hopefully this is a pretty simple issue for folks that are more familiar with parallelism, but in my research I've yet to solve it without running multiple instances.
A few possibilities:
There's a chance that WebService.GetDetails(...) is using some kind of mechanism to ensure that only one web request actually happens at a time.
.NET itself may be limiting the number of connections, either to a given host or in general; see this question's answers for details about these kinds of problems
If WebService.GetDetails(...) reuses some kind of identifier like a session key, the server may be limiting the number of concurrent requests that it accepts for that one session.
It's generally a bad idea to try to solve performance issues by hammering the server with more concurrent requests. If you control the server, then you're causing your own server to do way more work than it needs to. If not, you run the risk of getting IP-banned or something for abusing their service. It's worth checking to see if the service you're accessing has some options to batch your requests or something.
As Scott Chamberlain mentioned in comments, you need to be careful with parallel processes because accessing structures like Dictionary<> from multiple threads concurrently can cause sporadic, hard-to-track-down bugs. You'd probably be better off using async requests rather than parallel threads. If you're careful about your awaits, you can have multiple requests be active concurrently while still using just a single thread at a time.
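A hedged sketch of that async approach, throttling concurrency with SemaphoreSlim (WebService.GetDetailsAsync is a hypothetical async variant of the question's synchronous GetDetails):
var throttler = new SemaphoreSlim(20); // tune the concurrency limit here
var tasks = idList.Select(async id =>
{
    await throttler.WaitAsync();
    try
    {
        var details = await WebService.GetDetailsAsync(Key, Secret, id);
        return (id, details);
    }
    finally
    {
        throttler.Release();
    }
});
var results = await Task.WhenAll(tasks);
// collect the results on a single thread, avoiding concurrent Dictionary writes
var allDetails = results.ToDictionary(r => r.id, r => r.details);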

Best way to execute oracle SQL statements in parallel

I have asked a similar question before, but now I would appreciate specifics. I have 5-11 SQL queries that need to run in a C# .NET 4.5 web application; currently they are executed sequentially, which results in slow response times.
Talking to various architects/DBAs, they all tell me this can be improved by running the queries in parallel, but they never give the specifics of how; when I ask, they become very vague ;0)
Is there some function available in Oracle that I could call to pass queries to run in parallel?
Alternatively, I have been looking into async/await functionality; however, the examples on the web are confusing (most involve returning control to the UI, then updating some text on the screen when the task finally completes). I would like to know how to call several methods so that they execute their SQL in parallel, and then wait for all of them to complete before proceeding.
If anyone could point me in the direction of good documentation or provide specific examples I would appreciate it!!!!
Updated with sample code, could someone point out how to update this to async to wait for all the various calls to complete:
private CDTInspection GetDetailsInner(CDTInspection tInspection)
{
    // Call Method one to get data
    tInspection = Method1(tInspection);
    // Call Method two to get data
    Method2(tInspection);
    // Call Method three to get data
    Method3(tInspection);
    // Call Method four to get data
    Method4(tInspection);
    return tInspection;
}

private void Method2(CDTInspection tInspection)
{
    // Create the parameter list
    // Execute the query
    // Marshal the results
}
You can create jobs using DBMS_SCHEDULER and run them independently. Read more in the documentation about DBMS_SCHEDULER.
For example, you could run jobs in parallel as:
BEGIN
  DBMS_SCHEDULER.RUN_JOB('pkg1.proc1', false);
  DBMS_SCHEDULER.RUN_JOB('pkg2.proc2', false);
  DBMS_SCHEDULER.RUN_JOB('pkg3.proc3', false);
END;
/
If you would like to run your 5-11 queries in parallel within your application you will have to start multiple threads and execute the queries within the threads in parallel.
However, if you want the database to execute a query in parallel on the database server(s), usually useful if the query is long running and you want to speed up the individual query execution time, then you can use Parallel Execution.
Parallel execution benefits systems with all of the following characteristics:
Symmetric multiprocessors (SMPs), clusters, or massively parallel systems
Sufficient I/O bandwidth
Underutilized or intermittently used CPUs (for example, systems where CPU usage is typically less than 30%)
Sufficient memory to support additional memory-intensive processes, such as sorting, hashing, and I/O buffers
The easiest way to implement parallel execution is via a hint:
SELECT /*+ PARALLEL */ col1, col2, col3 FROM mytable;
However, this might not be the best way, as it changes your query and has other downsides (for example, to deactivate parallelism again you would have to change the query again). Another way is to specify it at the table level:
ALTER TABLE mytable PARALLEL;
That way you can simply deactivate parallel execution again when it is no longer wanted, without changing the query itself.
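Regarding the application-side option mentioned earlier, here is a hedged sketch of the async shape the question asks for. It assumes each MethodN gains a Task-returning MethodNAsync counterpart (hypothetical names) that opens its own connection and awaits its data-access calls (e.g. ExecuteReaderAsync), and that Methods 2-4 populate independent parts of tInspection:
private async Task<CDTInspection> GetDetailsInnerAsync(CDTInspection tInspection)
{
    tInspection = await Method1Async(tInspection);
    // start the remaining queries without awaiting them individually...
    Task task2 = Method2Async(tInspection);
    Task task3 = Method3Async(tInspection);
    Task task4 = Method4Async(tInspection);
    // ...then wait for all of them to complete before proceeding
    await Task.WhenAll(task2, task3, task4);
    return tInspection;
}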

How to Achieve Parallel Fan-out processing in Reactive Extensions?

We already have parallel fan-out working in our code (using ParallelEnumerable) which is currently running on a 12-core, 64G RAM server. But we would like to convert the code to use Rx so that we can have better flexibility over our downstream pipeline.
Current Workflow:
We read millions of records from a database (in a streaming fashion).
On the client side, we then use a custom OrderablePartitioner<T> class to group the database records into groups. Let’s call an instance of this class: partioner.
We then use partioner.AsParallel().WithDegreeOfParallelism(5).ForAll(group => ProcessGroupOfRecordsAsync(group));
Note: this could be read as "Process all the groups, 5 at a time, in parallel" (i.e. parallel fan-out).
ProcessGroupOfRecordsAsync() – loops through all the records in the group and turns them into hundreds or even thousands of POCO objects for further processing (i.e. serial fan-out or better yet, expand).
Depending on the client’s needs:
This new serial stream of POCO objects is evaluated, sorted, ranked, transformed, filtered, filtered by a manual process, and possibly fanned out further, in parallel and/or serially, throughout the rest of the pipeline.
The end of the pipeline may store new records into the database, display the POCO objects in a form, or display them in various graphs.
The process currently works just fine, except that points #5 and #6 aren't as flexible as we would like. We need the ability to swap various downstream workflows in and out. So, our first attempt was to use a Func<Tin, Tout>, like so:
partioner.AsParallel()
    .WithDegreeOfParallelism(5)
    .ForAll(group => ProcessGroupOfRecordsAsync(group,
        singleRecord => NextTaskInWorkFlow(singleRecord)));
And that works okay, but the more we fleshed out our needs, the more we realized we were just re-implementing Rx.
Therefore, we would like to do something like the following in Rx:
IObservable<recordGroup> rg = dbContext.QueryRecords(inputArgs)
    .AsParallel().WithDegreeOfParallelism(5)
    .ProcessGroupOfRecordsInParallel();

if (client1)
    rg.AnalizeRecordsForClient1().ShowResults();
if (client2)
    rg.AnalizeRecordsForClient2()
        .AsParallel()
        .WithDegreeOfParallelism(3)
        .MoreProcessingInParallel()
        .DisplayGraph()
        .GetUserFeedBack()
        .Where(data => data.SaveToDatabase)
        .Select(data => data.NewRecords)
        .SaveToDatabase(Table2);
...
using (rg.Subscribe(groupId => LogToScreen("Group {0} finished.", groupId)));
It sounds like you might want to investigate Dataflow in the Task Parallel Library; this might be a better fit than Rx for dealing with part 5, and could be extended to handle the whole problem.
In general, I don't like the idea of trying to use Rx for the parallelization of CPU-bound tasks; it's usually not a good fit. If you are not too careful, you can inadvertently introduce inefficiencies. Dataflow gives you a nice way to parallelize only where it makes the most sense, as in the sketch below.
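For instance, a hedged sketch of bounding the fan-out with an ActionBlock (recordGroups, recordGroup and ProcessGroupOfRecordsAsync are placeholders drawn from the question; requires System.Threading.Tasks.Dataflow):
var processGroups = new ActionBlock<recordGroup>(
    group => ProcessGroupOfRecordsAsync(group),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 5, // "5 at a time in parallel", as in the question
        BoundedCapacity = 100       // back-pressure: SendAsync waits when full
    });

foreach (var group in recordGroups)
    await processGroups.SendAsync(group);

processGroups.Complete();
await processGroups.Completion;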
From MSDN:
The Task Parallel Library (TPL) provides dataflow components to help increase the robustness of concurrency-enabled applications. These dataflow components are collectively referred to as the TPL Dataflow Library. This dataflow model promotes actor-based programming by providing in-process message passing for coarse-grained dataflow and pipelining tasks. The dataflow components build on the types and scheduling infrastructure of the TPL and integrate with the C#, Visual Basic, and F# language support for asynchronous programming. These dataflow components are useful when you have multiple operations that must communicate with one another asynchronously or when you want to process data as it becomes available. For example, consider an application that processes image data from a web camera. By using the dataflow model, the application can process image frames as they become available. If the application enhances image frames, for example, by performing light correction or red-eye reduction, you can create a pipeline of dataflow components. Each stage of the pipeline might use more coarse-grained parallelism functionality, such as the functionality that is provided by the TPL, to transform the image.
Kaboo!
As no one has provided anything definitive, I'll point out that the source code can be browsed on GitHub at Rx. Taking a quick tour around, it looks like at least some of the processing (all of it?) is already done on the thread pool. So maybe it's not possible to explicitly control the degree of parallelization without implementing your own scheduler (e.g. the Rx TestScheduler), but it happens nevertheless. See also the links below; judging from the answers (especially the one provided by James in the first link), the observable tasks are queued and processed serially by design, but one can provide multiple streams for Rx to process.
See also the other related questions visible on the left side (by default). In particular, this one, Reactive Extensions: Concurrency within the subscriber, looks like it could provide some answers to your question. Or maybe Run methods in Parallel using Reactive.
Edit: Just a note that if storing objects to the database becomes a problem, the Rx stream could push the save operations to, say, a ConcurrentQueue, which would then be processed separately. Another option would be to let Rx queue items with a proper combination of a time window and an item count, and push them to the database with a bulk insert.
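A hedged sketch of that second option, using Rx's Buffer to batch by time or count (recordStream and SaveBatchToDatabase are hypothetical; requires System.Reactive.Linq):
recordStream
    .Buffer(TimeSpan.FromSeconds(5), 500) // emit a batch every 5 seconds or 500 items, whichever comes first
    .Where(batch => batch.Count > 0)      // the timer can fire with nothing buffered
    .Subscribe(batch => SaveBatchToDatabase(batch)); // e.g. a bulk insert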
