Some Questions About ServiceStack.Redis - c#

Which proxy is supported, if any, and how can I use it?
Are hash tags supported, or something like that?
Apart from the unit tests, is there a complete usage example? (I've read the official GitHub documentation, but I still don't understand how to use it.)
Official GitHub Docs

You're linking to the Configure Redis Sentinel Servers docs, so I'm assuming you want to configure your ServiceStack.Redis instance to work with a Redis Sentinel configuration.
Note that Redis Sentinel is Redis's high-availability solution (it's not a proxy); I'd recommend reading the official Redis Sentinel docs to learn how it works.
First you'll want to set up a Redis Sentinel configuration. A popular setup is 1x Redis master and 2x Redis replica slaves; in addition, it's common to run a separate Redis Sentinel instance (which monitors the running Redis instances) on each server that runs a Redis instance. To make this easy to develop against, you can use ServiceStack's redis-config project, which makes it easy to run 1x master, 2x slaves, and 3x sentinel processes on the same server.
Then, once you have your Redis configuration running (assuming localhost), you can connect to it using ServiceStack's RedisSentinel class by passing in the IP and port of each sentinel instance, e.g.:
var sentinelHosts = new[]
{
    "127.0.0.1:26380",
    "127.0.0.1:26381",
    "127.0.0.1:26382",
};
var sentinel = new RedisSentinel(sentinelHosts, masterName: "mymaster");
IRedisClientsManager redisManager = sentinel.Start();
Note: you don't have to include the IPs and ports of the Redis master or Redis slave instances, as they'll be discovered automatically and can even change. You can also start with a single Redis Sentinel instance, since RedisSentinel is able to discover the other sentinels in the same "mymaster" group.
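For example, a single-sentinel start could look like this (sentinel address illustrative; the remaining sentinels are discovered from it):
var sentinel = new RedisSentinel("127.0.0.1:26380", masterName: "mymaster");
IRedisClientsManager redisManager = sentinel.Start();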
Once you call sentinel.Start(), it returns a configured IRedisClientsManager which maintains a pool of open Redis client connections and listens to the Redis Sentinel server instances for any changes to the Sentinel configuration, e.g. in case the Redis master fails over to one of the running slave replicas.
You should maintain the redisManager as a singleton and use it to resolve all the Redis clients you need, e.g. if you're using an IOC you can register it as a singleton:
container.Register<IRedisClientsManager>(redisManager);
Whenever you need to talk to Redis, you can use GetClient() to resolve a Redis connection to the current master instance:
using (var redis = redisManager.GetClient())
{
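    // Use the resolved client here, e.g. redis.SetValue("key", "value");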
}
At the end of the using statement (or when calling .Dispose()), your open Redis connection is returned to the internal connection pool, awaiting the next time it's resolved.

Related

Filtering connections on my ASP.NET Core server between Razor/MVC and gRPC, depending on Kestrel endpoint

In my .NET 6 ASP.NET Core application I plan on exposing 3 types of network endpoints:
Razor pages-based web UI, listening either on a TCP port or Unix socket
Public gRPC API, listening on TCP or Unix socket as well
Server management gRPC API for internal use only, listening only on a Unix socket for security reasons
The public parts (Razor and public gRPC) could be used behind a reverse proxy, which is why I plan on allowing both TCP and Unix sockets (in case the proxy is on the same machine, though I could make them TCP-only). I also need to have them all in the same Host, so they can access the same services (the same instances, not just services configured in similar ways), and so I can also register regular Hosted Services. Lastly, since both the Microsoft and gRPC documentation say not to use Grpc.Core, I do not wish to use it, even though it would completely solve my problem.
Since it is not possible to have several WebHosts inside a single Generic Host, my best bet now is to rely on MapWhen to direct requests depending on which endpoint received the connection. However, the only data I have for the MapWhen predicate indicates the connection origin using the IPAddress class (via HttpContext.Connection), which cannot represent a Unix socket, so I can't use it to reliably identify the connection source.
If I go the MapWhen route, the code would have the following structure:
public void KestrelSetupCallback(KestrelServerOptions options)
{
    // The two *Endpoint values below are either IPEndPoint or UnixDomainSocketEndPoint,
    // obtained from app configuration.
    options.Listen(razorEndpoint, o => o.Protocols = HttpProtocols.Http1AndHttp2);
    options.Listen(apiEndpoint, o => o.Protocols = HttpProtocols.Http2);
    options.ListenUnixSocket(managementSocketPath, o => o.Protocols = HttpProtocols.Http2);
}
public void WebappSetupCallback(IApplicationBuilder app)
{
    // Additional middlewares can be added here if needed by the predicates.
    app.MapWhen(RazorPredicate, RazorSetupCallback);
    app.MapWhen(ApiPredicate, ApiSetupCallback);
    app.MapWhen(ManagementPredicate, ManagementSetupCallback);
}
I have found the ListenOptions.Use method, which I could call on each endpoint's ListenOptions to add a connection middleware that sets a feature identifying the endpoint. I am not sure whether that is possible, though (whether the IFeatureCollection is writable at that point), and I would like to know if there are other options.
Would this approach work, given my use case? Would I need to alter the code structure?
Aside from that, what approach could I take to implement the *Predicate methods?
Is there a better alternative for achieving my use case? This feels very much like an XY problem to me, and I'm afraid of missing a feature designed precisely for this.
There are ~3 ways to do this (there are more, but let's start here):
Split the pipeline like you have above, using MapWhen to determine which conditions run which branch (see the sketch below).
You can use RequireHost on endpoints/controllers etc. to determine which ones get matched depending on the incoming host.
You can boot up multiple hosts in the same process and treat them like separate islands. They'll have separate config, DI, logging, etc.
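For the first option, one way to implement the *Predicate methods is the ListenOptions.Use idea from the question: tag each connection with a marker feature and read it back per request. A minimal sketch, where EndpointKindFeature and the "api" tag are made up for illustration (the assumption that custom connection features surface on HttpContext.Features is worth verifying for your Kestrel version):
using Microsoft.AspNetCore.Connections;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Server.Kestrel.Core;

// Hypothetical marker type (not an ASP.NET Core API) recording which
// Kestrel endpoint accepted the connection.
public sealed class EndpointKindFeature
{
    public string Kind { get; set; }
}

// Inside KestrelSetupCallback: tag every connection accepted on this endpoint
// (apiEndpoint comes from your configuration, as in the question).
options.Listen(apiEndpoint, o =>
{
    o.Protocols = HttpProtocols.Http2;
    o.Use(next => context =>
    {
        // ConnectionContext.Features is writable at this point.
        context.Features.Set(new EndpointKindFeature { Kind = "api" });
        return next(context);
    });
});

// A MapWhen predicate: Kestrel exposes connection-level features through
// HttpContext.Features, so the marker set above should be readable per request.
static bool ApiPredicate(HttpContext ctx) =>
    ctx.Features.Get<EndpointKindFeature>()?.Kind == "api";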

Endpoint x.x.x.x:port serving hashslot nnnn is not reachable at this point of time

I am using StackExchange.Redis and trying to connect to a Redis cluster and run HashGetAll(), but I'm getting an exception:
Endpoint 172.18.0.2:6379 serving hashslot 4038 is not reachable at this point of time. Please check connectTimeout value. If it is low, try increasing it to give the ConnectionMultiplexer a chance to recover from the network disconnect.
I don't have errors when I work with my cluster via redis-cli.
I am using Windows and set up my Redis cluster in Docker.
Here is how I connect to my DB:
var connectionMultiplexer = ConnectionMultiplexer.Connect(new ConfigurationOptions
{
    ConnectTimeout = 99000,
    EndPoints =
    {
        "127.0.0.1:6381",
        "127.0.0.1:6382",
        "127.0.0.1:6383",
        "127.0.0.1:6384",
        "127.0.0.1:6385",
        "127.0.0.1:6386"
    }
});
_database = connectionMultiplexer.GetDatabase();
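The failing call is then just an ordinary hash read (the key name below is illustrative):
// Illustrative key; this is the call that raises the exception above.
HashEntry[] entries = _database.HashGetAll("some-hash-key");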
I tried restarting the Docker containers running Redis, but it did not help.
I then stopped the problematic node entirely, let the master role move to another node, and let the application pick up the change. After a few minutes, once the error disappeared, I launched Redis again and it started to work.
I fixed it. I was running multiple Redis containers in Docker, which I had connected into a cluster. There was some problem with that configuration (I don't know exactly what); to fix my problem I used a redis-cluster container, which comes with a pre-configured Redis cluster.

SignalR - Switch between different Redis backplanes

Let's assume we have 2 Redis server backplanes, one as Master and the other as Slave.
Each web application uses SignalR to push content to connected clients as it happens. To connect them to the backplane, I use the following in Application_Start:
GlobalHost.DependencyResolver.UseRedis(host, port, "", new[] {"signalr.key"});
RouteTable.Routes.MapHubs();
Now, in case the Master Redis backplane fails, I would like to promote the Slave Redis server to Master and switch all existing connections from the web servers over to the new Master Redis server.
To promote the Slave server to Master I am using the following code:
using (var conn = new RedisConnection(host, port, allowAdmin: true))
{
    if (conn.ServerType != ServerType.Master)
    {
        conn.Open();
        var makeMaster = conn.Server.MakeMaster();
        var info = conn.Wait(conn.GetInfo());
        conn.Wait(makeMaster);
    }
}
That seems to do the job.
Can you please help me with how to inform my web application that the backplane has changed and how to connect to the new one, in order to sustain communication between my connected clients?
We don't use SignalR specifically, but we have something pretty similar in the way we use redis, especially when switching between nodes. Specifically, we use redis pub/sub to subscribe to a channel, and we broadcast to that channel when changing master.
Our configuration is a little different, because we use the delimited configuration version based around ConnectionUtils.Connect(...). This means we can specify multiple nodes, with ConnectionUtils handling the concerns of figuring out which is the current master. But in your case you could perhaps publish the new master information as part of the pub/sub. I should also note that much of the code to handle switching masters (with notification) is wrapped up behind ConnectionUtils.SwitchMaster. This includes a broadcast of the change, which you can subscribe to via ConnectionUtils.SubscribeToMasterSwitch. As a minor implementation detail, the channel it uses for this is "__Booksleeve_MasterChanged" - but that is opaque if you just use the public methods.
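If you go that route, the wiring might look something like the sketch below. I'm writing the handler shape from memory, so treat the exact SubscribeToMasterSwitch signature as an assumption and check the BookSleeve source:
// Sketch only: exact BookSleeve signatures are assumptions.
// Subscribe once at startup on an existing connection.
ConnectionUtils.SubscribeToMasterSwitch(connection, newMaster =>
{
    // newMaster identifies the promoted node (e.g. "host:port").
    // Re-point the SignalR backplane (or other consumers) at it here.
});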

RabbitMQ in a WCF webservice, model usage and performance

I need to call a RabbitMQ RPC Service from within a C# WCF Web service hosted in IIS.
We have this working OK, but being a good little soldier I was reading the RabbitMQ client documentation, and it states the following: "IModel should not be shared between threads".
My understanding is that in RabbitMQ an IModel is actually a socket connection.
This would mean that for every call the WCF service makes, it needs to create an IModel and dispose of it once completed.
This would seem to me to be somewhat excessive on performance and socket usage and I am wondering if my understanding is actually correct, or if there are other options available like using a connection pool of IModels between threads.
Any suggestions would be gratefully received. Here's a sample of the code I'm using below; the RabbitMQ connection is actually initialized in Global.asax, I just have it here so you can see the usage.
var connectionFactory = new ConnectionFactory
{
    HostName = "SampleHostName",
    UserName = "SampleUserName",
    Password = "SamplePassword"
};
IConnection connection = connectionFactory.CreateConnection();

// Code below is what we actually have in the service method.
using (var model = connection.CreateModel())
{
    model.ExchangeDeclare("SampleExchangeName", ExchangeType.Direct, false);
    model.QueueDeclare("SampleQueueName", false, false, false, null);
    model.QueueBind("SampleQueueName", "SampleExchangeName", "routingKey", null);
    // Do stuff, like post messages to queues
}
IModel is actually a socket connection
This is incorrect. IConnection represents a connection :) The model was introduced to allow several clients to use the same TCP connection, so a model is a "logical" connection over a "physical" one.
One of the tasks a model performs is splitting and re-assembling large messages. If a message exceeds a certain size, it is split into frames; the frames are labeled and assembled back together by the receiver. Now imagine that 2 threads send large messages... the frame numbers would get mixed up, and you would end up with a Frankenstein message consisting of random parts of the 2 messages.
You are right in assuming that model creation has some cost. The client sends a request to the server to create a model, and the server creates a structure in memory for the model and sends the model id back to the client. This is done over the TCP connection that is already open, so there is no overhead from establishing a connection, but there is still some overhead from the network round trip.
I'm not sure about the WCF binding, but the base RabbitMQ .NET library does not provide any pooling for models. If that is a problem in your case, you'll have to come up with something on your own (see the sketch below).
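For example, a per-call channel over the shared connection is a reasonable baseline before building any pooling. A minimal sketch, reusing the names from the question (RabbitMQ.Client API of that era assumed; the RabbitPublisher class is illustrative):
using RabbitMQ.Client;

public static class RabbitPublisher
{
    // One IConnection per process (e.g. created in Global.asax): connections
    // are thread-safe and relatively expensive; channels are cheap but not thread-safe.
    private static IConnection _connection;

    public static void Initialize(IConnection connection)
    {
        _connection = connection;
    }

    public static void Publish(byte[] body)
    {
        // Create an IModel per operation and dispose of it promptly.
        using (var model = _connection.CreateModel())
        {
            model.BasicPublish("SampleExchangeName", "routingKey", null, body);
        }
    }
}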
You need a single IModel object for each session. This is pretty normal for network-based APIs; for example, the Azure Table Storage client is exactly the same.
Why? Well, you can't have a single channel with multiple concurrent communication streams running over it.
I would expect a certain level of caching to occur (e.g. DNS), which would reduce the overhead of creating subsequent IModel instances.
Performance is alright when doing the same thing with Azure Tables, so it should be perfectly fine with IModel. Only attempt to optimise this when you can prove you have a real need.

TIBCO EMS Failover reconnect for C# (TIBCO.EMS.dll)

We have a TIBCO EMS solution that uses built-in server failover in a 2-4 server environment. If the TIBCO admins fail over services from one EMS server to another, connections are supposed to be transferred to the new server automatically at the EMS service level. For our C# applications using the EMS service this is not happening: our user connections are not being transferred to the new server after failover, and we're not sure why.
Our application connects to EMS at startup only, so if the TIBCO admins fail over after users have started our application, the users need to restart the app in order to reconnect to the new server (our EMS connection uses a server string that includes all 4 production EMS servers; if the first attempt fails, it moves on to the next server in the string and tries again).
I'm looking for an automated approach that will attempt to reconnect to EMS periodically if it detects that the connection is dead but I'm not sure how best to do that.
Any ideas? We are using TIBCO.EMS.dll version 4.4.2 and .NET 2.x (SmartClient app).
Any help would be appreciated.
First off, yes, I am answering my own question. It's important to note, however, that without ajmastrean I would be nowhere. Thank you so much!
ONE:
ConnectionFactory.SetReconnAttemptCount, SetReconnAttemptDelay, and SetReconnAttemptTimeout should be set appropriately. I think the default values retry too quickly (on the order of 1/2 second between retries). Our EMS servers can take a long time to fail over because of network storage, etc., so 5 retries at 1/2-second intervals is nowhere near long enough.
TWO:
I believe it's important to enable the client-server and server-client heartbeats. I wasn't able to verify this, but without them in place the client might not get the notification that the server is offline or switching into failover mode. This, of course, is a server-side setting for EMS.
THREE:
You can watch for the failover event by setting Tibems.SetExceptionOnFTSwitch(true) and then wiring up an exception event handler. In a single-server environment, you will see a "Connection has been terminated" message. However, if you are in a fault-tolerant multi-server environment, you will see this instead: "Connection has performed fault-tolerant switch to <server url>". You don't strictly need this notification, but it can be useful (especially in testing).
FOUR:
Although it's apparently not made clear in the EMS documentation, connection reconnect will NOT work in a single-server environment; you need to be in a multi-server, fault-tolerant environment. There is a trick, however: you can put the same server in the connection list twice. Strange, I know, but it works, and it enables the built-in reconnect logic.
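For example, a server string that lists the same host twice (host and port illustrative) would look like:
_ConnectionFactory = new TIBCO.EMS.TopicConnectionFactory("tcp://emshost:7222,tcp://emshost:7222");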
Some code:
private void initEMS()
{
    Tibems.SetExceptionOnFTSwitch(true);

    _ConnectionFactory = new TIBCO.EMS.TopicConnectionFactory(<server>);
    _ConnectionFactory.SetReconnAttemptCount(30);      // 30 retries
    _ConnectionFactory.SetReconnAttemptDelay(120000);  // 2 minutes
    _ConnectionFactory.SetReconnAttemptTimeout(2000);  // 2 seconds

    _Connection = _ConnectionFactory.CreateTopicConnection(<username>, <password>);
    _Connection.ExceptionHandler += new EMSExceptionHandler(_Connection_ExceptionHandler);
}

private void _Connection_ExceptionHandler(object sender, EMSExceptionEventArgs args)
{
    EMSException e = args.Exception;
    // args.Exception = "Connection has been terminated" -- single-server failure
    // args.Exception = "Connection has performed fault-tolerant switch to <server url>" -- fault-tolerant multi-server
    MessageBox.Show(e.ToString());
}
This post should sum up my current comments and explain my approach in more detail...
The TIBCO 'ConnectionFactory' and 'Connection' types are heavyweight, thread-safe types. TIBCO suggests that you maintain one ConnectionFactory (per configured server) and one Connection per factory.
The server also appears to be responsible for in-place 'Connection' failover and re-connection, so let's confirm it's doing its job and then lean on that feature.
Creating a client-side solution is going to be slightly more involved than fixing a server or client setup problem. All sessions you have created from a failed connection need to be re-created (not to mention producers, consumers, and destinations). There are no "reconnect" or "refresh" methods on either type, and the sessions do not maintain a reference to their parent connection either.
You will have to manage a lookup of connection/session objects and go nuts re-initializing everything! Or implement some sort of session-failure event handler that can get the new connection and reconnect them; a sketch of that bookkeeping follows.
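A hedged sketch of that bookkeeping, assuming TIBCO.EMS and System.Collections.Generic are imported (the rebuild delegates are illustrative, not library API):
// Track how to rebuild each session so a failover handler can re-create
// sessions, producers, and consumers against a newly established connection.
private readonly List<Action<Connection>> _sessionBuilders = new List<Action<Connection>>();

private void RebuildSessions(Connection newConnection)
{
    foreach (Action<Connection> build in _sessionBuilders)
    {
        build(newConnection); // each delegate re-creates one session plus its producers/consumers
    }
}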
So, for now, let's dig in and see if the client is set up to receive failover notifications (TIB EMS user's guide, pg. 292), and make sure the raised exception is caught, contains the failover URL, and is being handled properly.
Client applications may receive notification of a failover by setting the tibco.tibjms.ft.switch.exception system property
Perhaps the library needs that to work?
