Azure Cache intermittent response times from WCF REST service - c#

I'm building an EF6 web app in Azure and I'm using Azure Cache.
I'm testing calls to my WCF service and I'm getting wildly erratic response times - between 300ms and 15sec!
I configured my code according to this example and it runs fine locally.
I've debugged remotely and I can see that the cache key is being found and the data is being returned from the cache, so I'm struggling to understand why there is such a huge variation in response times. Most of the time it's 5+ seconds, which is obviously way too long.
The example I've been testing is as follows:
WCF service GET request to:
http://feniksdev-staging.azurewebsites.net/EkckoNewsService.svc/getFriends
// Cache client configured by settings in application configuration file.
public DataCacheFactory cacheFactory = new DataCacheFactory();
public DataCache _cache;
public DataCache cache
{
    get
    {
        if (_cache == null)
            _cache = cacheFactory.GetDefaultCache();
        return _cache;
    }
    set { }
}
...
...
[OperationContract]
[System.ServiceModel.Web.WebGet(ResponseFormat = WebMessageFormat.Json, UriTemplate = "/getFriends")]
public string getFriends()
{
    string cachekey = "getFriends/{" + user.Id + "}";
    object result = cache.Get(cachekey);
    if (result == null)
    {
        using (EkckoContext entities = new EkckoContext())
        {
            var frnds = entities.UserConnections.Where(uc => uc.UserId == user.Id).Select(uc => new { Name = uc.Friend.Username }).ToList();
            JsonSerializerSettings jsonSettings = new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects };
            string json = JsonConvert.SerializeObject(frnds, jsonSettings);
            cache.Add(cachekey, json);
            return json;
        }
    }
    else
    {
        return (string)result;
    }
}
UserConnection is a simple table in my db and currently has no data, so the call returns an empty JSON array. user is a Session object, and user.Id currently defaults to 1.
When remote-debugging this, the object is found in cache and the cached object is returned. So all good, except the response time still varies by a factor of 20 (300ms - 6sec).
When remote debugging one of the other web service methods, I got the following error when attempting to access the cached object using the corresponding key (object result = cache.Get(cachekey);):
{"ErrorCode:SubStatus:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.). Additional Information : The client was trying to communicate with the server: net.tcp://ekckodev.cache.windows.net:22238."}
I then set the maxBufferSize in my config as follows:
<configSections>
  <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" />
  <section name="cacheDiagnostics" type="Microsoft.ApplicationServer.Caching.AzureCommon.DiagnosticsConfigurationSection, Microsoft.ApplicationServer.Caching.AzureCommon" allowLocation="true" allowDefinition="Everywhere" />
</configSections>
...
<system.web>
  ...
  <caching>
    <outputCache defaultProvider="AFCacheOutputCacheProvider">
      <providers>
        <add name="AFCacheOutputCacheProvider" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="default" dataCacheClientName="default" applicationName="AFCacheOutputCache" />
      </providers>
    </outputCache>
  </caching>
</system.web>
...
<dataCacheClients>
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="ekckodev.cache.windows.net" />
    <localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />
    <securityProperties mode="Message" sslEnabled="false">
      <messageSecurity authorizationInfo="xxxxxxxxxxxxxxxxxxxxxxx" />
    </securityProperties>
    <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
                         maxBufferSize="8388608" maxOutputDelay="2" channelInitializationTimeout="60000"
                         receiveTimeout="600000" />
  </dataCacheClient>
</dataCacheClients>
But I still get such erratic response times - particularly when hitting the same service call repeatedly.
After adding the maxBufferSize config, the cache calls are still hit-and-miss: sometimes they fetch the object; other times I get the same exception, but with a different port:
"... The client was trying to communicate with the server: net.tcp://ekckodev.cache.windows.net:22233."}"
Could this be a firewall issue? If so, how do I open the appropriate ports?
I also just got the following exception when instantiating the DataCache object:
_cache = cacheFactory.GetDefaultCache();
ErrorCode:SubStatus:There is a temporary failure. Please retry later.
(One or more specified cache servers are unavailable, which could be caused by busy network or servers.
For on-premises cache clusters, also verify the following conditions. Ensure that security permission has
been granted for this client account, and check that the AppFabric Caching Service is allowed through the
firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the
serialized object size sent from the client.)
Any thoughts on why I'm getting such results? It's certainly no quicker WITH the cache than WITHOUT it, so there appears to be some sort of latency in the cache which doesn't seem right...
Thanks in advance for any help!
UPDATE:
After doing some more searching, it seems I'm not the only one with this issue:
poor performance with azure cache
I find it hard to believe that this is the performance I should expect
UPDATE 2
I have commented out all cache-related code in my service and run the same tests again. The response times are appreciably lower WITHOUT the cache! The "getFriends" call averages about 250ms without the cache, but peaks at over 5sec WITH the cache.
My other method, which fetches about 4KB of data, was peaking at 20+ seconds with the cache and now averages about 2sec WITHOUT it.
Again: I find it hard to believe that this is the performance I should expect
UPDATE 3
I have now scrapped Azure Cache in favour of MemoryCache. Nice example here
My service calls are now consistently taking approx 300ms in the browser.
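For anyone curious, here is a minimal sketch of what the MemoryCache-based version might look like (this is my own simplified version, not the linked example; the loadJson delegate stands in for the EF query shown above):
// Sketch only: in-process caching with System.Runtime.Caching.MemoryCache.
using System;
using System.Runtime.Caching;

public string GetFriendsCached(int userId, Func<string> loadJson)
{
    string cacheKey = "getFriends/{" + userId + "}";
    MemoryCache cache = MemoryCache.Default;

    // Return the cached JSON if we already have it
    string cached = cache.Get(cacheKey) as string;
    if (cached != null)
        return cached;

    // Cache miss: build the JSON (e.g. the EF query in getFriends) and keep it for 5 minutes
    string json = loadJson();
    cache.Set(cacheKey, json, DateTimeOffset.UtcNow.AddMinutes(5));
    return json;
}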
I've opened a ticket with Microsoft Azure support regarding Azure Cache and asked them why their cache is so rubbish, so I'll update this post when they get in touch. Just when my faith in Microsoft was climbing :/

Looks like you've arrived at the correct conclusion, which is: don't use Azure Managed Cache. About 6 months ago, Microsoft started recommending that all new development be done against their Redis-based cache offering in Azure.
We recommend all new developments use Azure Redis Cache.
Strangely, they don't show the option to create a Redis cache in the 'old' Azure management site (manage.windowsazure.com), but they do have it in the "preview" Azure management portal.

Related

Unable to connect with remote RabbitMQ server

I'm creating a client application with the idea of publishing new messages to a remote RabbitMQ queue. I'm using MassTransit to create this client, and my code looks like this:
static IBusControl CreateBus()
{
    return Bus.Factory.CreateUsingRabbitMq(x =>
    {
        var host = x.Host(new Uri(ConfigurationManager.AppSettings["RabbitMQHost"]), h =>
        {
            h.Username("user");
            h.Password("password");
        });
    });
}

static IRequestClient<ISyncProject, IProjectSynced> CreateRequestClient(IBusControl busControl)
{
    var serviceAddress = new Uri(ConfigurationManager.AppSettings["ServiceAddress"]);
    IRequestClient<ISyncProject, IProjectSynced> client =
        busControl.CreateRequestClient<ISyncProject, IProjectSynced>(serviceAddress, TimeSpan.FromDays(1));
    return client;
}

private static async Task MainLogic(IBusControl busControl)
{
    IRequestClient<ISyncProject, IProjectSynced> client = CreateRequestClient(busControl);
    // I'm using the client here as I show below; this part is not important, it works with localhost
    IProjectSynced response = await client.Request(new ProjecToSync() { OriginalOOMID = OriginalOOMID });
}
And the config file looks like this:
<appSettings>
  <add key="RabbitMQHost" value="rabbitmq://ServerName" />
  <add key="ServiceQueueName" value="queueName" />
  <add key="ServiceAddress" value="rabbitmq://ServerName/queueName" />
</appSettings>
I'm not using the guest user; I created a new one and gave it full administrator rights.
Now, this code works if I run the client application on the same server where RabbitMQ is running and replace ServerName with localhost. If I run the client on my local machine using either ServerName or the server's IP address, RabbitMQ blocks my connection:
I presume this has to do with some configuration that I need to do on the server, but I have not found it so far.
One thing I noticed now is that disk space is in the red and a large number of generic exchanges have been created.
As your question shows down at the bottom, you have a connection, but it is blocked.
The RabbitMQ documentation lists some conditions where a connection is blocked. These generally have to do with resource limitations on the broker machine itself. In this case, we've managed to get a clear picture that the free disk space available to the broker is below its low-water mark. Thus, all connections will be blocked until this condition is resolved (either lower the mark - not recommended, or increase the available free space).
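For reference, the low-water mark is controlled by the broker's disk_free_limit setting; a hedged example is below (values illustrative and in the newer ini-style rabbitmq.conf format; freeing up disk space is still the preferred fix):
# rabbitmq.conf -- the broker blocks publishing connections once free disk falls below this value
disk_free_limit.absolute = 50MB

# The same value can be changed at runtime (not persisted across broker restarts):
# rabbitmqctl set_disk_free_limit 50MB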

File download async on 8 core CPU [duplicate]

Those "fine" RFCs mandate from every RFC-client that they beware of not using more than 2 connections per host...
Microsoft implemented this in WebClient. I know that it can be turned off with
App.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="100" />
    </connectionManagement>
  </system.net>
</configuration>
(found on http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/1f863f20-09f9-49a5-8eee-17a89b591007 )
But how can I do it programmatically?
According to
http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultconnectionlimit.aspx
"Changing the DefaultConnectionLimit property has no effect on existing
ServicePoint objects; it affects only ServicePoint objects that are
initialized after the change. If the value of this property has not been
set either directly or through configuration, the value defaults to the
constant DefaultPersistentConnectionLimit."
Ideally I'd like to configure the limit when I instantiate the WebClient, but just removing this sad limitation programmatically at the start of my program would be fine, too.
The server I access is not a regular web server on the internet, but one under my control on the local LAN. I want to make API calls, but I don't use web services or remoting.
For those interested:
System.Net.ServicePointManager.DefaultConnectionLimit = x (where x is your desired number of connections)
No extra references are needed.
Just make sure this is called BEFORE the service point is created, as mentioned above in the post.
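For example, here is a minimal sketch (URL and limit are illustrative) with the assignment placed at the very start of Main, before any request has created a ServicePoint:
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // Must run before the first request to a host, otherwise that host's
        // existing ServicePoint keeps the old limit of 2.
        ServicePointManager.DefaultConnectionLimit = 100;

        using (var client = new WebClient())
        {
            // Any ServicePoint created from here on inherits the new limit.
            string page = client.DownloadString("http://example.com/");
            Console.WriteLine(page.Length);
        }
    }
}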
With some tips from here and elsewhere I managed to fix this in my application by overriding the WebClient class I was using:
class AwesomeWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        HttpWebRequest req = (HttpWebRequest)base.GetWebRequest(address);
        req.ServicePoint.ConnectionLimit = 10;
        return (WebRequest)req;
    }
}
This solution allows you to change the connection limit at any time:
private static void ConfigureServicePoint(Uri uri)
{
    var servicePoint = ServicePointManager.FindServicePoint(uri);

    // Increase the number of TCP connections from the default (2)
    servicePoint.ConnectionLimit = 40;
}
The first time anyone calls FindServicePoint, a ServicePoint instance is created and a WeakReference is created to hold on to it inside the ServicePointManager. Subsequent requests to the manager for the same Uri return the same instance. If the connection isn't used afterwards, the GC cleans it up.
If you find the ServicePoint object being used by your WebClient, you can change its connection limit. HttpWebRequest objects have an accessor to retrieve the one they were constructed to use, so you could do it that way. If you're lucky, all your requests might end up sharing the same ServicePoint so you'd only have to do it once.
I don't know of any global way to change the limit. If you altered the DefaultConnectionLimit early enough in execution, you'd probably be fine.
Alternately, you could just live with the connection limit, since most server software is going to throttle you anyway. :)
We ran into a situation regarding the above piece of configuration in App.config: in order for it to be valid in a CONSOLE application, we had to add a reference to the System.Configuration dll. Without the reference, the configuration above was useless.

Storing Larger (2MB) objects in Redis Azure

I'm trying to use Redis in Azure for caching in my application. Each of my keys could be upwards of 2-4MB. When I run my app against Redis on my local machine, all is great; however, when running on Azure, performance is terrible: retrieving keys often takes 8-10 seconds, and it's actually quicker for me to re-fetch the data from the original source than from the cache.
So I guess the first question is: are my keys too big? Am I just barking up the wrong tree altogether with using Redis?
If not, any ideas why it's so slow? The application is an Azure website, and the website and Redis instance are in the same zone. I am using the StackExchange.Redis client and creating the multiplexer as a singleton in Global.asax to avoid re-creating it; the code is below:
Global.asax:
redisConstring = ConfigurationManager.ConnectionStrings["RedisCache"].ConnectionString;
if (redisConstring != null)
{
    if (RedisConnection == null || !RedisConnection.IsConnected)
    {
        RedisConnection = ConnectionMultiplexer.Connect(redisConstring);
    }
    RedisCacheDb = RedisConnection.GetDatabase();
    Application["RedisCache"] = RedisCacheDb;
}
Web API Controller:
IDatabase redisCache = System.Web.HttpContext.Current.Application["RedisCache"] as IDatabase;
string cachedJson = redisCache.StringGet(id);
if (cachedJson == null)
{
    cachedJson = OutfitFactory.GetMembersJson(id);
    redisCache.StringSet(id, cachedJson, TimeSpan.FromMinutes(15));
}
return OutfitFactory.GetMembersFromJson(cachedJson);
From the comments, it sounds like the issue is bandwidth... So: use less bandwidth. Ideas:
Use compression (ideally only for payloads of nontrivial size, etc.)
Use a denser format
For reference, at SE we use gzip-compressed protobuf-net for packaging.
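As an illustration of the compression idea, here is a rough sketch (the helper names are mine, not a library API) that gzips the JSON before StringSet and unzips it after StringGet, reusing the StackExchange.Redis IDatabase from the question:
using System;
using System.IO;
using System.IO.Compression;
using System.Text;
using StackExchange.Redis;

static class CompressedCache
{
    // Gzip the JSON once before storing it; 2-4 MB of JSON typically shrinks a lot.
    public static void SetCompressed(IDatabase cache, string key, string json, TimeSpan ttl)
    {
        byte[] raw = Encoding.UTF8.GetBytes(json);
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            cache.StringSet(key, ms.ToArray(), ttl);
        }
    }

    // Returns null when the key is missing, mirroring StringGet on a plain string.
    public static string GetCompressed(IDatabase cache, string key)
    {
        byte[] stored = cache.StringGet(key);
        if (stored == null)
            return null;
        using (var ms = new MemoryStream(stored))
        using (var gzip = new GZipStream(ms, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }
}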

ASP.NET C# Sessions SessionID changes on Page_Load() [duplicate]

Why does the SessionID property on the Session object in an ASP.NET page change between requests?
I have a page like this:
...
<div>
SessionID: <%= SessionID %>
</div>
...
And the output keeps changing every time I hit F5, independent of browser.
This is the reason
When using cookie-based session state, ASP.NET does not allocate storage for session data until the Session object is used. As a result, a new session ID is generated for each page request until the session object is accessed. If your application requires a static session ID for the entire session, you can either implement the Session_Start method in the application's Global.asax file and store data in the Session object to fix the session ID, or you can use code in another part of your application to explicitly store data in the Session object.
http://msdn.microsoft.com/en-us/library/system.web.sessionstate.httpsessionstate.sessionid.aspx
So basically, unless you access your Session object on the back end, a new session ID will be generated with each request.
EDIT
This code must be added to the Global.asax file. It adds an entry to the Session object so the session ID is fixed until the session expires.
protected void Session_Start(Object sender, EventArgs e)
{
    Session["init"] = 0;
}
There is another, more insidious reason why this may occur even when the Session object has been initialized as demonstrated by Claudio.
In the Web.config, if there is an <httpCookies> entry with requireSSL="true" but you are not actually using HTTPS for a specific request, then the session cookie is not sent (or maybe not returned, I'm not sure which), which means that you end up with a brand new session for each request.
I found this one the hard way, spending several hours going back and forth between several commits in my source control, until I found what specific change had broken my application.
In my case I figured out that the session cookie had a domain that included the www. prefix, while I was requesting the page without www.
Adding www. to the URL immediately fixed the problem. Later I changed the cookie's domain to .mysite.com instead of www.mysite.com.
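If it helps, the configuration equivalent would be something along these lines (domain illustrative; I believe the domain attribute on httpCookies controls this):
<system.web>
  <!-- Illustrative: issue cookies for the parent domain so www and non-www requests share them -->
  <httpCookies domain=".mysite.com" />
</system.web>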
My problem was that we had this set in web.config:
<httpCookies httpOnlyCookies="true" requireSSL="true" />
This means that when debugging over non-SSL (the default), the auth cookie would not get sent back to the server, which meant the server would send a new auth cookie (with a new session) back to the client for every request.
The fix is to either set requireSSL to false in web.config and true in web.release.config, or turn on SSL while debugging.
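A hedged sketch of the transform-based approach (standard Web.config transform syntax; attribute values per the snippet above):
<!-- Web.Release.config: turn requireSSL back on for release builds only -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <httpCookies httpOnlyCookies="true" requireSSL="true" xdt:Transform="SetAttributes(requireSSL)" />
  </system.web>
</configuration>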
Using Neville's answer (deleting requireSSL="true" in web.config) and slightly modifying Joel Etherton's code, here is code that should handle a site that runs in both SSL and non-SSL mode, depending on the user and the page. (I am jumping back into code and haven't tested it on SSL yet, but expect it should work; I'll be too busy later to get back to this, so here it is:)
if (HttpContext.Current.Response.Cookies.Count > 0)
{
    foreach (string s in HttpContext.Current.Response.Cookies.AllKeys)
    {
        if (s == FormsAuthentication.FormsCookieName || s.ToLower() == "asp.net_sessionid")
        {
            HttpContext.Current.Response.Cookies[s].Secure = HttpContext.Current.Request.IsSecureConnection;
        }
    }
}
Another possibility that causes the SessionID to change between requests, even when Session_OnStart is defined and/or a Session has been initialized, is that the URL hostname contains an invalid character (such as an underscore). I believe this is IE specific (not verified), but if your URL is, say, http://server_name/app, then IE will block all cookies and your session information will not be accessible between requests.
In fact, each request will spin up a separate session on the server, so if your page contains multiple images, script tags, etc., then each of those GET requests will result in a different session on the server.
Further information: http://support.microsoft.com/kb/316112
My issue was with a Microsoft MediaRoom IPTV application. It turns out that MPF MRML applications don't support cookies; changing to use cookieless sessions in the web.config solved my issue
<sessionState cookieless="true" />
Here's a REALLY old article about it:
Cookieless ASP.NET
In my case it was because I was modifying the session after redirecting from a gateway in an external application; because I was using the IP address instead of localhost in that page's URL, it was actually considered a different website with different sessions.
In summary: pay more attention if you are debugging a hosted application on IIS instead of IIS Express and mixing your machine's http://IP and http://localhost in various pages.
In my case this was happening a lot in my development and test environments. After trying all of the above solutions without any success I found that I was able to fix this problem by deleting all session cookies. The web developer extension makes this very easy to do. I mostly use Firefox for testing and development, but this also happened while testing in Chrome. The fix also worked in Chrome.
I haven't had to do this yet in the production environment and have not received any reports of people not being able to log in. This also only seemed to happen after making the session cookies secure. It never happened in the past when they were not secure.
Update: this only started happening after we changed the session cookie to make it secure. I've determined that the exact issue was caused by there being two or more session cookies in the browser with the same path and domain. The one that was always the problem was the one that had an empty or null value. After deleting that particular cookie the issue was resolved. I've also added code in the Global.asax.cs Session_Start method to check for this empty cookie and, if found, set its expiration date to something in the past.
HttpCookieCollection cookies = Response.Cookies;
for (int i = 0; i < cookies.Count; i++)
{
    HttpCookie cookie = cookies.Get(i);
    if (cookie != null)
    {
        if ((cookie.Name == "ASP.NET_SessionId" || cookie.Name == "ASP.NET_SessionID") && String.IsNullOrEmpty(cookie.Value))
        {
            // Try resetting the expiration date of the session cookie to something in the past and/or deleting it.
            // Reset the expiration time of the cookie to one hour, one minute and one second in the past
            if (Response.Cookies[cookie.Name] != null)
                Response.Cookies[cookie.Name].Expires = DateTime.Today.Subtract(new TimeSpan(1, 1, 1));
        }
    }
}
This was changing for me beginning with .NET 4.7.2 and it was due to the SameSite property on the session cookie. See here for more info: https://devblogs.microsoft.com/aspnet/upcoming-samesite-cookie-changes-in-asp-net-and-asp-net-core/
The default value changed to "Lax" and started breaking things. I changed it to "None" and things worked as expected.
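For what it's worth, a hedged example of the web.config change (I believe the cookieSameSite attribute is available from .NET Framework 4.7.2 onward):
<system.web>
  <!-- Assumption: cookieSameSite was added to sessionState in .NET Framework 4.7.2 -->
  <!-- Note: modern browsers also require the cookie to be Secure (HTTPS) when SameSite=None -->
  <sessionState cookieSameSite="None" />
</system.web>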
Be sure that you do not have a very short session timeout, and also make sure that, if you are using cookie-based sessions, you are accepting the session cookie.
The Firefox Web Developer Toolbar is helpful at times like this, as you can see the cookies set for your application.
Session ID resetting may have many causes. However, none of those mentioned above relate to my problem, so I'll describe it for future reference.
In my case, a new session being created on each request resulted in an infinite redirect loop. The redirect takes place in the OnActionExecuting event.
I had also been clearing all HTTP headers (also in the OnActionExecuting event, using the Response.ClearHeaders method) in order to prevent pages being cached on the client side. But that method clears all headers, including information about the user's session, and consequently all data in Temp storage (which I was using later in the program). So even setting a new session in the Session_Start event didn't help.
To resolve my problem, I made sure not to remove the headers when a redirection occurs.
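A rough sketch of the idea in an MVC action filter (the redirect condition and target are placeholders, not my original code):
using System.Web.Mvc;

public class NoCacheUnlessRedirectAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Placeholder for the application-specific condition that triggers the redirect.
        bool redirecting = false;

        if (redirecting)
        {
            // Redirect without touching the headers so the session cookie survives.
            filterContext.Result = new RedirectResult("~/"); // illustrative target
        }
        else
        {
            // Strip caching headers only on normal responses; clearing them on a
            // redirect also removes the session information, resetting the session.
            var response = filterContext.HttpContext.Response;
            response.ClearHeaders();
            response.AddHeader("Cache-Control", "no-cache, no-store, must-revalidate");
        }

        base.OnActionExecuting(filterContext);
    }
}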
Hope it helps someone.
I ran into this issue a different way. The controllers that had this attribute [SessionState(SessionStateBehavior.ReadOnly)] were reading from a different session even though I had set a value in the original session upon app startup. I was adding the session value via the _layout.cshtml (maybe not the best idea?)
It was clearly the ReadOnly behavior causing the issue, because when I removed the attribute, the original session (and SessionId) stayed intact. Using Claudio's/Microsoft's solution fixed it.
I'm on .NET Core 2.1 and I'm well aware that the question isn't about Core. Yet the internet is lacking and Google brought me here, so I'm hoping to save someone a few hours.
Startup.cs
services.AddCors(o => o.AddPolicy("AllowAll", builder =>
{
    builder
        .WithOrigins("http://localhost:3000") // important
        .AllowCredentials()                   // important
        .AllowAnyMethod()
        .AllowAnyHeader();                    // obviously just for testing
}));
client.js
const resp = await fetch("https://localhost:5001/api/user", {
    method: 'POST',
    credentials: 'include', // important
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify(data)
})
Controllers/LoginController.cs
namespace WebServer.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class UserController : ControllerBase
    {
        [HttpPost]
        public IEnumerable<string> Post([FromBody]LoginForm lf)
        {
            string prevUsername = HttpContext.Session.GetString("username");
            Console.WriteLine("Previous username: " + prevUsername);

            HttpContext.Session.SetString("username", lf.username);

            return new string[] { lf.username, lf.password };
        }
    }
}
Notice that the session writing and reading works, yet no cookies seem to be passed to the browser. At least I couldn't find a "Set-Cookie" header anywhere.

How to access a Coherence cache with the same name across multiple clusters?

I have several Oracle Coherence clusters, and on each cluster I have the same set of caches with the same cache names. How can I access a single cache (say "Cache1") from each cluster within my application? For example, I may want to check the count of "Cache1" across all environments to display to the user.
The clusters are set up using Coherence Extend, and I have set up the client-side cache config with separate cache-mappings and remote-cache-schemes for each cluster. However, if I set the cache-name element to "Cache1" for each cluster, it only retrieves data from the first cluster listed in the XML. If I set it to something else (e.g. "Cache1-Dev1"), I get a Tangosol.IO.Pof.PortableException with the message 'No scheme for cache: "Cache1-Dev1"'.
<cache-config xmlns="http://schemas.tangosol.com/cache">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Cache1-Dev1</cache-name>
      <scheme-name>extend-direct-dev1</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>Cache1-Dev2</cache-name>
      <scheme-name>extend-direct-dev2</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-direct-dev1</scheme-name>
      <service-name>ExtendTcpCacheService-dev1</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>dev1-address</address>
              <port>9500</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>60s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
    <remote-cache-scheme>
      <scheme-name>extend-direct-dev2</scheme-name>
      <service-name>ExtendTcpCacheService-dev2</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>dev2-address</address>
              <port>9500</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>60s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
Found the answer elsewhere.
First, get the proxy service instance, and cast it to a CacheService.
You should then be able to get the cache from that service instance.
Java implementation:
Service service = CacheFactory.getService("ExtendTcpCacheService-dev1");
CacheService cacheService = (CacheService) service;
NamedCache cache = cacheService.ensureCache("Cache1");
The code is almost identical in C#:
var service = CacheFactory.GetService("ExtendTcpCacheService-dev1");
var cacheService = (ICacheService)service;
var cache = cacheService.EnsureCache("Cache1");
This also means you no longer need to list the caches in the cache-mapping section of your cache-config XML file, though you need at least one cache-mapping containing a cache-name and scheme-name for Coherence to run, even if it isn't used.
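For example, a minimal placeholder mapping along these lines (names illustrative) appears to be enough to satisfy that requirement:
<caching-scheme-mapping>
  <!-- Illustrative placeholder: at least one mapping must exist, even if it is never used -->
  <cache-mapping>
    <cache-name>placeholder</cache-name>
    <scheme-name>extend-direct-dev1</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>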
