I'm trying to use Redis in Azure for caching in my application. Each of my keys could be upwards of 2-4 MB. When I run my app against Redis on my local machine, all is great; however, when running on Azure, performance is terrible. Retrieving a key often takes 8-10 seconds, so it's actually quicker for me to re-get this data from the original source than from the cache.
So I guess the first question is: are my keys too big? Am I just barking up the wrong tree altogether with using Redis?
If not, any ideas why it's so slow? The application is an Azure website, and the website and Redis instance are in the same region. I am using the StackExchange.Redis client and creating the multiplexer as a singleton in Global.asax to avoid re-creating it; the code for this is below:
Global.asax:
redisConstring = ConfigurationManager.ConnectionStrings["RedisCache"].ConnectionString;
if (redisConstring != null)
{
    // Create the multiplexer once and reuse it for the lifetime of the app.
    if (RedisConnection == null || !RedisConnection.IsConnected)
    {
        RedisConnection = ConnectionMultiplexer.Connect(redisConstring);
    }
    RedisCacheDb = RedisConnection.GetDatabase();
    Application["RedisCache"] = RedisCacheDb;
}
Web API Controller:
IDatabase redisCache = System.Web.HttpContext.Current.Application["RedisCache"] as IDatabase;
string cachedJson = redisCache.StringGet(id);
if (cachedJson == null)
{
    // Cache miss: fetch from the original source and cache for 15 minutes.
    cachedJson = OutfitFactory.GetMembersJson(id);
    redisCache.StringSet(id, cachedJson, TimeSpan.FromMinutes(15));
}
return OutfitFactory.GetMembersFromJson(cachedJson);
From the comments, it sounds like the issue is bandwidth... So: use less bandwidth. Ideas:
Use compression (ideally only if nontrivial size, etc)
Use a denser format
For reference, at SE we use gzip-compressed protobuf-net for packaging.
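A minimal sketch of the compression idea, assuming the cached value is the JSON string from the question and that framework gzip (System.IO.Compression) is acceptable; the 1 KB threshold and the one-byte format marker are arbitrary choices for illustration:
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class CachePacker
{
    // Pack a JSON string for Redis: small payloads are stored as-is,
    // larger ones are gzip-compressed. The first byte records which.
    public static byte[] Pack(string json)
    {
        byte[] raw = Encoding.UTF8.GetBytes(json);
        if (raw.Length < 1024)
        {
            var stored = new byte[raw.Length + 1];
            stored[0] = 0; // 0 = uncompressed
            Buffer.BlockCopy(raw, 0, stored, 1, raw.Length);
            return stored;
        }
        using (var output = new MemoryStream())
        {
            output.WriteByte(1); // 1 = gzip-compressed
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray();
        }
    }

    public static string Unpack(byte[] packed)
    {
        if (packed[0] == 0)
            return Encoding.UTF8.GetString(packed, 1, packed.Length - 1);
        using (var input = new MemoryStream(packed, 1, packed.Length - 1))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
            return reader.ReadToEnd();
    }
}
StringGet/StringSet accept byte[] through RedisValue's implicit conversions, so the controller code above only needs to call Pack before StringSet and Unpack after StringGet; a denser serializer (e.g. protobuf-net instead of JSON) can then be layered underneath the same Pack/Unpack pair.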
So I have a few different Parse Servers set up.
One server exists just to capture error logs from various applications (I have a LOT out there) in one nice, uniform database.
So I might have a specific standalone data migration tool that, if it encounters an error, writes the exception out to this Error_Log Parse table/class. No problem there.
But if I have an app that uses a Parse database for itself, I have not been able to figure out how to let it work against its own Parse Server configuration for its own stuff, but write error logs out to this other Parse Server instance.
Yes... I could go through the trouble of writing something out via the REST API just for the logs, but I am trying to avoid that and stick with the native Parse APIs for the particular platform I am on, because of the benefits those APIs give over REST (like saveEventually for the non-.NET stuff).
EDIT
Some clarification was requested so here I go...
On the app side of things (C# for this example, but the same holds true for iOS etc.)... I do the usual initialization of the Parse client, as such:
ParseClient.Initialize(new ParseClient.Configuration
{
    ApplicationId = "MyAppID",
    WindowsKey = "MyDotNetKey",
    Server = "www.myparseserver.com/app1"
});
So all calls to save a Parse object go through that Parse client connection.
But what I need to do would be something like this:
//Main App cloud database
ParseClient1.Initialize(new ParseClient.Configuration
{
    ApplicationId = "MyAppID",
    WindowsKey = "MyDotNetKey",
    Server = "www.myparseserver.com/app1"
});

//Error logging cloud database
ParseClient2.Initialize(new ParseClient.Configuration
{
    ApplicationId = "MyAppID",
    WindowsKey = "MyDotNetKey",
    Server = "www.myparseserver.com/errorcollection"
});

try
{
    ParseConfig config = null;
    config = await ParseConfig.GetAsync().ParseClient1;
}
catch (Exception ex)
{
    ParseObject MyError = new ParseObject("Error_Log");
    MyError["Application"] = "My First App-App2";
    MyError["Error"] = ex.Message;
    await MyError.Save().ParseClient2;
}
Yes - this is all fake code... my point is I want to be able to have multiple ParseClient instances in one app.
Now... I can simply write a routine that resets ParseClient.Initialize to the error Parse Server instance whenever an error is written, and then resets it back to the original (primary app data) instance when it's done. But that is just asking for trouble in a multi-threaded environment: it will cause conflicts if some other thread in the app goes to write Parse data at the exact moment the error method resets the initialization.
If ParseClient were IDisposable, I could probably do that using:
ParseClient ParseErrorServer = new ParseClient();
ParseErrorServer.ApplicationId = "hmmm";
ParseErrorServer.WindowsKey = "hmmm";
ParseErrorServer.Server = "www.hmmm.com/errorcollection";
using (ParseErrorServer)
{
    //Do the work
}
Is that clear as mud yet? ;P
Without alteration, I believe none of the Parse SDKs have the ability to initialise multiple instances.
In the iOS SDK, for example, it is possible to make a new instance (say, with a different server URL) upon restarting the app, but you cannot have multiple at once. There has also been discussion on the iOS SDK about being able to change the configuration without a restart, but no one has implemented this yet.
We would happily review a PR for this; however, it would require a major and complex overhaul, as you would have to manage cache, users, etc. across multiple instances.
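Given that constraint, the REST API route the question hoped to avoid may still be the pragmatic fallback for the error log alone, since logs rarely need SDK niceties like saveEventually. A minimal sketch, assuming the second server's class and app id from the question; the REST key header is an assumption (only needed if that server requires one), and real code should JSON-encode with a proper serializer:
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ErrorLogClient
{
    private static readonly HttpClient http = new HttpClient();

    // Writes one Error_Log object to the error-collection Parse Server over REST,
    // leaving the native ParseClient bound to the primary app server.
    public static async Task LogAsync(string application, string error)
    {
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://www.myparseserver.com/errorcollection/classes/Error_Log");
        request.Headers.Add("X-Parse-Application-Id", "MyAppID");
        request.Headers.Add("X-Parse-REST-API-Key", "MyRestKey"); // assumption: server requires it
        string json = "{\"Application\":\"" + application + "\",\"Error\":\"" + error + "\"}";
        request.Content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}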
I am using the MS Dynamics CRM SDK with C#. In it I have a WCF service method which creates an entity record.
I am using CreateRequest in the method. The client is calling this method with two identical requests, one immediately after the other.
There is a fetch before creating a record: if the record is already available, we update it. However, the two inserts are happening at exactly the same time, so two records with identical data are getting created in CRM.
Can someone help me prevent this concurrency issue?
You should force the duplicate detection rule and decide what to do:
Account a = new Account();
a.Name = "My account";

CreateRequest req = new CreateRequest();
req.Parameters.Add("SuppressDuplicateDetection", false); // force duplicate detection
req.Target = a;

try
{
    service.Execute(req);
}
catch (FaultException<OrganizationServiceFault> ex)
{
    if (ex.Detail.ErrorCode == -2147220685)
    {
        // Account with name "My account" already exists
    }
    else
    {
        throw;
    }
}
As Filburt commented on your question, the preferred approach would be to use an Alternate Key and Upsert requests, but unfortunately that's not an option if you're working with CRM 2013.
In your scenario, I'd implement a very lightweight cache in the WCF service, probably using the MemoryCache object from the System.Runtime.Caching.dll library (small example below). Before executing the query to CRM, check whether the record exists in the cache. If it doesn't, continue with your current processing, remembering to add the record to the cache with a small expiration time to cover concurrent executions. If it does, handle the scenario where the record already exists: anything from quite complex checks to detect and prevent potential data loss or unnecessary updates, down to a simple and stupid Thread.Sleep(1000).
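A minimal sketch of that guard, assuming the service can derive some natural key for the incoming record (the 30-second window and the key shape are illustrative assumptions):
using System;
using System.Runtime.Caching;

public static class CreateGuard
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    // Returns true if this caller is the first to claim the key and may create
    // the record; false if an identical request is already in flight (or just ran).
    public static bool TryBeginCreate(string recordKey)
    {
        // AddOrGetExisting returns null when the key was NOT already present.
        object existing = cache.AddOrGetExisting(
            recordKey, true, DateTimeOffset.UtcNow.AddSeconds(30));
        return existing == null;
    }
}
In the service method, the fetch-then-create path would then run only when TryBeginCreate returns true; the second, concurrent request either waits and re-fetches, or simply skips the insert.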
I created a small WPF application that performs some operations. I would like to distribute this application to some people, but I want it to be accessible only to authorized people. I don't really need a registration mechanism.
Because the application is quite small and will be delivered as an EXE file, I don't think that having a database would be an efficient idea.
I was thinking of having a file within the application that contains the credentials of the authorized people, but as far as I know, WPF applications can be easily reverse engineered. I then considered having the application contact a server to authorize the person, but wasn't sure whether that is a good choice or not.
Can you please suggest some readings or best practices to study? Whenever I search this topic I get examples of implementing the UI (which is something I know how to do), not the login mechanism.
Design Guidelines for Rich Client Applications by MSDN:
https://msdn.microsoft.com/en-in/library/ee658087.aspx
Read the Security Considerations, Data Handling Considerations and Data Access sections.
It is very easy to reverse engineer any .NET app, so the point of having an authentication system is to deal with novices and people who do not know about reverse engineering. You can use an authentication system based on the CPU ID, for example, which is what I use; but either way, as I said, any .NET app is reversible.
I will share my authentication logic with you:
// Requires a reference to System.Management.
public static string GetId()
{
    string cpuInfo = string.Empty;
    ManagementClass mc = new ManagementClass("win32_processor");
    ManagementObjectCollection moc = mc.GetInstances();
    foreach (ManagementObject mo in moc)
    {
        if (cpuInfo == "")
        {
            // Get only the first CPU's ID
            cpuInfo = mo.Properties["processorID"].Value.ToString();
            break;
        }
    }
    return cpuInfo;
}
After you have the CPU ID, derive a key from it. The original only says "do some encryption", so the HMAC below is one concrete stand-in:
public static string Encrypt(string cpuId)
{
    // Stand-in for "do some encryption": HMAC-SHA256 with a vendor-secret key.
    byte[] secret = System.Text.Encoding.UTF8.GetBytes("my-vendor-secret");
    using (var hmac = new System.Security.Cryptography.HMACSHA256(secret))
        return Convert.ToBase64String(hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(cpuId)));
}
After that, in your UI, create a dialog window that shows the user his CPU ID, which he will send to you. You then encrypt the user's CPU ID and give him his activation key; to do that, you must create another project that generates the key (a sketch follows below). And in the app you want to publish, check:
if (Key == Encrypt(GetId())) { /* Welcome */ }
else { Environment.Exit(0); }
So every user has his own key.
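A minimal sketch of that separate key-generator project, assuming it shares (or copies in) the hypothetical Encrypt helper shown above:
using System;

public static class KeyGenerator
{
    // Paste in the CPU ID the user sent; print the activation key to return to them.
    // Encrypt is the same helper shown above, copied into this project.
    public static void Main()
    {
        Console.Write("CPU ID from user: ");
        string cpuId = Console.ReadLine();
        Console.WriteLine("Activation key: " + Encrypt(cpuId));
    }
}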
After all this, you must know that anyone can still reflect your code and crack this mechanism.
Whenever a user hits a page on my website, I run the following code to track user hits, page views, where they are going, etc...
public static void AddPath(string pathType, string renderType, int pageid = 0, int testid = 0)
{
    UserTracking ut = (UserTracking)HttpContext.Current.Session["Paths"];
    if (ut == null)
    {
        // First hit in this session: capture the visitor's details.
        ut = new UserTracking();
        ut.IPAddress = HttpContext.Current.Request.UserHostAddress;
        ut.VisitDate = DateTime.Now;
        ut.Device = (string)HttpContext.Current.Session["Browser"];
        if (HttpContext.Current.Request.UrlReferrer != null)
        {
            ut.Referrer = HttpContext.Current.Request.UrlReferrer.PathAndQuery;
            ut.ReferrerHost = HttpContext.Current.Request.UrlReferrer.Host;
            ut.AbsoluteUri = HttpContext.Current.Request.UrlReferrer.AbsoluteUri;
        }
    }
    //Do some stuff including adding paths
    HttpContext.Current.Session["Paths"] = ut;
}
In my Global.asax.cs file when the session ends, I store that session information. The current session timeout is set to 20 minutes.
protected void Session_End(object sender, EventArgs e)
{
    UserTracking ut = (UserTracking)Session["Paths"];
    if (ut != null)
        TrackingHelper.StorePathData(ut);
}
The problem is that I'm not getting accurate storage of the information. For instance, I'm getting thousands of session stores like the following within a couple of minutes.
Session #1
Time: 2014-10-21 01:30:31.990
Paths: /blog
IP Address: 54.201.99.134
Session #2
Time: 2014-10-21 01:30:31.357
Paths: /blog-page-2
IP Address: 54.201.99.134
What it should be doing, is storing only one session for these instances:
What the session should look like
Time: 2014-10-21 01:30:31.357
Paths: /blog,/blog-page-2
IP Address: 54.201.99.134
Clearly, this seems like a search engine crawl, but the problem is, I'm not sure if this is the case.
1) Why is this happening?
2) How can I get an accurate # of sessions to match Google analytics as closely as possible?
3) How can I exclude bots? Or how to detect that it was a bot that fired it?
Edit: Many people are asking "Why"
For those of you asking "why" we are doing this as opposed to just using analytics: to make a very long story short, we are building user profiles and mining data out of them. We're looking at what users are viewing, how long they view it, and their click paths. We also have A/B tests running for certain pages, we detect which pages fire throughout the user's viewing cycle, and we track other custom information that we can't push into the Google Analytics API and pull back out. Once a user has navigated the site, we use this information to build a user profile for every session on the site. We then need to detect which of these sessions is actually real, and give the site owners the ability to view the data alongside our data-mining application, which analyzes it and provides feedback on criteria that help them better their website. If you have a better way of doing this, we're all ears.
1) The ASP.NET session is tracked with the help of the ASP.NET session cookie.
But it is disabled for anonymous users (not logged-on users).
You can activate id creation for anonymous users in the web.config:
<configuration>
  <system.web>
    <anonymousIdentification enabled="true"/>
  </system.web>
</configuration>
A much better place to hook up your tracking is a global MVC ActionFilterAttribute.
The generated id is stored in the HTTP request and accessed by:
filterContext.RequestContext.HttpContext.Request.AnonymousID
2) You should create a feed of tracking paths and analyse it asynchronously, or not even in the same process. Maybe you want to store the tracking on disk, "like a server log", and reanalyse it later.
Geo-location and DB lookups need some processing time, and most likely you can't get an accurate geo-location from the IP address anyway.
A much better source is the user profile / user address later on (after the order is submitted).
Sometimes the ASP.NET session cookie doesn't work, because the user has some no-tracking plugin activated. Google Analytics would fail here too. You can increase the tracking accuracy with a custom Ajax client callback.
To make the Ajax callback happen globally for all pages, you can use the ActionFilterAttribute to inject some script content at the end of the HTML response stream.
Mapping an IPv4 address to a session can help, but it should only be a hint. Nowadays a lot of ISPs support IPv6 and map their clients onto a small IPv4 pool, so one user can switch IPv4 addresses very quickly, and there is a high probability that visitors of the same page use the same ISP and therefore share an IPv4 address.
3) Most robots identify themselves by a custom user agent in the request headers.
There are good and bad ones. See http://www.affiliatebeginnersguide.com/articles/block_bots.html
But with the Ajax callback you can verify that a browser is present, or at least the presence of a costly HTML DOM with a JavaScript environment. A first-pass user-agent filter is sketched below.
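A minimal sketch of that first pass, assuming substring matching on well-known crawler markers is acceptable (it will not catch bots that fake a browser user agent):
using System;
using System.Web;

public static class BotDetector
{
    // Markers that commonly appear in crawler user agents; an illustrative list.
    private static readonly string[] BotMarkers =
        { "bot", "crawler", "spider", "slurp", "mediapartners" };

    public static bool LooksLikeBot(HttpRequestBase request)
    {
        string userAgent = request.UserAgent ?? string.Empty;
        foreach (string marker in BotMarkers)
            if (userAgent.IndexOf(marker, StringComparison.OrdinalIgnoreCase) >= 0)
                return true;
        return false;
    }
}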
X) To simplify the start and concentrate on the analysis, implement a simple ActionFilterAttribute and register it globally in RegisterGlobalFilters:
filters.Add(new OurTrackingActionFilterAttribute(ourTrackingService));
In the filter, override OnActionExecuting:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    base.OnActionExecuting(filterContext);
    OnTrackingAction(filterContext);
}

public virtual void OnTrackingAction(ActionExecutingContext filterContext)
{
    // Request.AnonymousID is available here once anonymousIdentification is enabled.
    var context = filterContext.RequestContext.HttpContext;
    var track = new OurWebTrack(context);
    trackingService.Track(track);
}
To avoid delaying the server response with tracking processing, take a look at the Reactive Extensions package: http://msdn.microsoft.com/en-us/data/gg577609.aspx
It's a good way to split the capture from the processing: create a "Subject" in the TrackingService and simply push the tracking objects into it (see the sketch below).
You can write observers to transmit, save or process the tracking objects. By default the observers get one object at a time, so you don't need to synchronise or lock your status variables/dictionary/memory cache, and maybe you want to persist the data and reprocess it with a new version of your application later on (maybe while debugging).
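A minimal sketch of that split, assuming the Rx packages are referenced and reusing the question's OurWebTrack type; the class and method names are otherwise illustrative:
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Subjects;

public class TrackingService
{
    private readonly Subject<OurWebTrack> subject = new Subject<OurWebTrack>();

    public TrackingService()
    {
        // Observers run off the request thread, one item at a time.
        subject.ObserveOn(TaskPoolScheduler.Default)
               .Subscribe(Store);
    }

    // Called from the action filter; returns immediately.
    public void Track(OurWebTrack track)
    {
        subject.OnNext(track);
    }

    private void Store(OurWebTrack track)
    {
        // Persist to disk/db, enrich with geo lookups, etc.
    }
}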
I have several Oracle Coherence clusters, and on each cluster I have the same set of caches with the same cache names. How can I access a single cache (say "Cache1") from each cluster within my application? For example, I may want to check the count of "Cache1" across all environments to display to the user.
The clusters are set up using Coherence Extend, and I have set up the client-side cache-config with separate cache-mappings and remote-cache-schemes for each cluster. However, if I set the cache-name element to "Cache1" for each cluster, it only retrieves data from the first cluster listed in the XML. If I set it to something else (e.g. "Cache1-Dev1"), I get a Tangosol.IO.Pof.PortableException with the message 'No scheme for cache: "Cache1-Dev1"'.
<cache-config xmlns="http://schemas.tangosol.com/cache">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Cache1-Dev1</cache-name>
      <scheme-name>extend-direct-dev1</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>Cache1-Dev2</cache-name>
      <scheme-name>extend-direct-dev2</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-direct-dev1</scheme-name>
      <service-name>ExtendTcpCacheService-dev1</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>dev1-address</address>
              <port>9500</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>60s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
    <remote-cache-scheme>
      <scheme-name>extend-direct-dev2</scheme-name>
      <service-name>ExtendTcpCacheService-dev2</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>dev2-address</address>
              <port>9500</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>60s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
Found the answer elsewhere.
First, get the proxy service instance, and cast it to a CacheService.
You should then be able to get the cache from that service instance.
Java implementation:
Service service = CacheFactory.getService("ExtendTcpCacheService-dev1");
CacheService cacheService = (CacheService) service;
NamedCache cache = cacheService.ensureCache("Cache1");
The code is almost identical in C#:
var service = CacheFactory.GetService("ExtendTcpCacheService-dev1");
var cacheService = (ICacheService)service;
var cache = cacheService.EnsureCache("Cache1");
This also means you no longer need to list the caches in the cache-mapping section of your cache-config XML file. Note, though, that Coherence requires at least one cache-mapping containing a cache-name and scheme-name in order to run, even if it isn't used; see the example below.
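A minimal placeholder mapping, as a sketch: the wildcard maps any otherwise-unmapped cache name onto one of the schemes already defined above.
<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>extend-direct-dev1</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>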