I'm working with the Azure shared cache. During development I need to write a lot of data, which takes time.
Is it possible to configure the worker role to write to a local cache instead?
Have you considered the new in-role cache? You can specify either a portion of your existing role's RAM for cache, or create a dedicated cache role. Either way, it would be running within your deployment. Assuming you colocate your cache with, say, your web role, then you'd effectively have a local cache.
Look here for a .NET tutorial. Note that this cache can also be used from any other language; the tutorial just shows integration with Visual Studio (and it works the same way in Eclipse).
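For what it's worth, here's a minimal sketch of what reads and writes against a co-located (in-role) cache could look like from your role code, assuming the Caching client assemblies are referenced and the cache client is configured in the role's config; the class name, key, and 30-minute TTL are just illustrative:

```csharp
using System;
using Microsoft.ApplicationServer.Caching; // Windows Azure Caching client

public class LocalCache
{
    // DataCacheFactory picks up the <dataCacheClients> section from the role's config.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetDefaultCache();

    public void Save(string key, object value)
    {
        // Writes land in the cache running inside your own deployment
        // (co-located with the web/worker role, or in a dedicated cache role).
        Cache.Put(key, value, TimeSpan.FromMinutes(30));
    }

    public object Load(string key)
    {
        // Returns null if the item was never cached or has already expired.
        return Cache.Get(key);
    }
}
```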
I'm new to Azure and cloud platform development. I have a web application in which I create multiple companies using a company table and separate the company_products using a foreign key, companyid.
Is it possible to run multiple instances in which each has its own SQL database? I want to do this because every customer is unique and may need tailored modules.
There are no restrictions on how you build your app. You may create as many databases as you wish, and have multiple web apps if you wish (whether in the same App Service plan or across multiple App Service plans). How you do this is strictly up to you, but no - there is nothing that forces you to use a single database for anything.
If I am understanding your question correctly, you would like each of your customers to have their own instance and separate databases?
Have a look at this article: https://msdn.microsoft.com/en-us/library/ff966499.aspx
I'd suggest using Azure App Service and running each of your customers in their own app. You can save money, as all the apps run under the same App Service Plan. Usage on one instance, however, does not affect performance on another. There are quite a lot of benefits to App Service.
https://azure.microsoft.com/en-us/documentation/articles/azure-web-sites-web-hosting-plans-in-depth-overview/
For starting out, I'd suggest using individual Azure SQL Databases for each customer. This is to save money, as you can spin up S0/S1 databases for relatively cheap. Then you can set the connection string through the Azure Portal for each app you have under your App Service Plan.
If you end up scaling quickly, have a look at Elastic Databases. You pay for a database server and get something like 200 databases per server, so it's really only economical if you have quite a few customers and can justify the cost. However, there are some useful Azure tools that make managing elastic database pools easier. Check out the Azure documentation for more details on this.
Once you have this architecture set up, you can either manage your instances/databases through the portal or set up another logic app to manage all your instances. It would be a lot less development work to just manage it through the portal when starting out; however, if this is a SaaS product and is going to scale quite quickly, you may want to invest ahead of time in automating some processes so that deploying new instances doesn't have to be done manually.
I prefer this approach as well, because you can then point different subdomains at your individual customer apps (e.g. customer1.yourdomain.com, customer2.yourdomain.com). Each app already has its own domain under azurewebsites.net, so if you don't mind using that domain, you can just stick with it. It's nice because then you don't have to manage your own DNS or worry about SSL certificates and whatnot, as it's already managed for you. If you do want your own custom domains, there's plenty of documentation on this. Azure also has a DNS service that helps automate creating CNAME records while automatically spinning up a new app, deploying a DB to your pool, initializing the DB, etc.
As David says, you can do this however you like. My suggestion would be to use connection strings in your application's web.config to control the database instance you want to communicate with; you can then configure your Azure web app deployment's "slot settings" in the Azure portal (or your ARM template) to override the web.config settings for that deployment (see the sketch at the end of this answer).
So - you can create your ARM template, which describes your infrastructure, and deploy your App Service plan, web app, web config, and SQL database to a specifically named resource group (i.e. targeting one of your customers). This would have configuration that points to the database instance in that resource group, and possibly other config that turns customer-specific functionality on/off. Try to keep the code and deployment as common as you can; otherwise you'll end up with a maintenance nightmare in the future.
See https://azure.microsoft.com/en-gb/documentation/articles/web-sites-configure/ for information on configuration settings in azure web apps.
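As a rough sketch of that pattern (the connection string name "CustomerDb" is just a placeholder, not something from the original setup):

```csharp
using System.Configuration;

public static class TenantDatabase
{
    public static string GetConnectionString()
    {
        // The value in web.config is only a local/dev fallback. For each customer's
        // deployment, a connection string with the same name ("CustomerDb") is set in
        // the Azure portal (slot settings) or in the ARM template, and that value
        // overrides web.config at runtime.
        return ConfigurationManager.ConnectionStrings["CustomerDb"].ConnectionString;
    }
}
```

That way the code stays identical for every customer; only the deployment-time configuration differs.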
I am writing an MVC Web API that will be used to return values that will be bound to dropdown boxes or used as type-ahead textbox results on a website, and I want to cache values in memory so that I do not need to perform database requests every time the API is hit.
I am going to use the MemoryCache class, and I know I can populate the cache when the first request comes in, but I don't want the first request to the API to be slower than others. My question is: is there a way for me to automatically populate the cache when the Web API first starts? I see there is an "App_Start" folder; maybe I just throw something in there?
After the initial population, I will probably run an hourly/daily request to update the cache as required.
MemoryCache:
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx
UPDATE
Ela's answer below did the trick, basically I just needed to look at the abilities of Global.asax.
Thanks for the quick help here, this has spun up a separate question for me about the pros/cons of different caching types.
Pros/Cons of different ASP.NET Caching Options
You can use the Global.asax Application_Start method to initialize resources, basically resources that will be used application-wide.
The following link should help you to find more information:
http://www.asp.net/web-forms/tutorials/data-access/caching-data/caching-data-at-application-startup-cs
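A minimal sketch of what that could look like in Global.asax.cs, assuming System.Runtime.Caching is referenced (the cache key and the loader method are placeholders, not anything from your project):

```csharp
using System;
using System.Runtime.Caching;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Your existing Web API route registration etc. stays here.

        // Warm the cache before the first request hits the API, so the first
        // caller isn't the one paying for the database round trip.
        MemoryCache.Default.Set(
            "dropdown-values",
            LoadDropdownValuesFromDatabase(),
            DateTimeOffset.UtcNow.AddHours(1)); // expires so the hourly refresh repopulates it
    }

    private static string[] LoadDropdownValuesFromDatabase()
    {
        // Placeholder for the real database query.
        return new[] { "Individual", "Company", "Student" };
    }
}
```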
Hint:
If you use in-process caching (which is usually the case if you cache something within the web context / thread), keep in mind that your web application is controlled by IIS.
The standard IIS configuration will shut down your web application after 20 minutes if there are no user requests to serve.
This means that any resources you have in memory will be freed.
After this happens, the next time a user accesses your web application, Application_Start in Global.asax will be executed again, because IIS reinitializes your web application.
If you want to prevent this behaviour, you can either configure the application pool idle timeout so it doesn't time out after 20 minutes, or use a different cache strategy (persistent cache, distributed cache, ...).
To configure IIS for this, here you can find more information:
http://brad.kingsleyblog.com/IIS7-Application-Pool-Idle-Time-out-Settings/
Something that seems to be absent from the otherwise great new features for Windows Azure (announced on June 7th) is the ability to define distributed caches for the reserved instances of a website cluster in Reserved Instance Mode.
As of now it seems to be only possible to create distributed caches for standalone web roles or worker roles. Does anyone know a workaround, or whether this is something that is coming?
The reason I'm asking is that it forces me to create a dedicated worker role for caching, and since I'm constrained by costs I can't afford another three instances just for caching. This leaves me with a caching service that's not fault tolerant, when in reality my three web roles hosting the websites would be a) fault tolerant and b) able to contribute enough memory to the distributed cache that I'd gain a much larger cache without the single point of failure I'd have with a single caching worker role.
This scenario is not supported as of today by Windows Azure Caching (Preview). Thanks for the feedback. I will take this up with the appropriate folks on our team to consider for future releases.
As mentioned by Jason and Win, for now you can use Windows Azure Shared Caching, though you are right that it is limited in size and has a quota system.
Previously known as the AppFabric Cache - I think this does what you want?
http://msdn.microsoft.com/en-us/library/windowsazure/hh914133.aspx
http://msdn.microsoft.com/en-us/magazine/gg983488.aspx
You sure can create a dedicated cache for Windows Azure Websites in reserved mode. As of now you may not find out how to create it in the Windows Azure June SDK (1.7); however, if you really want to do it, you need to accomplish it manually.
I had some discussion around this, and after some digging I found that it can be done by first understanding the dedicated cache in a Windows Azure web role and then migrating the references & configuration to your ASP.NET website. Here are some steps you can follow to try it yourself (a rough sketch of the resulting client configuration follows the steps):
Create a web role with a dedicated cache
Understand the references and configuration settings used for the dedicated cache in the web role
Now create your ASP.NET website and migrate the dedicated-cache-related settings and references to your Windows Azure Website
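For reference, on the client side those migrated settings boil down to something like the sketch below. This assumes the Windows Azure Caching (Preview) client assemblies are referenced; "CacheWorkerRole" is a placeholder for the name of the role hosting the dedicated cache, and the same values can also live in a <dataCacheClients> section of web.config instead of code:

```csharp
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    public static DataCache Create()
    {
        // Auto-discover the cache cluster hosted by the named role in the same deployment.
        var config = new DataCacheFactoryConfiguration
        {
            AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, "CacheWorkerRole")
        };

        return new DataCacheFactory(config).GetDefaultCache();
    }
}
```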
I'm in the process of optimizing an ASP.NET site by storing commonly used database objects in a cache, and I'm wondering what good tools there are to manage the cache.
I found http://aspalliance.com/cachemanager/, which seems pretty cool, but old? Also, I'd have to install this in the web app itself, and I'd prefer an external tool. What else is out there?
(I also found the Visual Studio 2005 add-in "Cache Visualizer", but its download page http://blog.bretts.net is broken.)
Is there any way to access one web app's Cache from another web app running on the same server?
For example, a typical object in my cache is the "type of user" (individual, company, student, etc.), which is pretty much static data. But once a year I might update this table and add a value. This is done in our admin app. Is there any way the admin app can access and invalidate the "type of user" cache in the public app (without restarting the entire app)?
I've looked at SqlCacheDependency but this won't work for us in this case.
The Cache is specific to an AppDomain, so if you have more than one web application, neither can access the other's Cache.
You might want to look into external cache arrangements such as Memcached, Redis, or perhaps even ASP.NET State Server; a rough sketch with Redis follows.
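If you went the Redis route, the invalidation from the admin app could look like the minimal sketch below, using the StackExchange.Redis client (the endpoint and key name are placeholders, not anything from your setup). Both apps would read and write the shared Redis cache instead of their own in-process Cache, so a delete issued by the admin app is visible to the public app immediately:

```csharp
using StackExchange.Redis;

public class UserTypeCacheInvalidator
{
    // Both the admin app and the public app connect to the same Redis instance.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379"); // placeholder endpoint

    public void InvalidateUserTypes()
    {
        IDatabase db = Redis.GetDatabase();
        db.KeyDelete("type-of-user"); // the public app reloads from SQL on its next cache miss
    }
}
```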
You can still find the download for Brett's visualizers using the Internet Archive's Wayback Machine.
http://web.archive.org/web/20060512123557/http://blog.bretts.net/wp-content/uploads/2006/03/Johnson.Visualizers.zip
We are planning to move one of our applications to the cloud, but somewhere I read that using session state in the cloud can be dangerous, yet the blog doesn't explain any danger as such.
I wanted to know: is there really any threat in using session state for cloud applications?
I am new to the forum, so excuse me if I have committed any mistake, and please guide me to correct it.
If you plan to run your application across several nodes, you will need to take load balancing and out-of-proc sessions into account, but there's nothing inherently insecure about using sessions while your servers are hosted somewhere else.
That just doesn't make any sense.
If 'dangerous' means that in certain situations the use of Session won't work, then you're right if you're using Azure to host your cloud application. It then depends on the number of instances you are running.
If you're only running one instance, then you can use Session (which lives in memory on the instance) without changing anything. But if you're using more than one instance (requests are load balanced and each request can be handled by a different instance), in-memory Session won't work out of the box. To resolve this, there are three different ways to store session state.
See this question for more information:
ASP.NET session state provider in Azure