I need to implement a cache in my application using Azure Cache for Redis, but I came across some blogs saying I also have the option to store my responses or data using Azure CDN.
Could someone explain the difference between them?
As per my understanding, Redis is used to store cached data, whereas a CDN caches data as well and also gives a faster response from a nearby server.
Azure Redis Cache
Azure Cache for Redis perfectly complements Azure database services such as Cosmos DB. It provides a cost-effective solution to scale the read and write throughput of your data tier. Store and share database query results, session state, static content, and more using the common cache-aside pattern.
The diagram below shows the cache-aside pattern on Azure Storage.
We first hit Azure Cache for Redis to see if the item is available. If so, we fetch it from the cache; otherwise, we pull the item from the Table and re-cache it.
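To make the flow concrete, here is a minimal cache-aside sketch in C# using the StackExchange.Redis client. The connection string and the `loadFromTableAsync` callback are placeholders for your own cache instance and backing store (Table storage, SQL, Cosmos DB, and so on):

```csharp
// Minimal cache-aside sketch using StackExchange.Redis.
// "loadFromTableAsync" stands in for whatever backing store you use.
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class CacheAsideExample
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("<your-cache-name>.redis.cache.windows.net:6380,password=<key>,ssl=True");

    public static async Task<string> GetItemAsync(string key, Func<string, Task<string>> loadFromTableAsync)
    {
        IDatabase cache = Redis.GetDatabase();

        // 1. Try the cache first.
        RedisValue cached = await cache.StringGetAsync(key);
        if (cached.HasValue)
        {
            return cached;
        }

        // 2. Cache miss: pull the item from the backing table...
        string value = await loadFromTableAsync(key);

        // 3. ...and re-cache it with an expiry so stale entries eventually drop out.
        await cache.StringSetAsync(key, value, TimeSpan.FromMinutes(10));
        return value;
    }
}
```

The 10-minute expiry in step 3 is just one choice; pick a TTL that matches how often the underlying data actually changes.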
Azure CDN
“A content delivery network (CDN) is a distributed network of edge servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency. A CDN profile, belonging to one Azure subscription, can have multiple CDN endpoints.”
Source: What is a content delivery network on Azure? (Microsoft)
It lets you reduce load times, save bandwidth, and speed responsiveness—whether you’re developing or managing websites or mobile apps, or encoding and distributing streaming media, gaming software, firmware updates, or IoT endpoints.
Web-Queue-Worker on Azure App Service
Conclusion
Azure Cache for Redis stores session state and other data that needs low latency access.
Azure CDN is used to cache static content such as images, CSS, or HTML.
Related
I have a requirement to migrate encrypted blobs from a source Azure storage account to a destination storage account in decrypted format (using a Key Vault key).
I wrote C# code for this, but it took almost 3 days for a single container. I am now trying an Event Grid-triggered Azure Function connected to the destination storage account (on the new-file-created event) and migrating blobs using an Azure Data Factory copy pipeline; the Azure Function runs on an App Service plan that can scale out to 10 instances.
Am I on the right path? Is there another, more performant way?
If your Azure Function's only job is to initiate the ADF pipeline, then I guess you can take advantage of an event-based trigger instead, or you can opt for a Logic App to do the same job with better performance.
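For context, if the function really does nothing but start a pipeline run, it reduces to something like the sketch below (in-process Functions model with the Microsoft.Azure.Management.DataFactory SDK; all IDs, secrets, and names are placeholders). The event-based trigger discussed next lets ADF do this for you without any function code at all:

```csharp
// Sketch only: an Event Grid-triggered Azure Function whose sole job is to start an ADF pipeline run.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Rest;

public static class StartCopyPipeline
{
    [FunctionName("StartCopyPipeline")]
    public static async Task Run([EventGridTrigger] EventGridEvent blobCreatedEvent, ILogger log)
    {
        log.LogInformation($"Blob event received: {blobCreatedEvent.Subject}");

        // Authenticate with a service principal (all IDs and secrets are placeholders).
        var context = new AuthenticationContext("https://login.microsoftonline.com/<tenant-id>");
        var token = await context.AcquireTokenAsync(
            "https://management.azure.com/",
            new ClientCredential("<client-id>", "<client-secret>"));

        var adfClient = new DataFactoryManagementClient(new TokenCredentials(token.AccessToken))
        {
            SubscriptionId = "<subscription-id>"
        };

        // Kick off the copy pipeline, passing the blob path as a pipeline parameter.
        var run = await adfClient.Pipelines.CreateRunAsync(
            "<resource-group>", "<data-factory-name>", "CopyDecryptedBlobPipeline",
            parameters: new Dictionary<string, object>
            {
                ["sourceBlobPath"] = blobCreatedEvent.Subject
            });

        log.LogInformation($"Pipeline run started: {run.RunId}");
    }
}
```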
Event-driven architecture (EDA) is a popular data integration paradigm that entails event creation, detection, consumption, and response. Data integration situations frequently need users triggering pipelines based on storage account events, such as the arrival or deletion of a file in an Azure Blob Storage account.
Please check the link below to know more about event-based triggers: Create a trigger that runs a pipeline in response to a storage event | Microsoft Docs
Also, you can consider increasing DTUs and the parallel copy option inside the copy activity, which helps improve the performance of your copy.
Sometimes there is a need to migrate a big amount of data from a data lake or an enterprise data warehouse (EDW) to Azure; other times you may need to import huge volumes of data into Azure from several sources for big data analytics. In each scenario, achieving optimal performance and scalability is important.
Please check the link below for more details: Copy activity performance and scalability guide
There are too many options to choose from in Microsoft Azure when planning an application design. Azure itself does not stand still; it looks like many options have been added recently. I'm a pretty inexperienced solo developer, so I need some entry points for choosing an architecture.
The application consists of the following parts:
1. Database
A classic SQL database is already implemented with Azure SQL Database.
2. Server-side application. (architecture refactor needed)
For now the application is a .NET C#/WPF desktop application hosted on a classic Azure Virtual Machine with Windows Server on board.
It is an always-running scheduler that performs various kinds of tasks one by one.
The tasks are mainly long-running jobs that fetch data from the web, do CPU-bound processing on the received data, and work with the DB.
It feels like an ancient and wrong design (keeping in mind the number of Azure features available):
a) The application really doesn't need a GUI; only the ability to control the scheduler's status is required.
b) Logically, some kinds of tasks can be performed simultaneously, while others must wait for earlier tasks to finish before starting. Right now all tasks run one by one, forced by the virtual machine's performance limit. I think there must be a way to achieve parallel execution and control the results at a higher level of abstraction than inside a desktop app. I want to move the scheduling logic a level up. (Maybe IaaS -> PaaS applies here?)
3. Client applications.
Registered users work with the DB.
Here are my questions:
Which server-side application design should be chosen in this case, and which Azure features are required?
Are there built-in Azure capabilities to manage registered user accounts, or is the only way to implement it as part of the application?
Did you explore other storage options, or is a SQL database what you need?
Let's start from scratch:
STORAGE: you can choose from
1. Storage - Blob, Table, Queue, and File storage and disks for VM
2. SQL database - relational database service in the cloud based on the market leading Microsoft SQL Server engine, with mission-critical capabilities
3. Document DB - schema-free NoSQL document database service designed for modern mobile and web applications
4. StorSimple - integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage
5. SQL Data Warehouse - enterprise-class distributed database capable of processing petabyte volumes of relational and non-relational data
6. Redis Cache - high-throughput, consistent low-latency data access to build fast, scalable applications
7. Azure Search - search-as-a-service for web and mobile app development
SCHEDULER: you can pick from
1. Virtual Machine
2. Cloud Service (worker role): you have more control over the VMs. You can install your own software on Cloud Service VMs and you can remote into them. (A minimal worker-role sketch follows this list.)
3. Batch: Cloud-scale job scheduling and compute management
4. Service Fabric: distributed systems platform used to build scalable, reliable, and easily-managed applications for the cloud
5. App Service: Scalable Web Apps, Mobile Apps, API Apps, and Logic Apps for any device
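To illustrate option 2, here is a hedged sketch of the classic worker-role plus storage-queue pattern: the scheduler (or any producer) enqueues task messages, and one or more worker-role instances pull and process them in parallel. The connection string and queue name are placeholders:

```csharp
// Sketch of a worker role draining an Azure Storage queue of task messages.
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class TaskWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tasks");
        queue.CreateIfNotExists();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message == null)
            {
                // Nothing to do right now; back off briefly.
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            ProcessTask(message.AsString);   // your long-running / CPU-bound work
            queue.DeleteMessage(message);    // remove only after successful processing
        }
    }

    private void ProcessTask(string payload)
    {
        // Placeholder for the actual task logic (fetch data from the web, process it, write to the DB).
    }
}
```

Scaling out then becomes a configuration change: raise the role's instance count and every instance drains the same queue.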
CLIENT: you can try out
1. Web Apps
2. Cloud Service (web role)
Use this link as a one-stop shop for all Azure services, beautifully categorized by functionality. From here you can pick and choose various services and map them to your app's requirements.
MASTER LIST: http://azure.microsoft.com/en-in/documentation/
I am new to Azure and have a small instance of a cloud service. In the last week my instance was changed twice and all my project data was lost; it rolled back to a month-old state. All my client data is lost. Is there any way to recover that data, and why does this issue occur?
There is no way to recover your data and there's no way to prevent this from happening. This is by design.
Whenever your machine crashes or there's an update to the system, it is completely wiped. A new system image will be copied, the machine will boot again and your application is copied over. Azure cloud services are Platform-as-a-Service (PaaS).
This leaves you with two possible options. The first would be to not store persistent data on the cloud service in the first place; local storage is simply not the proper place for it in Azure Cloud Services. Instead, store your data in Azure Storage or an Azure SQL database (or wherever you like).
Another option would be to use a virtual machine instead of a cloud service. That machine is completely in your hands: it's your duty to update it, keep it secure, and do whatever it takes to keep it running. With this approach you also have to take care of a load balancer, multiple instances, and so on yourself, so scaling out becomes a lot harder. This is Infrastructure-as-a-Service (IaaS).
So it actually depends on what you want to do.
Cloud instances are stateless. This means that anything you've stored on the local storage of the virtual machines can and will be deleted in the event of a node failure, a system upgrade, or even a new deployment of a package that you upload.
A couple of things you can do:
If you need to add additional files or configurations to your project upon deployment, then make use of OnStart() to perform it. This assures that on each deployment or failure recovery you get back the same environment you always had.
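As an illustration only (the role class and folder here are made up), an OnStart() override might rebuild whatever the instance needs on every deployment or restart:

```csharp
// Sketch of rebuilding the environment in OnStart() instead of relying on the local disk surviving.
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Re-create anything the instance needs: folders, local config, downloaded assets, etc.
        // Pull files from durable storage (e.g. blob storage); never assume they survived a redeploy.
        System.IO.Directory.CreateDirectory(@"C:\Resources\MyAppCache");

        return base.OnStart();
    }
}
```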
To avoid losing your source code, I recommend you set up source control and integrate it with your cloud instance implementation. You can do this either with Git or with Team Foundation Service (check out tfspreview.com).
If you need to store files on the server such as assets or client-updated media, consider using Azure Blob Storage. Blob storage is replicated both locally on the datacenter and geo-replicated to other datacenters if you choose to do so.
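For example, here is a small sketch using the classic WindowsAzure.Storage client to persist client-uploaded media to blob storage instead of the instance disk (the container name and connection string are placeholders):

```csharp
// Sketch: save a local file into blob storage so it survives instance re-imaging.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class MediaStore
{
    public static void Save(string localPath, string blobName)
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("media");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        blob.UploadFromFile(localPath);   // the storage service handles replication from here
    }
}
```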
Hope that helps
I'm working on an enterprise application re-write using Silverlight. The project is still in its early stages of development, but there is a heavy initial data load on the application start as it pulls several sets of business objects from the server. Some of these data sets are infrequently changed once they're set up by the user; like the list of all customized data types in use by the user.
In these cases, the thought is to cache the data objects (probably in a serialized form) in Isolated Storage so there's no wait on an asynchronous call to the server to grab the data after the first application load.
I thought that Isolated Storage is meant to store configuration data such as user preferences, or to share across the in-browser and out-of-browser version of an app...that it works a lot like a cookie store.
My main concern is that I'm unsure of how secure Isolated Storage is, and I don't trust caching application data in it. To be fair, the user would also have access to the Silverlight .xap file.
Is this an appropriate use for Isolated Storage, why or why not?
It's a fair use of isolated storage, if you're comfortable with the caveats.
The first caveat in my mind is that whatever you store in isolated storage on one machine will not be available when the user fires up your app on another machine - you lose the mobility advantage of web applications over desktop installed apps. If the user spends some time configuring their preferences, etc, they will be irritated that they have to do it all over again just because they switched to a different computer to view your web app. To solve this, you should replicate the user's customizations to cloud storage so that it can be copied down to whatever machine they choose to run your web app on. Treat the isolated storage as a performance optimization cache for data that officially lives in the cloud.
I believe Silverlight isolated storage is written to disk in the user's private data area in the file system (\Users\<username>\AppData or similar). This will keep it isolated from other users on the same machine, but will not provide any protection from other programs running as the same user. I don't recall if Silverlight isolated storage is encrypted on disk; I highly doubt it.
A second caveat is that Silverlight isolated storage has a quota limit, and it's fairly small by default (1MB). The quota can be increased with a call to IncreaseQuotaTo(), which will prompt the end user to ok the request.
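As a hedged illustration of that quota dance, a helper like the one below could grow the quota before caching a serialized payload. Note that IncreaseQuotaTo() only succeeds when called from a user-initiated event such as a button click, and the file name here is arbitrary:

```csharp
// Sketch: cache a serialized payload in isolated storage, increasing the quota if needed.
using System.IO;
using System.IO.IsolatedStorage;
using System.Text;

public static class IsoStoreCache
{
    public static bool TrySave(string fileName, string serializedData)
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            long needed = Encoding.UTF8.GetByteCount(serializedData);

            // Ask the user for more room if the payload won't fit in the current quota.
            if (store.AvailableFreeSpace < needed &&
                !store.IncreaseQuotaTo(store.Quota + needed))
            {
                return false; // user declined the quota prompt
            }

            using (IsolatedStorageFileStream stream = store.OpenFile(fileName, FileMode.Create))
            using (var writer = new StreamWriter(stream))
            {
                writer.Write(serializedData);
            }
            return true;
        }
    }
}
```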
The third caveat is that if you're going to use local storage as a cache of data that lives in the cloud, you have to manage the data synchronization yourself. If the user makes changes locally, you need to push that up to the storage of authority in the cloud, and you'll have to decide when or how often to refresh the local cache from the cloud, and what to do when both have been changed at the same time (collision).
The browser cookie store is not a great metaphor for describing Silverlight isolated storage. Browser cookies for a given domain are attached to every http request that is made from the client to the server. The cookies are transmitted to the server constantly. The data in Silverlight isostorage is only accessible to the Silverlight code running on the client machine - it is never transmitted anywhere by Silverlight or the browser.
Treat Silverlight's isolated storage as a local cache of cloud data and you should be fine. Treat isostorage as a permanent storage and you'll piss off your customers because the data won't follow them everywhere they can use your web app.
Not a complete answer to your story but a data point to consider:
Beware the IO speeds of IsolatedStorage. While there has been considerable effort put into speeding it up, you may want to consider other options if you plan to do multiple small reads/writes as it can be extremely slow. (That, or use appropriate buffering techniques to ensure your reads/writes are larger and infrequent.)
What are the challenges in porting your existing applications to Azure?
Here are few points I'm already aware about.
1) No support for session affinity (Azure is stateless) - I'm aware that Azure load balancing doesn't support session affinity, hence the existing web application must be changed if it relies on session affinity.
2) Interfacing with COM - Presently I think there is no support for deploying COM components to the cloud to interface with them - if my current applications need to access some legacy components.
3) Interfacing with other systems from the cloud using non-http protocols
Other than the points mentioned above, what other significant limitations/considerations are you aware of?
Also, how these pain points are addressed in the latest release?
Our biggest challenge is the stateless nature of the cloud. Though we've tried really, really hard, some bits of state have crept through to the core, and this is what is being addressed.
The next challenge is supporting stale data and caching, as data can be offline for weeks at a time. This is hard regardless.
Be prepared for a lengthy deployment process. At this time (pre-PDC 2009), uploading a deployment package and spinning up host services sometimes has taken me more than 30 minutes (depends on time of day, size of package, # of roles, etc).
One side effect of this is that making configuration changes in web.config files is expensive because it requires the entire app package to be re-packaged and re-deployed. Utilize the Azure configuration files instead for config settings - as they do not require a host suspend/restart.
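As a sketch of that approach (the setting name and fallback are illustrative), reading from the service configuration via RoleEnvironment keeps the value editable in the portal without a redeploy:

```csharp
// Sketch: read settings from the service configuration (.cscfg) rather than web.config,
// so values can be changed without re-packaging and re-deploying the whole app.
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Config
{
    public static string Get(string name)
    {
        // Settings must be declared in the service definition (.csdef)
        // and given values in the service configuration (.cscfg).
        return RoleEnvironment.IsAvailable
            ? RoleEnvironment.GetConfigurationSettingValue(name)
            : System.Configuration.ConfigurationManager.AppSettings[name]; // local dev fallback
    }
}
```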
My biggest problem with Azure today is interoperability with other OSes. Here I am comparing Azure to EC2/Rackspace instances (even though Azure as PaaS offers a lot more than they do, e.g. load balancing, storage replication, geographical deployment, etc., in a single cheap package).
Even considering me as a BizSpark startup guy, I am not inclined to run my database on SQL Azure (the SQL 2005 equivalent), since I can't accept their pricing policy, which I'll have to bear three years after the BizSpark program ends. And they don't offer an option for MySQL or any other database, which to me is ridiculous for an SME. With EC2 I can run my MySQL instance on another Linux VM, obviously in the same network. (Azure gives you the capability to connect to a network outside theirs, but that is not really an option.)
Second, this again relates to using *nix machines. I want all my caching to be handled by memcached. With ASP.NET 4 we even get out-of-the-box memcached support through extensible output caching. The reason I am adamant about memcached is the ecosystem it provides. For example, today I can get memcached with persistent caching as an add-on, which even gives me the opportunity to store session data in memcached. Additionally, I can run map-reduce jobs on the IIS logs; this is done using Cloudera images on EC2. I don't see how I can do these things with Azure.
You see, in the case of Amazon/Rackspace I can run my asp.net web app on a single instance of Windows Server 2008 and the rest on *nix machines.
I am contemplating running my non-hierarchical data (web app menu items) on CouchDB. With Azure I get Azure Table storage, but I am not very comfortable with that at the moment. With EC2 I can run it on the same MySQL box (don't catch me on this one :-)).
If you are ready to overlook these problems, Azure gives you an environment with a lot of the grunt work abstracted away, and that's a nice thing: scaling, load balancing, a lot of very cheap storage, CDN, storage replication, out-of-the-box monitoring for services through the Fabric Controller, and so on. With EC2/Rackspace you'll have to hire a sysadmin, shelling out $150k p.a., to do these things (AFAIK Amazon provides some of these features at additional cost).
My comparisons are between Azure and Amazon/Rackspace instances (and not the cloud as a whole). To some this might seem like apples and oranges, but Azure does not provide you with instances, just the cloud with their customized offerings…
My biggest problem is/was just signing up and creating a project. And that's as far as I've got over the last month.
Either I am doing something very wrong, or that site is broken most of the time.
One important challenge is the learning curve, the lack of experienced developers, and the time it takes to become productive.
This happens with all technologies, but with the cloud there is a fundamental change in how some things are done.
If your application needs a database, I'm not sure that Windows Azure has a relational database (right now)
Also, there are other cloud computing providers that can offer you more options in configuring your virtual machine, for example. It really depends on what you actually need and want.