There are too many options to choose from in Microsoft Azure when planning an application design. Azure itself doesn't stand still; it looks like many options have been added recently. I'm a fairly green solo developer, so I need some entry points for choosing an architecture.
The application consists of the following parts:
1. Database
Classic SQL database is already implemented with Azure SQL database.
2. Server-side application. (architecture refactor needed)
For now the application is a .NET C#/WPF desktop application hosted on a classic Azure Virtual Machine running Windows Server.
This is an always-running scheduler that performs various kinds of tasks one by one.
The tasks are mainly long-running jobs: fetching data from the web, CPU-bound processing of the received data, and working with the DB.
It feels like an ancient and wrong design (keeping in mind the number of Azure features available):
a) The application really doesn't need a GUI; only the ability to control the scheduler's status is required.
b) Logically, some kinds of tasks can be performed simultaneously, while others must wait for their predecessors to finish before starting. Right now all tasks are performed one by one, which is forced by the virtual machine's performance limit. I think there must be a way to achieve parallel execution and control the results at a higher level of abstraction than inside a desktop app. I want to somehow move the scheduling logic up a level. (Maybe IaaS -> PaaS applies here?)
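To illustrate what I mean in (b), here is roughly the dependency-aware parallelism I'm after, sketched with plain TPL tasks (the task names and data are made up):

```csharp
using System;
using System.Threading.Tasks;

public class SchedulerSketch
{
    // Stand-in for the CPU-bound processing step.
    public static string Process(string a, string b) => a + "+" + b;

    public static async Task Main()
    {
        // Independent downloads can run in parallel...
        var fetchA = Task.Run(() => "data-A");
        var fetchB = Task.Run(() => "data-B");

        // ...while the CPU-bound step must wait for both to finish first.
        await Task.WhenAll(fetchA, fetchB);
        Console.WriteLine(Process(fetchA.Result, fetchB.Result)); // prints "data-A+data-B"
    }
}
```

I'd like this kind of ordering to live in some higher-level Azure service instead of inside the desktop app.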
3. Client applications.
Registered users work with the DB.
My questions:
Which server-side application design should be chosen in this case, and which Azure features are required?
Are there built-in Azure capabilities for managing registered user accounts, or is the only way to implement it as part of the application?
Did you explore other storage options, or is SQL Database what you need?
Let's start from scratch:
STORAGE: you can choose from
1. Storage - Blob, Table, Queue, and File storage, plus disks for VMs
2. SQL Database - relational database service in the cloud based on the market-leading Microsoft SQL Server engine, with mission-critical capabilities
3. DocumentDB - schema-free NoSQL document database service designed for modern mobile and web applications
4. StorSimple - integrated storage solution that manages storage tasks between on-premises devices and Microsoft Azure cloud storage
5. SQL Data Warehouse - enterprise-class distributed database capable of processing petabyte volumes of relational and non-relational data
6. Redis Cache - high-throughput, consistent low-latency data access to build fast, scalable applications
7. Azure Search - search-as-a-service for web and mobile app development
SCHEDULER: You can pick from
1. Virtual Machine
2. Cloud Service (worker role): you have more control over the VMs. You can install your own software on Cloud Service VMs and you can remote into them.
3. Batch: Cloud-scale job scheduling and compute management
4. Service Fabric: distributed systems platform used to build scalable, reliable, and easily-managed applications for the cloud
5. App Service: Scalable Web Apps, Mobile Apps, API Apps, and Logic Apps for any device
CLIENT: you can try out
1. Web Apps
2. Cloud Service (web role)
Use this link as a one-stop shop for all Azure services, beautifully categorized by functionality. From here you can pick and choose various services and map them to your app's requirements.
MASTER LIST: http://azure.microsoft.com/en-in/documentation/
Related
We have been using Azure Cloud Services (Web Roles) since 2013. We use it because In-Role Cache was the only cache available at that time that allowed a web farm to work in Azure.
As of today, App Service (formerly Web App/Web Sites) and Redis Cache are available, and App Service can do pretty much everything Cloud Services offers.
According to this comparison, we only see 4 minor areas (IMHO) that App Service can't do:
Remote desktop access to servers
Install any custom MSI
Ability to define/execute start-up tasks
Can listen to ETW events
Question
Is it worth converting the existing Cloud Service to App Service while updating In-Role Cache to Redis Cache anyway?
In other words, should we even consider hosting in an Azure Cloud Service (instead of hosting in App Service)?
I think you may get opinions on this question more than facts, so here is my opinion.
I've been using Azure since the early days when it was just Cloud Services and have done my fair share of edge case implementation with them.
Today (say, the past 1-2 years), I've taken the approach of starting off with Web Apps and WebJobs until I find a reason not to. For the majority of my clients App Service works fine, though there are some projects that still need Cloud Services.
I find the easy deployment and management of Web Apps and WebJobs a huge win for me - not having to create that monster package file and redeploy the whole thing just for small changes adds up over time.
I also find WebJobs (using the SDK) faster to become productive with than Web Roles - though I sometimes find I need a Web App with no UI to host the WebJobs if they are processor and memory hogs. The fact that you can have your code watch a queue just by adding a single QueueTrigger attribute is a huge time saver and cuts out all that boilerplate code.
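For example, the queue-watching pattern looks roughly like this with the WebJobs SDK (the queue name and message handling here are my own placeholders):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Pure helper kept separate so it can be exercised without a storage account.
    public static string Describe(string message) => "Processing: " + message;

    // The single [QueueTrigger] attribute replaces all the polling boilerplate:
    // the SDK watches the "tasks" queue and invokes this method once per message.
    public static void ProcessQueueMessage([QueueTrigger("tasks")] string message, TextWriter log)
    {
        log.WriteLine(Describe(message));
    }
}
```

No loop, no sleep, no manual dequeue/delete - the SDK handles all of that.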
I've used Redis on projects too (though none at the moment) and it was easy to work with - once you work out a few kinks and get used to it.
I want to develop a C# web application that will allow customers to go online to view current inventory levels for a retail shop I have. The shop uses a POS system with a SQL Server backend.
The app would be hosted outside the firewall on a separate server. I'm wondering whether it'd be more prudent, from a security and/or performance perspective, to create a local DB script that replicates the requisite data out to a separate DB (likely on the same server where the app is hosted), refreshing every 10-20 minutes or so, than to simply have the web app talk directly to the live POS database.
I can't afford to have the app impact the performance of the POS system in any way. The app connection would be read-only and limited to that sole inventory table, but even with pooling I'm unsure if a few hundred web users pinging the live DB would impart any latency or undesired effects.
If your web application is strictly read-only, then you can have perfect security by having the web application have only datareader permissions on your POS database - no replication or other complicated steps will be necessary.
As for performance - even a basic (Core i3, 5200rpm HDD, 2GB RAM) server can handle a few hundred simple SQL queries per second for a modestly-sized database. Considering how modern database servers cache a lot of data in RAM it means that read queries are amazingly cheap.
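A minimal sketch of that read-only account (the login, database, and table names here are hypothetical; swap in your own):

```sql
-- Create a SQL login for the web app and map it into the POS database.
CREATE LOGIN web_reader WITH PASSWORD = 'UseAStrongPasswordHere!';
USE PosDb;
CREATE USER web_reader FOR LOGIN web_reader;

-- Grant SELECT on just the inventory table rather than db_datareader,
-- so the web app cannot read anything else in the POS database.
GRANT SELECT ON dbo.Inventory TO web_reader;
```

The web app's connection string then uses `web_reader`, and even a compromised web server can only read that one table.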
I have an enterprise database store used by some rich applications, and a website with its own database store.
The enterprise applications work with local data, and some of that data (like orders, prices, ...) has to be "synchronized" to the website datastore.
On the other side, internet customers are able to edit their profiles, which have to be "synchronized" back to the enterprise datastore too.
Basically I need this architecture:
WebSite => WebSite Database <=> || Internet || <=> Enterprise Database <= Rich Applications
Merge Replication
Service Broker
Sync Framework
Pick your poison. Merge Replication is probably the easiest to deploy. Service Broker is the most performant but requires a steep learning curve. Sync Framework is not really appropriate unless you plan to deploy datasets on the client too (mobile devices).
You can do a pull or push from the WebSite Database (WSDB) to the Enterprise Database (EDB) and vice versa. It depends on what type of latency you are comfortable with. If you want near real-time syncing, go with a push. Otherwise the EDB can pull from the WSDB every hour or so through web service methods or REST calls (and vice versa).
Push Example:
The customer updates their profile and pushes the "Submit" button.
The website calls the web service method PushCustomerInfo(params) on the EDB.
Pull Example:
Every hour the EDB calls a web service method GetCustomerInfo on the WSDB that returns a Customer object.
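A rough sketch of the push side, assuming a JSON endpoint named PushCustomerInfo on the enterprise side (the URL, field names, and payload shape are all hypothetical):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class ProfileSync
{
    static readonly HttpClient Client = new HttpClient();

    // Serialize the changed profile fields; a real app would use a JSON library.
    public static string ToPayload(string customerId, string email) =>
        "{\"id\":\"" + customerId + "\",\"email\":\"" + email + "\"}";

    // Push model: fired right after the customer hits "Submit" on the website.
    public static Task PushCustomerInfoAsync(string customerId, string email)
    {
        var content = new StringContent(
            ToPayload(customerId, email), Encoding.UTF8, "application/json");
        return Client.PostAsync(
            "https://enterprise.example.com/api/PushCustomerInfo", content);
    }
}
```

The pull variant is the mirror image: the EDB's scheduled job issues a GET against the WSDB and upserts the returned Customer objects.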
What are the challenges in porting your existing applications to Azure?
Here are few points I'm already aware about.
1) No support for session affinity (Azure is stateless) - I'm aware that Azure load balancing doesn't support session affinity, hence the existing web application must be changed if it relies on session affinity.
2) Interfacing with COM - presently I think there is no support for deploying COM components to the cloud, in case my current applications need to access some legacy components.
3) Interfacing with other systems from the cloud using non-http protocols
Other than the above-mentioned points, what other significant limitations/considerations are you aware of?
Also, how are these pain points addressed in the latest release?
Our biggest challenge is the stateless nature of the cloud. Though we've tried really, really hard, some bits of state have crept through to the core, and this is what is being addressed.
The next challenge is support for stale data and caching, since data can be offline for weeks at a time. This is hard regardless.
Be prepared for a lengthy deployment process. At this time (pre-PDC 2009), uploading a deployment package and spinning up host services sometimes has taken me more than 30 minutes (depends on time of day, size of package, # of roles, etc).
One side effect of this is that making configuration changes in web.config files is expensive because it requires the entire app package to be re-packaged and re-deployed. Utilize the Azure configuration files instead for config settings - as they do not require a host suspend/restart.
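For example, a setting moved out of web.config into the service configuration can be changed without redeploying the package (the setting name here is made up):

```csharp
// In ServiceConfiguration.cscfg:
//   <ConfigurationSettings>
//     <Setting name="SmtpHost" value="mail.example.com" />
//   </ConfigurationSettings>

using Microsoft.WindowsAzure.ServiceRuntime;

public static class Config
{
    // Reads from the .cscfg; editable in the portal with no re-package/re-deploy.
    public static string SmtpHost =>
        RoleEnvironment.GetConfigurationSettingValue("SmtpHost");
}
```

Anything you expect to tune after deployment belongs in the .cscfg, not web.config.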
My biggest problem with Azure today is interoperability with other OSes. Here I am comparing Azure to EC2/Rackspace instances (even though Azure as PaaS offers a lot more than them, e.g. load balancing, storage replication, geographical deployment, etc., in a single cheap package).
Even if you consider me a BizSpark startup guy, I am not inclined to run my database on SQL Azure (a SQL 2005 equivalent), since I can't accept their pricing policy, which I'll have to bear three years after the BizSpark program ends. They don't have an option for MySQL or any other database. This, to me, is ridiculous for an SME. With EC2 I can run my MySQL instance on another Linux VM (obviously in the same network; Azure gives you the capability to connect to networks outside theirs, but that is not really an option).
2nd, this again is related to using *nix machines. I want all my caching to be maintained by memcached. With ASP.NET 4 they have even given us out-of-the-box memcached support through extensible output caching. The reason why I am adamant about memcached is the ecosystem it provides. E.g.: today I can get memcached with persistent caching as an add-on. This will even give me the opportunity to store session data with memcached. Additionally I can run MapReduce jobs on the IIS logs; this is done using Cloudera images on EC2. I don't see how I can do these with Azure.
You see, in the case of Amazon/Rackspace I can run my asp.net web app on a single instance of Windows Server 2008 and the rest on *nix machines.
I am contemplating keeping my non-hierarchical data (web app menu items) in CouchDB. With Azure I get Azure Table storage, but I am not very comfortable with that ATM. With EC2 I can run it on the same MySQL box (don't catch me on this one :-)).
If you are ready to look past these problems, Azure gives you an environment with a lot of the grunt work abstracted away. And that's a nice thing: scaling, load balancing, a lot of very cheap storage, CDN, storage replication, and out-of-the-box monitoring for services through the Fabric Controller, among others. With EC2/Rackspace you'll have to hire a sysadmin, shelling out $150k PA, to do these things (AFAIK Amazon provides some of these features at additional cost).
My comparisons are between Azure and Amazon/Rackspace instances (and not their cloud offerings). For some this might seem like apples and oranges, but Azure does not provide you with instances - just the cloud with their customized offerings…
My biggest problem is/was just signing up and creating a project. And that's how far it got over the last month.
Either I am doing something very wrong, or that site is broken most of the time.
One important challenge is the learning curve: the lack of experienced developers and the time it takes to become productive.
This happens with all technologies, but with the cloud there is a fundamental change in how some things are done.
If your application needs a database, I'm not sure that Windows Azure has a relational database (right now)
Also, there are other cloud computing providers that can offer you more options in configuring your virtual machine for example, it really depends on what you actually need and want.
Please give references/guidance for building a web application for managing all IT assets/devices.
The application consists of two components: a web application and a Windows .NET application.
The client Windows .NET application scans the entire active network, finds all IT assets like printers and scanners, and uploads all the data into the web application.
Our team is currently using ASP.NET & C# for this project.
1. Please give suggestions regarding the client application & web application interaction.
2. Suggest any library/reference required for the project.
3. Is the Microsoft Sync Framework a good fit for the client application?
4. Would Microsoft WCF be a good option for client & server interaction (for building the API)?
5. How can the client application scan the active network to find devices?
I suggest you go for a simple solution like:
1. A windows application that takes care of scanning the entire network of computers from the installed computer and then sends the information to a web service
2. A web service to accept the assets list and then save them to the database
3. An asp.net application to display and catalog the indexed assets.
The part that will become somewhat complicated is the asset discovery, since it has to be handled by the Windows application. Your best bets are
1. to depend on the windows registry
2. to use the Windows Management Instrumentation (WMI)
3. If you dare, you can program directly against Netapi32.dll: NetServerEnum or similar low-level Win32 APIs
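Option 2 (WMI) is probably the quickest to prototype. A local-machine sketch (pointing the scope and credentials at remote hosts is left out here; this runs on Windows only):

```csharp
using System;
using System.Management;  // add a reference to System.Management.dll

public class WmiScan
{
    // Pure formatting helper, kept separate from the WMI call.
    public static string Describe(string name, string manufacturer, string model) =>
        name + " - " + manufacturer + " " + model;

    public static void Main()
    {
        // Query basic hardware details of the local machine via WMI.
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, Manufacturer, Model FROM Win32_ComputerSystem");
        foreach (ManagementObject mo in searcher.Get())
            Console.WriteLine(Describe(
                (string)mo["Name"], (string)mo["Manufacturer"], (string)mo["Model"]));
    }
}
```

Other WMI classes (Win32_Printer, Win32_DiskDrive, etc.) give you the rest of the inventory the same way.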
Hope this helps you get started.
Consider this approach:
Asset Catalog Service (WCF App). Its responsibility is to act as a repository for found assets. The service encapsulates the actual storage (database, etc).
Asset Finder (WinForms app). Its responsibility is to scan the entire active network, find all IT assets, and call the Asset Catalog Service to register each device. The Asset Catalog Service will reconcile whether a device has not been registered yet (and therefore will be stored), has been updated (and therefore will be updated), or is unchanged (and therefore will be ignored). You may have multiple instances of the Asset Finder running at the same time to speed up the discovery process. The Asset Catalog Service or another service may be used to keep track of the asset finders' work pool.
The website (Asp.Net Web app). Its responsibility is to visualize the asset catalog. It queries Asset Catalog Service and display the result to the end users.
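A hypothetical WCF contract for the Asset Catalog Service sketched above (all names are my own; shape it to your actual asset model):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Asset
{
    [DataMember] public string HostName { get; set; }
    [DataMember] public string DeviceType { get; set; }
    [DataMember] public string IpAddress { get; set; }
}

[ServiceContract]
public interface IAssetCatalog
{
    // The service decides whether to insert, update, or ignore the device.
    [OperationContract]
    void RegisterAsset(Asset asset);

    // Used by the website to visualize the catalog.
    [OperationContract]
    Asset[] GetAssets();
}
```

The Asset Finder and the ASP.NET site then share nothing but this contract, which keeps the storage fully encapsulated behind the service.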
I can't see any obvious use case for the Microsoft Sync Framework here.
Unfortunately, I don't have any experience in writing any asset discovery algorithm. Others might be able to help on that point.
Hope that helps.
This is a bit off topic, but I would suggest looking at options from existing vendors that will meet your overall business requirements. Asset detection and management is not a simple task and creating an in-house application to do it is often a waste of time and money that could be better spent on core business needs or other IT support/resources. Purchasing software from an existing vendor will give you a much better solution than whatever you can code up in a week, a month, or even a year. If you are trying to catalog even a medium sized network with over 100 nodes then using an established system could end up being much cheaper than building your own. Here are a few examples of existing products:
http://www.vector-networks.com/components/network-discovery-and-mapping.php
http://www.manageengine.com/products/service-desk/track-it-assets.html
I haven't used either of them, but I have been down a similar route trying to create an in-house server monitoring and management system. We spent two weeks working on a prototype that was eventually scrapped for a 3rd-party system that cost $1,000 a year. It would have cost us at least $10,000 to build something with 1/10th the features, let alone support and maintain it. Even just searching for a FOSS solution and then using that as the basis for your project (something like nmap) would be better than starting from scratch.
Best of luck!
A Windows-based application should be used to scan the network and collect info about the devices/assets available on the network, then save that information in the database.
Look at the following project to get an idea of how you might scan the network: http://sourceforge.net/projects/nexb/
The same database should be used by the ASP.NET app for reporting purposes; you may also use it to group/tag/categorize various assets.
Also store scanned devices in separate "departments" depending on their IP scheme.