I'm new to Azure and cloud platform development. I have a web application in which I create multiple companies using a company table and separate the company_products using a foreign key: companyid.
Is it possible to run multiple instances in which each has its own SQL database? I want to do this because every customer is unique and may need tailored modules.
There are no restrictions on how you build your app. You may create as many databases as you wish, and you may have multiple web apps (whether in the same App Service plan or across multiple plans). How you do this is strictly up to you, but no - there is nothing that forces you to use a single database for anything.
If I am understanding your question correctly, you would like each of your customers to have their own instance and separate databases?
Have a look at this article: https://msdn.microsoft.com/en-us/library/ff966499.aspx
I'd suggest using Azure App Service and running each of your customers in their own app. You can save money, as all the apps run under the same App Service Plan. Usage on one instance, however, does not affect performance on another. There are quite a lot of benefits to App Service.
https://azure.microsoft.com/en-us/documentation/articles/azure-web-sites-web-hosting-plans-in-depth-overview/
For starting out, I'd suggest using an individual Azure SQL Database for each customer. This saves money, as you can spin up S0/S1 databases relatively cheaply. Then you can set the connection string through the Azure Portal for each app you have under your App Service Plan.
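As a minimal sketch of what the application side of that looks like (the connection string name "DefaultConnection" is just an example; use whatever name you configure in the portal), the value set in the portal overrides anything in web.config at runtime and the code simply reads it:

    using System.Configuration; // needs a reference to System.Configuration
    using System.Data.SqlClient;

    public static class CustomerDb
    {
        public static SqlConnection Open()
        {
            // "DefaultConnection" is a placeholder; the value set in the Azure
            // Portal for each app overrides the web.config entry at runtime.
            var cs = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;
            var conn = new SqlConnection(cs);
            conn.Open();
            return conn;
        }
    }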
If you end up scaling quickly, have a look at Elastic Databases. You pay for a database server and get something like 200 databases per server, so it's really only economical if you have quite a few customers and can justify the cost. However, there are some useful Azure tools that make managing elastic database pools easier. Check out the Azure documentation for more details on this.
Once you have this architecture set up, you can either manage your instances/databases through the portal or set up another logic app to manage all your instances. It would be a lot less development work to just manage it through the portal when starting out; however, if this is a SaaS product and is going to scale quickly, you may want to invest ahead of time in automating some processes so that deploying new instances doesn't have to be done manually.
I prefer this approach as well, because you can then point different subdomains at your individual customer apps (e.g. customer1.yourdomain.com, customer2.yourdomain.com). Each app already has its own domain under azurewebsites.net, so if you don't mind using that domain you can just stick with it. It's nice because you then don't have to manage your own DNS or worry about SSL certificates and whatnot, as it's already managed for you. If you do want your own custom domains, there's plenty of documentation on this. Azure also has a DNS service, so you can automate creating CNAME records while automatically spinning up a new app, deploying a DB to your pool, initializing the DB, etc.
As David says, you can do this however you like. My suggestion would be to use connection strings in your application's web.config to control which database instance you communicate with; you can then configure your Azure web app deployment's "slot settings" in the Azure Portal (or your ARM template) to override the web.config settings for that deployment.
So - you can create your ARM template, which describes your infrastructure and may deploy your App Service plan, web app, web config, and SQL database to a specifically named resource group (i.e. targeting one of your customers). This would have configuration that points to the database instance in that resource group, and possibly other config that turns customer-specific functionality on or off (see the sketch after the link below). Try to keep the code and deployment as common as you can, otherwise you'll end up with a maintenance nightmare in the future.
See https://azure.microsoft.com/en-gb/documentation/articles/web-sites-configure/ for information on configuration settings in azure web apps.
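As a rough sketch of that per-customer toggle idea (the setting name EnableCustomModuleX is purely illustrative, not anything Azure defines), the app reads an app setting and each deployment's slot settings override it:

    using System.Configuration;

    public static class CustomerFeatures
    {
        // Hypothetical setting name; each customer's deployment overrides it
        // via Azure Portal app settings / slot settings, which take precedence
        // over the web.config value at runtime.
        public static bool CustomModuleXEnabled
        {
            get
            {
                bool enabled;
                return bool.TryParse(ConfigurationManager.AppSettings["EnableCustomModuleX"], out enabled) && enabled;
            }
        }
    }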
I'm trying to use Entity Framework Core with SQL Server (not SQLite) in a Xamarin.Forms app, but I just can't figure it out! All the tutorials explain how to use EF Core with SQLite. Is there any clear documentation or tutorial?
Regarding connecting to a remote database from Xamarin.Forms, here is an article you can take a look at:
https://xamarinhelp.com/connecting-remote-database-xamarin-forms/
You may be wondering why you can't just connect directly to a database from your mobile app. The main reasons are below, followed by a minimal sketch of calling an API from the app instead:
Security
You don’t want your mobile client apps to have a database connection string with a username and password in it. It opens your database up to anyone. You can create a user with read only permissions and only allow access to certain tables, but they could still see all data in these tables. On an API, you can implement additional security checks and have authentication based on OAuth or an existing user management system.
Performance
Database connections weren’t designed to go over high latency connections. It is likely your database connection would keep dropping, forcing you to reconnect every time.
Control
With an API you can control the flow of data to and from your database. You can implement rate limiting, and monitoring of all of your requests. If you need to change business logic, or even what database or resources are used via each API request, you can do this on the server, without having to redeploy a mobile app.
Resources
With an API, you reduce the need for server resources. While you may have to set up another server to handle the API, a REST API is designed to be stateless and efficient. Scaling to many users in the future is easier with an API.
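Here is that sketch: a minimal client-side example, assuming you stand up a REST endpoint somewhere (the URL and the Product shape are made up for illustration, and Newtonsoft.Json is assumed as the serializer). The Xamarin.Forms app talks to the endpoint with HttpClient instead of opening a SQL connection:

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    // Shape of the data returned by the (hypothetical) API.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductApiClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // "https://api.example.com" is a placeholder; point this at your own service.
        public async Task<List<Product>> GetProductsAsync(int companyId)
        {
            var json = await Http.GetStringAsync(
                "https://api.example.com/api/companies/" + companyId + "/products");
            return JsonConvert.DeserializeObject<List<Product>>(json);
        }
    }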
So my application is composed of a handful of separate .NET components that all run in Azure. To give you an idea of what's involved:
A main ASP.NET MVC5/Web API 2 REST service that runs as an Azure website (I think they renamed these to web apps?).
A SQL database that the main REST service uses.
Another internal Web API REST service that the main REST service talks to that runs as an Azure website.
An Azure storage table that the internal Web API REST service uses.
3 scheduled jobs (just .NET exe's) that do work in the background and also talk to the main SQL database.
All that's running great in Azure right now. My problem is automating the deployment and configuration.
Right now it's all manual. I right-click and publish both web apps from Visual Studio. I build and FTP up the web jobs. The database and Azure storage already exist so I don't have to re-set them up.
But say something bad happens - a datacenter goes down or something. I'd like to be able to spin up a new version of my app (with all those components) that is ready to go with minimal effort.
I'm pretty new to the world of Azure so I'm not sure where to start. What are my options?
You are looking to automate deployments in Azure. I recommend using ElasticBox to solve this.
To achieve the automation, you will need to create a box for every different service or component you need to deploy (a box is the abstraction unit ElasticBox uses to define the installation and configuration of a service or application deployment in any cloud).
It's also possible to create boxes based on VM instances, VM roles, or worker roles, and to automate the deployment of Microsoft SQL Server; in short, nearly every option offered by Azure is covered.
Then, with those boxes completed (they can be customized and can reuse the legacy code from your previous manual installation), you can deploy multiple VMs with almost no manual intervention: just one click, or a command with some parameters.
A box includes the variables necessary for your deployment (you can set default values for them) and your existing scripts (in this case probably PowerShell, but they could be Bash, Python, Perl, Java, or any other language).
When you deploy your boxes, ElasticBox:
Creates a cloud service or VM in the location you choose, with the Azure configuration you set up beforehand. It takes care of provisioning the VM with your Azure provider, or with nearly any other cloud provider on the market.
Installs and configures files with your specified variables, and starts the SQL or web services you have defined.
Other ways to interact with the service:
A Jenkins plugin can be used to build a CI environment, connecting code updates or pull requests to automated deployments in Azure or any other public cloud.
A command-line tool lets you deploy your boxes as VMs and manage the deployed VM instances.
Azure Resource Manager (ARM) is intended to solve exactly the issues you described.
The basic idea is that you use a JSON template to describe all of your services. You can then give that template to ARM and it will create the services as defined in the template. If you want to make a change, instead of doing it imperatively (via PowerShell or manually in the portal), just update your template, pass it to ARM, and it will make whatever changes are necessary to bring the services in line with your template.
Some resources:
ARM talk at MS Ignite 2015
ARM template language reference
Quickstart templates on GitHub
Azure Resource Explorer - view ARM templates of existing resources
Resource Group Deployment Projects in Visual Studio
I think you're looking for something to help you handle deployments to your Windows Azure servers. If that is the case, I recommend looking into Jenkins CI. There are many resources available online on getting Jenkins and Azure to work together.
First of all, I took a look at every related topic on here about this issue; however, none of them fully answered my question.
Currently I am working on a desktop app, coded in C#/WPF, that requires a MySQL connection both for authentication and for storing users' custom lists, etc.
However, the problem is that apparently allowing everyone to connect remotely to the MySQL db is not good practice. Also, my current host requires IPs to be whitelisted before they can connect to the db.
What are my options on this?
Thank you in advance
You should look into creating a web service (SOAP), an HTTP web API (REST), or some other middleware to abstract your data storage.
This has the benefits of:
Allows you to move much of the business logic out of your desktop app and into the middleware.
Allows you to keep business logic out of SQL, which might be a bottleneck.
Allows you to update your business logic without redistributing your desktop app (easier if you don't have direct control of all the desktops).
Allows you to control authentication (many web servers have their own modules and methods of authentication). Your app would control access, and storage would be accessed under its own service account.
Allows you to completely change your data storage (let's say in the future you store some data in SQL, some in MongoDB, some in cloud storage), once again without having to update all your desktops.
Allows you to scale out your front ends and possibly even your backend storage (for example, read/write DB replicas).
If you're already working with C#, then the new MVC4 web-api should be a good fit. Read more here:
http://www.asp.net/web-api
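As a minimal sketch of what such a middleware endpoint could look like (the ListsController name and the data it returns are made up for illustration), a Web API controller keeps the database credentials on the server and exposes only the operations you choose:

    using System.Collections.Generic;
    using System.Web.Http;

    // Hypothetical controller exposing user lists; the desktop app calls this
    // over HTTPS instead of opening a MySQL connection itself.
    public class ListsController : ApiController
    {
        // GET api/lists/{id}
        public IEnumerable<string> Get(int id)
        {
            // In a real service this would query MySQL using credentials that
            // live only on the server (connection string or app pool identity).
            return new[] { "favourites", "watch-later" };
        }
    }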
If you go that route, you could control access in your service and have the service access the database either via credentials in a connection string or, if you use IIS, via credentials on the application pool mapped to your site.
If you're shipping your desktop app (you're not hosting the DB), then you can also self-host Web API in its own exe if your customers don't want to install/manage IIS.
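A rough sketch of that self-hosting option, assuming the Microsoft.AspNet.WebApi.SelfHost package is installed (this is just one of several ways to host Web API outside IIS):

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    class Program
    {
        static void Main()
        {
            // Hosts the same ApiController classes inside a plain console exe.
            var config = new HttpSelfHostConfiguration("http://localhost:8080");
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait();
                Console.WriteLine("Listening on http://localhost:8080 - press Enter to quit.");
                Console.ReadLine();
            }
        }
    }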
Finally, if your MySQL is online, your middleware could live in the cloud (Azure, etc.).
Create a web service, such as with WCF or MVC Web API, through which your app can pass the user's credentials and authenticate. I'd recommend HTTPS for transport security.
Something that seems to be absent from the otherwise great new features for Windows Azure (announced on June 7th) is the ability to define distributed caches across the reserved instances of a Web Sites cluster in Reserved Instance mode.
As of now it seems to be possible to create distributed caches only for standalone web roles or worker roles. Does anyone know a workaround, or whether this is something that is coming?
The reason I'm asking is that it forces me to create a dedicated worker role for caching, and since I'm constrained by costs I can't afford another three instances just for caching. This leaves me with a caching service that's not fault tolerant, when in reality my three web roles hosting the websites would be a) fault tolerant and b) able to contribute enough memory to the distributed cache that I'd gain a much larger cache without the single point of failure of a single caching worker role.
This scenario is not supported as of today by Windows Azure Caching (Preview). Thanks for the feedback. I will take this up with the appropriate folks on our team to consider it for future releases.
As mentioned by Jason and Win, for now you can use Windows Azure Shared Caching, though you are right that it is limited in size and has a quota system.
Previously known as the AppFabric Cache; I think this does what you want?
http://msdn.microsoft.com/en-us/library/windowsazure/hh914133.aspx
http://msdn.microsoft.com/en-us/magazine/gg983488.aspx
You sure can create a dedicated cache for Windows Azure Web Sites in reserved mode. As of now you may not find out how to create it in the Windows Azure June SDK (1.7); however, if you really want to do it, you need to accomplish it manually.
I had some discussion around this, and after some digging I found that it can be done by first understanding the dedicated cache in a Windows Azure web role and then migrating the references and configuration to your ASP.NET website. Here are some steps you can follow to try it yourself, with a usage sketch after them:
Create a web role with a dedicated cache
Understand the references and configuration settings used for the dedicated cache in the web role
Now create your ASP.NET website and migrate the dedicated-cache-related settings and references to your Windows Azure website
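Once the client assemblies and the dataCacheClient configuration have been carried over, usage from the website looks the same as it does in a web role. A minimal sketch, assuming the Windows Azure Caching client libraries are referenced:

    using Microsoft.ApplicationServer.Caching;

    public static class CacheExample
    {
        // Uses the default cache defined in the migrated dataCacheClient config.
        private static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();

        public static void Demo()
        {
            Cache.Put("greeting", "hello from the dedicated cache");
            var value = (string)Cache.Get("greeting");
        }
    }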
I am going to write a web app, hosted on a Windows 2003 server, to allow me to connect to local and remote servers and do some basic things.
The webapp will be hosted on serverA. It will need to be able to copy files/folders from one folder to another on this server.
It will need to be able to connect to ServerB and copy files in the same way, e.g. copy \\serverB\path\to\sourcefiles to \\serverB\path\to\destinationfiles
ServerB hosts an installation of MSSQL 2008; I want to be able to create new databases, logins, etc.
How do I go about this, please? I've been reading a bit about Windows Authentication, Impersonation, and Delegation, but I don't know where to focus.
thanks
S
To be honest, there isn't really a one-size-fits-all answer to your question; however, there are a number of things you need to take into consideration early in development to ensure that your platform is built on solid foundations.
From the description you have given, the most critical consideration has to be security, and everything you develop has to have this at its core. Judging by your post, if the wrong person were to access your front end, they could wreak havoc.
As for the model to use, I would suggest Windows Authentication, as this is built into the framework and gives you the ability to segregate users into groups with differing levels of access. It will also open up some of the functionality you need, i.e. network copying of files, etc.
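As a rough sketch of the network-copy part, assuming the site uses Windows Authentication with impersonation and that delegation to ServerB's share is configured (the paths are placeholders based on the question), the ASP.NET code can copy under the caller's identity like this:

    using System.IO;
    using System.Security.Principal;
    using System.Web;

    public static class RemoteCopy
    {
        public static void CopyOnServerB()
        {
            // Run the copy under the authenticated caller's Windows identity.
            var identity = (WindowsIdentity)HttpContext.Current.User.Identity;
            using (WindowsImpersonationContext ctx = identity.Impersonate())
            {
                // Placeholder UNC paths; Kerberos delegation is needed so the
                // impersonated token is honoured on the remote share.
                File.Copy(@"\\serverB\path\to\sourcefiles\report.txt",
                          @"\\serverB\path\to\destinationfiles\report.txt",
                          overwrite: true);
            }
        }
    }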
As for the database management aspect, this again can easily be done via Windows Authentication, as you can grant (in SQL Server) Windows users the ability to perform certain tasks, i.e. Create Database, Create Login, drop x, etc.
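A minimal sketch of that, assuming the Windows account the site runs as (or impersonates) has already been granted permission to create databases and logins on ServerB's instance; the server, database, and login names are placeholders:

    using System.Data.SqlClient;

    public static class SqlAdmin
    {
        public static void CreateDatabaseAndLogin()
        {
            // Integrated Security uses the Windows identity the code runs under.
            var cs = "Server=serverB;Integrated Security=SSPI;"; // placeholder server name
            using (var conn = new SqlConnection(cs))
            {
                conn.Open();
                // Plain T-SQL, as suggested below; names are illustrative only.
                new SqlCommand("CREATE DATABASE CustomerDb1", conn).ExecuteNonQuery();
                new SqlCommand("CREATE LOGIN [MYDOMAIN\\AppUser] FROM WINDOWS", conn).ExecuteNonQuery();
            }
        }
    }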
All this said, it of course assumes that the two servers share user credentials, i.e. are joined to the same domain controller, etc.
Another method would be to use the web "interface" as a pass-through onto a WCF service that operates under a specific user account with the access you need. You would then separately manage authentication/authorisation in whatever manner you decide.
Like I said, there's no simple one-size-fits-all answer - but hopefully this will give you something to chew on.
If your goal is to create new databases or logins, why can't you use the create database and create login commands?