I am planning to move a game server of mine to Amazon EC2. Right now the actual server runs on .NET Framework 3.5 on a dedicated Windows server. Since it is a personal side project, it's quite expensive to dedicate a full server to it, so I would like to move it to the cloud (Amazon EC2 or maybe Windows Azure).
Has anyone accomplished such a thing? Is it possible to do so? If so, could you point me to some documentation on the subject? I have only been able to find docs on setting up web servers over HTTP.
The server binds and listens on two TCP sockets (with the no-delay option) on two different ports.
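For reference, the setup is essentially this (a minimal sketch; the port numbers here are made up):

```csharp
using System.Net;
using System.Net.Sockets;

// Two listeners on two ports; NoDelay disables Nagle's algorithm.
TcpListener gameListener = new TcpListener(IPAddress.Any, 7777);
gameListener.Server.NoDelay = true;
gameListener.Start();

TcpListener chatListener = new TcpListener(IPAddress.Any, 7778);
chatListener.Server.NoDelay = true;
chatListener.Start();

// Accepted connections get NoDelay as well.
TcpClient client = gameListener.AcceptTcpClient();
client.NoDelay = true;
```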
Thanks a lot!
Kel
With EC2 you have full control of the server. That means you'll be able to deploy your app without much modification and have full control to tune the system to your needs. I'm not familiar with game servers, but if you need to tune your environment (ports, accounts, services, etc.) then EC2 is probably the platform for you.
If your application is very light then you may be able to get away with using Micro EC2 instances, which only cost about 3-5 cents/hr. Cost comparisons between EC2 and Azure are a bit challenging, but my understanding is that Azure can get expensive due to their billing methodology. I recently wrote a short cloud comparison article that gives an overview of the main players: http://blog.labslice.com/2010/10/choosing-your-cloud.html.
There's not much more to say. The cloud solutions can be quite confusing: each tends to come with unique terminology, a vast number of services and certain peculiarities. In short, you're best off just testing both EC2 and Azure to get the ball rolling. Costs are pretty low and there's no lock-in for testing.
Simon # http://LabSlice.com
You should be able to do this on Azure using a custom AppFabric ServiceBus binding, with TcpRelayConnectionMode = Hybrid.
There's some background on how this works here.
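A minimal sketch of what that looks like, assuming the classic AppFabric SDK; the contract, namespace and service path are placeholders:

```csharp
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IGameService
{
    [OperationContract]
    void Ping();
}

public class GameService : IGameService
{
    public void Ping() { }
}

public class HostProgram
{
    public static void Main()
    {
        // Hybrid mode starts relayed through the ServiceBus, then upgrades
        // to a direct TCP connection between the peers when possible.
        NetTcpRelayBinding binding = new NetTcpRelayBinding();
        binding.ConnectionMode = TcpRelayConnectionMode.Hybrid;

        ServiceHost host = new ServiceHost(typeof(GameService));
        host.AddServiceEndpoint(
            typeof(IGameService),
            binding,
            ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "game"));
        host.Open();
    }
}
```

You'll also need to attach your ServiceBus credentials to the endpoint; the details depend on the SDK version.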
I know you already accepted an answer, but if you are running your server 24/7 it may just be cheaper to get dedicated hosting. Doing the math, it would cost $86.40/mo. to run a small instance (I used small instead of micro because you also have to factor in EBS pricing for the data; the micro instance has no local storage). A Google search for "cheap dedicated hosting" gave me this provider at $66.95/mo. ($37.95 for the server + $29 for using Windows instead of Linux).
If you are doing testing, I would recommend using EC2 to get things working smoothly, but when you are ready to deploy and want the game running all the time, you can save a lot of money by going with a traditional hosting provider instead of cloud computing.
We have been using Azure Cloud Services (Web Roles) since 2013. We use it because In-Role Cache was the only cache available at the time to make a web farm work in Azure.
As of today, App Service (formerly Web App/Web Sites) and Redis Cache are available, and App Service can do pretty much what Cloud Services offers.
According to this comparison, we only see four minor areas (IMHO) where App Service falls short:
Remote desktop access to servers
Install any custom MSI
Ability to define/execute start-up tasks
Can listen to ETW events
Question
Is it worth converting the existing Cloud Service to an App Service while updating In-Role Cache to Redis Cache anyway?
In other words, should we even consider hosting in an Azure Cloud Service (as opposed to an App Service)?
I think you may get opinions on this question more than facts, so here is my opinion.
I've been using Azure since the early days when it was just Cloud Services and have done my fair share of edge case implementation with them.
Today (say, the past 1-2 years), I've taken the approach of starting off with Web Apps and WebJobs until I find a reason not to. For the majority of my clients App Service works fine, though there are some projects that still need Cloud Services.
I find the easy deployment and management of Web Apps and WebJobs to be the huge win for me; not having to create that monster package file and redeploy the whole thing just for small changes adds up over time.
I also find WebJobs (using the SDK) are faster to become productive with than Web Roles, though I sometimes find I need a Web App with no UI to host the WebJobs if they are processor and memory hogs. The fact that you can have your code watch a queue just by adding a single QueueTrigger attribute is a huge time saver and cuts out all that boilerplate code.
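To illustrate, a minimal sketch using the WebJobs SDK (the queue name is made up; the storage connection strings come from the app config):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The SDK watches the "orders" queue and invokes this method for each
    // message; no polling or dequeuing boilerplate required.
    public static void ProcessQueueMessage(
        [QueueTrigger("orders")] string message, TextWriter log)
    {
        log.WriteLine("Processing: " + message);
    }
}

public class Program
{
    public static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }
}
```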
I've used Redis on projects too (though none at the moment) and it was easy to work with - once you work out a few kinks and get used to it.
Our company uses a system of which I am the sole developer. It is a C#-based desktop application that runs on some 50 workstations, all connecting to one central SQL Server database.
Our network administrator is now looking at presenting the application through Terminal Services, something that I know nothing about, yet.
As I started Googling around I saw that, apparently, some applications do not work under Terminal Services. MS Paint is mentioned as an example. So this got me wondering, what does a developer need to know to make sure that his/her application works in Terminal Services?
I don't have the time right now to investigate TS in depth but I'm hoping there might be an article somewhere that is written for developers. As in "Things not to do when you develop an application that will be run in Terminal Services".
Terminal Services RemoteApp works pretty well if your application is designed to be multi-user compatible. You will need to ensure that user-session-related data is not shared but placed in isolated storage, for example with IsolatedStorageContainment set to DomainIsolationByUser.
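A minimal sketch of writing per-user data to isolated storage (the file name and contents are arbitrary):

```csharp
using System.IO;
using System.IO.IsolatedStorage;

// Per-user, per-domain store: each logged-in Terminal Services user gets
// an isolated copy instead of sharing one file on the server.
using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForDomain())
using (IsolatedStorageFileStream stream =
           new IsolatedStorageFileStream("settings.dat", FileMode.Create, store))
using (StreamWriter writer = new StreamWriter(stream))
{
    writer.WriteLine("user-specific data");
}
```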
Here are some useful links:
http://www.fmsinc.com/microsoftaccess/terminal-services/remoteapp.htm
https://www.youtube.com/watch?v=Nf20-76dMcg
https://msdn.microsoft.com/en-us/library/3ak841sy%28v=vs.110%29.aspx
I've got a chat application (web service) running on a website hosted by a web farm, and I don't know how to temporarily store the chat messages. I'm using long polling to save resources, and I have specified a shared machine key.
Because it's running on a web farm, HttpApplicationState won't work, and saving each message to my database would cause a lot of overhead; I doubt that would be a good idea.
So is there any other approach to saving the messages in server "memory", given the web farm?
The classic solution to this is to use a distributed cache; it's not as popular in the .NET world as it might be, but here's an article on MSDN. Microsoft has a product, or you can use the open source Memcached, for which .NET client libraries and Windows builds are also available.
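For example, with the Enyim client (one of the .NET memcached client libraries; the key scheme below is made up, and the server list comes from web.config), storing and reading a message looks roughly like this:

```csharp
using Enyim.Caching;
using Enyim.Caching.Memcached;

// The client reads the memcached server addresses from configuration.
MemcachedClient cache = new MemcachedClient();

// Hypothetical key scheme: one entry per chat room.
cache.Store(StoreMode.Set, "chat:room42", "hello world");
string message = cache.Get<string>("chat:room42");
```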
Please note that while distributed caching is very cool when it works, it does introduce a lot of additional complexity, and exciting new ways for bugs to creep into your app. I'd only go down this route if I really, really needed to.
I found some more help on the topic here. It introduces different caching techniques that don't require third-party software.
What are the challenges in porting your existing applications to Azure?
Here are a few points I'm already aware of.
1) No support for session affinity (Azure is stateless) - I'm aware that Azure load balancing doesn't support session affinity, hence the existing web application has to be changed if it relies on it (see the config sketch after this list).
2) Interfacing with COM - Presently I think there is no support for deploying COM components to the cloud and interfacing with them, which matters if my current applications need to access some legacy components.
3) Interfacing with other systems from the cloud using non-HTTP protocols
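Regarding point 1, the usual workaround is to move session state out of process so that any instance can serve any request; a minimal web.config sketch (the connection string is a placeholder):

```xml
<!-- Out-of-process session state, so no server affinity is needed -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="data source=YOUR_DB;user id=...;password=..."
                cookieless="false" />
</system.web>
```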
Other than the above-mentioned points, what other significant limitations/considerations are you aware of?
Also, how are these pain points addressed in the latest release?
Our biggest challenge is the stateless nature of the cloud. Though we've tried really, really hard, some bits of state have crept through to the core, and this is what is being addressed.
The next challenge is supporting stale data and caching, as data can be offline for weeks at a time. This is hard regardless.
Be prepared for a lengthy deployment process. At this time (pre-PDC 2009), uploading a deployment package and spinning up host services sometimes has taken me more than 30 minutes (depends on time of day, size of package, # of roles, etc).
One side effect of this is that making configuration changes in web.config files is expensive because it requires the entire app package to be re-packaged and re-deployed. Utilize the Azure configuration files instead for config settings - as they do not require a host suspend/restart.
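For example, a sketch of reading such a setting (the setting name is made up; it has to be declared in the service definition):

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

// Reads a value from ServiceConfiguration.cscfg; it can be changed in the
// portal without re-packaging or re-deploying the application.
string conn =
    RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");
```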
My biggest problem with Azure today is interoperability with other OSes. Here I am comparing Azure to EC2/Rackspace instances (even though Azure as PaaS offers a lot more than they do, e.g. load balancing, storage replication, geographical deployment, etc., in a single cheap package).
Even if you consider me as a BizSpark startup guy, I am not inclined to run my database on SQL Azure (SQL 2005 equivalent), since I can't accept their pricing policy, which I'll have to bear after the three years of the BizSpark program. They don't offer an option for MySql or any other database. This, to me, is ridiculous for an SME. With EC2 I can run my MySql instance on another Linux VM (obviously in the same network; Azure gives you the capability to connect to networks outside theirs, but that is not really an option).
Second, and again related to using *nix machines: I want all my caching to be maintained by memcached. With ASP.NET 4 they have even given us memcached support out of the box through extensible output caching. The reason I am adamant about memcached is the ecosystem it provides. For example, today I can get memcached with persistent caching as an add-on, which even gives me the opportunity to store session data in memcached. Additionally, I can run map-reduce jobs on the IIS logs; this is done using Cloudera images on EC2. I don't see how I can do these with Azure.
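For what it's worth, plugging memcached into ASP.NET 4's extensible output caching means writing a provider. A rough sketch backed by the Enyim client (the class is hypothetical, not a drop-in implementation; it would still need to be registered under <caching><outputCache> in web.config):

```csharp
using System;
using System.Web.Caching;
using Enyim.Caching;
using Enyim.Caching.Memcached;

// Hypothetical output cache provider backed by memcached via Enyim.
public class MemcachedOutputCacheProvider : OutputCacheProvider
{
    private static readonly MemcachedClient Cache = new MemcachedClient();

    public override object Get(string key)
    {
        return Cache.Get(key);
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add must not overwrite an existing entry.
        object existing = Cache.Get(key);
        if (existing != null)
            return existing;
        Cache.Store(StoreMode.Add, key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        Cache.Store(StoreMode.Set, key, entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Cache.Remove(key);
    }
}
```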
You see, in the case of Amazon/Rackspace I can run my ASP.NET web app on a single Windows Server 2008 instance and the rest on *nix machines.
I am contemplating running my non-hierarchical data (web app menu items) on CouchDB. With Azure I get Azure Table storage, but I am not very comfortable with that ATM. With EC2 I can run it on the same MySql box (don't catch me on this one :-)).
If you are ready to overlook these problems, Azure gives you an environment with a lot of the grunt work abstracted away. And that's a nice thing: scaling, load balancing, a lot of very cheap storage, CDN, storage replication, and out-of-the-box monitoring for services through the Fabric Controller, among others. With EC2/Rackspace you'll have to hire a sysadmin, shelling out $150k p.a., to do these things (AFAIK Amazon provides some of these features at additional cost).
My comparisons are between Azure and Amazon/Rackspace instances (and not clouds). To some this might seem like apples and oranges, but Azure does not provide you with instances, just the cloud with their customized offerings…
My biggest problem is/was just signing up and creating a project. And that's as far as it has gotten over the last month.
Either I am doing something very wrong, or that site is broken most of the time.
One important challenge is the learning curve: the lack of experienced developers and the time it takes to become productive.
This happens with all technologies, but with the cloud there is a fundamental change in how some things are done.
If your application needs a database, I'm not sure that Windows Azure has a relational database (right now).
Also, there are other cloud computing providers that can offer you more options in configuring your virtual machine, for example. It really depends on what you actually need and want.
I am making a medium-sized standard LOB application. Currently it's a web application, but I am formulating a proposal to revamp it into a desktop remote application. By this I mean that the database and the application server will be hosted in a remote location, and the client application will communicate with the server via the internet (through WCF / web services / remoting).
My question is this: the only reason I am shifting this from a web platform is the constraints of the web (I don't want to do AJAX or JavaScript to minimize those constraints, so please no JS/AJAX recommendations). I have made traditional desktop applications and they are considerably fast, but I have never made a remote or distributed application. I am not sure whether the application will be faster than the web version or not.
As I understand it, the remote desktop application would be much faster. For one, there won't be any postbacks involved (I hate them so much). The data will obviously come via the internet, so in that respect, is it better to shift to the remote desktop approach just for sheer speed and power?
Any help in the right direction would be greatly appreciated. Many thanks.
Zeeshan
I think the biggest advantage of desktop clients over web applications is freedom in UI design, and you don't have to worry about any inconsistencies in the client environment, although those are not an issue if you are using a client that runs on Silverlight.
Personally I don't like web applications that require a lot of user interaction. There are some that are a pleasure to use, but I think it is very easy to do it the wrong way and end up with a buggy or unresponsive application, probably because of incompatibilities between browsers (I have IE, Firefox and Chrome installed on my computer, and I use one for some websites because they run faster on it, and the others for other sites because those pages only show up correctly in them). Though this might not be an issue for a Silverlight client.
In terms of network speed, depending on what goes over the wire, even with binary serialization remoting can have quite a bit of overhead. For example, along with the data it writes full class names, library names and their versions, so it can get pretty big and slow even for small amounts of data (although it should still be smaller than HTTP). It also has the same problems that HTTP has over unreliable connections, because it uses a similar protocol. For one project we had to write a custom serializer for some objects because binary serialization alone was generating 200K, while our custom serializer for those objects generated 50K. Then we ended up writing our own network protocol, because the one that comes with the runtime frequently stalled over unreliable wireless networks, and remoting doesn't give you any control over the socket it creates (which makes sense in terms of encapsulation, but you can't close it and force it to open a new one).
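To give a flavour of the kind of custom serializer I mean (the message type here is hypothetical): instead of BinaryFormatter's self-describing stream, you write just the field values in a fixed order.

```csharp
using System.IO;

// Hypothetical message type with hand-rolled serialization: only the raw
// field values go on the wire, none of the type/assembly metadata that
// BinaryFormatter emits.
public class PlayerState
{
    public int Id;
    public float X;
    public float Y;

    public void WriteTo(BinaryWriter writer)
    {
        writer.Write(Id);
        writer.Write(X);
        writer.Write(Y);
    }

    public static PlayerState ReadFrom(BinaryReader reader)
    {
        PlayerState state = new PlayerState();
        state.Id = reader.ReadInt32();
        state.X = reader.ReadSingle();
        state.Y = reader.ReadSingle();
        return state;
    }
}
```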
(I am assuming you are asking about remoting vs. web apps, not remote desktop vs. web apps, because of your note about postbacks; you can't avoid them with a remote desktop session.)
Rewriting an application just for sheer speed? No, because the user probably won't see much difference in response time.
You are somewhat ambiguous with your terminology - do you want a client app that runs on the user's machine, or do you want an app that runs on the server and the user connects via remote desktop (RDP)?
If you are talking about a client app that communicates with the server via WCF etc., then yes, it will be faster than a standard web app, although it will still be slower than a purely local desktop app. It will be faster than the web app not just because of the lack of postbacks, but also because you will be sending pure data over the wire, not a massive amount of HTML/JavaScript combined with your data. With a client app you have several options, so consider them carefully: do you want Silverlight, WPF, or a native WinForms app? Each has its positives and negatives.
If you were talking about having a client app running on the server which the user then accesses via RDP, then you have other considerations to think of. For any more than two concurrent users you will need to consider buying CALs so the users can connect to the server. At this point you should also be considering whether you should be running a terminal server or a Citrix-type setup instead of using remote desktop.
Edit
When using WCF over a WAN (internet) you will certainly have to consider how you will secure it. WCF makes it trivial to secure the channel, but you need to consider how you will do authentication - there are a couple of different ways, but you can easily google that stuff yourself. The method you choose will be important due to the limited resources or skill-sets of the users.
As for what you write it in, you can't argue with WinForms if that is where your experience is. Personally, I would never again use ASP.NET/Ajax/etc. for a web-type application; it would be WPF or Silverlight all the way (I would only use ASP.NET for simple websites). You can use the express (free) versions of Visual Studio to write it in; you don't need Expression (it's just a nice-to-have, and is aimed more at the design side than the actual coding side). Deploying the app need not be difficult: Silverlight or WPF XBAPs are delivered via the web, so the user has to do nothing (except for the simple install of the Silverlight plugin, or installing the right .NET Framework for WPF - check this link). WinForms or stand-alone WPF require slightly more work, but you can avoid most issues by writing a good installer.
Whichever you choose, make sure you don't underestimate the time for development (because you will have a bit of a learning curve), and also make sure you budget enough time for testing it - especially the security side of it :)
I have been in a similar situation, although I started with a WinForms LOB application.
Here's what we found with WinForms...
It's going to be harder to deploy each release to all client machines.
WinForms can't easily be run on other operating systems (with the exception of Mono).
WCF endpoints can get complicated, and you need to manage an endpoint for each release/version of your application.
Authentication, Authorization and Security can be tricky to get right!
Here's why you should stick with an HTML web application.
It's going to be easier to deploy, as you just need to copy one set of DLLs into the bin folder. This can be scripted from a continuous integration or staging server.
Security is going to be easy, by using an SSL certificate.
Silverlight/Flash should fill in the gaps that HTML leaves out.
Microsoft has also consolidated the connected-systems stacks (ASMX/Remoting/etc.) in .NET 3.5; they now call it WCF. It's got quite a learning curve: 4-5 weeks.