I'm fairly new to the Azure platform. I've tried searching Google for assistance, but unfortunately my Google searching skills aren't the best.
I have a Linux VM in Azure that will occasionally have WAV files on it which need to be copied off.
My plan is to use a Worker Role to access the Linux VM, copy the files off using scp, and then store them in an Azure storage account.
Could someone give me a few pointers in the right direction on how this could be accomplished?
In your case, there would be no need for a worker role (which is nothing more than a Windows Server VM running in a Cloud Service). If you did need a worker role instance talking to a Linux instance, you'd have to connect them with a Virtual network. I'm guessing you're just starting out, and that solution sounds over-engineered.
Instead: just write directly to blob storage from your Linux-based app. If you're using .NET, Java, PHP, Python, or Ruby, there are already SDKs that handle this for you - go here and scroll down to Developer Centers, download the SDK of your choice, and then look at some of the getting-started tutorials.
Just remember that blob storage is Storage-as-a-Service, accessible from anywhere. Underneath, it's all just REST calls, with the language SDKs wrapping those calls.
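Since blob storage is just REST underneath, here's a minimal standard-library sketch of what an upload boils down to. The account name, container, and SAS token are placeholders you'd supply yourself; in practice the SDKs do exactly this for you:

```python
# Minimal sketch of Azure Blob Storage's "Put Blob" REST call using only the
# standard library. Account, container, and SAS token values are placeholders;
# an SDK wraps this same call with retries, auth helpers, etc.
import http.client
import urllib.parse

def build_blob_url(account, container, blob_name, sas_token=""):
    """Build the blob URL; the SAS token (if any) goes in the query string."""
    path = "/{}/{}".format(container, urllib.parse.quote(blob_name))
    query = "?" + sas_token if sas_token else ""
    return "https://{}.blob.core.windows.net{}{}".format(account, path, query)

def upload_wav(account, container, blob_name, sas_token, data):
    """PUT the bytes as a block blob. Requires a valid SAS token for the container."""
    host = "{}.blob.core.windows.net".format(account)
    path = "/{}/{}?{}".format(container, urllib.parse.quote(blob_name), sas_token)
    conn = http.client.HTTPSConnection(host)
    conn.request("PUT", path, body=data, headers={
        "x-ms-blob-type": "BlockBlob",   # required header for Put Blob
        "Content-Type": "audio/wav",
        "Content-Length": str(len(data)),
    })
    return conn.getresponse().status     # 201 Created on success
```

From the Linux VM a small cron job calling `upload_wav()` on each new WAV file would remove the need for any intermediate copy step.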
There are more examples in the Azure Training Kit.
I am attempting to connect to my Google Drive using C# and the Google Drive API, and then map that as a network or local drive. There are other programs I know of that do this, like NetDrive (which is extremely useful and robust), but I am looking to create something of my own. I have created a project in the developer console and have been able to connect to Drive using my application and do various read and upload operations, so I know that particular portion is OK. Access and permissions all seem to be set. I just have no idea where to start when it comes to mapping that storage as a usable drive in Windows. Any advice would be most helpful, thank you very much!
There are two basic components for implementing a NetDrive/WebDrive type of solution. What you are looking at is the creation of an Installable File System and Network Provider.
The Network Provider, or NP, is the user-mode component that handles the network layers, including mapping and unmapping the drive letter, along with lots of other fairly complicated UNC/network plumbing. To get an idea of what you are in for, check out the Win32 WNet*() API; you will need to implement all of the WNet*() calls specifically for your IFS and 'network'.
When you are done, you'll probably have the ability to do a "net use \\MyWebDrive\" from a command prompt and Map Network Drive in Windows Explorer. You might also be able to use Windows Explorer to enumerate the contents of the remote file system.
However, now you need to make sure that all third-party applications can access your network drive. To do that, you want to implement the Win32 file system API, such as CreateFile(), ReadFile(), WriteFile(), CloseHandle(), FindFirstFile(), etc.
To do this, you can write an Installable File System Driver (FSD) to handle all I/O calls from user-mode applications wanting to read/write the files on that mapped network drive. This will most likely be a kernel-mode component...a signed/certified file system device driver...probably written in old-school C, and maybe even utilizing TDI depending on how you want to do your network I/O.
Microsoft is becoming much more strict about installing third-party kernel-mode drivers and network providers. The WebDrive file system driver is now signed with a Microsoft-approved code-signing certificate, and our Network Provider has been registered with the Microsoft Windows SDK team as a legitimate Network Provider for the Windows platform.
Once you get these pieces in place, you'll then want to think about Caching. Direct I/O through your NP/FSD over the wire to Google is not practical, so you'll need an intermediate caching system on your local drive. There are lots of ways to do that, too many to go into here. However, just keep in mind that you may have multiple user mode applications reading and writing to your network drive simultaneously (or one app like WinWord which opens multiple file handles), and you'll need to be able to handle all those requests with proper locking and ACLs, and then map those changes and access rules to the remote server.
Don't lose faith...what you are looking to do is possible as WebDrive and NetDrive have shown, but it's not really a project that can be knocked out in a few weekends. I'm not sure about the author of NetDrive, but we've been developing WebDrive full time since 1997. It seems that every Windows Patch changes something and every new version of Adobe/Office/XYZ does something quirky with IO calls that makes us pull our hair out.
Note: There's also another way to implement this beast which may get around the FSD, it's the DropBox strategy. Using a temporary folder on your local hard drive, leverage Directory Change Notifications in a User Mode application to monitor file changes in the folder and dynamically synchronize the changes to the remote end. GoogleDrive and a lot of the other online storage companies do it this way because it's quick-&-easy; however, if many changes occur in a short period, a Change Notification could get lost in Windows Messaging and data might get trashed.
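The sync-folder strategy above can be sketched in a few lines. This is a portable, polling-based illustration; a real Windows implementation would use Directory Change Notifications (ReadDirectoryChangesW) to learn about changes, but the snapshot/diff logic that decides what to push to the remote end is the same:

```python
# Portable sketch of the "DropBox strategy": snapshot a local sync folder and
# diff snapshots to find what needs pushing to the remote end. Polling stands
# in here for Windows Directory Change Notifications; the diffing is identical.
import os

def take_snapshot(root):
    """Map relative path -> (size, mtime) for every file under root."""
    snap = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            st = os.stat(full)
            snap[rel] = (st.st_size, st.st_mtime)
    return snap

def diff_snapshots(old, new):
    """Return (added, modified, removed) relative paths between two snapshots."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    modified = sorted(p for p in new if p in old and new[p] != old[p])
    return added, modified, removed
```

Note how this design sidesteps the lost-notification problem mentioned above: because each pass diffs full snapshots rather than relying on individual change events, a missed notification only delays a sync rather than losing data.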
I realize this is a lot to digest, but it's doable...it's cool stuff; good luck!
I suggest that before you start coding, you take time to thoroughly understand Google Drive and map its capabilities to/from Windows. Some sample points of impedance mismatch:
Folders in Drive aren't really folders at all
A file in Drive = its metadata; content is optional
Drive has a lot of metadata that doesn't map to NTFS (e.g. properties)
Will applicable files be converted to Google Docs, or stored as-is?
How will you map revisions?
Permissions
There are almost certainly more, this is just off the top of my head. Your app needs to make decisions regarding all of these aspects. Generally, Drive offers more capabilities than NTFS, so provided you are simply using it as a backup repository, you should be OK.
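To make the decisions above concrete, here's one possible shape for them in code. The `mimeType` values are real Drive API constants; the export formats chosen are illustrative, not the only options:

```python
# Sketch of the mapping decisions described above: given a Drive file's
# metadata, decide how it should appear on an NTFS-backed mount. The mimeType
# strings are real Drive API values; the export choices are illustrative.
GOOGLE_APPS_PREFIX = "application/vnd.google-apps."

EXPORT_AS = {  # Google-native types have no raw content and must be exported
    "application/vnd.google-apps.document": ".docx",
    "application/vnd.google-apps.spreadsheet": ".xlsx",
}

def materialize(name, mime_type):
    """Return ('folder'|'export'|'download'|'skip', local_name)."""
    if mime_type == "application/vnd.google-apps.folder":
        return ("folder", name)
    if mime_type in EXPORT_AS:
        return ("export", name + EXPORT_AS[mime_type])
    if mime_type.startswith(GOOGLE_APPS_PREFIX):
        return ("skip", name)   # e.g. forms/maps: nothing sensible to store
    return ("download", name)   # ordinary binary content, store as-is
```

Every file your drive exposes has to pass through a decision table like this, which is why settling these questions up front saves rework later.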
I have developed my application with MongoDB and now I'm ready to go live in the cloud.
I have followed the tutorial from the official MongoDB website about how to deploy it in a Windows Azure worker role.
I tried a local deployment in the Windows Azure emulator, and everything worked fine.
But when I tried to publish it to a cloud service, the result wasn't what I expected.
MongoDB.WindowsAzure.MongoDBRole_IN_X (where X is the instance number) is always in busy status with message ..
Starting role...
UnhandledException:Microsoft.WindowsAzure.StorageClient.CloudDriveException
I have no clue about this.
Could anyone have any suggestion ?
Thx.
PS1. Sorry for my english.
PS2. I am using the latest version of the Windows Azure SDK.
In that worker role setup, MongoDB sets itself up to store the database on a durable drive (basically a VHD mounted in a blob). Since you're getting a CloudDriveException, the first thing I'd look at is the storage connection string for the account being used for mounting drives. Could it be that your configuration file is still pointing at local dev storage?
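For reference, this is roughly where that connection string lives. The setting name below is hypothetical (check the actual setting names in your project's ServiceConfiguration files), and note that, as I recall, Cloud Drives required an http (not https) storage endpoint, which is another common cause of CloudDriveException:

```xml
<!-- Illustrative fragment of ServiceConfiguration.Cloud.cscfg; the setting
     name "MongoDBDataDir" is hypothetical, use your role's actual names. -->
<Role name="MongoDB.WindowsAzure.MongoDBRole">
  <ConfigurationSettings>
    <!-- Works in the emulator, fails with CloudDriveException in the cloud:
    <Setting name="MongoDBDataDir" value="UseDevelopmentStorage=true" /> -->
    <Setting name="MongoDBDataDir"
             value="DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey" />
  </ConfigurationSettings>
</Role>
```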
I'm going to build an iOS application, and I'm wondering about the differences between developing in Objective-C with Cocoa versus C# with MonoTouch.
The application needs to work with Azure, frequently store/retrieve information on the local device, support content browsing, and use token login to the portal. The deadline is two months from today, and I have never developed an iPhone/iPad application before. Which is easier to start with, and are there any resources for Mono? It would be great if I could use the programming language I already know, but there doesn't seem to be anything supporting MonoTouch Azure development...
Thank you for any replies.
First of all, you have asked lots of things in one post. As for your first question, the answer is very subjective. Objective-C/Cocoa is the native language for iOS development; using MonoTouch, you depend on whatever MonoTouch provides, so if something is not part of MonoTouch you wouldn't be able to do it. Here you can find lots of opinions from other fellow SO users: MonoTouch & C# VS Objective C for iphone app
I can give you some guidance on Windows Azure development from a mobile device. Connecting to services running on Windows Azure is the same from any mobile device. Most services provide a direct HTTP/HTTPS connection if the application running on Azure exposes an HTTP or HTTPS endpoint, and for Azure Storage you make direct RESTful calls from your code. So it doesn't matter which language you use on the mobile device; you can certainly connect to Windows Azure with the native language.
So if you choose Objective-C, you can use the iOS SDK for Windows Azure. However, if you decide to use MonoTouch, you would need to use the WebClient API to create your own HTTP/HTTPS connections, something like what is described here, which could be comparatively complex. On the internet you may find some experimental code for using Azure services from a MonoTouch application, but you may be on your own getting things working.
Personally, I would not use MonoTouch to develop an application for iOS devices if I were heavily dependent on Windows Azure services; instead, I would choose the iOS Windows Azure SDK to connect to Azure services through native code.
I am planning to move a game server of mine to Amazon EC2. Right now the actual server runs on .Net Framework 3.5 on a windows dedicated server. Since it is a personal side-project, it's quite expensive to have a fully dedicated server to that, therefore I would like to move it to the cloud (Amazon EC2 or maybe Windows Azure).
Has anyone accomplished such a thing? Is it possible to do so? If yes, could you provide me with some documentation on the subject, since I have only been able to find docs for setting up web servers over HTTP.
The server binds and listens to 2 TCP sockets (nodelay option) on 2 different ports.
Thanks a lot!
Kel
With EC2 you own full control of the server. That means you'll be able to deploy your app without much modification and have full control to tune the system to your needs. I'm not familiar with game servers, but if you need to tune your environment (ports, accounts, services etc.) then EC2 is probably the platform for you.
If your application is very light then you may be able to get away with using the 'Micro' EC2 instances, which only cost about 3-5 cents/hr. Cost comparisons between EC2 and Azure are a bit challenging, but my understanding is that Azure can get expensive due to their billing methodology. I've written a small cloud comparison article recently that gives an overview of the main players: http://blog.labslice.com/2010/10/choosing-your-cloud.html.
There's not much more to say. The cloud solutions can be quite confusing. Each tends to come with unique terminology, a vast array of services, and certain peculiarities. In short, you're best off just testing both EC2 and Azure to get the ball rolling. Costs are pretty low and there's no lock-in for testing.
Simon # http://LabSlice.com
You should be able to do this on Azure using custom AppFabric ServiceBus binding, with TcpRelayConnectionMode = Hybrid.
There's some background on how this works here.
I know you already accepted an answer, but if you are running your server 24/7 it may just be cheaper to get dedicated hosting. Doing the math, it would cost $86.40/month to run a small instance (I used small instead of micro because you also have to factor in EBS pricing for the data; the micro instance has no local storage). A Google search for "cheap dedicated hosting" gave me this provider for $66.95/mo ($37.95 for the server + $29 for using Windows instead of Linux).
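The arithmetic behind that comparison, as a quick sanity check (rates are the circa-2010 figures quoted in the answer, not current pricing):

```python
# Rough monthly cost comparison using the rates quoted above.
HOURS_PER_MONTH = 720  # 30 days * 24 hours

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Monthly cost for an always-on instance, rounded to cents."""
    return round(hourly_rate * hours, 2)

ec2_small = monthly_cost(0.12)   # small Windows instance at $0.12/hr -> 86.40
dedicated = 37.95 + 29.00        # quoted server + Windows surcharge -> 66.95
```

The gap widens further once you add EBS storage and bandwidth charges to the EC2 side, which is the point being made here.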
If you are doing testing I would recommend using EC2 to get things working smoothly but when you are ready to deploy and want the game running all the time you can save a lot of money by going with a traditional hosting provider instead of doing cloud computing.
Is it possible to write a filesystem for Windows in pure user mode, or more specifically purely in managed code? I am thinking of something very similar to GMAILFS. Excluding what it is doing under the covers (Gmail, Amazon, etc.), the main goal would be to provide a drive letter and support all of the basic file operations, and possibly even add my own structures for storing metadata, etc.
Windows provides several approaches to building a user-mode file system for different purposes, depending on your storage location and features that you need to support. Two of them, Projected File System API and Cloud Files API were recently provided as part of the Windows 10 updates.
Windows Projected File System API
Projected File System API is designed to represent some hierarchical data, such as for example Windows Registry, in the form of a file system.
Unlike the Cloud Files API (see below), it does not provide any information about file status, and it hides the fact that this is not a "real" file system. Example.
Windows Cloud Sync Engine API
Cloud Sync Engine API (Cloud Files API, Cloud Filter API) is used in OneDrive on Windows 10 under the hood. It provides folder content loading during the first request, on-demand files content loading in several different modes, and offline files support. It integrates directly into Windows File Manager and Windows Notification Center and provides file status (offline, in-sync, conflict, pinned) and file content transfer progress.
The Cloud Files API runs under regular user permissions and does not require admin privileges for file system mounting or any API calls. Example.
Windows Shell Namespace Extensions API
While a Shell Namespace Extension is not a real file system, in many cases you will use it to extend the functionality of the Projected File System and Cloud Files APIs. For example, you will use it to add custom commands to context menus in Windows File Manager, and you can create nodes that look and behave like a real file system (again, applications would not be able to read or write to such nodes; this is just a user interface).
Cloud Files API is using a namespace extension to show your sync root at the top level in Windows File Manager.
It's difficult. I'd take a look at some projects which have done some of the hard work for you, e.g. Dokan.
Yes. It's possible and has been successfully done for the ext2 filesystem.
Note that you will need to write your own driver, which will require Microsoft signing to run on some OSes.
Sure, you can abstract the regular file operations and have them running in the cloud (see Google Apps, Amazon S3, Microsoft Azure etc.). But if you'd like to talk to local devices - including the local HD - you'll have to use system APIs and those use drivers (system/kernel mode).
As long as all you want is a storage service, no problem. If you want a real OS, you'll need to talk to real hardware, and that means drivers.
Just as a reference - our Callback File System is a maintained and supported solution for creation of filesystems in user-mode.