Users will be uploading files to my website. I need to distribute the files evenly across more than one server, and I also need a column in the DB that records which server each file was uploaded to.
So here is my design.
Have an enum of server names, i.e. server1, server2, server3.
Get the last uploaded server name from the DB.
If the last upload went to server1, the current file should be uploaded to server2 and the DB updated;
if the last upload went to server3, the current file should be uploaded to server1 and the DB updated.
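Roughly what I have in mind is something like this sketch (the enum and helper names are just placeholders, not code I have written yet):

    using System;

    // Rotate through the servers in order, wrapping back around to the first one.
    public enum UploadServer { Server1, Server2, Server3 }

    public static class ServerPicker
    {
        public static UploadServer GetNextServer(UploadServer? lastUsed)
        {
            if (lastUsed == null)
                return UploadServer.Server1;                            // very first upload

            var servers = (UploadServer[])Enum.GetValues(typeof(UploadServer));
            int next = ((int)lastUsed.Value + 1) % servers.Length;      // server3 wraps to server1
            return servers[next];
        }
    }

The chosen value would be stored in the DB column alongside the file record.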
The application and DB are currently hosted on a single server, but in the future we will move to load balancing.
Let me know if there is a better method than this.
Whether your solution works well will depend on how your customers use it. I'll give you a quick breakdown of how I've seen this done before.
Round robin DNS (assign multiple IP addresses to the same domain)
Multiple web servers who get traffic based on the DNS round robin
Each web server then has its own dedicated SQL server.
SQL servers use replication to keep data synchronized.
Single storage server for file uploads. (Unless file uploads are the main function, I doubt you'd need more than one.)
Pros:
VERY easily scalable until you hit massive levels of traffic, at which point you'll need to rethink the SQL piece.
When purchasing hardware, you can focus your spending on making each box good at its role as a file server, SQL server, or web server.
Cons:
This doesn't provide any true redundancy, and the tiered approach arguably makes that worse. This could be resolved with some managed DNS, but that still isn't a perfect approach, and I know some sysadmins who cringe at the thought of managed DNS.
I have a C# project that retrieves data from a SQL Server database and stores data in it. I have copies of this software, each running in a different place (different area), and each copy stores data in its own database. I have to sync the data among these databases over a phone line. I have recently read about atapi.dll. Could I use this DLL to synchronize the databases by sending and receiving data between the copies of the software?
For example: from the first place I have to send the new records to the other place.
The first place has a phone number (dial-up, e.g. 1234566) and the other place has a number (dial-up, e.g. 3456784). How can I send and receive files between the two programs using these dial-up numbers?
Writing your own file-sync mechanism may sound simple, but it is not easy, especially if you need to sync multiple parties.
Rather than writing your own sync tool, I would strongly encourage you to use SQL Server replication, which is a feature built into SQL Server itself to support exactly the scenario you describe above.
If I am understanding your scenario:
You have a master database with all records from all branch sites
You have a subset of that data at each site - the latest copy of the master data plus any changes made at the local site
You periodically want each site to dial in to the master server and sync data back and forth, so that the site's changes are pushed up to the master server and the master DB's changes are pushed out to the branch DB.
To support this scenario, you just configure your branch offices to dial-into the master office periodically, and configure SQL Server to replicate data as appropriate.
I've previously configured a 25-branch organization to use dial-up and broadband connections to sync a large SQL Server production database in less than 2 days, including time to update their backup strategy to account for the needs of the replication strategy employed.
Compared to writing your own sync engine, using SQL Server replication will likely save you many months of development effort and many man-years of debugging and operational support!
You don't want to be dealing with dial-up yourself. Investigate Windows RAS, which sets up a TCP/IP connection between two hosts using dial-up. It can be driven from C#.
Once you've done that, investigate SQL Server Replication in order to sync the data once the connection is up.
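If it helps, here is a minimal sketch of driving dial-up from C# by shelling out to the built-in rasdial.exe rather than P/Invoking the RAS API directly; the phonebook entry name and credentials are made-up placeholders:

    using System;
    using System.Diagnostics;

    // Dials a pre-configured RAS phonebook entry before replication runs,
    // and hangs it up afterwards. "BranchOffice" is a placeholder entry name.
    public static class DialUpLink
    {
        public static void Connect()
        {
            var dial = Process.Start("rasdial.exe", "\"BranchOffice\" branchUser branchPassword");
            dial.WaitForExit();
            if (dial.ExitCode != 0)
                throw new InvalidOperationException("Dial-up failed, rasdial exit code " + dial.ExitCode);
        }

        public static void Disconnect()
        {
            Process.Start("rasdial.exe", "\"BranchOffice\" /disconnect").WaitForExit();
        }
    }

Once Connect() returns, the TCP/IP link is up and SQL Server replication (or anything else) can talk to the remote machine as usual.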
The scenario is that after uploading the file to the server via a secured web service, I'd like to save a copy of that file on another server, either in the same LAN or on another network.
I'd like to know what ways I could use to programmatically copy the uploaded file to the backup server (saving the file in the database would probably be the last option).
Here are a few details:
Files are of different types and sizes, mostly text, documents and images, ranging from a few KB to a couple of MB.
Database is SQL Server 2008 R2 and the only way to connect to it is via calls to a secured WCF service.
Servers can be in the same LAN or on separate networks (depends on the client requesting).
The 2nd server is a redundant server and uses the 1st one as its backup, and vice versa.
Took me a while to find this post. Just map the drive to the backup server's shared folder and implement WindowsImpersonationContext.
How to Impersonate a user in managed code?
I haven't seen security problems with this, and it doesn't require messing with HTTPS or certificates.
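For reference, the usual pattern looks roughly like this (a sketch only; the share path, domain and account are placeholders, and a production version should also close the token handle):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Security.Principal;

    // Copies an uploaded file to the backup server's share while impersonating
    // an account that has write access to it.
    public static class BackupCopier
    {
        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        private static extern bool LogonUser(string user, string domain, string password,
                                             int logonType, int logonProvider, out IntPtr token);

        private const int LOGON32_LOGON_NEW_CREDENTIALS = 9;  // credentials used for remote access only
        private const int LOGON32_PROVIDER_WINNT50 = 3;

        public static void CopyToBackup(string localPath)
        {
            IntPtr token;
            if (!LogonUser("backupUser", "BACKUPDOMAIN", "password",
                           LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());

            using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
            {
                // \\BACKUPSERVER\Uploads is the mapped/shared folder on the backup server.
                File.Copy(localPath,
                          Path.Combine(@"\\BACKUPSERVER\Uploads", Path.GetFileName(localPath)),
                          true);
            }
        }
    }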
The company I work for makes a complex accounting application. This is a desktop app that connects to a local database server on the client's network. Some of our clients want to get e-commerce sites built but they will need access to this data.
Is it OK to install the web site at one location and feed data to it from a web server in another location? I've built stuff like this in the past and I know it could potentially be slow. I'm hoping to mitigate this problem with stacks of ASP.NET caching. Is this a reasonable architecture (for a small to medium size e-commerce site), or will it run like a dog? Due to much pain in the past, I'm trying to keep this simple and avoid any sort of database replication.
Cheers
Ma
Well, replication of the database might actually be the fastest option. Think about it: either you fetch a whole bunch of data on each request, with some cache misses, or you basically have a 'complete' local cache and thus no cache misses (not in transfer anyway; your DB might cache, of course).
Edit: so basically my answer would be: no, it's not OK to run the website and database in two completely different locations. Two boxes in the same rack could be OK, but it seems preferable to have your web service and DB on the same (virtual) machine.
I see a ton of questions about uploading multiple files, but none about uploading a single file to multiple servers, so here goes...
I have an ASP.NET app that will be running on two load balanced servers, and I would like to allow users to upload files and have them end up on both servers. What is the cleanest way to do this? I am using IIS 6 btw.
Some ideas that come to mind are:
1) Use a virtual directory that points to some shared location that both servers can access. Will there be any access issues if the application runs as Network Service? I'm assuming the application will need to run as a user account that exists on the machine hosting the shared location. How should the permissions be set for this?
2) It would be nice if I could use jQuery to post the request to both of my servers, referencing them by their port numbers. Even though the servers are on the same domain, this violates the same-origin policy, right?
Is there another solution I'm overlooking? How do other sites do this?
I think you want to consider this problem more carefully - having a pair (or more) of servers means that some of them will be offline some of the time (at least for occasional reboots).
When not all of the servers are online, uploads can't be sent to every server immediately, so you'd need either an intermediate server (which would be a point of failure unless it was highly available itself) or a queuing system to "remember" which files were where, and to transfer them when the relevant servers were restored.
Also, you'll want a backup system, and some way to add newly provisioned servers to your cluster. You will also want a way to verify that these files are the same on every server, in case they get out of sync. Your architecture needs a lot of careful thought. I don't have the answers :)
The cleanest approach is forwarding the files server-side, really. If you force two uploads via JavaScript, not only will you have to worry about working around XSS safeguards, but you'll also force the user to use their very limited upstream bandwidth twice for each file.
You shouldn't be exposing that kind of detail to the client anyway. The browser doesn't need to know where the file ends up, just who to send it to. If you keep that logic server-side, not only do you keep the details hidden (and thus less prone to errors and exploits), but you'll also get more control over the process. You can create a gateway service later that handles a multitude of back end storages and you can handle failing servers better. You can queue failed uploads and retry. All these come at a very low cost if you do them on the server side, but are a pain to be made to work reliably on the client side.
Keep back end logic to your back end. Load balancing should be hidden from the user, so there's no need to tell them where they are sending their files exactly. Make it optional, if you want, but hide the action from them. Just swallow the file on the gateway server (which can be either of the load balancing servers -- in fact, it should probably be load balanced, too, so it should work with either of them in place) and send it to the other servers from there. The transfer from server to server will probably be faster too.
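As a rough illustration of that gateway idea (the peer URL, folder and handler names below are assumptions, not anything prescribed):

    using System.IO;
    using System.Net;
    using System.Web;

    // Receives the upload once, stores it locally, then relays it to the other
    // load-balanced server. "server2.internal" and the paths are placeholders.
    public class UploadRelay
    {
        public void ProcessUpload(HttpPostedFile upload)
        {
            string localPath = Path.Combine(@"D:\Uploads", Path.GetFileName(upload.FileName));
            upload.SaveAs(localPath);

            using (var client = new WebClient())
            {
                client.Credentials = CredentialCache.DefaultCredentials;
                // POST the stored file to a receiving endpoint on the peer server.
                client.UploadFile("http://server2.internal/UploadReceiver.ashx", localPath);
            }
        }
    }

A failed relay can simply be queued and retried on the server, which is far easier than trying to make that reliable in the browser.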
Your best bet is definitely a NAS, if one is available -- a shared file system that is not specifically associated with any machine. Then you can focus on making the NAS highly available via a clustered frontend.
If that's not an option, you can use a virtual directory on each machine that points to one folder on one of the machines, but then you lose redundancy.
I'm faced with this same challenge at my work. My app is small but needs to be highly available, and there's no NAS in sight. So in each machine's web.config I place a list of all the UNC paths where the uploaded file should be stored. After uploading to a temp folder, I copy the file to each machine one by one (see the sketch below). It's not perfect -- a machine could go down, in which case it might not have all the files when it came back up (and the copy would be slowed by the hunt for the missing machine) -- but in my situation uploads are so infrequent that it's not worth improving.
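The copy loop is roughly this (the appSettings key name is just whatever you choose; the error handling is deliberately naive, as noted above):

    using System;
    using System.Configuration;
    using System.IO;

    // Reads the peer servers' UNC upload folders from appSettings and pushes
    // a copy of the uploaded file to each one.
    public static class UploadReplicator
    {
        public static void Replicate(string tempFilePath)
        {
            string[] shares = ConfigurationManager.AppSettings["UploadSharePaths"]
                                  .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

            foreach (string share in shares)
            {
                try
                {
                    File.Copy(tempFilePath, Path.Combine(share, Path.GetFileName(tempFilePath)), true);
                }
                catch (IOException)
                {
                    // A peer is down; in this simple scheme the file is just missing there
                    // until some catch-up process copies it over later.
                }
            }
        }
    }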
As others have mentioned, Javascript is right out. Upload once.
I have seen this problem solved with a NAS, using credentials for the app pool that can read/write files to that NAS. Make sure your NAS is set up for high availability to prevent a single point of failure, i.e. hot swap with RAID, multiple array controllers, power supplies, etc.
You could also put folder-monitoring software on the servers to keep certain directories in sync. I don't recommend this solution.
I am scratching my head about this. My scenario is that I need to upload a file to the company server machine (to a folder on C:) from our hosting server (a totally different server). I don't know how I should do this. Do any of you have tips or code on how this is done?
Thanks Guys
I would set up an FTP server (like the one in IIS or a third-party server) on the company server. If security is an issue then you'll want to set up SFTP (secure FTP) rather than vanilla FTP, since FTP is not a natively secure transfer protocol. Then create a service on the hosting server to pick up the file(s) as they come in and ship them to the company server using C#/.NET's FTP support. Honestly, it should be pretty straightforward.
Update: Reading your question, I am under the strong impression that you will NOT have a web site running on the company server. That is, you do not need a file upload control in your web app (or you already know how to implement one, given that the control is right there in the toolbox). Your question, as I understand it, is how to get a file from the web server over to the company server.
Update 2: Added a note about security. Note that this is less of a concern if the servers are on the same subdomain and traffic won't be routed outside of the company network, and/or if the data is not sensitive. I didn't think of this at first because I am working on a project like this now and our data is not, in any way, sensitive.
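To give an idea of the FTP piece on the hosting side, here is a minimal FtpWebRequest sketch; the host name, credentials and remote folder are placeholders:

    using System;
    using System.IO;
    using System.Net;

    // Pushes a file from the hosting server to an FTP site on the company server.
    public static class FtpShipper
    {
        public static void Upload(string localPath)
        {
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://companyserver.example.com/incoming/" + Path.GetFileName(localPath));
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential("ftpUser", "ftpPassword");

            byte[] contents = File.ReadAllBytes(localPath);
            request.ContentLength = contents.Length;

            using (Stream requestStream = request.GetRequestStream())
            {
                requestStream.Write(contents, 0, contents.Length);
            }

            using (var response = (FtpWebResponse)request.GetResponse())
            {
                // StatusDescription will contain something like "226 Transfer complete."
                Console.WriteLine(response.StatusDescription);
            }
        }
    }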
Darren Johnstone's File Upload control is as good a solution as you will find anywhere. It has the ability to handle large files without impacting the ASP.NET server memory, and can display file upload progress without requiring a Flash or Silverlight dependency.
http://darrenjohnstone.net/2008/07/15/aspnet-file-upload-module-version-2-beta-1/
There isn't enough info to tell what your whole hosting scenario is, but I have a few suggestions that might get you started in the right direction:
Is your external server owned by another company or group such that you can't modify it? If you can modify it, you might consider hosting the process on the same machine, either in-process or as a separate service on that machine. If it cannot be modified, you might consider hosting the service on the destination machine; that way it's in the same place the files need to end up.
Do the files need to stay in sync with the process? I.e. do they need to be uploaded, moved and verified as a single operation? If not, then a separate process is probably the best way to go. The separate process will give you some flexibility, but remember it will be a separate process and a separate set of code to manage and work with.
How big are the file(s) being uploaded? Do they vary by upload? Are they plain files or binaries (zips, executables, etc.)? If the files are small you have more options than if they are large. If they are small enough, you can even relay them inline.
Depending on the answers to the above some of these might work for you:
Use MSMQ. This will work for simple messages under about 3 MB without too much hassle. It's ideal for messages that can be directly worked with (such as XML); see the rough sketch after this list.
Use direct HTTP(S) relaying. On the host machine, open an HTTP(S) connection to the destination machine and transfer the file. Again, this will work better for smaller files (i.e. only a few KB, since it will be done inline).
If you have access to the host machine, deploy a separate process on the machine which builds or collects the files and uses any of the listed methods to send them to the destination machine.
You can use SCP or FTP (in any form: SFTP, etc.) on either the host machine (if you have access) or the target machine to host the incoming files, and use a batch process to move the files. This will have a lot of issues to address, such as file size, keeping submissions in sync, and timing. I would consider this a last resort, depending on the situation.
Again, depending on message size, you could also use a layer of abstraction such as a DB as the intermediate layer between the two machines. This will work as long as the two machines can see the DB (or other storage location) and both act on it. SQL Server Service Broker could be used for this purpose (and most other DB products offer similar features).
You can look at other products like WSO2 ESB or NServiceBus to facilitate messaging between the two apps and do it inline.
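As referenced in the MSMQ point above, here is a minimal sending sketch (the queue path and machine name are placeholders; the receiver would read BodyStream back out to a file):

    using System.IO;
    using System.Messaging;

    // Sends a small file as an MSMQ message to a private queue on the destination machine.
    public static class MsmqFileSender
    {
        public static void Send(string filePath)
        {
            const string queuePath = @"FormatName:DIRECT=OS:DESTSERVER\private$\incomingFiles";

            using (FileStream body = File.OpenRead(filePath))
            using (var queue = new MessageQueue(queuePath))
            using (var message = new Message())
            {
                message.Label = Path.GetFileName(filePath);
                message.BodyStream = body;   // raw bytes, no formatter involved
                queue.Send(message);
            }
        }
    }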
Hopefully that will give you some starting points to look into.