I have multiple web servers and one central file server inside my data center.
All my web servers store the user-uploaded files on the central internal file server.
I would like to know the best way to pass the files from a web server to the file server in this case.
As suggested, I am adding more details to the question.
The solution I came up with was:
After receiving files from the user at the web server, I would just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you want to use the HTTP protocol when you have much better ways to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because that is the only option the browser gives you. But between your servers, I feel you should use the SMB protocol (I am assuming you are using Windows, since the question is tagged IIS) to move the data. It will be orders of magnitude faster, as it is much more efficient to transfer the same data over SMB than over HTTP.
And with the SMB protocol, you do not have to write any code or complex scripts to do this. As one of the other answers suggests, you can just issue a simple copy command and it will happen for you.
So, just summarizing the options for you (in order of my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple PowerShell script of a few lines which copies the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\uploaded.file. The same PowerShell script can delete the file once it has been moved to the other server successfully.
E.g. the script can be as simple as the following, and it is easy to run it as a Windows scheduled task:
# $Source is the local upload folder; $Destination is the share on the file server
$Source      = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType Directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the source files once the copy is done :)
2. In the ASP.NET application you can save the file directly to the network location, i.e. in the SaveAs call you can give the network path itself. You have to make sure this network share is accessible to the IIS worker process and that it has write permission. Also, as far as I understand, ASP.NET saves the file to a temporary location first (you do not have control over this if you are using HttpPostedFileBase or FormCollection). More details here.
You can even run this asynchronously so that your requests will not be blocked:
if (FileUpload1.HasFile)
{
    // Call to save the file directly to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use HTTP POST. This is the worst possible design, as you first read the contents and then transfer them, chunked, to the other server, where you have to set up another web service which receives the file. Then you have to read the file from the request stream and save it again to its final location. I am not sure you really need to do this.
Let me know if you need more details on any of the listed methods.
Or you just write it to a folder on the webservers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your webservers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile)
{
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer
Creating a file on network location in asp.net
for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
The solution I came up with was: after receiving files from the user at the web server, I should just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
I would suggest not posting the file all at once - it is then fully in memory, which is not needed.
You could post the file in chunks, using AJAX. When a chunk arrives at your server, just append it to the file (a server-side sketch follows after the script below).
With the File Reader API, you could read the file in chunks in JavaScript.
Something like this:
/** upload file in chunks */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var end = Math.min(start + chunkSize, file.size);
        var chunk = file.slice(start, end);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // check if all chunks have arrived, and send the filename here or in the first/last request
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(chunk);
        start = end;
    }
}
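On the receiving side, a minimal sketch of what a /FileUpload endpoint could look like in ASP.NET MVC (the controller/action names, the temporary file path, and the lack of per-user naming are all assumptions on my part, not part of the answer above):
using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    // Receives one chunk per POST and appends it to a temporary file.
    [HttpPost]
    public ActionResult Index()
    {
        // Single fixed temp file for illustration; a real implementation would key this
        // by user/upload id so parallel uploads do not collide.
        var tempPath = Server.MapPath("~/App_Data/upload.tmp");

        // Append the raw request body (one chunk) to the end of the temp file.
        using (var target = new FileStream(tempPath, FileMode.Append, FileAccess.Write))
        {
            Request.InputStream.CopyTo(target);
        }

        return new HttpStatusCodeResult(200);
    }
}
Note that the JavaScript above fires all chunk requests without waiting for the previous one, so chunks can arrive out of order; in practice you would either send the next chunk from the onload handler or send an offset along with each chunk and write at that position.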
It can be implemented in different ways. If you are storing the files on the file server as plain files in the file system, and all of your servers are inside the same virtual network,
then it is better to create a shared folder on your file server, and once you receive a file at the web server, just save it into this shared folder, directly on the file server.
Here are the instructions on how to create shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system that all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles, and the path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent (NET USE has a /persistent:yes switch), and make sure you use a user with adequate permissions, and voila.
Related
I have a local folder which contains files and directories (more than 2000 files).
I uploaded this entire folder to my FTP server.
Now for example let's say my ftp folder is called FTPFolder and my local folder is called LOCALFolder. These two folders are exactly the same for now.
And let's say both folders contain a file called test.txt.
Now what I would like to do:
If I change the test.txt on the FTP, how could I detect it in C#?
Getting all local files and all FTP files and then comparing them just takes too long. Has anyone got another way of doing this?
Basically, the goal is to download every file on the FTP server that differs from its local counterpart.
The usual approach to synchronizing a local folder with an FTP folder is to compare file modification times.
Assuming that you are using the FtpWebRequest .NET class, this is actually not trivial to implement: it does not have any standard way to retrieve file modification times.
See Retrieving creation date of file (FTP).
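For a single file, the timestamp can be queried with FtpWebRequest's GetDateTimestamp method; a minimal sketch follows (the host, path, and credentials are placeholders). Doing this for each of more than 2000 files means one round trip per file, which is part of why this approach is awkward.
using System;
using System.Net;

class FtpTimestampExample
{
    static void Main()
    {
        // Ask the FTP server for the last-modified time of a single remote file (MDTM).
        var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/FTPFolder/test.txt");
        request.Method = WebRequestMethods.Ftp.GetDateTimestamp;
        request.Credentials = new NetworkCredential("user", "mypassword");

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Remote timestamp: {0}", response.LastModified);
        }
    }
}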
Even better would be to use a file checksum, but that's hardly possible.
See FTP: copy, check integrity and delete.
It would be much easier to use a third-party FTP client library that has better support for retrieving the modification time. And it would be even easier to use an FTP client library that supports synchronization out of the box.
For example with WinSCP .NET assembly you can use Session.SynchronizeDirectories method:
// Set up session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Ftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
};

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    // Synchronize files
    session.SynchronizeDirectories(
        SynchronizationMode.Local, @"C:\local\path", "/remote/path", false).Check();
}
WinSCP GUI can generate a code template for you.
(I'm the author of WinSCP)
Here are possible options:
Check the checksum - the best and fastest way, but the server might not offer checksums for the files in the first place.
Check the size of the file and its timestamp. Not ideal, but it might work (see the sketch below).
Other than that, I don't think there is anything else you can do, and it's not a C# issue per se.
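As a rough illustration of the second option, the remote size can be fetched with FtpWebRequest and compared against the local copy (a sketch only; the URL, path, and credentials are placeholders, and a timestamp check would still be needed on top of this):
using System.IO;
using System.Net;

class FtpSizeCheck
{
    // Returns true when the remote file's size differs from the local copy's size.
    static bool SizeDiffers(string ftpUrl, string localPath, NetworkCredential credentials)
    {
        var request = (FtpWebRequest)WebRequest.Create(ftpUrl);
        request.Method = WebRequestMethods.Ftp.GetFileSize;
        request.Credentials = credentials;

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            long remoteSize = response.ContentLength;         // size reported by the SIZE command
            long localSize = new FileInfo(localPath).Length;  // size of the local copy
            return remoteSize != localSize;
        }
    }
}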
if (Upload.ContentLength > 0)
{
    var FileName = Upload.FileName;
    var path = Path.Combine(Server.MapPath("~/Content/Images"), FileName);
    Upload.SaveAs(path);
    cement.ImageLocation = ("/Content/Images/" + FileName);
}
This code works perfectly with local IIS hosting, but has a problem on AppHarbor. The error that I got from the logging is this:
**Message**
An unhandled exception has occurred.
**Exceptions**
[DirectoryNotFoundException: Could not find a part of the path 'D:\Users\apphb9840cac8716388\app\_PublishedWebsites\RMQGrainsBeta\Content\Images\Capture.PNG'.]
I read some of the articles about this and got some information that the folder might be protected or something, so I removed the ReadOnly attribute on the Images folder and tried Git Bash, but the problem is that Git Bash doesn't recognize the change and won't push the changes to the folder.
I finally got an answer from AppHarbor. So the thing is, it is not the code that has the problem; it is the AppHarbor service that is blocking us from uploading to, or directly accessing, the file system. Here is what the tech from AppHarbor says:
About storing images: AppHarbor does not allow write access to the
application directory by default. This is because the local filesystem is
ephemeral and may be wiped on each deployment and/or during system
maintenance. For this reason it needs to be manually enabled on the
settings page, and it should only be used for temporary storage purposes such as caching.
I'd recommend using a cloud file storage solution such as Amazon S3. You can
upload files to S3 from your application or the client can upload files
directly to Amazon S3 by using presigned URLs. This will allow you to build
a scalable, distributed file storage feature that is suitable for cloud-based
platforms like AppHarbor. Let me know if you need help
implementing/architecting a solution that suits your needs!
Best,
Rune
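For reference, generating a presigned upload URL with the AWS SDK for .NET could look roughly like this (a sketch only - the bucket name, key, region, and expiry are assumptions, and this is not part of Rune's reply):
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class PresignedUrlExample
{
    static void Main()
    {
        // Credentials are taken from the environment / AWS profile here.
        var client = new AmazonS3Client(RegionEndpoint.USEast1);

        // Create a URL the browser can PUT the file to directly, bypassing the web server.
        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-upload-bucket",          // assumed bucket name
            Key = "uploads/Capture.PNG",              // assumed object key
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(15)  // short-lived upload window
        };

        string uploadUrl = client.GetPreSignedURL(request);
        Console.WriteLine(uploadUrl);
    }
}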
Sorry for wasting your time, guys, but thank you for looking at the problem.
I am writing a tool which will allow users to communicate with each other over the internet using a server and PHP files that I have set up. I have written it, but right now when I open the PHP files and pass arguments through the URL to create new files on my server, it opens the PHP file in my default browser. This is the code I am using right now to open the PHP files on my server:
// Requires: using System.Diagnostics;
private void ExecuteProcess(string filePath)
{
    // Starting a URL or document this way hands it off to the default handler - i.e. the browser.
    var process = new Process();
    process.StartInfo.FileName = filePath;
    process.Start();
}
I want to be able to open the files in a similar way without actually opening them in my browser. I have been googling around for a few hours, but whenever I try to use the methods that I find on the internet I get a 406 exception in Visual Studio, saying that the server cannot fulfill my request. My write permissions are set to read for these files - do I need to change these?
Thanks for helping a PHP noobie,
-I
I think you want to make an HTTP request to your server. Check the WebRequest class.
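A minimal sketch of what that could look like (the URL and the query-string parameters are made up for illustration); WebClient is a thin wrapper over WebRequest that makes a plain GET easy:
using System;
using System.Net;

class CallPhpScript
{
    static void Main()
    {
        // Hit the PHP script with its arguments in the query string, without opening a browser.
        using (var client = new WebClient())
        {
            string response = client.DownloadString(
                "http://example.com/createfile.php?user=alice&message=hello");
            Console.WriteLine(response); // whatever the PHP script prints
        }
    }
}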
When I used the WebRequest class, there was a 406 error, which meant that the server's acceptable headers were not compatible with the type of data I was requesting. By default, mod_security is turned on on Apache servers, and I just needed to disable it to allow me to download data with the WebRequest class. Unfortunately, the server is hosted by a third party, so I would have to contact the webmaster to turn this off. I have opted to just host my own server and avoid this hassle.
In my scenario, users are able to upload zip files to a.example.com
I would love to create a "daemon" which, at specified time intervals, will move (transfer) any zip files uploaded by the users from a.example.com to b.example.com.
From the info I have gathered so far:
The daemon will be an .ashx generic handler.
The daemon will be triggered at the specified time intervals via a plesk cron job
The daemon (thanks to SLaks) will consist of two FtpWebRequests (one for reading and one for writing).
So the question is, how could I implement step 3?
Do I have to read the whole file into a memory array and then try to write that to b.example.com?
How could I write the data I read to b.example.com?
Could I perform the reading and writing of the file at the same time?
No, I am not asking for the full code; I just can't figure out how to perform the reading and writing on the fly, without user interaction.
I mean, I could download the file locally from a.example.com and then upload it to b.example.com, but that is not the point.
Here is another solution:
Let ASP.NET on server A receive the file as a regular file upload and store it in directory XXX.
Have a Windows service on server A that checks directory XXX for new files.
Let the Windows service upload the file to server B using HttpWebRequest.
Let server B receive the file using a regular ASP.NET file upload page.
Links:
File upload example (ASP.Net): http://msdn.microsoft.com/en-us/library/aa479405.aspx
Building a windows service: http://www.codeproject.com/KB/system/WindowsService.aspx
Uploading files using HttpWebRequest: Upload files with HTTPWebrequest (multipart/form-data)
Problems you have to solve:
How to determine which files to upload to server B. I would use Directory.GetFiles in a Timer to find new files instead of using a FileSystemWatcher (see the sketch after this list). You need to be able to check whether a file has been uploaded previously (delete it, rename it, check a DB, or whatever suits your needs).
Authentication on server B, so that only you can upload files to it.
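A rough sketch of the polling part inside the Windows service (the folder path, the interval, and the way files are handled after upload are assumptions on my part):
using System;
using System.IO;
using System.Timers;

class UploadWatcher
{
    private const string WatchFolder = @"C:\Uploads\Incoming";  // directory XXX from step 1 (assumed path)
    private static readonly Timer PollTimer = new Timer(60000); // poll every minute

    static void Main()
    {
        PollTimer.Elapsed += OnTick;
        PollTimer.Start();
        Console.ReadLine(); // in a real Windows service this would be the service lifetime
    }

    static void OnTick(object sender, ElapsedEventArgs e)
    {
        // Directory.GetFiles on a timer instead of FileSystemWatcher, as suggested above.
        foreach (var file in Directory.GetFiles(WatchFolder, "*.zip"))
        {
            Console.WriteLine("Found new file: " + file);
            // Upload the file to server B here (HttpWebRequest, see the links above),
            // then delete/move/rename it so it is not picked up again on the next tick.
        }
    }
}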
To answer your questions - yes, you can read and write the files at the same time.
You can open an FtpWebRequest to server A and another FtpWebRequest to server B. On the request to server A you would ask for the file and get the response stream. Once you have the response stream, you read a chunk of bytes at a time and write that chunk of bytes to the server B request stream.
The only memory you would be using is the byte[] buffer in your read/write loop. Just keep in mind, though, that the underlying implementation of FtpWebRequest will download the complete FTP file before returning the response stream.
Similarly, you cannot send your FtpWebRequest to upload the new file until all bytes have been written. In effect, the operations happen synchronously: you call GetResponse, which won't return until the full file is available, and only then can you upload the new file.
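A minimal sketch of that read/write loop (host names, paths, and credentials are placeholders, and error handling is omitted):
using System.Net;

class FtpRelay
{
    static void Main()
    {
        // Download request against server A.
        var download = (FtpWebRequest)WebRequest.Create("ftp://a.example.com/uploads/file.zip");
        download.Method = WebRequestMethods.Ftp.DownloadFile;
        download.Credentials = new NetworkCredential("userA", "passwordA");

        // Upload request against server B.
        var upload = (FtpWebRequest)WebRequest.Create("ftp://b.example.com/uploads/file.zip");
        upload.Method = WebRequestMethods.Ftp.UploadFile;
        upload.Credentials = new NetworkCredential("userB", "passwordB");

        using (var response = (FtpWebResponse)download.GetResponse())
        using (var source = response.GetResponseStream())
        using (var target = upload.GetRequestStream())
        {
            // Copy a buffer at a time so only one chunk is held in memory.
            var buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                target.Write(buffer, 0, read);
            }
        }

        // Complete the upload by reading server B's response.
        using (var done = (FtpWebResponse)upload.GetResponse()) { }
    }
}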
References:
FTPWebRequest
Something you have to take into consideration is that a long-running web request (your .ashx generic handler) may be killed when the AppDomain is recycled. Therefore you have to implement some sort of atomic transaction logic in your code, and you should handle sudden disconnects and incomplete FTP transfers if you go that way.
Did you have a look at Windows Azure before? This cloud platform supports a distributed file system and has built-in atomic transactions. Plus, it scales nicely should your service grow fast.
I would make it pretty simple. The client program uploads the file to server A. This can be done very easily in C# with an FtpWebRequest.
http://msdn.microsoft.com/en-us/library/ms229715.aspx
I would then have a service on server A that monitors the directory where files are uploaded. When a file is uploaded to that directory, or at certain intervals, it simply copies the files over to server B. Again, this can be done via FTP or other means if they're on the same network.
You need some listener on the target domain - an FTP server running there - and on the client side you would use System.Net.WebClient and UploadFile or UploadFileAsync to send the file. Is that what you are asking?
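A short sketch of that call (the target URL, the credentials, and the local path are placeholders):
using System.Net;

class WebClientFtpUpload
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("user", "mypassword");

            // Uploads the local file to the FTP server on the target domain (STOR).
            client.UploadFile("ftp://b.example.com/uploads/file.zip", @"C:\temp\file.zip");
        }
    }
}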
It sounds like you don't really need a web service or handler. All you need is a program that will, at regular intervals, open an FTP connection to the other server and move the files. This can be done by any .NET program with the System.Net.WebClient class; it doesn't have to be a "web app". This other program could be a service, which could handle its own timing, or a simple app run by your cron job. If you need this to go two ways, for instance if the two servers are mirrors, you simply have the same app on the second box doing the same thing to upload files over to the first.
If both machines are in the same domain, couldn't you just do file replication at the OS level?
DFS
Set up keys if you are using Linux-based systems:
http://compdottech.blogspot.com/2007/10/unix-login-without-password-setting.html
Once you have the keys working, you can copy the file from system A to system B by writing regular shell scripts that do not need any user interaction.
Please refer to question: Resume in upload file control
Now, to find a solution to the above question, I want to work on it and develop a user control which can resume an HTTP file upload.
For this, I'm going to create a temporary file on the server until the upload is complete. Once the upload is completely done, I will just save it to the desired location.
The procedure I have thought of is:
Create a temporary file with a unique name (maybe a GUID) on the server.
Read a chunk of the file and append it to this temp file on the server.
Repeat step 2 until EOF is reached.
Now, if the connection is lost or the user clicks the pause button, the writing of the file is stopped and the temporary file is not deleted.
Clicking resume, or uploading the same file again, will check on the server whether the file exists and ask the user whether to resume or to overwrite. (I'm not sure how to check whether it's the same file, nor how to stop sending the chunks from client to server.)
Clicking resume will start from the point where the upload needs to continue and will append the remaining data to the file on the server. (Again, not sure how to do this.)
Questions:
Are these steps correct to achieve the goal, or do they need some modifications?
I'm not sure how to implement all these steps. :-( I need ideas, links...
Any help appreciated...
What you are trying to do is not easy, but it is doable. Follow these guidelines:
Write two functions on the client side using AJAX:
getUploadedPortionFromServer(filename) - this will ask the server if the file exists, and it should get an XML response from the server with the following info:
boolean (exists / does not exist), size (bytes), md5 (string)
The function will also run an MD5 over the same number of bytes of the local file as the size it got from the server.
If the MD5 is the same, you can continue sending from the point where the upload stopped;
if it is not the same, or the size == 0, start over.
uploadFromBytes(x) - Will depend on the 1st function.
On the server you have to write matching functions, which will check the needed information and send the response as XML to the client (a rough sketch follows at the end of this answer).
In order to have distinct filenames, you should use some sort of user tagging.
Are users logged in to your server? If so, use a hash of their username appended to the filename; this will help you distinguish between the files.
If not, use a cookie.
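A rough sketch of what the server-side status check could look like as an ASP.NET generic handler (the handler name, the temp-file location, and the XML field names are all assumptions on my part, not part of the answer above):
using System;
using System.IO;
using System.Security.Cryptography;
using System.Web;

public class UploadStatusHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    // Responds to e.g. /UploadStatus.ashx?filename=abc with exists/size/md5 of the partial file.
    public void ProcessRequest(HttpContext context)
    {
        string name = Path.GetFileName(context.Request.QueryString["filename"] ?? "");
        string tempPath = Path.Combine(context.Server.MapPath("~/App_Data/PartialUploads"), name);

        bool exists = File.Exists(tempPath);
        long size = exists ? new FileInfo(tempPath).Length : 0;
        string md5 = "";

        if (exists)
        {
            using (var hasher = MD5.Create())
            using (var stream = File.OpenRead(tempPath))
            {
                md5 = BitConverter.ToString(hasher.ComputeHash(stream)).Replace("-", "");
            }
        }

        context.Response.ContentType = "text/xml";
        context.Response.Write(string.Format(
            "<status><exists>{0}</exists><size>{1}</size><md5>{2}</md5></status>",
            exists, size, md5));
    }
}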