Write file to URL location using C#

I want to write a file to a virtual directory path in the same cloud.
For writing files locally we use:
File.WriteAllText(@"c:\temp\sample.text", text);
Similarly, I want to write to a network share:
File.WriteAllText(@"\\10.11.144.29\e$\projects\Map.text", text);
And to a virtual directory location:
File.WriteAllText(@"http://10.11.144.29/map/test.svg", text);
Is it possible to write to a URL location using C#? If possible, what class can be used?
Any help will be appreciated.

WebClient client = new WebClient();
//client.Credentials = new NetworkCredential("username", "password");
client.UploadFile("http://10.11.144.29/map/test.svg", "test.svg");
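Note that UploadFile sends the local file test.svg in an HTTP POST, so the URL must point at something that actually accepts and stores uploads (an upload handler, or a WebDAV-enabled directory); POSTing to a plain virtual directory will not create the file.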

The last option isn't possible as-is: you need an HTTP PUT or POST to send data to a URL, using the WebClient or HttpWebRequest classes or similar, and a server configured to accept the write.
Examples #1 and #2 you gave should be just fine, though, provided the running code has sufficient permissions (i.e. write access) at the given network location.
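If the server does accept writes (for example, IIS with WebDAV enabled on that virtual directory), a minimal sketch of the PUT route could look like this; the URL and credentials are the question's placeholders, and the SVG payload is made up:

using System.Net;

class UrlWriter
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Credentials = new NetworkCredential("username", "password");
            // UploadString sends the string as the request body using the given HTTP verb.
            // This only succeeds if the server is configured to accept PUT at this path.
            client.UploadString("http://10.11.144.29/map/test.svg", "PUT", "<svg></svg>");
        }
    }
}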

File.WriteAllText(Server.MapPath("~/temp/sample.text"), text); // Server.MapPath expects a virtual path, not a physical one

Related

Get modified files in FTP folder

I have a local folder which contains files and directories (>2000 files).
I uploaded this entire folder to my FTP server.
Now for example let's say my FTP folder is called FTPFolder and my local folder is called LOCALFolder. These two folders are exactly the same for now.
And let's say both folders contain a file called test.txt.
Now what I would like to do:
If I change the test.txt on the FTP, how could I detect it in C#?
Getting all local files and all FTP files and then comparing them just takes too long. Has anyone got another way of doing this?
Basically the goal is to download all files on the FTP which differ from their local counterparts.
The usual approach to synchronizing a local folder with an FTP folder is to compare file modification times.
Assuming that you are using the FtpWebRequest .NET class, this is actually not trivial to implement, as it has no efficient way to retrieve modification times for many files at once.
See Retrieving creation date of file (FTP).
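For illustration, a minimal sketch of the per-file route (the FTP MDTM command, exposed as GetDateTimestamp; the URL and credentials are placeholders). With over 2000 files this means one round trip per file, which is exactly why it does not scale:

using System;
using System.Net;

class FtpTimestampCheck
{
    // Returns the remote file's last-modified time via the FTP MDTM command.
    static DateTime GetRemoteModifiedTime(string fileUrl, NetworkCredential credentials)
    {
        // fileUrl is e.g. "ftp://example.com/FTPFolder/test.txt"
        var request = (FtpWebRequest)WebRequest.Create(fileUrl);
        request.Method = WebRequestMethods.Ftp.GetDateTimestamp;
        request.Credentials = credentials;
        using (var response = (FtpWebResponse)request.GetResponse())
        {
            return response.LastModified;
        }
    }
}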
Even better would be to use a file checksum, but that's hardly possible.
See FTP: copy, check integrity and delete.
It would be far easier to use a third-party FTP client library with better support for retrieving modification times, and easier still to use one that supports synchronization out of the box.
For example with WinSCP .NET assembly you can use Session.SynchronizeDirectories method:
// Setup session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Ftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
};

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    // Synchronize files
    session.SynchronizeDirectories(
        SynchronizationMode.Local, @"C:\local\path", "/remote/path", false).Check();
}
WinSCP GUI can generate a code template for you.
(I'm the author of WinSCP)
Here are the possible options:
Check a checksum - the best and fastest way, but one may not be available for the files in the first place.
Check the size of the file and its timestamp. Not ideal, but it might work.
Other than that, I don't think there is anything you can do, and it's not a C# issue per se.
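A minimal sketch of the size check with FtpWebRequest (URL and credentials are placeholders); note the FTP SIZE command is widely, though not universally, supported:

using System.IO;
using System.Net;

class FtpSizeCheck
{
    // True if the remote file's size differs from the local copy's.
    static bool SizeDiffers(string fileUrl, string localPath, NetworkCredential credentials)
    {
        var request = (FtpWebRequest)WebRequest.Create(fileUrl);
        request.Method = WebRequestMethods.Ftp.GetFileSize;
        request.Credentials = credentials;
        using (var response = (FtpWebResponse)request.GetResponse())
        {
            // For GetFileSize, ContentLength carries the SIZE reply.
            return response.ContentLength != new FileInfo(localPath).Length;
        }
    }
}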

Efficiently pass files from web server to file server

I have multiple web servers and one central file server inside my data center,
and all my web servers store the user-uploaded files on the central internal file server.
I would like to know: what is the best way to pass the files from web server to file server in this case?
As suggested, I am adding more details to the question.
The solution I came up with was: after receiving files from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you would want to use the HTTP protocol when you have much better ways to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because that is the only option our browsers give us. But between your servers (I am assuming Windows, as the question is tagged IIS), I feel you should use the SMB protocol to move data. It is orders of magnitude faster and much more efficient to transfer the same data over SMB than over HTTP.
And with the SMB protocol you do not have to write any code or complex scripts. As one of the other answers notes, you can just issue a simple copy command and it will happen for you.
So, summarizing the options for you (in order of my preference):
1. Let the files be uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script which copies the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\<FILEGUID>\uploaded.file. The same PowerShell script can delete each file once it has been moved to the other server successfully.
E.g. the script can be this simple, and easy to run as a Windows scheduled task:
# $Source is the upload folder from option 1 (assumed)
$Source = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the files once the copy is done :)
2. In the ASP.NET application, you can save the file directly to the network location: in the SaveAs call, give the network path itself. You have to make sure this network share is accessible to the IIS worker process and that it has write permission. Also, to my understanding, ASP.NET saves the file to a temporary location first (you have no control over this if you are using HttpPostedFileBase or FormCollection). More details here.
You can even run this asynchronously so that your requests will not be blocked:
if (FileUpload1.HasFile)
{
    // Save the uploaded file straight to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use an HTTP POST. This is the worst design possible: you first read the contents, then transfer them chunked to the other server, where you have to set up another web service which receives the file; then you have to read the file from the request stream and save it to your location again. I am not sure you need to do this at all.
Let me know if you need more details on any of the listed methods.
Or you just write it to a folder on the web servers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your web servers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase, then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile) {
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer
Creating a file on network location in asp.net
for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
the solution I came up with was: after receiving files from the user at the web server, I should just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server)
I would suggest not posting the file all at once - then it is fully in memory, which is not needed.
You could post the file in chunks using AJAX. When a chunk arrives at your server, just append it to the file (a server-side sketch follows the snippet below).
With the File API, you can read the file in chunks in JavaScript.
Something like this:
/** upload file in chunks */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var end = Math.min(start + chunkSize, file.size);
        var chunk = file.slice(start, end);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // Check whether all chunks have arrived; send the filename in the first/last request.
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(chunk);
        start = end;
    }
}
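On the receiving side, each chunk can be streamed straight onto the end of the target file instead of being buffered whole. A hypothetical ASP.NET MVC counterpart (the controller name matches the /FileUpload route above; the target path is an assumption, and in practice you would also send an offset or sequence number with each chunk so out-of-order arrivals are handled):

using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    [HttpPost]
    public ActionResult Index()
    {
        // Assumed target path; a real implementation would derive it per upload.
        var path = Server.MapPath("~/App_Data/uploaded.file");
        using (var target = new FileStream(path, FileMode.Append, FileAccess.Write))
        {
            // Append this chunk's raw bytes without loading the whole file into memory.
            Request.InputStream.CopyTo(target);
        }
        return new HttpStatusCodeResult(200);
    }
}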
It can be implemented in different ways. If you are storing the files on the file server as plain files in a file system, and all of your servers are inside the same virtual network,
then it is better to create a shared folder on your file server and, once you receive files at the web server, save them into this shared folder directly on the file server.
Here are the instructions for creating shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system that all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles. The path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent, and make sure you use a user with adequate permissions, and voila.

Write a simple text file to IBM iSeries IFS from my ASP.NET web app

So part of my job is to write a file to the iSeries IFS, at this path in particular: \\ServerIPAddress\home\test\
I have an ASP.NET web application to do this, and this is the code (C#) I use to write a simple text file to that directory:
string filename = @"\\SomeIPAddress\home\test\test.txt";
byte[] file = Encoding.ASCII.GetBytes("hello world");
using (FileStream fs = new FileStream(filename, FileMode.OpenOrCreate))
{
    fs.Write(file, 0, file.Length);
}
When executing this code, the program gives me "Access Denied" error
Exception Details: System.UnauthorizedAccessException: Access to the path '\\SomeIPAddress\home\test\test.txt' is denied.
ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity ...
I can access the directory \\SomeIPAddress\home\test in Windows File Explorer using my IBM UID and password, and I can create and edit a text file there manually as well.
I know it must have something to do with granting access rights to my ASP.NET app by providing that UID and password, but I can't quite figure it out, and I have been stuck for a few days.
Let me know if you need any extra information. Thanks for the help
Two solutions.
Best practice: set up the iSeries to use the same domain controller as everything else on your network. Then it will recognize the ASP.NET account requesting access and can allow it access to the IFS.
or
Set up a mapped drive on the machine hosting the ASP.NET page that points to the IFS. The mapped drive will have its credentials saved in the vault.
Thanks Mike Wills for leading me to a solution. This is the code I use to connect to the network share using a P/Invoke WNet connection, which is from this answer here:
DisconnectFromShare(@"\\server-a\DBFiles", true); // Disconnect in case we are currently connected with other credentials
ConnectToShare(@"\\server-a\DBFiles", username, password); // Connect with the new credentials
if (!Directory.Exists(@"\\server-a\DBFiles\" + Test))
    Directory.CreateDirectory(@"\\server-a\DBFiles\" + Test);
File.Copy(@"C:\Temp\abc.txt", @"\\server-a\DBFiles\abc.txt"); // Destination must include the file name
DisconnectFromShare(@"\\server-a\DBFiles", false); // Disconnect from the server
Thanks guys for the help.
I think this is what you are looking for. Sorry, I don't have access right now to my code at work that I know works. It was really simple to do and worked perfectly.
If you want my working code, please let me know and I'll pull that tomorrow.
UPDATE: Sorry, I didn't post my code earlier, we are swamped at work.
NetworkCredential nc = new NetworkCredential("user", "password", "domain");
using (new NetworkConnection(@"\\server\directory1\directory2", nc))
{
    // your IO logic here
}
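(For reference: NetworkConnection is not a framework class. It is the small IDisposable wrapper around the WNetAddConnection2 Win32 API that circulates on Stack Overflow, the same P/Invoke approach as the accepted solution above, and it drops the mapped connection again when disposed.)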

C# Get Full File Path

I have an ASP FileUpload control and I am uploading:
C:\Documents and Settings\abpa\Desktop\TTPublisher\apache-tomcat-6.0.26\webapps\ttpub\WEB-INF\classes\org\gtfs\tmp\GTFS_Rail\routes.txt
What is the C# code to grab this entire string using the code below:
var pathOfCsvFile = Server.MapPath(ImportRoutes.FileName);
var adapter = new GenericParsing.GenericParserAdapter(pathOfCsvFile);
DataTable data = adapter.GetDataTable();
I know that Server.MapPath needs to change.
UPDATE:
Using System.IO.Path.GetFullPath gave me the below output:
pathOfCsvFile = "C:\\Program Files\\Common Files\\Microsoft Shared\\DevServer\\10.0\\routes.txt"
You are mixing up client and server behavior, which is easy to do when you are testing locally. The issue you are having is that the FileUpload control (and HTML file uploading in general) is specifically designed not to provide you with the full path to the file; that would be a privacy breach. What it is designed to give you is the binary data of the uploaded file itself. Specifically, you should query the properties on the FileUpload control: FileBytes or FileContent.
Just to further clarify the issue, what would happen if the browser user was actually on a physically different machine from the web server (the usual case)? What good would the full path of the file on the client machine be to you on the server?
Server.MapPath will return the physical path to a file in or below the application root. If that path you list is outside the application root, Server.MapPath will not work.
You can map a virtual directory to a folder you want to use to hold file uploads, which you can then discover with Server.MapPath.
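Putting the two answers together, a hypothetical code-behind sketch (reusing the question's ImportRoutes control and GenericParserAdapter; the ~/App_Data folder and the handler name are assumptions) that never needs the client-side path:

using System;
using System.Data;
using System.IO;

public partial class ImportPage : System.Web.UI.Page
{
    protected void Import_Click(object sender, EventArgs e)
    {
        if (ImportRoutes.HasFile)
        {
            // Path.GetFileName strips any client path, leaving just "routes.txt".
            var fileName = Path.GetFileName(ImportRoutes.FileName);
            // Save below the application root so Server.MapPath can resolve it.
            var pathOfCsvFile = Server.MapPath("~/App_Data/" + fileName);
            ImportRoutes.SaveAs(pathOfCsvFile);
            var adapter = new GenericParsing.GenericParserAdapter(pathOfCsvFile);
            DataTable data = adapter.GetDataTable();
        }
    }
}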

Difference between FileStream and WebClient

So, I'm actually trying to set up a WOPI host for a web project.
I've been working with this sample (the one from Shawn Cicoria, if anyone knows it), and he provides a whole code sample which shows how to build the links to use your Office Web Apps server with some files.
My problem here is that his sample works with files that are ON the OWA server, and I need it to work with online files (like http://myserv/res/test.docx). So where he reads his file content, he's using this:
var stream = new FileStream(myFile, FileMode.Open, FileAccess.Read);
responseMessage.Content = new StreamContent(stream);
But that doesn't work on "http" files, so I changed it to this:
byte[] tmp;
using (WebClient client = new WebClient())
{
    client.Credentials = CredentialCache.DefaultNetworkCredentials;
    tmp = client.DownloadData(name);
}
responseMessage.Content = new ByteArrayContent(tmp);
which IS compiling. And with this sample, I managed to open Excel files in my Office Web Apps, but Word and PowerPoint files aren't opened. So, here's my question:
Is there a difference between these two methods which could alter the content of the files that I'm reading, despite the fact that the WebClient allows "online reading"?
Sorry for the unclear post, it's not that easy to explain such a problem x) I did my best.
Thanks for your help!
Is there a difference between these two methods which could alter the content of the files that I'm reading, despite the fact that the WebClient allows "online reading"?
FileStream opens a file handle to a file located on a local disk, or on a remote disk elsewhere on the network. When you open a FileStream, you are directly manipulating that particular file.
WebClient, on the other hand, is a wrapper around the HTTP protocol. Its responsibility is to construct HTTP request and response messages, allowing you to work with them conveniently. It has no direct knowledge of a resource such as a file, or of where it is located; all it knows how to do is construct messages complying with the specification, send a request, and expect a response.
