I have a solution that grabs two files and compares them to see whether they are the same. The homePath and awayPath both point to files on my computer, but I want to be able to use the same solution when the files are on a different server. I will be able to test by setting homeServer and awayServers to localhost. How can I make the solution use the file from the remote source server?
edit: I am using localhost for testing purposes before the solution is deployed.
This is my current solution:
public class blarto
{
    private Server homeServer;
    private string homePath;
    private ServerList awayServers;
    private string awayPath;

    private bool ExecuteCommand()
    {
        if (File.Exists(awayPath))
        {
            // Compute a checksum for each file and compare the results.
            var homeSum = GetSum(homePath);
            var awaySum = GetSum2(awayPath);
            if (homeSum != awaySum)
            {
                Console.WriteLine("they are different.");
                return false;
            }
            else
            {
                Console.WriteLine("they are the same.");
                return true;
            }
        }
        else
        {
            Console.WriteLine("The destination file does not exist.");
            return false;
        }
    }
}
Assuming you have access to the servers, you could use a UNC path. Something like:
\\your-server-name\share\path\to\file.txt
or
\\your-server-name.domain.com\c$\path\to\file.txt
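For instance, a minimal sketch of the comparison over a UNC path, assuming the account running the code has read access to the share (the share and file names are placeholders):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class FileComparer
{
    // Compares a local file with one on another server, e.g.
    // FilesMatch(@"C:\data\file.txt", @"\\your-server-name\share\path\to\file.txt")
    public static bool FilesMatch(string homePath, string awayPath)
    {
        return ComputeHash(homePath).SequenceEqual(ComputeHash(awayPath));
    }

    private static byte[] ComputeHash(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return sha.ComputeHash(stream);
        }
    }
}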
Otherwise, your web server is going to have to serve up the files. You'd have to build a small single-page web application or HTTP handler that takes a relative path, looks at the appropriate place on the file system it's running on, loads the file as a Stream or byte array, and writes it out to the response stream (with appropriate content-type and length headers). IIS will need to be able to handle the MIME type of the document.
The client will have to hold onto it in memory or write it somewhere temporarily, which may force you to rethink your CRC implementation. All of this is amazingly insecure (you could theoretically give everyone access to every file on that server).
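A bare-bones sketch of such a handler (the root folder and the path query parameter are placeholders, and the security caveats above still apply):

using System.IO;
using System.Web;

// ASP.NET handler that loads a requested file and writes it to the response.
public class FileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string root = @"D:\SharedFiles\";
        // Strip any directory part so callers can't walk the file system.
        string name = Path.GetFileName(context.Request.QueryString["path"]);
        byte[] bytes = File.ReadAllBytes(Path.Combine(root, name));

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Length", bytes.Length.ToString());
        context.Response.BinaryWrite(bytes);
    }

    public bool IsReusable { get { return true; } }
}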
Alternatively, you could make the root folder of the files you need to compare a virtual directory, and then allow that directory to be browsed (an IIS setting). Then something like http://localhost/root/path/to/file.txt might work, but again, not secure at all.
It sounds to me like the file is on your localhost for testing, but will be on the server once you deploy.
If that is the case, start with the relative URL of the file: _srcFile = "/uploads/testfile.txt";
From that, get the real location using Server.MapPath:
var testFile = Server.MapPath(_srcFile);
Note: MapPath is also defined in HttpServerUtility.
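Tying that back to the comparison code, a sketch (the upload folder layout is an assumption):

// Both URLs are relative, so MapPath resolves them against whichever
// server the code is running on - localhost now, the real server later.
string homePath = Server.MapPath("/uploads/source/testfile.txt");
string awayPath = Server.MapPath("/uploads/dest/testfile.txt");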
I am trying to change my directory, which is on my local C: drive, but I get the error in the title. Is there any way aside from using Server.MapPath? I'm using the ZipOutputStream NuGet package.
I want my directory to be on C: instead of inside the project folder.
public FileResult DownloadZipFileSig(string FileId)
{
    var fileName = "FilesDL.zip";
    var tempOutPutPath = Server.MapPath(Url.Content("C:/Users/SDILAP2/Desktop/ID_Esig_Files")) + fileName;
    using (ZipOutputStream s = new ZipOutputStream(System.IO.File.Create(tempOutPutPath)))
    {
        s.SetLevel(9);
        byte[] buffer = new byte[4096];
        List<string> stringList = FileId.Split(',').ToList();
        List<string> tempList = new List<string>();
        foreach (string str in stringList)
        {
            if (System.IO.File.Exists(Server.MapPath("C:/Users/SDILAP2/Desktop/ID_Esig_Files/" + str + ".jpeg")))
            {
                tempList.Add(Server.MapPath("C:/Users/SDILAP2/Desktop/ID_Esig_Files/" + str + ".jpeg"));
            }
        }
        stringList = tempList;
        for (int i = 0; i < stringList.Count; i++)
        {
            ZipEntry entry = new ZipEntry(Path.GetFileName(stringList[i]));
            entry.DateTime = DateTime.Now;
            entry.IsUnicodeText = true;
            s.PutNextEntry(entry);
            using (FileStream fs = System.IO.File.OpenRead(stringList[i]))
            {
                int sourceBytes;
                do
                {
                    sourceBytes = fs.Read(buffer, 0, buffer.Length);
                    s.Write(buffer, 0, sourceBytes);
                } while (sourceBytes > 0);
            }
        }
        s.Finish();
        s.Flush();
        s.Close();
    }
    return File(finalResult, "application/zip", fileName);
}
You might not quite be grasping how web URLs work, and how Server.MapPath() is to be used.
Web users:
When you have a web-based URL, then all HTML markup in a page, and even user-supplied URLs, are so-called web based.
So, if you have a folder off the root of your web site, say called MyUpLoads,
then that is just a folder in the web site's path names.
e.g.:
www.mywebsite/MyUpLoads/cat.jpg
And if you write HTML markup, then you can and could provide a URL to the above picture, or say with an HTML image control, you could set the ImageUrl or "source" (src) to that file.
And if you are using IIS (and not IIS Express), then of course you can add what is called a virtual folder, say some big server drive on ANOTHER computer on the same network.
So, that virtual folder could be anywhere on your network, and of course AGAIN for web HTML or web URLs, again you use this format:
www.mysite/MassiveFolder/info.pdf
or maybe
localhost:5403/MyUpLoads/cat.jpg
However, in code behind?
ANY code behind (C# or VB.NET) ALWAYS uses plain-Jane WINDOWS file paths.
These are valid, fully qualified Windows file names.
That means that code behind is 100% free to open/read/use/see/play with ANY file on the computer, and even any file on the computer network.
So when you use
Server.MapPath("~/MyUpLoads/cat.jpg")
Then the above is translated into a local plain-Jane DOS/Windows file path!
The above may well become
C:\Users\AlbertKallal\source\repos\CSharpWebApp\MyUpLoads\cat.jpg
So keep in mind:
Web URLs: HTML/ASP markup in a page uses web-based syntax/paths.
Computer paths: plain-Jane full path names, like all Windows software uses.
So, in your case?
var fileName = "FilesDL.zip";
var tempOutPutPath = @"C:\Users\SDILAP2\Desktop\ID_Esig_Files\" + fileName;
So you don't need nor want to use Server.MapPath, since that is ONLY for a given HTML or web-based URL that you want to translate into the local computer file system path.
Since your path name(s) are already in that format, no translation is required.
In fact, keep in mind that you can use this to your advantage.
ANY folder (or virtual folder) inside the web site will appear in your valid web-based URLs and path names.
However, you might have some PDFs or sensitive documents. So move that folder OUT of the root or web project folders.
Now no valid URLs to those files exist, or are even allowed.
However, code behind? It can run, see and use ANY file on your computer - you use code behind to get those files - but the web side of things has NO ability to use, see or get them. And you can still do things like provide a download button: your code behind can fetch the file, read it, and pump it out to the end user (stream the file).
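As a rough illustration, a download button's code behind could pump out a file from a folder outside the web site like this (the folder and file name are placeholders):

protected void btnDownload_Click(object sender, EventArgs e)
{
    // This folder is OUTSIDE the web root, so no web URL can reach it,
    // but code behind can read it just fine.
    string folder = @"D:\SecureDocs\";
    string fileName = Path.GetFileName("report.pdf");

    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
    // Streams the file to the client without loading it all into memory.
    Response.TransmitFile(Path.Combine(folder, fileName));
    Response.End();
}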
So you only need (have) to use the Server.MapPath function WHEN the URL comes from the web site or HTML markup. This will translate that web-based URL into a regular, good old-fashioned, fully qualified Windows file path name.
However, if you already have that full Windows path name, then no URL-to-file-path translation is required.
So, for the most part, your code behind can look at, see, grab and play with files on the server. Web users and web-based URLs MUST stay inside the folders of the web site, but no such restriction exists for the code behind.
Now, when the code is deployed to a web server, often some file security rights are in place, but as a general rule that web code behind is NOT limited nor restricted to JUST the folders in the web site. Those valid URLs are a restriction for users and web browsers, and as noted, a folder outside of the web site is often used for security purposes, since no valid web-based path can even resolve to a file outside of the root starting folder of the web site.
So for those existing files, you don't need Server.MapPath.
I'm attempting to make a basic .NET API for managing a collection of media (images and videos).
I have configured the webroot to be a folder called "site", and within that folder is a folder called "media" where these files are stored. I can access a test media file that is saved in /site/media/Smush.jpg by loading https://localhost:5001/site/media/smush.jpg - this serves the image as expected.
I have created a method that receives a POST request containing form data from my frontend, and this method saves the file to the webroot using a FileStream; code below:
[HttpPost]
[Route("/media/add")]
public async Task<HttpResponseMessage> MediaAdd()
{
    try
    {
        //get the form
        var form = HttpContext.Request.Form;
        //if there's a route, add it into the filepath, otherwise leave it out and have the filepath go straight to media (this prevents an exception if route is blank)
        string filePath = form["route"] == "" ? Path.Combine(_hostingEnvironment.WebRootPath, "media") : Path.Combine(_hostingEnvironment.WebRootPath, "media", form["route"]);
        //get the first (should be only) image - DO WE WANT TO BE ABLE TO ADD MULTIPLE IMAGES? PROBABLY TBH
        IFormFile image = form.Files.First();
        if (image.Length > 0)
        {
            //check the directory exists - create it if not
            if (!Directory.Exists(filePath))
            {
                Directory.CreateDirectory(filePath);
            }
            using (Stream fileStream = new FileStream(Path.Combine(filePath, form["filename"]), FileMode.Create))
            {
                await image.CopyToAsync(fileStream);
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
        }
        else
        {
            return new HttpResponseMessage(HttpStatusCode.BadRequest);
        }
    }
    catch (Exception)
    {
        return new HttpResponseMessage(HttpStatusCode.BadRequest);
    }
}
My frontend submits a route, filename and the media file, and this is used to save the image. This all works fine; I can submit an image with the path "test" and the name "test.jpg", and the API correctly stores the file at /site/media/test/test.jpg. I can view the file in the solution and see a preview of the image, as with Smush.jpg.
However, attempting to load https://localhost:5001/site/media/test/test.jpg results in a 404. Why is this the case? Can I not add files into the webroot through code and have them be accessible as static files as if I added them to the solution in my IDE? Are there any alternative ways of handling this?
I am using .NET 5.0, and have
app.UseStaticFiles(); in Configure() in Startup.cs.
Sorry if this is a duplicate, but I couldn't find anything else like this.
EDIT:
On checking things again, it seems like rather than my files being at https://localhost:5001/site/media, they are simply in https://localhost:5001/media. I am not sure how I was able to access Smush.jpg at https://localhost:5001/site/media/Smush.jpg before.
It seems as though the webroot is not included as part of a URL to access files within it.
As it is now, it does what I was looking for.
Well, first a security concern, as #Heinzi also pointed out...
string filePath = form["route"] == "" ? Path.Combine(_hostingEnvironment.WebRootPath, "media") : Path.Combine(_hostingEnvironment.WebRootPath, "media", form["route"]);
What if the user sends form.route == "../../" and, instead of an image, overwrites the appsettings.json file?
Keep that in mind if you're planning to release this code to a production environment, and make sure you only accept image files.
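For instance, a minimal guard inside MediaAdd might look like this (the rejection behavior is just one reasonable choice):

// Resolve the requested path and make sure it stays under the media root.
string mediaRoot = Path.Combine(_hostingEnvironment.WebRootPath, "media");
string requested = Path.GetFullPath(Path.Combine(mediaRoot, form["route"]));
if (!requested.StartsWith(mediaRoot, StringComparison.OrdinalIgnoreCase))
{
    // Input such as "../../" would escape the root - reject it.
    return new HttpResponseMessage(HttpStatusCode.BadRequest);
}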
On the other hand, if you are serving static files from a folder other than wwwroot, you need to register that folder explicitly.
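A minimal sketch, assuming the media files live in a site/media folder under the content root and should be served at /site/media (env here is the IWebHostEnvironment passed to Configure):

using Microsoft.Extensions.FileProviders;

app.UseStaticFiles(); // still serves wwwroot
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(
        Path.Combine(env.ContentRootPath, "site", "media")),
    RequestPath = "/site/media"
});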
Why the 404
It makes sense: your request falls under the controller/action paths. When you go under the site URL, the engine does the following:
When you request https://localhost:5001/site/media/test/test.jpg, the routing tries to find a media controller and a test action. It is not looking for static files on the filesystem. Since there is no such controller/action pair, it finds nothing and thus returns 404 Not Found.
If you saved the files under a path outside of the mapped routes, e.g. https://localhost:5001/static/media/test.jpg, you would be able to access them.
Look inside your code for:
MapHttpRoute
which is used to configure how controller actions that are not decorated with the [Route] attribute are identified.
Security concern
When you want to upload a file, you should consider a better solution than one that directly accesses your filesystem.
Possible options:
Blob storage on the cloud
Database blobs
Don't forget to sanitize the input with an antivirus or some similar solution.
I am currently working on a 'download file' implementation using Web API 2.
However, as the files that can be downloaded are NOT stored in the database, I am passing in the full file path as the parameter for identification.
It seems the problem with this approach is that the filePath contains characters that are invalid for a URI... Has anyone got any suggestions to resolve this or an alternate approach?
Download file method:
[HttpGet]
[Route("Files/{*filePath}")]
public HttpResponseMessage Get([FromUri]string filePath)
{
    try
    {
        var file = new FileInfo(filePath);
        byte[] bytes = System.IO.File.ReadAllBytes(filePath);
        var result = Request.CreateResponse(HttpStatusCode.OK);
        result.Content = new ByteArrayContent(bytes);
        result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
        // FileInfo.Name already includes the extension.
        result.Content.Headers.ContentDisposition.FileName = file.Name;
        return result;
    }
    catch (Exception ex)
    {
        return Request.CreateResponse(HttpStatusCode.InternalServerError, ex);
    }
}
Requiring the client to put the full path in the URI (even if it were encoded so that it only contains valid characters for the URI) implies that you may be publishing these paths somewhere... this is not a great idea for a few reasons:
Security - full Path Disclosure and associated Relative Path Traversal
i.e. what's to stop someone passing in the path to a sensitive file (e.g. your web.config file) and potentially obtaining information that could assist with attacking your system?
Maintainability
Clients may maintain a copy of a URI for reuse or distribution - what happens if the file paths change? Some related conversation on this topic here: Cool URIs don't change
My suggestion - you don't have to put the files themselves in a database, but put a list of files in a database, and use a unique identifier in the URL (e.g. perhaps a slug or GUID). Look up the identifier in the database to discover the path, then return that file.
This ensures:
Nobody can read a file that you haven't indexed and determined is safe to be downloaded
If you move the files you can update the database and client URIs will not change
And to respond to your original question, you can easily ensure the unique identifier is only made up of URI-safe characters.
Once you have the database, over time you may also find it useful to maintain other metadata in the database, such as who uploaded the file, when, who downloaded it, and when, etc.
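A rough sketch of that approach (FileEntry and _repository.GetFileById are hypothetical stand-ins for whatever data access you use):

[HttpGet]
[Route("Files/{id:guid}")]
public HttpResponseMessage Get(Guid id)
{
    // Look up the public identifier; only indexed files can be downloaded.
    FileEntry entry = _repository.GetFileById(id);
    if (entry == null)
        return Request.CreateResponse(HttpStatusCode.NotFound);

    byte[] bytes = System.IO.File.ReadAllBytes(entry.PhysicalPath);
    var result = Request.CreateResponse(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(bytes);
    result.Content.Headers.ContentDisposition =
        new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
        {
            FileName = entry.DisplayName
        };
    return result;
}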
I have to upload a file via FTP to ftp://ftp.remoteServer.com
My root directory on remoteServer contains an "upload" and a "download" folder. I need to put my file in the "upload" directory. But on log in, the server automatically puts me in the "download" folder.
I tried doing this:
string serverTarget = "ftp://ftp.remoteServer.com/";
serverTarget += "../upload/myfile.txt";
Uri target = new Uri(serverTarget);
FtpWebRequest ftp = (FtpWebRequest)WebRequest.Create(target);
ftp.Method = WebRequestMethods.Ftp.UploadFile;
using (Stream requestStream = ftp.GetRequestStream())
{
    // Do upload here
}
This code fails with: (550) File unavailable (e.g., file not found, no access)
I debugged the code, and target.AbsoluteUri returns as ftp://ftp.remoteServer.com/upload instead of ftp://ftp.remoteServer.com/../upload (missing the ..)
If I put ftp://ftp.remoteServer.com/../upload in a browser, I can log in and verify this is the correct place where I want to put my file.
How can I get the FtpWebRequest to go to the correct place?
I believe you can encode the dots as %2E to keep the dots in your URI.
So something like:
string serverTarget = "ftp://ftp.remoteServer.com/%2E%2E/upload/myfile.txt";
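Putting that together, an upload sketch (the credentials are placeholders, and note that some .NET Framework versions normalize the escaped dots away unless the URI parser is configured not to):

// %2E%2E keeps the ".." from being collapsed during URI normalization.
Uri target = new Uri("ftp://ftp.remoteServer.com/%2E%2E/upload/myfile.txt");
FtpWebRequest ftp = (FtpWebRequest)WebRequest.Create(target);
ftp.Method = WebRequestMethods.Ftp.UploadFile;
ftp.Credentials = new NetworkCredential("user", "password");

byte[] payload = File.ReadAllBytes("myfile.txt");
using (Stream requestStream = ftp.GetRequestStream())
{
    requestStream.Write(payload, 0, payload.Length);
}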
Try this:
string serverTarget = "../upload/myfile.txt";
Uri uri = new Uri(serverTarget, UriKind.Relative);
Andy Evans' comment is correct.
Consider the URI: http://ftp.myserver.com/../. The .. means, "take me to the parent of this directory". But there is no parent! So when you derive the absolute URL, you're going to end up with http://ftp.myserver.com/. There is nothing else that the parser can do.
I think the problem is with the configuration of your FTP server. I assume that the directory structure looks something like:
ftproot
    upload
    download
It looks like the FTP service is automatically logging you in to /ftproot/download. That is, the URI ftp.myserver.com gets mapped to /ftproot/download on the local file system. If that's the case, no amount of fiddling with the URI is going to get you anywhere. If the URI root is mapped to the download directory, there is no way you can, using the .. syntax, "go up one level and then down."
Are you able to upload using an FTP client such as Filezilla, or perhaps the Windows FTP command line tool? If so, what are the steps you take to do it? Can you make your code do the same thing?
I have an asp.net mvc app with a route that allows users to request files that are stored outside of the web application directory.
I'll simplify the scenario by just telling you that it's going to ultimately confine them to a safe directory to which they have full access.
For example:
If the user (whose ID is 100) requests:
http://mysite.com/Read/Image/Cool.png
then my app is going to append "Cool.png" to "C:\ImageRepository\Users\100\" and write those bytes to the response. The worker process has access to this path, but the anonymous user does not. I already have this working.
But will some malicious user be able to request something like:
http://mysite.com/Read/Image/..\101\Cool.png
and have it resolve to
"C:\ImageRepository\Customers\101\Cool.png"
(some other user's image?!)
Or something like that? Is there a way to make sure the path is clean, such that the user is constrained to their own directory?
How about
var fileName = System.IO.Path.GetFileName(userFileName);
var targetPath = System.IO.Path.Combine(userDirectory, fileName);
That should ensure you get a simple filename only.
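For example:

// The directory part of the malicious input is discarded.
var fileName = System.IO.Path.GetFileName(@"..\101\Cool.png"); // "Cool.png"
var targetPath = System.IO.Path.Combine(@"C:\ImageRepository\Users\100\", fileName);
// => C:\ImageRepository\Users\100\Cool.png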
Perhaps you should verify that the path starts with the user's directory path?
e.g. "C:\ImageRepository\Customers\100\"
You should also normalize the paths to uppercase letters when comparing them.
The safest way, if it is an option (you are using Windows auth), is to make it a non-issue by using Active Directory rights on the folders, so it doesn't matter if the user attempts to access a directory that is not valid.
Absent that, store the files so that the path is abstracted from the user. That is, use whatever name the user provides as a lookup in a table that has the REAL path to the file.
Canonicalization protection is tricky business, and it is dangerous to try to outthink a potential attacker.
Using the Request.MapPath overload is one way to check this:
try
{
    // Throws HttpException if the input maps outside the application
    // (the final argument disallows cross-application mapping).
    string mappedPath = Request.MapPath(inputPath.Text, Request.ApplicationPath, false);
}
catch (HttpException)
{
    // do exception handling
}
You could also split the string on the slashes and check that the user-name segment matches.
To also be able to include a subdirectory in the path you can use:
string SafeCombine(string basePath, string path)
{
    // GetFullPath collapses any ".." segments before the comparison.
    string testPath = Path.GetFullPath(Path.Combine(basePath, path));
    if (testPath.StartsWith(basePath, StringComparison.OrdinalIgnoreCase))
        return testPath;
    throw new InvalidOperationException();
}