Not downloading to a network drive on a particular system? - C#

I am using the code below in a service for automatic downloading. It works on my system and on our network drive, but when I installed it on the client PC it does not work: it downloads to every folder except the server's shared location.
try
{
    filename = @"N:\Shared\tna related\tna (for processing)\TNA DATA\Timesuite\LOC" + filename + ".csv";
    StreamWriter file = new StreamWriter(filename);
    file.WriteLine(sb.ToString());
    file.Close();
    TraceImplogs.TraceLTService("Download Completed");
}
catch (Exception es)
{
    TraceImplogs.TraceLTService("error 2" + es.Message);
}
Error shown on the client PC:
Could not find a part of the path 'N:\Shared\tna related\tna (for processing)\TNA DATA\DXB2018-05-12.csv'.

Related

Copying the folder from server to the local directory

I am developing software for downloading a website in C#, but I am having trouble copying a folder from the server to a local directory. I am using the following code for this purpose:
public static void CopyFilesRecursively(DirectoryInfo source, DirectoryInfo target)
{
    try
    {
        foreach (DirectoryInfo dir in source.GetDirectories())
            CopyFilesRecursively(dir, target.CreateSubdirectory(dir.Name));
        foreach (FileInfo file in source.GetFiles())
            file.CopyTo(Path.Combine(target.FullName, file.Name));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Form2", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
And the function call is
private void button4_Click(object sender, EventArgs e)
{
    try
    {
        CopyFilesRecursively(new DirectoryInfo(@"https:facebook.com"), new DirectoryInfo(@"G:\Projects\"));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Form2", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
The message box shows "The given path format is not supported."
As we are aware, all web sites hosted on the internet use virtual paths (which are more readable and provide more security) for their files and folders. The actual files and folders live on a server behind those virtual paths, so to copy a file or folder from a remote server we need the actual path of the resource.
I provide the following code snippet for downloading a file from a server that I deployed myself (so I know its directory structure, of course):
string filename = "MyPage.xml";
string filesource = Server.MapPath("~/MyFiles/") + filename; // the file "MyPage.xml" lives in the server directory "MyFiles"
System.IO.FileInfo fi = new System.IO.FileInfo(filesource);
string filedest = System.IO.Path.GetTempPath() + "MyFile.xml";
fi.CopyTo(filedest);
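If the file sits on a server you do not control, you can only fetch it over HTTP. A minimal sketch using WebClient.DownloadFile (the URL and the local file name here are placeholder assumptions, not paths from the question):

// Download a single file over HTTP to the local temp folder.
// The URL below is a hypothetical example.
using (var client = new System.Net.WebClient())
{
    string url = "http://example.com/files/MyPage.xml";
    string localPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "MyPage.xml");
    client.DownloadFile(url, localPath);
}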
These are some other SO posts you can look at:
How to download a file from a URL in C#?
Copy image file from web url to local folder?
how to copy contents of a website using a .net desktop application
how to copy all text from a certain webpage and save it to notepad C#
How do you archive an entire website for offline viewing?

How to create and save a temporary file on Microsoft Azure virtual server

I am using a free MS Azure virtual webserver for my site.
On my dev machine I can successfully create a CSV file, save it to a relative temp directory, and then download it to the browser client.
However, when I run it from the Azure site, I get the following error:
System.IO.DirectoryNotFoundException: Could not find a part of the path 'D:\home\site\wwwroot\temp\somefile.csv'.
Does the free version of Azure Websites block us from saving files to disk? If not, where are we allowed to create/save files that we generate on the fly?
Code Example
private FilePathResult SaveVolunteersToCsvFile(List<Volunteer> volunteers)
{
    string virtualPathToDirectory = "~/temp";
    string physicalPathToDirectory = Server.MapPath(virtualPathToDirectory);
    string fileName = "Volunteers.csv";
    string pathToFile = Path.Combine(physicalPathToDirectory, fileName);

    StringBuilder sb = new StringBuilder();

    // Column headers
    sb.AppendLine("First Name,Last Name,Phone,Email,Approved,Has Background Check");

    // CSV rows
    foreach (var volunteer in volunteers)
    {
        sb.AppendLine(string.Format("{0},{1},{2},{3},{4},{5}",
            volunteer.FirstName, volunteer.LastName, volunteer.MobilePhone.FormatPhoneNumber(),
            volunteer.EmailAddress, volunteer.IsApproved, volunteer.HasBackgroundCheckOnFile));
    }

    using (StreamWriter outfile = new StreamWriter(pathToFile))
    {
        outfile.Write(sb.ToString());
    }

    return File(Server.MapPath(virtualPathToDirectory + "/" + fileName), "text/csv", fileName);
}
Make sure that the ~/temp folder gets published to the server, as it's possible your publish process isn't including it.
Azure Websites provide environment variables that you can use to get to things like a temporary storage folder. For example, there is a "TEMP" variable you could access to get a path to the TEMP folder specific to your Website.
Change line 2 in your method to this:
//string physicalPathToDirectory = Server.MapPath(virtualPathToDirectory);
string physicalPathToDirectory = Environment.GetEnvironmentVariable("TEMP");
Then change the last line to this:
//return File(Server.MapPath(virtualPathToDirectory + "/" + fileName), "text/csv", fileName);
return File(pathToFile, "text/csv", fileName);

S3 TransferUtility.Upload() network failure before or during upload locks the file

I am uploading a file into Amazon S3 using the .NET SDK. Calling TransferUtility.Upload() works quite well. In testing I have discovered that this method locks the file being uploaded, so I make a copy, since my application still needs access to the file during the upload.
In testing network-failure scenarios, I have discovered that TransferUtility does not release its lock on a file it tries to upload when the upload fails due to connectivity. It throws an AmazonServiceException, which I handle, and then it still won't release the file, despite both exiting a using block and calling .Dispose() myself.
All my research has yielded nothing about handling network failure, other than mentioning that if .Upload() spawns a 'multi-part upload' it might not always be able to clean up after itself. But I'm experiencing this issue with files of any size, not just large ones.
Here is my code:
private Response PutDocument(String CloudPath, String UploadFilePath)
{
    var oResponse = new Response(true);
    try
    {
        using (IAmazonS3 s3Client = AWSClientFactory.CreateAmazonS3Client())
        {
            using (TransferUtility filexfer = new TransferUtility(s3Client))
            {
                filexfer.Upload(UploadFilePath, BucketName, CloudPath);
                oResponse.Message = "Upload Successful";
            }
        }
    }
    catch (AmazonS3Exception ex)
    {
        oResponse.OK = false;
        oResponse.Message = "Error when connecting to AWS: ";
        if (ex.ErrorCode != null && (ex.ErrorCode.Equals("InvalidAccessKeyId") ||
            ex.ErrorCode.Equals("InvalidSecurity")))
        {
            oResponse.Message += ex.ErrorCode + ": " + "Please check the provided AWS Credentials in the web.config file.";
        }
        else
        {
            oResponse.Message += "Caught Exception: " + ex.Message;
        }
    }
    catch (AmazonServiceException ex)
    {
        oResponse.OK = false;
        oResponse.Message = "Network Error when connecting to AWS: " + ex.Message;
    }

    // delete temp file
    // throws an IO exception when the file is locked due to the network outage
    File.Delete(UploadFilePath);
    return oResponse;
}
So it doesn't seem like it should be expected behavior for TransferUtility to keep a file locked after an upload has failed. Does anyone have experience with this, and/or am I missing something?
Thanks in advance.
Have you considered opening your own read-only FileStream that allows concurrent readers? TransferUtility can take an input Stream rather than a file path. That would let your application keep reading the file (unless it needs exclusive access), and it would also let you close the stream yourself on failure or completion rather than hoping that TransferUtility does so.
using (var tu = new TransferUtility("id", "key", RegionEndpoint.EUWest1))
using (var fs = new FileStream("C:\\MyFile.xyz", FileMode.Open, FileAccess.Read, FileShare.Read))
{
    tu.Upload(fs, "mybucket", "mykey");
}
On failure, try calling TransferUtility.AbortMultipartUploads(). That may clean up the file locks.
Take a look at the documentation here:
http://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/M_Amazon_S3_Transfer_TransferUtility_Upload_3.htm
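As a rough sketch only (assuming your existing BucketName field and the original PutDocument method above), the abort call could go inside the AmazonServiceException handler so any incomplete multipart uploads are cleaned up before the temp file is deleted:

catch (AmazonServiceException ex)
{
    oResponse.OK = false;
    oResponse.Message = "Network Error when connecting to AWS: " + ex.Message;

    // Attempt to abort multipart uploads started before now so the SDK
    // releases its handle on the source file. This is a suggested cleanup
    // step, not something the SDK documents as a fix for the lock.
    try
    {
        using (var s3Client = AWSClientFactory.CreateAmazonS3Client())
        using (var cleanup = new TransferUtility(s3Client))
        {
            cleanup.AbortMultipartUploads(BucketName, DateTime.Now);
        }
    }
    catch (AmazonServiceException)
    {
        // The network may still be down; leave the file for a later cleanup pass.
    }
}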

How to download a file in IIS?

I have written two methods, FileUpLoad() and FileDownLoad(), to upload and download a single file on my local system.
void FileUpLoad()
{
    string hfBrowsePath = fuplGridDocs.PostedFile.FileName; // fuplGridDocs is a FileUpload control
    if (hfBrowsePath != string.Empty)
    {
        string destfile = string.Empty;
        string FilePath = Path.Combine(@"E:\Documents\");
        FileInfo FP = new FileInfo(hfBrowsePath);
        hfFileNameAutoGen.Value = PONumber + FP.Extension;
        destfile = FilePath + hfFileNameAutoGen.Value; // hfFileNameAutoGen is a hidden field
        fuplGridDocs.PostedFile.SaveAs(destfile);
    }
}

void FileDownLoad(LinkButton lnkFileName)
{
    string filename = lnkFileName.Text;
    string FilePath = Path.Combine(@"E:\Documents", filename);
    fuplGridDocs.SaveAs(FilePath);
    FileInfo fileToDownLoad = new FileInfo(FilePath);
    if (fileToDownLoad.Exists)
    {
        Process.Start(fileToDownLoad.FullName);
    }
    else
    {
        lblMessage.Text = "File Not Saved!";
        return;
    }
}
While running the application before hosting it in IIS, I can upload a file to the desired location and can also retrieve a file from the saved location. But after publishing it to localhost, I can only upload a file; I cannot download the saved file. There is no exception either, and the uploaded file is saved in the desired location. I don't know why it is not retrieving the file. Why can't I download the file under IIS? I have searched a lot on the internet but couldn't find a solution. How do I solve this? I am using Windows XP and IIS 5.1.
How do you expect your web application to do a Process.Start when you deploy this site to a server? You're just going to be opening the pictures on the server, not on the client PC.
I think this will answer your question: http://www.codeproject.com/Articles/74654/File-Download-in-ASP-NET-and-Tracking-the-Status-o
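The usual pattern, sketched below on the assumption that the page should push the saved file back to the browser rather than open it on the server, is to stream the file in the response with a Content-Disposition header:

void FileDownLoad(LinkButton lnkFileName)
{
    string filename = lnkFileName.Text;
    string filePath = Path.Combine(@"E:\Documents", filename);

    if (File.Exists(filePath))
    {
        // Send the file to the browser instead of opening it on the server.
        Response.Clear();
        Response.ContentType = "application/octet-stream";
        Response.AppendHeader("Content-Disposition", "attachment; filename=" + filename);
        Response.TransmitFile(filePath);
        Response.End();
    }
    else
    {
        lblMessage.Text = "File Not Saved!";
    }
}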
Also, the download path is missing a slash after E:\Documents.
Another option is to add your wildcard to the IIS MIME types.

ASP.NET file transfer from local machine to another machine

I basically want to transfer a file from the client to the file storage server without actually logging in to the server, so that the client cannot access the storage location on the server directly. I can do this only if I manually log in to the storage server through the Windows login, and I don't want to do that. This is a web-based application.
Using the link below, I wrote code for my application. I am not able to get it right, though. Please refer to the link and help me out with it:
Uploading files to file server using webclient class
The following is my code:-
protected void Button1_Click(object sender, EventArgs e)
{
    filePath = FileUpload1.FileName;
    try
    {
        WebClient client = new WebClient();
        NetworkCredential nc = new NetworkCredential(uName, password);
        Uri addy = new Uri("\\\\192.168.1.3\\upload\\");
        client.Credentials = nc;
        byte[] arrReturn = client.UploadFile(addy, filePath);
        Console.WriteLine(arrReturn.ToString());
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
The following line doesn't execute...
byte[] arrReturn = client.UploadFile(addy, filePath);
This is the error I get:
An exception occurred during a WebClient request.
Ah, it seems (and with good reason) that the FileUpload control can only save files to the web server and its drives, so my first thought won't work.
But: if you have the necessary permissions, couldn't you just save the file that you get in the FileUpload to that UNC path using standard System.IO calls? Something like:
protected void Button1_Click(object sender, EventArgs e)
{
    try
    {
        string completeFileName =
            Path.Combine(@"\\192.168.1.3\upload", FileUpload1.FileName);

        BinaryReader br = new BinaryReader(FileUpload1.PostedFile.InputStream);
        FileStream fstm = new FileStream(completeFileName, FileMode.Create, FileAccess.ReadWrite);
        BinaryWriter bw = new BinaryWriter(fstm);

        byte[] buffer = br.ReadBytes(FileUpload1.PostedFile.ContentLength);
        br.Close();

        bw.Write(buffer);
        bw.Flush();
        bw.Close();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
If you expect very large files to be uploaded, you might want to transfer the data from the reader to the writer in chunks instead of allocating a single buffer for the whole file, but that's just an implementation detail, really. A sketch of the chunked version follows.
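A minimal chunked variant might look like this (the 64 KB buffer size is an arbitrary choice, and the UNC path is the same placeholder as above):

// Copy the uploaded stream to the UNC path in fixed-size chunks so the whole
// file never has to fit in memory at once.
string completeFileName = Path.Combine(@"\\192.168.1.3\upload", FileUpload1.FileName);

using (Stream input = FileUpload1.PostedFile.InputStream)
using (FileStream output = new FileStream(completeFileName, FileMode.Create, FileAccess.Write))
{
    byte[] buffer = new byte[64 * 1024]; // 64 KB chunks (arbitrary size)
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}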
I had the same issue a few days ago. I solved it by creating a user on the web server and on the storage server with the same user name and password, and then impersonating that user in the web.config file.
NB: The user should have read/write permissions on the directory where you want to store the files.
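For reference, impersonation in web.config looks roughly like this. The account name and password below are placeholders, and keeping a plain-text password here is only for the sketch; in practice the credentials are usually encrypted or configured at the application-pool level:

<configuration>
  <system.web>
    <!-- Run requests as the shared account that exists on both machines. -->
    <identity impersonate="true" userName="SHAREDACCOUNT" password="placeholder-password" />
  </system.web>
</configuration>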
