FTP Changing PGP File During Transfer in C# - c#

I have PGP files that I've verified as valid, but at some point during the FTP upload they become corrupt. When retrieved, I get the error "Found no PGP information in these file(s)."
For what it's worth, the PGP version is 6.5.8, but I don't think that matters, since the files seem fine before they're uploaded.
My file-transfer code is below. Is there a setting or field that I've missed?
static void FTPUpload(string file)
{
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://ftp.itginc.com" + "/" + Path.GetFileName(file));
    request.UseBinary = true;
    request.Method = WebRequestMethods.Ftp.UploadFile;
    request.Credentials = new NetworkCredential(ApplicationSettings["Username"], ApplicationSettings["Password"]);

    StreamReader sr = new StreamReader(file);
    byte[] fileContents = Encoding.UTF8.GetBytes(sr.ReadToEnd());
    sr.Close();

    request.ContentLength = fileContents.Length;
    Stream requestStream = request.GetRequestStream();
    requestStream.Write(fileContents, 0, fileContents.Length);
    requestStream.Close();

    FtpWebResponse resp = (FtpWebResponse)request.GetResponse();
    Console.WriteLine("Upload file complete, status {0}", resp.StatusDescription);
    resp.Close();

    string[] filePaths = Directory.GetFiles(tempPath);
    foreach (string filePath in filePaths)
        File.Delete(filePath);
}
Any help is appreciated

Hmmm, try not reading it into a string first; instead do something like this:
using (var reader = File.Open(source, FileMode.Open))
{
    var ftpStream = request.GetRequestStream();
    reader.CopyTo(ftpStream);
    ftpStream.Close();
}

PGP produces a binary stream, so reading it via StreamReader and UTF-8 probably breaks the data. FTP is unlikely to alter the data, since you explicitly request binary mode (though UseBinary is true by default, so setting it should not change anything at all).
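To see why the text round-trip corrupts the file, consider a short, hypothetical byte sequence: any byte that is not a valid UTF-8 sequence is replaced with U+FFFD when decoded, so re-encoding produces different bytes than you started with.

```csharp
using System;
using System.Text;

class Utf8RoundTripDemo
{
    static void Main()
    {
        // Arbitrary binary data, like the bytes of a PGP file (hypothetical values).
        byte[] original = { 0x85, 0x01, 0x0C, 0xFF, 0xFE, 0x00 };

        // Decoding invalid UTF-8 replaces bad bytes with U+FFFD,
        // so re-encoding does not reproduce the original data.
        string asText = Encoding.UTF8.GetString(original);
        byte[] roundTripped = Encoding.UTF8.GetBytes(asText);

        Console.WriteLine(original.Length);      // 6
        Console.WriteLine(roundTripped.Length);  // 12 -- each invalid byte became a 3-byte U+FFFD
    }
}
```

Reading the file with File.ReadAllBytes (or streaming it with CopyTo as suggested) avoids the decode/encode step entirely.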

Related

How to upload large files to FTP server in ASP MVC

I am developing an ASP.NET MVC website. I need to upload files (.zip files) to my FTP server, and I use the code below.
This code only works for files smaller than 10 MB. For example, when I upload a 150 MB file with this code, it gets damaged and its size changes to 300 MB on my FTP server.
Can anyone help me?
byte[] fileBytes = null;

// Read the uploaded file and convert it to a byte array.
string fileName = Path.GetFileName(FileUpload1.FileName);
using (StreamReader fileStream = new StreamReader(FileUpload1.InputStream))
{
    fileBytes = Encoding.UTF8.GetBytes(fileStream.ReadToEnd());
}
try
{
    // Create the FTP request.
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftp + ftpFolder + "/" + fileName);
    request.Method = WebRequestMethods.Ftp.UploadFile;
    // Enter the FTP server credentials.
    request.Credentials = new NetworkCredential(ftpUName, ftpPWord);
    request.ContentLength = fileBytes.Length;
    request.UsePassive = true;
    request.UseBinary = true;
    request.ServicePoint.ConnectionLimit = fileBytes.Length;
    request.EnableSsl = false;

    using (Stream requestStream = request.GetRequestStream())
    {
        requestStream.Write(fileBytes, 0, fileBytes.Length);
    }

    FtpWebResponse response = (FtpWebResponse)request.GetResponse();
    response.Close();
}
catch (WebException ex)
{
    throw new Exception((ex.Response as FtpWebResponse).StatusDescription);
}
Add this to your web.config, under the system.web section:
<httpRuntime maxRequestLength="<whatever value you need, in KB>" relaxedUrlToFileSystemMapping="true" />
maxRequestLength is in KB, and its maximum value is 2,147,483,647. The default is a 4 MB limit.
Details here
It probably gets corrupted because you are reading the data as UTF-8 encoded text. You should read it in binary.
Don't use:
using (StreamReader fileStream = new StreamReader(FileUpload1.InputStream))
{
    fileBytes = Encoding.UTF8.GetBytes(fileStream.ReadToEnd());
}
You have to use File.ReadAllBytes or a BinaryReader(Stream) instead:
https://msdn.microsoft.com/en-us/library/system.io.file.readallbytes(v=vs.110).aspx
https://msdn.microsoft.com/de-de/library/system.io.binaryreader(v=vs.110).aspx
for your example (reading the uploaded stream directly, since Path.GetFileName yields only a file name, not a server-side path that File.ReadAllBytes could open):
byte[] fileBytes;
using (var binaryReader = new BinaryReader(FileUpload1.InputStream))
{
    fileBytes = binaryReader.ReadBytes(FileUpload1.PostedFile.ContentLength);
}
try
{
    // Create the FTP request.
    FtpWebRequest request = (FtpWebRequest)...
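For large files it is better not to buffer the whole upload in memory at all. A minimal sketch (the method shape and 8 KB buffer size are my own choices, not from the original answer) that streams the source to the FTP request in fixed-size chunks:

```csharp
using System;
using System.IO;
using System.Net;

static class FtpStreamingUpload
{
    // Copies the source stream to the FTP request stream in 8 KB chunks,
    // so memory use stays flat no matter how large the file is.
    public static void Upload(Stream source, FtpWebRequest request)
    {
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.UseBinary = true;
        using (Stream ftpStream = request.GetRequestStream())
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                ftpStream.Write(buffer, 0, read);
            }
        }
    }
}
```

Called as Upload(FileUpload1.InputStream, request), this also sidesteps the UTF-8 corruption, since no text decoding ever happens.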

Exporting byte[] results to corrupted .zip

I am making a CLR assembly with .NET 2.0, integrated into MS SQL Server 2008. I call an API and should receive a .zip as the response. I store the response in a Stream and want to export it to a physical .zip file.
I tried exporting the file with C# and SQL (BCB or OLE), and everything resulted in a corrupted file, so I believe I am doing something wrong when building the stream.
The C# code is the following:
private static byte[] GetStreamFileResult(Cookie loginCookie, Guid fileGuid, String baseUri)
{
    byte[] output = null;
    string url = "some url";

    CookieContainer cookies = new CookieContainer();
    cookies.Add(new Uri(url), loginCookie);

    WebRequest request = WebRequest.Create(url);
    (request as HttpWebRequest).CookieContainer = cookies;

    WebResponse response = request.GetResponse();
    Stream dataStream = response.GetResponseStream();
    using (MemoryStream ms = new MemoryStream())
    {
        CopyStream(dataStream, ms);
        output = ms.ToArray();
    }
    dataStream.Close();
    response.Close();
    return output;
}
The C# code to export the zip is the following:
File.WriteAllBytes("C:\\folder\\t.zip", stream); // Requires System.IO
The copy from stream to stream:
public static void CopyStream(Stream input, Stream output)
{
    if (input != null)
    {
        using (StreamReader reader = new StreamReader(input))
        using (StreamWriter writer = new StreamWriter(output))
        {
            writer.Write(reader.ReadToEnd());
        }
    }
}
Your CopyStream is broken: you're currently treating binary zip data as though it were text. You need to copy raw bytes:
byte[] buffer = new byte[2048];
int bytesRead;
while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
{
    output.Write(buffer, 0, bytesRead);
}
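Put together, a corrected CopyStream (the same loop, wrapped in the method signature from the question) would look like this; the MemoryStream round-trip in Main is just a sanity check with made-up bytes (the first four are the standard zip magic number):

```csharp
using System;
using System.IO;

static class StreamUtil
{
    // Binary-safe copy: moves raw bytes, no text decoding involved.
    public static void CopyStream(Stream input, Stream output)
    {
        if (input == null) return;
        byte[] buffer = new byte[2048];
        int bytesRead;
        while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, bytesRead);
        }
    }

    static void Main()
    {
        // Bytes a StreamReader would mangle: zip magic "PK\x03\x04" plus invalid UTF-8.
        byte[] data = { 0x50, 0x4B, 0x03, 0x04, 0xFF, 0xFE };
        var input = new MemoryStream(data);
        var output = new MemoryStream();
        CopyStream(input, output);
        Console.WriteLine(output.ToArray().Length); // 6 -- identical to the input
    }
}
```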

Upload xml file on ftp binary

I have code that sends an XML file to an FTP server, but the file on the server is smaller than the original. I'm trying to enable binary transmission, but the result is still the same.
FileInfo f = new FileInfo(@"C:\Users\L\Desktop\data.xml");
long original_vel = f.Length;

FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://***");
request.UseBinary = true;
request.Method = WebRequestMethods.Ftp.GetFileSize;
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("*****", "*****");

StreamReader sourceStream = new StreamReader(@"C:\Users\L\Desktop\data.xml");
byte[] fileContents = Encoding.Unicode.GetBytes(sourceStream.ReadToEnd());
sourceStream.Close();

request.ContentLength = fileContents.Length;
long ftp_vel = request.ContentLength;

Stream requestStream = request.GetRequestStream();
requestStream.Write(fileContents, 0, fileContents.Length);
requestStream.Close();

FtpWebResponse response = (FtpWebResponse)request.GetResponse();
if (original_vel == ftp_vel)
{
    response.Close();
}
else
{
    Odesilani();
}
The original file is 294,672 bytes, but the file on the FTP server is 294,670.
The XML file on the FTP server is valid... but when I compare the files in Total Commander, the original file starts with FF FE 3C 00 3F 00... while the file on the server starts with 3C 00 3F 00... The content of the file is otherwise OK. :/
Do you have any idea?
Is the XML file at the server valid? From your code, you are reading the file as Unicode. Files encoded as Unicode usually start with a marker called the Byte Order Mark (BOM). That is probably the reason for the 2-byte difference: the BOM was lost during conversion.
UPDATE: the proper byte order mark for any encoding is given by Encoding.GetPreamble().
The fix to the code above would be:
StreamReader sourceStream = new StreamReader(@"C:\Users\L\Desktop\data.xml");

// Get the preamble (BOM) and the file contents.
byte[] bom = Encoding.Unicode.GetPreamble();
byte[] content = Encoding.Unicode.GetBytes(sourceStream.ReadToEnd());

// Create the destination array and copy both in, BOM first.
byte[] fileContents = new byte[bom.Length + content.Length];
Array.Copy(bom, 0, fileContents, 0, bom.Length);
Array.Copy(content, 0, fileContents, bom.Length, content.Length);

request.ContentLength = fileContents.Length;
long ftp_vel = request.ContentLength;

Stream requestStream = request.GetRequestStream();
requestStream.Write(fileContents, 0, fileContents.Length);
requestStream.Close();

FtpWebResponse response = (FtpWebResponse)request.GetResponse();
if (original_vel == ftp_vel)
{
    response.Close();
}
else
{
    Odesilani();
}
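A small stand-alone check of the explanation above: Encoding.Unicode.GetBytes never emits the BOM, which is exactly the FF FE pair the question found missing, while GetPreamble() returns it.

```csharp
using System;
using System.Text;

class BomDemo
{
    static void Main()
    {
        // The UTF-16LE byte order mark that belongs at the start of the file.
        byte[] bom = Encoding.Unicode.GetPreamble();
        // GetBytes encodes only the characters -- no BOM is included.
        byte[] content = Encoding.Unicode.GetBytes("<?xml");

        Console.WriteLine(BitConverter.ToString(bom));           // FF-FE
        Console.WriteLine(BitConverter.ToString(content, 0, 4)); // 3C-00-3F-00, matching the hex dump above
    }
}
```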

How to assemble pdf from HTTP stream?

I'm using a 3rd-party HTML-to-PDF conversion service (DocRaptor). You POST your HTML to their site and they respond with the PDF. The starter code they give you works fine, but it writes the file to your hard drive. I modified their code to send the PDF through the browser as a file download, so I'm 100% confident the data I'm getting from the HTTP response is good. I just can't seem to assemble it back into a usable file.
I'm reasonably confident the issue is how I'm handling the responseStream data; it all seems to go wrong once I enter the try/catch. I'm very new to C# and web programming, so I would very much appreciate some guidance from the SO users here. Thanks. Here is my code:
string postData = String.Format(PostFormat,
    (string.IsNullOrEmpty(DocumentContent) ? "document_url" : "document_content"),
    HttpUtility.UrlEncode(string.IsNullOrEmpty(DocumentContent) ? DocumentURL : DocumentContent),
    HttpUtility.UrlEncode(Name),
    HttpUtility.UrlEncode(type),
    HttpUtility.UrlEncode(Test.ToString().ToLower()),
    HttpUtility.UrlEncode(Strict),
    HttpUtility.UrlEncode(PrinceOptions));

var byteArray = Encoding.UTF8.GetBytes(postData);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(DocRaptorUrl);
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = byteArray.Length;
using (var dataStream = request.GetRequestStream()) { dataStream.Write(byteArray, 0, byteArray.Length); }

System.IO.Stream stream = null;
try
{
    using (HttpWebResponse httpResponse = (HttpWebResponse)request.GetResponse())
    {
        using (System.IO.Stream responseStream = httpResponse.GetResponseStream())
        {
            var filepath = @"C:\Users\David\Downloads\UberwriterUSRReport.pdf";
            HttpContext.Current.Response.ContentType = "application/pdf";
            // Let the browser know how to open the PDF document (attachment or inline) and the file name.
            HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=UberwriterUSRReport.pdf");
            stream = new System.IO.FileStream(filepath, System.IO.FileMode.Create);
            CopyStream(responseStream, stream);

            long bytesToRead = stream.Length;
            while (bytesToRead > 0)
            {
                if (HttpContext.Current.Response.IsClientConnected)
                {
                    byte[] buffer = new Byte[10000];
                    int length = stream.Read(buffer, 0, 10000);
                    HttpContext.Current.Response.OutputStream.Write(buffer, 0, length);
                    HttpContext.Current.Response.Flush();
                    bytesToRead = bytesToRead - length;
                }
                else
                {
                    bytesToRead = -1;
                }
            }
        }
    }
}
Is it your intention to save the file to the hard drive before sending it to the browser? Because that is what you're (incorrectly) doing now.
Best is to enclose the write action in a using statement, because I don't see you close the stream anywhere:
stream = new System.IO.FileStream(filepath, System.IO.FileMode.Create);
Here you're saving to the file:
CopyStream(responseStream, stream);
Next, you're trying to read from the same stream you just used to save the file, in order to write its contents to Response.OutputStream. And you already have a CopyStream implementation, so why do it manually here?
HttpContext.Current.Response.OutputStream.Write(buffer, 0, length);
So, I would say it should be something like:
using (HttpWebResponse httpResponse = (HttpWebResponse)request.GetResponse())
{
    using (System.IO.Stream responseStream = httpResponse.GetResponseStream())
    {
        var filepath = @"C:\Users\David\Downloads\UberwriterUSRReport.pdf";
        HttpContext.Current.Response.ContentType = "application/pdf";
        // Let the browser know how to open the PDF document (attachment or inline) and the file name.
        HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=UberwriterUSRReport.pdf");
        using (var stream = new System.IO.FileStream(filepath, System.IO.FileMode.Create))
        {
            CopyStream(responseStream, stream);
        }
        using (var readstream = new System.IO.FileStream(filepath, System.IO.FileMode.Read))
        {
            CopyStream(readstream, HttpContext.Current.Response.OutputStream);
        }
    }
}
Or, if you don't want to save the file on the server at all:
using (HttpWebResponse httpResponse = (HttpWebResponse)request.GetResponse())
{
    using (System.IO.Stream responseStream = httpResponse.GetResponseStream())
    {
        // Let the browser know how to open the PDF document (attachment or inline) and the file name.
        HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=UberwriterUSRReport.pdf");
        CopyStream(responseStream, HttpContext.Current.Response.OutputStream);
    }
}
MUCHO thanks to Stephen for putting me on the right path. I refined the implementation further; I had more code than was required. All I want is for the user to hit a button, POST the HTML to the DocRaptor.com site, have them respond with the generated PDF, and have that file appear as a download in the browser. Here is the final code, as tested on Azure:
try
{
    using (HttpWebResponse httpResponse = (HttpWebResponse)request.GetResponse())
    {
        using (System.IO.Stream responseStream = httpResponse.GetResponseStream())
        {
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.ContentType = "application/pdf";
            HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=UberwriterUSRReport.pdf");
            HttpContext.Current.Response.BufferOutput = true;
            CopyStream(responseStream, HttpContext.Current.Response.OutputStream);
        }
    }
}

How to save csv to memory and then ftp?

I currently create a CSV file and then FTP it.
This works fine. However, I don't want to save the CSV file to disk; I want to create it in memory and then FTP it.
This is my current code:
private void Csv()
{
    CsvExport eftExport = new CsvExport();
    eftExport.AddRow();
    eftExport["customer_reference"] = "Ref";
    eftExport["landline"] = "01234567890";

    string url = "C:/Content/Cms/DD/";
    string fileName = "file.csv";
    eftExport.ExportToFile(url + fileName);
    this.FtpFile(url, fileName);
}

private void FtpFile(string url, string fileName)
{
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://url.co.uk/" + fileName);
    request.Method = WebRequestMethods.Ftp.UploadFile;
    request.Credentials = new NetworkCredential("Administrator", "pass");

    StreamReader sourceStream = new StreamReader(url + fileName);
    byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
    sourceStream.Close();

    request.ContentLength = fileContents.Length;
    Stream requestStream = request.GetRequestStream();
    requestStream.Write(fileContents, 0, fileContents.Length);
    requestStream.Close();
}
But instead of doing eftExport.ExportToFile(url + fileName), I don't want it to save to the machine at all.
Use this to put it into a byte array:
byte[] buffer = eftExport.ExportToBytes();
Now:
requestStream.Write(buffer, 0, buffer.Length);
Use the ExportToBytes() function of your CsvExport class,
then change your FtpFile() to accept a byte array and remove the StreamReader.
You should end up with quite a bit less code. :)
If your CsvExport type has an ExportToStream or similar, simply use that to create the stream that you then write to the requestStream.
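Putting the suggestions together, here is a sketch of the rewritten pair of methods, assuming CsvExport really does expose ExportToBytes() as the answers describe (I have not verified that API; CsvExport is a third-party class from the question):

```csharp
using System.Net;
using System.IO;

class CsvFtpExample
{
    private void Csv()
    {
        // CsvExport and ExportToBytes() are assumed from the question/answers.
        CsvExport eftExport = new CsvExport();
        eftExport.AddRow();
        eftExport["customer_reference"] = "Ref";
        eftExport["landline"] = "01234567890";

        // No file on disk: export straight to a byte array and upload it.
        this.FtpFile("file.csv", eftExport.ExportToBytes());
    }

    private void FtpFile(string fileName, byte[] fileContents)
    {
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://url.co.uk/" + fileName);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential("Administrator", "pass");
        request.ContentLength = fileContents.Length;

        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(fileContents, 0, fileContents.Length);
        }
    }
}
```

The upload path is unchanged; only the source of the bytes moves from a StreamReader over a temp file to the in-memory export.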
