So I'm trying to upload a file to my FTP server. Everything seems to work as expected, but when I open the file from the FTP server I receive an I/O error. The local file works just fine, so somehow the file gets corrupted during the upload. I found a similar problem here.
There I read that you have to change the transfer mode to binary. I tried setting ftpRequest.UseBinary = true;, but I still get the I/O error. Do I have to change the transfer mode somewhere else?
This is my FTP upload code:
public string upload(string remoteFile, string localFile)
{
    ftpRequest = (FtpWebRequest)FtpWebRequest.Create(host + "/" + remoteFile);
    ftpRequest.UseBinary = true;
    ftpRequest.Credentials = new NetworkCredential(user, pass);
    ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
    // Copy the contents of the file to the request stream.
    StreamReader sourceStream = new StreamReader(localFile);
    byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
    sourceStream.Close();
    ftpRequest.ContentLength = fileContents.Length;
    Stream requestStream = ftpRequest.GetRequestStream();
    requestStream.Write(fileContents, 0, fileContents.Length);
    requestStream.Close();
    FtpWebResponse response = (FtpWebResponse)ftpRequest.GetResponse();
    // Read the status before closing the response.
    string status = string.Format("Upload File Complete, status {0}", response.StatusDescription);
    response.Close();
    return status;
}
Using WebClient I get the error:
The remote server returned an error: (553) File name not allowed.
Here is my code:
private void uploadToPDF(int fileName, string localFilePath, string ftpPath, string baseAddress)
{
    WebClient webclient = new WebClient();
    webclient.BaseAddress = baseAddress;
    webclient.Credentials = new NetworkCredential(username, password);
    webclient.UploadFile(ftpPath + fileName + ".pdf", localFilePath);
}
Your upload method most likely breaks the PDF contents because it treats them as text:
You use a StreamReader to read the PDF file. According to the MSDN StreamReader documentation, that class
"implements a TextReader that reads characters from a byte stream in a particular encoding."
This implies that while reading the file's bytes, the class interprets them according to that particular encoding (UTF-8 in your case, because that's the default). But not all byte combinations make sense as UTF-8 character sequences, so this reading step is already destructive.
You partially make up for this interpretation by re-encoding the characters according to UTF-8 later:
byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
but as said before, the initial interpretation, i.e. decoding the data as a UTF-8 encoded file, has already destroyed the original contents unless you were lucky and every byte sequence happened to be valid UTF-8 text.
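A tiny demonstration of this destruction (not from the original answer; the byte values are illustrative):
byte[] original = { 0x25, 0x50, 0x44, 0x46, 0xFF };    // "%PDF" plus a byte that is invalid in UTF-8
string decoded = Encoding.UTF8.GetString(original);    // the 0xFF is replaced by U+FFFD during decoding
byte[] roundTripped = Encoding.UTF8.GetBytes(decoded); // 7 bytes now: 25 50 44 46 EF BF BD
The invalid byte cannot be recovered by re-encoding; it has been silently replaced.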
For binary data (like ZIP archives, Word documents or PDF files) you should use the FileStream class, cf. its MSDN information.
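Applying that advice, a minimal sketch of the upload method with the text round-trip removed (same field names as the question; an illustration, not a tested drop-in):
public string upload(string remoteFile, string localFile)
{
    var ftpRequest = (FtpWebRequest)WebRequest.Create(host + "/" + remoteFile);
    ftpRequest.UseBinary = true;
    ftpRequest.Credentials = new NetworkCredential(user, pass);
    ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
    // Copy the raw bytes; no text decoding or encoding takes place.
    using (var fileStream = new FileStream(localFile, FileMode.Open, FileAccess.Read))
    using (var requestStream = ftpRequest.GetRequestStream())
    {
        fileStream.CopyTo(requestStream);
    }
    using (var response = (FtpWebResponse)ftpRequest.GetResponse())
    {
        return string.Format("Upload File Complete, status {0}", response.StatusDescription);
    }
}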
Related
I am trying to make an HttpListener server in C# that sends files to the client (who is using a browser). This is my code:
static void SendFile(HttpListenerResponse response, string FileName, string ContentType) {
    response.ContentType = ContentType;
    var output = response.OutputStream; // (this declaration was missing from the original snippet)
    // Read contents of file
    var reader = new StreamReader(FileName);
    var contents = reader.ReadToEnd();
    reader.Close();
    // Write to output stream
    var writer = new StreamWriter(output);
    writer.Write(contents);
    // Wrap up.
    writer.Close();
    output.Close();
    response.Close();
}
Unfortunately, this code cannot send binary files, such as images, PDFs, and lots of other file types. How can I make this SendFile function binary-safe?
Thank you for all the comments and the gist link! The solution I looked up, where you read the file as a byte[] and write those bytes to the output stream, worked, but it was kind of confusing, so I made a really short SendFile function.
static void SendFile(HttpListenerResponse response, string FileName, string ContentType) {
    response.AddHeader("Content-Type", ContentType);
    var output = response.OutputStream;
    // Open the file
    var file = new FileStream(FileName, FileMode.Open, FileAccess.Read);
    // Write to output stream
    file.CopyTo(output);
    // Wrap up.
    file.Close();
    output.Close();
    response.Close();
}
This code just copies the file to the output stream.
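For completeness, a hypothetical way to wire this up (the prefix and file name are placeholders, not from the original post):
var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/"); // placeholder prefix
listener.Start();
while (true)
{
    // Serve the same PDF for every request, binary-safe.
    var context = listener.GetContext();
    SendFile(context.Response, "document.pdf", "application/pdf");
}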
We have files in SVN which we extract in byte form through a web service.
I am trying to move us over to getting these files from an S3 bucket.
When we extract a file from SVN it comes out as bytes, which we later decompress with LZ4 and decode as text.
Like this:
Encoding.UTF8.GetString(LZHelper.ReadFromLZ4Stream(bytes returned by svn))
However, the file in our bucket is uploaded without compression or anything. I extracted its bytes and tried to encode them so they would match the SVN version. I assumed the encoding would be UTF-8, as that is what the decoding above uses, but it does not seem to match.
In this function, not one of the compressedBytes arrays matches the SVN reply.
public byte[] GetFile(string pstrPath)
{
    AmazonS3Client s3Client = new AmazonS3Client("blah", "blah blah", Amazon.RegionEndpoint.whoknows);
    GetObjectRequest gRequest = new GetObjectRequest();
    gRequest.BucketName = "test";
    gRequest.Key = "the file path and name";
    GetObjectResponse gresponse = s3Client.GetObjectAsync(gRequest).Result;
    byte[] compressedBytes;
    using (MemoryStream memoryStream = new MemoryStream())
    {
        gresponse.ResponseStream.CopyTo(memoryStream);
        compressedBytes = memoryStream.ToArray();
    }
    // Decode the raw bytes to text so they can be re-encoded below
    // (the assignment was missing from the original snippet).
    string contents = Encoding.UTF8.GetString(compressedBytes);
    byte[] compressedBytes1 = LZ4Codec.Wrap(Encoding.Default.GetBytes(contents));
    byte[] compressedBytes2 = LZ4Codec.Wrap(Encoding.UTF8.GetBytes(contents));
    byte[] compressedBytes3 = LZ4Codec.Wrap(Encoding.UTF7.GetBytes(contents));
    byte[] compressedBytes4 = LZ4Codec.Wrap(Encoding.UTF32.GetBytes(contents));
    byte[] compressedBytes5 = LZ4Codec.Wrap(Encoding.BigEndianUnicode.GetBytes(contents));
    byte[] compressedBytes6 = LZ4Codec.Wrap(Encoding.ASCII.GetBytes(contents));
    byte[] svnres = Channel.GetFile(pstrPath);
    return Channel.GetFile(pstrPath);
}
Is there a way I can find out what encoding these files use?
Ideally I would change the file upload to use that encoding, but I cannot do that unless I work out what encoding SVN is using.
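This thread has no accepted answer here, but the pattern running through this page suggests one way to sidestep the encoding guesswork entirely: compare raw bytes and never round-trip through a string. A sketch, assuming LZHelper.ReadFromLZ4Stream returns the decompressed byte[] as in the snippet above:
// Compare the raw S3 bytes against the decompressed SVN bytes directly;
// no encoding assumption is needed when no string conversion takes place.
byte[] s3Bytes = compressedBytes;                                        // raw bytes from the bucket
byte[] svnBytes = LZHelper.ReadFromLZ4Stream(Channel.GetFile(pstrPath)); // decompressed SVN reply
bool identical = s3Bytes.SequenceEqual(svnBytes);                        // requires System.Linq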
I am trying to upload large files (1 GB+) to Google Drive using the Google Drive API. My code works fine with smaller files, but with larger files an error occurs.
The error occurs in the part of the code where the file is converted into a byte[]:
byte[] data = System.IO.File.ReadAllBytes(filepath);
An OutOfMemoryException is thrown here.
You probably followed the developers.google.com suggestions and are doing this:
byte[] byteArray = System.IO.File.ReadAllBytes(filename);
MemoryStream stream = new MemoryStream(byteArray);
try {
    FilesResource.InsertMediaUpload request = service.Files.Insert(body, stream, mimeType);
    request.Upload();
I have no idea why they suggest putting the whole file into a byte array and then creating a MemoryStream on it.
I think that a better way is this:
using (var stream = new System.IO.FileStream(filename,
                                             System.IO.FileMode.Open,
                                             System.IO.FileAccess.Read))
{
    try
    {
        FilesResource.InsertMediaUpload request = service.Files.Insert(body, stream, mimeType);
        request.Upload();
        ...
    }
}
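The stream version avoids the OutOfMemoryException because only a small buffer is ever held in memory; conceptually, the upload machinery does something like this (file names and buffer size are placeholders):
using (var source = new FileStream("bigfile.bin", FileMode.Open, FileAccess.Read))
using (var destination = new FileStream("copy.bin", FileMode.Create, FileAccess.Write))
{
    var buffer = new byte[81920]; // one small buffer, regardless of file size
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        destination.Write(buffer, 0, bytesRead);
}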
I am trying to use a GZipStream to compress a document prior to uploading it to an FTP server. If I save the compressed file stream to disk just prior to uploading, the copy on the local file system is correct. However, when I try to unzip the file on the FTP server I get a 'File is broken' error from 7-Zip. The resulting unzipped file is correct until the last few characters, where a sequence of characters is repeated. I have tried many different configurations to no avail.
public static void FTPPut_Compressed(string fileContents, string ftpPutPath)
{
    using (var inStream = new System.IO.MemoryStream(System.Text.Encoding.Default.GetBytes(fileContents)))
    {
        inStream.Seek(0, SeekOrigin.Begin);
        using (var outStream = new System.IO.MemoryStream())
        {
            using (var zipStream = new GZipStream(outStream, CompressionMode.Compress))
            {
                inStream.CopyTo(zipStream);
                outStream.Seek(0, SeekOrigin.Begin);
                FTPPut(ftpPutPath, outStream.ToArray());
            }
        }
    }
}
private static void FTPPut(string ftpPutPath, byte[] fileContents)
{
    FtpWebRequest request;
    request = WebRequest.Create(new Uri(string.Format(@"ftp://{0}/{1}", Constants.FTPServerAddress, ftpPutPath))) as FtpWebRequest;
    request.Method = WebRequestMethods.Ftp.UploadFile;
    request.UseBinary = true;
    request.UsePassive = true;
    request.KeepAlive = true;
    request.Credentials = new NetworkCredential(Constants.FTPUserName, Constants.FTPPassword);
    request.ContentLength = fileContents.Length;
    using (var requestStream = request.GetRequestStream())
    {
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Flush();
        requestStream.Close();
    }
}
Example of the corrupted output:
<?xml version="1.0" encoding="utf-16"?>
<ArrayOfCreateRMACriteria xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<CreateRMACriteria>
<RepairOrderId xsi:nil="true" />
<RMANumber>11-11111</RMANumber>
<CustomerId>1111</CustomerId>
</CreateRMACriteria>
</ArrayOfCreateRMACriteriafriafriafriafriafriafriafriafriafriafriafriafriafriafriafriafriafria
<!-- missing '></xml>' -->
You're not closing (and therefore flushing) the zip stream until after you've uploaded it. I suspect that may well be the problem. Move this line to after the using statement that creates/uses/closes the GZipStream:
FTPPut(ftpPutPath, outStream.ToArray());
... and get rid of the Seek call entirely. ToArray doesn't require it, and there's no suitable point in your code to call it. (If you call it before you flush and close the GZipStream, it will corrupt the data; if you call it afterwards it'll fail because the MemoryStream is closed.) As an aside, when you do need to rewind a stream, I'd recommend using stream.Position = 0; as a simpler alternative.
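Putting that together, a corrected sketch of the compression method (same names as the question):
public static void FTPPut_Compressed(string fileContents, string ftpPutPath)
{
    using (var outStream = new System.IO.MemoryStream())
    {
        using (var inStream = new System.IO.MemoryStream(System.Text.Encoding.Default.GetBytes(fileContents)))
        using (var zipStream = new GZipStream(outStream, CompressionMode.Compress))
        {
            inStream.CopyTo(zipStream);
        }   // zipStream is flushed and closed here, completing the gzip footer
        // ToArray still works after the MemoryStream has been closed.
        FTPPut(ftpPutPath, outStream.ToArray());
    }
}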
I'm trying to convert a .db file to binary so I can stream it across a web server. I'm pretty new to C#. I've gotten as far as looking at code snippets online, but I'm not really sure the code below puts me on the right track. How can I write the data once I read it? Does BinaryReader automatically open and read the entire file, so that I can then just write it out in binary format?
class Program
{
    static void Main(string[] args)
    {
        using (FileStream fs = new FileStream("output.bin", FileMode.Create))
        using (BinaryWriter bw = new BinaryWriter(fs))
        using (BinaryReader binReader = new BinaryReader(File.Open("input.db", FileMode.Open)))
        {
            long totalBytes = new System.IO.FileInfo("input.db").Length;
            // Read the whole file and write it back out (the original snippet stopped here).
            byte[] buffer = binReader.ReadBytes((int)totalBytes);
            bw.Write(buffer);
        }
    }
}
Edit: Code to stream the database:
[WebGet(UriTemplate = "GetDatabase/{databaseName}")]
public Stream GetDatabase(string databaseName)
{
    string fileName = "\\\\computer\\" + databaseName + ".db";
    if (File.Exists(fileName))
    {
        FileStream stream = File.OpenRead(fileName);
        if (WebOperationContext.Current != null)
        {
            WebOperationContext.Current.OutgoingResponse.ContentType = "binary/.bin";
        }
        return stream;
    }
    return null;
}
When I call my server, I get nothing back. When I use this same type of method for a content-type of image/.png, it works fine.
All the code you posted actually does is copy the file input.db to the file output.bin. You could accomplish the same thing using File.Copy.
BinaryReader will just read in all of the bytes of the file. It is a suitable start to streaming the bytes to an output stream that expects binary data.
Once you have the bytes corresponding to your file, you can write them to the web server's response like this:
using (BinaryReader binReader = new BinaryReader(File.Open("input.db", FileMode.Open)))
{
    byte[] bytes = binReader.ReadBytes(int.MaxValue); // See note below
    Response.BinaryWrite(bytes);
    Response.Flush();
    Response.Close();
    Response.End();
}
Note: The code binReader.ReadBytes(int.MaxValue) is for demonstrating the concept only. Don't use it in production code, since loading a large file this way can quickly lead to an OutOfMemoryException. Instead, you should read in the file in chunks, writing to the response stream in chunks, as sketched below.
See this answer for guidance on how to do that:
https://stackoverflow.com/a/8613300/141172
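For reference, a minimal chunked sketch along those lines (assuming the same ASP.NET Response object as above; the buffer size is arbitrary):
using (var fileStream = File.OpenRead("input.db"))
{
    var buffer = new byte[65536];
    int bytesRead;
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead); // write each chunk as it is read
    }
    Response.Flush();
}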