I am a Jr. Programmer trying to get mp4 videos from an API and save them to a folder on the network. I am using the following code:
public static void saveVideos(HttpContent content, String filename, bool overwrite)
{
    string pathName = Path.GetFullPath(filename);
    using (FileStream fs = new FileStream(pathName, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        if (fs.CanWrite)
        {
            byte[] buffer = Encoding.UTF8.GetBytes(content.ToString());
            fs.Write(buffer, 0, buffer.Length);
            fs.Flush();
            fs.Close();
        }
    }
}
The code will compile without errors but all videos are written to the folder with a size of 1KB. I can't seem to figure out why I am not getting all of the file.
When I inspect the value of the content I see I am getting data that looks like this:
Headers = {Content-Length: 240634544
Content-Disposition: attachment; filename=Orders.dat
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Can anyone point out my error here?
Thanks
This doesn't do what you think it does:
content.ToString()
Unless a class overrides it, the default implementation of .ToString() simply returns the fully qualified name of the type. So all you're saving is a bunch of files containing a type name.
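You can verify this quickly with a tiny demonstration (any HttpContent subclass will do):

var content = new ByteArrayContent(new byte[] { 1, 2, 3 });
// Prints "System.Net.Http.ByteArrayContent" - the type name, not the payload
Console.WriteLine(content.ToString());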
What you're probably looking for is the .ReadAsStringAsync() method instead. Or, since you're converting it to bytes anyway, .ReadAsByteArrayAsync() would also work. Perhaps something like this:
byte[] buffer = await content.ReadAsByteArrayAsync();
Of course, since this uses await, your method would have to be async:
public static async Task saveVideos(HttpContent content, String filename, bool overwrite)
If you can't make use of async and await (if you're on an older .NET version), then there are other ways to handle it as well.
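For example, you can block on the task directly (a sketch; .GetAwaiter().GetResult() surfaces the original exception instead of wrapping it in an AggregateException, but blocking like this can deadlock in UI or classic ASP.NET synchronization contexts):

byte[] buffer = content.ReadAsByteArrayAsync().GetAwaiter().GetResult();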
Edit: Based on the comments and some of the context of the question, you may be dealing with large files here. In that case, using streams would perform a lot better: essentially, you want to move data directly from one stream to another rather than storing it all in an in-memory variable.
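Something along these lines, perhaps (a rough sketch only, with an assumed method name; CopyToAsync moves the data in buffered chunks, so the whole video never sits in memory at once):

public static async Task SaveVideoAsync(HttpContent content, string filename)
{
    string pathName = Path.GetFullPath(filename);
    using (FileStream fs = new FileStream(pathName, FileMode.Create, FileAccess.Write, FileShare.None))
    using (Stream source = await content.ReadAsStreamAsync())
    {
        // Copy straight from the response stream to the file stream
        await source.CopyToAsync(fs);
    }
}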
If anyone else is having this issue or is learning how to save a file using a FileStream, here is the code I used to solve the problem.
public static void saveVideos(HttpContent content, String filename, bool overwrite)
{
    string pathName = Path.GetFullPath(filename);
    using (Stream fs = new FileStream(pathName, FileMode.Create, FileAccess.Write, FileShare.None, 4096, FileOptions.None))
    {
        // .Result blocks until the download completes; fine here since the method isn't async
        byte[] buffer = content.ReadAsByteArrayAsync().Result;
        fs.Write(buffer, 0, buffer.Length);
    }
}
I am observing some strange behaviour when I use SSH.NET to transfer files with SFTP. I am using SFTP to transfer XML files to another service (which I don't control) for processing. If I use SftpClient.WriteAllBytes the service complains the file is not valid XML. If I first write to a temporary file and then use SftpClient.UploadFile the transfer is successful.
What's happening?
Using .WriteAllBytes:
public void Send(string remoteFilePath, byte[] contents)
{
    using (var client = new SftpClient(new ConnectionInfo(/* username password etc.*/)))
    {
        client.Connect();
        client.WriteAllBytes(remoteFilePath, contents);
    }
}
Using .UploadFile:
public void Send(string remoteFilePath, byte[] contents)
{
    var tempFileName = Path.GetTempFileName();
    File.WriteAllBytes(tempFileName, contents);
    using (var fs = new FileStream(tempFileName, FileMode.Open))
    using (var client = new SftpClient(new ConnectionInfo(/* username password etc.*/)))
    {
        client.Connect();
        client.UploadFile(fs, remoteFilePath);
    }
}
Edit:
Will asked in the comments how I turn the XML into a byte array. I didn't think this was relevant, but then again I'm the one asking the question... :P
// somewhere else:
// XDocument xdoc = CreateXDoc();
using (var st = new MemoryStream())
{
    using (var xw = XmlWriter.Create(st, new XmlWriterSettings { Encoding = Encoding.UTF8, Indent = true }))
    {
        xdoc.WriteTo(xw);
    }
    return st.ToArray();
}
I can reproduce your problem using SSH.NET 2016.0.0 from NuGet. But not with 2016.1.0-beta1.
Inspecting the code, I can see that the SftpFileStream (which WriteAllBytes uses) keeps writing the same (starting) piece of the data over and over.
It seems that you are suffering from this bug:
https://github.com/sshnet/SSH.NET/issues/70
While the bug description does not make it clear that it's your problem, the commit that fixes it matches the problem I have found:
Take into account the offset in SftpFileStream.Write(byte[] buffer, int offset, int count) when not writing to the buffer. Fixes issue #70.
To answer your question: The methods should indeed behave similarly.
Except that SftpClient.UploadFile is optimized for uploading large amounts of data, while SftpClient.WriteAllBytes is not, so the underlying implementations are very different.
Also, SftpClient.WriteAllBytes does not truncate an existing file, which matters when you are uploading less data than the existing file contains.
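Until you can move to a fixed release, one workaround (a sketch based on your Send method, untested) is to wrap the byte array in a MemoryStream and pass it to UploadFile, which avoids both the buggy WriteAllBytes path and the temporary file:

public void Send(string remoteFilePath, byte[] contents)
{
    using (var ms = new MemoryStream(contents))
    using (var client = new SftpClient(new ConnectionInfo(/* username password etc.*/)))
    {
        client.Connect();
        // UploadFile accepts any readable Stream, not just a FileStream
        client.UploadFile(ms, remoteFilePath);
    }
}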
The question is rather simple, and somewhat pointless I understand, but still...
The database is on the server, of course, and I need an action that, when initiated, grabs that file from the database and saves it to the folder named in my AppSettings["Active"] configuration property.
Basically,
public ActionResult Activate(int id)
{
    Project project = db.Projects.Find(id);
    var activeProjectData = project.XML; // project.XML returns byte[] type
    // For download (not in this method of course) I'm using something like
    // return File(activeProjectData, "text/xml"); and that works ok
}
Now I want to save that file to the AppSettings["Active"] path. Not too sure how to go about it. I've tried using System.IO.File.Create() but that didn't quite turn out well.
Any help is appreciated.
Simply create a FileStream and use it to write the data:
string fileName = ConfigurationManager.AppSettings["Active"];
using (var fs = new FileStream(fileName, FileMode.Create, FileAccess.Write))
{
    fs.Write(project.XML, 0, project.XML.Length);
}
If you don't need more control than that, there is a simple helper method on the File class:
File.WriteAllBytes(fileName, project.XML);
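One thing to watch: the question describes AppSettings["Active"] as a folder, so you will probably need to combine it with an actual file name. Wired into the action, it might look something like this (the file name and the redirect are just assumptions for illustration):

public ActionResult Activate(int id)
{
    Project project = db.Projects.Find(id);
    string folder = ConfigurationManager.AppSettings["Active"];
    // Hypothetical naming scheme; use whatever fits your projects
    string fileName = Path.Combine(folder, "project" + id + ".xml");
    // System.IO.File to avoid clashing with Controller.File in MVC
    System.IO.File.WriteAllBytes(fileName, project.XML);
    return RedirectToAction("Index");
}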
You'll need to use a FileStream
SqlCommand ("select fileBin from SomeTable", connection);
byte[] buffer = (byte[]) command.ExecuteScalar ();
connection.Close();
FileStream fs = new FileStream(#"C:\filename.pdf", FileMode.Create);
fs.Write(buffer, 0, buffer.Length);
fs.Close();
I'm trying to convert a .db file to binary so I can stream it across a web server. I'm pretty new to C#. I've gotten as far as looking at code snippets online, but I'm not really sure if the code below puts me on the right track. How can I write the data once I've read it? Does BinaryReader automatically open up and read the entire file so I can then just write it out in binary format?
class Program
{
    static void Main(string[] args)
    {
        using (FileStream fs = new FileStream("output.bin", FileMode.Create))
        {
            using (BinaryWriter bw = new BinaryWriter(fs))
            {
                long totalBytes = new System.IO.FileInfo("input.db").Length;
                byte[] buffer = null;
                BinaryReader binReader = new BinaryReader(File.Open("input.db", FileMode.Open));
            }
        }
    }
}
Edit: Code to stream the database:
[WebGet(UriTemplate = "GetDatabase/{databaseName}")]
public Stream GetDatabase(string databaseName)
{
    string fileName = "\\\\computer\\" + databaseName + ".db";
    if (File.Exists(fileName))
    {
        FileStream stream = File.OpenRead(fileName);
        if (WebOperationContext.Current != null)
        {
            WebOperationContext.Current.OutgoingResponse.ContentType = "binary/.bin";
        }
        return stream;
    }
    return null;
}
When I call my server, I get nothing back. When I use this same type of method for a content-type of image/.png, it works fine.
All the code you posted will actually do is copy the file input.db to the file output.bin. You could accomplish the same using File.Copy.
BinaryReader will just read in all of the bytes of the file. It is a suitable start to streaming the bytes to an output stream that expects binary data.
Once you have the bytes corresponding to your file, you can write them to the web server's response like this:
using (BinaryReader binReader = new BinaryReader(File.Open("input.db", FileMode.Open)))
{
    byte[] bytes = binReader.ReadBytes(int.MaxValue); // See note below
    Response.BinaryWrite(bytes);
    Response.Flush();
    Response.Close();
    Response.End();
}
Note: The code binReader.ReadBytes(int.MaxValue) is for demonstrating the concept only. Don't use it in production code as loading a large file can quickly lead to an OutOfMemoryException. Instead, you should read in the file in chunks, writing to the response stream in chunks.
See this answer for guidance on how to do that
https://stackoverflow.com/a/8613300/141172
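In outline, the chunked version looks something like this (a sketch only; the buffer size and the exact response API depend on your setup):

const int BufferSize = 64 * 1024;
byte[] buffer = new byte[BufferSize];
using (FileStream fs = File.OpenRead("input.db"))
{
    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Only BufferSize bytes are held in memory at any one time
        Response.OutputStream.Write(buffer, 0, bytesRead);
    }
    Response.Flush();
}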
I've been working on a function that parses third-party FMS logs. The logs are gzipped, so I use a decompression function that works for every other gzip file we use.
When decompressing these files I only get the first line of the compressed file. There's no exception; it simply stops finding bytes, as if there were an EOF at the end of the first line.
I tried using Ionic.Zlib instead of System.IO.Compression, but the result was the same. The files don't seem to be corrupted in any way; decompressing them with WinRAR works.
If anybody has any idea how to solve this, I'd appreciate your help.
Thanks
You can download a sample file here:
http://www.adjustyourset.tv/fms_6F9E_20120621_0001.log.gz
This is my decompression function:
public static bool DecompressGZip(String fileRoot, String destRoot)
{
    try
    {
        using (FileStream fileStream = new FileStream(fileRoot, FileMode.Open, FileAccess.Read))
        {
            using (FileStream fOutStream = new FileStream(destRoot, FileMode.Create, FileAccess.Write))
            {
                using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress, true))
                {
                    byte[] buffer = new byte[4096];
                    int numRead;
                    while ((numRead = zipStream.Read(buffer, 0, buffer.Length)) != 0)
                    {
                        fOutStream.Write(buffer, 0, numRead);
                    }
                    return true;
                }
            }
        }
    }
    catch (Exception ex)
    {
        LogUtils.SaveToLog(DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff"), "Error decompressing " + fileRoot + " : " + ex.Message, Constants.systemLog, 209715200, 6);
        return false;
    }
}
I've spent the last 45 minutes wrapping my head around this problem, but I just can't explain why it isn't working. Somehow the DeflateStream class isn't decoding your data properly. I wrote up my own GZip parser (I can share the code if anyone wants to check it) which reads all the headers and checks them for validity (to make sure there is no funny stuff there) and then uses DeflateStream to inflate the actual data, but with your file it still only gets me the first line.
If I recompress your logfile with GZipStream (after first decompressing it with WinRAR), then it decompresses just fine again, both with my own parser and with your own sample.
There is some criticism on the net about Microsoft's implementation of Deflate (http://www.virtualdub.org/blog/pivot/entry.php?id=335), so it might be that you have found one of its quirks.
However, a simple solution to your problem is to switch to SharpZipLib (http://www.icsharpcode.net/opensource/sharpziplib/). I tried it out and it can decompress your file just fine.
public static void DecompressGZip(String fileRoot, String destRoot)
{
    using (FileStream fileStream = new FileStream(fileRoot, FileMode.Open, FileAccess.Read))
    using (GZipInputStream zipStream = new GZipInputStream(fileStream))
    using (StreamReader sr = new StreamReader(zipStream))
    {
        string data = sr.ReadToEnd();
        File.WriteAllText(destRoot, data);
    }
}
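One caveat: going through a StreamReader decodes the bytes as text, which is fine for log files but would mangle binary content. A binary-safe variant (a sketch, using the same SharpZipLib GZipInputStream) just copies the stream:

public static void DecompressGZipBinary(String fileRoot, String destRoot)
{
    using (FileStream fileStream = new FileStream(fileRoot, FileMode.Open, FileAccess.Read))
    using (GZipInputStream zipStream = new GZipInputStream(fileStream))
    using (FileStream fOutStream = new FileStream(destRoot, FileMode.Create, FileAccess.Write))
    {
        // CopyTo moves the raw bytes without any text decoding
        zipStream.CopyTo(fOutStream);
    }
}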
I'm writing code that deals with a file that uses hashes. I need to read a chunk, then hash it, then write it, then read another chunk, etc.
In other words, I need to do a lot of reading and writing. I'm sure this is really simple, but I just wanted to run it by the pros...
Is it possible, and acceptable to do something like:
BinaryReader br = new BinaryReader(File.OpenRead(path));
BinaryWriter bw = new BinaryWriter(File.OpenWrite(path));
br.dostuff();
bw.dostuff();
I remember running into some sort of conflicting file streams error when experimenting with opening and writing to files, and I'm not sure what I had done to get it. Is it two file streams that's the issue? Can I have one stream to read from and write to?
This is perfectly possible and even desirable. One technicality: as long as your write method doesn't change the length of the file and the writer always stays behind the reader, this should not cause any problems. In fact, from an API point of view this is desirable, since it allows the caller to control where to read from and where to write to. (That said, the usual recommendation is to write to a different file, so that if anything goes wrong during the encryption process your input file won't be messed up.)
Something like:
protected void Encrypt(Stream input, Stream output)
{
    byte[] buffer = new byte[2048];
    while (true)
    {
        // read
        int current = input.Read(buffer, 0, buffer.Length);
        if (current == 0)
            break;

        // encrypt
        PerformActualEncryption(buffer, 0, current);

        // write
        output.Write(buffer, 0, current);
    }
}

public void Main()
{
    using (Stream inputStream = File.Open("file.dat", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (Stream outputStream = File.Open("file.dat", FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
    {
        Encrypt(inputStream, outputStream);
    }
}
Now, since you're using encryption, I would even recommend performing the actual encryption in a separate, specialized stream. This cleans the code up nicely.
class MySpecialHashingStream : Stream
{
    ...
}

protected void Encrypt(Stream input, Stream output)
{
    Stream encryptedOutput = new MySpecialHashingStream(output);
    input.CopyTo(encryptedOutput);
}
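For illustration, a minimal write-only hashing stream might look like this (a rough sketch, not production code; it feeds everything written through a SHA256 hash and forwards the bytes unchanged to the inner stream):

class HashingStream : Stream
{
    private readonly Stream inner;
    private readonly HashAlgorithm hasher = SHA256.Create();

    public HashingStream(Stream inner) { this.inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Update the running hash, then pass the data along unchanged
        hasher.TransformBlock(buffer, offset, count, null, 0);
        inner.Write(buffer, offset, count);
    }

    public byte[] FinishHash()
    {
        hasher.TransformFinalBlock(new byte[0], 0, 0);
        return hasher.Hash;
    }

    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { return inner.Length; } }
    public override long Position
    {
        get { return inner.Position; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { inner.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}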