In my web application I am working with files. Some files are very large. I use Response.Write() to write the file to the browser. This works well for smaller files, but for large files it can take a while and the bandwidth is fully saturated.
Is it possible to split large documents and send them piece by piece to the browser? Are there other ways to send the document to the browser more quickly?
I hold the document as a property of an object.
Why don't you compress the file, store it in the DB, and decompress it while extracting? (See the sketch after the questions below.)
You can do a lot of things depending on the answers to these questions:
How often does the file change?
Do I really need the files in the DB?
Why not store the file path in the DB and the file on disk?
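If you do go the compression route, a minimal sketch using GZipStream from System.IO.Compression might look like this (the helper names are just for illustration):

    using System.IO;
    using System.IO.Compression;

    static class CompressionHelper
    {
        // Compress raw file bytes before storing them in the DB.
        public static byte[] Compress(byte[] data)
        {
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(data, 0, data.Length);
                } // the GZipStream must be closed before reading the buffer
                return output.ToArray();
            }
        }

        // Decompress when extracting the file again.
        public static byte[] Decompress(byte[] compressed)
        {
            using (var input = new MemoryStream(compressed))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                var buffer = new byte[4096];
                int read;
                while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                    output.Write(buffer, 0, read);
                return output.ToArray();
            }
        }
    }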
Anyhow, since your files use a lot of bandwidth and you want your app to remain responsive, you might want to use AJAX to load the files asynchronously. You can use a web handler (.ashx) for this.
Here are a few examples:
http://www.dotnetcurry.com/ShowArticle.aspx?ID=193&AspxAutoDetectCookieSupport=1
http://www.viawindowslive.com/Articles/VirtualEarth/InvokingserversidecodeusingAJAX.aspx
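As a rough sketch, a generic handler (.ashx) that streams a file could look something like this; the handler name, file location, and query parameter are all hypothetical, and any user-supplied name must be validated in real code:

    using System.IO;
    using System.Web;

    public class FileDownloadHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical lookup; sanitize "name" to prevent path traversal.
            string path = context.Server.MapPath("~/Files/" + context.Request["name"]);

            context.Response.ContentType = "application/octet-stream";
            context.Response.AddHeader("Content-Disposition",
                "attachment; filename=" + Path.GetFileName(path));
            // TransmitFile streams from disk without buffering the whole file in memory.
            context.Response.TransmitFile(path);
        }

        public bool IsReusable { get { return false; } }
    }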
My question is, is it possible to split large documents and send them piece by piece to the browser?
It depends on the file type, but in general no. If you are sending something like an Excel file or a Word document, the receiving application will need all of the information (bytes) to fully form the document. You could physically separate the document into multiple ones, and that would allow you to do so.
If the bandwidth is fully used, then there is nothing you can do to "speed it up" short of compressing the document prior to sending. In other words, zip it up.
Depending on the document (I know you said .mht, but we're talking content here) you will see the size go down by some amount. Maybe it's enough, maybe not.
Either way, this is entirely a function of the amount of content you want to send versus the size of the pipe available to send it. One of those is more difficult to change than the other.
Try enabling IIS's dynamic compression. By default it's set fairly low, but you can try a higher compression level and see how much that helps.
I'm not up to speed with ASP.NET but you might be able to buffer from a FileStream to some sort of output stream.
You can use the Flush method to send the currently buffered data to the client (the browser).
Note that this has some implications, as is described aptly here.
I've considered using it myself: a project of mine sent documents that became fairly large, and I was cautious about holding all of the data in memory. In the end I decided the data was not large enough to be a problem, though.
Sadly the MSDN documentation is very, very vague on what Flush implies and you will probably have to use Google to troubleshoot.
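For illustration, here is a minimal sketch of the buffer-and-flush approach; the chunk size is arbitrary:

    using System.IO;
    using System.Web;

    static class FileStreamer
    {
        // Stream a large file in chunks, flushing as we go so the client
        // starts receiving data before the whole file has been read.
        public static void StreamFile(HttpResponse response, string path)
        {
            response.BufferOutput = false; // don't hold the entire response in memory
            response.ContentType = "application/octet-stream";

            var buffer = new byte[64 * 1024];
            using (var fs = File.OpenRead(path))
            {
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (!response.IsClientConnected) break; // stop if the browser went away
                    response.OutputStream.Write(buffer, 0, read);
                    response.Flush(); // push the buffered bytes to the client now
                }
            }
        }
    }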
For example, I recorded a video using my camera and saved it as my_vacation.mp4, which is 50 MB in size. I opened the video file and an encrypted file called secret_message.dat in Visual Studio, read both with File.ReadAllBytes() in C#, concatenated the two byte arrays, and then saved the result as my_vacation_2.mp4.
The program I created for testing purposes saves the byte index where the hidden file begins, and I want to use that index as the key to extract the hidden file later.
Now I can play the video file normally, without any error. The total file size is 65 MB. Assuming no one could access the original file, of course no one would know that the last 15 MB of that video file are actually another file, right?
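For concreteness, the embedding step described above might look like this (file names taken from the question; a sketch only):

    using System;
    using System.IO;

    class EmbedExample
    {
        static void Main()
        {
            byte[] video  = File.ReadAllBytes("my_vacation.mp4");
            byte[] secret = File.ReadAllBytes("secret_message.dat");

            // Append the encrypted payload after the legitimate video bytes.
            byte[] combined = new byte[video.Length + secret.Length];
            video.CopyTo(combined, 0);
            secret.CopyTo(combined, video.Length);
            File.WriteAllBytes("my_vacation_2.mp4", combined);

            // The "key" is simply the offset where the hidden file starts.
            Console.WriteLine("Key (offset): " + video.Length);
        }
    }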
What might be the flaw of this technique? Is this also a valid steganography technique?
Is this a valid steganography technique?
Yes, it is. The definition of steganography is hiding information in another medium without someone suspecting its presence or existence. Just because it may be a bad approach doesn't change its intentions at all. If anything, a multitude of papers on steganography mention this technique in their introduction section as an example of how steganography can be applied.
What might be the flaw of this technique?
There are two main flaws: it is trivial to detect, and it is extremely fragile to modification attacks.
Many formats encode their data either with a header that says in advance how many bytes to read, or with an end-of-file marker, meaning data is read until the marker is encountered. By appending your data after that point, you ensure it won't be read by the appropriate format decoder. This might fool your 11-year-old cousin who knows nothing about that sort of stuff, but anyone mildly experienced can load the file and count how many bytes were actually read. If there are unaccounted-for bytes in the physical file, that will instantly raise red flags.
Even worse, it's trivial to fully extract your secret. You may argue it's encrypted, but remember, the aim of steganography is to not raise any suspicion. Most steganalysis approaches put a statistical number on it, e.g., a 60% chance that a message is hidden in medium X. A few others can go a bit further and estimate the approximate length of the embedded secret. By comparison, here you're already caught red-handed.
Speaking of length, a file with bitrate/compression X and duration Y should come out to a size of roughly Z. Even an unsavvy observer will know something is up when the size is 30% larger than expected.
Now, imagine your file is sent through an insecure channel where a warden inspects its contents; if he suspects foul play, he can modify the file so that the recipient doesn't get the message. In this case, that is as simple as loading the file and resaving it. In fact, your method is so fragile it can be destroyed even by the most unintentional of attacks: merely uploading your track to a site for playback may cause it to be re-encoded for higher compression, simply because that makes sense for the site.
Assuming no one could access the original file, of course no one would know that the last 15 MB of that video file are actually another file, right?
No. Your secret file is encrypted, so that probably rules out any headers showing up in a hex editor, but there is a problem: the MP4 container format and its structure are well known.
You can extract all the video/audio tracks, and what you are left with is some metadata and your secret message, so it will be obvious that it is not supposed to be there.
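To illustrate how easy detection is, a sketch that walks the top-level MP4 boxes and reports any bytes the container does not account for (simplified; a real tool would know more box types and handle malformed files more carefully):

    using System;
    using System.IO;
    using System.Text;

    class Mp4TrailerCheck
    {
        // Common top-level box types; anything else ends the walk.
        static readonly string[] KnownBoxes =
            { "ftyp", "moov", "mdat", "free", "skip", "wide", "meta", "moof", "mfra", "uuid" };

        static void Main(string[] args)
        {
            using (var fs = File.OpenRead(args[0]))
            {
                long pos = 0;
                var header = new byte[16];
                while (pos + 8 <= fs.Length)
                {
                    fs.Position = pos;
                    fs.Read(header, 0, 8);
                    // Box layout: 32-bit big-endian size, then a 4-character type.
                    long size = ((long)header[0] << 24) | ((long)header[1] << 16)
                              | ((long)header[2] << 8) | header[3];
                    string type = Encoding.ASCII.GetString(header, 4, 4);
                    if (Array.IndexOf(KnownBoxes, type) < 0) break; // not a real box

                    if (size == 0) { pos = fs.Length; break; } // box runs to end of file
                    if (size == 1)                             // 64-bit size follows
                    {
                        fs.Read(header, 8, 8);
                        size = 0;
                        for (int i = 8; i < 16; i++) size = (size << 8) | header[i];
                    }
                    if (size < 8 || pos + size > fs.Length) break; // malformed or truncated
                    pos += size;
                }
                long trailing = fs.Length - pos;
                Console.WriteLine(trailing > 0
                    ? trailing + " unaccounted byte(s) after the last valid box."
                    : "No trailing data.");
            }
        }
    }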
It is a valid technique, just not a very effective one.
I am trying to write a TCP reconstruction program in C#, using SharpPcap. So far I am doing a pretty good job and the reconstruction is working fine. My only problem is that in order to reconstruct big pcap files myself, I need to load them into memory in parts/chunks, because SharpPcap only lets me load the whole file (I think). Any suggestions?
Thanks
The pcap file format is really simple; see here: http://wiki.wireshark.org/FileFormatReference/libpcap
Why not load the file yourself, possibly a packet at a time, and then you can do what you want as you go along rather than having a library dictate your memory usage patterns?
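For instance, a minimal sketch that reads a pcap file one packet at a time; this assumes the common little-endian capture format (a magic of 0xd4c3b2a1 would mean every field needs byte-swapping):

    using System;
    using System.IO;

    class PcapChunkReader
    {
        static void Main(string[] args)
        {
            using (var reader = new BinaryReader(File.OpenRead(args[0])))
            {
                // 24-byte global header: magic, version, thiszone, sigfigs, snaplen, network.
                uint magic = reader.ReadUInt32();
                if (magic != 0xa1b2c3d4)
                    throw new InvalidDataException("Byte-swapped or unknown pcap format.");
                reader.BaseStream.Seek(20, SeekOrigin.Current);

                int count = 0;
                // Each record: 16-byte header (ts_sec, ts_usec, incl_len, orig_len) + data.
                while (reader.BaseStream.Position + 16 <= reader.BaseStream.Length)
                {
                    reader.ReadUInt32();                 // ts_sec
                    reader.ReadUInt32();                 // ts_usec
                    uint inclLen = reader.ReadUInt32();  // bytes stored in the file
                    reader.ReadUInt32();                 // orig_len (bytes on the wire)

                    byte[] packet = reader.ReadBytes((int)inclLen);
                    // Feed 'packet' to the TCP reconstruction here; only one
                    // packet is held in memory at a time.
                    count++;
                }
                Console.WriteLine(count + " packets read.");
            }
        }
    }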
Given that SharpPcap's PcapDevice has a GetNextPacket method, which both LibPcapLiveDevice and CaptureFileReaderDevice inherit, I don't see anything that would require you to load the whole file; you might have to read the entire file, but you can just ignore the packets you don't want.
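A minimal sketch, assuming a SharpPcap version in which GetNextPacket() returns a RawCapture (the exact API differs between releases, so check yours):

    using SharpPcap;
    using SharpPcap.LibPcap;

    class CaptureFileExample
    {
        static void Main(string[] args)
        {
            var device = new CaptureFileReaderDevice(args[0]);
            device.Open();

            RawCapture packet;
            while ((packet = device.GetNextPacket()) != null)
            {
                // Process one packet at a time and simply skip
                // the ones you don't care about.
            }

            device.Close();
        }
    }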
Given the path of a file as a string, I want to wipe out the file's contents. The natural way, I thought (which may be incorrect), is to open a FileStream to the file and write gibberish (random data, perhaps taken from an RNGCryptoServiceProvider) to it, then perhaps repeat this several times before deleting the file.
My problem is that while this may look logically correct, I read on another blog that Windows might actually choose to write the new data to a different place on the disk.
Is that the case on Windows Mobile? Will this actually be a problem? Does this writing to a different location apply even to flash-based cards (SD, etc.)?
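For reference, the overwrite idea from the question might be sketched like this; as the answers below explain, on flash media the writes may never touch the original physical blocks:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class FileWiper
    {
        // Overwrite the file's contents in place several times, then delete it.
        public static void WipeFile(string path, int passes)
        {
            var rng = new RNGCryptoServiceProvider();
            var buffer = new byte[4096];

            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Write))
            {
                for (int pass = 0; pass < passes; pass++)
                {
                    fs.Position = 0;
                    long remaining = fs.Length;
                    while (remaining > 0)
                    {
                        rng.GetBytes(buffer);
                        int count = (int)Math.Min(buffer.Length, remaining);
                        fs.Write(buffer, 0, count);
                        remaining -= count;
                    }
                    fs.Flush(); // hand the pass to the OS before starting the next one
                }
            }
            File.Delete(path);
        }
    }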
I've not personally done this, but you will probably need to use the low-level FLASH driver IOCTLs to do this correctly.
http://msdn.microsoft.com/en-us/library/aa927166.aspx
I think IOCTL_FMD_RAW_WRITE_BLOCKS looks particularly useful.
Another possibility that may work would be to erase the file normally, then use the defragmentation APIs to wipe ALL of the free space on your flash storage. Since you're wiping everything, you don't need to know exactly where on the disk your file was. But this will wear out your flash drive more quickly. The C# method is detailed in this blog post: http://blogs.msdn.com/b/jeffrey_wall/archive/2004/09/13/229137.aspx
I have a memory stream that contains a PDF file.
Is it possible to view the PDF without saving it to the hard disk? Process.Start() only takes a path, not a stream.
Thank you
Only by implementing your own pseudo-file system in C#, somehow mounting this as a disk in Windows, and having it intercept the file open and stream the contents of your MemoryStream. Absolutely 100% certainly not worth the effort.
You can create a RAM drive and write the stream to it; that way you are still keeping it all in RAM (assuming disk operations are what worry you).
Sure, this is certainly possible. Just not via Process.Start() and Adobe Reader (I assume you are invoking Adobe Reader or something similar).
If you are using .NET or Java, you simply need to find a PDF viewer component; there are lots to choose from, and Google will give you plenty of links. Gnostice has a good one, but it's expensive. Once you find a suitable control, view the PDF directly from your app.
If there is a way, Process.Start() won't be it, but I'd venture a guess that there isn't one. Unless there's a specific PDF API that somehow allows it (which I doubt), I'd save it to disk.
I am writing a client Windows app which will allow files and their respective metadata to be uploaded to a server, for example gear.stl (the original file) and gear.stl.xml (the metadata). I am trying to figure out the correct protocol to use to transfer the files.
I was thinking about using FTP, since it is a widely used and proven method of transferring files, except that I would have to transfer two files for every actual file (.stl and .stl.xml). However, another thought crossed my mind: what if I create an object that wraps the file, the metadata, and the directory I need to transfer it to, serialize the object, and then submit a request to a web service to transfer the file?
The original file size would range from 100 KB to 10 MB; the metadata would probably be less than 200 KB.
The web service call seems like an easier process to me: the service can deserialize the object and distribute the file and its metadata accordingly. However, I'm not sure if this is a sound idea or if there is a better way to transfer this data than the two methods I have mentioned.
If someone can point me in the right direction it would be much appreciated.
You could wrap them in a zip file like the "new" Office document format does. You might even be able to use its classes to package it all up.
Edit:
Take a look at the System.IO.Packaging.Package class. It seems to be what you need. This class resides in the WindowsBase.dll assembly and became available in .NET 3.0.
PS: Remember that even though it is a zip file, it doesn't need to be compressed. If you have very large files, it may be better to keep them uncompressed. It all depends on how they're going to be used and if the transport size is an issue.
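A minimal sketch of packaging the file and its metadata with System.IO.Packaging (file names from the question; the part URIs and content types are my assumptions):

    using System;
    using System.IO;
    using System.IO.Packaging; // WindowsBase.dll, .NET 3.0+

    class PackageExample
    {
        static void Main()
        {
            using (Package package = Package.Open("gear.package", FileMode.Create))
            {
                AddPart(package, "gear.stl", "application/octet-stream");
                AddPart(package, "gear.stl.xml", "text/xml");
            }
        }

        static void AddPart(Package package, string fileName, string contentType)
        {
            Uri partUri = PackUriHelper.CreatePartUri(new Uri(fileName, UriKind.Relative));
            // NotCompressed avoids recompressing large binary payloads.
            PackagePart part = package.CreatePart(partUri, contentType,
                CompressionOption.NotCompressed);

            using (FileStream source = File.OpenRead(fileName))
            using (Stream target = part.GetStream())
            {
                var buffer = new byte[4096];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    target.Write(buffer, 0, read);
            }
        }
    }

On the server you can open the same package and read the parts back out, so each logical file needs only one upload.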