Compress existing XPS document - C#

I would like to load an existing XPS document and compress it further. Looking at MSDN, it seems that .NET allows setting the compression and interleaving options; however, I was unable to find out how to apply those settings to an existing document.

Here's the simplest answer: an XPS file is simply a ZIP archive.
Manually, you can rename your file from something.xps to something.zip, extract the contents, recompress them at a higher compression level, and rename the file back again. You just need to make sure that the zip tool you are using doesn't end up putting everything inside a sub-directory within the zip.
Or you could do the same with scripting or code.
If you want to reduce the file size even more, have a look at my CodeProject article.
The code attached to it is built around manipulating the output from the "XPS printer driver"; however, most of the ideas in it should give you plenty of useful options for compressing a file.
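If you would rather do it in code, here is a minimal C# sketch of that same extract-and-recompress round trip; it assumes .NET 4.5 or later with a reference to System.IO.Compression.FileSystem, and the file and folder names are just placeholders.

using System.IO;
using System.IO.Compression;

class RecompressXps
{
    static void Main()
    {
        const string source = "something.xps";       // existing document
        const string temp = "xps_contents";           // scratch folder
        const string output = "something.small.xps";  // recompressed copy

        // An XPS file is just a ZIP package, so extract its parts...
        ZipFile.ExtractToDirectory(source, temp);

        // ...and rebuild the archive at a higher compression level.
        // includeBaseDirectory: false keeps the parts at the root of the ZIP,
        // which is required for the result to stay a valid XPS document.
        ZipFile.CreateFromDirectory(temp, output,
            CompressionLevel.Optimal, includeBaseDirectory: false);

        Directory.Delete(temp, recursive: true);
    }
}

An alternative that avoids the scratch folder is System.IO.Packaging: open the existing package, copy each part into a new package created with CompressionOption.Maximum, and save that as the new XPS.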

C# re-zipping Docx after image replace won't open [duplicate]

I have been trying to write a simple Markdown -> docx parser/writer, but am completely stuck with the last part, which should be the easiest: i.e. compressing the folder into a .docx that Word, or any other .docx reader, will recognize.
My parser-writer is irrelevant really: I have this problem if I simply unzip any old Word-produced *.docx and then try to recompress it with the usual compression utilities, giving it the file ending .docx. Is there some mysterious header I should be adding, or do I need a special OPC compression utility, or what?
I don't so much want a tool that will do this, as to figure out what is supposed to be there. It seems to be independent of the WordprocessingML specification.
Needless to say, I don't know anything about compression. Everything I can find via Google has to do with fancy utilities you can use in business, but I'm making a little executable that would be GPL'd or something, and should work on anything.
The most common problem when manually zipping up Open XML documents is that it will not work if you zip the directory instead of its contents. In other words, the [Content_Types].xml file and the word, docProps, and _rels directories need to reside at the root level of the zip file.
Here are steps to unzip my.docx and re-zip:
% mkdir unzipped
% cd unzipped/
% unzip ../my.docx
% zip -r ../rezipped.docx *
% open ../rezipped.docx
The compression used is standard ZIP (Deflate) compression.
7-Zip seems to offer this, though I have not tested it.
Further to what Mica said, the contents of the ZIP file are organised according to the Open Packaging Convention; cf. Microsoft's Essentials of the Open Packaging Convention.
You can use the .NET System.IO.Packaging namespace to make and manipulate .docx files; it is also implemented in the Mono project.
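As a small illustration, the sketch below opens an existing .docx with System.IO.Packaging and lists its parts; the packaging API maintains [Content_Types].xml and the _rels entries for you when you create or modify parts, so you never have to hand-build the ZIP layout. It assumes a reference to WindowsBase on the .NET Framework, and my.docx is the file from the steps above.

using System;
using System.IO;
using System.IO.Packaging;

class InspectDocx
{
    static void Main()
    {
        // Open the document as an Open Packaging Conventions package.
        using (Package package = Package.Open("my.docx", FileMode.Open, FileAccess.Read))
        {
            foreach (PackagePart part in package.GetParts())
            {
                // e.g. /word/document.xml and its declared content type
                Console.WriteLine("{0}  ({1})", part.Uri, part.ContentType);
            }
        }
    }
}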

How to programmatically determine if an image file is georeferenced

I have a task to programmatically scan a folder for georeferenced images. There might be a lot of images, some quite large, and some not georeferenced. The spatial information can also be either embedded or in a world file.
How can I tell programmatically (C#/WPF/ESRI Runtime) if "C:\someFolder\file.x" is georeferenced?
Thanks
First, check the file type to see whether it's a format that supports built-in georeferencing (GeoTIFF, JP2, MrSID). Other static image files would need some sort of companion file with the georeferencing information, so for each image file you'd want to look for a matching companion file.
If you add some info on what formats the images/world files are in, it'll be easier to show you some sample code.
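In the meantime, here is a rough C# sketch of the world-file half of the check; the list of companion extensions (.tfw, .jgw, .pgw, .bpw, .sdw, plus the generic .wld) follows the usual naming convention and is an assumption you may need to adjust. Detecting embedded georeferencing (GeoTIFF tags, JP2 boxes, MrSID metadata) needs a format-aware library such as GDAL or the ESRI runtime.

using System.IO;

static class GeoreferenceCheck
{
    // Conventional world-file companions: .tif -> .tfw, .jpg -> .jgw,
    // .png -> .pgw, .bmp -> .bpw, .sid -> .sdw, plus the generic .wld.
    static readonly string[] WorldFileExtensions =
        { ".tfw", ".jgw", ".pgw", ".bpw", ".sdw", ".wld" };

    public static bool HasWorldFile(string imagePath)
    {
        foreach (string ext in WorldFileExtensions)
        {
            // e.g. C:\someFolder\file.tif -> C:\someFolder\file.tfw
            if (File.Exists(Path.ChangeExtension(imagePath, ext)))
                return true;
        }
        return false;
    }
}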

Efficiently finding the segment that has undergone changes recently in a Docx File

I am developing an application which takes a backup of a DOCX file. For the initial backup I copy the entire file to the destination, but the next time I want to perform an incremental backup, i.e. I want to back up only the segments of the DOCX file that have undergone changes. I need to find the most efficient way to do this.
I would really be thankful for any help in this regard.
The DOCX file is different from the previous Microsoft Word programs, which use the file extension DOC, in the sense that whereas a DOC file uses a text or binary format for storing a document, a DOCX file is based on XML and uses ZIP compression for a smaller file size. In other words, a DOCX file is a set of XML files that have been compressed using ZIP.
It might help to use ZipFile to dissect the package and tell which inner file has really changed, and then incrementally save only those changes in your VCS.
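As a rough illustration of that idea, the sketch below hashes every entry in the previous and current copies of the document and reports the parts whose content differs; only those parts would need to go into the incremental backup. It assumes .NET 4.5 or later with System.IO.Compression, and the method names are my own.

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

static class DocxDiff
{
    // Hash each inner file (word/document.xml, word/media/..., etc.) of a .docx.
    static Dictionary<string, string> HashEntries(string docxPath)
    {
        var hashes = new Dictionary<string, string>();
        using (ZipArchive zip = ZipFile.OpenRead(docxPath))
        using (SHA256 sha = SHA256.Create())
        {
            foreach (ZipArchiveEntry entry in zip.Entries)
            {
                using (Stream content = entry.Open())
                    hashes[entry.FullName] = Convert.ToBase64String(sha.ComputeHash(content));
            }
        }
        return hashes;
    }

    // Names of the parts that are new or changed since the previous backup.
    public static IEnumerable<string> ChangedParts(string previousDocx, string currentDocx)
    {
        Dictionary<string, string> oldHashes = HashEntries(previousDocx);
        foreach (KeyValuePair<string, string> part in HashEntries(currentDocx))
        {
            string oldHash;
            if (!oldHashes.TryGetValue(part.Key, out oldHash) || oldHash != part.Value)
                yield return part.Key;
        }
    }
}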

Alternatives to ZIP for combining many files into one on Windows using .NET

I'm looking for methods to combine files, including their names and relative paths, into one single file: a folder disguised as a file. I don't need any compression or encryption, just the file data plus some binary metadata attached to each file.
It would be great if this file was possible to open/inspect/unpack with a standard file browser in Windows such as with regular zip-files.
Yes I could use zip. But I'm researching alternatives and I would prefer a simple method I could implement myself in C#/.NET.
UPDATE
I've researched this some more and came across Microsoft's Structured Storage format. It looked promising at first, but it seems to be an obsolete format, replaced by the Open Package Format. Then I found out about the TAR format. It seems to be the most basic format, but I'm not sure yet whether I can add any custom metadata to TAR entries.
UPDATE
I went with DotNetZip in the end anyway...
Why not use zip? You can use a third-party library, like DotNetZip, to make the code easy to write. And, as you mentioned, Windows handles zip files well.
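Since the question mentions per-file metadata, here is a small sketch of how that could look with DotNetZip: each file is stored under a relative path inside the archive and a piece of metadata is attached as the entry comment. The file names are placeholders, and compression is switched off as the question asks; note that entry comments hold text only, so truly binary metadata would probably go into a small companion entry per file instead.

using Ionic.Zip;
using Ionic.Zlib;

class Container
{
    static void Main()
    {
        using (var zip = new ZipFile())
        {
            // No compression - the archive is just a container.
            zip.CompressionLevel = CompressionLevel.None;

            // Store the file under a relative path and attach metadata
            // as the entry comment.
            ZipEntry entry = zip.AddFile(@"data\settings.bin", "data");
            entry.Comment = "example metadata for this entry";

            zip.Save("container.zip");
        }
    }
}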
If you have a specific reason to look for an alternative to ZIP, take a look at virtual file systems, e.g. CodeBase File System or our Solid File System. Solid File System lets you add alternate data streams (like in NTFS) or tags (small chunks of binary or text data) to each file or directory. And with the OS edition of SolFS you can make the file system visible to Windows (including Explorer and third-party applications).
I must admit that while virtual file systems are easy to use (easier than ZIP), they are commercial products (I haven't seen any free virtual file system implementations yet).

Detect file extension C#

My brother got a virus on his computer, and what that virus did was rename almost all the files on it. It changed the file extensions as well, so a file that might have been named picture.jpg was renamed to kjfks.doc, for example.
So what I have done in order to solve this problem is:
Remove all file extensions from the files. (I use a recursive method to search for all files in a directory, and as I go through the files I remove the extension.)
Now the files do not have an extension.
I think these file names are stored in a local database created by the virus, and if I purchase the antivirus they will be renamed back to their original names.
Since my brother created a backup, I selected the files that had a creation date later than when he performed the backup, and I have placed those files in a directory.
I am not interested in getting the right extension as long as I can see the content of the file. For example, I will scan each file, and if it has text inside I know it can have a .txt extension; maybe it was originally a .html or .css file, and that I will not be able to tell, which is fine.
I believe that all PDF files should have something in common, and DOC files should also have something in common. How can I figure out what files of the most common types (pdf, doc, docx, png, jpg, etc.) have in common?
Edit:
I know it will probably take less time to go over all these 200 files and test each one instead of creating this program; it is just that I am curious to see whether it is possible to get the file extension.
On Unix, you can use file to determine the type of a file. There is also a port for Windows, and you can obviously write a script (batch, PowerShell, etc.) or a C# program to automate this.
First, congratulate your brother on doing a backup. Many people don't, and are absolutely wiped out by these problems.
You're going to have to do a lot of research, I'm afraid, but you're on the right track.
Open each file with a TextReader or a BinaryReader and examine the headers. Most of them are detectable.
For instance: Every PDF starts with "%PDF-" and then its version number. Just look at those first 5 characters. If it's "%PDF-", then put a PDF on the filename and move on.
Similarly: "ÿØÿà..JFIF" for JPEGs, "[InternetShortcut]" for URL shortcuts, "L...........À......Fƒ" for regular shortcuts (the "." is a zero/null byte, BTW).
ZIPs / compressed archives start with {0x50}{0x4B}{0x03}{0x04}, and you should be aware that Office 2007/2010 documents are really ZIPs with XML files inside them.
You'll have to do some digging as you find each type, but you should be able to write something to establish most of the file types.
You'll have to write some recursion to work through directories, but you can eliminate any file with no extension.
BTW - a great tool to help with this is HxD: http://www.mh-nexus.de/ It's what I used to pull this answer together!
Good luck!
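A bare-bones C# version of that header check might look like the sketch below; the signature table only covers a few of the formats mentioned above, and you would extend it as you identify more types.

using System;
using System.IO;
using System.Linq;

static class FileTypeSniffer
{
    // A few well-known magic numbers and the extension to apply when they match.
    static readonly Tuple<byte[], string>[] Signatures =
    {
        Tuple.Create(new byte[] { 0x25, 0x50, 0x44, 0x46, 0x2D }, ".pdf"), // "%PDF-"
        Tuple.Create(new byte[] { 0xFF, 0xD8, 0xFF }, ".jpg"),             // JPEG/JFIF
        Tuple.Create(new byte[] { 0x89, 0x50, 0x4E, 0x47 }, ".png"),
        Tuple.Create(new byte[] { 0x50, 0x4B, 0x03, 0x04 }, ".zip"),       // also docx/xlsx/pptx
    };

    public static string GuessExtension(string path)
    {
        byte[] header = new byte[16];
        using (FileStream stream = File.OpenRead(path))
            stream.Read(header, 0, header.Length);

        foreach (Tuple<byte[], string> signature in Signatures)
        {
            if (header.Take(signature.Item1.Length).SequenceEqual(signature.Item1))
                return signature.Item2;
        }
        return null; // unknown - leave the file for manual inspection
    }
}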
"most common types" each have it's own format and most of them have some magic bytes at the fixed position near beginning of the file. You can detect most of formats quite easily. Even HTML, XML, .CSS and similar text files can be detected by analyzing their beginning. But it will take some time to write an application that will guess the format. For some types (such as ODF format or JAR format, which are built on top of regular ZIPs) you will be also able to detect this format.
But ... Can it be that there exists such application on the market? I guess you can find something if you search, cause the task is not as tricky as it initially seems to be.
