Decoding JPEG MCUs in C#

I'm currently writing a rudimentary unpacker for the files used by the NDS homebrew "Image Viewer" (http://gamebrew.org/wiki/Image_Viewer).
The original site seems to be down, but I found the source code (including a file-format description) inside the archive hosted on gamebrew.
The unpacker is written in C# for convenience (BinaryReader) and because there is no need for it to be fast.
I've already managed to extract the thumbnails. Those are stored as deflated (zlib) BGR bitmaps.
The full-size images are split up into MCUs (8x8 samples). Each MCU is also stored in deflated form.
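For reference, inflating one of those zlib blocks in C# just needs the two-byte zlib header skipped before handing the rest to DeflateStream (a minimal sketch; the trailing Adler-32 checksum is simply ignored):
using System.IO;
using System.IO.Compression;

static byte[] Inflate(byte[] zlibData)
{
    // Skip the 2-byte zlib header (e.g. 0x78 0x9C); DeflateStream expects raw deflate data.
    using (var input = new MemoryStream(zlibData, 2, zlibData.Length - 2))
    using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        deflate.CopyTo(output);
        return output.ToArray();
    }
}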
So far I have the decompressed data, which looks like DCT coefficients. It also looks like the individual channels have been "multiplexed" (is this the 4:1:1?).
In the NDS version, the byte data is then passed, together with a quantization table (an array of 64 signed 16-bit integers), into a function that looks like this:
void customjpeg_DecodeYUV411(s32 *pQuantizeTable, u8 *pData, u16 *pBuf)
{
    pDCs = (s16*)pData;
    pACs = (s8*)&pDCs[4*4*6];

    static __attribute__((section(".dtcm"))) DCTELEM _y0[DCTSIZE2], _y1[DCTSIZE2], _y2[DCTSIZE2], _y3[DCTSIZE2], _cb[DCTSIZE2], _cr[DCTSIZE2];

    TYUV411toRGB15_Data YUV411toRGB15_Data[4] = {
        { NULL, _y0, &_cb[((4*0)*DCTSIZE)+(4*0)], &_cr[((4*0)*DCTSIZE)+(4*0)] },
        { NULL, _y1, &_cb[((4*0)*DCTSIZE)+(4*1)], &_cr[((4*0)*DCTSIZE)+(4*1)] },
        { NULL, _y2, &_cb[((4*1)*DCTSIZE)+(4*0)], &_cr[((4*1)*DCTSIZE)+(4*0)] },
        { NULL, _y3, &_cb[((4*1)*DCTSIZE)+(4*1)], &_cr[((4*1)*DCTSIZE)+(4*1)] },
    };

    // PrfStart();
    // 3.759ms
    for (u32 y = 0; y < 4; y++) {
        YUV411toRGB15_Data[0]._pBuf = &pBuf[((8*0)*64)+(8*0)];
        YUV411toRGB15_Data[1]._pBuf = &pBuf[((8*0)*64)+(8*1)];
        YUV411toRGB15_Data[2]._pBuf = &pBuf[((8*1)*64)+(8*0)];
        YUV411toRGB15_Data[3]._pBuf = &pBuf[((8*1)*64)+(8*1)];
        for (u32 x = 0; x < 4; x++) {
            DCT13bit_asm(pQuantizeTable, _y0);
            DCT13bit_asm(pQuantizeTable, _y1);
            DCT13bit_asm(pQuantizeTable, _y2);
            DCT13bit_asm(pQuantizeTable, _y3);
            DCT13bit_asm(pQuantizeTable, _cb);
            DCT13bit_asm(pQuantizeTable, _cr);
            YUV411_13bit_toRGB15_asm(YUV411toRGB15_Data);
            YUV411toRGB15_Data[0]._pBuf += DCTSIZE*2;
            YUV411toRGB15_Data[1]._pBuf += DCTSIZE*2;
            YUV411toRGB15_Data[2]._pBuf += DCTSIZE*2;
            YUV411toRGB15_Data[3]._pBuf += DCTSIZE*2;
        }
        pBuf += (DCTSIZE*2)*64;
    }
    // PrfEnd(0); ShowLogHalt();
}
(taken from: customjpeg.cpp)
Why are the calls run 4x4 times per MCU?
Is it pure DCT? I don't see where the "dequantization" is done, and I've seen (AFAIK) some references to the zigzag pattern(?) in the assembly.
Am I missing something?
I've checked some online resources, but they only seem to focus on the encoding process.
Unfortunately I lack the mathematical background, so I haven't fully grasped the concept of the DCT, and the reversal process isn't straightforward to me.
I've spent a whole day trying to get this running using libraries and C# DCT implementations, but I was unsuccessful.
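For what it's worth, this is the textbook baseline-JPEG per-block pipeline I've been trying to reproduce in C# (my own sketch, assuming the usual zig-zag conventions; the NDS code's 13-bit fixed-point assembly may well differ):
using System;

static class JpegBlock
{
    // Maps position i in the zig-zag stream to its row-major index in the 8x8 block.
    static readonly int[] ZigZag =
    {
         0,  1,  8, 16,  9,  2,  3, 10,
        17, 24, 32, 25, 18, 11,  4,  5,
        12, 19, 26, 33, 40, 48, 41, 34,
        27, 20, 13,  6,  7, 14, 21, 28,
        35, 42, 49, 56, 57, 50, 43, 36,
        29, 22, 15, 23, 30, 37, 44, 51,
        58, 59, 52, 45, 38, 31, 39, 46,
        53, 60, 61, 54, 47, 55, 62, 63
    };

    // Dequantizes one 8x8 block and applies a naive 2-D inverse DCT.
    public static double[] Decode(short[] coeffs, short[] quant)
    {
        var block = new double[64];
        for (int i = 0; i < 64; i++)
            block[ZigZag[i]] = coeffs[i] * quant[i]; // dequantize and undo the zig-zag order

        var pixels = new double[64];
        for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++)
        {
            double sum = 0;
            for (int v = 0; v < 8; v++)
            for (int u = 0; u < 8; u++)
            {
                double cu = u == 0 ? 1.0 / Math.Sqrt(2) : 1.0;
                double cv = v == 0 ? 1.0 / Math.Sqrt(2) : 1.0;
                sum += cu * cv * block[v * 8 + u]
                     * Math.Cos((2 * x + 1) * u * Math.PI / 16)
                     * Math.Cos((2 * y + 1) * v * Math.PI / 16);
            }
            pixels[y * 8 + x] = sum / 4 + 128; // undo the level shift
        }
        return pixels;
    }
}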

Related

svg2png performance

I'm having difficulties with the speed of svg2png and am wondering if there is any way to improve it. I'm using D3 to create a radar chart, which is rendered by Node.js in a jsdom. The resulting SVG is converted to a PNG using svg2png, so that the image can be inserted into reports provided to the end user. The same javascript that renders the radar chart is used within the app (without Node.js) and works very quickly.
Using .NET Core 2.1 and Node.js v8.11.2.
The time taken to convert the SVG to PNG is approximately 2-3 seconds.
Invocation of the node services:
public async Task<string> GetRadarChartAsync(dynamic options)
{
    return await _nodeServices.InvokeAsync<string>("./wwwroot/js/node-radar-chart.js", options);
}
Which is called like so, with the base64 image then extracted:
Task<string> result = (Task<string>)mapped_function.DynamicInvoke(objects.ToArray<object>());
string img_base64 = result.Result;
The javascript wrapper looks like this:
module.exports = function (callback, options, data) {
    const dom = new JSDOM(`<!DOCTYPE html><div id="body" class="radar-chart"></div>`);

    var options1 = {
        window: dom.window,
        selector: '.radar-chart',
        data: JSON.parse(data.radar)
    };

    var options_combined = Object.assign(options1, options);
    var chart1 = new RadarChart(options_combined);

    // Convert SVG to PNG and return it to the controller
    var svgText = chart1.html();
    svg2png(Buffer.from(svgText))
        .then(buffer => buffer.toString('base64'))
        .then(buffer => callback(null, buffer));
};
UPDATE 14/8/18
Previously I incorrectly labelled this as an invocation issue between Node and .NET Core 2.1. Further investigation revealed that svg2png is causing the problem.
FURTHER UPDATE
The issue is likely due to svg2png utilising PhantomJS. The proposed fix was to allow svg2png to reuse the same PhantomJS instance over multiple calls, but there has been no development on this issue as yet. See the svg2png github.
I'll have to live with my speed issues until a better solution presents itself.
When I tried out the NodeServices samples on GitHub (especially the server-side rendering one), the invocation time was at least 1 second on .NET Core 2.0. When I tried it with 2.1, I think it decreased to around 100 ms or less, if I remember correctly. So it's possible that the NodeServices invocation time improved hugely from 2.0 to 2.1.
If upgrading is not possible, then maybe you could refactor so that you need to invoke NodeServices only once.
I have never used svg2png, but I see that you do buffer.toString('base64'), which is probably not very efficient. I would imagine this happens:
The whole buffer is read to the end and converted to Base64 in one go, and Base64 itself is not very space-efficient (as far as I know).
Then this whole result is moved from Node to ASP.NET, and I guess there could be some serialization going on here, which will be slow with huge Base64 content(?).
I would look into using streams on both sides as that's what is usually used when it comes to media files.
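As a purely hypothetical illustration of that direction (untested; the controller and parameter names are made up): invoke Node once per request, decode the Base64 once, and stream the raw bytes to the client instead of passing the string through more layers.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.NodeServices;

public class ChartController : Controller
{
    private readonly INodeServices _nodeServices;

    public ChartController(INodeServices nodeServices) => _nodeServices = nodeServices;

    public async Task<IActionResult> RadarChartPng(string radarJson)
    {
        // Same invocation as GetRadarChartAsync, done exactly once;
        // the anonymous objects stand in for the wrapper's (options, data) args.
        var base64 = await _nodeServices.InvokeAsync<string>(
            "./wwwroot/js/node-radar-chart.js", new { }, new { radar = radarJson });

        // Decode once and hand the bytes to the response as a file stream.
        byte[] png = Convert.FromBase64String(base64);
        return File(png, "image/png");
    }
}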

Encrypting BIG emails in Outlook causes Low Resources exception in C#

I am developing a tool that encrypts emails with S/MIME in bulk within Outlook 2013. It works so far, but when I try to encrypt a REALLY BIG email (in the test case it was about 60 MB raw), I get a COMException stating insufficient resources.
I can work around this by going directly through EWS and MimeKit (which works like a charm! Thank you @jstedfast), but I'd like to find a way to stay within Outlook, for network-traffic reasons. I know these changes will eventually be synced to Exchange, but the process itself is then independent of bandwidth.
I am also looking at MapiEx, but if there is an easier solution than pulling in yet another dependency (and one with MFC, too), I'd be happy! Maybe there are some settings I'd have to make beforehand.
A bit of code; the exception is caught somewhere else.
public void SetEncryption(MailItem mailItem)
{
    PropertyAccessor pa = null;
    try
    {
        pa = mailItem.PropertyAccessor;
        Int32 prop = (int)pa.GetProperty(_PR_SECURITY_FLAGS);
        Int32 newprop = prop | 1; // set the "encrypt" bit
        pa.SetProperty(_PR_SECURITY_FLAGS, newprop);
    }
    finally
    {
        if (pa != null)
            Marshal.FinalReleaseComObject(pa);
        pa = null;
    }
}
Edit: the exception does not occur when the encryption flag is set, but when the result is saved afterwards:
SetEncryption(mailItem);
mailItem.Save();
I solved it myself.
Since I had the problems in Outlook itself, I tried MAPIEx to access the raw S/MIME attachment in the email and de-/encrypt it using MimeKit/BouncyCastle.
The same problem occurred, but with a different error message, which led me to the following page: ASN.1 value too large.
To sum it up: the Crypto API has two signatures, one that takes a byte array and one that takes a stream. The first one has an arbitrarily imposed (!!!) limit of 100 million bytes. Since the enveloped CMS is base64-encoded twice, the usable fraction of those 100 MB is 9/16, which is roughly 56 MB.
I assume that Outlook uses the same API call and therefore has this limit.
I know it is not a proven fact, but the evidence strongly supports this theory. :)
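For anyone hitting the same wall, the stream-based route through MimeKit looks roughly like this (a sketch only; TemporarySecureMimeContext is just a stand-in here, and the recipients' certificates must already be available in the context):
using MimeKit;
using MimeKit.Cryptography;

static void EncryptMessage(string inPath, string outPath)
{
    var message = MimeMessage.Load(inPath);

    using (var ctx = new TemporarySecureMimeContext())
    {
        // MimeKit streams the content instead of materializing one huge
        // byte array, which avoids the byte-array limit described above.
        message.Body = ApplicationPkcs7Mime.Encrypt(ctx, message.To.Mailboxes, message.Body);
        message.WriteTo(outPath);
    }
}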
Check if KB 2480994 is installed: http://support.microsoft.com/kb/2480994

Programmatically edit a resource table of an external exe? [duplicate]

There is a program called Resource Hacker which allows changing the resources in other win32(64) dll and exe files.
I need to do the same thing, but programmatically. Is it possible using the .NET Framework? What is a good starting point?
You must use the BeginUpdateResource, UpdateResource and EndUpdateResource WinAPI functions; try this page for the P/Invoke .NET signatures of these functions. You can also check the ResourceLib project.
The author points to another tool, "XN Resource Editor", which comes with source code (although Delphi, not .NET).
That should be enough to see which functions are being used and to call their .NET equivalents.
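For illustration, a minimal P/Invoke sketch of that Begin/Update/EndUpdateResource sequence (the RT_RCDATA resource type and the bare-bones error handling are just example choices):
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class ResourceUpdater
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr BeginUpdateResource(string pFileName, bool bDeleteExistingResources);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool UpdateResource(IntPtr hUpdate, IntPtr lpType, IntPtr lpName,
        ushort wLanguage, byte[] lpData, uint cbData);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool EndUpdateResource(IntPtr hUpdate, bool fDiscard);

    // Adds or replaces one RT_RCDATA resource in an external exe/dll.
    public static void SetRcData(string filePath, int resourceId, byte[] data)
    {
        const int RT_RCDATA = 10; // predefined resource-type id

        IntPtr handle = BeginUpdateResource(filePath, false);
        if (handle == IntPtr.Zero)
            throw new Win32Exception();

        bool ok = UpdateResource(handle, (IntPtr)RT_RCDATA, (IntPtr)resourceId,
            0 /* LANG_NEUTRAL */, data, (uint)data.Length);

        // Commit on success, discard the update on failure.
        if (!EndUpdateResource(handle, !ok) || !ok)
            throw new Win32Exception();
    }
}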
Take a look at Anolis.Resourcer. It seems to be the thing you need:
A ResHacker clone developed as a testbed for Anolis.Core and to replace ResHacker (because ResHacker doesn't support x64, XN Resource Editor (ResHacker's spiritual sequel) doesn't support multiple-language resources and crashes a lot, and the other utilities cost actual money). It has a powerful yet simplified UI that doesn't duplicate commands or confuse users with special-case handlers (which ResHacker and XN have in spades).
Note that none of these will work if you're dealing with signed EXEs or DLLs.
Well, as I see it, this is not an easy task, so I'll use the command-line interface of Resource Hacker.
If you want to do it straight from .NET, there is a library called Ressy for exactly this purpose. It provides both low-level operations on resources (i.e. working with raw byte streams) and high-level ones (e.g. replacing icons, manifests, version info, etc.).
Add or overwrite a resource:
using Ressy;

var portableExecutable = new PortableExecutable("C:/Windows/System32/notepad.exe");

portableExecutable.SetResource(
    new ResourceIdentifier(
        ResourceType.Manifest,
        ResourceName.FromCode(1),
        new Language(1033)
    ),
    new byte[] { 0x01, 0x02, 0x03 }
);
Get resource data:
using Ressy;
using System.Text; // for Encoding.UTF8

var portableExecutable = new PortableExecutable("C:/Windows/System32/notepad.exe");

var resource = portableExecutable.GetResource(new ResourceIdentifier(
    ResourceType.Manifest,
    ResourceName.FromCode(1),
    new Language(1033)
));

var resourceData = resource.Data; // byte[]
var resourceString = resource.ReadAsString(Encoding.UTF8); // string
Set file icon:
using Ressy;
using Ressy.HighLevel.Icons;

var portableExecutable = new PortableExecutable("C:/Windows/System32/notepad.exe");
portableExecutable.SetIcon("new_icon.ico");
See the readme for more examples.

How to read an extension .proto file (textformat.merge) with old code?

My question relates to the C# implementation of Google protocol buffers (protobuf-csharp-port, by Jon Skeet; great job!).
I am having trouble with extensions. Let's say I wrote:
"transport_file.proto" with a "transport" message, and some code to deal with it, "code_old";
and then I wrote an extension of the transport message in the "Mytransport.proto" file, and new code to read it, "code_new".
I'm trying to read a new message (from MyTransport.proto) with code_old, expecting it to ignore the extension, but I get an exception in the Merge method of TextFormat: "transport" has no field named "whatever_new_field".
Transport.Builder myAppConfigB = new Transport.Builder();
System.IO.StreamReader fich = System.IO.File.OpenText("protocolBus.App.cfg");
TextFormat.Merge(fich.ReadToEnd(),myAppConfigB);
fich.Close();
The new, extended file looks like this:
...
Transport
{
    TransportName: "K6Server_0"
    DllImport: "protocolBus.Transports.CentralServer"
    TransportClass: "K6Server"
    K6ServerParams
    {
        K6Server { host: "85.51.11.23" port: 40069 }
        Service: "TZinTalk"
        ...
    }
}
...
while the old, non-extended one looks like this:
...
Transport
{
    TransportName: "K6Server_0"
    DllImport: "Default"
    TransportClass: "Multicast"
}
...
The whole idea is to use the text-based protocol buffer as a config file in which I write some params; based on one of those, I load an assembly, which will then read the whole message, including the new extension (whose params initialize the object).
Any idea? (It is a desperate question :D)
I'm using MSVC# 2008 Express Edition and protobuf-csharp-port version 0.9.1 (someday I'll upgrade everything).
Thanks in advance.
I'm working on a non-centralized publish-subscribe messaging framework (for any message written in a proto file, I auto-create a Publisher and a Subscriber class) with different transports. By default I use multicast, but broadcast and a "UDP star" are also included. I use the extension mechanism to let people add new transports with their own config params, which should be read by my main code_old (just to load the assembly) and then read again (fully) by the new transport (.dll).
Curious? The previous, almost functional, version is at http://protocolbus.casessite.org
Update 1
Extended types in text format are enclosed in brackets (good to know, I was not aware of it :D), so I should have written:
[K6ServerParams]
{
    K6Server { host: "85.51.11.23" port: 40069 }
    Service: "TZinTalk"
    ...
}
Protocol buffers are designed to be backwards- and forwards-compatible in their binary format, but the current code certainly doesn't expect to parse the text format with unknown fields. It could potentially be changed to do that, but I'd want to check with the Java code to try to retain parity with it.
Is there any reason you're not using the binary representation to start with? That's the normal intended usage, and the one where the vast majority of the work has gone in. (Having said which, it all seems a bit of a blur after this long away from the code...)
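For what it's worth, reading the binary form with the old generated class would look roughly like this (a from-memory sketch against the protobuf-csharp-port API; the .bin file name is made up). In the binary format, unknown extension fields end up in the message's UnknownFields instead of causing a failure:
using System.IO;
using Google.ProtocolBuffers;

Transport transport;
using (Stream input = File.OpenRead("protocolBus.App.cfg.bin"))
{
    // Unlike TextFormat.Merge, the binary parser keeps fields it
    // doesn't recognize, so code_old can read code_new's messages.
    transport = Transport.ParseFrom(input);
}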

How can I port a Javascript AES library to .NET to ensure interoperability?

Background:
I have data that I'm encrypting with javascript on the client side, which needs to be decrypted on the server side.
As far as I can tell, the javascript AES library I'm using does not interoperate with the C# Rijndael library.
Thus, I'm left to essentially reimplement the javascript AES in C#.
I'm going to try to compile the javascript into a dll using jsc.exe and see if Reflector can save me some time.
I'm aware that JScript is not the same as javascript, but I'm hoping I can get away with something that works awfully close, and just do the touch-ups manually.
Problem:
When I compile the javascript using JSC I get the following error:
error JS1234: Only type and package definitions are allowed inside a library
The offending line is the first line of the following code:
var GibberishAES = (function () {
    var Nr = 14,
        /* Default to 256 Bit Encryption */
        Nk = 8,
        Decrypt = false,

        enc_utf8 = function (s) {
            try {
                return unescape(encodeURIComponent(s));
            }
            catch (e) {
                throw 'Error on UTF-8 encode';
            }
        },

        dec_utf8 = function (s) {
            try {
                return decodeURIComponent(escape(s));
            }
            catch (e) {
                throw ('Bad Key');
            }
        },
And the full source can be found here:
I'm not sure what the problem is. I'm also open to suggestions for other ways to encrypt/decrypt data between Javascript and C#.
If you just want to do AES from Javascript, did you try slowAES? It worked for me. I found good interop between slowAES and the built-in Rijndael/AES classes in .NET, and I found the class design natural and easy to use and understand. This would not require porting from Javascript to JScript.
Password-based key derivation is not really handled by slowAES. If you need that (likely), then I suggest the PBKDF2 implementation from Parvez Anandam. I have also used that, and it works nicely.
When I tested slowAES coupled with Anandam's PBKDF2, it had good interop with C#'s RijndaelManaged class in CBC mode; a sketch of the .NET side follows below.
Don't be put off by the name "slowAES" - it isn't really slow. It's named "slow" because it's Javascript.
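To sketch the .NET side of that combination (all parameter values here are examples; the salt, IV, iteration count, and key size must match whatever the Javascript side used):
using System.Security.Cryptography;

static byte[] DecryptFromJs(byte[] ciphertext, string password, byte[] salt, byte[] iv)
{
    // PBKDF2 (what Anandam's script implements) to derive the key...
    using (var kdf = new Rfc2898DeriveBytes(password, salt, 1000))
    using (var aes = new RijndaelManaged { Mode = CipherMode.CBC, Padding = PaddingMode.PKCS7 })
    {
        aes.Key = kdf.GetBytes(32); // 256-bit key
        aes.IV = iv;

        // ...then plain AES-CBC to decrypt what slowAES produced.
        using (var decryptor = aes.CreateDecryptor())
            return decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
    }
}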
If you cannot use something clean and compatible like slowAES, then before trying the jsc compiler I would suggest packaging the existing javascript code into a Windows Script Component (WSC). A WSC packages script logic as a COM component, where it becomes usable by any COM-capable environment, including any .NET application. Here's a post that shows how to package slowAES as a WSC.
For some reason not many people are aware that you can package script code as a COM component, but it's been around for 10 years. It may sound unusual, but it beats doing the port. The code inside the WSC is Javascript, not JScript.NET.
I had this problem today as well, and happened to stumble upon the solution. Use:
package theNameSpace
{
    class Whatever
    {
        function func()
        {
            return "the results";
        }
    }
}
