I am using WebClient to get image data from a URL, and trying to generate a video with it like so:
//http://www.codeproject.com/Articles/7388/A-Simple-C-Wrapper-for-the-AviFile-Library
WebClient client = new WebClient();
System.Net.WebRequest request = System.Net.WebRequest.Create(images);
System.Net.WebResponse response = request.GetResponse();
System.IO.Stream responseStream = response.GetResponseStream();
Bitmap bitmap = new Bitmap(responseStream);
//create a new AVI file
AviManager aviManager = new AviManager(@"C:\Users\Laptop\Documents\tada.avi", false);
//add a new video stream and one frame to the new file
//set IsCompressed = false
VideoStream aviStream = aviManager.AddVideoStream(false, 2, bitmap);
aviManager.Close();
But it fails at the following point. In the library, on this line
int result = Avi.AVIFileCreateStream(aviFile, out aviStream, ref strhdr);
I get the following error:
System.AccessViolationException: 'Attempted to read or write protected
memory. This is often an indication that other memory is corrupt.'
You probably have "Any CPU" selected at the moment...so your app gets compiled/run as a 64bit process (on a 64bit Windows version).
The problem appears to be that the AVI wrapper library was probably never tested with a 64bit .NET app....it hasn't defined its "pinvoke" signatures properly, so the parameters aren't pushed onto/popped off the stack correctly when making the 64bit API calls.
Change your project's "platform target" setting to x86...so that you avoid the issue....and call "avifil32.dll" in 32bit mode instead.
Windows ships with both a 32bit and a 64bit version of that AVI library, so in theory it is possible to call an AVI library when you are a 64bit process....but you need to define the interop/marshalling pinvoke properly.
c:\windows\system32\avifil32.dll (64bit)
c:\windows\syswow64\avifil32.dll (32bit)
In 32bit (Microsoft uses the ILP32 data model)...
- an int is 4 bytes
- a pointer is 4 bytes

In 64bit (Microsoft uses the LLP64 or P64 data model)...
- an int is (still) 4 bytes
- a pointer is (now) 8 bytes
(see https://msdn.microsoft.com/en-us/library/windows/desktop/aa384083(v=vs.85).aspx)
The mistake that often happens is that "pinvoke" definitions have used "int" when defining pointer types, instead of the more correct IntPtr type.
Thus, the "call" works ok on 32bit (because an "int" is the same size as a "pointer")....while on 64bit they are different sizes.
Other things change when you are 64bit too, such as the default boundary alignment; this can change the offsets of fields within structures, so you have to be careful when defining your pinvoke C# structures so that they match.
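If you can't fix the wrapper right away, a cheap defensive check is to refuse to run as 64bit at all. A minimal sketch (the exception message is just illustrative):

// A hedged guard: refuse to run as a 64-bit process until the wrapper's
// pinvoke definitions are fixed. IntPtr.Size is 4 in a 32-bit process
// and 8 in a 64-bit one.
if (Environment.Is64BitProcess) // equivalent to IntPtr.Size == 8
{
    throw new PlatformNotSupportedException(
        "This AVI wrapper declares pointers as 'int'; build with " +
        "Platform target = x86, or fix the pinvoke signatures.");
}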
In case you are interested, the WIN32 signature for the AVIFileCreateStream function is as follows:
STDAPI AVIFileCreateStream(
    PAVIFILE pfile,
    PAVISTREAM *ppavi,
    AVISTREAMINFO *psi
);
And the "types" of its parameters are:
typedef IAVIFile *PAVIFILE; // i.e. just a pointer
typedef IAVIStream *PAVISTREAM; // i.e. just a pointer
typedef struct {
    DWORD fccType;
    DWORD fccHandler;
    DWORD dwFlags;
    DWORD dwCaps;
    WORD  wPriority;
    WORD  wLanguage;
    DWORD dwScale;
    DWORD dwRate;
    DWORD dwStart;
    DWORD dwLength;
    DWORD dwInitialFrames;
    DWORD dwSuggestedBufferSize;
    DWORD dwQuality;
    DWORD dwSampleSize;
    RECT  rcFrame;
    DWORD dwEditCount;
    DWORD dwFormatChangeCount;
    TCHAR szName[64];
} AVISTREAMINFO;
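For reference, here is a sketch of what a size-correct C# mapping of that structure could look like. This is not the wrapper's actual definition: DWORD maps to uint, WORD to ushort, the TCHAR[64] array to a fixed-length inline string, and RECT is the standard four-int Win32 layout.

// A hedged sketch: field-for-field C# mapping of AVISTREAMINFO so that
// field sizes and offsets match the native layout on both 32 and 64 bit.
[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int left, top, right, bottom;
}

// CharSet.Ansi matches the ANSI entry point a plain [DllImport] binds to.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct AVISTREAMINFO
{
    public uint fccType;
    public uint fccHandler;
    public uint dwFlags;
    public uint dwCaps;
    public ushort wPriority;
    public ushort wLanguage;
    public uint dwScale;
    public uint dwRate;
    public uint dwStart;
    public uint dwLength;
    public uint dwInitialFrames;
    public uint dwSuggestedBufferSize;
    public uint dwQuality;
    public uint dwSampleSize;
    public RECT rcFrame;              // four 32-bit fields
    public uint dwEditCount;
    public uint dwFormatChangeCount;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
    public string szName;             // TCHAR szName[64]
}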
That wrapper library defined the .NET "pinvoke" for AVIFileCreateStream using this:
//Create a new stream in an open AVI file
[DllImport("avifil32.dll")]
public static extern int AVIFileCreateStream(
    int pfile,
    out IntPtr ppavi,
    ref AVISTREAMINFO ptr_streaminfo);
Immediately, you can see that the first parameter is defined incorrectly.
When the "call is made"....only 4 bytes will be placed onto the stack for the first parameter...instead of 8, then for the second parameter (which is a pointer to a pointer) 8 bytes are pushed (because IntPtr was used) (the address where to "write" an address), third parameter is an address to an AVISTREAMINFO structure.
Thus when AVIFileCreateStream is called it accesses those parameters on the stack, but they are basically junk.....it will be trying to use a pointer with the wrong value (i.e. only 4 bytes of the (first parameters) pointers address has come through on the stack...and the remaining 4 bytes (of the 8 byte pointer) are filled from the "next" thing on the stack...thus the pointer address is highly likely to be garbage....which is why you get the access violation.
The way it should have been defined is something like this (note there are other ways to achieve the same):
[DllImport("avifil32.dll", SetLastError=true)]
public static extern int AVIFileCreateStream(IntPtr pfile, out IntPtr ppavi, ref AVISTREAMINFO psi);
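The same rule applies to the rest of the wrapper's definitions. For instance, a hedged sketch of a 64bit-safe declaration for AVIFileOpen (the function that produces the PAVIFILE passed above) could look like this:

// Sketch only: every native pointer is declared as IntPtr, so the
// declaration is correct in both 32-bit and 64-bit processes.
[DllImport("avifil32.dll", SetLastError = true)]
public static extern int AVIFileOpen(
    out IntPtr ppfile,      // PAVIFILE*  - receives the file handle
    string szFile,          // LPCTSTR    - path of the .avi file
    int uMode,              // UINT       - e.g. OF_WRITE | OF_CREATE
    IntPtr pclsidHandler);  // CLSID*     - IntPtr.Zero for the default handler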
Basically I would like to be able to change the command line stored in the PEB, which I believe is at offset 0x70 of the process parameters. I am trying to do this using WriteProcessMemory, which is part of Kernel32.dll.
Current_ImageBase.buffer = pNewAddr;
if (!WriteProcessMemory(hProcess, rtlUserProcParamsAddress + 0x70, (IntPtr)(&Current_ImageBase), Marshal.SizeOf(typeof(UNICODE_STRING)), out intPtrOutput))
{
Console.WriteLine("ERROR: Failed to reflect change back to PEB.\n");
return false;
}
This should change the CommandLine in the PEB.
There is no guarantee that the PEB will be identical on every version of Windows.
The PEB contains a pointer to a _RTL_USER_PROCESS_PARAMETERS structure named ProcessParameters.
This structure contains a UNICODE_STRING named CommandLine. Inside this structure is a pointer to a buffer which contains the command line arguments.
To externally overwrite these arguments, you must:
1. Call NtQueryInformationProcess to get the ProcessBasicInformation, and therefore the PEB address
2. OpenProcess with write permissions
3. ReadProcessMemory the PEB
4. ReadProcessMemory PEB.ProcessParameters
5. ReadProcessMemory PEB.ProcessParameters.CommandLine
6. WriteProcessMemory PEB.ProcessParameters.CommandLine.Buffer with the correct size pulled from UNICODE_STRING.Length
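Here is a minimal sketch of those steps in C#. It assumes a 64-bit process opening a 64-bit target, and the classic x64 offsets (PEB+0x20 for ProcessParameters, ProcessParameters+0x70 for CommandLine); as said above, none of this is guaranteed across Windows versions.

using System;
using System.Runtime.InteropServices;
using System.Text;

static class PebCommandLine
{
    // Using IntPtr for every field gives the correct x64 offsets.
    [StructLayout(LayoutKind.Sequential)]
    struct PROCESS_BASIC_INFORMATION
    {
        public IntPtr ExitStatus;
        public IntPtr PebBaseAddress;
        public IntPtr AffinityMask;
        public IntPtr BasePriority;
        public IntPtr UniqueProcessId;
        public IntPtr InheritedFromUniqueProcessId;
    }

    [DllImport("ntdll.dll")]
    static extern int NtQueryInformationProcess(IntPtr hProcess,
        int infoClass, ref PROCESS_BASIC_INFORMATION pbi, int cb,
        out int returnLength);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr addr,
        byte[] buffer, int size, out IntPtr read);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool WriteProcessMemory(IntPtr hProcess, IntPtr addr,
        byte[] buffer, int size, out IntPtr written);

    static IntPtr ReadPointer(IntPtr hProcess, IntPtr addr)
    {
        var buf = new byte[8]; // 64-bit target assumed
        ReadProcessMemory(hProcess, addr, buf, buf.Length, out _);
        return (IntPtr)BitConverter.ToInt64(buf, 0);
    }

    // hProcess must come from OpenProcess with read/write access.
    public static void OverwriteCommandLine(IntPtr hProcess, string newArgs)
    {
        var pbi = new PROCESS_BASIC_INFORMATION();
        NtQueryInformationProcess(hProcess, 0 /* ProcessBasicInformation */,
            ref pbi, Marshal.SizeOf(pbi), out _);

        // PEB + 0x20 -> RTL_USER_PROCESS_PARAMETERS* (x64 assumption)
        IntPtr procParams = ReadPointer(hProcess, pbi.PebBaseAddress + 0x20);

        // ProcessParameters + 0x70 -> UNICODE_STRING CommandLine:
        // { USHORT Length; USHORT MaximumLength; PWSTR Buffer; } (Buffer at +8)
        IntPtr cmdLine = procParams + 0x70;
        var header = new byte[4];
        ReadProcessMemory(hProcess, cmdLine, header, header.Length, out _);
        ushort maxBytes = BitConverter.ToUInt16(header, 2); // MaximumLength
        IntPtr buffer = ReadPointer(hProcess, cmdLine + 8);

        // Overwrite the existing buffer in place (must fit its capacity).
        byte[] data = Encoding.Unicode.GetBytes(newArgs + "\0");
        if (data.Length > maxBytes)
            throw new ArgumentException("New command line is too long.");
        WriteProcessMemory(hProcess, buffer, data, data.Length, out _);
    }
}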
There is a GlobalSign CA. In most cases its root certificate already exists in the Windows certificate store.
But sometimes (especially on old Windows versions) the store doesn't contain the certificate.
I need to check whether the certificate exists and import it if it does not. I exported the certificate to a file and imported it using the code below:
public void ImportCertificate(StoreName storeName,
                              StoreLocation location,
                              byte[] certificateData)
{
    X509Store x509Store = new X509Store(storeName, location);
    X509Certificate2 certificate = new X509Certificate2(certificateData);
    x509Store.Open(OpenFlags.ReadWrite);
    x509Store.Add(certificate);
    x509Store.Close();
}
The code adds the certificate, but all certificate purposes end up checked:
I don't want to add extra purposes to the certificate; I just want to set the ones that other root CAs have, like below:
How can I do this programmatically?
You need to use the CertSetCertificateContextProperty function to set store-attached properties.
In the dwPropId parameter you pass CERT_ENHKEY_USAGE_PROP_ID. You can find its numeric value in the Wincrypt.h C++ header file; in the given case, dwPropId is 9:
#define CERT_ENHKEY_USAGE_PROP_ID 9
In the dwFlags you pass zero (0).
In the pvData parameter (which is IntPtr in managed signature) you pass an unmanaged pointer to an ASN.1-encoded byte array that represents a collection of object identifiers where each OID represents explicitly enabled key usage.
Here is the interop signature:
[DllImport("Crypt32.dll", CharSet = CharSet.Auto, SetLastError = true)]
internal static extern Boolean CertSetCertificateContextProperty(
[In] IntPtr pCertContext,
[In] UInt32 dwPropId,
[In] UInt32 dwFlags,
[In] IntPtr pvData,
);
Add a using directive for the System.Runtime.InteropServices namespace.
Next, prepare a collection of key usages:
1. Create a new instance of the OidCollection class.
2. Add the required OIDs to the collection.
3. Use the created OID collection to instantiate the X509EnhancedKeyUsageExtension class. This ctor overload is fine: X509EnhancedKeyUsageExtension(OidCollection, Boolean).
The RawData property of the EKU extension will contain the ASN.1-encoded byte array to pass to the CertSetCertificateContextProperty function.
The last parameter of the CertSetCertificateContextProperty function is of IntPtr type and expects a pointer to an unmanaged memory block, so:
1. Use Marshal.AllocHGlobal(eku.RawData.Length) to allocate a properly sized buffer in unmanaged memory.
2. Use the Marshal.Copy(byte[], IntPtr, int, int) static method overload to copy the eku.RawData byte array to the unmanaged pointer acquired in step 1.
3. Call the CertSetCertificateContextProperty function. If it returns true, then everything is ok.
After finishing all the work, you must release the unmanaged resources to avoid a memory leak. Use the Marshal.FreeHGlobal(IntPtr) method to release the pointer acquired during the Marshal.AllocHGlobal call.
I lack the reputation to comment on other answers, but Crypt32's answer is actually incorrect in two places, so I'm posting a new answer based on Crypt32's:
1. pvData of CertSetCertificateContextProperty cannot accept X509EnhancedKeyUsageExtension.RawData directly; it needs to be wrapped in a CRYPT_INTEGER_BLOB struct.
2. The overload given for Marshal.Copy is incorrect; it should be Marshal.Copy(Byte[], Int32, IntPtr, Int32).
With this in mind, let's start from the top:
1. Make sure you obtain the X509Certificate2 you require (Certificate), probably from an X509Store that has been opened ReadWrite.
2. Import the interop signature below, and add a using directive for the System.Runtime.InteropServices namespace.
[DllImport("Crypt32.dll", CharSet = CharSet.Auto, SetLastError = true)]
internal static extern Boolean CertSetCertificateContextProperty(
[In] IntPtr pCertContext,
[In] UInt32 dwPropId,
[In] UInt32 dwFlags,
[In] IntPtr pvData,
);
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
public struct CRYPTOAPI_BLOB {
    public uint cbData;
    public IntPtr pbData;
}
3. Create an OidCollection instance.
4. Add the OIDs of the EKUs you desire to the OidCollection.
5. Create an X509EnhancedKeyUsageExtension instance (eku) using the OidCollection.
6. Use Marshal.AllocHGlobal(eku.RawData.Length) to allocate a properly sized buffer in unmanaged memory as pbData.
7. Use Marshal.AllocHGlobal with the size of the CRYPT_INTEGER_BLOB struct to allocate a properly sized buffer in unmanaged memory as pvData.
8. Create an instance of the CRYPT_INTEGER_BLOB struct, assigning pbData to its pbData field and the length, eku.RawData.Length, to cbData.
9. Use Marshal.StructureToPtr to copy the CRYPT_INTEGER_BLOB struct to pvData.
10. Call CertSetCertificateContextProperty with pCertContext as Certificate.Handle, dwPropId being CERT_ENHKEY_USAGE_PROP_ID (which is 9), dwFlags of 0, and pvData as pvData. It will return true if successful. It may throw a memory access exception if the certificate handle you passed in is read-only; make sure it comes from an X509Store opened ReadWrite.
11. Free the allocated unmanaged memory via Marshal.FreeHGlobal(pvData) and Marshal.FreeHGlobal(pbData).
12. Close any opened X509Store.
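Putting steps 1-12 together, a minimal sketch (the helper name and the example OID are mine, and error handling is kept to a minimum):

using System;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class CertEkuProperty
{
    [DllImport("Crypt32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool CertSetCertificateContextProperty(
        IntPtr pCertContext, uint dwPropId, uint dwFlags, IntPtr pvData);

    [StructLayout(LayoutKind.Sequential)]
    struct CRYPTOAPI_BLOB
    {
        public uint cbData;
        public IntPtr pbData;
    }

    const uint CERT_ENHKEY_USAGE_PROP_ID = 9;

    // e.g. SetEnabledUsages(cert, "1.3.6.1.5.5.7.3.1"); // Server Authentication
    public static void SetEnabledUsages(X509Certificate2 certificate, params string[] oids)
    {
        // Build the ASN.1-encoded EKU extension from the requested OIDs.
        var collection = new OidCollection();
        foreach (string oid in oids)
            collection.Add(new Oid(oid));
        var eku = new X509EnhancedKeyUsageExtension(collection, false);

        IntPtr pbData = IntPtr.Zero, pvData = IntPtr.Zero;
        try
        {
            // Copy the encoded bytes into unmanaged memory...
            pbData = Marshal.AllocHGlobal(eku.RawData.Length);
            Marshal.Copy(eku.RawData, 0, pbData, eku.RawData.Length);

            // ...and wrap them in a CRYPT_INTEGER_BLOB passed via pvData.
            var blob = new CRYPTOAPI_BLOB
            {
                cbData = (uint)eku.RawData.Length,
                pbData = pbData
            };
            pvData = Marshal.AllocHGlobal(Marshal.SizeOf(blob));
            Marshal.StructureToPtr(blob, pvData, false);

            if (!CertSetCertificateContextProperty(
                    certificate.Handle, CERT_ENHKEY_USAGE_PROP_ID, 0, pvData))
                throw new InvalidOperationException(
                    "CertSetCertificateContextProperty failed: " +
                    Marshal.GetLastWin32Error());
        }
        finally
        {
            if (pvData != IntPtr.Zero) Marshal.FreeHGlobal(pvData);
            if (pbData != IntPtr.Zero) Marshal.FreeHGlobal(pbData);
        }
    }
}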
Again, thanks to Crypt32 for this answer.
I'm finding that when pinvoking GetBinaryType from managed code, I'm getting the opposite result of calling GetBinaryType from native code on the same machine.
I've borrowed the marshalling declaration from elsewhere:
public enum BinaryType : uint
{
    SCS_32BIT_BINARY = 0, // A 32-bit Windows-based application
    SCS_64BIT_BINARY = 6, // A 64-bit Windows-based application
    SCS_DOS_BINARY = 1,   // An MS-DOS-based application
    SCS_OS216_BINARY = 5, // A 16-bit OS/2-based application
    SCS_PIF_BINARY = 3,   // A PIF file that executes an MS-DOS-based application
    SCS_POSIX_BINARY = 4, // A POSIX-based application
    SCS_WOW_BINARY = 2    // A 16-bit Windows-based application
}
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern bool GetBinaryType(
    string lpApplicationName,
    out BinaryType dwBinType
);
and then call the function as
bool is64bit = false;
BinaryType binType = BinaryType.SCS_32BIT_BINARY;
// Figure out if it's 32-bit or 64-bit binary
if (GetBinaryType(phpPath, out binType) &&
    binType == BinaryType.SCS_64BIT_BINARY)
{
    is64bit = true;
}
For 32-bit native binaries, GetBinaryType returns BinaryType.SCS_64BIT_BINARY (6), and for 64-bit native binaries, it returns BinaryType.SCS_32BIT_BINARY (0).
To verify, I wrote a native command line tool, and ran it against the same binaries.
PCWSTR rgBinTypes[] = {
    L"SCS_32BIT_BINARY", // 0
    L"SCS_DOS_BINARY",   // 1
    L"SCS_WOW_BINARY",   // 2
    L"SCS_PIF_BINARY",   // 3
    L"SCS_POSIX_BINARY", // 4
    L"SCS_OS216_BINARY", // 5
    L"SCS_64BIT_BINARY", // 6
};

int _tmain(int argc, _TCHAR* argv[])
{
    DWORD binType;
    if (argc < 2)
    {
        wprintf(L"Usage: %S <binary-path>\n", argv[0]);
        goto Cleanup;
    }
    if (!GetBinaryType(argv[1], &binType))
    {
        wprintf(L"Error: GetBinaryType failed: %d\n", GetLastError());
        goto Cleanup;
    }
    wprintf(L"Binary type: %d (%s)\n", binType, binType < 7 ? rgBinTypes[binType] : L"<unknown>");

Cleanup:
    return 0;
}
The command line tool correctly returns 0 (SCS_32BIT_BINARY) for 32-bit native binaries, and 6 (SCS_64BIT_BINARY) for 64-bit native binaries.
I found one reference to someone else having this same issue, but no answer was provided: https://social.msdn.microsoft.com/Forums/en-US/fc4c1cb4-399a-4636-b3c3-a3b48f0415f8/strange-behavior-of-getbinarytype-in-64bit-windows-server-2008?forum=netfx64bit
Has anyone else run into this issue?
I realize I could just flip the definitions in my Managed enum, but that seems awfully kludgy.
This is a WinAPI bug/developer's oversight. You may find this related question useful to read, and its top answer may help you find the appropriate workaround:
- Use a separate 64 bit process, and some IPC, to retrieve the information.
- Use WMI to get the module file name.
- Use QueryFullProcessImageName.
I ended up going for a completely different workaround. This answer about PE headers describes how the PE headers of 32 and 64 bit Windows executables differ. You can circumvent the WinAPI check completely and examine the target executable yourself, by reading it in binary mode and checking its PE header fields, as sketched below.
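Here is a minimal sketch of that check in C#: it reads the DOS header, follows e_lfanew to the PE signature, and inspects the Machine field of IMAGE_FILE_HEADER (constants are from the PE format specification).

using System;
using System.IO;

static class PeMachine
{
    // Returns true for x64 (0x8664), false for x86 (0x14c).
    public static bool Is64Bit(string path)
    {
        using (var fs = File.OpenRead(path))
        using (var br = new BinaryReader(fs))
        {
            if (br.ReadUInt16() != 0x5A4D)           // 'MZ' DOS signature
                throw new BadImageFormatException("Not an executable", path);
            fs.Seek(0x3C, SeekOrigin.Begin);          // e_lfanew
            fs.Seek(br.ReadUInt32(), SeekOrigin.Begin);
            if (br.ReadUInt32() != 0x00004550)        // 'PE\0\0' signature
                throw new BadImageFormatException("No PE signature", path);
            ushort machine = br.ReadUInt16();         // IMAGE_FILE_HEADER.Machine
            return machine == 0x8664;                 // IMAGE_FILE_MACHINE_AMD64
        }
    }
}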
Sadly, there isn't much info on the problem online. I remember seeing this problem on some forum, where it was clearly listed as a bug, but that was about ~10 years ago. I hope that as we discuss this problem, more people become aware of it.
I have a problem accessing a COM object from a 64-bit COM server written in C++ from a .NET project (the 32-bit version works well). It is a problem similar to the one described here: Troubleshooting an x64 com interop marshaling issue. I have a COM method that takes as a parameter an array of structures containing longs and BSTRs. When the call returns, everything works OK if the call was made from a native module, but when it was made from a managed (C#) assembly, I get an access violation. If the strings are not populated in the struct, then there is no exception.
The proxy/stub file starts with the following:
32-bit
/* File created by MIDL compiler version 7.00.0500 */
/* at Thu Sep 22 17:52:25 2011
*/
/* Compiler settings for .\RAC.idl:
Oicf, W1, Zp8, env=Win32 (32b run)
protocol : dce , ms_ext, c_ext, robust
error checks: allocation ref bounds_check enum stub_data
VC __declspec() decoration level:
__declspec(uuid()), __declspec(selectany), __declspec(novtable)
DECLSPEC_UUID(), MIDL_INTERFACE()
*/
//##MIDL_FILE_HEADING( )
#if !defined(_M_IA64) && !defined(_M_AMD64)
64-bit
/* File created by MIDL compiler version 7.00.0500 */
/* at Thu Sep 22 17:58:46 2011
*/
/* Compiler settings for .\RAC.idl:
Oicf, W1, Zp8, env=Win64 (32b run)
protocol : dce , ms_ext, c_ext, robust
error checks: allocation ref bounds_check enum stub_data
VC __declspec() decoration level:
__declspec(uuid()), __declspec(selectany), __declspec(novtable)
DECLSPEC_UUID(), MIDL_INTERFACE()
*/
//##MIDL_FILE_HEADING( )
#if defined(_M_AMD64)
I tried with both the 32-bit and 64-bit versions of midl.exe from Windows SDK v7.0A, but they generate the very same output. So the suggestion from the other thread didn't help. Any other ideas?
UPDATE:
The struct looks like this (I changed the names, the rest is identical):
[uuid(6F13C84D-0E01-48cd-BFD4-F7071A32B49F)] struct S
{
    long a;
    BSTR b;
    long c;
    BSTR d;
    long e;
    BSTR f;
    BSTR g;
    BSTR h;
    BSTR i;
    long j;
    BSTR k;
    long l;
    BSTR m;
    long n;
};
The method signature looks like this:
[id(54)] HRESULT GetListOfStructs(SAFEARRAY(struct S)* arrRes);
I actually have several such structs and methods like this. Obviously, all of them have the same problem.
I'm using FindMimeFromData from urlmon.dll for sniffing uploaded files' MIME type. According to MIME Type Detection in Internet Explorer, image/tiff is one of the recognized MIME types. It works fine on my development machine (Windows 7 64bit, IE9), but doesn't work on the test env (Windows Server 2003 R2 64bit, IE8) - it returns application/octet-stream instead of image/tiff.
The above article describes the exact steps taken to determine the MIME type, but since image/tiff is one of the 26 recognized types, detection should end at step 2 (sniffing the actual data), so file extensions and registered applications (and other registry stuff) shouldn't matter.
Oh, and by the way, TIFF files actually are associated with a program (Windows Picture and Fax Viewer) on the test server, so it's not that any reference to TIFF is absent from the Windows registry.
Any ideas why it doesn't work as expected?
EDIT: FindMimeFromData is used like this:
public class MimeUtil
{
    [DllImport("urlmon.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = false)]
    private static extern int FindMimeFromData(
        IntPtr pBC,
        [MarshalAs(UnmanagedType.LPWStr)] string pwzUrl,
        [MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1, SizeParamIndex = 3)] byte[] pBuffer,
        int cbSize,
        [MarshalAs(UnmanagedType.LPWStr)] string pwzMimeProposed,
        int dwMimeFlags,
        out IntPtr ppwzMimeOut,
        int dwReserved);

    public static string GetMimeFromData(byte[] data)
    {
        IntPtr mimetype = IntPtr.Zero;
        try
        {
            const int flags = 0x20; // FMFD_RETURNUPDATEDIMGMIMES
            int res = FindMimeFromData(IntPtr.Zero, null, data, data.Length, null, flags, out mimetype, 0);
            switch (res)
            {
                case 0:
                    string mime = Marshal.PtrToStringUni(mimetype);
                    return mime;
                // snip - error handling
                // ...
                default:
                    throw new Exception("Unexpected HRESULT " + res + " returned by FindMimeFromData (in urlmon.dll)");
            }
        }
        finally
        {
            if (mimetype != IntPtr.Zero)
                Marshal.FreeCoTaskMem(mimetype);
        }
    }
}
which is then called like this:
protected void uploader_FileUploaded(object sender, FileUploadedEventArgs e)
{
    int bsize = Math.Min(e.File.ContentLength, 256);
    byte[] buffer = new byte[bsize];
    int nbytes = e.File.InputStream.Read(buffer, 0, bsize);
    if (nbytes > 0)
    {
        string mime = MimeUtil.GetMimeFromData(buffer);
        // ...
    }
}
I was unable to reproduce your problem; however, I did some research on the subject. I believe that it is as you suspect: the problem is with step 2 of MIME Type Detection, because the hard-coded tests in urlmon.dll v9 differ from those in urlmon.dll v8.
The Wikipedia article on TIFF shows how complex the format is and that it has been a problem from the very beginning:
When TIFF was introduced, its extensibility provoked compatibility problems. The flexibility in encoding gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats.
The TIFF Compression Tag section clearly shows many rare compression schemes that, I suspect, were omitted when the urlmon.dll hard-coded tests were created in earlier versions of IE.
So, what can be done to solve this problem? I can think of three solutions; however, each of them brings a different kind of new problem along:
1. Update IE on the test machine to version 9.
2. Apply the latest IE 8 updates on the test machine. Modified versions of urlmon.dll are introduced frequently (e.g. KB974455), and one of them may contain the updated MIME hard-coded tests.
3. Distribute your own copy of urlmon.dll with your application.
It seems that solutions 1 and 2 are the ones you should choose from. There may be a problem, however, with the production environment: in my experience, the administrators of a production environment often refuse to install updates, for many reasons. It may be harder to convince an admin to update IE to v9 and easier to have an IE8 KB update installed (as they are supposed to be, but we all know how it is). If you're in control of the production environment, I think you should go with solution 1.
The 3rd solution introduces two problems:
- legal: it may be against Microsoft's policies to distribute your own copy of urlmon.dll
- coding: you have to load the DLL dynamically to call the FindMimeFromData function (see the sketch below), or at least customize your app's manifest file because of the Dynamic-Link Library Search Order. I assume you are aware that it is a very bad idea to just manually copy a newer version of urlmon.dll into the system folder, as other apps would most likely crash using it.
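For completeness, here is a hedged sketch of the dynamic-loading route; the path to the private urlmon.dll copy is up to you:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class DynamicUrlmon
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    // Same shape as the static pinvoke above, expressed as a delegate so it
    // can be bound to an explicitly loaded module.
    [UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Unicode)]
    delegate int FindMimeFromDataFn(
        IntPtr pBC, string pwzUrl,
        [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 3)] byte[] pBuffer,
        int cbSize, string pwzMimeProposed, int dwMimeFlags,
        out IntPtr ppwzMimeOut, int dwReserved);

    public static string GetMime(string urlmonPath, byte[] data)
    {
        IntPtr module = LoadLibrary(urlmonPath); // e.g. a private copy shipped with the app
        if (module == IntPtr.Zero)
            throw new Win32Exception();
        IntPtr proc = GetProcAddress(module, "FindMimeFromData");
        if (proc == IntPtr.Zero)
            throw new Win32Exception();
        var find = (FindMimeFromDataFn)Marshal.GetDelegateForFunctionPointer(
            proc, typeof(FindMimeFromDataFn));

        IntPtr mime = IntPtr.Zero;
        try
        {
            int hr = find(IntPtr.Zero, null, data, data.Length, null, 0, out mime, 0);
            if (hr != 0)
                throw new Exception("FindMimeFromData failed: HRESULT " + hr);
            return Marshal.PtrToStringUni(mime);
        }
        finally
        {
            if (mime != IntPtr.Zero)
                Marshal.FreeCoTaskMem(mime);
        }
    }
}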
Anyway, good luck with solving your urlmon riddle.