I am writing an SMO application that copies the schema of one database into another.
For this purpose I use the Transfer class available in the SmoExtended.dll library.
The sample code is:
Database sourceDatabase = sqlServer.Databases[ASourceName];
Transfer t = new Transfer(sourceDatabase);
t.CopyAllObjects = true;
t.CopyData = false;
t.Options.DriAll = true;
t.Options.Triggers = true;
t.DestinationDatabase = ADestinationName;
t.DestinationServer = sqlServer.Name;
t.DestinationLogin = sqlServer.ConnectionContext.Login;
t.DestinationPassword = sqlServer.ConnectionContext.Password;
Database destinationDatabase = new Database(sqlServer, ADestinationName);
destinationDatabase.CompatibilityLevel = CompatibilityLevel.Version100;
destinationDatabase.Create();
t.TransferData();
destinationDatabase.AutoClose = false;
destinationDatabase.Alter();
I get the error "Version100 database compatibility level is not supported".
I am using SMO libraries from the version 100 assembly folder.
The database I am copying and the destination database are both on the same server instance.
The server is SQL Server 2012.
I have tried all the available compatibility levels (80, 90, 100, 110); none of them works. I get the error every time.
HOWEVER, if I use the version 110 SMO libraries, everything works as expected: the database is created and the schema is copied.
BUT there is a reason I can't use the version 110 libs: our clients are using SQL Server 2008 R2, which is v100, and machines with 2008 R2 do not have the version 110 SDKs.
PS: The source database compatibility level is set to 100.
Any idea how I can use the v100 libs for both 2008 R2 and 2012, and possibly 2014 (the last version supporting v100)?
SOLUTION 1: (Easy, very little work)
As suggested by Ben Thul, one could use the libraries of a single version (e.g. 110) and ship them with the app, or have clients install the SQL Server library packs.
I have tested this approach with the SQL Server 2014 libraries. To achieve the goal, I had to download:
1. Microsoft SQL Server System CLR Types (SQLSysClrTypes.msi)
2. Shared Management Objects (SharedManagementObjects.msi)
I chose the x86 libraries, which are 9.3 MB in total. These two packages are enough to get all the SMO libs required.
SOLUTION 2: (Harder but still quite easy, much more work)
I have come up with an idea which is a bit harder to implement than solution 1, but it works.
The idea is to create a separate DLL file for each version of SQL Server and dynamically load those DLLs.
The steps are as follows:
Install or download the Unmanaged Exports library.
Create a new DLL project for a single targeted SQL Server version.
e.g. 2008.dll, 2012.dll...
Inside each of the DLLs I reference the corresponding SMO libraries, e.g. in 2008.dll I reference DLLs from ..\100\SDK\Assembly, for SQL Server 2012 I would do 2012.dll and reference ..\110\SDK\Assembly, and so on.
Export the function(s) that I need to call from my main application:
[DllExport("Copy", System.Runtime.InteropServices.CallingConvention.StdCall)]
public static void Copy(string AInstance, string AUser, string APass, string ASourceName, string ADestinationName)
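For illustration, a hedged sketch of what the body of that exported function might look like; it simply wraps the Transfer code from the question with whichever SMO version this particular DLL references (the ServerConnection overload and the reuse of AUser/APass for the destination login are assumptions):
public static void Copy(string AInstance, string AUser, string APass, string ASourceName, string ADestinationName)
{
    // Connect with the credentials handed over by the host application
    ServerConnection connection = new ServerConnection(AInstance, AUser, APass);
    Server sqlServer = new Server(connection);
    Database sourceDatabase = sqlServer.Databases[ASourceName];

    // Same Transfer setup as in the question
    Transfer t = new Transfer(sourceDatabase);
    t.CopyAllObjects = true;
    t.CopyData = false;
    t.Options.DriAll = true;
    t.Options.Triggers = true;
    t.DestinationDatabase = ADestinationName;
    t.DestinationServer = sqlServer.Name;
    t.DestinationLogin = AUser;
    t.DestinationPassword = APass;

    Database destinationDatabase = new Database(sqlServer, ADestinationName);
    destinationDatabase.Create();
    t.TransferData();
}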
Define functions for loading and unloading DLLs
[DllImport("kernel32.dll")]
public static extern IntPtr LoadLibrary(string dllToLoad);
[DllImport("kernel32.dll")]
public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);
[DllImport("kernel32.dll")]
public static extern bool FreeLibrary(IntPtr hModule);
Define function delegate to be used.
[UnmanagedFunctionPointer(CallingConvention.StdCall)]
private delegate bool CopyDB(string AInstance, string AUser, string APass, string ASourceName, string ADestinationName);
Load, run and unload depending on a SQL Server version
if ( Server.Version.Major == 10 ) {
IntPtr lib8 = LoadLibrary("2008.dll");
IntPtr workFunc = GetProcAddress(lib8, "Copy");
CopyDB dbc = ( CopyDB )Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
FreeLibrary(lib8);
} else if ( Server.Version.Major == 11 ) {
IntPtr lib12 = LoadLibrary("2012.dll");
IntPtr workFunc = GetProcAddress(lib12, "Copy");
CopyDB dbc = ( CopyDB )Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
FreeLibrary(lib12);
} else .....
The last step can be generalised even more. We could skip the version check if we were to build the DLLs with names like <version>.dll, e.g. 10.dll, 11.dll, 12.dll. This way the if .. else .. is no longer needed and the name of the library being loaded is generated at runtime from the value of Server.Version.Major.
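A hedged sketch of that generalisation, assuming the version-specific DLLs sit next to the executable and are named 10.dll, 11.dll, 12.dll and so on:
// Build the wrapper name from the major version reported by the server,
// e.g. 10.dll for 2008/2008 R2, 11.dll for 2012, 12.dll for 2014
string libName = Server.Version.Major + ".dll";
IntPtr lib = LoadLibrary(libName);
if (lib == IntPtr.Zero)
    throw new FileNotFoundException("No SMO wrapper DLL found for this SQL Server version", libName);

IntPtr workFunc = GetProcAddress(lib, "Copy");
CopyDB dbc = (CopyDB)Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
FreeLibrary(lib);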
The positive side of this approach is that these targeted DLL files only take ~7 KB (for my scenario at least), which is much less than the CLR and SMO .msi library packs. It also guarantees that supported SMO libraries will be used. Furthermore, one can distribute a single DLL to the clients once a new version of SQL Server is released, and the app will happily use it for the newly installed instance.
I'm trying to get Firebird Embedded 2.5 (64-bit) on Linux working with the Firebird .NET provider (FirebirdSql.Data.FirebirdClient).
The FB Embedded setup for my test assembly is working on WinX86_64 with the Windows Firebird Embedded version.
On Linux I use the corresponding FB Embedded Linux version and placed these files in the assembly directory:
libfbembed.so*
firebird.msg
security2.fdb
libicu*
libib*
Set the "RootDirectory" to assembly directory in the firebird.conf.
Set the shell environment variables LD_LIBRARY_PATH and FIREBIRD to to assembly directory.
FbConnectionStringBuilder conn = new FbConnectionStringBuilder();
conn.Database = @"/home/dev/firebirdTest/1stDB.FDB";
conn.ServerType = FbServerType.Embedded;
conn.UserID = "SYSDBA";
conn.Password = "masterkey";
conn.Charset = "UTF8";
conn.DataSource = "localhost";
conn.ClientLibrary = "libfbembed.so";
string connStr = conn.ConnectionString;
var dbcon = new FbConnection(connStr);
FbConnection.CreateDatabase(connStr, pageSize: 8192, forcedWrites: true, overwrite: false);
dbcon = new FbConnection(connStr);
dbcon.Open();
What I did before:
Redirecting the Firebird client library via mono dllmap doesn't work. Solved by explicitly setting the client library in C# code.
Manually creating a database with isql on Linux works.
Creating a database by code on Linux works.
In debug mode the Firebird .NET provider creates FB_{sanitizedName}.dll and DynamicAssembly.dll.
The .NET provider is really silent. Debugging was done by starting the assembly with "strace mono {testAssembly.exe}" on Linux.
FbConnection.CreateDatabase crashes with an I/O error during "open O_CREAT" (calling FbCreateDatabase) if the page size is not 8192. Setting the page size explicitly to 8192 solves this.
Now I run into the following error (and have been stuck here for days...):
Opening an existing database file (as in the code above) crashes with:
FirebirdSql.Data.FirebirdClient.FbException: invalid database handle (no active connection) ---> invalid database handle (no active connection)
What's going wrong?
I'm stuck with this error too.
FirebirdSql.Data.FirebirdClient.FbException: invalid database handle (no active connection)
Tried with FB 2.5.* and 3.0.0; the results are the same.
Also tried using debug builds of FB. The logs did not help.
Maybe somebody here knows what the problem is?
It's been 3 years since the original post, and I've run into the same problem. I don't have an "answer" but I do have an explanation. It appears the marshalling of SafeHandle objects isn't fully implemented in Mono. From their documentation on SafeHandle: "Notice that “ref SafeHandles” pass a pointer to a slot containing zero to P/Invoked methods, and on return a new SafeHandle is created with the returned value. “ref SafeHandles” do not actually get the original SafeHandle.handle value."
If you look into the FirebirdClient source, in IFbClient, you'll see the P/Invoke declarations look like:
IntPtr isc_detach_database(
[In, Out] IntPtr[] statusVector,
[MarshalAs(UnmanagedType.I4)] ref DatabaseHandle dbHandle);
DatabaseHandle is derived from SafeHandle, so the second argument is a "ref SafeHandle" argument and suffers from the problem noted above: it will basically pass in a zero rather than the actual handle value.
There is no fix other than to (a) improve the SafeHandle implementation in Mono, or (b) rewrite FirebirdClient to avoid the use of SafeHandles.
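For illustration only, a hedged sketch of what option (b) might look like: replace the ref SafeHandle parameter with the raw 32-bit Firebird handle so the actual handle value is marshalled (the real IFbClient signature, return type and marshalling details may differ):
// Hypothetical declaration avoiding SafeHandle marshalling: the Firebird
// database handle is passed as a plain 32-bit value by reference instead
IntPtr isc_detach_database(
    [In, Out] IntPtr[] statusVector,
    [MarshalAs(UnmanagedType.I4)] ref int dbHandle);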
I'm having problems with the VISA COM libraries when communicating with a Keysight (N6700B) power supply.
I have some C# code I am compiling in Visual Studio 2015, and it does not work. However, if I compile the same code in Visual Studio 2012, it works.
Basically I am just doing simple communication with the device:
using Ivi.Visa.Interop;
//...
string address = "USB0::2391::2311::MY54002380::0::INSTR";
ResourceManager rm = new ResourceManager();
FormattedIO488 myDmm = new FormattedIO488();
myDmm.IO = (IMessage)rm.Open(address, AccessMode.NO_LOCK, 2000, "");
myDmm.WriteString("*RST"); // reset the device
myDmm.WriteString("*IDN?"); // request the IDN string;
string IDN = myDmm.ReadString(); // This is where it fails, returning: "VI_ERROR_TMO: A timeout occurred"
Also, the power supply has an error state of: "Error -420, Query UNTERMINATED"
The code does not work with VS2015, but it DOES work with VS2012.
(In VS2012 I get no errors at all.)
I have tried downloading the latest drivers from Keysight, and it still does not work (www.keysight.com/find/iosuitedownload).
Does anyone have any idea why it would be breaking with VS2015 but work with VS2012?
I've looked up "Query Unterminated" and some say that it could be a missing termination character "\n". I've tried adding "\n" to both WriteStrings, and it still fails.
EDIT: I have also now tried using (in various places):
myDmm.IO.TerminationCharacterEnabled = true; // and = false
myDmm.FlushWrite(); // also tried passing in "true" (default is 'false')
I also tried adding the:
myDmm.IO.TerminationCharacter
to the WriteStrings.
http://download.ni.com/support/softlib//visa/NI-VISA/15.0/Windows/readme.html
Microsoft Visual Studio Support
The following table lists the programming languages and Microsoft Visual Studio versions supported by this version of NI-VISA.
Earlier versions of NI-VISA support other application software and language versions. For more information on Visual Studio compatibility with earlier versions of VISA, refer to ni.com/info and enter the info code NETlegacydrivers. To find and download an earlier version of a driver, refer to ni.com/downloads.
Visual Studio versions supported by NI-VISA:
Visual C++ MFC ------------------------------------------------ 2008
.NET Framework 3.5 Languages (Visual C# and Visual Basic .NET) - 2008
.NET Framework 4.0 Languages (Visual C# and Visual Basic .NET) - 2010
.NET Framework 4.5 Languages (Visual C# and Visual Basic .NET) - 2012
So apparently the drivers don't work with VS2015... (not sure how a newer version doesn't work.. but okay)
EDIT, FOUND ANSWER
Someone from NI-VISA told me to just add "true" as a second parameter:
myDmm.WriteString("*RST",true); // reset the device
myDmm.WriteString("*IDN?",true); // request the IDN string;
string IDN = myDmm.ReadString(); // now it works.
I'm not sure why "true" wasn't needed in 2012, and why it is needed in 2015... oh well.
I can confirm: the WriteString() method (and not only this one) needs the bool parameter flushAndEND = true (the default value is false) in order to complete the command. Without it, the instrument can't parse the instruction. If you send multiple commands, the instrument can't separate and identify them.
I suggest using the Keysight IO Monitor to sniff the communication between your instruments and your controller (PC?). It's a utility of the IO Libraries Suite (v17.1).
For details, see the attached IFormattedIO488 interface definition, belonging to the Ivi.Visa.Interop reference.
using System.Runtime.InteropServices;
namespace Ivi.Visa.Interop
{
[...]
public interface IFormattedIO488
{
[DispId(1610678274)]
bool InstrumentBigEndian { get; set; }
[DispId(1610678272)]
IMessage IO { get; set; }
void FlushRead();
void FlushWrite(bool sendEND = false);
dynamic ReadIEEEBlock(IEEEBinaryType type, bool seekToBlock = false, bool flushToEND = false);
dynamic ReadList(IEEEASCIIType type = IEEEASCIIType.ASCIIType_Any, string listSeperator = ",;");
dynamic ReadNumber(IEEEASCIIType type = IEEEASCIIType.ASCIIType_Any, bool flushToEND = false);
string ReadString();
void SetBufferSize(BufferMask mask, int size);
void WriteIEEEBlock(string Command, object data, bool flushAndEND = false);
void WriteList(ref object data, IEEEASCIIType type = IEEEASCIIType.ASCIIType_Any, string listSeperator = ",", bool flushAndEND = false);
void WriteNumber(object data, IEEEASCIIType type = IEEEASCIIType.ASCIIType_Any, bool flushAndEND = false);
void WriteString(string data, bool flushAndEND = false);
}
}
Have you tried the ReadBytes method? This method reads a fixed number of bytes from the device. The error you've encountered most probably occurs because the VISA driver tries to read data until a termination character, which you never explicitly set, is received.
Try setting the TerminationCharacter property to either \n or \r (depending on the instrument) and it should work. Furthermore, you might want to add it to the commands you send as well, so the instrument won't bother you with this error (-420) anymore.
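A minimal sketch of that suggestion against the same Ivi.Visa.Interop session as in the question (whether the instrument expects \n or \r is an assumption you need to verify):
myDmm.IO.TerminationCharacter = 0x0A;        // '\n'; use 0x0D if the instrument expects '\r'
myDmm.IO.TerminationCharacterEnabled = true; // make reads stop at the termination character
myDmm.WriteString("*IDN?\n", true);          // terminate the command itself as well
string idn = myDmm.ReadString();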
I am looking for a way to reliably detect, when I boot into WinPE 4 (PowerShell) (or WinPE 3 (VBS) as an alternative), whether I have booted on a UEFI or BIOS system (without running a third-party exe, as I am in a restricted environment).
This significantly changes how I would partition a Windows deployment, as the partition layout and format change (GPT vs. MBR, etc.).
I have a working solution that is an adaptation of this C++ code in PowerShell v3, but it feels pretty hack-ish:
## Check if we can get a dummy flag from the UEFI via the kernel
## [Bool] check the result of the kernel's fetch of the dummy GUID from UEFI
## The only way I found to do it was compiling C# from PowerShell (Add-Type)
Function Compile-UEFIDetectionClass{
$win32UEFICode= @'
using System;
using System.Runtime.InteropServices;
public class UEFI
{
[DllImport("kernel32.dll")]
public static extern UInt32 GetFirmwareEnvironmentVariableA([MarshalAs(UnmanagedType.LPWStr)] string lpName, [MarshalAs(UnmanagedType.LPWStr)] string lpGuid, IntPtr pBuffer, UInt32 nSize);
public static UInt32 Detect()
{
return GetFirmwareEnvironmentVariableA("", "{00000000-0000-0000-0000-000000000000}", IntPtr.Zero, 0);
}
}
'@
Add-Type $win32UEFICode
}
## A helper function just to check whether the assembly for the
## UEFI class defined above has been loaded.
Function Check-IsUEFIClassLoaded{
return ([System.AppDomain]::CurrentDomain.GetAssemblies() | % { $_.GetTypes()} | ? {$_.FullName -eq "UEFI"}).Count
}
## Just in case someone calls my code without running the compile step first
If (!(Check-IsUEFIClassLoaded)){
Compile-UEFIDetectionClass
}
## The meat of the checking.
## Returns 0 or 1 ([BOOL] if UEFI or not)
Function Get-UEFI{
return [UEFI]::Detect()
}
This seems pretty over the top just to get a simple flag.
Does anyone know if there is a better way to get this done?
It is no less hacky, in the sense that it will still require interop from PowerShell, but the interop code might be neater if you use (or can call) GetFirmwareType().
This returns a FIRMWARE_TYPE enumeration, documented here. I can't believe that, given both functions were introduced in Windows 8 and are exported by kernel32.dll, Microsoft's own documentation points you at "using a dummy variable"!
Internally, GetFirmwareType calls NtQuerySystemInformation. I will dig into what it is doing, but I do not think it is necessarily going to be enlightening.
Unfortunately, this only works for PE 4 (Windows 8), since these functions were only added then.
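For reference, a minimal sketch of that interop, assuming it is embedded via Add-Type the same way as the class in the question (GetFirmwareType is only available on Windows 8 / WinPE 4 and later):
using System;
using System.Runtime.InteropServices;

public class Firmware
{
    // FIRMWARE_TYPE enumeration from winnt.h
    public enum FIRMWARE_TYPE { Unknown = 0, Bios = 1, Uefi = 2, Max = 3 }

    // Exported by kernel32.dll since Windows 8
    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool GetFirmwareType(out FIRMWARE_TYPE firmwareType);

    // Returns true when the machine booted via UEFI
    public static bool IsUefi()
    {
        FIRMWARE_TYPE type;
        return GetFirmwareType(out type) && type == FIRMWARE_TYPE.Uefi;
    }
}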
The easiest way by far is to run in PowerShell:
$(Get-ComputerInfo).BiosFirmwareType
This may be a little late, but if one knows they are running in WinPE, the following code should work:
$isuefi = (Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control).PEFirmwareType -eq 2
$env:firmware_type
Not sure since which version this is supported.
It returns UEFI and Legacy in my tests.
However, this is on a full installation; I have not confirmed its existence in WinPE.
It looks like the PE environment has a folder that is specific to it. In addition, the %TargetDir% variable is described here: TARGETDIR property.
Lastly, you could check if you are running from X:. There should also be a folder that has the boot.wim image you can check for. I believe the path would be X:\Sources\Boot.wim, but double-check.
if ( Test-Path "%TargetDir%\Windows\wpeprofiles" ) {
Write-host "You're in Windows PE"
}
I don't know if this will help (it's based on a C# solution), but:
Win32_DiskPartition has the properties "Bootable" (bool), "BootPartition" (bool), and "Type" (string). For my UEFI system, "Type" comes back as the string "GPT: System".
Now, for all Win32_DiskPartitions that are bootable, are a boot partition, and have the specified type, determine if any of them are internal.
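A hedged sketch of that check using System.Management; the property names come from the Win32_DiskPartition class, and the "GPT: System" string is simply what my UEFI system reports, so it may vary:
using System;
using System.Management;

class PartitionCheck
{
    static void Main()
    {
        // Inspect the partition type of every bootable boot partition
        var searcher = new ManagementObjectSearcher(
            "SELECT Type FROM Win32_DiskPartition WHERE Bootable = TRUE AND BootPartition = TRUE");

        foreach (ManagementObject partition in searcher.Get())
        {
            string type = (string)partition["Type"];
            Console.WriteLine("Boot partition type: " + type);
            // A GPT type suggests UEFI; MBR types suggest BIOS
            Console.WriteLine(type.StartsWith("GPT") ? "Likely UEFI/GPT" : "Likely BIOS/MBR");
        }
    }
}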
Hope this helps.
I am using Tamas Szekeres' builds of GDAL, including the C# bindings, in a desktop GIS application using C# and .NET 4.0.
I am including the entire GDAL distribution in a sub-directory of my executable with the following folder structure:
\Plugins\GDAL
\Plugins\GDAL\gdal
\Plugins\GDAL\gdal-data
\Plugins\GDAL\proj
We are using EPSG:4326, and the software is built with a 32-bit target since the GDAL C# API uses p/invoke into the 32-bit libraries (we could try 64-bit since Tamas provides these builds too, but I haven't gotten around to it yet).
When I run my application I get a system error dialog. This kind of error typically happens when software tries to access a device that is no longer attached, such as a removable drive. It is not possible to "catch" this exception because it pops up a system dialog.
After dismissing the dialog using any of the buttons, the software continues to execute as designed.
The error occurs the first time I call the following method
OSGeo.OSR.CoordinateTransformation.TransformPoint(double[] inout);
The strange stuff:
The error occurs on one, and only one computer (so far)
I've run this software in several other computers both 32 and 64 bit without problems
The error does not occur on the first run after compiling the GDAL shim library I am using; it only occurs on each subsequent run
it happens regardless of release, or debug builds
it happens regardless of whether the debugger is attached or not
it happens regardless of whether I turn on or off Gdal.UseExceptions or Osr.UseExceptions();
disabling removable drives causes the bug to disappear. This is not what I consider a real solution as I will not be able to ask a customer to do this.
I have tried the following:
catching the error
changing GDAL directories and environment settings
changing computers and operating systems: this worked
used SysInternals ProcMon to trace what files are being opened with no luck, they all appear to be files that exist
I re-built the computer in question when the hard drive failed, to no avail.
"cleaning" the registry using CCleaner
files in GDAL Directory are unchanged on execution
Assumptions
Error is happening in unmanaged code
During GDAL initialization, some path is referring to a drive on the computer that is no longer attached.
I am also working on the assumption this is limited to a computer configuration error
Configuration
Windows 7 Pro
Intel Core i7 920 @ 2.67 GHz
12.0 GB RAM
64-bit OS
Drive C: 120 GB SSD with OS, development (Visual Studio 10), etc
Drive D: 1 TB WD 10,000k with data, not being accessed for data.
The Question
I either need a direction to trap the error, or a tool or technique that will allow me to figure out what is causing it. I don't want to release the software with the possibility that some systems will have this behaviour.
I have no experience with this library, but perhaps some fresh eyes might give you a brainwave...
Firstly, WELL WRITTEN QUESTION! Obviously this problem really has you stumped...
Your note about the error not occurring after a rebuild screams out: Does this library generate some kind of state file, in its binary directory, after it runs?
If so, it is possible that it is saving incorrect path information into that 'configuration' file, in a misguided attempt to accelerate its next start-up.
Perhaps scan this directory for changes between a 'fresh build' and 'first run'?
At the very least you might find a file you can clean up on shut-down to avoid this alert...
HTH
Maybe you can try this:
Run diskmgmt.msc
Change the drive letter for Disk 2 (right-click), if my assumption that Disk 2 is a removable disk is true
Run your application
If this removes the error, something in the application is referring to the old drive letter
It could be in the p/invoked libs
Maybe see http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46501. It talks about gcc somehow compiling a drive letter into a binary.
+1 Great question, but "It is not possible to 'catch'" is not entirely true.
It's one of those awful solutions that will turn up on DailyWTF in 5 years, but for now it is stored here: http://www.pinvoke.net/default.aspx/user32.senddlgitemmessage
using System;
using System.Runtime.InteropServices;
using System.Text;
using System.Text.RegularExpressions;
using System.Windows.Forms;
using Microsoft.VisualBasic; //this reference is for the Constants.vbNo;
public partial class Form1 : Form
{
[DllImport("user32.dll")]
static extern IntPtr SendDlgItemMessage(IntPtr hDlg, int nIDDlgItem, uint Msg, UIntPtr wParam, IntPtr lParam);
[DllImport("user32.dll", SetLastError = true)]
public static extern IntPtr SetActiveWindow(IntPtr hWnd);
// For Windows Mobile, replace user32.dll with coredll.dll
[DllImport("user32.dll", SetLastError = true)]
static extern IntPtr FindWindow(string lpClassName, string lpWindowName);
// Find window by Caption only. Note you must pass IntPtr.Zero as the first parameter.
[DllImport("user32.dll", EntryPoint = "FindWindow", SetLastError = true)]
static extern IntPtr FindWindowByCaption(IntPtr ZeroOnly, string lpWindowName);
[DllImport("user32.dll", SetLastError = true)]
static extern uint GetDlgItemText(IntPtr hDlg, int nIDDlgItem,[Out] StringBuilder lpString, int nMaxCount);
public void ClickSaveBoxNoButton()
{
//In this example, we've opened a Notepad instance, entered some text, and clicked the 'X' to close Notepad.
//Of course we received the 'Do you want to save...' message, and we left it sitting there. Now on to the code...
//
//Note: this example also uses API calls to FindWindow, GetDlgItemText, and SetActiveWindow.
// You'll have to find those separately.
//Find the dialog box (no need to find a "parent" first)
//classname is #32770 (dialog box), dialog box title is Notepad
IntPtr theDialogBoxHandle; // = null;
string theDialogBoxClassName = "#32770";
string theDialogBoxTitle = "Notepad";
int theDialogItemId = Convert.ToInt32("0xFFFF", 16);
StringBuilder theDialogTextHolder = new StringBuilder(1000);
//hardcoding capacity - represents maximum text length
string theDialogText = string.Empty;
string textToLookFor = "Do you want to save changes to Untitled?";
bool isChangeMessage = false;
IntPtr theNoButtonHandle; // = null;
int theNoButtonItemId = (int)Constants.vbNo;
//actual Item ID = 7
uint theClickMessage = Convert.ToUInt32("0x00F5", 16);
//= BM_CLICK value
uint wParam = 0;
uint lParam = 0;
//Get a dialog box described by the specified info
theDialogBoxHandle = FindWindow(theDialogBoxClassName, theDialogBoxTitle);
//a matching dialog box was found, so continue
if (theDialogBoxHandle != IntPtr.Zero)
{
//then get the text
GetDlgItemText(theDialogBoxHandle, theDialogItemId, theDialogTextHolder, theDialogTextHolder.Capacity);
theDialogText = theDialogTextHolder.ToString();
}
//Make sure it's the right dialog box, based on the text we got.
isChangeMessage = Regex.IsMatch(theDialogText, textToLookFor);
if ((isChangeMessage))
{
//Set the dialog box as the active window
SetActiveWindow(theDialogBoxHandle);
//And, click the No button
SendDlgItemMessage(theDialogBoxHandle, theNoButtonItemId, theClickMessage, (System.UIntPtr)wParam, (System.IntPtr)lParam);
}
}
It turns out there was no way to definitively answer this question.
I ended up "solving" the problem by figuring out that there was some hardware registered on the system that wasn't present. It is still a mystery to me why, after several years, only GDAL managed to provoke this bug.
I will put the inability to catch this exception down to the idiosyncrasies involved with p/invoke and the hardware error being thrown at a very low level on the system.
You could add custom error handlers to GDAL. This may help:
http://trac.osgeo.org/gdal/ticket/2895
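A hedged sketch of what that could look like with the OSGeo.GDAL bindings, assuming your build of the bindings exposes PushErrorHandler (CPLQuietErrorHandler is one of GDAL's built-in named handlers); note this only redirects GDAL's own error reporting and may not suppress the low-level system dialog:
using OSGeo.GDAL;

// Route GDAL errors through a built-in quiet handler instead of the default one;
// check the ticket above and your bindings version for the exact API available
Gdal.AllRegister();
Gdal.PushErrorHandler("CPLQuietErrorHandler");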
I'm using FindMimeFromData from urlmon.dll for sniffing uploaded files' MIME types. According to MIME Type Detection in Internet Explorer, image/tiff is one of the recognized MIME types. It works fine on my development machine (Windows 7 64-bit, IE9), but doesn't work on the test env (Windows Server 2003 R2 64-bit, IE8): it returns application/octet-stream instead of image/tiff.
The above article describes the exact steps taken to determine the MIME type, but since image/tiff is one of the 26 recognized types, it should end on step 2 (sniffing the actual data), so that file extensions and registered applications (and other registry stuff) shouldn't matter.
Oh, and by the way, TIFF files actually are associated with a program (Windows Picture and Fax Viewer) on the test server. It's not that any reference to TIFF is absent from the Windows registry.
Any ideas why it doesn't work as expected?
EDIT: FindMimeFromData is used like this:
public class MimeUtil
{
[DllImport("urlmon.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = false)]
private static extern int FindMimeFromData(
IntPtr pBC,
[MarshalAs(UnmanagedType.LPWStr)] string pwzUrl,
[MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1, SizeParamIndex = 3)] byte[] pBuffer,
int cbSize,
[MarshalAs(UnmanagedType.LPWStr)] string pwzMimeProposed,
int dwMimeFlags,
out IntPtr ppwzMimeOut,
int dwReserved);
public static string GetMimeFromData(byte[] data)
{
IntPtr mimetype = IntPtr.Zero;
try
{
const int flags = 0x20; // FMFD_RETURNUPDATEDIMGMIMES
int res = FindMimeFromData(IntPtr.Zero, null, data, data.Length, null, flags, out mimetype, 0);
switch (res)
{
case 0:
string mime = Marshal.PtrToStringUni(mimetype);
return mime;
// snip - error handling
// ...
default:
throw new Exception("Unexpected HRESULT " + res + " returned by FindMimeFromData (in urlmon.dll)");
}
}
finally
{
if (mimetype != IntPtr.Zero)
Marshal.FreeCoTaskMem(mimetype);
}
}
}
which is then called like this:
protected void uploader_FileUploaded(object sender, FileUploadedEventArgs e)
{
int bsize = Math.Min(e.File.ContentLength, 256);
byte[] buffer = new byte[bsize];
int nbytes = e.File.InputStream.Read(buffer, 0, bsize);
if (nbytes > 0)
{
string mime = MimeUtil.GetMimeFromData(buffer);
// ...
}
}
I was unable to reproduce your problem, however I did some research on the subject. I believe that it is as you suspect, the problem is with step 2 of MIME Type Detection: the hard-coded tests in urlmon.dll v9 differ from those in urlmon.dll v8.
The Wikipedia article on TIFF shows how complex the format is and that it has been a problem from the very beginning:
When TIFF was introduced, its extensibility provoked compatibility problems. The flexibility in encoding gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats.
The TIFF Compression Tag section clearly shows many rare compression schemes that, as I suspect, have been omitted while creating the urlmon.dll hard-coded tests in earlier versions of IE.
So, what can be done to solve this problem? I can think of three solutions, however each of them brings different kind of new problems along:
Update IE on the test machine to version 9.
Apply the latest IE 8 updates on the test machine. It is well known that modified versions of urlmon.dll are introduced frequently (e.g. KB974455). One of them may contain the updated MIME hard-coded tests.
Distribute own copy of urlmon.dll with your application.
It seems that solutions 1 and 2 are the ones you should choose from. There may be a problem, however, with the production environment. As my experience shows, the administrators of a production env often refuse to install some updates for many reasons. It may be harder to convince an admin to update IE to v9 and easier to install an IE8 KB update (as they are supposed to, but we all know how it is). If you're in control of the production env, I think you should go with solution 1.
The 3rd solution introduces two problems:
legal: it may be against Microsoft's policies to distribute your own copy of urlmon.dll
coding: you have to load the DLL dynamically to call the FindMimeFromData function (see the sketch below), or at least customize your app's manifest file because of the Dynamic-Link Library Search Order. I assume you are aware that it is a very bad idea to just manually copy a newer version of urlmon.dll to the system folder, as other apps would most likely crash using it.
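A hedged sketch of that dynamic loading, reusing the kernel32 LoadLibrary/GetProcAddress pattern and the FindMimeFromData signature from the question; the relative path to the bundled DLL is purely an assumption:
// Delegate matching the FindMimeFromData signature from the question
[UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Unicode)]
delegate int FindMimeFromDataFn(
    IntPtr pBC,
    [MarshalAs(UnmanagedType.LPWStr)] string pwzUrl,
    [MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1, SizeParamIndex = 3)] byte[] pBuffer,
    int cbSize,
    [MarshalAs(UnmanagedType.LPWStr)] string pwzMimeProposed,
    int dwMimeFlags,
    out IntPtr ppwzMimeOut,
    int dwReserved);

[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
static extern IntPtr LoadLibrary(string dllToLoad);

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

static string GetMimeUsingBundledUrlmon(byte[] data)
{
    // Load the bundled copy instead of the system one (the relative path is hypothetical)
    IntPtr lib = LoadLibrary(@"urlmon\urlmon.dll");
    IntPtr proc = GetProcAddress(lib, "FindMimeFromData");
    var findMime = (FindMimeFromDataFn)Marshal.GetDelegateForFunctionPointer(proc, typeof(FindMimeFromDataFn));

    IntPtr mimePtr;
    int hr = findMime(IntPtr.Zero, null, data, data.Length, null, 0x20 /* FMFD_RETURNUPDATEDIMGMIMES */, out mimePtr, 0);
    string mime = hr == 0 ? Marshal.PtrToStringUni(mimePtr) : null;
    if (mimePtr != IntPtr.Zero) Marshal.FreeCoTaskMem(mimePtr);
    return mime;
}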
Anyway, good luck with solving your urlmon riddle.