I'm trying to get our .NET 7.0 solution building and running on Linux. This solution makes heavy use of SQL Server spatial types (https://www.nuget.org/packages/Microsoft.SqlServer.Types/), which depend on a native SQL Server implementation in SqlServerSpatial160.dll.
For instance, see these lines from GLNativeMethods.cs in the nuget package:
[DllImport("SqlServerSpatial160.dll")]
public static extern double GeodeticPointDistance([In] Point p1, [In] Point p2, [In] EllipsoidParameters ep);

[DllImport("SqlServerSpatial160.dll")]
private static extern GL_HResult Outline(
    [In] GeoMarshalData g,
    [In, Out] GeoDataPinningAllocator resultAllocator);
The NuGet package comes with appropriate native implementations for Windows x64 and x86.
But it doesn't seem to come with anything for Linux. That seems to be confirmed by the results we get from any code path that needs a geospatial type.
Failed Swyfft.Common.UnitTests.Helpers.BestPointUnitTests.GetBestLatLon_H2H1_ShouldReturnHigh [1 ms]
Error Message:
System.DllNotFoundException : Unable to load shared library 'SqlServerSpatial160.dll' or one of its dependencies. In order to help diagnose loading problems, consider using a tool like strace. If you're using glibc, consider setting the LD_DEBUG environment variable:
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/runtimes/linux-x64/native/SqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.2/SqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/SqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/runtimes/linux-x64/native/libSqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.2/libSqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/libSqlServerSpatial160.dll.so: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/runtimes/linux-x64/native/SqlServerSpatial160.dll: cannot open shared object file: No such file or directory
/usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.2/SqlServerSpatial160.dll: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/SqlServerSpatial160.dll: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/runtimes/linux-x64/native/libSqlServerSpatial160.dll: cannot open shared object file: No such file or directory
/usr/share/dotnet/shared/Microsoft.NETCore.App/7.0.2/libSqlServerSpatial160.dll: cannot open shared object file: No such file or directory
/home/ken/swyfft_web/Swyfft.Common.UnitTests/bin/Debug/net7.0/libSqlServerSpatial160.dll: cannot open shared object file: No such file or directory
Stack Trace:
at Microsoft.SqlServer.Types.GLNativeMethods.GeodeticPointDistance(Point p1, Point p2, EllipsoidParameters ep)
at Microsoft.SqlServer.Types.SqlGeography.STDistance(SqlGeography other)
at Swyfft.Common.Geo.SqlGeographyHelpers.GetDistance(IMappable p1, IMappable p2) in /home/ken/swyfft_web/Swyfft.Common/Geo/SqlGeographyHelpers.cs:line 104
So ... is there a Linux version of this anywhere? If not, is there a solution for folks like us who need to use geospatial types on SQL Server on Linux?
It turns out that for certain operations (like GLNativeMethods.GeodeticPointDistance()), Microsoft.SqlServer.Types just P/Invokes into SqlServerSpatial160.dll, which is a Windows-only native library. So while you can generally use SQL Server spatial types on Linux, there are a number of operations on those types that you can't perform.
The solution I found was to find alternate implementations of the client-side operations I needed to do. For instance:
/// <summary>
/// This method uses a simplified formula to calculate the great-circle distance between two points.
/// See https://stackoverflow.com/a/27883916/68231. It's supposedly not as accurate at small distances as the SQL Server
/// method above - it doesn't take into account that the earth isn't a perfect sphere - but it's close enough for our purposes.
/// </summary>
public static double GetGreatCircleDistance(IMappable p1, IMappable p2)
{
    const double radiusInMeters = 6_371_000;
    var radLat1 = Radians(p1.Latitude);
    var radLat2 = Radians(p2.Latitude);
    var dLatHalf = (radLat2 - radLat1) / 2;
    var dLonHalf = Math.PI * ((double)p2.Longitude - (double)p1.Longitude) / 360;

    // haversine of the latitude difference
    var a = Math.Sin(dLatHalf);
    a *= a;

    // haversine of the longitude difference, scaled by the cosines of the two latitudes
    var b = Math.Sin(dLonHalf);
    b *= b * Math.Cos(radLat1) * Math.Cos(radLat2);

    // central angle, aka arc segment angular distance
    var centralAngle = 2 * Math.Atan2(Math.Sqrt(a + b), Math.Sqrt(1 - a - b));
    return radiusInMeters * centralAngle;
}

private static double Radians(decimal x)
{
    return (double)x * Math.PI / 180;
}
See https://stackoverflow.com/a/27883916/68231 for more details on the implementation.
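For reference, a minimal usage sketch (MapPoint is a hypothetical IMappable implementation; any type exposing decimal Latitude/Longitude works):

// Two hypothetical points one degree of latitude apart.
var p1 = new MapPoint { Latitude = 40.0m, Longitude = -74.0m };
var p2 = new MapPoint { Latitude = 41.0m, Longitude = -74.0m };

// Great-circle distance in meters; one degree of latitude is roughly 111 km,
// so this should come out near 111,000.
double meters = GetGreatCircleDistance(p1, p2);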
I am writing an SMO application, which copies the schema of one database into another.
For this purpose I use the Transfer class available in the SmoExtended.dll library.
The sample code is:
Database sourceDatabase = sqlServer.Databases[ASourceName];
Transfer t = new Transfer(sourceDatabase);
t.CopyAllObjects = true;
t.CopyData = false;
t.Options.DriAll = true;
t.Options.Triggers = true;
t.DestinationDatabase = ADestinationName;
t.DestinationServer = sqlServer.Name;
t.DestinationLogin = sqlServer.ConnectionContext.Login;
t.DestinationPassword = sqlServer.ConnectionContext.Password;
Database destinationDatabase = new Database(sqlServer, ADestinationName);
destinationDatabase.CompatibilityLevel = CompatibilityLevel.Version100;
destinationDatabase.Create();
t.TransferData();
destinationDatabase.AutoClose = false;
destinationDatabase.Alter();
I get the error "Version100 database compatibility level is not supported".
I am using SMO libraries from the version 100 assembly folder.
The database I am copying and the destination database are both on the same server instance.
The server is SQL Server 2012.
I have tried all the available compatibility levels (80, 90, 100, 110); none work. I get the error every time.
HOWEVER, if I use the version 110 SMO libraries, everything works as expected: the database is created and the schema is copied.
BUT, there is a reason why I can't use the version 110 libs: our clients are using SQL Server 2008 R2, which is v100, and machines with 2008 R2 do not have the version 110 SDKs.
PS: The source database compatibility level is set to 100.
Any idea how I can use the v100 libs for 2008 R2 and 2012, and possibly 2014 (the last version supporting v100)?
SOLUTION 1: (Easy, very little work)
As suggested by Ben Thul, one could use the libraries of a single version (e.g. 110) and ship them with the app, or have clients install the SQL Server library packs.
I have tested this approach with the SQL Server 2014 libraries. To achieve the goal, I had to download:
1. Microsoft SQL Server System CLR Types (SQLSysClrTypes.msi)
2. Shared Management Objects (SharedManagementObjects.msi)
I chose to use the x86 libraries, which are 9.3 MB in total. These two packages are enough to get all the required SMO libs.
SOLUTION 2: (Harder but still quite easy, much more work)
I have come up with an idea which is a bit harder to implement than Solution 1, but it works.
The idea is to create a separate DLL for each version of SQL Server and dynamically load those DLLs.
The steps are as follows:
Install or download the Unmanaged Exports library.
Create a new DLL project for each targeted SQL Server version, e.g. 2008.dll, 2012.dll...
Inside each of the DLLs, reference the corresponding SMO libraries: in 2008.dll I reference the DLLs from ..\100\SDK\Assemblies, for SQL Server 2012 I would create 2012.dll and reference ..\110\SDK\Assemblies, and so on.
Export the function(s) that I need to call from my main application:
[DllExport("Copy", System.Runtime.InteropServices.CallingConvention.StdCall)]
public static void Copy(string AInstance, string AUser, string APass, string ASourceName, string ADestinationName)
Define functions for loading and unloading DLLs
[DllImport("kernel32.dll")]
public static extern IntPtr LoadLibrary(string dllToLoad);
[DllImport("kernel32.dll")]
public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);
[DllImport("kernel32.dll")]
public static extern bool FreeLibrary(IntPtr hModule);
Define function delegate to be used.
[UnmanagedFunctionPointer(CallingConvention.StdCall)]
// The return type must match the exported Copy function above, which returns void.
private delegate void CopyDB(string AInstance, string AUser, string APass, string ASourceName, string ADestinationName);
Load, run and unload depending on a SQL Server version
if (Server.Version.Major == 10) {
    IntPtr lib8 = LoadLibrary("2008.dll");
    IntPtr workFunc = GetProcAddress(lib8, "Copy");
    CopyDB dbc = (CopyDB)Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
    dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
    FreeLibrary(lib8);
} else if (Server.Version.Major == 11) {
    IntPtr lib12 = LoadLibrary("2012.dll");
    IntPtr workFunc = GetProcAddress(lib12, "Copy");
    CopyDB dbc = (CopyDB)Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
    dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
    FreeLibrary(lib12);
} else .....
The last step can be generalised even more. We could skip the version check entirely by building the DLLs with names like <version>.dll, e.g. 10.dll, 11.dll, 12.dll. This way the if .. else .. is no longer needed, and the name of the library to load is generated at runtime from the value of Server.Version.Major, as in the sketch below.
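A minimal sketch of that generalised loader, reusing the P/Invoke declarations and variables from the steps above (error handling mostly omitted):

// Build the wrapper library name from the server's major version: 10.dll, 11.dll, 12.dll...
string dllName = Server.Version.Major + ".dll";

IntPtr lib = LoadLibrary(dllName);
if (lib == IntPtr.Zero)
    throw new InvalidOperationException("No SMO wrapper DLL found for this SQL Server version: " + dllName);

IntPtr workFunc = GetProcAddress(lib, "Copy");
CopyDB dbc = (CopyDB)Marshal.GetDelegateForFunctionPointer(workFunc, typeof(CopyDB));
dbc(SqlInstance, SqlLogin, SqlPassword, ASourceName, ADestinationName);
FreeLibrary(lib);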
The positive side of this approach is that these targeted DLL files only take ~7 KB each (in my scenario at least), which is much less than the CLR Types and SMO .msi library packs. It also guarantees that supported SMO libraries will be used. Furthermore, once a new version of SQL Server is released, one can distribute a single DLL to the clients and the app will happily use it for the newly installed instance.
Hi, I have installed the sqlite-net NuGet package, which comes with two .cs files: SQLite.cs and SQLiteAsync.cs.
SQLite.cs contains a class called SQLite3 which contains the method:
[DllImport("sqlite3", EntryPoint = "sqlite3_win32_set_directory", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Unicode)]
public static extern int SetDirectory(uint directoryType, string directoryPath);
I see that the SQLiteConnection constructor has the following code:
#if NETFX_CORE
SQLite3.SetDirectory(/*temp directory type*/2, Windows.Storage.ApplicationData.Current.TemporaryFolder.Path);
#endif
But why?! What does this do and why does it need to be set each time a new SQLiteConnection is created? It seems like I get sporadic AccessViolationExceptions with this line.
Update
I found the documentation for this method but still don't understand the purpose of the TempDirectory. What gets written there?
/*
** This function sets the data directory or the temporary directory based on
** the provided arguments. The type argument must be 1 in order to set the
** data directory or 2 in order to set the temporary directory. The zValue
** argument is the name of the directory to use. The return value will be
** SQLITE_OK if successful.
*/
The specific call that you show from the SQLiteConnection constructor sets the temporary directory for SQLite. This is the directory that SQLite uses for temporary/working storage.
I believe that because of app sandboxing, SQLite is unable to write to the default temp directory; this is why the code that you quoted uses the application's temp directory.
If the directory is not set, the side effect is that you get errors when executing update statements.
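If you want to avoid the per-connection call, one option (a sketch, not a tested fix for the access violations) is to set the directory once during app initialization, since the directory is a global, process-wide SQLite setting rather than per-connection state:

#if NETFX_CORE
// Per the documentation quoted above: type 1 = data directory, type 2 = temporary directory.
// Set once at startup instead of in every SQLiteConnection constructor.
SQLite3.SetDirectory(2, Windows.Storage.ApplicationData.Current.TemporaryFolder.Path);
#endif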
See more here:
Stack Overflow post - multiple update statements throw "cannot open"
GitHub conversation regarding the bug and resolution
Hi, when running my unit tests I want to get the directory my project is running in, so I can retrieve a file.
Say I have a test project named MyProject.Test. I run:
AppDomain.CurrentDomain.SetupInformation.ApplicationBase
and I receive "C:\\Source\\MyProject.Test\\bin\\Debug".
This is close to what I'm after. I don't want the bin\\Debug part.
Anyone know how instead I could get "C:\\Source\\MyProject.Test\\"?
I would do it differently.
I suggest making that file part of the solution/project. Then right-click -> Properties -> Copy to Output Directory = Copy always.
That file will then be copied to whatever your output directory is (e.g. C:\Source\MyProject.Test\bin\Debug).
Edit: Copy to Output Directory = Copy if newer is the better option.
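With that in place, the file ends up next to the test assembly, so it can be resolved relative to the output directory; a minimal sketch ("TestData.xml" is a hypothetical file name):

// Requires System.IO. The copied file lands in e.g. C:\Source\MyProject.Test\bin\Debug,
// which is the test AppDomain's base directory.
var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestData.xml");
var contents = File.ReadAllText(path);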
Usually you retrieve your solution directory (or project directory, depending on your solution structure) like this:
string solution_dir = Path.GetDirectoryName(Path.GetDirectoryName(TestContext.CurrentContext.TestDirectory));
This will give you the parent directory of the "TestResults" folder created by testing projects.
Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
This will give you the directory you need, as AppDomain.CurrentDomain.SetupInformation.ApplicationBase gives nothing but Directory.GetCurrentDirectory().
Have a look at this link:
http://msdn.microsoft.com/en-us/library/system.appdomain.currentdomain.aspx
Further to @abhilash's comment.
This works in my EXE's, DLL's and when tested from a different UnitTest project in both Debug or Release modes:
var dirName = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location.Replace("bin\\Debug", string.Empty));
/// <summary>
/// Testing various directory sources in a Unit Test project
/// </summary>
/// <remarks>
/// I want to mimic the web app's App_Data folder in a Unit Test project:
/// A) Using Copy to Output Directory on each data file
/// D) Without having to set Copy to Output Directory on each data file
/// </remarks>
[TestMethod]
public void UT_PathsExist()
{
    // Gets bin\Release or bin\Debug depending on mode
    string baseA = AppDomain.CurrentDomain.SetupInformation.ApplicationBase;
    Console.WriteLine(string.Format("Dir A:{0}", baseA));
    Assert.IsTrue(System.IO.Directory.Exists(baseA));

    // Gets bin\Release or bin\Debug depending on mode
    string baseB = AppDomain.CurrentDomain.BaseDirectory;
    Console.WriteLine(string.Format("Dir B:{0}", baseB));
    Assert.IsTrue(System.IO.Directory.Exists(baseB));

    // Returns an empty string (or an exception if you use .ToString())
    string baseC = (string)AppDomain.CurrentDomain.GetData("DataDirectory");
    Console.WriteLine(string.Format("Dir C:{0}", baseC));
    Assert.IsFalse(System.IO.Directory.Exists(baseC));

    // Move up two levels
    string baseD = System.IO.Directory.GetParent(baseA).Parent.FullName;
    Console.WriteLine(string.Format("Dir D:{0}", baseD));
    Assert.IsTrue(System.IO.Directory.Exists(baseD));

    // You need to set Copy to Output Directory on each data file
    var appPathA = System.IO.Path.Combine(baseA, "App_Data");
    Console.WriteLine(string.Format("Dir A/App_Data:{0}", appPathA));
    // C:/solution/UnitTestProject/bin/Debug/App_Data
    Assert.IsTrue(System.IO.Directory.Exists(appPathA));

    // You can work with data files in the project directory's App_Data folder (or any other test data folder)
    var appPathD = System.IO.Path.Combine(baseD, "App_Data");
    Console.WriteLine(string.Format("Dir D/App_Data:{0}", appPathD));
    // C:/solution/UnitTestProject/App_Data
    Assert.IsTrue(System.IO.Directory.Exists(appPathD));
}
I normally do it like that, and then I just add "..\..\" to the path to get up to the directory I want.
So what you could do is this:
var path = AppDomain.CurrentDomain.SetupInformation.ApplicationBase + @"..\..\";
For NUnit this is what I do:
// Get the executing directory of the tests
string dir = NUnit.Framework.TestContext.CurrentContext.TestDirectory;
// Infer the project directory from there...2 levels up (depending on project type - for asp.net omit the latter Parent for a single level up)
dir = System.IO.Directory.GetParent(dir).Parent.FullName;
If required you can from there navigate back down to other directories if required:
dir = Path.Combine(dir, "MySubDir");
According to https://github.com/nunit/nunit/issues/742#issuecomment-121964506
For NUnit 3, System.Environment.CurrentDirectory is never changed, so it will be the path of the solution.
E.g.:
string szProjectPath = System.Environment.CurrentDirectory + @"\where\your\project\is";
I prefer a fixed location rather than GetParent().
One drawback of GetParent() is that when the build is changed from AnyCPU to x86, the default path changes from bin\Debug to bin\x86\Debug.
You then need to get another parent, and that's a pain in the neck.
Also, you can still access your test assemblies at TestContext.CurrentContext.TestDirectory and get output from TestContext.CurrentContext.WorkDirectory.
Edit:
Note: There are many changes in NUnit 3. I suggest reading through the documentation about "Breaking changes".
The best solution I found was to include the file as an embedded resource in the test project and read it from my unit test. With this solution I don't need to care about file paths.
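A minimal sketch of the embedded-resource approach (the resource name "MyProject.Test.TestData.data.xml" is hypothetical; it is the default namespace plus the file's folder path, with dots as separators):

// Requires System.IO and System.Reflection. Set the file's Build Action to "Embedded Resource".
// Assembly.GetManifestResourceNames() lists the exact names if you're unsure.
var assembly = Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream("MyProject.Test.TestData.data.xml"))
using (var reader = new StreamReader(stream))
{
    string contents = reader.ReadToEnd();
}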
I'm not sure if this helps, but this looks to be briefly touched on in the following question.
Visual Studio Solution Path environment variable
In general you may use this, regardless if running a test or console app or web app:
// returns the absolute path of assembly, file://C:/.../MyAssembly.dll
var codeBase = Assembly.GetExecutingAssembly().CodeBase;
// returns the absolute path of assembly, i.e: C:\...\MyAssembly.dll
var location = Assembly.GetExecutingAssembly().Location;
If you are running NUnit, then:
// return the absolute path of directory, i.e. C:\...\
var testDirectory = TestContext.CurrentContext.TestDirectory;
My approach relies on getting the location of the unit testing assembly and then traversing upwards. In the following snippet the variable folderProjectLevel will give you the path to the Unit test project.
string pathAssembly = System.Reflection.Assembly.GetExecutingAssembly().Location;
string folderAssembly = System.IO.Path.GetDirectoryName(pathAssembly);
if (folderAssembly.EndsWith("\\") == false) {
    folderAssembly = folderAssembly + "\\";
}
string folderProjectLevel = System.IO.Path.GetFullPath(folderAssembly + "..\\..\\");
You can do it like this:
using System.IO;
Path.GetFullPath(Path.Combine(AppDomain.CurrentDomain.SetupInformation.ApplicationBase, @"..\..\"));
Use a StackTrace:

using System;
using System.Diagnostics;
using System.IO;

internal static class Extensions
{
    public static string GetSourceDirectoryName(this Type type)
    {
        // Walk the current stack for a frame whose method is declared by the given type;
        // the frame's file name (read from the PDB) points into the original source directory.
        StackTrace stackTrace = new StackTrace(true);
        foreach (var frame in stackTrace.GetFrames())
        {
            if (frame.GetMethod() is { } method && method.DeclaringType == type)
            {
                return Path.GetDirectoryName(frame.GetFileName());
            }
        }
        throw new Exception($"Source file directory for {type.Name} was not found");
    }
}
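Usage would look something like this (MyTests is a hypothetical test class; note this only works when the call happens inside a method declared by that type and PDB information is available, since frame.GetFileName() reads the source path from the PDB):

// Called from inside a MyTests method, so a matching frame is on the stack.
string sourceDir = typeof(MyTests).GetSourceDirectoryName();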
It is sometimes desirable to have your application open the default application for a file. For example, to open a PDF file you might use:
System.Diagnostics.Process.Start("Filename.pdf");
To open an image, you'd just use the same code with a different filename:
System.Diagnostics.Process.Start("Filename.gif");
Some extensions (.gif for example) just about always have a default handler, even in a base Windows installation. However, some extensions (.pdf for example) often don't have an application installed to handle them.
In these cases, it'd be desirable to determine if an application is associated with the extension of the file you wish to open before you make the call to Process.Start(fileName).
I'm wondering how you might best implement something like this:
static bool ApplicationAssociated(string extension)
{
    var extensionHasAssociatedApplication = false;
    var condition = /* Determine if there is an application installed that is associated with the provided file extension. */;
    if (condition)
    {
        extensionHasAssociatedApplication = true;
    }
    return extensionHasAssociatedApplication;
}
I would recommend following the advice in David's answer BUT since you need to detect an association:
To check whether a file has an association you can use the native function FindExecutable which is basically what Windows Explorer uses internally... it gives a nice error code (SE_ERR_NOASSOC) if there is no association. Upon success it gives a path to the respective executable.
The DllImport for it is:
[DllImport("shell32.dll")]
static extern int FindExecutable(string lpFile, string lpDirectory, [Out] StringBuilder lpResult);
Another option would be to walk the registry (not recommended, since it's complex due to several aspects like WoW64 etc.):
The real association is stored in the key that HKEY_CLASSES_ROOT\.pdf points to - in my case AcroExch.Document - so we check out HKEY_CLASSES_ROOT\AcroExch.Document. There you can see (and change) what command is going to be used to launch that type of file:
HKEY_CLASSES_ROOT\AcroExch.Document\shell\open\command
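For completeness, a minimal sketch of that registry walk in C# (simple case only; it deliberately ignores per-user UserChoice overrides, WoW64 redirection and DDE):

using Microsoft.Win32;

static string GetOpenCommand(string extension)
{
    // HKEY_CLASSES_ROOT\.pdf -> ProgID, e.g. AcroExch.Document
    using (var extKey = Registry.ClassesRoot.OpenSubKey(extension))
    {
        var progId = extKey?.GetValue(null) as string;
        if (string.IsNullOrEmpty(progId))
            return null; // no association registered

        // HKEY_CLASSES_ROOT\<ProgID>\shell\open\command -> the launch command line
        using (var cmdKey = Registry.ClassesRoot.OpenSubKey(progId + @"\shell\open\command"))
        {
            return cmdKey?.GetValue(null) as string;
        }
    }
}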
@Yahia gets the nod. I'm posting my quick solution for posterity so you can see what I went with. There are lots of possible improvements to this code, but this will give you the idea:
public static bool HasExecutable(string path)
{
    var executable = FindExecutable(path);
    return !string.IsNullOrEmpty(executable);
}

private static string FindExecutable(string path)
{
    var executable = new StringBuilder(1024);
    FindExecutable(path, string.Empty, executable);
    return executable.ToString();
}

// Note: the native FindExecutable returns an HINSTANCE-sized value,
// so IntPtr is the correct return type here (long would be wrong on 32-bit).
[DllImport("shell32.dll", EntryPoint = "FindExecutable")]
private static extern IntPtr FindExecutable(string lpFile, string lpDirectory, StringBuilder lpResult);
In a situation like this the best approach is to try to open the document and detect failure. Trying to predict whether or not a file association is in place just leads to you reimplementing the shell execute APIs. It's very hard to get that exactly right and rather needless since they already exist!
You will have to look at the registry to get that information.
You can start from:
HKEY_CLASSES_ROOT\.extension
It usually leads to something like HKEY_CLASSES_ROOT\extfile\Shell\Open\Command, where you will find the command used to open that file type.
Depending on what you are doing, it may be ideal to just ask for forgiveness (that is, just open the file and see).
All of that information lives in the registry.. you could navigate to HKEY_CLASSES_ROOT, find the extension and go from there to find the default handler. But depending on the type of file and the associated handler(s) you'll need to wade into CLSIDs and whatnot... you're probably better off catching an exception instead.
This information is in the registry. For example:
# Mount the HKCR drive in powershell
ps c:\> new-psdrive hkcr registry hkey_classes_root
ps c:\> cd hkcr:\.cs
# get default key for .cs
PS hkcr:\.cs> gp . ""
(default) : VisualStudio.cs.10.0
...
# dereference the "open" verb
PS hkcr:\.cs> dir ..\VisualStudio.cs.10.0\shell\open
Hive: hkey_classes_root\VisualStudio.cs.10.0\shell\open
Name Property
---- --------
Command (default) : "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /dde
ddeexec (default) : Open("%1")
I'm using FindMimeFromData from urlmon.dll for sniffing uploaded files' MIME type. According to MIME Type Detection in Internet Explorer, image/tiff is one of the recognized MIME types. It works fine on my development machine (Windows 7 64bit, IE9), but doesn't work on the test env (Windows Server 2003 R2 64bit, IE8) - it returns application/octet-stream instead of image/tiff.
The above article describes the exact steps taken to determine the MIME type, but since image/tiff is one of the 26 recognized types, it should end on step 2 (sniffing the actual data), so that file extensions and registered applications (and other registry stuff) shouldn't matter.
Oh, and by the way, TIFF files actually are associated with a program (Windows Picture and Fax Viewer) on the test server. It's not that any reference to TIFF is absent from the Windows registry.
Any ideas why it doesn't work as expected?
EDIT: FindMimeFromData is used like this:
public class MimeUtil
{
    [DllImport("urlmon.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = false)]
    private static extern int FindMimeFromData(
        IntPtr pBC,
        [MarshalAs(UnmanagedType.LPWStr)] string pwzUrl,
        [MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.I1, SizeParamIndex = 3)] byte[] pBuffer,
        int cbSize,
        [MarshalAs(UnmanagedType.LPWStr)] string pwzMimeProposed,
        int dwMimeFlags,
        out IntPtr ppwzMimeOut,
        int dwReserved);

    public static string GetMimeFromData(byte[] data)
    {
        IntPtr mimetype = IntPtr.Zero;
        try
        {
            const int flags = 0x20; // FMFD_RETURNUPDATEDIMGMIMES
            int res = FindMimeFromData(IntPtr.Zero, null, data, data.Length, null, flags, out mimetype, 0);
            switch (res)
            {
                case 0:
                    string mime = Marshal.PtrToStringUni(mimetype);
                    return mime;
                // snip - error handling
                // ...
                default:
                    throw new Exception("Unexpected HRESULT " + res + " returned by FindMimeFromData (in urlmon.dll)");
            }
        }
        finally
        {
            if (mimetype != IntPtr.Zero)
                Marshal.FreeCoTaskMem(mimetype);
        }
    }
}
which is then called like this:
protected void uploader_FileUploaded(object sender, FileUploadedEventArgs e)
{
    int bsize = Math.Min(e.File.ContentLength, 256);
    byte[] buffer = new byte[bsize];
    int nbytes = e.File.InputStream.Read(buffer, 0, bsize);
    if (nbytes > 0)
    {
        string mime = MimeUtil.GetMimeFromData(buffer);
        // ...
    }
}
I was unable to reproduce your problem; however, I did some research on the subject. I believe that it is as you suspect: the problem is with step 2 of MIME Type Detection - the hard-coded tests in urlmon.dll v9 differ from those in urlmon.dll v8.
The Wikipedia article on TIFF shows how complex the format is and that it has been a problem from the very beginning:
When TIFF was introduced, its extensibility provoked compatibility problems. The flexibility in encoding gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats.
The TIFF Compression Tag section clearly shows many rare compression schemes that, as I suspect, have been omitted while creating the urlmon.dll hard-coded tests in earlier versions of IE.
So, what can be done to solve this problem? I can think of three solutions; however, each of them brings a different kind of problem along:
Update the IE on your test machine to version 9.
Apply the latest IE 8 updates on your test machine. It is well known that modified versions of urlmon.dll are introduced frequently (e.g. KB974455). One of them may contain the updated hard-coded MIME tests.
Distribute your own copy of urlmon.dll with your application.
It seems that solutions 1 and 2 are the ones you should choose from. There may be a problem, however, with the production environment. In my experience, the administrators of production environments are often reluctant to install updates, for many reasons. It may be harder to convince an admin to update IE to v9 and easier to get an IE 8 KB update installed (as they are supposed to be, but we all know how it is). If you're in control of the production env, I think you should go with solution 1.
The 3rd solution introduces two problems:
legal: it may be against Microsoft's policies to distribute your own copy of urlmon.dll
coding: you have to load the DLL dynamically to call the FindMimeFromData function, or at least customize your app's manifest file because of the Dynamic-Link Library Search Order (see the sketch below). I assume you are aware that it is a very bad idea to just manually copy a newer version of urlmon.dll to the system folder, as other apps would most likely crash using it.
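For the coding part, a minimal sketch of that dynamic loading, reusing the kernel32 LoadLibrary/GetProcAddress declarations shown in the SMO answer earlier on this page ("urlmon_private.dll" is a hypothetical name for your own shipped copy; the delegate mirrors the DllImport signature above):

[UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Unicode)]
private delegate int FindMimeFromDataDelegate(
    IntPtr pBC, string pwzUrl, byte[] pBuffer, int cbSize,
    string pwzMimeProposed, int dwMimeFlags, out IntPtr ppwzMimeOut, int dwReserved);

// Load the private copy by explicit path so the system-wide urlmon.dll is not picked up.
IntPtr lib = LoadLibrary(@".\urlmon_private.dll");
IntPtr proc = GetProcAddress(lib, "FindMimeFromData");
var findMime = (FindMimeFromDataDelegate)Marshal.GetDelegateForFunctionPointer(proc, typeof(FindMimeFromDataDelegate));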
Anyway, good luck with solving your urlmon riddle.