How to detect keyboard input code page - C#

I need to detect the code page the keyboard input is using while user is entering data into application fields.
I tried to use System.Text.Encoding.Default.CodePage, but it gives the code page configured in the regional settings.
Then I thought Console.InputEncoding.CodePage could work, but the code page is still the same as in the above example.
The problem is, the user may have a Cyrillic (Windows-1251) code page because of the regional settings, but he may wish to use a different input language. The data the user enters is then saved to a file, and that file can be opened on a system which has different regional settings. Along with the text, I am saving the code page number, so my app can load the file and display the text correctly. I cannot use Unicode, for cross-compatibility with a different app which does not support Unicode.

Disclaimer: I'm not an expert on the globalization side of the .NET Framework, so it might wrap equivalent functionality somewhere or other. If so, and you can locate it, great: use that instead. I thought maybe Globalization.CultureInfo.CurrentCulture would return the information we're looking for, but alas it does not; instead, it appears to return the system default culture even if the keyboard layout is changed. I stopped investigating. The described approach is guaranteed to work, albeit with a little extra code.
To determine the code page associated with a particular keyboard layout, you can call the Win32 GetLocaleInfo function, specifying the language ID associated with the current keyboard layout. You request this using the LOCALE_IDEFAULTANSICODEPAGE constant.
To call these functions from a .NET application, you will need to use P/Invoke. You'll also need to define function equivalents for some essential macros.
const int LOCALE_IDEFAULTANSICODEPAGE = 0x1004;
const int LOCALE_RETURN_NUMBER = 0x20000000;
const int SORT_DEFAULT = 0x0;

static int LOWORD(IntPtr val)
{
    return unchecked((int)(long)val) & 0xFFFF;
}

static int MAKELCID(int languageID, int sortID)
{
    return (0xFFFF & languageID) | ((0x000F & sortID) << 16);
}

static int MAKELANGID(int primaryLang, int subLang)
{
    return (((ushort)subLang) << 10) | (ushort)primaryLang;
}

[DllImport("kernel32.dll", SetLastError = true)]
static extern int GetLocaleInfo(int locale,
                                int lcType,
                                out uint lpLCData,
                                int cchData);

[DllImport("user32.dll")]
static extern IntPtr GetKeyboardLayout(uint idThread);
Use it like this:
// Get the keyboard layout for the current thread.
IntPtr keybdLayout = GetKeyboardLayout(0);

// Extract the language ID from it, contained in its low-order word.
int langID = LOWORD(keybdLayout);

// Call the GetLocaleInfo function to retrieve the default ANSI code page
// associated with that language ID.
uint codePage = 0;
GetLocaleInfo(MAKELCID(langID, SORT_DEFAULT),
              LOCALE_IDEFAULTANSICODEPAGE | LOCALE_RETURN_NUMBER,
              out codePage,
              Marshal.SizeOf(codePage));
When tested with a US English keyboard layout, codePage is 1252. After switching to a Greek keyboard layout, codePage is 1253. Likewise, Turkish returns 1254, and the various Cyrillic languages return 1251. Exactly as documented.
It is worth noting that, in the linked documentation, these API functions are indicated as having been superseded. With modern versions of Windows, Microsoft has moved to named locales, first because they ran out of room for numeric IDs, and second to enable support for custom locales. But you will need to use the old functions for what you're doing. Modern Windows applications don't use ANSI code pages, either.
However, you do need to be aware of this fact, because it may come back to bite you. There are keyboard layouts that do not have an associated ANSI code page. For these, only Unicode can be used, and the above code will return CP_ACP (which is equivalent to the numeric value 0). Handling that is up to you: you will either need to display an error, or save the file as Unicode (breaking the other application, but complying with user expectations).
Finally, I must point out that if you cache the codePage value, it may become stale since the user can change the keyboard layout at any time. It is probably easiest just not to cache the value, determining it each time you perform a save. But if you want to cache it, you will need to handle the WM_INPUTLANGCHANGE message and update your cached value in response.
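If you do decide to cache it, a minimal WinForms sketch of handling WM_INPUTLANGCHANGE might look like the following (this assumes a Form-based application; the field and method names are mine, not from any framework):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Sketch: keep a cached code page in sync with the active keyboard layout
// by handling WM_INPUTLANGCHANGE.
public class MainForm : Form
{
    const int WM_INPUTLANGCHANGE = 0x0051;
    const int LOCALE_IDEFAULTANSICODEPAGE = 0x1004;
    const int LOCALE_RETURN_NUMBER = 0x20000000;
    const int SORT_DEFAULT = 0x0;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern int GetLocaleInfo(int locale, int lcType,
                                    out uint lpLCData, int cchData);

    static int MAKELCID(int languageID, int sortID)
    {
        return (0xFFFF & languageID) | ((0x000F & sortID) << 16);
    }

    uint cachedCodePage;  // refreshed whenever the layout changes

    protected override void WndProc(ref Message m)
    {
        base.WndProc(ref m);
        if (m.Msg == WM_INPUTLANGCHANGE)
        {
            // LParam carries the new input locale identifier (HKL);
            // its low-order word is the language ID.
            int langID = unchecked((int)(long)m.LParam) & 0xFFFF;
            GetLocaleInfo(MAKELCID(langID, SORT_DEFAULT),
                          LOCALE_IDEFAULTANSICODEPAGE | LOCALE_RETURN_NUMBER,
                          out cachedCodePage,
                          sizeof(uint));
        }
    }
}
```

With this in place, cachedCodePage is always current at save time without re-querying the layout on every save.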

Related

Good way to write ID3v2 track number as string in C#?

Forgive me if this question is already answered somewhere on this site, but I didn't find anything when I searched for it. I've written an ID3v1/2 tag editor for .mp3 files in C# using taglib-sharp, and taglib-sharp treats the track numbers as uint numbers. According to id3.org:
The 'Track number/Position in set' frame is a numeric string
containing the order number of the audio-file on its original
recording. This may be extended with a "/" character and a numeric
string containing the total numer of tracks/elements on the original
recording. E.g. "4/9".
Personally I don't use "/", but I tend to write "03" instead of "3". Is there a simple way to write the track number to the tag as a string directly, instead of via a uint?
Also, side question: taglib doesn't seem to support some tags, specifically URL, Orig. Artist, Publisher and Encoded. Any idea on what to do with those?
UPDATE: Since this answer was originally written, GetTextAsString was made public. This answer has been updated to reflect that.
The Track field in TagLib# is a universal approximation and simplification of what the various tagging specifications intend for that field. For ID3v2 tags, it assumes the TRCK frame consists of one or two numeric strings separated by a slash and converts them into numbers, per the specification.
That said, it is a text field and you can do whatever you want with it. You just need to access the text frame to read or write it.
Writing is easy through Id3v2.Tag.SetTextFrame:
var tag = (Id3v2.Tag)file.GetTag(TagTypes.Id3v2, true); // Get or create ID3v2 tag.
tag.SetTextFrame("TRCK", "03"); // Add or update TRCK frame.
Since TRCK is a single-string text frame, it can similarly be read using Id3v2.Tag.GetTextAsString:
var tag = (Id3v2.Tag)file.GetTag(TagTypes.Id3v2, false);
var trackNumber = tag?.GetTextAsString("TRCK");
No, you can't save a string into an int or uint without converting it first. Why don't you just save your value 3 as a uint, as the library expects, and use something like:
uint track = 3;
string strTrack = track.ToString("00");
to display it?
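If you want both the zero padding and the optional "/total" suffix from the spec quoted above, a small helper keeps the formatting in one place (an illustrative sketch, not part of TagLib#):

```csharp
using System;

// Builds the zero-padded "track" or "track/total" numeric string
// that the TRCK frame expects, e.g. "03" or "04/09".
static class TrackFormat
{
    public static string Format(uint track, uint? total = null)
    {
        string s = track.ToString("00");  // "3" -> "03", "12" stays "12"
        return total.HasValue ? s + "/" + total.Value.ToString("00") : s;
    }
}
```

You could then write it with tag.SetTextFrame("TRCK", TrackFormat.Format(3)); as shown in the earlier answer.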
If it's allowed by its license, you can still modify the library to suit your needs.
I've taken a quick look at taglib-sharp in the past, and it's far from supporting all existing frames in a tag; it supports only the most common ones. For the other frames, I think there is some kind of default class you can use, but I don't recall the name. Otherwise you can extend the library yourself, unless there are other such libraries available that I am not aware of.

Faking Input Like GlovePIE

I have been programming against Kinect, and I now want to have games react to what I am doing on the Kinect. It is real easy to send data to notepad for key presses, but much harder to send it to games.
First off, I have been using the WPF Skeleton example from Kinect and building off that for now. I could use the C++ version but my C++ is very rusty, and I would prefer not to.
So here is what I have done so far: I have tried SendKeys, SendInput, keybd_event, and PostMessage. None of these make it to games like Burnout Paradise.
I do know GlovePIE input gets to games, but how? Currently my workaround/hack is to use PPJoy, which has sample code in C++ to emulate button presses. I call this via [DllImport] from my WPF app. I then pick up the joystick button presses in GlovePIE and have it convert those to keyboard keys. So I go around in a circle, which works, but PPJoy's driver is not signed, so I can't really share this code, as people would have to allow test-signed drivers.
Does anyone know how GlovePIE makes its keypresses happen? I have posted on the GlovePIE forums, but no responses. GlovePIE has a little bit of a hack to work with the old OpenNI Kinect drivers, but I am using the standard Microsoft version released a few weeks ago.
Ok, not sure what the scoop is on answering your own question, but I figured out the whole proper solution and wanted to share it as the accepted answer.
First, include this using statement:
using System.Runtime.InteropServices;
Then put this in your class:
[DllImport("user32.dll", EntryPoint = "keybd_event", CharSet = CharSet.Auto, ExactSpelling = true)]
public static extern void Keybd_event(byte vk, byte scan, int flags, int extrainfo);
This will import the C method for use in C#.
Now, some games/applications use virtual keys (VKeys) and some (DirectX, aka DIKeys) use scancodes. My problem was that I used the scancodes improperly, so it did not work for some games. If you do not know which form the consuming application wants, I suggest you call it twice: once with the virtual key and once with the Direct Input scancode.
Here is an example of two calls for the letter 'A', first using a virtual key, then using a Direct Input key.
const int KEYDOWN = 0;
const int KEYUP = 2;

// Virtual key: 'A' is VK 65
Keybd_event(65, 0, KEYDOWN, 0);

// Direct Input scancode: 'A' is DIK 30
Keybd_event(0, 30, KEYDOWN, 0);
As you can tell, the values for 'A' differ between VK and DIK.
Both of the links below list the hex values, while my samples above use base 10 (decimal integers).
VKeys link http://delphi.about.com/od/objectpascalide/l/blvkc.htm
DIKeys link http://community.bistudio.com/wiki/DIK_KeyCodes
This should also work with SendInput, but I have not fully verified that.
I don't know GlovePIE, sorry, but perhaps these might help.
PostMessage and SendMessage behave differently when emulating keystrokes.
It would also help to know which message you're actually sending: key down/up, keypress, etc.
You may need to do something about changing the focus; maybe the wrong element is selected, or you're sending to the wrong (sub)window.
Similarly, if you were emulating mouse clicks, there can also be checks on where the mouse is, to ensure it is still over the clickable area.
Consider also that holding a key down triggers a repeat mechanism that sends multiple key messages; commonly in games you're holding the key down to turn, not just tapping it once.
While I will not say this is the best answer, I wanted to circle back with the solution I am using. I have found that I can use vJoy http://headsoft.com.au/index.php?category=vjoy which is signed, so it can be used on 64-bit Windows 7. I can then call this extern keybd_event:
[DllImport("user32.dll", EntryPoint = "keybd_event", CharSet = CharSet.Auto, ExactSpelling = true)]
public static extern void Keybd_event(byte vk, byte scan, int flags, int extrainfo);
From there I can call Keybd_event(CKEY, SCANCODE, KEYDOWN, 0); and then, some frames later, call Keybd_event(CKEY, SCANCODE, KEYUP, 0);
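As an illustrative sketch of that press-then-release pattern (the key values and hold delay are arbitrary; this assumes the same keybd_event import shown above):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

// Sketch of a press-then-release helper around keybd_event.
// Tune holdMs to the game's input polling rate.
static class KeySender
{
    const int KEYDOWN = 0;
    const int KEYUP = 2;  // KEYEVENTF_KEYUP

    [DllImport("user32.dll", EntryPoint = "keybd_event")]
    static extern void Keybd_event(byte vk, byte scan, int flags, int extrainfo);

    public static void Tap(byte vk, byte scan, int holdMs = 50)
    {
        Keybd_event(vk, scan, KEYDOWN, 0);
        Thread.Sleep(holdMs);  // hold briefly so the game's input poll sees the key
        Keybd_event(vk, scan, KEYUP, 0);
    }
}

// e.g. KeySender.Tap(0x43, 0x2E);  // 'C' as both VK 0x43 and DIK scancode 0x2E
```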
I set vJoy to read in "c" or other keys I send via keybd_event. vJoy reads this properly and then "presses" the associated button, which GlovePIE picks up. So in GlovePIE my script looks like:
A = joystick1.Button1
Z = joystick1.Button2
Which works in the games I have tried.
It is definitely not ideal, but it works and allows the end user to customize the input via vJoy and GlovePIE.

Does PrinterSettings.GetHdevmode() have a bug?

I would like to be able to change the printer properties without bringing up the printer properties window...
Using the DocumentProperties function (imported from winspool.drv) has so far failed: while it is easy to suppress the dialog from showing up, the value returned by PrinterSettings.GetHdevmode() does not reflect the PrinterSettings instance that called it, but rather the values from the last time the printer properties dialog returned OK. For example, this gives me the previous (wrong) values from the last call to the properties, instead of the values it should have from the PrinterSettings object:
IntPtr hdevmode = PrinterSettings.GetHdevmode(PrinterSettings.DefaultPageSettings);
PrinterSettings.SetHdevmode(hdevmode);
PrinterSettings.DefaultPageSettings.SetHdevmode(hdevmode);
So does GetHdevmode have a bug, or is this what it's supposed to do? Is there a C# workaround, or does anyone have any information about it? I have been hard pressed to find any info on the topic.
Thanks in advance for any insight.
EDIT:
I didn't want to make this too personal of a problem, but hopefully having all the info in this case can provide an answer that is a useful solution for others too.
Here is a C++ DLL I have written as a workaround for this issue. It's not currently working: it changes other settings, such as the number of copies, and doesn't succeed in changing the underlying paper size. I thought all I needed to do was specify the out-buffer flag in order to make the changes?
extern "C" __declspec(dllexport) DEVMODE* __stdcall GetRealHDevMode(int width, int height, char *printerName, DEVMODE* inDevMode)
{
    // declare handles and variables
    HANDLE printerHandle;
    LPHANDLE printerHandlePointer(&printerHandle);

    // get printer handle pointer
    OpenPrinter((LPWSTR)printerName, printerHandlePointer, NULL);

    // get size needed for public and private devmode data, and allocate the devmode structure
    size_t devmodeSize = DocumentProperties(NULL, printerHandle, (LPWSTR)printerName, NULL, NULL, 0);
    DEVMODE* devmode = reinterpret_cast<DEVMODE*>(new char[devmodeSize + sizeof(DEVMODE) + sizeof(inDevMode->dmDriverExtra)]);

    // lock memory
    GlobalLock(devmode);

    // fill the out buffer
    DocumentProperties(NULL, printerHandle, (LPWSTR)printerName, devmode, NULL, DM_OUT_BUFFER);

    // change the values as required
    devmode->dmPaperWidth = width;
    devmode->dmPaperLength = height;
    devmode->dmPaperSize = DMPAPER_USER;
    devmode->dmFields &= ~DM_PAPERSIZE;
    devmode->dmFields &= ~DM_PAPERLENGTH;
    devmode->dmFields &= ~DM_PAPERWIDTH;
    devmode->dmFields |= (DM_PAPERSIZE | DM_PAPERLENGTH | DM_PAPERWIDTH);

    // input flag on now to put the changes back in
    DocumentProperties(NULL, printerHandle, (LPWSTR)printerName, devmode, devmode, DM_IN_BUFFER | DM_OUT_BUFFER);

    // unlock memory
    GlobalUnlock(devmode);

    // return the devmode that was used to alter the settings
    return devmode;
}
I figured the C++ code was enough to change the settings, so all I do in C# is this:
public PrinterSettings ChangePrinterProperties(PrinterSettings inPrinterSettings)
{
    IntPtr TemphDevMode = inPrinterSettings.GetHdevmode(inPrinterSettings.DefaultPageSettings);
    IntPtr hDevMode = GetRealHDevMode((int)(inPrinterSettings.DefaultPageSettings.PaperSize.Width * 2.54F),
                                      (int)(inPrinterSettings.DefaultPageSettings.PaperSize.Height * 2.54F),
                                      inPrinterSettings.PrinterName, TemphDevMode);
    GlobalFree(hDevMode);
    return inPrinterSettings;
}
UPDATE: Changed up the order a bit with dmPaperSize and dmFields. Improved results; not quite there yet.
UPDATE 2: Okay, I found a Microsoft page that says the documentation is wrong. MSDN says to set dmPaperSize to 0 when you want to specify width and height, whereas the Microsoft Support correction says to set it to DMPAPER_USER. http://support.microsoft.com/kb/108924
There are two problems with the way you are specifying the paper size in the DEVMODE:
(1) If you specify DM_PAPERWIDTH or DM_PAPERLENGTH or both, you MUST NOT also set the DM_PAPERSIZE bit. It depends on the printer driver, but many drivers will ignore DM_PAPERLENGTH/WIDTH in the above code.
(2) Many drivers don't support DM_PAPERLENGTH/WIDTH at all. With such drivers, you simply cannot set the paper size like you are trying to do above. You can only select one of the predefined dmPaperSizes.
You can use DeviceCapabilities(DC_FIELDS) to determine if your driver supports DM_PAPERLENGTH/WIDTH.
You can use DeviceCapabilities(DC_PAPERS) to enumerate the allowable dmPaperSizes.
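If it helps, the DC_FIELDS check can be done from C# with a P/Invoke along these lines (a sketch, untested against a real driver; pass whatever PrinterSettings.PrinterName gives you):

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: ask the driver, via DeviceCapabilities(DC_FIELDS), whether it
// honors DM_PAPERLENGTH and DM_PAPERWIDTH in the DEVMODE at all.
static class PaperSupport
{
    const short DC_FIELDS = 1;
    const int DM_PAPERLENGTH = 0x00000004;
    const int DM_PAPERWIDTH  = 0x00000008;

    [DllImport("winspool.drv", CharSet = CharSet.Auto)]
    static extern int DeviceCapabilities(string device, string port,
                                         short capability, IntPtr output,
                                         IntPtr devMode);

    public static bool SupportsCustomPaper(string printerName)
    {
        // DC_FIELDS returns the dmFields bits the driver can honor,
        // or -1 on failure.
        int fields = DeviceCapabilities(printerName, null, DC_FIELDS,
                                        IntPtr.Zero, IntPtr.Zero);
        return fields != -1
            && (fields & (DM_PAPERLENGTH | DM_PAPERWIDTH))
               == (DM_PAPERLENGTH | DM_PAPERWIDTH);
    }
}
```

If this returns false, fall back to enumerating the predefined sizes with DC_PAPERS and picking the closest match.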

C#: Location of const variable in a binary

Is it possible to know the location of const variables within an exe? We were thinking of watermarking our program so that each user that downloads the program from our server will have some unique key embedded in the code.
Is there another way to do this?
You could build the binary with a watermark: the string representation of a GUID, stored as a constant in a .NET type. After you build, search the binary file for the GUID string to find its location. You can then change this GUID value to another GUID value, run the binary, and actually see the changed value in the code's output.
Note: The formatting is important, because you're patching a compiled binary, so the length must stay fixed. For example, you'll want to keep the leading zeros of a GUID so that all instances have the same character length when converted to a string.
I have actually done this sort of thing with Win32 DLLs and even the SQL Server 2000 Desktop exe. (There was a hack where you could switch the desktop edition into a full-blown SQL Server by flipping a switch in the binary.)
This process could then be automated and a new copy of a DLL would be programmatically altered by a small, server-side utility for each client download.
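As a sketch of what that server-side utility might do (the names here are illustrative; the marker must already exist in the binary, the replacement must be the same length, and which encoding you search for depends on how the string is stored: ASCII in a native binary, UTF-16 in .NET metadata):

```csharp
using System;
using System.Text;

// Sketch of the download-time patch step: find a known, same-length
// marker string in the binary image and overwrite it in place.
static class Watermarker
{
    public static bool Patch(byte[] image, string marker, string replacement,
                             Encoding enc)
    {
        if (marker.Length != replacement.Length)
            throw new ArgumentException("replacement must have the same length");

        byte[] find = enc.GetBytes(marker);
        byte[] put  = enc.GetBytes(replacement);

        // Naive linear scan; fine for a one-off patch per download.
        for (int i = 0; i <= image.Length - find.Length; i++)
        {
            int j = 0;
            while (j < find.Length && image[i + j] == find[j]) j++;
            if (j == find.Length)
            {
                Array.Copy(put, 0, image, i, put.Length);
                return true;  // patched first occurrence
            }
        }
        return false;  // marker not found
    }
}
```

The server would read the exe into a byte array, call Patch with the per-client value, and stream the result back to the downloader.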
Also take a look at this: link
It discusses storing settings in a .NET DLL, using a class-based approach that embeds the app settings file and remains configurable after compilation.
Key consideration #1: Assembly signing
Since you are distributing your application, clearly you are signing it. As such, since you're modifying the binary contents, you'll have to integrate the signing step directly into the download process.
Key consideration #2: const or readonly
There is a key difference between const and readonly variables that many people do not know about. In particular, if I do the following:
private static readonly int SomeValue = 3;
...
if (SomeValue > 0)
...
Then it will compile to byte code like the following:
ldsfld [SomeValue]
ldc.i4.0
ble.s
If you make the following:
private const int SomeValue = 3;
...
if (SomeValue > 0)
...
Then it will compile to byte code like the following:
{contents of if block here}
const variables are [allowed to be] substituted and evaluated by the compiler instead of at run time, whereas readonly variables are always evaluated at run time. This makes a big difference when you expose fields to other assemblies, as a change to a const variable is a breaking change that forces a recompile of all dependent assemblies.
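You can see this distinction via reflection: const fields surface as literal fields (IsLiteral), while readonly fields are ordinary fields (IsInitOnly). A minimal sketch (Sample is an illustrative type, not from the question):

```csharp
using System;
using System.Reflection;

class Sample
{
    public const int ConstValue = 3;              // baked into metadata as a literal
    public static readonly int ReadonlyValue = 3; // a real field, read at run time
}

static class ConstInspector
{
    // True when the named public static field is a compile-time literal (const).
    public static bool IsCompileTimeConstant(Type type, string field)
    {
        return type.GetField(field, BindingFlags.Public | BindingFlags.Static)
                   .IsLiteral;
    }
}
```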
My recommendation
I see two reasonably easy options for watermarking, though I'm not an expert in the area so don't know how "good" they are overall.
Watermark the embedded splash screen or About box logo image.
Watermark the symmetric key used for loading your string resources. Keep a cache so you only have to decode them once, and it won't be a performance problem; this is a variation on a commonly used obfuscation technique. The strings are stored in the binary as UTF-8 encoded strings and can be replaced in place, as long as the new string's null-terminated length is less than or equal to the length of the string currently found in the binary.
Finally, Google reported the following article on watermarking software that you might want to take a look at.
In C++ (for example):
#define GUID_TO_REPLACE "CC7839EB7EC047B290D686C65F98E0F4"
printf(GUID_TO_REPLACE);
in PHP:
<?php
exec("sed -e 's/CC7839EB7EC047B290D686C65F98E0F4/replacedreplacedreplacedreplaced/g' TestApp.exe > TestAppTagged.exe");
?>
If you stick your compiled binary on the server, visit the php script, download the tagged exe, and run it...you'll see that it now prints the "replaced" string rather than the GUID :)
Note that the length of the replaced string must be identical to the original (32 in this case), so you'll need to pad the length if you want to tag it with something shorter.
I'm not sure what you mean by "location" of a const value. You can certainly use items like reflection to access a const field on a particular type. Const fields bind like any other non-instance field of the same accessibility. I don't know if that fits your definition of location though.
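For completeness, here is a minimal sketch of reading a const field's value by name at run time via reflection (the Branding type and its field name are hypothetical):

```csharp
using System;
using System.Reflection;

class Branding
{
    public const string Watermark = "CC7839EB7EC047B290D686C65F98E0F4";
}

static class ConstReader
{
    // GetRawConstantValue returns the literal stored in metadata,
    // without needing an instance of the type.
    public static object Read(Type type, string fieldName)
    {
        return type.GetField(fieldName,
                             BindingFlags.Public | BindingFlags.NonPublic |
                             BindingFlags.Static)
                   .GetRawConstantValue();
    }
}
```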

How do I get the characters for context-shaped input in a complex script?

In some right-to-left languages (like Arabic, Persian, Urdu, etc.), each letter can have different shapes: an isolated form, an initial form, and a medial form (you can find them in the Windows Character Map for any Unicode font).
Imagine you need the exact characters the user has entered into a text box. By default, when you convert the String to a char array, each character comes back in its isolated form.
(Presumably, when the user enters the characters by keyboard, they are in isolated form, and they are converted to the proper form only when displayed on screen; this is just a guess, because if you build the string using the exact character codes, it produces the proper array.)
My question is: how can we get that form of the string, the form that is displayed in the text box?
If there is no way in .NET, then I need to write my own class to convert this. T_T
Windows uses Uniscribe to perform contextual shaping for complex scripts (which can apply to left-to-right as well as right-to-left languages). The displayed text in a text box is based on the glyph info after the characters have been fed through Uniscribe. Although the Unicode standard defines code points for the isolated, initial, medial, and final forms of a character, not all fonts necessarily support them; a font may instead have pre-shaped glyphs or compose a form from a combination of glyphs. Uniscribe uses a shaping engine from the Windows language pack to determine which glyph(s) to use, based on the font's cmap. Here are some relevant links:
More Uniscribe Mysteries (explains difference between glyphs and characters)
Microsoft Bhasha, Glyph Processing: Uniscribe
MSDN: Complex Scripts Awareness
Buried in the bowels of Mozilla code is code that handles complex script rendering using Uniscribe. There's also additional code that scans the list of fonts in the system and reads the cmap tables of each font. (From the comments at http://www.siao2.com/2005/12/06/500485.aspx).
Sorting it all Out: Did he say shaping? It's not in the script!
The TextRenderer.DrawText() method uses Uniscribe via the Win32 DrawTextExW() function, using the following P/Invoke:
[DllImport("user32.dll", CharSet=CharSet.Unicode, SetLastError=true)]
public static extern int DrawTextExW(HandleRef hDC,
                                     string lpszString,
                                     int nCount,
                                     ref RECT lpRect,
                                     int nFormat,
                                     [In, Out] DRAWTEXTPARAMS lpDTParams);

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int left;
    public int top;
    public int right;
    public int bottom;
}

[StructLayout(LayoutKind.Sequential)]
public class DRAWTEXTPARAMS
{
    public int iTabLength;
    public int iLeftMargin;
    public int iRightMargin;
    public int uiLengthDrawn;
}
This is a bit of a wild guess, but does String.Normalize() help here? It is unclear to me whether that just covers character composition or if it includes positional forms as well.
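For what it's worth, Normalize() with NormalizationForm.FormKC goes the opposite direction: compatibility decomposition folds Arabic presentation forms back to their base letters rather than producing positional forms. A minimal sketch:

```csharp
using System;
using System.Text;

// NFKC applies compatibility decomposition: a presentation-form code point
// such as U+FEB3 (ARABIC LETTER SEEN INITIAL FORM) is folded back to its
// base letter U+0633 (ARABIC LETTER SEEN), not the other way around.
static class NormalizeDemo
{
    public static string FoldPresentationForms(string s)
    {
        return s.Normalize(NormalizationForm.FormKC);
    }
}
```

So Normalize() could help you compare shaped and unshaped strings, but it cannot generate the positional forms the question asks for.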
So how are you creating the "wrong" string? If you're just putting it in a string literal, then it's quite possible it's just the input method that's wrong. If you copy the "right" string after displaying it, and then paste that into a string literal, what happens? You might also want to check which encoding Visual Studio is using for your source files. If you're not putting the string into your source code as a literal, how are you creating it?
Given the possibility for confusion, I think I'd want to either keep these strings in a resource, or hard code them using unicode escaping:
string text = "\ufb64\ufea0\ufe91\ufeea";
(Then possibly put a comment afterwards showing the non-escaped value; at least then if it looks about right, it won't be too misleading. Admittedly it's then easy for the two to get out of sync...)

Categories

Resources