string FolderPath = "C:/Program Files";
DirectorySecurity fs = Directory.GetAccessControl(FolderPath, AccessControlSections.All);
foreach (FileSystemAccessRule fileSystemAccessRule in fs.GetAccessRules(true, true, typeof(System.Security.Principal.NTAccount)))
{
    string userNameDomain = fileSystemAccessRule.IdentityReference.Value;
    string userRights = fileSystemAccessRule.FileSystemRights.ToString();
    string[] row = { userNameDomain, userRights };
    var listViewItem = new ListViewItem(row);
    lv_perm.Items.Add(listViewItem);
}
When I do this on "regular" files and folders (e.g. C:/Users/Julien/Desktop/New folder), everything looks good:
ListView:
But when I do this on a folder with "special" rights (e.g. C:/Program Files), I get duplicate IdentityReference.Value entries associated with strange numbers for the rights:
ListView (incorrect):
I don't see that many rights entries with strange numbers when I open the Permissions tab in the C:/Program Files properties.
Am I doing something wrong?
EDIT: From the page linked Here:
Using .NET you may think that determining which permissions are
assigned to a directory/file should be quite easy, as there is a
FileSystemRights Enum defined that seems to contain every possible
permission that a file/directory can have and calling
AccessRule.FileSystemRights returns a combination of these values.
However, you will soon come across some permissions where the value in
this property does not match any of the values in the FileSystemRights
Enum (I do wish they wouldn’t name some properties with the same name
as a Type but hey).
The end result of this is that for some files/directories you simply
cannot determine which permissions are assigned to them. If you do
AccessRule.FileSystemRights.ToString then for these values all you see
is a number rather than a description (e.g Modify, Delete, FullControl
etc). Common numbers you might see are:
-1610612736, -536805376, and 268435456
To work out what these permissions actually are, you need to look at
which bits are set when you treat that number as 32 separate bits
rather than as an Integer (as Integers are 32 bits long), and compare
them to this diagram:
msdn.microsoft.com/en-us/library/aa374896(v=vs.85).aspx
So for example, -1610612736 has the first bit and the third bit set,
which means it is GENERIC_READ combined with GENERIC_EXECUTE. So now
you can convert these generic permissions into the specific file
system permissions that they correspond to.
You can see which permissions each generic permission maps to here:
msdn.microsoft.com/en-us/library/aa364399.aspx. Just be aware
that STANDARD_RIGHTS_READ, STANDARD_RIGHTS_EXECUTE and
STANDARD_RIGHTS_WRITE are all the same thing (no idea why, seems
strange to me) and actually all equal the
FileSystemRights.ReadPermissions value.
So I think that, because GetAccessRules is unable to group entries such as:
NT SERVICE\TrustedInstaller FullControl
and
NT SERVICE\TrustedInstaller 268435456
it creates two distinct entries.
I have to correct the FileSystemRights values so that they fit the enumeration:
268435456 - FullControl
-536805376 - Modify, Synchronize
-1610612736 - ReadAndExecute, Synchronize
This issue has existed since 2014, and it still exists today.
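A minimal sketch of such a correction, assuming we only need to handle the generic-rights bits from WinNT.h (the class and method names here are my own, not part of any API):

```csharp
using System;
using System.Security.AccessControl;

static class GenericRightsTranslator
{
    // Generic-rights bits, values from WinNT.h.
    const uint GENERIC_READ    = 0x80000000;
    const uint GENERIC_WRITE   = 0x40000000;
    const uint GENERIC_EXECUTE = 0x20000000;
    const uint GENERIC_ALL     = 0x10000000;

    // Translate a raw access mask that may contain generic bits into a
    // FileSystemRights combination. This mirrors the mapping table above;
    // it is an approximation, not an official API.
    public static FileSystemRights Translate(int rawMask)
    {
        uint mask = unchecked((uint)rawMask);
        // Keep any specific bits that are already valid FileSystemRights.
        var rights = (FileSystemRights)(mask & 0x0FFFFFFF);

        if ((mask & GENERIC_ALL) != 0)
            rights |= FileSystemRights.FullControl;
        if ((mask & GENERIC_READ) != 0)
            rights |= FileSystemRights.Read | FileSystemRights.Synchronize;
        if ((mask & GENERIC_WRITE) != 0)
            rights |= FileSystemRights.Write | FileSystemRights.Synchronize;
        if ((mask & GENERIC_EXECUTE) != 0)
            rights |= FileSystemRights.ReadAndExecute | FileSystemRights.Synchronize;

        return rights;
    }
}
```

With this, Translate(268435456) yields FullControl, Translate(-536805376) yields Modify, Synchronize, and Translate(-1610612736) yields ReadAndExecute, Synchronize, matching the table above.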
You're not doing anything wrong. The access mask is actually represented as an int internally (and on the file system), but the FileSystemRights enum is incomplete.
Therefore the visualizers, and the functions that convert a FileSystemRights value to a string, are at a loss and simply give you the numeric value (but as a string).
This means in order to make sense of them you'd have to inspect WinNT.h (from the Windows SDK) and look up all the possible access mask values - including generic ones - and provide a manual way of converting the numeric representation to a more human-readable string.
What you encountered are two distinct ACEs with different (but semantically equivalent) access masks, it seems. This is perfectly fine and happens in the wild (as your screenshot proves!). The .NET framework, however, chooses to ignore the issue.
If you look at the ACL with PowerShell's Get-Acl or with icacls (a tool which has been on board since Windows 2000), you can see that there are potentially differences in how the individual ACEs are inherited or propagate down the filesystem hierarchy.
Another alternative would be to call MapGenericMask() and have it perform the mapping. If you give it GENERIC_ALL (== 0x10000000), it returns 0x001f01ff, which corresponds to FileSystemRights.FullControl. This way you can "bend" the generic access mask into a form understood by the .NET Framework.
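A sketch of that approach via P/Invoke; the GENERIC_MAPPING layout and the FILE_GENERIC_* values come from WinNT.h, while the wrapper class name is my own. Note this requires Windows, since MapGenericMask lives in advapi32.dll:

```csharp
using System;
using System.Runtime.InteropServices;

static class GenericMaskMapper
{
    [StructLayout(LayoutKind.Sequential)]
    struct GENERIC_MAPPING
    {
        public uint GenericRead;
        public uint GenericWrite;
        public uint GenericExecute;
        public uint GenericAll;
    }

    [DllImport("advapi32.dll")]
    static extern void MapGenericMask(ref uint accessMask, ref GENERIC_MAPPING mapping);

    // Generic-to-specific mapping for file objects, values from WinNT.h.
    static GENERIC_MAPPING FileMapping = new GENERIC_MAPPING
    {
        GenericRead    = 0x00120089, // FILE_GENERIC_READ
        GenericWrite   = 0x00120116, // FILE_GENERIC_WRITE
        GenericExecute = 0x001200A0, // FILE_GENERIC_EXECUTE
        GenericAll     = 0x001F01FF, // FILE_ALL_ACCESS
    };

    // Replace any generic bits in the mask with the specific file rights.
    public static uint Map(uint accessMask)
    {
        MapGenericMask(ref accessMask, ref FileMapping);
        return accessMask;
    }
}
```

For example, Map(0x10000000) (GENERIC_ALL) returns 0x001F01FF, which casts cleanly to FileSystemRights.FullControl.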
The methods in the .NET Framework's DirectorySecurity class (e.g. GetAccessRules()) are far too slow for my purposes. Instead, I wish to directly query the NTFS $Secure metafile (or, alternatively, the $SDS stream) in order to retrieve a list of local accounts and their associated permissions for each file system object.
My plan is to first read the $MFT metafile (which I've already figured out how to do) - and then, for each entry therein, look up the appropriate security descriptor in the metafile (or stream).
The ideal code block would look something like this:
//I've already successfully written code for MFTReader:
var mftReader = new MFTReader(driveToAnalyze, RetrieveMode.All);
IEnumerable<INode> nodes = mftReader.GetNodes(driveToAnalyze.Name);
foreach (NodeWrapper node in nodes)
{
//Now I wish to return security information for each file system object
//WITHOUT needing to traverse the directory tree.
//This is where I need help:
var securityInfo = GetSecurityInfoFromMetafile(node.FullName, node.SecurityID);
yield return Tuple.Create(node.FullName, securityInfo.PrincipalName, DecodeAccessMask(securityInfo.AccessMask));
}
And I would like my output to look like this:
c:\Folder1\File1.txt jane_smith Read, Write, Execute
c:\Folder1\File1.txt bill_jones Read, Execute
c:\Folder1\File2.txt john_brown Full Control
etc.
I am running .NET 4.7.1 on Windows 10.
There's no API to read directly from $Secure, just like there is no API to read directly from $MFT. (There's FSCTL_QUERY_FILE_LAYOUT but that just gives you an abstracted interpretation of the MFT contents.)
Since you said you can read $MFT, it sounds like you must be using a volume handle to read directly from the volume, just like chkdsk and similar tools. That allows you to read whatever you want provided you know how to interpret the on-disk structures. So your question reduces to how to correctly interpret the $Secure file.
I will not give you code snippets or exact data structures, but I will give you some very good hints. There are actually two approaches possible.
The first approach is to scan forward through $SDS. All of the security descriptors are there, in SecurityId order. You'll find that at various 16-byte-aligned offsets there is a 20-byte header that includes the SecurityId (among other information), followed by the security descriptor in serialized form. The SecurityId values appear in ascending order in $SDS. Also, every alternate 256K region in $SDS is a mirror of the previous 256K region, so to cut the work in half, only consider the regions 0..256K-1, 512K..768K-1, etc.
The second approach is to make use of the $SII index, also part of the $Secure file. The structure of this is a B-tree very similar to how directories are structured in NTFS. The index entries in $SII have SecurityId as the index for lookups, and also contain the byte offset you can go to in $SDS to find the corresponding header and security descriptor. This approach will be more performant than scanning $SDS, but requires you to know how to interpret a lot more structures.
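For reference, the 20-byte header mentioned above is commonly described in third-party NTFS documentation along these lines; this is not a Microsoft-published structure, so treat the field names as assumptions:

```csharp
using System.Runtime.InteropServices;

// Entry header in the $SDS stream, per third-party NTFS documentation.
// Each entry sits at a 16-byte-aligned offset and is followed by the
// security descriptor in self-relative (serialized) form.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct SdsEntryHeader
{
    public uint Hash;        // hash of the descriptor that follows
    public uint SecurityId;  // the SecurityId referenced by MFT records
    public ulong Offset;     // byte offset of this entry within $SDS
    public uint Length;      // length of header + descriptor
}
```

Walking $SDS then amounts to reading a header, consuming Length bytes, and advancing to the next 16-byte boundary.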
Craig pretty much covered everything; I would just like to clarify a few points. Like Craig, no code here.
Navigate to node number 9, which corresponds to $Secure.
Get all the streams and all the fragments of the $SDS stream.
Read the content and extract each security descriptor.
Use IsValidSecurityDescriptor to make sure each SD is valid, and stop when you reach an invalid one.
Remember that $Secure stores the security descriptors in self-relative format.
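A minimal sketch of that validity check, assuming the descriptor bytes have already been extracted from $SDS (the helper class is mine; IsValidSecurityDescriptor itself is the advapi32 function, so this is Windows-only):

```csharp
using System;
using System.Runtime.InteropServices;

static class SdValidator
{
    [DllImport("advapi32.dll")]
    static extern bool IsValidSecurityDescriptor(IntPtr pSecurityDescriptor);

    // Check a self-relative security descriptor extracted from $SDS.
    public static bool IsValid(byte[] sdBytes)
    {
        // Pin the managed buffer so we can hand its address to Win32.
        var handle = GCHandle.Alloc(sdBytes, GCHandleType.Pinned);
        try
        {
            return IsValidSecurityDescriptor(handle.AddrOfPinnedObject());
        }
        finally
        {
            handle.Free();
        }
    }
}
```

An all-zero buffer fails the check (the revision byte must be 1), which is what makes "stop at the first invalid SD" a practical termination condition.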
Are you using FSCTL_QUERY_FILE_LAYOUT? The only real source I have found on how to use this function is here:
https://wimlib.net/git/?p=wimlib;a=blob;f=src/win32_capture.c;h=d62f7d07ef20c08c9bec93f261131033e39b159b;hb=HEAD
It looks like the author solves the problem with security descriptors like this:
He gets essentially all information about files from the MFT, except security descriptors. For those, he reads the SecurityId field from the MFT and checks a hash table to see whether he already has a mapping from that ID to the ACL. If he does, he just returns it; otherwise he calls NtQuerySecurityObject and caches the result in the hash table. This should drastically reduce the number of calls. It assumes that there are few distinct security descriptors and that the SecurityId field correctly reflects their single-instancing.
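The caching scheme can be sketched like this (the names are mine; the query delegate stands in for the NtQuerySecurityObject call):

```csharp
using System;
using System.Collections.Generic;

// Cache security descriptors by SecurityId: NTFS single-instances
// descriptors, so each distinct ID needs to be queried only once.
class SecurityDescriptorCache
{
    readonly Dictionary<uint, byte[]> _cache = new Dictionary<uint, byte[]>();
    readonly Func<uint, byte[]> _query; // e.g. a wrapper around NtQuerySecurityObject

    public SecurityDescriptorCache(Func<uint, byte[]> query)
    {
        _query = query;
    }

    public byte[] Get(uint securityId)
    {
        if (!_cache.TryGetValue(securityId, out byte[] sd))
        {
            sd = _query(securityId); // only reached once per distinct ID
            _cache[securityId] = sd;
        }
        return sd;
    }
}
```

Since a whole volume typically has far fewer distinct security descriptors than files, the expensive call runs a handful of times instead of once per file.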
I am testing software in C# and must ensure proper behavior (graceful failure) when a program is given an invalid full path. Initially this is trivial, as I can give something like "Q:\\fakepath"; since there is no Q: drive mounted on the system, the program fails as expected.
However, I would like my test to be robust and want a way to generate a path that is guaranteed to not exist and to not be able to exist. The path must be full since if it doesn't start with a drive letter it will be treated relative to some directory, resulting in no failure.
One approach I have thought of is to search for the local drives that are mounted and then pick a drive letter that does not appear. This would work fine, and I might end up using it, but I would prefer a more elegant solution, such as using a drive letter that could not possibly exist.
Another (potential) option is to use invalid characters in the path name. However, using invalid characters is not preferred as it actually results in a different failure mode of the program.
So formally: How can I most elegantly generate a full path that is guaranteed to be invalid?
EDIT: The program I am testing will go ahead and create a directory (including parent directories) if it is on a valid drive but in a location that does not already exist. Hence, this path needs to be something that couldn't be created with something like Directory.CreateDirectory(<path>), not just something that doesn't already exist.
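The drive-letter search mentioned in the question could be sketched like this (the helper name and the trailing "fakepath" segment are arbitrary choices of mine):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class InvalidPathHelper
{
    // Return a full path rooted on a drive letter that is not mounted,
    // so the path cannot currently exist or be created.
    // Throws if all 26 letters are in use.
    public static string GetUnmountedDrivePath()
    {
        var used = new HashSet<char>();
        foreach (DriveInfo d in DriveInfo.GetDrives())
            used.Add(char.ToUpperInvariant(d.Name[0]));

        // Scan from Z downwards: high letters are less commonly mapped.
        for (char c = 'Z'; c >= 'A'; c--)
            if (!used.Contains(c))
                return c + @":\fakepath";

        throw new InvalidOperationException("All drive letters are in use.");
    }
}
```

Note the guarantee only holds at the moment of the call; a drive could of course be mounted at that letter later, which is the inherent weakness of this approach.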
One method would be to create a temporary folder. This might sound counterintuitive, but now that you have a known-empty folder, any path you specify inside it is guaranteed not to exist. For example:
//From https://stackoverflow.com/a/278457/1663001:
public string GetTemporaryDirectory()
{
string tempDirectory = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
Directory.CreateDirectory(tempDirectory);
return tempDirectory;
}
public string GetNonexistentPath()
{
    return Path.Combine(GetTemporaryDirectory(), "no-such-file");
}
One way to get a guaranteed-invalid folder path is to have a file that exists with the same name as part of the directory path.
public void Example()
{
    string filePath = Path.GetTempFileName(); // Creates a uniquely named, zero-byte temporary file on disk.
    var invalidDirectoryPath = Path.Combine(filePath, "CanNotExist");
    Directory.CreateDirectory(invalidDirectoryPath); // throws an IOException
}
You could try using one of the reserved device names, for instance C:\NUL. Trying to create such a directory will cause a DirectoryNotFoundException. More details here.
You can use a really long path (say, a thousand characters); your program probably won't be able to create it, as it is invalid.
You can try this approach. I'm not sure whether it will work, but it's worth a try.
Use a path like: Q:\asddsafkdjfkasjdfklahsjfhskdjfladjfhsafjklasdjfkashfkajsdfhasdklfjashdfkljasdhfklajdfajsdfklajsfkjasjfhadkfjasflhakldfjashdfklajsdjfhaksldjfahsdkljadfklajfkjlkajfkljagkjklfdjgklajdkfljgskljgklfjskgjfkljdsgkfsdgsfgsdfgsfggsdfgsfdgsgwesdfgdgjgfadfsfgffgfsdghijklm
Don't bother counting the letters yourself; you can use http://www.lettercount.com/
The trick is that the maximum length of a Windows folder path is 260 characters (though when I tried it on Windows 10, the maximum allowed was 247).
Source_MAX_Length_Of_Folder_On_Windows
So this folder is guaranteed never to be found. Cheers :)
That said, I think the most elegant solution is the one you already mentioned and decided to keep as a last option: checking the mounted drives and generating a path accordingly.
I am trying to retrieve the subdirectories of a path I pass in. It processes the path and gives me half of the subdirectories, but for the other half it returns a "?" when debugging. I do not know what is causing this.
Here is what I have:
string root = @"C:\Users\Documents\Meta Consumer";
string[] subDir = Directory.GetDirectories(root);
When Debugging:
1: (good)
2: (good)
3: (good)
.. ..
?: (this is where 14 is)
?: (15 is here)
.. ..
?: ?
I'm not sure of your overall goal — whether you intend to search for a specific item or to manipulate the directory at all. One thing I do see is that you haven't specified any additional search options for your array. I believe the search can be hindered by deep nesting or by permission issues.
Resolution One: Ensure that you have valid permission to do recursive searches within the specified directory.
Resolution Two: You can attempt to run a search for all items with a wildcard then force it to search all directories. This may help solve potential deep nesting issues you may encounter.
Resolution Three: Try the below code; see if it solves the issue.
string root = Environment.GetFolderPath(Environment.SpecialFolder.Documents);
string[] subDir = Directory.GetDirectories(root, "*", SearchOption.AllDirectories);
foreach (string s in subDir)
{
Console.WriteLine(s);
}
See if that returns the information it previously didn't. There are folders in your Library that, though considered public to the user, are still locked because they reside in the user profile, so permissions are a good thing to check.
Running Visual Studio as an administrator will also help in your troubleshooting. You should also check for any inner exceptions to help identify the problem.
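If permissions do turn out to be the problem, a common workaround (a sketch, not the only way) is to recurse manually and skip the directories that cannot be opened, since Directory.GetDirectories with SearchOption.AllDirectories aborts on the first UnauthorizedAccessException:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class SafeWalker
{
    // Recursively list subdirectories, skipping anything we lack
    // permission to open instead of letting the whole walk fail.
    public static IEnumerable<string> GetDirectoriesSafe(string root)
    {
        var pending = new Stack<string>();
        pending.Push(root);
        while (pending.Count > 0)
        {
            string current = pending.Pop();
            string[] children;
            try
            {
                children = Directory.GetDirectories(current);
            }
            catch (UnauthorizedAccessException) { continue; } // locked junction/profile dir
            catch (DirectoryNotFoundException) { continue; }  // vanished mid-walk

            foreach (string child in children)
            {
                yield return child;
                pending.Push(child);
            }
        }
    }
}
```

This returns every reachable subdirectory and silently drops the inaccessible ones, which is usually what you want when scanning a user profile.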
So NTFS uses a 128-bit Guid to identify files and directories, you can view this information easily enough:
C:\Temp>C:\Windows\System32\fsutil.exe objectid query .
Object ID : ab3ffba83c67df118130e0cb4e9d4076
BirthVolume ID : ca38ec6abfe0ca4baa9b54a543fdd84f
BirthObjectId ID : ab3ffba83c67df118130e0cb4e9d4076
Domain ID : 00000000000000000000000000000000
So this is obvious enough, but how does one retrieve this information programmatically? Looking at the WinAPI's OpenFileById(...), you should be able to get this information. One would expect this to be done in the "Win32 FileID API Library", yet the method there (GetFileInformationByHandleEx) returns a FILE_ID_BOTH_DIR_INFO structure. This structure defines a FileId; however, it is a LARGE_INTEGER (64-bit), not the full 128-bit identifier.
I'm guessing one could use WMI for this, is that where I should turn?
A bit of searching took me to DeviceIoControl and there lies the answer to your question: FSCTL_GET_OBJECT_ID returns exactly the same IDs as in your output from fsutil.
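A sketch of that call from C# follows. The wrapper class is mine; FSCTL_GET_OBJECT_ID and FILE_OBJECTID_BUFFER come from the Windows SDK. Note that most files simply have no object ID, in which case the FSCTL fails:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class ObjectIdReader
{
    const uint FSCTL_GET_OBJECT_ID = 0x9009C; // from WinIoCtl.h

    [StructLayout(LayoutKind.Sequential)]
    struct FILE_OBJECTID_BUFFER
    {
        public Guid ObjectId;      // what "fsutil objectid query" prints first
        public Guid BirthVolumeId;
        public Guid BirthObjectId;
        public Guid DomainId;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(
        SafeFileHandle hDevice, uint dwIoControlCode,
        IntPtr lpInBuffer, uint nInBufferSize,
        out FILE_OBJECTID_BUFFER lpOutBuffer, uint nOutBufferSize,
        out uint lpBytesReturned, IntPtr lpOverlapped);

    // Read the 128-bit NTFS object ID of a file; throws if it has none.
    public static Guid GetObjectId(string path)
    {
        using (var fs = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            if (!DeviceIoControl(fs.SafeFileHandle, FSCTL_GET_OBJECT_ID,
                                 IntPtr.Zero, 0, out FILE_OBJECTID_BUFFER buffer,
                                 (uint)Marshal.SizeOf(typeof(FILE_OBJECTID_BUFFER)),
                                 out uint _, IntPtr.Zero))
                throw new IOException("FSCTL_GET_OBJECT_ID failed",
                                      Marshal.GetLastWin32Error());
            return buffer.ObjectId;
        }
    }
}
```

There is also FSCTL_CREATE_OR_GET_OBJECT_ID, which assigns an object ID if the file doesn't have one yet, mirroring what the link-tracking service does.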
Anyhow, the docs for BY_HANDLE_FILE_INFORMATION say that the 64-bit file ID already uniquely identifies a file on a given volume. According to Wikipedia, NTFS only supports a maximum of 2^32 files, so the 128-bit ID seems quite unnecessary.
Also please note that NOT every file has a GUID. The GUID mechanism is mostly used for .lnk files, in order to keep the association when the target is moved. Only $Volume and the targets of link files have these GUIDs. Furthermore, you can set them by hand.
Their advantage is that GUIDs should not clash between volumes, while file IDs can.
The FILE_ID is actually 48 bits of MFT_RECORD_NUMBER and 16 bits of MFT_SEQUENCE_ID.
Is it possible to know the location of const variables within an exe? We were thinking of watermarking our program so that each user that downloads the program from our server will have some unique key embedded in the code.
Is there another way to do this?
You could build a binary with a watermark that is the string representation of a GUID, stored as a constant in a .NET type. After you build, search the binary file for the GUID string to find its location. You can then change this GUID value to another one, run the binary, and actually see the changed value in the code's output.
Note: The formatting is important, since the length matters when you're patching a compiled binary. For example, you'll want to keep the leading zeros of a GUID so that all instances have the same character length when converted to a string.
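The search-and-patch step could be sketched like this (the class name is mine; note that C# string literals end up in the assembly's metadata encoded as UTF-16, so that is the byte pattern to search for in a .NET binary):

```csharp
using System;
using System.IO;
using System.Text;

static class WatermarkStamper
{
    // Find a marker string inside a compiled binary and overwrite it
    // in place. The replacement must have exactly the same length so
    // the binary's layout is not disturbed.
    public static void Stamp(string exePath, string marker, string replacement)
    {
        if (marker.Length != replacement.Length)
            throw new ArgumentException("replacement must have the same length as the marker");

        byte[] image   = File.ReadAllBytes(exePath);
        byte[] needle  = Encoding.Unicode.GetBytes(marker);      // UTF-16, as stored by the C# compiler
        byte[] payload = Encoding.Unicode.GetBytes(replacement);

        int at = IndexOf(image, needle);
        if (at < 0) throw new InvalidOperationException("marker not found in binary");

        Buffer.BlockCopy(payload, 0, image, at, payload.Length);
        File.WriteAllBytes(exePath, image);
    }

    // Naive byte-pattern search; fine for a one-off server-side stamp.
    static int IndexOf(byte[] haystack, byte[] needle)
    {
        for (int i = 0; i <= haystack.Length - needle.Length; i++)
        {
            int j = 0;
            while (j < needle.Length && haystack[i + j] == needle[j]) j++;
            if (j == needle.Length) return i;
        }
        return -1;
    }
}
```

Keep in mind that patching a signed assembly invalidates its signature, which is exactly the re-signing concern raised below.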
I have actually done this sort of thing with Win32 DLLs and even the SQL Server 2000 Desktop exe. (There was a hack where you could switch the Desktop Edition into a full-blown SQL Server by flipping a switch in the binary.)
This process could then be automated, and a new copy of the DLL would be programmatically altered by a small server-side utility for each client download.
Also take a look at this: link
It discusses storing settings in a .NET DLL using a class-based approach; the app settings file is embedded and remains configurable after compilation.
Key consideration #1: Assembly signing
Since you are distributing your application, presumably you are signing it. In that case, because you're modifying the binary contents, you'll have to integrate the signing step directly into the download process.
Key consideration #2: const or readonly
There is a key difference between const and readonly variables that many people do not know about. In particular, if I do the following:
private readonly int SomeValue = 3;
...
if (SomeValue > 0)
...
Then it will compile to byte code like the following:
ldsfld [SomeValue]
ldc.i4.0
ble.s
If you make the following:
private const int SomeValue = 3;
...
if (SomeValue > 0)
...
Then it will compile to byte code like the following:
{contents of if block here}
const variables are [allowed to be] substituted and evaluated by the compiler rather than at run time, whereas readonly variables are always evaluated at run time. This makes a big difference when you expose fields to other assemblies: a change to a const variable is a breaking change that forces a recompile of all dependent assemblies.
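A minimal illustration of the difference (the type and field names are hypothetical):

```csharp
public static class BuildInfo
{
    // Inlined into every consuming assembly at compile time; patching it
    // in this binary later will NOT affect already-compiled consumers.
    public const string Watermark = "CC7839EB7EC047B290D686C65F98E0F4";

    // Loaded from this assembly at run time (ldsfld); consumers pick up
    // a patched value without being recompiled.
    public static readonly string Signature = "CC7839EB7EC047B290D686C65F98E0F4";
}
```

For watermarking, this matters: a const value may be copied into every place that uses it, so a readonly field gives you a single, predictable location to patch.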
My recommendation
I see two reasonably easy options for watermarking, though I'm not an expert in the area, so I don't know how "good" they are overall.
Watermark the embedded splash screen or About box logo image.
Watermark the symmetric key used for loading your string resources. Keep a cache so you only have to decode them once and it won't be a performance problem; this is a variation on a commonly used obfuscation technique. The strings can be replaced in-line in the binary as long as the new string's encoded length is equal to the length of the string currently found there (for .NET assemblies, string literals are stored as UTF-16).
Finally, Google reported the following article on watermarking software that you might want to take a look at.
In C++ (for example):
#define GUID_TO_REPLACE "CC7839EB7EC047B290D686C65F98E0F4"
printf(GUID_TO_REPLACE);
in PHP:
<?php
exec("sed -e 's/CC7839EB7EC047B290D686C65F98E0F4/replacedreplacedreplacedreplaced/g' TestApp.exe > TestAppTagged.exe");
?>
If you stick your compiled binary on the server, visit the PHP script, download the tagged exe, and run it, you'll see that it now prints the "replaced" string rather than the GUID :)
Note that the length of the replaced string must be identical to the original (32 in this case), so you'll need to pad the length if you want to tag it with something shorter.
I'm not sure what you mean by the "location" of a const value. You can certainly use reflection to access a const field on a particular type. Const fields bind like any other non-instance field of the same accessibility. I don't know whether that fits your definition of location, though.