Code Access Security - Basics and Example - c#

I was going through this link to understand CodeAccessSecurity:
http://www.codeproject.com/KB/security/UB_CAS_NET.aspx
It's a great article but it left me with following questions:
If you can demand and get whatever permissions you want, then any executable can get FullTrust on the machine. If the permissions are already there, then why do we need to demand them?
The code is executing on the server, so the permissions apply on the server, not on the client machine?
The article takes the example of removing write permissions from an assembly to show a security exception. But in the real world the System.IO assembly (or related classes) will take care of these permissions, so is there a real scenario where we will need CAS?

The idea of "least privilege access" a very important Principal of secuirty. A hacker is going to make your application do something that it wasn't intended to do. Whatever rights the application has at the time of attack then the attacker will have thoughs same rights. You can't stop every attack against your application, so you need lower the impact of a possible attack as much as you can. This isn't bullet proof, but this significantly raises the bar. An attacker maybe able to chain a privilege escalation attack in his exploit.
In most situations you can't control the actions of the client. In general you should assume that the attacker can control the client using a debugger or a using modified or rewritten client. This is especially true for web applications. You want to protect the server as much as possible, and adjusting permissions is a common way of doing that.
Sorry, I can't answer this one without Google. But CAS is deprecated anyway.
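For what it's worth, here's a minimal sketch (mine, not the article's) of the demand/deny behaviour the article demonstrates, assuming an older .NET Framework version where CAS is still enforced (it's effectively gone from .NET 4 onward). The file path is just an example.

    // Deny write permission for the current stack, then watch the BCL's own
    // demand inside File.WriteAllText fail with a SecurityException.
    using System;
    using System.IO;
    using System.Security;
    using System.Security.Permissions;

    class CasDemo
    {
        static void Main()
        {
            FileIOPermission deny = new FileIOPermission(
                FileIOPermissionAccess.Write, @"C:\temp\test.txt"); // example path
            deny.Deny(); // revoke write access for everything below this frame

            try
            {
                File.WriteAllText(@"C:\temp\test.txt", "hello");
            }
            catch (SecurityException ex)
            {
                Console.WriteLine("CAS blocked the write: " + ex.Message);
            }
        }
    }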

How to detect an external program monitoring your process?

Is it possible in .net to determine if another program is monitoring your process?
What I mean here is I have an exe running and if someone launches procmon.exe or some other app that tries to read some information about my exe, I want my exe to log this.
This is a vast and complex topic, and I'm only acquainted with its existence, not an expert. So all I can offer is a search term:
anti-debugging
It covers detection of monitoring tools, countermeasures to prevent inspection, and obfuscation to make information gained through monitoring quite useless.
Do be aware that there is an arms race between the reversers, who want to debug any and all code running on their system, and the DRM designers1, who want to protect their secrets from curious minds. Unless you're willing to dedicate your life to becoming an expert, you're probably stuck buying solutions from someone who is. Or just deciding that it isn't worth it.
1 Even if you believe content owners have the moral right2 to ban reverse engineering, please note that no one benefits from protective obscurity quite as much as malware authors.
2 Also, it's quite difficult to maintain a neutral expression. But I tried.
The monitoring process can simply take information about your process from the operating system (e.g. Task Manager, perfmon, etc.). In this case your process does not know anything about it.
Alternatively, the monitoring process could attach to and debug your process. When the debugger attaches, your process is stopped and the debugger can get information about its execution, so your process cannot "detach the debugger on its own" without some additional security measures.
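To make the second case concrete, here's a minimal sketch (my own, not from the answers above): a managed process can at least detect an attached user-mode debugger via Debugger.IsAttached and the Win32 CheckRemoteDebuggerPresent call. Dedicated anti-debugging goes far beyond this, and monitoring done purely through the OS (Task Manager, procmon) is not visible this way.

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class DebuggerCheck
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CheckRemoteDebuggerPresent(IntPtr hProcess, ref bool isDebuggerPresent);

        static void Main()
        {
            bool nativeDebugger = false;
            CheckRemoteDebuggerPresent(Process.GetCurrentProcess().Handle, ref nativeDebugger);

            // Debugger.IsAttached only covers managed debuggers; the Win32 call
            // also catches native debuggers attached to this process.
            Console.WriteLine("Managed debugger attached: " + Debugger.IsAttached);
            Console.WriteLine("Native debugger attached: " + nativeDebugger);
        }
    }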

Automatically refreshing cache on a IIS server

Sorry for the lack of knowledge here guys, but the agency I work for has inherited a site built on Windows Server with ASP.NET.
We're having problems with the user login system. The basic story is that when a user requests an account, an admin must approve it. If the admin rejects the application, the user can still log in by using the forgot-password option.
My initial reaction is bad logic in the code, but we've had some ASP guys take a quick look and they were unable to reproduce the problem. Because of this they've suggested it might be some kind of caching issue. If so, is there any way to set the server to reset its cache every hour or so? Any other recommendations on this are also welcome. Sorry for the complete lack of code examples, but a scenario is all I can offer right now.
Cheers.
If you can't reproduce it then it's hard to say there's a problem, although I concede there might be a caching issue. But without knowing more about your application, all I can suggest is to grep your source code for references to the System.Web.Caching.Cache class (usually accessed via the Page class directly, if not via HttpContext.Current).
Setting aside the idea of the cache being responsible, I think it's just a case of bad coding in the forgot-password feature. My guess is that it resets some kind of "account disabled" flag, and that's what allows the user to get in.
The ASP.NET caching class has an iterator for the cached items and a method to remove an item. It's difficult to host long-running processes in IIS -- because "idle" worker processes get killed -- so you can't just run a timer. But you can use the IIS auto-start feature, combined with a persistent store, to run your "nuke the cache" method every hour (sketched below).
With that said, I can't see how this is a sensible solution to the problem -- I hope the plan here is just to help isolate the real issue.
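If it helps, a rough sketch of that "nuke the cache" method, assuming the site uses ASP.NET's HttpRuntime.Cache (the class and methods are real, but how and when you call this, via IIS auto-start, a scheduled request, etc., is up to you):

    using System.Collections;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class CacheNuker
    {
        public static void ClearAll()
        {
            Cache cache = HttpRuntime.Cache;
            List<string> keys = new List<string>();

            // Collect the keys first; removing while enumerating would
            // invalidate the enumerator.
            foreach (DictionaryEntry entry in cache)
            {
                keys.Add((string)entry.Key);
            }

            foreach (string key in keys)
            {
                cache.Remove(key);
            }
        }
    }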

Howto - Authenticating a calling assembly across a tcp/ip boundary

I have a server engine, which generally runs on the server (imaginatively enough). But occasionally it will be executing on a client, or on another server, and will want to use this server to do some extra/additional processing.
The link between them will be protected by SSL/TLS and certificates, so the communication will be secure, but I'm not sure that the calling engine is my code.
How would you go about authenticating that engine? What would Alice and Bob say on this subject?
You won't find a 100% way to do this.
Since:
The code is running on the client
Then the code is available for disassembly
Then whatever security measures you have placed into the code running on the client, an attacker can circumvent
Basically, since Karl can phone up Alice and pretend to be Bob, by having Bob's voice, and knowing everything Bob knows (the disassembled code), then there is no way Alice can verify that it really is Bob, or just a very good impostor.
If you design your software so that it can only run on specific types of hardware, with TPM or similar technology, then you might have a chance; but through software alone, you cannot create a 100% solution.
Even with a TPM-enabled solution, you still risk an impostor circumventing it by sitting in between.
It all depends on what kind of attacks you want to prevent.
There is no general solution for this problem. You can't know for sure that the remote machine will not be suborned by an attacker, and thus you cannot be absolutely sure that remote code in communication with you is the code you originally intended to be communicating with you.
People use TPMs and similar mechanisms to try to do remote attestation of the trustworthiness of the remote hardware, but trying to do such things purely in software is hopeless as you cannot know what is running on the remote side. In the general case, given long enough an attacker can also suborn a TPM.
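In practice the best you can do is raise the bar. Here is a minimal sketch of mutual TLS with SslStream, where the server requires a client certificate and pins its thumbprint, so only an engine holding the matching private key gets in. The file name, password, port and thumbprint below are assumptions for illustration, and as noted above this still cannot prove the remote code is yours.

    using System;
    using System.Net;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;
    using System.Security.Cryptography.X509Certificates;

    class EngineListener
    {
        // Hypothetical thumbprint of the certificate issued to the trusted engine.
        const string ExpectedThumbprint = "0011223344556677889900112233445566778899";

        static void Main()
        {
            X509Certificate2 serverCert = new X509Certificate2("server.pfx", "password");
            TcpListener listener = new TcpListener(IPAddress.Any, 8443);
            listener.Start();

            using (TcpClient client = listener.AcceptTcpClient())
            using (SslStream ssl = new SslStream(client.GetStream(), false, ValidateEngine))
            {
                // Second argument: clientCertificateRequired = true.
                ssl.AuthenticateAsServer(serverCert, true, SslProtocols.Tls12, true);
                // ... exchange engine requests/responses over 'ssl' ...
            }
        }

        static bool ValidateEngine(object sender, X509Certificate cert,
            X509Chain chain, SslPolicyErrors errors)
        {
            if (errors != SslPolicyErrors.None || cert == null) return false;
            // Certificate pinning: accept only the certificate we issued to our engine.
            return string.Equals(new X509Certificate2(cert).Thumbprint,
                ExpectedThumbprint, StringComparison.OrdinalIgnoreCase);
        }
    }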

Filling Windows XP Security Event Log

I am in need of filling the Windows Security Event Log to a near-full state. Since write access to this log is not possible, could anybody please advise an action that could be performed programmatically which would add an entry to this log? It does not need to be of any significance as long as it gives an entry (one with the least overhead would be desired, as it will need to be executed thousands of times).
This is needed purely for testing purposes on a testing rig, any dirty solution will do. Only requirement is that it's .NET 2.0 (C#).
You can enable all the security auditing categories in Local Security Policy (secpol.msc | Local Policies | Audit Policy). Object access tends to give plenty of events. Enabling file access auditing and then setting an audit entry for Everyone on some frequently accessed files and folders will also generate lots of events.
And that's just normal usage; it includes any programmatic access to the audited resources (it's all programmatic in the end, just someone else's program).
Enable logon auditing as Richard mentioned above. Success or failure events depend on how you handle step 2:
Use LogonUser to impersonate a local user on the system - or FAIL to impersonate that local user on the system. Tons of samples via Google for viable C# implementations.
Call it in a tight loop, repeatedly.
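A minimal sketch of steps 2 and 3, assuming logon auditing is already enabled: repeated failed LogonUser calls for a made-up account generate failure-audit entries (event 529 on XP) in the Security log. The user name and password here are deliberately wrong.

    using System;
    using System.Runtime.InteropServices;

    class SecurityLogFiller
    {
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        [DllImport("kernel32.dll")]
        static extern bool CloseHandle(IntPtr handle);

        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        static void Main()
        {
            for (int i = 0; i < 10000; i++)
            {
                IntPtr token;
                // Deliberately wrong credentials: each failure is audited.
                LogonUser("no_such_user", ".", "wrong_password",
                    LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token);
                if (token != IntPtr.Zero)
                    CloseHandle(token);
            }
        }
    }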
Another approach involves enabling object access auditing and doing a large number of file or registry I/O operations. This will also fill the log completely in an extremely short period of time.
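A small follow-on sketch for that approach: with an audit (SACL) entry configured on a registry key, each write below adds an object-access event to the Security log. The key name is made up for the example.

    using Microsoft.Win32;

    class ObjectAccessFiller
    {
        static void Main()
        {
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\AuditTest"))
            {
                for (int i = 0; i < 10000; i++)
                {
                    key.SetValue("counter", i); // each audited write generates an event
                }
            }
        }
    }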

Please help me with a program for virus detection using detection of malicious behavior

I know how antivirus software detects viruses. I read a few articles:
How do antivirus programs detect viruses?
http://www.antivirusworld.com/articles/antivirus.php
http://www.agusblog.com/wordpress/what-is-a-virus-signature-are-they-still-used-3.htm
http://hooked-on-mnemonics.blogspot.com/2011/01/intro-to-creating-anti-virus-signatures.html
During the one-month vacation I'm having, I want to learn and code a simple virus-detection program.
So, there are 2-3 ways (from above articles):
Virus Dictionary : Searching for virus signatures
Detecting malicious behavior
I want to take the 2nd approach. I want to start off with simple things.
As a side note, recently I encountered a software named "ThreatFire" for this purpose. It does a pretty good job.
The first thing I don't understand is how this program can intervene in the execution of another program and prompt the user about its action. Isn't that some kind of violation?
How does it scan the memory of other programs? A program is confined to its own virtual address space, right?
Is C#/.NET suitable for doing this kind of stuff?
Please post your ideas on how to go about it, and also mention some simple things that I could do.
This happens because the software in question likely has a special driver installed that gives it low-level kernel access, which allows it to intercept and deny various potentially malicious behavior.
Having the rights that many drivers do grants it the ability to scan another process's memory space.
No. C# needs a good chunk of the operating system already loaded. Drivers need to load first.
Learn about driver and kernel-level programming... I've not done so, so I can't be of more help here.
I think system calls are the way to go, and a lot more doable than actually trying to scan multiple processes' memory spaces. While I'm not a low-level Windows guy, it seems like this can be accomplished using Windows API hooks: tie-ins to the low-level API that can modify the system-wide response to a system call. These hooks can be installed as something like a kernel module, and can intercept and potentially modify system calls. I found an article on CodeProject that offers more information.
In a machine learning course I took, a group decided to try something similar to what you're describing for a semester project. They used a list of recent system calls made by a program to determine whether or not the executing program was malicious, and the results were promising (think 95% recognition on new samples). In their project, they trained using SVMs on windowed call lists, and used that to determine a good window size. After that, you can collect system call lists from different malicious programs, and either train on the entire list, or find what you consider "malicious activity" and flag it. The cool thing about this approach (aside from the fact that it's based on ML) is that the window size is small, and that many trained eager classifiers (SVM, neural nets) execute quickly.
Anyway, it seems like it could be done without the ML if it's not your style. Let me know if you'd like more info about the group- I might be able to dig it up. Good luck!
Windows provides APIs to do that (generally they involve running at least some of your code in the kernel). If you have sufficient privileges, you can also inject a DLL into another process. See http://en.wikipedia.org/wiki/DLL_injection.
When you have the powers described above, you can do that. You are either in kernel space and have access to everything, or inside the target process.
At least for the low-level in-kernel stuff you'd need something more low-level than C#, like C or C++. I'm not sure, but you might be able to do some of the rest in a C# app.
The DLL injection sounds like the simplest starting point. You're still in user space, and don't have to learn how to live in the kernel world (it's a completely different world, really).
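To give an idea of what that starting point looks like, here's a bare-bones sketch of classic DLL injection (the technique the Wikipedia link describes), not production code: allocate memory in the target process, write the DLL path into it, and start a remote thread at LoadLibraryA. It needs sufficient privileges, and the target process name and DLL path are made up for the example.

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;
    using System.Text;

    class Injector
    {
        const uint PROCESS_ALL_ACCESS = 0x001F0FFF;
        const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr OpenProcess(uint access, bool inherit, int pid);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
        static extern IntPtr GetModuleHandle(string name);
        [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
        static extern IntPtr GetProcAddress(IntPtr module, string procName);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAllocEx(IntPtr process, IntPtr addr, uint size, uint type, uint protect);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteProcessMemory(IntPtr process, IntPtr addr, byte[] buffer, uint size, out UIntPtr written);
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateRemoteThread(IntPtr process, IntPtr attrs, uint stackSize, IntPtr startAddr, IntPtr param, uint flags, IntPtr threadId);

        static void Main()
        {
            string dllPath = @"C:\tools\monitor.dll";                   // hypothetical DLL
            Process target = Process.GetProcessesByName("notepad")[0];  // hypothetical target

            IntPtr hProc = OpenProcess(PROCESS_ALL_ACCESS, false, target.Id);
            IntPtr loadLibrary = GetProcAddress(GetModuleHandle("kernel32.dll"), "LoadLibraryA");

            // Copy the DLL path into the target's address space.
            byte[] pathBytes = Encoding.ASCII.GetBytes(dllPath + "\0");
            IntPtr remoteBuf = VirtualAllocEx(hProc, IntPtr.Zero, (uint)pathBytes.Length,
                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            UIntPtr written;
            WriteProcessMemory(hProc, remoteBuf, pathBytes, (uint)pathBytes.Length, out written);

            // The remote thread runs LoadLibraryA(dllPath) inside the target process.
            CreateRemoteThread(hProc, IntPtr.Zero, 0, loadLibrary, remoteBuf, 0, IntPtr.Zero);
        }
    }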
Some loose ideas on topic in general:
you can interpose system calls issued by the traced process. It is generally assumed that a process cannot do anything "dangerous" without issuing a system call.
you can intercept its network traffic and see where it connects to, what it sends, what it receives, which files it touches, which system calls fail
you can scan its memory and simulate its execution in a sandbox (really hard)
with the system call interposition, you can simulate some responses to the system calls, but really just sandbox the process
you can scan the process memory and extract some general characteristics from it (connects to the network, modifies registry, hooks into Windows, enumerates processes, and so on) and see if it looks malicious
just put the entire thing in a sandbox and see what happens (a nice sandbox has been made for Google Chrome, and it's open source!)
