Filling Windows XP Security Event Log - c#

I need to fill the Windows Security Event Log to a near-full state. Since write access to this log is not possible, could anybody please advise an action that could be programmatically performed to add an entry to this log? It does not need to be of any significance as long as it produces an entry (the one with the least overhead would be preferred, as it will need to be executed thousands of times).
This is needed purely for testing purposes on a testing rig, so any dirty solution will do. The only requirement is that it's .NET 2.0 (C#).

You can enable all the security auditing categories in Local Security Policy (secpol.msc | Local Policies | Audit Policy). Object access tends to generate plenty of events. Enabling file access auditing and then setting an audit entry for Everyone on some frequently accessed files and folders will also generate lots of events.
And that's just normal usage, which includes any programmatic access to the audited resources (it's all programmatic in the end, just someone else's program).

Enable logon auditing as Richard mentioned above. Whether you audit Success or Failure depends on how you handle step 2:
Use LogonUser to impersonate a local user on the system - or fail to impersonate that local user. There are tons of samples via Google for viable C# implementations.
Call it in a tight loop, repeatedly.
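The steps above can be sketched roughly as follows. This is a minimal .NET 2.0 P/Invoke sketch, assuming "Audit logon events" (Failure) is enabled: each deliberately failed LogonUser call should produce one failure-audit entry in the Security log (the account and password below are made up on purpose).

```csharp
using System;
using System.Runtime.InteropServices;

class FillSecurityLog
{
    // advapi32 LogonUser: a failed call produces a logon-failure audit
    // entry when "Audit logon events" (Failure) is enabled in secpol.msc.
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(
        string lpszUsername, string lpszDomain, string lpszPassword,
        int dwLogonType, int dwLogonProvider, out IntPtr phToken);

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr handle);

    const int LOGON32_LOGON_INTERACTIVE = 2;
    const int LOGON32_PROVIDER_DEFAULT = 0;

    static void Main()
    {
        for (int i = 0; i < 10000; i++)
        {
            IntPtr token;
            // Deliberately wrong credentials: each failed attempt
            // writes one failure-audit entry to the Security log.
            bool ok = LogonUser("nosuchuser", ".", "wrong-password",
                LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
                out token);
            if (ok)
                CloseHandle(token); // shouldn't happen with a bogus account
        }
    }
}
```

Note the loop count and credentials are arbitrary; tune the iteration count to how full you need the log.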
Another approach is to enable object access auditing and then perform a large number of file or registry I/O operations. This will also fill the log completely in a very short time.

Related

How to lock down a web application for maintenance without loss of unsaved work?

I have a large web application in ASP .NET (not that the technology matters here) that does not currently have a way to be locked down for maintenance without current users losing their work. Since I have not implemented something like this before, I would like to hear about some of the standard precautions and steps developers take for such an operation.
Here are some of the questions that I can think of:
Should each page redirect to a "Site down for maintenance" page or is there a more central way to prevent interaction?
How to centralize a scheduled maintenance so that user operations lock down before the site itself is locked down, thus preventing loss of unsaved work?
The application is data-driven and implements transaction scopes at the business layer. It does not use load balancing or replication. I may be wrong, but it does not 'feel right' to have the BLL handle this. Any suggestions or links to articles would be appreciated.
One way to make a maintenance page is to use the app_offline.htm feature of ASP.NET on IIS. Using this feature you can show the same HTML page to all users, notifying them about the maintenance.
There is a nice post about it here on Stack Overflow: ASP.NET 2.0 - How to use app_offline.htm.
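A minimal sketch of such a file (the wording is up to you): drop it into the application root and ASP.NET 2.0+ serves it for every request until you delete it. One commonly reported caveat is that the file should be padded past 512 bytes, or some Internet Explorer versions substitute their own "friendly" error page.

```html
<!-- app_offline.htm: place in the web application root.
     ASP.NET serves this page for all requests until the file is removed.
     Pad beyond 512 bytes so IE does not show a "friendly" error instead. -->
<html>
  <head><title>Site maintenance</title></head>
  <body>
    <h1>Down for scheduled maintenance</h1>
    <p>We expect to be back shortly. Work entered after this notice
       appeared cannot be saved.</p>
    <!-- padding .........................................................
         .................................................................
         ................................................................. -->
  </body>
</html>
```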
Another thing you could do is to notify your users that there is a scheduled maintenance so that they also be aware and stop using the application.
That all depends on how long you need to upgrade your application. If the upgrade just means uploading the new files and takes no more than a minute or two, it's most likely that your users won't even notice it.
A non-answer answer that may be helpful: design the application so that it can be upgraded on the fly transparently to its users. Then you never have a maintenance window that users really need to worry about. There is no need to lock down the application because everything keeps working. If transactions get dropped, that's a bug in the application because there is an explicit requirement that the application can be upgraded with transactions in progress, so it's been coded to support that and there are tests that verify that functionality.
Consider as an example Netflix: does it have a locked down maintenance window? Not that the general public ever knows about. :-)

Should I use sessions?

I am designing an online time tracking software to be used internally. I am fairly new to c# and .NET though I have extensive PHP experience.
I am using Forms Authentication, and once the user logs in using that, I create a Timesheet object (my own custom class).
As part of this class, I have a constructor that checks the SQL DB for information (recent entries by this user, user preferences, etc.)
Should I be storing this information in a session? And then checking the session object in the constructor first? That seems the obvious approach, but most examples I've looked at don't make much use of sessions. Is there something I don't know that others do (specifically related to .NET sessions of course)?
EDIT:
I forgot to mention two things:
1. My SQL DB is on another server (though I believe they are both on the same network, so it's not much of an issue).
2. There are certain constants that the user cannot change (only the admin can modify them), such as project tasks. These are used on every page but loaded from the DB only the first time. Should I be storing these in a session? If not, where else? The only other way I can think of is a local flat file that updates each time the table of projects is updated, but that seems like a hack. Am I trying too hard to minimize calls to the DB?
There is a good overview on ASP.NET Session here: ASP.NET Session State.
If you don't have thousands of clients but need "some state" stored server-side, this is very easy to use and works well. In multi-server scenarios it can also be stored in the database, without changing a line of your code, just by configuration.
My advice would be not to store "big" objects or full object hierarchies in there, as storing in a session (if the session is shared among servers in a web farm via a database, for example) can be somewhat costly. If you plan to have only one server this is not really a problem, but you have to know that you won't be able to easily move to a multiple-server setup.
The worst thing to do is follow the guys who just say "session is bad, whooooo!", don't use it, and eventually rewrite your own system. If you need it, use it :-)
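As a minimal sketch of the pattern the asker describes (check the session first, fall back to the database on a miss), assuming hypothetical `UserPrefs` and `LoadPrefsFromDb` placeholders for your own type and data-access code:

```csharp
using System.Web;

public class PrefsProvider
{
    // UserPrefs and LoadPrefsFromDb are hypothetical placeholders
    // for your own class and DB query.
    public UserPrefs GetPrefs(HttpContext ctx, string userName)
    {
        UserPrefs prefs = ctx.Session["UserPrefs"] as UserPrefs;
        if (prefs == null)
        {
            prefs = LoadPrefsFromDb(userName); // hit SQL only on a cache miss
            ctx.Session["UserPrefs"] = prefs;  // cache for the rest of the session
        }
        return prefs;
    }

    UserPrefs LoadPrefsFromDb(string userName)
    {
        return new UserPrefs(); // stub for the real DB lookup
    }
}

public class UserPrefs { /* recent entries, preferences, etc. */ }
```

Note that if you later move to StateServer or SQL Server session storage, whatever you put in the session must be marked `[Serializable]`.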
I would shy away from session objects. And actually I would say look into .net MVC as well.
The reason I don't use the session is because I feel it can be a crutch for some developers.
I would save all of the information that you would have put into a session to the database instead. This allows for better metrics tracking, supports Azure (off topic, but worth mentioning) and is cleaner IMO.
ASP developers know session state as a great feature, but one that is somewhat limited. These limitations include:
ASP session state exists in the process that hosts ASP; thus the actions that affect the process also affect session state. When the process is recycled or fails, session state is lost.
Server farm limitations. As users move from server to server in a Web server farm, their session state does not follow them. ASP session state is machine specific. Each ASP server provides its own session state, and unless the user returns to the same server, the session state is inaccessible. (Source: http://msdn.microsoft.com/en-us/library/ms972429.aspx)
One of the main problems with Session is, that by default, it is stored in memory. If you have many concurrent users that store data in the session this could easily lead to performance problems.
Another thing is that an application pool recycle will empty your in-memory session, which could lead to errors.
Of course you can move your session to SQL Server or a StateServer, but then you will lose some performance.
Look into the HttpContext.User (IPrincipal) property. This is where user information is stored for the request.
Most people avoid session state simply because people like to avoid state in general. If you can find an algorithm or process which works all the time regardless of the previous state of an object, that process tends to be more foolproof against future maintenance and more easily testable.
I would say for this particular case, store your values in the database and read them from there any time you need that information. Once you have that working, take a look at the performance of the site. If it's performing fine then leave it alone (as this is the simplest case to program). If performance is an issue, look at using the IIS Cache (instead of session) or implementing a system like CQRS.
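For the admin-only constants mentioned in the edit (project tasks shared by every user), the ASP.NET application cache fits better than per-user session. A sketch, where `LoadProjectTasks` is a hypothetical stand-in for your own DB query:

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ProjectTaskCache
{
    public static IList<string> GetProjectTasks()
    {
        IList<string> tasks =
            HttpRuntime.Cache["ProjectTasks"] as IList<string>;
        if (tasks == null)
        {
            tasks = LoadProjectTasks();
            // One copy shared by every user; expire after 10 minutes
            // so admin edits show up without an app restart.
            HttpRuntime.Cache.Insert("ProjectTasks", tasks, null,
                DateTime.UtcNow.AddMinutes(10),
                Cache.NoSlidingExpiration);
        }
        return tasks;
    }

    // Hypothetical placeholder for the real database query.
    static IList<string> LoadProjectTasks()
    {
        return new List<string>();
    }
}
```

Unlike session state, the cache holds a single copy for the whole application, which directly addresses "am I trying too hard to minimize calls to the DB" without a flat-file hack.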
Session State Disadvantage
Session-state variables stay in memory until they are removed or replaced, and can therefore degrade server performance. Session-state variables that contain large blocks of information, such as large datasets, can adversely affect Web-server performance as server load increases. Think about what will happen if a significant number of users are online simultaneously.
NOTE: I haven't mentioned the advantages because they are straightforward: simple implementation, session-specific events, data persistence, cookieless support, etc.
The core problem with sessions is scalability. If you have a small application with a small number of users that will only ever run on one server, then it may be a good route for saving small amounts of data - maybe just the user ID - to allow quick access to the preferences etc.
If you MAY want multiple web servers, or the application MAY grow, then be wary of session - and only ever use it for small pieces of information.

Logging to files or to event viewer?

I was wondering what is the 'correct' way to log information messages; to files, or to a special log in the event viewer?
I like logging to files since I can use a rolling flat-file listener and see a fresh new log each day; plus in the event viewer I can only see one message at a time, whereas in a file I can scan through the day much more easily. My colleague argues that files just take up space, and he likes having his warnings, errors and information messages all in one place. What do you think? Is there a preferred way? If so, why?
Also, are there any concurrency issues with either method? I have read that EntLib is thread-safe and falls back to a Monitor.Enter if the listener is not thread-safe, but I want to make sure (we're just using Logger.Write). We are using EntLib 3.1.
Thank you in advance.
Here's the rule of thumb that I use when logging messages.
EventLog (if you have access of course)
- We always log Unhandled Exceptions
- In most cases we log Errors or Fatals
- In some cases we log Warnings
- In some very rare cases we log Information
- We will never log useless general messages like: "I'm here, blah, blah, blah"
Log File
- General rule: we log everything, but can choose the level or filter to turn down the volume of messages being logged
The EventLog is always a good option because it's bound to WMI. This way products like OpenView and the like can monitor and alert ops if something goes haywire. However, keep the messages to a minimum: it's slow, it's size-limited on a per-message basis, and it has an entry limit, so you can easily fill up the EventLog quite quickly, and your application then has to handle the dreaded "EventLog is Full" exception :)
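A minimal sketch of that defensive pattern, assuming a made-up source name "MyApp" and a fallback file path of your choosing:

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class EventLogWriter
{
    public static void LogError(string message)
    {
        try
        {
            // Creating a source requires admin rights; normally this is
            // done once at install time rather than at runtime.
            if (!EventLog.SourceExists("MyApp"))
                EventLog.CreateEventSource("MyApp", "Application");

            EventLog.WriteEntry("MyApp", message,
                EventLogEntryType.Error);
        }
        catch (Exception)
        {
            // EventLog full or no permission: fall back to a file
            // so the message isn't lost.
            File.AppendAllText("fallback.log",
                DateTime.Now + " " + message + Environment.NewLine);
        }
    }
}
```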
Hope this helps...
There is no 'correct' way. It depends on your requirements.
You 'like' looking at flat files, but how many (thousands of) lines can you really read every day?
What you seem to need is a plan (policy) and that ought to involve some tooling. Ask yourself how quickly will you notice an anomaly in the logs? And the absence of something normal?
The event log is a bit more work/overhead, but it can easily be monitored remotely (across multiple servers) by some tool. If you are relying on manual inspection only, don't bother.
In enterprise applications there are different types of logs, such as:
Activity logs - technical logs which instrument a process and are useful in debugging
Audit logs - logs used for auditing purposes. Availability of such logs is a legal requirement in some cases.
What to store where:
As far as audit logs or any logs with sensitive information are concerned, they should go to a database where they can be stored safely.
For activity logs my preference is files. But we should also have different log levels such as Error, Info, Verbose etc., which should be configurable. This makes it possible to save the space and time required for logging when it is not needed.
You should write to event log only when you are not able to write to a file.
Consider asking your customer admins or technical support people where they want the logs to be placed.
As to being thread-safe, yes, EntLib is thread-safe.
I would recommend Event-viewer but in cases where you don't have admin rights or particular access to Event-viewer, Logging to normal files would be better option.
I prefer logging to a database, that way I can profile my logs and generate statistics and trends on errors occurring and fix the most frequent ones.
For external customers I use a web service called asynchronously to report the error. (I swallow any exceptions in it so logging errors won't affect the client - not that I've had any, using log4net and L4NDash.)

Code Access Security - Basics and Example

I was going through this link to understand CodeAccessSecurity:
http://www.codeproject.com/KB/security/UB_CAS_NET.aspx
It's a great article but it left me with following questions:
If you can demand and get whatever permissions you want, then any executable can get FullTrust on a machine. And if the permissions are already there, then why do we need to demand them?
The code is executing on the server, so the permissions apply on the server, not on the client machine?
The article takes the example of removing write permissions from an assembly to show a security exception. But in the real world, System.IO (or the related classes) will take care of these permissions. So is there a real scenario where we will need CAS?
The idea of "least privilege" is a very important principle of security. A hacker is going to make your application do something it wasn't intended to do. Whatever rights the application has at the time of the attack, the attacker will have those same rights. You can't stop every attack against your application, so you need to lower the impact of a possible attack as much as you can. This isn't bulletproof, but it significantly raises the bar. An attacker may be able to chain a privilege-escalation attack into his exploit.
In most situations you can't control the actions of the client. In general you should assume that the attacker can control the client using a debugger or a using modified or rewritten client. This is especially true for web applications. You want to protect the server as much as possible, and adjusting permissions is a common way of doing that.
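As a .NET 2.0-era sketch of the kind of check the linked article demonstrates (demanding a permission before use and degrading gracefully); note that CAS policy is ignored from .NET 4 onward, and the file path here is an arbitrary example:

```csharp
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void Main()
    {
        // Demand throws SecurityException if any caller up the stack
        // lacks write permission to this path under the current
        // CAS policy (.NET 2.0-era behaviour).
        FileIOPermission writePerm = new FileIOPermission(
            FileIOPermissionAccess.Write, @"C:\data\output.txt");
        try
        {
            writePerm.Demand();
            // Safe to write the file here.
        }
        catch (SecurityException)
        {
            // Degrade gracefully instead of crashing.
        }
    }
}
```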
Sorry, I can't answer this one without Google. But CAS is deprecated anyway.

How to safely write to a log file from two instances of the same application?

I have an application which can only have 1 instance running at each time, however if a 2nd instance is launched it needs to be logged to a common logfile that the first could also be using.
I have the check for how many instances are running, and I was planning on simply logging to the event log initially; but the application can be running in user or system context, and exceptions are thrown when attempting to query the event log source as a user, so that idea is scrapped since the security logs are inaccessible to the user.
So I wanted to find out the safest method of having 2 separate instances of the same application write to a log file, ensuring both get an opportunity to write to it.
I would prefer not to use an existing additional framework if avoidable
Any help appreciated.
A Mutex can be used for interprocess synchronization of a shared resource such as a log file. Here's a sample.
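A minimal sketch of that approach (the mutex name is an arbitrary choice; the `Global\` prefix makes it visible across user and system sessions, which matters for this question):

```csharp
using System;
using System.IO;
using System.Threading;

static class SharedLogger
{
    // "Global\" prefix: one mutex shared across all sessions,
    // so the user-context and system-context instances see it.
    static readonly Mutex logMutex =
        new Mutex(false, @"Global\MyAppLogFileMutex");

    public static void Write(string logPath, string message)
    {
        logMutex.WaitOne();
        try
        {
            // Open, append, close inside the lock so the other
            // instance never finds the file held open for long.
            File.AppendAllText(logPath,
                DateTime.Now.ToString("s") + " " + message +
                Environment.NewLine);
        }
        finally
        {
            logMutex.ReleaseMutex(); // always release, even on error
        }
    }
}
```

One caveat when mixing user and SYSTEM contexts: whichever process creates the mutex first may need to grant the other account access (a MutexSecurity ACL on the constructor overload that takes one), otherwise the second process can get an access-denied error opening it.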
You could always write to the system event log. No locking or anything needed and the event viewer is more robust than some give it credit for.
In response to your comment, another user asked the question about write permissions for the event log here on SO. The answer linked to the msdn article that describes how to perform that.
See that question here.
You can dodge the problem if you prefer...
If this is a windows app, you can send the first instance a message and then just quit. On receiving the message, the original instance can write to the log file without any issues.
Why not use the syslog protocol? It allows you to deliver the logs in a very standards-based and flexible manner. The protocol itself is quite simple, and there are plenty of examples on the Net, e.g. here. If your app is destined for enterprise use, having a standard way of logging could be a big plus. (And you do not need to maintain the files either - that becomes the job of specialized software that does just that.)
One way to hack it would be to memory-map the log file. That way, both instances of the application are sharing the same virtual memory image of the file. Then there are a number of ways of implementing a mutex inside the file.
