I recently landed on an old web application which is full of old-school tricks:
GET params: user info passed in URL parameters
Session information
Hidden elements used for storing information
HTML/JS/CSS dumped into the page, without proper encoding, etc.
window.open used to show popups
XSS issues, etc.
Concatenated SQL strings, ripe for blind SQL injection attacks
and just many more...
All of this "to make things work." The application looks to be 5-7 years old (ASP.NET 1.1), and its code has failed to keep pace with improving security practices.
Thankfully, browsers and security testing tools have evolved very well over that period, helping people/customers report security issues every now and then. Keeping them happy and the system secure has become a pain.
Has anyone faced something similar? Can you point me to a case study or similar resource on how this was addressed? Are there "freely" available test tools that can be used to test web sites for security in a developer environment? What strategies should be used to deal with this situation? How should I progress?
Let me start by saying this: While there are open-source and free security scanner tools, none will be perfect. And in my experience (with PHP at least) they tend to return enough false-positives that it's barely worth running them (but that could have gotten better since I last used them). If you want to use one to try to help identify issues, by all means do so. But don't trust the output either way (from a false-positive and a false-negative perspective).
As far as how to tackle it, I would suggest a step-by-step approach. Pick one type of vulnerability, and eliminate it across the entire application. Then go to the next vulnerability type. So a potential game-plan might be (ordered by severity and ease of fixing):
Fix all SQL Injection vulnerabilities.
Go through the code, find all places where it does SQL queries, and make sure they are using prepared statements and that nothing can get in.
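For example, here is a minimal sketch of a parameterized query in ADO.NET (the table, columns, and the connectionString / userSuppliedEmail variables are illustrative):

    using System.Data.SqlClient;

    // the user-supplied value travels as a parameter, never as part of the SQL text
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT Id, Name FROM Users WHERE Email = @email", conn))
    {
        cmd.Parameters.AddWithValue("@email", userSuppliedEmail);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // ... use reader["Id"], reader["Name"] ...
            }
        }
    }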
Fix all XSS vulnerabilities
Find all places where local information (user-submitted or otherwise) is output, and make sure it is properly sanitized and escaped (depending on the use-case).
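For instance, a short sketch of escaping at the point of output in ASP.NET (the control name and the GetComment helper are illustrative):

    using System.Web;

    // HTML-encode stored text at output time, so any markup in it renders inert
    string comment = GetComment(commentId);           // hypothetical data-access call
    lblComment.Text = HttpUtility.HtmlEncode(comment);

    // values placed into URLs need URL encoding instead
    string link = "profile.aspx?user=" + HttpUtility.UrlEncode(userName);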
Fix all CSRF vulnerabilities
Go through the site and make sure that all the form submissions are properly using a CSRF token system to protect them from fraudulent requests.
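A bare-bones sketch of such a token scheme in Web Forms (the session key and hidden-field name are illustrative; prefer your framework's built-in implementation where one exists):

    using System;
    using System.Security.Cryptography;
    using System.Web;

    // when rendering the form: issue a random token and remember it in the session
    byte[] bytes = new byte[32];
    new RNGCryptoServiceProvider().GetBytes(bytes);
    string token = Convert.ToBase64String(bytes);
    Session["CsrfToken"] = token;
    hidCsrfToken.Value = token;   // an asp:HiddenField on the page

    // on postback: reject the request if the submitted token doesn't match
    if (hidCsrfToken.Value != (string)Session["CsrfToken"])
        throw new HttpException(403, "Possible CSRF attempt.");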
Fix any and all authentication and session fixation vulnerabilities
Make sure the authentication and session systems are secure from abuse. This involves making sure your cookies are clean and that any session identifiers are rotated often. And make sure you're storing passwords correctly...
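On the password point, here is a minimal sketch of salted, iterated hashing with Rfc2898DeriveBytes (PBKDF2); the iteration count and sizes are illustrative:

    using System;
    using System.Security.Cryptography;

    static string HashPassword(string password)
    {
        // a per-user random salt defeats precomputed rainbow tables
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);

        // many iterations make brute-forcing stolen hashes expensive
        byte[] hash = new Rfc2898DeriveBytes(password, salt, 10000).GetBytes(32);

        // store salt and hash together; never store the plain password
        return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
    }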
Fix any information injection vulnerabilities
You state that there is user information in URLs and hidden form elements. Go through all of them and change it so that the user cannot inject values where they shouldn't be able to. If this means storing the data into a session object, do so.
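For example (a sketch; the key name and user object are illustrative):

    // set once at login, server-side; nothing for the user to tamper with
    Session["UserId"] = authenticatedUser.Id;

    // on later requests, read it back instead of trusting a hidden field or URL
    int userId = (int)Session["UserId"];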
Fix all information disclosure vulnerabilities
This is related to the former point, but slightly different. If you use a username in the URL, but can't do anything by changing it, then it's not an injection vulnerability, it's just a disclosure issue. Mop these up, but they are not nearly as critical (depending on what's disclosed of course).
Fix the output
Fix the encoding issues and any method that might generate invalid output. Make sure that everything is sane when it's outputted.
The important thing to note is that anything you fix will make the application safer. If it's a live application right now, don't wait! Don't try to do everything before testing and releasing. Pick a reasonably sized target (2 to 4 days of work max), complete the target, test and release. Then rinse and repeat. By iterating through the problems in this manner, you're making the site safer and safer as you go along. It will seem like less work to you because there is always an end in sight.
Now, if the application is severe enough, it may warrant a full rewrite. If that's the case, I'd still suggest cleaning up at least the big ticket items in the existing application prior to starting the rewrite. Clean up the SQL Injection, XSS and CSRF vulnerabilities at a bare minimum prior to doing anything else.
It's not an easy thing to do. But taken a small bite at a time, you can make significant progress while staying above water... Any little bit will help, so treat the journey as a series of steps rather than a whole. You'll be better off in the end...
Well, a Google search on each of the issues will help fix it, so I'll assume you're just worried about the actual risks.
This is easy to fix: just change to $_POST, which is a bit more secure, especially when used with session tokens.
This is still widely used, so I don't see an issue?
Meh. So long as that data is checked server-side, and the user has to re-log (or similar) when it doesn't verify, this is acceptable. Of course, session state is preferred, or even cookies.
Tidying up is just a long job; there are no issues unless passwords etc. are being revealed in the JS. CSS and HTML are almost always unencoded, so this seems fine.
I'm just not a fan of popups and never use them, so I can't help here.
Well, host scripts locally if possible, and sanitize anything essential against XSS by stripping tags, URL-encoding, that kind of thing.
I encountered exactly this situation a couple of years ago on an e-commerce web site. It was in ASP.NET 1.1 and was absolutely appalling in terms of code quality and security practices. Further, I was told by management that a re-write was absolutely out of the question, and could not convince them to budge on that.
Over the course of 2 years I managed to get this system PCI-DSS compliant. This means we had to close all of those security loopholes one by one, and put in place practices to ensure they could not happen again.
The first problem is that you have to find all the bugs and security flaws. You can use external vulnerability probing tools (I recommend QualysGuard), but manual testing is the only way to get complete coverage. Write out old-fashioned test scripts for XSS, SQL injection, CSRF, etc. and enlist the help of testers to run through these scripts. If you can automate these tests, it is well worth doing, but don't skip a test case just because it's too hard to automate. If you can afford security consultants, have them help you scope out the test coverage and write test cases.
This will give you:
a) a list of bugs/flaws
b) a repeatable way to test if you actually fixed each one
Then you can start refactoring and fixing each flaw.
The process I followed is:
step 1: refactor, step 2: test & release, step 3: go back to step 1.
I.e. there is a continuous cycle of fixing, testing and releasing. I would recommend committing to regular deployments to your production systems, e.g. monthly, and squeezing as many "refactorings" into each release cycle as you can.
I found automated security tools to be almost useless, because:
a) they can provide you with a false sense of security
b) they can give you so many false positives that you are sent off on meaningless tangents instead of working on real security threats.
You really need to get to grips with the nature of each security vulnerability and understand exactly how it works. A good way to do this is to research each flaw (e.g. start with the OWASP Top 10) and write a document that you could give to any developer on your team explaining exactly what each flaw you are trying to protect against is, and why. I think too often we reach for tools to fix security flaws, thinking we can avoid the effort of really understanding each threat. Generally speaking, tools are only useful to improve productivity; you still have to take charge and run the show.
Resources:
1) Read the OWASP Top 10.
2) The PCI-DSS specification is also really good stuff and helps you think holistically about security, i.e. covering far more than just web applications: databases, processes, firewalls, DMZ/LAN separation, etc. Even if it's over the top for your requirements, it's well worth browsing through.
I am new to the memcached concept. I have searched everywhere but couldn't find anything on how to implement it in ASP.NET 4.0. Can anyone explain the right approach?
I successfully installed the memcached server in services.msc.
Now what do I do after this step?
Does anyone have a good example in ASP.NET? If yes, please provide it,
OR please show me step-by-step code.
I also read this article:
http://rsuharta.wordpress.com/2011/04/27/memcached-provider-in-the-net-web-application/
But I didn't understand anything. Please point me at the best solution.
Thanks.
Here is a CodeProject article walking you through using memcached in an ASP.NET application.
However, let me first say that it's awfully likely that if you don't already understand the concept of a framework like memcached, you don't need it.
Let me try and make this as clear as possible so you can make the right decision. For some reason, as of late, data caching has become the new "golden hammer" and all kinds of frameworks have popped up. But the problem is that most developers don't understand the real driving forces behind implementing data caching and they don't understand that it's really not a trivial matter. I'm going to give you the same example I gave someone else just yesterday on SO, but a paraphrased version.
Imagine if you will an application stack (i.e. more than one application) that accesses a shared set of data at a rate of more than, and I'm going to give you the real number, 40M+ transactions per day. Now, when I use the term transaction here I really mean read or write. Which only complicates things BTW because now I have to optimize for both.
Alright, so now we have a set of applications accessing this shared data at a ridiculous rate per day - how do we ensure reasonable response times for both read and write? Data caching. But, if you're not sitting in that boat you probably don't need data caching and need to spend your time learning other things that are more relevant to what you're doing.
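If you are in that boat, here is a minimal sketch of a read-through cache using the EnyimMemcached client; the client library choice, the key name, and the LoadProductFromDatabase helper are my assumptions, not something from the article above:

    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    // server addresses/ports come from the memcached section of web.config
    var client = new MemcachedClient();

    // try the cache first; on a miss, hit the database and cache the result
    var product = client.Get<Product>("product:42");
    if (product == null)
    {
        product = LoadProductFromDatabase(42);              // hypothetical helper
        client.Store(StoreMode.Set, "product:42", product); // cached for next time
    }

Note that Product has to be serializable for the client to store it.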
I am not a software engineer, as you will see if you continue reading; however, I managed to write a very valuable application that saves our company lots of money. I am not paid to write software, I was not paid for writing this application, and my job title is not software engineer, so I would like to have total control over who uses this application if I ever have to leave, since as far as I can tell it is not legally theirs (I did not write it during company hours either).
This may sound childish, but I've put much, much time into this and have been maintaining it almost daily, so I feel I should have some control over it, or at least be able to sell it to my company if they ever had to let me go or I wanted to move on.
My current protection scheme on this application looks something like this:
    // the licence file at this URL contains either "expired" or "not expired"
    string version;
    WebRequest request = WebRequest.Create("http://MyWebSiteURL/licence.txt");
    WebResponse response = request.GetResponse();
    StreamReader stream = new StreamReader(response.GetResponseStream());
    version = stream.ReadToEnd();
    stream.Close();
    response.Close();

    // anything other than "not expired" locks the application
    if (version != "not expired")
    {
        MessageBox.Show(Environment.NewLine + "application expired etc etc", "Version Control");
    }
It checks my server for "not expired" (in plain text), and if the web request comes back as anything but "not expired", it ultimately pops up another form stating that the application has expired, and allows you to type in a passcode for the day, which is a multiplication of some predetermined numbers by the current date, to create "day passes" if ever needed (I think Alan Turing just rolled over in his grave).
Not the best security scheme, but I thought it was pretty clever for someone with no experience in software security. I have, however, heard of hex editing to get around security, so I did a little test for science and found this area of my compiled EXE:
"System.Net.WebRequest", which I filled in with zeros to look like this: System.Net000000000
That was all it took to make the application hiccup during the server check, which allowed me to click "Continue" and completely bypass all my "security" and go along using the program without it ever expiring.
Now, would a normal person go to this length (hex editing) to try to get past my protection scheme? Not likely. However, just as a learning experience, what could I do as an added step so that hex editing or any other common workaround would not work unless done by a "professional" cracker?
Again, I'm not paranoid; I'm just eager to learn more about application security. I was both proud of myself and ashamed at the same time for creating and breaking my own protection.
If commenting, please be kind, since I know this is probably a humorous post to those more informed than I am; I really have little experience writing software and have never taken any kind of course, etc. Thanks for reading!
Another way to bypass the license check is to redirect the checking URL to localhost, always returning the desired text...
A better way is to make a call to a function that does the same thing, but make your server response a signed XML document that includes the server's response timestamp, which you can additionally check against the system datetime (use UTC dates on both sides). It is also a good idea to throw exceptions whenever something is not the way you expect it, and to control the flow of your program with exception handling.
Check the following to get a how to clue:
How to: Sign XML Documents with Digital Signatures
How to: Verify the Digital Signatures of XML Documents
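As a rough sketch of the verification side, assuming the server signs the XML with a key whose public half ships with the app; the element names (Status, Timestamp) and the 10-minute clock-skew window are illustrative:

    using System;
    using System.Security.Cryptography;
    using System.Security.Cryptography.Xml;
    using System.Xml;

    static bool LicenseIsValid(string responseXml, RSA serverPublicKey)
    {
        var doc = new XmlDocument { PreserveWhitespace = true };
        doc.LoadXml(responseXml);

        // locate and verify the XML digital signature
        var nodes = doc.GetElementsByTagName("Signature", SignedXml.XmlDsigNamespaceUrl);
        if (nodes.Count == 0) return false;
        var signedXml = new SignedXml(doc);
        signedXml.LoadXml((XmlElement)nodes[0]);
        if (!signedXml.CheckSignature(serverPublicKey)) return false;

        // reject stale (possibly replayed) responses: compare UTC timestamps
        var stamp = DateTime.Parse(doc.SelectSingleNode("//Timestamp").InnerText).ToUniversalTime();
        if (Math.Abs((DateTime.UtcNow - stamp).TotalMinutes) > 10) return false;

        return doc.SelectSingleNode("//Status").InnerText == "not expired";
    }

Throw instead of returning false if you prefer to drive the control flow with exception handling, as suggested above.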
"Now would a normal person go to this length (hex editing) to try to get past my protection scheme?"
Well, I guess that depends on how useful the application is to that "normal person", and how determined he is to make it work.
Most .NET applications, unless obfuscated, can easily be decompiled to source code using tools like Telerik JustDecompile, or one can simply use ildasm to view the IL code. I have heard there are tools that can even decompile obfuscated .NET libraries, although I haven't used or found any.
With my little experience, I can suggest two approaches.
Enforcing licensing in an application that runs entirely on the user's machine is a cat-and-mouse game. You can add some extra protection by moving part of the application's functionality to the server and exposing it as a web service that your client consumes; the part you move to the server must be important for the application to work, and should be something that is hard to simulate.
The other approach is to add an auto-updater feature to your application that checks the server for the latest version; whenever it finds a new one, it overwrites the older version, thus overriding any cracked copy. This can easily be disabled, but disabling it will also stop any bug fixes you might release.
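A rough sketch of that version check, assuming the server just publishes the latest version number as plain text (the URL is hypothetical):

    using System;
    using System.Net;
    using System.Reflection;

    static bool UpdateAvailable()
    {
        using (var web = new WebClient())
        {
            // the server publishes e.g. "1.2.0.0" at this (made-up) URL
            string latest = web.DownloadString("http://MyWebSiteURL/latest-version.txt").Trim();
            Version current = Assembly.GetExecutingAssembly().GetName().Version;
            return new Version(latest) > current;
        }
    }

Downloading and swapping in the new binaries is the harder part, and as noted, the whole feature can itself be disabled by a determined user.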
I tried both approaches, but they are only useful to some extent, and you have to decide whether the enforcement is worth the effort.
I am new to C# and SQL, but over the last few years of learning both in college, a question has really begun to burn inside me. Here it is:
It seems to me that there are really two very generic ways to handle input validation (i.e. checking for required fields, data in the correct ranges, etc.).
The first, and the way traditionally shown, is: once you develop your UI and have connected it to a database back end in some manner, you check for correct input on the user interface, such as blank text boxes and number ranges, or ensure that a radio button or check box is selected, etc.
The second, and the way shown in database development, is to set check constraints on fields, such as no nulls allowed, unique values, and even ranges and required fields.
My dilemma is this: in modern languages like C# you can do general exception handling, and major-league fault tolerance is built into most databases like SQL Server with regard to committing data changes on an all-or-nothing basis. Details like this, and to this level, would be hard to program in anything but the simplest of programs.
So my question is: why not build all the requirements directly into the table at the database back end? Take advantage of the aforementioned fault tolerance, forget about programming if statements to ensure correct data is input, and instead just use a generic catch-all exception handler for when the data is not committed.
Perhaps that is how it is done; if so, I would really like to know for sure. If not, why? My preference is to avoid writing code whenever possible: less code, less debugging, and fewer problems when it comes to updating. So I would tend to go with the approach of letting the DB back end do the work. Is this generally the correct thing to do?
I know that general exception handling is considered "expensive" in terms of resources, but surely once you get past 5 or 10 if statements to handle different fields and their constraints, it must be more efficient code-wise to just use a general exception handler. It certainly seems easier to understand overall (at least the way I do it).
Thanks for your help with this.
OK, here is why you need it in both places.
First, the integrity of the data should be paramount, and data can be changed directly in database tables (deliberately, through a script to, say, update a million prices; by accident; or even by disgruntled or criminal employees trying to disrupt the database or steal from the company). Therefore it is reckless to avoid using constraints directly in the database, and doing so leads to bad data.
Now, at the user interface level, you want to prevent the user from wasting his time submitting bad data, and you want to prevent the servers and networks from wasting their time trying to process it, so you write checks at that level too. Plus, you don't want the data in an inconsistent state if you need to insert into several tables and aren't using a transaction (which you should be using, though I suspect that happens less often than it should). Plus, users hate it when you try the insert and it fails and tells them that X is wrong, and then they fix X and now Y is wrong; Y was wrong before too, the process just didn't get as far as Y.
You do both.
Create constraints at the DB level, and check for those constraints at the client level as well.
The validation on the DB makes sure that no invalid data gets in your DB, no matter how the data is inputted.
The validation at the client side improves the user-experience.
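A minimal sketch of what that duplication looks like in practice, assuming a SQL Server table with a CHECK constraint on Age (the table, column names, and error-number handling are illustrative):

    using System;
    using System.Data.SqlClient;

    static void SavePerson(string connectionString, string name, int age)
    {
        // client-side check: fail fast with a friendly message
        if (age < 0 || age > 150)
            throw new ArgumentOutOfRangeException("age", "Age must be between 0 and 150.");

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO People (Name, Age) VALUES (@name, @age)", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@age", age);
            conn.Open();
            try
            {
                cmd.ExecuteNonQuery();
            }
            catch (SqlException ex)
            {
                // 547 is SQL Server's "constraint conflict" error: the DB-level
                // CHECK constraint acts as the last line of defense
                if (ex.Number == 547)
                    throw new InvalidOperationException("The database rejected the data.", ex);
                throw;
            }
        }
    }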
You generally can't build all the logic for checking into the database. Also, not validating user input sufficiently is a good way to open yourself up to attack.
One way to write less guard code in every method is Code Contracts, a product of Microsoft Research.
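A minimal sketch of Code Contracts preconditions (this assumes the Contracts binary rewriter is enabled for the project; the Account class is illustrative):

    using System.Diagnostics.Contracts;

    public class Account
    {
        public decimal Balance { get; private set; }

        public void Withdraw(decimal amount)
        {
            // preconditions replace hand-written if/throw guard code
            Contract.Requires(amount > 0);
            Contract.Requires(amount <= Balance);
            // postcondition: the balance never goes negative
            Contract.Ensures(Balance >= 0);

            Balance -= amount;
        }
    }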
All input should be validated both client and server side. Always.
Also, with one giant catch it would be hard to tell which field was in error, so you would end up writing a lot of "which field exploded" code at the other end.
While I generally advocate putting as much in the database as possible (which means you can have as high a degree of confidence in the "raw" data as possible), that isn't always possible, even with the powerful constraints and triggers available in SQL.
In addition, there are high-level "integrity" rules which may change over time, and it is not realistic to always have temporally-dynamic conditions in constraints: e.g. all HR records since 2007 must have a non-NULL birthdate, while prior ones are allowed to remain NULL, but no row can ever be set back to NULL.
My point is you can almost never put it all in the database.
Put the things in that you can, and put others at higher levels in the system. The database is a very important part of any system, but it isn't the only part. As long as its design helps it protect its perimeter and be able to provide reliable service and guarantee what it says it will guarantee so that other parts of the system can rely on their assumptions, then that's about the most you can ask for.
In addition to all the answers made here (such as that UI-level validation dramatically improves the UX and can completely change the "image" of your app): validation in the DB ensures that the data inserted into the DB is correct, but on the client it has to be done so that the client's data is correct in the first place.
Consider the example of a standalone enterprise app. A client works from home; he fills in 20 invoices late at night on his notebook in Mongolia. The day after, he comes back and syncs them with his office SAP server. If the errors are only discovered during the data sync, you can imagine how awful that situation is.
Just an example. There could be plenty of others, I'm sure.
Good luck.
It's 2 years later and I have a decent amount of experience now. I am not going to accept my own answer as the right one, as many here have done a great job and I am very happy with their answers. But I want to add another important consideration that, looking back over my experience, has not been highlighted here. I also use Stack Overflow for reference as I progress, and I always find myself looking back over my questions and answers, which is another reason I wanted to add this: like a note to my future self.
While working at that company, I was asked to build an app that would do job abc. With this, I also had to build part of the database. As I was finishing with the company, I learned that they were writing another app that would use my database. Effectively my point is that, as many have pointed out, data is paramount, and you don't know how it is going to be accessed after you're gone.
I have also learned that there are 3 places that data needs to be verified:
on the actual database as explained
in the server-side code-behind, which is not the same as the DB or client-side validation
on the client side
There is another worry: with the advent of new tech like tablets and smartphones, there is yet another place where validation has to be implemented. The same rules, for a 4th time (unless it's a web app).
I later learned that prior to MVC we had CGI forms, which had something to do with handling data over the network (I humbly admit ignorance on the hardware side), but from what was explained to me it seems there may even be a 5th place to do validation (although I am open to being totally wrong about that).
I think the next guru in computer science will make a name for himself if he can find a way to abstract all that verification and validation to one place so that such rules don't have to be altered in a bunch of places.
worst case:
DB
Server side code
Client side code for web apps
What about if:
There may be a native client app (i.e. Windows, Linux, or Mac: at least 6 places now)
There may be various phone apps (Android, iPhone, and Windows Phone, to name 3: at least 9 now)
There may be some CGI or whatever
This totals 10+ places without much exaggeration, and there are other operating systems.
Even for a simple age range this is getting messy, but what if they bring out some new email format or other complicated validation, or you have to change a bunch of validation rules? Now you have to modify them across at least 3 or 4 places, which in itself is bad.
The major problem with that is that you are modifying a lot of code and infrastructure that has been invested in, tested, and usually proven to work and delivered to the market...
As the number of client sides grows, modifying well-tested code can't be a good thing. I think this is going to be a major headache in the future. I wonder if there will be a design pattern or best practice to resolve it. If anyone knows of one, please tell me.
There are loads of profilers and static code analyzers out there for C# assemblies.
I just wonder if there are any methods to prevent an assembly from being analyzed, because it makes me a bit nervous that mine can be stripped open like that.
I have searched all over the Internet and Stack Overflow. There seems to be nothing addressing my concern.
What I've found are only some even scarier titles like these (sorry, a new user can't post hyperlinks; please google them):
"Assembly Manipulation and C#/VB.NET Code Injection"
"How To Inject a Managed .NET Assembly (DLL) Into Another Process"
Is it just me being too worried or what?
BTW, with C# we are building a new WinForms client for a bank's customers to do online transactions. Should we not do this the WinForms way, or should we use some other language like Delphi or C++? Currently we have a client built with C++ Builder.
If by analyzed you mean someone decompiling the code and looking at it, then the Dotfuscator Community Edition that ships with VS Pro and above is a simple (free) tool to help here. There is a fuller-functionality (but paid-for) version that can do more.
To prevent someone tampering with your deployed assemblies, use strong names.
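As a rough illustration (not a complete defense, since a determined attacker can strip the check itself), a strong-named application can verify at runtime that a loaded assembly carries the expected public key token; the token bytes below are placeholders for the value printed by "sn -T your.dll":

    using System;
    using System.Linq;
    using System.Reflection;

    static bool HasExpectedKeyToken(Assembly assembly)
    {
        // placeholder token bytes: substitute your own key's token
        byte[] expected = { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF };

        byte[] actual = assembly.GetName().GetPublicKeyToken();
        return actual != null && actual.SequenceEqual(expected);
    }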
Where there's a will, there's a way, whether it's managed code or native assembly. The key is to keep the important information on the SERVER end and maintain control of that.
Just about any application can be "analysed and injected". Some more than others. That's why you NEVER TRUST USER INPUT. You fully validate your user's requests on the server end, making sure you're not vulnerable to buffer overruns, sql injection and other attack vectors.
Obfuscators can make .NET assemblies harder to analyze. Using a secure key to strong-name your assemblies can make it much harder to alter your code. But, like everything else in the digital world, somebody can exploit a vulnerability and get around whatever safeguards you put in place.
The first thing you need to decide is what you are trying to protect against.
Obfuscators are useful only to protect "secret sauce" algorithms, but an attacker can simply extract the code and use it as a black box. In 99% of cases, obfuscators are a waste of money.
If the attacker has physical access there is not much you can do.
If the end user is running with administrative privileges, then they will be able to attach a debugger and modify your code, including target account details. My local friendly bank has given me a chip-and-PIN reader into which I have to enter the last n digits of the target account, which it hashes/encrypts with my bank card's chip; I then enter the code from the device into the bank's web application, where it can be checked at the bank's end as well. This mitigates "man in the middle" type attacks...
Security is only possible on systems you physically control access to, and even then it is not guaranteed, merely achievable. You must assume any code not executing on a system you control can and will be compromised. As Rowland Shaw stated, the best bet for a financial institution is some sort of physical token, which effectively adds an offline, unique component to all transactions that cannot (easily) be known ahead of time by an attacker operating from a compromised system. Even then you should be aware that if the user's computer has been compromised and he logs in with his secure token, then from that point forward until the session ends, the attacker is free to perform whatever actions the user has permission to perform; but at least in that case the user is more likely to notice the fraudulent activity.
I am creating my own CMS framework, because many of my clients have the same requirements: a news module, a newsletter module, etc.
Now I am doing fine. The only thing bothering me is that if a client wants to move off my server, he will ask me to give him his files, and of course if I do so, whoever takes over will see all my code, use it, and benefit from it. That is bad for me: I spent all this time creating my system, and anyone could easily see the code. Plus, he would see all the logic of my system and could easily learn how my other clients' sites work, which is a threat to me. Finally, I am using third-party controls that I have paid licenses for, and I don't want him to get those on a golden plate.
Now, what is the best way to solve this? I thought of encryption, but how would I do that, and how effective is it?
Should I merge all my CS files and DLLs in the bin folder into one DLL and encrypt it? How would I do that?
I totally appreciate all the help on this matter, as it is really crucial for me.
You should read these:
Best .NET obfuscation tools/strategy
How effective is obfuscation?
In my experience, this is rarely worth the effort. Lots of companies who provide libraries like this don't bother obfuscating their code (Telerik, etc).
Especially considering what you are writing (CMSes are everywhere), you'd likely see more benefit from your time spent implementing features that put your product/implementation in a competitive advantage and make companies see that the software you are capable of writing has value, rather than the code itself.
In the end, you want to ensure you are a key factor in making software work for a company, not the DLLs you give them.
You'll need to precompile your site and obfuscate the DLLs.
Visual Studio ships with Dotfuscator Community Edition; you could give it a try.
Of course, HTML output, CSS declarations, database structure, and stored procedure code cannot be encrypted.
You can, however, try to compress your CSS, which will also reduce its readability by humans.
Check here: The best approach to scramble CSS definitions to a human-unreadable state throughout an ASP.NET application
One other idea would be to use a frame in your HTML and put most of the site's pages inside it. This way, they will not be visible when doing "View source".
Or just state clearly that you offer whatever you're doing as a service and do not provide the source code of your work. I somehow doubt Salesforce would be willing to give their source to anyone who asks.