Is a session ID a safe alternative to an IP address? - c#

To prevent DoS attacks in my ASP.NET C# application, I have implemented throttling with the help of Jarrod's answer in this post:
Best way to implement request throttling in ASP.NET MVC?
But this uses the IP address, which makes it vulnerable to advanced attackers who can change it easily. Another option for identifying anonymous users is their session ID. I think it can't be changed until the user restarts the browser, so it could be a good alternative, but I am not sure about it from a security point of view. Kindly tell me whether or not it is safe to use. If not, is there any other method to achieve this purpose? Thanks
Edit:
There are some methods that need a longer throttle; that's why I need a programmatic throttle of about 5 seconds to 2 minutes. I have configured Dynamic IP Restrictions for IIS, but I can't specify such a long interval there.

I think your terminology might be mixed up. DoS is Denial of Service. Someone changing multiple records or repeatedly requesting functionality is not a DoS attack, and normally most DoS attacks are distributed, hence DDoS.
What you are asking for, based on the link you provided, is called throttling... but as others have suggested, the session ID is simply a value passed up in a cookie and can be modified just as easily to bypass a check, in the same way that an attacker can simply put a proxy in front of the request to mask the source IP between requests.
Therefore, if you only wish to throttle, you need to implement authentication in front of the functionality you want to protect, use the throttling code you posted, and maybe throw in a CSRF token as well for good measure (see the sketch below).
BUT... if you want to stop DDoS, it ain't going to happen at Layer 7 since the data is already at the server.
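For the throttling route, here is a minimal sketch of what keying the throttle on the authenticated user (rather than the IP or session ID) could look like, adapted from the attribute pattern in the linked answer. The attribute name and cache key format are illustrative, not from the original post.

```csharp
using System;
using System.Runtime.Caching;
using System.Web.Mvc;

// Illustrative sketch: throttle per authenticated user instead of per IP.
[AttributeUsage(AttributeTargets.Method)]
public class ThrottleByUserAttribute : ActionFilterAttribute
{
    // Minimum number of seconds between calls to the decorated action.
    public int Seconds { get; set; }

    public override void OnActionExecuting(ActionExecutingContext c)
    {
        // Require authentication so the throttle key cannot be forged
        // the way an IP or a session cookie can.
        if (!c.HttpContext.User.Identity.IsAuthenticated)
        {
            c.Result = new HttpUnauthorizedResult();
            return;
        }

        string key = string.Format("throttle-{0}-{1}",
            c.ActionDescriptor.ActionName,
            c.HttpContext.User.Identity.Name);

        if (MemoryCache.Default.Contains(key))
        {
            // 429 Too Many Requests; pick whatever status suits your clients.
            c.Result = new HttpStatusCodeResult(429, "Too many requests");
        }
        else
        {
            // The cache entry expires after the throttle window,
            // which is what lets the next call through.
            MemoryCache.Default.Add(key, true,
                DateTimeOffset.UtcNow.AddSeconds(Seconds));
        }
    }
}
```

Decorating the expensive actions with, say, [ThrottleByUser(Seconds = 120)] would give the 5-second-to-2-minute windows mentioned in the question.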

Related

Is there a way to ask an nginx server its connection limit per IP?

Let's say I am using multiple connections to download one file from an nginx server. Most of the time it's limited to something like 2 connections. Is there a way to ask the server how many it allows, or do I have to find out by trial and error?
I think you're looking for robots.txt. The "Crawl-delay" property may help you, if the site specifies it (see the sketch below).
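If you want to check for that directive programmatically, fetching robots.txt is trivial; a rough C# sketch, with the host as a placeholder:

```csharp
using System;
using System.Net;

// Rough sketch: download robots.txt and print any Crawl-delay directive.
// "example.com" is a placeholder for the server in question.
class CrawlDelayCheck
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            string robots = client.DownloadString("http://example.com/robots.txt");
            foreach (string line in robots.Split('\n'))
            {
                if (line.TrimStart().StartsWith("Crawl-delay",
                        StringComparison.OrdinalIgnoreCase))
                    Console.WriteLine(line.Trim());
            }
        }
    }
}
```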
Typically you don't want to expose too much information about the technical setup and configuration of your server to the public, for security reasons. Of course, hiding such information is not going to prevent persistent adversaries from attacking your server, but at least it will make things harder for the script kiddies. For this reason I have my doubts that nginx exposes a public API with such information.

Please critique my proposed architecture: Windows service for parsing incoming emails into an ASP.NET database

I have an existing ASP.NET C# application for which I'd like to implement a feature that allows users to post content via email. A user would send an email to a designated address, and the system would parse the email and create database entries using the email subject, body, and any attached images. My proposed approach is to create a Windows service that polls a POP3/IMAP-enabled email provider to retrieve incoming emails. The service would then parse the emails using an existing library I found here: http://www.lesnikowski.com/mail/. The user would be matched to ASP.NET membership by the email address in the "from" field, and new records would then be inserted from the contents of the email for that user. Initially the Windows service would run on a separate EC2 instance that I'll set up for this purpose, since the current host does not permit root access, but eventually I'll probably migrate the entire site to EC2.
Before I dive in I wanted to get some feedback from you all on my overall approach and architecture. More specifically:
Is what I described above the approach you would take?
Would you recommend implementing a web service to manage the interactions between the windows service and the database of the asp.net site? Or would you recommend hitting the database directly?
If I program the Windows service to ping the email provider every 30 seconds, will that be a problem?
Do you foresee any security issues with this approach I've outlined?
What about issues with reliability (needs to be a 24x7 service)?
Additional background: the ASP.NET website is an inventory system where each entry has a name, description, and optional images. From the email, the subject becomes the name, the body becomes the description, and the attachments become the images. If you're familiar with the Posterous blogging platform, you'll have an excellent reference point for what I am trying to accomplish.
Is what I described above the approach you would take?
It would be better if you could set up an Exchange server or something similar where you get notifications about new emails, so you don't have to poll every 30 seconds, but I have never done it that way and cannot tell you whether it is even possible.
The approach itself sounds plausible, because sending emails is really easy and everybody knows how to do that.
Would you recommend implementing a web service to manage the interactions between the windows service and the database of the asp.net site? Or would you recommend hitting the database directly?
I would recommend an extra abstraction layer, because it is not much effort and improves the design. It costs a little performance (it shouldn't be much), so it depends on your requirements.
If I program the windows service to ping the email provider every 30 seconds, will that be a problem?
That depends on your email provider. Normally, and if they allow it: no. You should definitely ask them first.
If it's your own: You're good to go.
There can be problems, however, if you're doing this inside a thread and accessing the IMAP mailbox multiple times concurrently. You should try to avoid that (see the sketch below).
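One simple way to avoid overlapping mailbox access is to let a poll skip its turn if the previous one is still running; a hedged sketch, with all names illustrative:

```csharp
using System.Threading;

// Illustrative: ensure only one poll touches the mailbox at a time,
// even if a timer tick fires while the previous poll is still running.
static class MailboxPoller
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static void Poll()
    {
        // Skip this tick entirely if a poll is already in progress.
        if (!Gate.Wait(0))
            return;
        try
        {
            // ... connect to IMAP, fetch and parse new messages ...
        }
        finally
        {
            Gate.Release();
        }
    }
}
```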
Do you foresee any security issues with this approach I've outlined?
Yes. The "from" field of an email is easily forged, and that can cause problems if the target email address is known. You should definitely add some kind of extra security, such as having each user send mail to <SaltedHashThatIsDifferentForEachUser>@example.com. (Facebook does this too, for example; see the sketch below.)
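Deriving such an address could look roughly like this; the salt handling, domain, and truncation length are all assumptions for illustration:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: derive a per-user inbound address of the form <hash>@example.com.
// The salt and domain are placeholders; store the result with the user record.
static class InboundAddress
{
    public static string ForUser(string userId, string serverSideSalt)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(
                Encoding.UTF8.GetBytes(serverSideSalt + ":" + userId));

            // Hex-encode and truncate so the address stays readable.
            string token = BitConverter.ToString(digest)
                .Replace("-", "")
                .Substring(0, 20)
                .ToLowerInvariant();

            return token + "@example.com";
        }
    }
}
```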
What about issues with reliability (needs to be a 24x7 service)?
I see more problems with the reliability of your email provider than with your service, because as long as the emails are saved, you can still parse them later.
You should also check the maximum mailbox size of your IMAP account to avoid rejected mails (e.g. delete messages once you've successfully parsed them).
Would you recommend implementing a web service to manage the interactions between the windows service and the database of the asp.net site? Or would you recommend hitting the database directly?
There is no need to have a web service, it will just add complexity as well as introduce another attack target on your web server. Having your windows service hit your database directly will be simpler and more secure.
If I program the windows service to ping the email provider every 30 seconds, will that be a problem?
It should not be a problem... email providers offer POP3 and IMAP precisely so that external clients (Outlook, Thunderbird, the iPhone) can use them, so they expect to be polled constantly.
Do you foresee any security issues with this approach I've outlined?
As Simon stated, emails can be easily forged, which creates a security vulnerability. This link discusses a hacking incident on Posterous and the trade-off between ease of use and security. As a CISSP, I tend to lean toward security, especially when the vulnerability is very easy to exploit.
The unique, "secret" email address is a better solution in terms of security. However, it takes a lot away from your goal of simplifying the update process. It also makes your solution more complex and costly since you will need to be able to support (and programmatically create) an unique address for every user.
What about issues with reliability (needs to be a 24x7 service)?
Most mainstream email providers have outstanding availability. Regarding the availability of this solution itself (setting aside preexisting factors such as your current hardware and hosting facility), you would want to ensure the Windows service is well written and includes some fault tolerance. For example, the services I have written in the past handle a few select errors caused by external dependencies (the database or email being unavailable) so that the service does not crash but simply waits until they are back online. This gives better availability, since the service is ready to go as soon as the dependency recovers, without someone having to manually restart the Windows service (see the sketch below).
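The shape of that "wait until the dependency is back" loop might look something like this; the interval and method names are assumptions, not part of the original answer:

```csharp
using System;
using System.Threading;

// Sketch of a fault-tolerant polling loop: transient failures are logged
// and retried on the next tick instead of crashing the service.
static class ResilientLoop
{
    public static void Run()
    {
        while (true)
        {
            try
            {
                PollOnce(); // fetch mail, parse, insert records
            }
            catch (Exception ex)
            {
                // Keep running; the next iteration retries once the
                // mail server or database is reachable again.
                Console.Error.WriteLine(ex);
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void PollOnce()
    {
        // ... IMAP fetch + database insert go here ...
    }
}
```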
Is what I described above the approach you would take?
Given the security vulnerability created by relying on the sender of the email for authentication and authorization, I would not take this approach. If the main goal is to simplify and streamline the addition of new items from mobile platforms, I would probably create a "mobile friendly" web page to accomplish this.
I just returned from a web design conference in Seattle that was heavily focused on "non-PC" platforms. After listening to their very innovative ideas and best practices for designing for the mobile industry, I can see a web app being a great solution for achieving this goal.

Scaling an ASP.NET application

This is a very broad question, but hopefully I can get useful tips. Currently I have an ASP.NET application that runs on a single server. I now need to scale out to accommodate increasing customer loads. So my plan is to:
1) Scale out the ASP.NET and web component onto five servers.
2) Move the database onto a farm.
I don't believe I will have an issue with the database, as it's just a single IP address as far as the application is concerned. However, I am now concerned about the ASP.NET and web tier. Some issues I am already worried about:
Is the easiest model to implement just a load balancer that will farm out requests to each of the five servers in a round-robin fashion?
Is there any problem with HTTPS and SSL connections, now that they can terminate on different physical servers each time a request is made? (for example, performance?)
Is there any concern with regard to session maintenance (logon) via cookies? My guess is no, but I can't quite explain why... ;-)
Is there any concern with session data itself (stored server side)? Obviously I will need to replicate session state between servers, or somehow force a request to only go to a single server. Either way, I see a problem here...
As David notes, much of this question is really more of an Administrative thing, and may be useful on ServerFault. The link he posts has good info to pore over.
For your session questions: you will want to look at the Session State Service (it comes with IIS as a separate service that maintains state shared between multiple servers) and/or storing ASP.NET session state in a SQL database. Both are options you can find at David Stratton's link, I'm sure.
Largely speaking, once you set up your out-of-process session state, it is otherwise transparent. It does require that you store Serializable objects in Session, though.
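Switching to the State Service is mostly a configuration change; a minimal web.config sketch, with the host and port as placeholders:

```xml
<!-- Out-of-process session state via the ASP.NET State Service.
     "stateserver" is a placeholder for your shared state host;
     42424 is the service's default port. -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver:42424"
                timeout="20" />
</system.web>
```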
Round-Robin DNS is the simplest way to load-balance in this situation, yes. It does not take into account the actual load on each server, and also does not have any provision for when one server may be down for maintenance; anyone who got that particular IP would see the site as being 'down', even though four other servers may be running.
Load balancing and handling SSL connections might both benefit from a reverse proxy type of situation; where the proxy handles all the connections coming in, but all it's doing is encryption and balancing the actual request load to the web servers. (these issues are more on the Administration end, of course, but...)
Cookies will not be a problem provided all the web servers advertise themselves as the same web site (via the host headers, etc.). Each server will gladly accept cookies set by any other server using the same domain name, without knowing or caring which server set them; all that matters is the host name the browser was connecting to when it received the cookie value.
That's a pretty broad question and hard to answer fully in a forum such as this. I'm not even sure whether the question belongs here or at serverfault.com. However...
Microsoft offers plenty of guidance on the subject. The first result for "scaling asp.net applications" on Bing comes up with this:
http://msdn.microsoft.com/en-us/magazine/cc500561.aspx
I just want to bring up areas you should be concerned about with the database.
First off, most data models built with only a single database server in mind require massive changes in order to support a database farm in multi-master mode.
If you used auto-incrementing integers for your primary keys (which most people do) then you're basically screwed out of the gate. There are a couple of ways to mitigate this temporarily, but even those require a lot of guesswork and have a high potential for collision. One mitigation involves setting the seed value on each server to a sufficiently high number to reduce the likelihood of a collision... this will usually work, for a while.
Of course you have to figure out how to partition users across servers...
My point is that this area shouldn't be brushed off lightly and is almost always more difficult to accomplish than simply scaling "up" the database server by putting it on bigger hardware.
If you purposely built the data model with a multi-master role in mind then kindly ignore. ;)
Regarding sessions: don't trust "sticky" sessions; stickiness is not a guarantee. Quite frankly, our stuff is usually deployed to server farms, so we completely disable session state from the get-go. Once you move to a farm there is almost no reason to use session state, as the data has to be retrieved from the state server, deserialized, then serialized and stored back to the state server on every single page load.
Consider the database and network traffic this generates on every request; given that the whole point of session state was to reduce database and network traffic, you'll see that it no longer buys you anything.
I have seen some issues related to round-robin HTTP/HTTPS sessions. We used to use in-process sessions and told the load balancers to make the sessions sticky. (I think they use a cookie for this.)
That let us avoid SQL sessions, but it meant that when we switched from HTTP to HTTPS, our F5 boxes couldn't keep the stickiness. We ended up changing to SQL sessions.
You could investigate pushing the encryption up to the load balancer. I remember that was a possible solution for our problem, but alas, not one we investigated.
The session database on a SQL server can easily be scaled out with few code and configuration changes. You can point ASP.NET sessions at a session database, and regardless of which web server in your farm serves the request, the session-ID-based SQL state server mapping works flawlessly. This is probably one of the best ways to scale out ASP.NET session state using SQL Server. For more information, read the link: True Scaleout model for session state
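For reference, after provisioning the session database (the aspnet_regsql.exe tool that ships with the framework has options for this), pointing the whole farm at it is again just configuration; a sketch, with the connection string as a placeholder:

```xml
<!-- SQL-backed session state shared by every server in the farm.
     The connection string is a placeholder for your session database. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI"
                timeout="20" />
</system.web>
```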

Limiting number of calls to an ASMX Web Service

We have an ASMX web service hosted in IIS 6. Is there a good way to limit the number of calls to the service in a period of time for a single IP? We don't want to put a hard limit (X number of times an hour), but we want to be able to prevent a spike from a single user.
We're currently investigating to see if our firewall is capable of limiting connection attempts. In the case that our firewall is not able to limit connections, is there a good way to handle this programmatically? Rather than trying to come up with our own custom solution and reinventing the wheel, is there an existing implementation or strategy that can be used?
ASMX web services have almost no extensibility. If you have any choice, you should use WCF.
You might be able to write a method, called from each of your operations, that looks at the caller's IP, checks a database, and throws a SoapException if that IP has connected too often (see the sketch below). That's about all there is, though. You might be able to do that from a SoapExtension, but you have to be very careful with those.
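A rough sketch of that per-operation check; the in-memory store and the limits here are assumptions for illustration (the answer suggests a database, which would also survive app-pool recycles and work across a farm):

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Services.Protocols;

// Sketch: sliding-window call counter per IP, faulting the request
// once a threshold is exceeded. Call Check() from each [WebMethod].
static class CallLimiter
{
    private static readonly object Sync = new object();
    private static readonly Dictionary<string, Queue<DateTime>> Calls =
        new Dictionary<string, Queue<DateTime>>();

    public static void Check(int maxCalls, TimeSpan window)
    {
        string ip = HttpContext.Current.Request.UserHostAddress;
        lock (Sync)
        {
            Queue<DateTime> q;
            if (!Calls.TryGetValue(ip, out q))
                Calls[ip] = q = new Queue<DateTime>();

            // Drop timestamps that have fallen out of the window.
            DateTime cutoff = DateTime.UtcNow - window;
            while (q.Count > 0 && q.Peek() < cutoff)
                q.Dequeue();

            if (q.Count >= maxCalls)
                throw new SoapException("Rate limit exceeded.",
                    SoapException.ClientFaultCode);

            q.Enqueue(DateTime.UtcNow);
        }
    }
}
```

Each operation would then start with something like CallLimiter.Check(100, TimeSpan.FromHours(1)); tune the numbers to whatever counts as a spike for your service.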

Blocking access to site by banned IP addresses

I have a list of IP addresses of bots/hackers that are constantly attacking one of my sites. I want to block these visitors by IP and am trying to work out a "best" approach for this. My site uses C# ASP.NET MVC.
I have a List<int> of IP's.
Where is the best place to put the check code? I'm thinking of using the Page_Load event of a master page, but I could also put it in a filter on each controller...
What HTML do you return to a banned IP? I am reluctant to return a "site blocked because your IP is banned" page, because this gives the hackers the information they need to work around the block. The advantage of doing so is that it tells the innocent users who have been caught in the crossfire why they can't access the site. My current feeling is that I should return a "Site under maintenance" notice.
What HTTP status code should I return with a fake "Site under maintenance" notice? I'm thinking 200.
Site is running on Server 2003.
If you feel your site is being "hacked" from a specific IP, you should not be blocking that IP in software, the very thing that they intend to compromise. Blocked IPs should be blocked at the firewall.
I'd have to agree with David on this, for several reasons:
By blocking via software, hackers/bots will still be able to consume your resources (bandwidth, processor time, etc.).
Software can't protect your site against DoS attacks.
If a hacker is good, they'll find a way around software blocks.
Updating blocking code will require recompiling your application.
Your answer is in the firewall. Set up rules to block the users and they won't be able to connect.
Sending an "under maintenance" page is a terrible idea because it'll confuse normal users and won't deter a good hacker...
While you could block the IP addresses on your outward-facing servers (your web servers, obviously, but you may have others), the list would need to be replicated across all of them. By blocking on a server you're not only overcomplicating the solution but also providing a method that is not wholly secure.
The proper point to block network traffic, whether it be a select list of ports or IP addresses, is as far out on your network as you can get. This is typically a firewall/router at your entry point. These networking devices are optimized for this very purpose, as well as far beyond that. Depending on the manufacturer of your networking equipment the feature set will widely vary.
I suggest you:
Identify all routers/firewalls at the outermost boundary. It is possible you only have one, unless you're load balancing.
Learn how to configure the ACL (access control list) for those devices.
Modify the ACL based on your list of IP addresses to block.
Always save a backup of your network device config elsewhere.
Obviously this is just the tip of the iceberg in security. Perhaps at some point you'll need to contend with DoS (denial of service) attacks, and then some - oh, the fun.
Good luck.
I'd stick the code in a place where it will run as soon as possible, before the server consumes too many resources.
I would say you should send back as little information as possible, ideally HTTP status 503 (Service Unavailable) with a short message linking to an acceptable-use page, or a page explaining some reasons why people MIGHT have been blocked and what to do if they feel they have been blocked unfairly. You may wish to do this in text/plain instead of HTML, as it will use fewer bytes :)
Using an in-memory list of blocked IPs also breaks down when you have a large number of blocked addresses (say 1 million), because scanning it becomes prohibitive; remember, you need to do this for every request to the relevant resource (a hashed lookup avoids this; see the sketch below).
Ultimately you will want a way to distribute the lists of blocked IPs to all your web servers and/or keep it centralised - depending on exactly what kind of abuse you are getting or anticipating.
Having said that, you should definitely apply the YAGNI principle. If you aren't experiencing real capacity problems, don't bother blocking abusers at all. Very few sites actually do this, and most of them are things where there is a significant cost associated with running the site (such as Google search)
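If you do decide to block in code, here is a rough sketch of the controller-filter idea from the question, using a HashSet so the lookup stays constant-time however long the ban list grows; the attribute name and 503 body are illustrative:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Illustrative MVC filter: O(1) banned-IP lookup and a terse 503 response.
// How BannedIps is populated (config, database, shared cache) is up to you.
public class BlockBannedIpAttribute : ActionFilterAttribute
{
    private static readonly HashSet<string> BannedIps =
        new HashSet<string> { /* load from your ban list */ };

    public override void OnActionExecuting(ActionExecutingContext c)
    {
        string ip = c.HttpContext.Request.UserHostAddress;
        if (BannedIps.Contains(ip))
        {
            // 503 with no detail, as suggested above.
            c.Result = new HttpStatusCodeResult(503, "Service Unavailable");
        }
    }
}
```

Registering it as a global filter keeps the check in one place instead of on every controller.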
