Security Code Generation / Comparison and Atomic Clocks - C#

We've built a two-factor authentication process for our web application, along with a small standalone app that generates an 8-digit security code every minute. When a user logs in they are prompted for the security code. When it is submitted, the web app generates the security code on its end and compares it to the code entered. If the two are equal, the user is allowed into the application. This is used like an RSA token.
However, I am using atomic clock servers to make sure the security code generation is the same for both the USB app and the web app, since time zones and clock syncing pose an issue. This is a pain, not only because the servers can sometimes be unreliable, but also because we have to add firewall rules to let us reach the specific atomic clock servers. Is there a secure way to do this without using a remote atomic clock?

You don't need a precise clock, but rather the same value on both sides. So expose some sort of "current time" service from the same web app (e.g. a basic HTTP GET "/currenttime" with a JSON response) and query it from the USB app. In that case you will only need to synchronize time between the servers serving the app (if you have more than one).
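A minimal sketch of the client side, assuming a hypothetical "/currenttime" endpoint and response shape (neither is specified in the question):

```csharp
using System;
using System.Net;

// Hedged sketch: fetch the web app's notion of "now" so the USB app and
// the web app derive the security code from the same minute value.
// The endpoint path and JSON shape are assumptions.
static class ServerTime
{
    public static long GetCurrentMinute(string baseUrl)
    {
        using (var client = new WebClient())
        {
            // Assumes a response like {"unixSeconds":1700000000};
            // real code would use a proper JSON parser.
            string json = client.DownloadString(baseUrl + "/currenttime");
            string digits = json.Split(':')[1].Trim('}', ' ', '"');
            return long.Parse(digits) / 60; // both sides round to the minute
        }
    }
}
```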

If your application does not have to be fully RSA-token secure, you could modify the web application to accept the last 2 or 3 security codes. That way, you're not so dependent on time consistency.
If you do need time synchronization, you can run your own time server that is accessible to both the web application and the USB application. The time has to be consistent, not necessarily correct.
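A sketch of that widening, where GenerateCode(minute) stands in for whatever (hypothetical) derivation both apps already share:

```csharp
// Hedged sketch: accept the code for the current minute and the two
// before it, so small clock drift doesn't lock legitimate users out.
// GenerateCode(minute) is a placeholder for the shared derivation.
static bool IsCodeValid(string submitted, long currentMinute)
{
    for (int drift = 0; drift <= 2; drift++)
    {
        if (submitted == GenerateCode(currentMinute - drift))
            return true;
    }
    return false;
}
```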

Relying on external time is a bad idea, because if the time source can be manipulated (by, say, a man-in-the-middle attack, malicious upstream DNS changes, etc.), then an attacker can feed the device a future time and collect the codes it will produce.
You should really evaluate your security requirements before rolling your own crypto. It's very easy to fall victim to a number of mistakes, such as accidentally using a PRNG that is not cryptographically secure, side-channel timing attacks, and the like.
If you must do this for production, make sure you open up your implementation so that it can be reviewed.
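If you do proceed, here is a hedged sketch illustrating two of those pitfalls being avoided: the code is derived from a shared secret with an HMAC rather than a time-seeded System.Random, and codes are compared in constant time. The digit count and key handling are assumptions, not details from the question.

```csharp
using System;
using System.Security.Cryptography;

static class SecurityCode
{
    // Derive an 8-digit code from a shared secret and the minute counter,
    // instead of seeding a non-cryptographic PRNG with the time.
    public static string GenerateCode(byte[] sharedSecret, long minute)
    {
        using (var hmac = new HMACSHA256(sharedSecret))
        {
            byte[] hash = hmac.ComputeHash(BitConverter.GetBytes(minute));
            uint truncated = BitConverter.ToUInt32(hash, 0);
            return (truncated % 100000000).ToString("D8");
        }
    }

    // Constant-time comparison, so the check doesn't leak how many
    // leading characters matched (a timing side channel).
    public static bool SlowEquals(string a, string b)
    {
        if (a.Length != b.Length) return false;
        int diff = 0;
        for (int i = 0; i < a.Length; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }
}
```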

Related

WebBrowser and PCI DSS

In case the point-of-sale card reader stops working, a backup card entry method is required by the card-processing vendor. The processor's suggested method is that the application host a WebBrowser control pointed at the vendor's own site, in which the credit card info is entered at checkout, and watch for the URL to change to know when the transaction is complete and to receive the verification token.
This struck me as a potential PCI minefield:
- The keypresses are going into the same process as the rest of the point-of-sale application, and the WebBrowser also provides in-process DOM hooks.
- I'm not sure what this means for HTTPS certificate validation in case of MitM from a separate machine.
- There are probably other things I don't know about that are just as important. (Deprecated protocols and algorithms?)
To be sure, a standalone web browser could have some of these same issues, but at least they wouldn't be the responsibility of the application codebase. I wouldn't want a PCI audit to flag something unrelated just because it shares a codebase with payment entry.
Am I overthinking this since it's only a backup method to be used if the card reader is down? What is the standard way of handling this?
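For concreteness, a minimal sketch of the vendor-suggested pattern; the URLs and the "token" query parameter are placeholders, not the vendor's real contract:

```csharp
using System;
using System.Windows.Forms;

// Hedged sketch of the backup entry flow: host the vendor's hosted
// checkout page and watch for navigation to a completion URL.
class BackupCardEntryForm : Form
{
    readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };

    public BackupCardEntryForm()
    {
        Controls.Add(browser);
        browser.Navigated += OnNavigated;
        browser.Navigate("https://vendor.example.com/hosted-checkout"); // placeholder
    }

    void OnNavigated(object sender, WebBrowserNavigatedEventArgs e)
    {
        if (e.Url.AbsolutePath != "/complete") // placeholder completion URL
            return;
        // Pull the verification token out of the query string.
        foreach (string pair in e.Url.Query.TrimStart('?').Split('&'))
        {
            if (pair.StartsWith("token="))
            {
                string token = pair.Substring("token=".Length);
                // Hand the token back to the point-of-sale flow here.
            }
        }
    }
}
```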
If you were being audited, an auditor would look for the following basic things:
How frequently is the embedded browser updated by the manufacturer? How does it receive updates? Will it receive/deploy automatic updates? Or, will you have to redeploy the application whenever a critical security flaw is discovered/patched? How do you manage these updates? If the updates are automatic, how do you QA them after they're in prod? If you have to redeploy the application, how will you roll it out to users? How will you be certain that all users update from insecure versions to secure versions? How frequently are they pushed? Do you have a good set of processes to manage between updating so frequently that your users never have a clue what they're going to open up and updating so rarely that you are running extremely vulnerable software?
In practice (particularly if you're subject to a post-breach audit), is the embedded browser fully updated to protect against patched security threats?
Does the embedded browser protect against browser-based threats like drive-by downloads? Will your anti-virus solution still work with an embedded browser? Are you sure? How have you tested that?
If you were, say, running a virtual terminal inside of a browser, you'd want to be able to answer those same questions, only about the regular browser. So, using an embedded browser doesn't change the letter of PCI-DSS. However, the security processes around the embedded browser will be different.
For things like MITM attacks, I'm not entirely sure that I understand your question. An embedded browser would be as vulnerable as a regular browser to MITM, though some regular browsers have more enhanced protection against man-in-the-middle attacks. For example, if your embedded browser were an updated version of Google Chrome, I'd feel a heck of a lot more secure than if it were a version of IE 6 that hasn't seen an update this decade.
The important thing to remember is that if your cardholder data environment (CDE) is within a secure network that receives regular vulnerability scans (and if you have a good, written process governing how you perform vulnerability scans), you should be fine in the event of a breach. The kicker though is that you need to document both the process and how you follow the process.
Say, for example, that your process is to:
a.) Have an expert on your team do a vulnerability scan every second Friday.
b.) Hire an outside firm to do a full vulnerability scan once per quarter.
You'd need to have records of:
a.) Who is your expert? How was she trained? Is she qualified to do vulnerability scans? If she finds a vulnerability how is it escalated? What dates did she perform the scans? Does she have any print-outs of the results? Does she fill out a form with her findings? Do you have all of the forms? Can I see the results of the vulnerability scan she performed on December 18, 2015?
b.) When you have professional scans done, who performs them? How do you vet that the firm is qualified? How do you vet that the person who did them is qualified? What happens if they find vulnerabilities? What happens if they find vulnerabilities that your in-house expert doesn't find? Can I see their last report? Can I see the report from three quarters ago?

Can a hacker bypass internet HTTP REST calls from a desktop application?

I have made desktop software in C# and I am going to offer a 30-day free trial. I will check the date and time from some server to verify the date. My question is: can a hacker hack this and produce some kind of key, or steps to make it the full version, or release some crack of it on the market for everyone? (I know that a hacker can hack any product.)
Actually, a hacker can figure out that you are checking the date via a REST API by monitoring HTTP traffic, then point the DNS name of your API at localhost on their machine and serve you a fake REST API response.
Sure, any software running on the desktop can be decompiled or reverse engineered. Then a patch can be created to disable any security features you've built into the application.
But this requires a lot of work. Not many applications are valuable enough for some hacker to spend so much time on it.
As others have already replied, it's trivially easy to intercept HTTP(S) requests made to a server. Why don't you just use the date/time from the machine/device itself? Not many people will be willing to live with a date set back on their machine just to run your software illegally.
The real problem is where you store that date. The first time the user legitimately installs your trial, that date won't be present. What is to prevent users from deleting that date and starting the trial period over?
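A minimal sketch of the local approach, assuming a made-up registry location (and accepting, per the question above, that a user who finds and deletes the value restarts the trial):

```csharp
using System;
using Microsoft.Win32;

// Hedged sketch: remember the first-run date locally and compare it
// against the machine clock. The registry path is illustrative only.
static class TrialPeriod
{
    const string KeyPath = @"Software\ExampleVendor\ExampleApp";

    public static bool IsExpired(int trialDays)
    {
        using (var key = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            var stored = key.GetValue("FirstRun") as string;
            if (stored == null)
            {
                // First launch: record "now" from the local clock.
                key.SetValue("FirstRun", DateTime.UtcNow.ToBinary().ToString());
                return false;
            }
            DateTime firstRun = DateTime.FromBinary(long.Parse(stored));
            return DateTime.UtcNow > firstRun.AddDays(trialDays);
        }
    }
}
```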
To protect yourself from all this, run (parts of) your software in the cloud. But in that case, you'll need an authentication mechanism for your users.

Implementing a program that generates non-transferable licences

I'm taking a security class and am required to implement a licensing server that sends licenses that are non-transferable. I have no idea how to do that. Could you please give me some of your ideas?
You need to find something to tie the license to that is unique and immutable for sufficiently realistic values of unique and immutable. The canonical example is the network adaptor's MAC address. This address is usually set at the factory, "cannot" be changed, and is globally unique. (Did I hedge that enough to keep the nit-picker at bay...?)
Once you have this identifying info, making a non-transferable license is pretty easy: you basically have a trusted authority sign the address and use that as the license. If you want to check that a machine is the one the license was issued for, you just check that the signature is valid using the public part of the key.
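A minimal sketch under those assumptions: the licensing authority signs the MAC address with the private key, and the application verifies with only the public half. Key distribution and storage are omitted.

```csharp
using System.Linq;
using System.Net.NetworkInformation;
using System.Security.Cryptography;

// Hedged sketch: tie the license to the first adapter that reports a
// physical (MAC) address, per the caveats above.
static class MacLicense
{
    static byte[] GetMacBytes()
    {
        return NetworkInterface.GetAllNetworkInterfaces()
            .Select(nic => nic.GetPhysicalAddress().GetAddressBytes())
            .First(addr => addr.Length > 0);
    }

    // Run by the trusted authority (holds the private key).
    public static byte[] IssueLicense(RSACryptoServiceProvider privateKey)
    {
        return privateKey.SignData(GetMacBytes(), "SHA1"); // era-typical; prefer SHA-256 today
    }

    // Run by the application (embeds only the public key).
    public static bool IsLicensedHere(RSACryptoServiceProvider publicKey, byte[] license)
    {
        return publicKey.VerifyData(GetMacBytes(), "SHA1", license);
    }
}
```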
If you can assume web access, you can require the user to log in to a central server. It sends back a token referencing the user ID, as described in the other answer, plus a time range during which it's valid. The idea is that you don't want to require continuous web access, just access once an hour or day or whatever your risk tolerance is.
Ideally this is done behind the scenes, e.g. using an initial token obtained from the server when the user first registered their product. Your app uses this token to log into the central server for an operational token; nothing is ever done in cleartext with user names and passwords.
The benefits: this is not tied to the physical hardware like a MAC address (network card). It REALLY pisses off users when they're told that they'll lose everything because they replaced their hardware.
The drawback: a knowledgeable attacker could copy the token to additional systems. However, there are three ways of dealing with this:
1. Personalize the application. "Chris" is probably not going to be happy if the application keeps referring to him as "Bob".
2. Only allow one active instance at the server. Be careful though - this might lead to 'denial of service' attacks on your users. Or just frustration if they can't access the app at home because they forgot to log out at work.
3. Live with it. What's the cost of lost sales vs. the cost of implementing something stronger and/or pissing off honest users?
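A sketch of the token exchange described above, assuming a hypothetical /token endpoint; the operational token would then be cached and only refreshed when it expires, matching the once-an-hour-or-day risk tolerance:

```csharp
using System.Net;

// Hedged sketch: trade the long-lived registration token (obtained when
// the user first registered) for a short-lived operational token, so
// usernames and passwords never travel in cleartext.
static class LicenseClient
{
    public static string GetOperationalToken(string registrationToken)
    {
        using (var client = new WebClient())
        {
            client.Headers["Authorization"] = "Bearer " + registrationToken;
            // Assumed endpoint; returns an opaque token whose validity
            // window is enforced server-side.
            return client.DownloadString("https://licensing.example.com/token");
        }
    }
}
```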

How to authenticate client application for trust of messages sent from it

The basic question
How do I know that it is my publicly accessible (client) application that is sending my service messages? How do I know that it is just not some other application that is impersonating my application?
Some Background
Currently we log all errors that occur on our websites via log4net and WCF to a database. This works well because the web server (accessible from the web - Partly Trusted) reports these errors to the WCF service running on the application server (inaccessible from the web - Trusted) via a trusted relationship. We therefore know that all error logs are real and we need to investigate them.
With our new sites we plan to make use of Silverlight to liven things up a little. The problem we are faced with is how to report errors back from the Silverlight application running on the web consumer's PC (Untrusted) to our application server (inaccessible from the web - Trusted).
We can solve the inaccessibility problem of the application server by making the client communicate via a service facade on the web server, so that is no worry. The problem occurs when we need to be sure that the application sending the messages really is our application and not just an impersonator.
Some Thoughts
The code will be written in C# and will run in a Silverlight application on the client PC, so we cannot guarantee that it will not be decompiled and used to send fake messages to our service.
The above means that we cannot make use of conventional symmetric encryption, because we can't store our secret key in the application (it can be decompiled). Similarly we can't rely on asymmetric signatures, since any signing key stored in the application could just be extracted and used to sign fake messages (the messages would look real).
In the case of this application there is no user authentication, so we cannot use that to provide us with trust.
Yes, I know this is rather bizarre, with the error logs being better protected than the data the application displays, but it is the case :)
Any thoughts or help would be greatly appreciated!
Impossible.
You can authenticate users, but not the application.
Let's say you decide to digitally sign the application. This signature is then read at runtime by your client application, which checks its own executable binaries against it. There is nothing that prevents the adversary from simply removing this check from your application.
Even if you make it close to impossible to reverse engineer your application, the adversary could always look at the communication channel and write an imposter that looks indistinguishable from your client to your server.
The only thing you can do is validate the actions on the server against a user identity.
Presumably, your server is creating the web page that the Silverlight application sits in. You could create a short-lived temporary "key" that only that web page contains. When the Silverlight app starts up, it reads this key and uses it. Because the server itself has a constantly changing, very short list of allowed keys, you can be more sure that only your app is accessing your services.
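A sketch of that handshake, assuming the page passes the key through Silverlight's InitParams; the parameter name "pageKey" and the MainPage root are made up:

```csharp
using System.Windows;

// Hedged sketch: read the short-lived key the server embedded in the
// hosting page's <param name="initParams" value="pageKey=..."> tag and
// keep it for authenticating service calls.
public partial class App : Application
{
    public static string PageKey { get; private set; }

    // Wired to the Startup event in App.xaml, per the standard template.
    private void Application_Startup(object sender, StartupEventArgs e)
    {
        string key;
        e.InitParams.TryGetValue("pageKey", out key); // hypothetical name
        PageKey = key;
        RootVisual = new MainPage(); // assumed root page
    }
}
```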
The best advice for you in this matter is to hire a security expert to help you. This is not a unique or unusual problem -- consider any game (like WoW, for example) that is attempting to determine whether it is speaking to a true client or a fraudulent one. Even with a massive amount of effort (look up Blizzard Warden, I'm not going to link it here), they still have issues. The problem boils down to exactly how much time and effort your attacker is going to invest in thwarting your attempts to make things hard on him. Just be sure to validate everything on the server side. :)

Storing Windows passwords

I'm writing (in C# with .NET 3.5) an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. This polling will happen at repeating intervals - usually nightly, but can be configured to happen more (or less) frequently. So the poll could happen as often as every 10 minutes or as rarely as once a month. It needs to happen in an automated way, without any human intervention.
These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms.
But there are always a few black sheep out there. The program may run in non-domain environments, or in cases where some polled systems are not members of the domain. In these cases we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written is the user who sends the password to the authenticating system.
I am not sure whether I will need to use reversible encryption, which I then decrypt to plaintext at time of use, or if there is some Windows mechanism that would allow me to store and then reuse only a hash. Obviously the second mechanism is preferable; I'd like my program to either never know the password's plaintext value, or know it for the shortest amount of time possible.
I need suggestions for smart and secure ways to accomplish this.
Thanks for looking!
The answer is here:
How to store passwords in Winforms application?
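In short: for the non-domain case, DPAPI is the usual Windows mechanism. A minimal sketch (requires a reference to System.Security; the entropy value is illustrative). Note this is reversible encryption, not a hash: since the program must ultimately send the password to the remote system, decrypt as late as possible and discard the plaintext quickly.

```csharp
using System.Security.Cryptography;
using System.Text;

// Hedged sketch: DPAPI encrypts with a key Windows manages per machine
// (or per user), so the application never stores its own key material.
static class StoredCredential
{
    // Extra entropy is optional; this value is an assumption.
    static readonly byte[] Entropy = Encoding.UTF8.GetBytes("example-entropy");

    public static byte[] Protect(string password)
    {
        return ProtectedData.Protect(
            Encoding.UTF8.GetBytes(password), Entropy,
            DataProtectionScope.LocalMachine); // the service runs unattended
    }

    public static string Unprotect(byte[] blob)
    {
        byte[] plain = ProtectedData.Unprotect(
            blob, Entropy, DataProtectionScope.LocalMachine);
        return Encoding.UTF8.GetString(plain);
    }
}
```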
Well, it seems that your program needs to impersonate a user other than the context under which it is already running. It does look like a pretty automated process, but if it's not, could you simply ask the administrator to put in the username and password at the time this 'black sheep' computer is being polled?
