The card-processing vendor requires a backup card entry method in case the point-of-sale card reader stops working. The processor's suggested method is for the application to host a WebBrowser control pointed at the vendor's own site, where the credit card info is entered at checkout, and to watch for the URL to change to know when the transaction is complete and to receive the verification token.
This struck me as a potential PCI minefield:
The keypresses go into the same process as the rest of the point-of-sale application, and the WebBrowser control also provides in-process DOM hooks.
I'm not sure what this means for HTTPS certificate validation in the case of a MitM attack from a separate machine.
There are probably other things I don't know about that are just as important. (Deprecated protocols and algorithms?)
To be sure, a standalone web browser could have some of these same issues, but at least it wouldn't be the responsibility of the application codebase. I wouldn't want a PCI audit to flag something unrelated just because it shares a codebase with payment entry.
Am I overthinking this since it's only a backup method to be used if the card reader is down? What is the standard way of handling this?
If you were being audited, an auditor would look for the following basic things:
How frequently is the embedded browser updated by the manufacturer? How does it receive updates? Will it receive and deploy automatic updates, or will you have to redeploy the application whenever a critical security flaw is discovered and patched? How do you manage these updates? If the updates are automatic, how do you QA them after they're in prod? If you have to redeploy the application, how will you roll it out to users? How will you be certain that all users update from insecure versions to secure versions? How frequently are updates pushed? Do you have a good set of processes that balances updating so frequently that your users never know what they're going to open against updating so rarely that you're running extremely vulnerable software?
In practice (particularly if you're subject to a post-breach audit), is the embedded browser fully updated to protect against patched security threats?
Does the embedded browser protect against browser-based threats like drive-by downloads? Will your anti-virus solution still work with an embedded browser? Are you sure? How have you tested that?
If you were, say, running a virtual terminal inside of a browser, you'd want to be able to answer those same questions, only about the regular browser. So, using an embedded browser doesn't change the letter of PCI-DSS. However, the security processes around the embedded browser will be different.
For things like MITM attacks, I'm not entirely sure that I understand your question. An embedded browser would be as vulnerable as a regular browser to MITM, though some regular browsers have enhanced protection against man-in-the-middle attacks. For example, if your embedded browser were an updated version of Google Chrome, I'd feel a heck of a lot more secure than if it were a version of IE 6 that hasn't seen an update this decade.
The important thing to remember is that if your cardholder data environment (CDE) is within a secure network that receives regular vulnerability scans (and if you have a good, written process governing how you perform vulnerability scans), you should be fine in the event of a breach. The kicker though is that you need to document both the process and how you follow the process.
Say, for example, that your process is to:
a.) Have an expert on your team do a vulnerability scan every second Friday.
b.) Hire an outside firm to do a full vulnerability scan once per quarter.
You'd need to have records of:
a.) Who is your expert? How was she trained? Is she qualified to do vulnerability scans? If she finds a vulnerability how is it escalated? What dates did she perform the scans? Does she have any print-outs of the results? Does she fill out a form with her findings? Do you have all of the forms? Can I see the results of the vulnerability scan she performed on December 18, 2015?
b.) When you have professional scans done, who performs them? How do you vet that the firm is qualified? How do you vet that the person who did them is qualified? What happens if they find vulnerabilities? What happens if they find vulnerabilities that your in-house expert doesn't find? Can I see their last report? Can I see the report from three quarters ago?
Related
I have made a desktop application in C#, and I am going to offer a 30-day free trial of the software. I will check the date and time from some server to check the date ... My question is: can a hacker hack this and produce some kind of key, or steps to make it the full version, or release some crack of it on the market for everyone? (I know that a hacker can hack any product.)
Actually, a hacker can figure out that you are checking the date via a REST API by monitoring your HTTP traffic, then resolve the DNS name of your API to localhost on their own machine and serve you a fake REST API response.
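For illustration, a minimal sketch of that attack (the hostname is hypothetical): after adding a hosts-file line like 127.0.0.1 api.vendor.example, a few lines of C# are enough to serve a fake date to the trial check.

    using System.Net;
    using System.Text;

    class FakeTimeServer
    {
        static void Main()
        {
            // Assumes the hosts file already points the app's API hostname at 127.0.0.1.
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:80/"); // binding port 80 needs admin rights
            listener.Start();

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();
                // Always report a date inside the trial window, whatever the real time is.
                byte[] body = Encoding.UTF8.GetBytes("{\"utcNow\":\"2015-01-01T00:00:00Z\"}");
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }

HTTPS with proper certificate validation raises the bar here, but as the other answers note, the check itself can simply be patched out of the binary.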
Sure, any software running on the desktop can be decompiled or reverse engineered. Then a patch can be created to disable any security features you've built into the application.
But this requires a lot of work. Not many applications are valuable enough for some hacker to spend so much time on it.
As others have already replied, it's trivially easy to intercept HTTP(S) requests made to a server. Why don't you just use the date/time from the machine/device itself? Not many people will be willing to live with a date set back on their machine just to run your software illegally.
The real problem is where you store that date. The first time the user legitimately installs your trial, that date won't be present. What is to prevent users from deleting that date and starting the trial period over?
To protect yourself from all this, run (parts of) your software in the cloud. But in that case, you'll need an authentication mechanism for your users.
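To make the local approach concrete, here is a minimal sketch that records the first-run date in the registry and checks a 30-day window. The registry path is hypothetical, and, per the caveat above, the user can simply delete the value to restart the trial.

    using System;
    using Microsoft.Win32;

    static class TrialCheck
    {
        // Hypothetical registry location; a real product would obfuscate or duplicate this.
        const string KeyPath = @"Software\MyCompany\MyApp";

        public static bool IsTrialExpired()
        {
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
            {
                var stored = key.GetValue("FirstRun") as string;
                if (stored == null)
                {
                    // First launch: record today's date.
                    key.SetValue("FirstRun", DateTime.UtcNow.ToString("o"));
                    return false;
                }
                DateTime firstRun = DateTime.Parse(stored, null,
                    System.Globalization.DateTimeStyles.RoundtripKind);
                return DateTime.UtcNow > firstRun.AddDays(30);
            }
        }
    }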
So I have bundled software that a client can download and install (using an MSI on Windows machines).
Part of this software is a MongoDB database that stores client info, configurations, etc.
When the software is first installed, it creates an empty folder for the MongoDB data, and whenever the software starts, it starts a mongod process (using C#'s Process.Start()): mongod.exe --dbpath <path> --port <port> --quiet.
My goal is to secure the MongoDB database with a username/password that will be known only to my application.
This will help prevent tampering with my clients' data from the outside, as well as make it harder (but not impossible, see below) for the clients themselves to tamper with the application's data.
The general idea, I guess, is, on installation (or on startup), to create a user with read/write privileges which my software will use to communicate with the database.
So my questions are:
1. How do I programmatically do this? I guess this is the right direction, but I couldn't find much info in the C# driver docs.
2. How do I deal with upgrades? I.e., for clients who installed a previous version of the software, where the database is not secured at all, I would like to create a user with a password in that case as well.
3. How do I store the application user's credentials in my application? In a config file? But that can be read by the client. Any best practices here?
Version info (unfortunately, because of my company's issues, we're not using the latest product versions): MongoDB 2.6, MongoDB driver for .NET 1.5.0.
Thanks!
P.S. I have read through the security section on the MongoDB website, but wasn't able to find a simple example for the use case I'm trying to implement... maybe I'm just missing something simple here.
This is kind of an interesting, unusual use case.
First of all, I want to make sure you're aware of the licensing/copyright implications of bundling MongoDB with your software. You should check out the license section of the mongo project GitHub page and read up on the AGPL.
Second, the easiest part of your question:
How do I store the application user's credentials in my application? In a config file? But that can be read by the client. Any best practices here?
This goes beyond MongoDB. If a user owns the system that the mongod process is running on, they can just copy the data files and set up a no-auth mongod on top of your application data. You cannot reasonably stop them from doing things like that, so do not count on your application's data being secure from the client user. Plus, if you install your application code locally, any decently smart and committed person will be able to extract the username and password from the compiled application code. You can make it hard, but not impossible.
Third,
How do I programmatically do this?
Based on what I just said, I'm taking "this" to mean
on installation (or on startup), to create a user with read/write privileges which my software will use to communicate with the database.
not the part about having it be secure from the person who owns the computer it's installed on, because that's not possible. To do this, I'd either package a mini datafile to start the mongod on top of, one that has the users set up already, or include a dump that you load into the mongod with something like mongorestore after you start it up. The first option is way simpler to implement and should not require you to take down and respawn the mongod process, so try that first - see if you can set up a mongod with auth configured how you want it and then transplant the user info by copying the data files. FWIW, I'm pretty sure the passwords are not stored in plain text in the data files (they are hashed and salted), so you won't have them directly exposed from the data files.
Finally,
How do I deal with upgrades?
You'll have to take down their mongod, restart it with auth enabled, use the localhost exception to create the users you need, turn off the localhost exception (optional, but why not), and then end that connection and start new ones using auth. It's the same process as in the security tutorials; you just have to do it with C# driver commands. Note that moving between MongoDB versions is also tricky, as the security model has improved over time, so you should consult the upgrade guide for extra steps to make sure the user schema gets upgraded correctly if you are moving a user from a secure 2.6 to a secure 3.0, say.
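A rough sketch of that sequence, using 2.6's createUser command through the legacy 1.x driver's RunCommand. Paths, port, database, and credentials here are all hypothetical, and credential handling varies a bit across 1.x driver releases, so treat this as an outline rather than a drop-in implementation.

    using System.Diagnostics;
    using MongoDB.Bson;
    using MongoDB.Driver;

    class AuthUpgrade
    {
        static void Main()
        {
            // 1. Restart the bundled mongod with auth enabled.
            //    (Real code should wait/retry until mongod accepts connections.)
            Process.Start("mongod.exe", @"--dbpath C:\MyApp\data --port 27018 --auth --quiet");

            // 2. With no users defined yet, 2.6's localhost exception lets this
            //    first local connection create one user admin in the admin database.
            var client = new MongoClient("mongodb://localhost:27018");
            MongoDatabase admin = client.GetServer().GetDatabase("admin");
            admin.RunCommand(new CommandDocument
            {
                { "createUser", "siteAdmin" },
                { "pwd", "adminPw" },
                { "roles", new BsonArray { "userAdminAnyDatabase" } }
            });

            // 3. Reconnect as that admin and create the application's own user.
            var authed = new MongoClient("mongodb://siteAdmin:adminPw@localhost:27018/admin");
            MongoDatabase appDb = authed.GetServer().GetDatabase("myappdb");
            appDb.RunCommand(new CommandDocument
            {
                { "createUser", "appUser" },
                { "pwd", "s3cret" },
                { "roles", new BsonArray { "readWrite" } }
            });

            // From here on, the application connects as appUser with auth required.
        }
    }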
The C# driver connection string can accept login credentials for the database:
mongodb://username:pwd@server:port/dbname
For example:
mongodb://myuser:mypassword@mydbserver:30254/mydb
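With the 1.x driver you're on, using it would look roughly like this (server name and credentials are placeholders):

    using MongoDB.Driver;

    class Connect
    {
        static void Main()
        {
            // Credentials are embedded in the connection string; the database in the
            // URI is the one the user authenticates against.
            var client = new MongoClient("mongodb://myuser:mypassword@mydbserver:30254/mydb");
            MongoDatabase db = client.GetServer().GetDatabase("mydb");
        }
    }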
The best way is to store the data in a config file. If you are worried about exposing it, it can be encrypted and stored. Another, less likely, option is to store it in a resource file and reference that as a string.
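If you go the encrypted-config route, one option on Windows is DPAPI via ProtectedData, so you don't have to manage a key yourself. A sketch; the entropy value is an arbitrary placeholder, and you'll need a reference to the System.Security assembly:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class CredentialStore
    {
        // Optional extra entropy; placeholder value, keep it out of the config file.
        static readonly byte[] Entropy = { 4, 8, 15, 16, 23, 42 };

        public static string Protect(string connectionString)
        {
            byte[] cipher = ProtectedData.Protect(
                Encoding.UTF8.GetBytes(connectionString),
                Entropy,
                DataProtectionScope.LocalMachine); // any account on this machine can decrypt
            return Convert.ToBase64String(cipher);
        }

        public static string Unprotect(string base64)
        {
            byte[] plain = ProtectedData.Unprotect(
                Convert.FromBase64String(base64), Entropy, DataProtectionScope.LocalMachine);
            return Encoding.UTF8.GetString(plain);
        }
    }

Per the answer above, this only keeps casual readers out of the config file; anyone who controls the machine can still recover the string.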
I have a web application (ASP.NET WebForms). Now I need to give this application to my client for offline installation (installed on a local server and accessed via LAN). I want to protect this application from being copied.
All I can think of now is:
I should maintain an online server and have an activation page which runs when the web app is run for the first time. It should connect to the server and get a valid license against a key (entered by me in Web.Config during installation) and machine parameters.
Also, if I code it, I need to take care of the system clock and other naive issues.
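Roughly, the activation call I have in mind would look something like this (the endpoint and the machine fingerprint are just placeholders):

    using System;
    using System.Net;

    class Activation
    {
        static bool Activate(string licenseKey)
        {
            // Placeholder fingerprint; a real one would hash MAC address,
            // CPU id, volume serial, and so on.
            string machineId = Environment.MachineName;

            using (var web = new WebClient())
            {
                // Hypothetical activation endpoint.
                string url = "https://activation.example.com/activate"
                    + "?key=" + Uri.EscapeDataString(licenseKey)
                    + "&machine=" + Uri.EscapeDataString(machineId);
                string response = web.DownloadString(url);
                return response.Trim() == "OK"; // ideally a signed license, not a bare OK
            }
        }
    }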
Now, I have two questions.
Are there any other options to safeguard a web app?
Does the solution I'm planning to code, already exists?
Thanks for reading and trying to help. :)
No, no, no... you can't totally protect your ASP.NET app like this.
The customer (if they want to) can decompile your code and replace your activation methods, so the application will always think that it's "legally activated", or, for example, they can write a fake activation server that will always activate your software... It's really not that hard, especially since your application is based on .NET.
This kind of "protection" mainly serves to make creating illegal copies take a little longer (several weeks, or a month or two), so your sales department can sell many copies to legal customers, and losses from illegal usage may not be so huge at project start time. Or they can be huge anyway, even with some "super-super-super commercial protection product for your apps". It depends on luck and the popularity of your app.
The only, and I mean ONLY, way to protect your ASP.NET application from illegal copying with a 100% guarantee is... NOT to give the application to the client for local installs at all. Use a SaaS model for selling your app, or, if that's not possible, do this for some critical parts of your application.
We've built a two-factor authentication process for our web application. We've built a small standalone app that generates an 8-digit security code every minute. When a user logs in, they are prompted for the security code. When it is submitted, the web app generates the security code on its end and compares it to the security code entered. If the two are equal, the user is allowed into the application. This is used like an RSA token.
However, I am using atomic clock servers to make sure the security code generation is the same for both the USB app and the web app, as time zones and clock syncing pose an issue. This is a pain not only because the servers can sometimes be unreliable, but also because we have to add firewall rules to allow us to hit the specific atomic clocks. Is there a secure way to do this without using a remote atomic clock?
You don't need a precise clock, but rather the same value. So expose some sort of "current time" service from the same web app (i.e. a basic HTTP GET "/currenttime" with a JSON response) and query it from the USB app. In this case you only need to synchronize time between the servers serving the app (if you have more than one).
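For example, a trivial handler on the web app side and the matching query from the USB app. Names and the URL are hypothetical, and the JSON parsing is deliberately crude:

    using System;
    using System.Net;
    using System.Web;

    // Server side: /currenttime.ashx returning Unix time as JSON.
    public class CurrentTimeHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            long unixNow = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds;
            context.Response.ContentType = "application/json";
            context.Response.Write("{\"unixTime\":" + unixNow + "}");
        }

        public bool IsReusable { get { return true; } }
    }

    // Client side (USB app): fetch the server's notion of "now".
    class TimeClient
    {
        static long GetServerTime()
        {
            using (var web = new WebClient())
            {
                string json = web.DownloadString("https://yourapp.example.com/currenttime.ashx");
                // Crude parse for the sketch; use a real JSON library in practice.
                return long.Parse(json.Trim('{', '}').Split(':')[1]);
            }
        }
    }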
If your application does not have to be fully RSA token secure, you could modify the web application to accept the last 2 or 3 security codes. That way, you're not so dependent on time consistency.
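Concretely, if the codes are derived HMAC-style from a shared secret and a minute counter (RFC 6238 works essentially this way), accepting a window is just checking a few adjacent counter values. A sketch, with a placeholder secret:

    using System;
    using System.Security.Cryptography;

    static class CodeVerifier
    {
        // Shared secret; in the real system this lives in both the USB app and the web app.
        static readonly byte[] Secret = Convert.FromBase64String("c2hhcmVkLXNlY3JldA==");

        static string CodeFor(long minuteCounter)
        {
            using (var hmac = new HMACSHA1(Secret))
            {
                byte[] hash = hmac.ComputeHash(BitConverter.GetBytes(minuteCounter));
                // Truncate to 8 digits, RFC 4226 style (simplified).
                int offset = hash[hash.Length - 1] & 0x0F;
                uint bin = ((uint)(hash[offset] & 0x7F) << 24)
                         | ((uint)hash[offset + 1] << 16)
                         | ((uint)hash[offset + 2] << 8)
                         | hash[offset + 3];
                return (bin % 100000000).ToString("D8");
            }
        }

        // Accept the current minute plus the previous two, as suggested above.
        public static bool IsValid(string submitted)
        {
            long now = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalMinutes;
            for (long step = now - 2; step <= now; step++)
                if (CodeFor(step) == submitted) return true;
            return false;
        }
    }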
If you have to have time synchronization, you can run your own time server that can be accessed by the web application and the USB application. The time has to be consistent, not necessarily correct.
Relying on external time is a bad idea, because if the time source can be manipulated (by, say, a man-in-the-middle attack, malicious upstream DNS changes, etc.), then an attacker can feed the device future times and collect codes that will be valid later.
You should really evaluate your security requirements before rolling your own crypto. It's very easy to fall victim to a number of mistakes, like accidentally using a PRNG which is not cryptographically secure, side-channel timing attacks, or similar.
If you must do this for production, make sure you open up your implementation so that it can be reviewed.
I typically have not worried at all about piracy or copy protection with software however I currently find myself in a unique situation. I develop an application for repairing computers for a specific computer repair company. Recently an employee has decided to quit the company after only working there for one month, and took my toolset with her. She then started a computer repair company out of her home and is using my toolset to fix computers. I am not particularly concerned with this person as our lawyers are already in hot pursuit. My concern is with future instances of this where I may not find out about them.
What I would like are some ideas for ways to protect and/or phone home without being too overbearing. I hate software that is so protected it becomes annoying or, worse yet, worthless. This application is never supposed to leave the walls of the computer repair company, as they do not do on-site repair, and I think I can use this to my advantage.
I do have a couple of ideas about how to go about restricting usage to within the company, but I would like to hear how others have dealt with situations like this. Currently I keep going back to checking the network for specific servers or IP ranges, but does anyone else have any other ideas?
First, I think you have to decide what you are protecting against; as game developers have learned over the years, you cannot stop people from copying your app/game.
Assuming you want to protect yourself against the above scenario happening again, and that your app has network access always (or at least during normal use), I can think of two OK solutions.
Phone home:
Have the application phone home to some server software, either on the company network or via the internet. Have the application send some information to the server, and have the server respond with either an OK or a die command.
To prevent someone from stealing the server, hardcode the server application (if it's installed at the company) to accept one physical server (i.e. require that the machine has MAC address X, CPU serial Y, and mainboard serial L).
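A bare-bones version of that phone-home check might look like this (the URL is hypothetical; whether to fail open or closed when the server is unreachable is your call):

    using System;
    using System.Net;

    static class PhoneHome
    {
        // Hypothetical license server, either on the company LAN or on the internet.
        const string LicenseUrl = "https://license.example.com/check";

        public static bool IsAllowedToRun()
        {
            try
            {
                using (var web = new WebClient())
                {
                    // Send a machine identifier; the server answers "OK" or "DIE".
                    web.QueryString.Add("machine", Environment.MachineName);
                    string verdict = web.DownloadString(LicenseUrl);
                    return verdict.Trim() == "OK";
                }
            }
            catch (WebException)
            {
                // Server unreachable: decide whether to fail open (run anyway)
                // or fail closed (refuse to start). Failing closed is safer here.
                return false;
            }
        }
    }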
AppServer sending verification
Since you were thinking about sniffing the network for traffic, that's possible, but it might be better to have a server component that sends out a verification code (i.e. some public/private key signed message with a timestamp?) at periodic intervals.
Depending on server X sending some network traffic every now and then does not seem robust, and could create issues (e.g. that server gets removed, but nobody knows your app depends on it responding to pings).
Also, depending on being able to ping XXX.XXX.XXX.XXX and see some MAC address on the network is fairly simple to fake.
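Sketching that idea: the server signs a timestamp with its private key, and the app verifies it with an embedded public key, so faking an IP or MAC address is no longer enough. This assumes the RSA key pair is provisioned out of band:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class Heartbeat
    {
        // Server side: sign the current timestamp with the server's RSA private key.
        public static byte[] Sign(RSACryptoServiceProvider privateKey, out string stamp)
        {
            stamp = DateTime.UtcNow.ToString("o");
            return privateKey.SignData(Encoding.UTF8.GetBytes(stamp),
                                       new SHA1CryptoServiceProvider());
        }

        // Client side: verify with the public key baked into the app,
        // and reject stale stamps to stop simple replay.
        public static bool Verify(RSACryptoServiceProvider publicKey,
                                  string stamp, byte[] signature)
        {
            bool genuine = publicKey.VerifyData(Encoding.UTF8.GetBytes(stamp),
                                                new SHA1CryptoServiceProvider(), signature);
            bool fresh = DateTime.UtcNow - DateTime.Parse(stamp, null,
                System.Globalization.DateTimeStyles.RoundtripKind) < TimeSpan.FromMinutes(10);
            return genuine && fresh;
        }
    }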
I've been looking recently at using the open source Rhino Licensing solution http://hibernatingrhinos.com/open-source/rhino-licensing - it seems like quite a sophisticated solution, and the source includes an example application which you can alter for your individual needs - e.g. you don't have to lock the user out if you don't want to.