Writing an online payment processing system - C#

We've been asked to create an online payment processing system, something like PayPal, for national use.
Does an open-source version of this exist (so I could study it and maybe improve on it)?
Are there any books/resources/materials that could be useful?
How can I go about taking on such a huge task?

To start with, determine who is going to process your credit card transactions, as they can most likely handle debit cards as well.
Processing these yourself is a pain, as there are standards that have to be met, and you end up paying for the privilege. Unless you will be processing an enormous volume of transactions, you are better off using a company that can already process them.
So, to start with, just design the system using something like Payflow to do the actual processing.
Once you are up and running, and you have dealt with the PCI standards for protecting credit card data, you can look at phasing out your payment gateway and doing it yourself. But that should only happen if you determine that you are processing sufficient volume to be worth the additional development and resource costs of doing it yourself.
You will need a signed certificate to assure people that your site is safe, and strong encryption to protect the credit card data. Make certain that the passphrase or symmetric key is not stored on the computer, but is kept only in memory that will never be swapped to a hard drive; otherwise someone could steal it by copying the hard drive.
http://www.allbusiness.com/sales/internet-e-commerce-securelectronic-transaction/2310-1.html
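To make the "let a gateway do the actual processing" advice concrete, here is a minimal server-side sketch in C#. The endpoint URL, field names and response check are illustrative placeholders, not the real Payflow API; the point is simply that your code hands the charge request to the gateway and never stores raw card data itself.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch: delegate card processing to an external gateway so that
// raw card data never needs to be stored on your own servers.
public class GatewayPaymentClient
{
    private readonly HttpClient _http = new HttpClient();

    // Hypothetical endpoint and credentials; a real gateway defines its own
    // URL, authentication scheme and field names.
    private const string GatewayUrl = "https://gateway.example.com/charge";
    private const string MerchantId = "YOUR_MERCHANT_ID";

    public async Task<bool> ChargeAsync(string cardToken, decimal amount, string currency)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["merchant_id"] = MerchantId,
            ["card_token"]  = cardToken,   // a token issued by the gateway, not the card number
            ["amount"]      = amount.ToString("F2"),
            ["currency"]    = currency
        });

        HttpResponseMessage response = await _http.PostAsync(GatewayUrl, form);
        string body = await response.Content.ReadAsStringAsync();

        // A real integration would parse a structured response and handle
        // declines, timeouts and idempotency explicitly.
        return response.IsSuccessStatusCode && body.Contains("APPROVED");
    }
}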

Well, first off, you need solid knowledge of the way your country handles money in terms of laws (VAT, refunds, and so on), so it might help if you tell us that.
Second, this is not a product, it's a service, so you need staff to support your users. You also need to make deals to process credit cards, e-checks and so on. And since you're dealing with possibly large amounts of money, you also need financial and legal advice (read this E-Gold statement and the Wikipedia entry). You also need to take security very seriously, both virtual and physical, so you'll need to contract several different teams to independently analyze and audit your system.
This is really a broad question. I'd suggest you read all the Wikipedia entries about PayPal and other processing systems and then explain your exact problem in a little more detail (though you might really want to keep some stuff secret, since this is a publicly accessible website).

+1 each to @James Black and @eyze for their answers. This is not a minor undertaking, and unless you work for a company that is already affiliated with the credit processing network in some way, you're in for a lot of work and a lot of compliance issues. Their answers were good enough that I don't have a lot to add, but I would like to add this.
We looked at working with a company that is already in the business of authorizing credit card transactions, but they work primarily with POS systems and terminals rather than as an Internet gateway. We wanted to stick with them for processing web site payments, since we use them for our stores. In essence, since they didn't function as an Internet payment gateway, we would have needed to write our own payment gateway with them in the background. After weeks of research, we came to the conclusion that even though this was technically within our capability, and even though we have knowledge of PCI and other applicable standards, this is something best left to companies that do it as their primary business. We'll be going with one of the pre-existing gateways.
Also, to answer your specific questions:
I do not believe anything open source exists. The backbone on which credit card processing is done is so sensitive, and such a target for attackers, that there is a very real need to limit the knowledge of how to process cards to a need-to-know basis. (I lost count of the number of non-disclosure agreements I was presented with just to research the idea.)
For the same reason, I doubt you will find much in the way of books, etc.
If you're working with a company that already processes cards, then you're a step ahead, but if you're trying to break into the business you are going to face huge hurdles.

I'd think your teachers want you to learn how to plan, not how to copy, so don't look for a reference implementation; instead, learn how to think about a problem.
The trick to solving any large problem is breaking it down into small problems.
So do this:
Write out what you need to do on paper.
Draw pictures.
Locate all the individual bits of functionality you need, and draw screens of how it will look.
Discuss the experience of the user.
Break things up into modules.
Get to work writing it.
You will also want to consider testing it, and making sure it delivers all the functionality you need.
Once you start thinking about a problem with a pencil and paper in your hand, it becomes very easy, IMHO :)

Related

How to efficiently store a constant stream of stats

I'm sure this has been asked dozens of times but I can't seem to find the correct terms to google for to get the info I need.
I look after a video streaming platform built in ASP.NET MVC 5.2. We film and stream live events. Some of our events have thousands of users watching at a time; sometimes it's only a couple of dozen.
We need to store watching statistics both to know how many users we had watching AND also how much they watched.
This is especially important for some clients who need to know if specific users watched training sessions all the way through.
The current thinking is that we will periodically (once a minute or so) fire an Ajax call off to the server, which will store the info via Entity Framework.
Our concerns are:
Will we hit a concurrent connections limit and bring the site down? If so, how do we protect against this (some kind of caching before writing, maybe Redis)?
Is Ajax the right approach, or should we use SignalR or some other WebSocket-based method, and if so can we make it work on older and mobile browsers (IE9, Safari, etc.)?
Will this much data become too unwieldy and take forever to read / write once the table gets big?
We're completely open to using something other than our current SQL Server approach but I can't find the right thing to Google for to find an appropriate solution.
So, can anyone either tell me what I should be searching for (i.e. is there a name for this requirement) or perhaps make some suggestions of products or tutorials that cover this?
Thanks,
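A minimal sketch of the "periodic heartbeat plus buffered writes" approach described in the question, assuming ASP.NET MVC 5; the WatchHeartbeat type, the 30-second flush interval and the commented-out Entity Framework calls are illustrative assumptions rather than a drop-in solution.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Web.Mvc;

// Hypothetical DTO posted by the player page once a minute.
public class WatchHeartbeat
{
    public Guid UserId { get; set; }
    public int EventId { get; set; }
    public int SecondsWatched { get; set; }
    public DateTime ReceivedUtc { get; set; }
}

public class StatsController : Controller
{
    // In-memory buffer shared across requests; avoids one database write per heartbeat.
    private static readonly ConcurrentQueue<WatchHeartbeat> Buffer =
        new ConcurrentQueue<WatchHeartbeat>();

    // Flush the buffer to the database in batches every 30 seconds.
    private static readonly Timer FlushTimer =
        new Timer(_ => Flush(), null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

    [HttpPost]
    public ActionResult Heartbeat(WatchHeartbeat beat)
    {
        beat.ReceivedUtc = DateTime.UtcNow;
        Buffer.Enqueue(beat);          // cheap: no database round-trip on the request path
        return new HttpStatusCodeResult(204);
    }

    private static void Flush()
    {
        var batch = new List<WatchHeartbeat>();
        WatchHeartbeat beat;
        while (Buffer.TryDequeue(out beat))
            batch.Add(beat);

        if (batch.Count == 0) return;

        // Write the whole batch in one go, e.g. via Entity Framework's AddRange
        // or SqlBulkCopy for larger volumes (context omitted here for brevity):
        // using (var db = new StatsDbContext())
        // {
        //     db.WatchHeartbeats.AddRange(batch);
        //     db.SaveChanges();
        // }
    }
}

Batching like this trades a small risk of losing the last partial buffer (for example on an app-pool recycle) for far fewer database round-trips.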

What to do if one detects a flaw in the architecture of the software

Currently our team of 11 people is working on a project on the ASP.NET platform. The timeline for the project is 8 months and we have already done 4 months. Now, working on new functionality, we find that there are some flaws in the architecture of the system, due to which we are facing a lot of problems. Whom should we look to for solving this: the team lead or the project manager? Have you ever faced this scenario? What is the best thing to do then?
If you've realised this and no one else has, and you want the project to succeed, then it's up to you to put a few extra hours in and figure out a plan, and then sell it to the people who are nominally responsible for the decision making.
Real projects do have "restarts". It doesn't mean that you throw away all your existing work. It means you find a new shell to fit pieces of that work into. This is a lot easier if your work-so-far consists of well-understood self-cohesive little components, loosely coupled with each other. This is why people work that way - because they know from experience that they're going to have to rearrange things. Almost all features added to programming languages over the decades are due to the fact that we know each chunk of code we write may be in for a choppy voyage - it may survive, but will probably have to cope with a lot of unfamiliar environments on the journey to release.
So you've noticed that relevant information emerges continuously, throughout a project. It doesn't conveniently all show up in a single burst right before you start typing code. Writing a specification document doesn't solve this problem at all. So you need to change the way you work so that the project draws in new information all the time - new emerging information from outside is the food that drives your project forward. Try to look forward to each new surprising revelation and greet it as a friend!
What does this mean? It means that the "architecture" parts of a project are no more stable than the "little details". You have to be able to change the architecture. You don't have enough information at the start to make a permanent decision about anything.
The underlying problem may be the fact that you have an eight-month project in the first place. Real eight-month projects (ones that succeed on time) are actually, if you look closely, a series of shorter projects: 16 two-week projects is ideal.
You need to put all the project's aims (so far) into a big list in priority order. Write each feature requirement from the user's perspective. "The user must be able to blah blah blah", that kind of thing. Avoid talking about the technical issues of the current design. Think about how a user would deal with having no product at all (or whatever they currently use) and talk about how their experience would be improved by a particular feature.
The important thing is the priority order. The aim is to be able to say: we only have time to ship with the first 10 items done. That's better than 9 items, which in turn is better than 8, and so on. But even with 8 items it would be better than nothing, because each item is a feature that by itself would improve the user's experience.
The list is called the backlog.
If you compare your work so far with the backlog, you'll typically find that you've been working on low priority stuff, because you imagine you'll need it later. Try not to do that from now on. The low priority stuff is... low priority. What if some new higher priority requests emerge between now and ship date? They almost certainly will! Despite what some people will claim ("It will be totally useless without feature A!"), you could probably ship with neither feature A nor feature B. But if you had to pick one, you'd go with feature A. And you may well have to ship without feature B, due to lack of time. So don't jeopardise A for the sake of being "ready" for adding B later. Only prepare in advance if it costs you next-to-nothing - leave places where you can add things, make everything extensible, but not if it slows you down right now.
Then start working on a new version of the product (cannibalising the work done so far where it makes sense) that takes care of the first few items on the list - the bare minimum. Spend no more than a week on this. A week is 6.25% of your remaining time, so it's actually pretty expensive. But at the end of this you have a picture of what you're ready to ship so far. That is the only way to measure your progress from now on.
The rest of your project consists of:
Repeatedly cranking out new working versions of the product, each time adding a few more features from the priority list. Get a small community of users to work with each version and give direct feedback to your team. Aim to do a new version every couple of weeks.
Turning the user feedback into new "stories" to go "on the backlog". This of course involves prioritising them.
You do this over and over, in short iterations, until you run out of time (you probably have time to do six to eight iterations). At the end of each iteration you have something to ship that is "better" (has more high-priority features, incorporates more feedback) than what you had at the end of the previous iteration. This is progress.
Each end-of-iteration release has two purposes: to show progress and make the user community a bit happier, of course, but also to elicit more feedback, to find out new information. Every version is both a solution and a probe. This dual nature continues after the first public release. A public release is a deep space probe that you send out into the solar system to send back pictures of strange new worlds (in the form of exception stack traces).
The whole thing is scientific and rational. You make decisions about order of work based on order of priority. You get constant feedback based on a working version of your product, instead of having to guess the feedback you'd receive from an imaginary version of your product.
People will respond to this approach by saying that it will be horribly "inefficient". Efficiency is a relative term. Projects that don't work this way always end up working this way in the end. But "in the end" is very late. Usually there's a mad panic for an extra N months after the original deadline, where the project keeps producing repeated versions of the product that are all "nearly right" or "nearly done", in a crazy self-deluded parody of iterative development.
Fortunately, you can start thinking and working this way at any time. Better to start halfway to the original deadline than shortly after it.
Just raise your issues in a team meeting that involves both the team lead and the project manager. Put forward all your views frankly and effectively. This would in turn help the future of the project and give the PM new insight.
Further, all 11 team members can discuss this among themselves and share one another's views, not just about the problem but also about possible solutions. In the end, share all the valid solutions with the TL and PM. This whole process would eventually help recover the project from these mid-development issues.
A good link for what to do is this
Raise the matter with your project manager. Present a list of options, with the best estimate you can make of the time to complete each, along with an objective list of pros and cons.
Offer your advice as best you can, but at the end of the day it's probably not your call to make on which way it will be resolved.
From what you say, it seems you have technical debt.
That's a problem every project faces, mainly because you now have much more knowledge of the domain than you did 4 months ago.
It's a tradeoff: if the changes are not too drastic, you might pull them off. If they are radical, you might reach the deadline with some of the features and then schedule time for refactoring.
Remember, like real debt it will keep accruing interest until you pay it off.
Good luck!
I'd start by getting a consensus across the team of what the issues are and how they might be resolved. Ask yourselves: What's the overall impact? Is it a complete roadblock? Are there ways around the issues without having to make major changes? It might not be too serious, and by discussing the issues you might be able to agree on a quick resolution.

Using gaming concepts to build user agents for market research purposes

I work for a market research company in the online space. We have been spending all of our cycles for over a year and a half building the next big thing in this space with regard to profiling our respondents (over time) to better place them in available surveys. Something that one of our researchers has asked us for many times (rightly so) is a tool that will prove the worth of this new profiling system and predict the outcome of tweaks to its many algorithms and rules, to show which version of a rule set has a better outcome.
The goal is to be able to take a sliver of our profiling system (a static slice of Q&A data for a given time - sex:male/female, drinks:coke/pepsi/mt.dew, income:etc.) and run user agents (artificially developed software robots or agents) through our profiling system to see what the interactive results would be. As the Q&A data would be the same, and the user agents' abilities to choose answers would be the same, only the algorithms and rules behind how the profiler works would change; this would theoretically allow us to pre-determine the outcome of any changes to our system. That would then allow us to prove out changes before pushing them to our production system. The hope is that we could more easily catch errors before releasing to the wild, but this would also allow us to test changes to the logic to hunt for optimizations in the profiler.
My question: For someone like me (C#/.NET mostly) who has really only worked in the web application space, where do I look to get started in building user agents that are able to interact with an outside system such as my profiling system? I specifically need to know how to spin up 1000 (one thousand) agents and have them interact with my profiling system (over a given amount of time) by being able to answer the questions that are presented to them by the profiling system based on characteristics that are dynamically defined on the user agent at the time of initialization.
An example of this is that I need some black agents, some Chinese agents, some male agents, some female agents, some old agents, some new agents, some religious agents, some that drink coke, etc., all of them mixed together to most appropriately resemble the world. We already have the demographic breakdown of our population, so we can easily spin up 10% black males, 60% white female stay-at-home mothers, and all the other representations of our population.
My first thought for creating a system like this was to use the power of my Xbox 360 and some well-thought-out agents that resemble a person from an object-oriented point of view, with some added characteristics so they can intelligently answer some questions... and guess at others.
In speaking with a colleague, it was suggested that I use one of the artificial intelligence frameworks out there and a 1000-core graphics card (we have one already) to get some seriously fast performance out of loads of user agents, where each core is an agent... (something like this).
Is there anyone out there with experience in this sort of thing? Proofing problems with a fictitious model of the world?
You say "interact with an outside system" - what is the interface to this system, and how does a person use it? Is it over the web? If so, you're wasting your time thinking about GPU optimisations and the like since your performance bottleneck will be the network, even over a LAN. In such circumstances you may as well just run the agents sequentially. Even if you could effectively spawn 1000 agents simultaneously (perhaps across multiple machines), chances are high that you'll just cripple the target server in an accidental denial of service attack, so it's counterproductive. However if you have the ability to change that interface to allow direct interprocess communication, you could go back to considering the massive parallelism approach. But then 1000 is not a big number in computing terms. It's likely you'd spend more time making the algorithm run in parallel than you'd save by having it that way.
As for 'artificial intelligence frameworks', I don't think there is anything quite so vague that would help you. AI and intelligent agents is a massive field - the book Artificial Intelligence: A Modern Approach which is a standard introductory text on intelligent agents is over 1000 pages long and contains maybe 20 or 30 totally independent techniques, many of which could apply to your problem, many of which won't. If you can specify more clearly what tasks the agent has to perform, and which inputs it has on which to make those decisions, picking a decent technique becomes possible. In fact, it may turn out that your problem doesn't require AI at all, if you have a clear mapping between agent demographics and decision making - you just look up the answer to use from the table you made earlier. So it's important to work out what problem you're actually trying to solve first.
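To illustrate the table-lookup point above, here is a minimal sketch of demographically driven agents run sequentially against a profiler. The IProfilingSystem interface, the demographic keys and the answer table are hypothetical stand-ins for whatever interface the real profiling system exposes.

using System;
using System.Collections.Generic;

// Hypothetical interface to the profiling system under test.
public interface IProfilingSystem
{
    IEnumerable<string> GetQuestionsFor(Agent agent);
    void SubmitAnswer(Agent agent, string question, string answer);
}

// An agent is just a bag of demographic attributes (sex, age, income, ...).
public class Agent
{
    public Dictionary<string, string> Demographics { get; private set; }
    public Agent() { Demographics = new Dictionary<string, string>(); }
}

public static class AgentRunner
{
    // Table-driven decision making: (question, demographic value) -> answer.
    // In a real run this table would be built from the known survey data.
    private static readonly Dictionary<Tuple<string, string>, string> AnswerTable =
        new Dictionary<Tuple<string, string>, string>
        {
            { Tuple.Create("drinks", "male"),   "coke" },
            { Tuple.Create("drinks", "female"), "pepsi" },
        };

    public static void RunAll(IProfilingSystem profiler, IEnumerable<Agent> agents)
    {
        // 1000 agents talking to a web-based system are cheap to run sequentially;
        // the network, not the CPU, is the bottleneck.
        foreach (var agent in agents)
        {
            foreach (var question in profiler.GetQuestionsFor(agent))
            {
                string sex;
                agent.Demographics.TryGetValue("sex", out sex);

                string answer;
                if (!AnswerTable.TryGetValue(Tuple.Create(question, sex ?? ""), out answer))
                    answer = "unknown"; // fall back or randomise for unmapped cases

                profiler.SubmitAnswer(agent, question, answer);
            }
        }
    }
}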

C# MVC: What is a good way to prevent Denial Of Service (DOS) attacks on ASP.NET sites?

I'm looking for a good and inexpensive way to prevent denial of service attacks on my ASP.NET MVC site.
I've been thinking about a solution that intercepts the HttpHandler and then counts requests in the Cache object, with the key being something like "RequestCount_[IpAddressOfRequestClient]" but that seems like it would generate a crazy overhead.
Any ideas would be greatly appreciated. Thank you!
You might consider trying to throttle the requests. Identify users by IP and/or cookie and limit requests to (say) 1 every two seconds. A human wouldn't notice, but this would slow down a bot considerably.
This helps at the application level (protects your app/database) but it's not a complete solution, as the hits are still coming at the network level.
As a front line of defense I would probably depend on hardware. Many ISPs offer some protection, e.g. http://www.softlayer.com/facilities_network_n2.html
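A minimal sketch of the throttling suggestion above, written as an ASP.NET MVC action filter that counts requests per IP in MemoryCache. It assumes a reference to System.Runtime.Caching; the limits, cache key format and 429 response are illustrative, and as noted, this only protects the application layer, not the network.

using System;
using System.Runtime.Caching;
using System.Threading;
using System.Web.Mvc;

// Apply as [ThrottleByIp(MaxRequests = 1, WindowSeconds = 2)] on an action or controller
// to roughly match the "one request every two seconds" suggestion above.
public class ThrottleByIpAttribute : ActionFilterAttribute
{
    private class RequestCounter { public int Count; }

    public int MaxRequests { get; set; }
    public int WindowSeconds { get; set; }

    public ThrottleByIpAttribute()
    {
        MaxRequests = 5;
        WindowSeconds = 10;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string ip = filterContext.HttpContext.Request.UserHostAddress ?? "unknown";
        string key = "RequestCount_" + ip;

        // Reuse the cached counter for this IP, or add a fresh one that
        // expires when the time window ends.
        var fresh = new RequestCounter();
        var counter = (RequestCounter)MemoryCache.Default.AddOrGetExisting(
            key, fresh, DateTimeOffset.UtcNow.AddSeconds(WindowSeconds)) ?? fresh;

        if (Interlocked.Increment(ref counter.Count) > MaxRequests)
        {
            // 429 Too Many Requests; a polite client should back off and retry later.
            filterContext.Result = new HttpStatusCodeResult(429, "Too many requests");
            return;
        }

        base.OnActionExecuting(filterContext);
    }
}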
This is a very old question, but I hope this reference helps someone else.
Now we are using 'API Protector .NET' (https://apiprotector.net) to protect our APIs against DoS and DDoS attacks.
It's a library compatible with MVC, WebApi and .NET Core that has given us very good results, both in simplicity and, fundamentally, in maintainability. With this lib you can protect each function of your API literally with a single line, and in a very specific way.
As explained on the website of API Protector .NET:
If you limit your API, in a general way, to N requests per IP or per user, those N requests are still enough to constantly hit the same specific heavy function, which can severely slow down the entire service.
“Each function of your API must be restricted in a particular way depending on the normal frequency of use and the cost of processing which that function implies for the server, otherwise you are not protecting your API.”
API Protector .NET allows you to protect each function of your .NET API against DoS and DDoS attacks without effort, in a simple, declarative and maintainable way.
The only negative point is that it costs USD 5, but it gave us what we were looking for at a very low price, unlike the WebApiThrottle library, which, although it was the first option I tried (because it is free), ended up being impractical and unmaintainable when we wanted to protect different functions in specific ways (which is critical for effective protection, as explained).
API Protector .NET allows you to combine different protections (by IP, by user, by role, in general, etc.), decorating each function with a single line, which makes it easy to implement and maintain. For a detailed explanation read: https://apiprotector.net/how-it-works
An interesting anecdote: some time ago, when we were still protecting our APIs with WebApiThrottle, we did some tests simulating DDoS attacks, with many parallel requests from different hosts. For some reason (I think due to something related to thread synchronization), bursts of requests got through to the functions, and only later, with the server already overloaded, did the throttling kick in. This, added to the difficult maintainability, did not give us much confidence in the protection, and that's why we ended up trying this alternative, which works well.

Conventions to follow to make Commercial software harder to crack?

What are some good conventions to follow if I want to make my application harder to crack?
As long as your entire application is client side, it's completely impossible to protect it from being cracked. The only way to protect an application from being cracked is to make it have to connect to a server to function (like an online game, for example).
And even then, I have seen some cracks that simulate a server and send a dummy confirmation to the program so it thinks it's talking to a real, legit server (in this case I'm talking about a "call home" verification strategy, not a game).
Also, keep in mind that where there is a will, there's a way. If someone wants your product badly, they will get it. And in the end you will implement protection that can cause complications for your honest customers and is just seen as a challenge to crackers.
Also, see this thread for a very thorough discussion on this topic.
A lot of the answers seem to miss the point that the question was how to make it harder, not how to make it impossible.
Obfuscation is the first critical step in that process. Anything further will be too easy to work out if the code is not obfuscated.
After that, it does depend a bit on what you are trying to avoid. Installation without a license? The timed trial blowing up? Increased usage of the software (e.g. on more CPUs) without paying additional fees?
In today's world of virtual machines, a long-term anti-cracking strategy has to involve some form of calling home, because the environment is just too easy to restore to a pristine state. That said, some types of software are useless if you have to go back to a pristine state to use them. If that is your type of software, then there are rather obscure places to put things in the registry to track timed trials, and, in general, a license key scheme that is hard to forge.
One thing to be aware of, though: don't get too fancy. Quite often the licensing scheme gets the least amount of QA and hits serious problems in production where legitimate customers get locked out. Don't drive away real paying customers out of fear of copying by people who most likely wouldn't have paid you a dime anyway.
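As one concrete example of a "license key scheme that is hard to forge", here is a minimal sketch that verifies an RSA signature over the license text (assuming .NET Framework 4.6+ for RSACng; the key, license format and field names are placeholders). Only the vendor holds the private key, so nobody else can mint valid licenses, although a cracker can still patch the check out of the binary, which is where obfuscation and calling home come in.

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch of license verification with an RSA signature. The vendor
// signs the license text offline with the private key; the application ships
// only the public key and verifies the signature at startup.
public static class LicenseVerifier
{
    // Placeholder: public key exported with rsa.ToXmlString(false) and embedded in the app.
    private const string PublicKeyXml = "<RSAKeyValue>...public key here...</RSAKeyValue>";

    public static bool IsValid(string licenseText, string base64Signature)
    {
        byte[] data = Encoding.UTF8.GetBytes(licenseText);
        byte[] signature = Convert.FromBase64String(base64Signature);

        using (var rsa = new RSACng())
        {
            rsa.FromXmlString(PublicKeyXml);
            // True only if the signature was produced with the vendor's private key.
            return rsa.VerifyData(data, signature,
                                  HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }
}

// Illustrative license text the vendor would sign:
//   "licensee=Acme Corp;expires=2026-01-01;seats=10"
// The signature is shipped base64-encoded alongside it.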
Book: Writing Secure Code, Second Edition (Howard and LeBlanc)
There are 3rd party tools to obfuscate your code. Visual Studio comes with one.
BUT, first, you should seriously think about why you'd bother. If your app is good enough and popular enough to be worth cracking, it will be cracked, despite all of your efforts.
Here are some tips; not perfect, but maybe they could help:
Update your software frequently.
If your software connects to a server somewhere, change the protocol now and then. You can even have a number of protocols and alternate between them depending on some algorithm.
Store part of your software on a server and download it every time you run the software.
When you start your program, do a CRC (or hash) check of the DLLs that you load, i.e. keep a list of checksums for approved DLLs (see the sketch after this list).
Have a service that watches over your main application, doing CRC checks once in a while and monitoring your other dependent DLLs/assemblies.
Unfortunately, the more you spend on copy protecting your software, the less you have to spend on functionality; it's all about balance.
Another approach is to sell your software cheap but do frequent, cheap upgrades/updates; that way it will not be profitable to crack.
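A minimal sketch of the integrity-check tip in the list above, using SHA-256 hashes rather than a plain CRC; the file names and expected hash values are placeholders that would be generated at build time for your real assemblies. Note that this only detects tampering with the listed DLLs, and a determined cracker can of course patch the checker itself.

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Verify that the DLLs we load match a list of approved hashes.
public static class IntegrityChecker
{
    // Placeholder values: generate these at build time for the shipped assemblies.
    private static readonly Dictionary<string, string> ApprovedHashes =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "MyApp.Core.dll",    "3A7B...placeholder hash..." },
            { "MyApp.Billing.dll", "2C26...placeholder hash..." },
        };

    public static bool AllAssembliesApproved(string applicationDirectory)
    {
        foreach (var entry in ApprovedHashes)
        {
            string path = Path.Combine(applicationDirectory, entry.Key);
            if (!File.Exists(path) ||
                !string.Equals(ComputeSha256(path), entry.Value, StringComparison.OrdinalIgnoreCase))
            {
                return false;   // missing or tampered assembly
            }
        }
        return true;
    }

    private static string ComputeSha256(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        }
    }
}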
The thing with .NET code is that it is relatively easy to reverse engineer using tools like .NET Reflector. Obfuscation can help, but it's still possible to work out.
If you want a fast solution (but of course, there's no promise that you won't be cracked - it's just some "protection"), you can search for some tools like Themida or Star Force. These are both famous protection shells.
It's impossible, really. Just release patches often and change the salt in your encryption each time. However, if your software gets cracked, be proud: it must be really good :-)
This is almost mission impossible, unless you have very few customers.
Just consider: have you ever seen a version of Windows that has not been cracked?
If you invent a way to protect it, someone can invent a way to crack it. Spend enough effort so that when people use it in an "illegal" way, they are aware of it. Most things beyond that risk being a waste of time ;o)
