I'm trying to get my head around the thought process when designing a large scale application.
Let's say I have a client who needs a new customer website, and he is estimating 40,000 orders per day on top of an existing 25,000-user base. When designing the application, how do you go about determining whether a distributed architecture is needed? Should I use a web farm? Etc.
I've mostly built 2-tier (physical) applications in the past, and I really want to improve my understanding.
Any insight would be great!
Load Test your new application from the get-go.
Since doing a big design up-front will never give you the results you expect from it (15+ years of experience), the best thing to do is to design for change and let the right architecture emerge from your requirements.
Given your description, adopt an agile methodology for this project and use its practices to guide the project to success. One of those vital practices is to have a 'Definition of Done' for all the work that you do. Clearly your DoD will include the item:
Needs to pass the Load Test (40,000 orders; 25,000 users per day).
As you start development, one of the first things to do is of course to set up the environment needed to run such a load test. If that never happens, you know very early in the project that you will be in trouble.
By running the load test as many times as needed (at least once every sprint), you will know whether your architecture can handle the scale requirement.
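As a minimal sketch of what an automated smoke-level check of that DoD item could look like (hedged: the /orders URL, request count, and concurrency are placeholders, and a real load test would use a dedicated tool rather than hand-rolled code):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class LoadSmokeTest
{
    static async Task Main()
    {
        // Hypothetical endpoint; replace with the real order-submission URL.
        const string url = "https://staging.example.com/orders";
        const int totalRequests = 1000; // a step toward the 40,000/day target
        const int concurrency = 50;

        using var client = new HttpClient();
        var watch = Stopwatch.StartNew();
        int failures = 0;

        // Fire fixed-size batches of concurrent requests and count failures.
        for (int sent = 0; sent < totalRequests; sent += concurrency)
        {
            var batch = Enumerable.Range(0, concurrency)
                                  .Select(_ => client.GetAsync(url));
            var responses = await Task.WhenAll(batch);
            failures += responses.Count(r => !r.IsSuccessStatusCode);
        }

        watch.Stop();
        Console.WriteLine($"{totalRequests} requests, {failures} failures, " +
                          $"{watch.Elapsed.TotalSeconds:F1}s elapsed");
    }
}
```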
HtH
It's going to depend on a lot of other factors than just the number of orders per day. Where will it be hosted? What does that physical architecture look like? What else does the application do besides ecommerce? Does it need to integrate with other applications (besides the payment gateways of course)? Etc.
A simple two tier application in the right cloud hosting environment (say VMware for instance) that can scale dynamically would work just fine for an ecommerce website. A simple two tier application in the right on-premises hosting environment (load balanced web farm) should also work fine for an ecommerce website. It's the difference between scaling up (potentially hidden with virtualization, which ends up being a scale out of sorts) and scaling out (adding more servers).
A distributed architecture would allow you to distribute the system load (say order processing) to 1:M servers that sit (perhaps) behind a load balancer. This is a very common approach, and would also work very well for an ecommerce website.
In my opinion, there isn't one architecture or system design that fits every mold. The closest architecture to fit every mold (again, my opinion) would be a service oriented architecture. If all business processes and logic are services (and designed correctly), then no matter how your requirements change, no matter what your hosting environment looks like or changes to, and no matter what integration requirements you have, your system can handle it with little or no changes.
Since you are coding in .NET, put the application in Azure (da cLOUD man, DA CLOUD! ;).
It will help you get the processing power that you'll need. The application will work fine as long as you do not make any serious mistakes.
Scaling can be addressed, at least partially, with load balancing.
Concurrency might be your true problem. Distributed transactions are a heavy-handed tool, and they won't solve every use case without additional thought.
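To make the "heavy-handed" point concrete, here is a hedged sketch (connection strings and table names are hypothetical) of how easily a System.Transactions scope escalates to a distributed (MSDTC) transaction once two connections are involved:

```csharp
using System.Data.SqlClient;
using System.Transactions;

static class DistributedOrderSketch
{
    public static void PlaceOrderWithInvoice()
    {
        using (var scope = new TransactionScope())
        {
            // Hypothetical connection strings and tables, for illustration only.
            using (var ordersDb = new SqlConnection("Server=orders-db;Database=Orders;Integrated Security=true"))
            using (var billingDb = new SqlConnection("Server=billing-db;Database=Billing;Integrated Security=true"))
            {
                ordersDb.Open();
                billingDb.Open(); // second connection: typically escalates to MSDTC

                new SqlCommand("INSERT INTO Orders (CustomerId) VALUES (42)", ordersDb)
                    .ExecuteNonQuery();
                new SqlCommand("INSERT INTO Invoices (CustomerId) VALUES (42)", billingDb)
                    .ExecuteNonQuery();
            }
            scope.Complete(); // both writes commit, or neither does
        }
    }
}
```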
Some financial companies have additional security requirements regarding the web servers having direct access to the database.
We have a number of small ASP.NET MVC apps. All are basically a bunch of forms which capture data and store it in a SQL Server database, which is then usually loaded into our data warehouse and used for reporting.
We are looking to rewrite all the small applications and apply a level of consistency and good practice to each. All the applications are fairly similar and I think from a user perspective it would be better if they seemed to be part of the same large application so we were considering merging them together in some way as part of the re-write.
Our two currently preferred options seem to be:
Create a separate portal application which will be the users' point of entry to the apps. This could have 'tiles' on the homepage, one for each of the apps (which would be registered in this parent app), linking through to each of them. In this scenario all the apps would remain in different projects and be compiled/deployed independently. This seems to have the advantage of keeping them separate, so we can make changes to one app and deploy it without affecting the others. I could just pull common code out into a class library? One thing that annoys me about this is that the parent app must basically use hard-coded links to reach each app (though something like the sketch below might soften that).
I looked into using 'areas' in ASP.NET MVC, having all the small apps as different areas in one big project. This seems somewhat cleaner in my head, as they are all in one place; however, it has the disadvantage of requiring the whole app to be deployed whenever any individual one changes, and I have a feeling we will run into trouble after adding a number of apps into the mix.
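A possible way around the hard-coded links, sketched under the assumption that the tile list lives in the portal's web.config appSettings (the App.* keys and URLs are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;

// Sketch: the portal reads its tiles from appSettings instead of
// hard-coding them, e.g. in web.config:
//   <add key="App.Timesheets" value="https://apps.example.com/timesheets" />
//   <add key="App.Expenses"   value="https://apps.example.com/expenses" />
public static class PortalTiles
{
    public static IDictionary<string, string> Load()
    {
        var settings = ConfigurationManager.AppSettings;
        return settings.AllKeys
            .Where(k => k.StartsWith("App.", StringComparison.Ordinal))
            .ToDictionary(k => k.Substring("App.".Length), k => settings[k]);
    }
}
```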
We have a SharePoint installation, and someone suggested creating the portal-type app in SharePoint... This doesn't sound like the best idea to me, but I'm willing to consider it if anyone can point out the advantages of this method.
Are there any recommendations on the architecture of this? Has anyone completed similar projects in the past, and what worked well or not so well?
We have 4 developers and we do not expect the apps to change too much once developed (except to fix potential bugs etc.). We will however plan to add new apps to the solution as time goes on.
Thank you
The advantage of MVC Areas would be code sharing: the repeated, redundant parts of each app can be refactored to use the same infrastructure code (security, logging, data access, etc.).
But it will also mean more conflicts when merging the code initially.
Deployment concerns can be mitigated with a continuous deployment tool (there are many on the market), or, if you deploy to an Azure Web App, deployment slots can give you zero-downtime deployment.
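For reference, wiring one of the small apps in as an Area is mostly boilerplate. A minimal registration might look like this (the Timesheets app name is hypothetical):

```csharp
using System.Web.Mvc;

// Minimal ASP.NET MVC Area registration for one of the merged apps.
public class TimesheetsAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Timesheets"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "Timesheets_default",
            "Timesheets/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}
```

Global.asax then calls AreaRegistration.RegisterAllAreas() once at startup to pick all of these up.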
Is it wise to build a large application entirely based on SOA? Or just some portions? User account logins, accounting, GIS mapping, sales, etc.?
In other words, would it be wise to build the GUI of such an application in HTML & JavaScript, doing all of its exchanges via AJAX to .NET web services on the back-end?
I can't see it being worth losing all the .NET .aspx functionality such as forms authentication, view state, etc. But my co-worker says that if we are going to go SOA, there is no need for .NET on the front end. I think there should be some sort of balance. Where do you draw the line? Should all calls to the database go through the web services?
I just want to say that "with SOA we’re building for change, while with Traditional systems engineering, we’re building for stability."
The problem with stability, of course, is that it only takes the business so far; if the organization requires business agility, then it's much better off implementing SOA.
So it solely depends on what you want to achieve; you are the one who should draw the boundary.
I read this in an article on SOA a few days back, as I too am working on SOA.
EDIT:
Meanwhile I came across this article and thought of sharing it with you.
The video explains the current state of SOA, and how different people view it, quite well.
I'm getting the words of the song 'If I had a hammer' coming to mind. SOA is an architectural approach to developing software as a series of services. In my opinion it is best for systems with less-than-immediate latency, limited bandwidth, a high cost of access, and so on (all of which are obviously highly subjective). You don't need full SOA, just loose coupling between components, which I would argue is a good goal to achieve.
DB calls can go through a service (take ADO.NET Data Services, for example); however, you really have to weigh up what the service is to provide. Take caching. A decent approach to SOA will consider that data may need to be cached to reduce service load. So can your data be stale in the UI? Are you allowing that use case? Is it right for login info to be stale? (A rough example, I know, but possibly something that may need to be addressed.)
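A sketch of that trade-off at the service boundary, assuming a hypothetical CustomerService and a five-minute expiry (the staleness window is exactly the design decision in question):

```csharp
using System;
using System.Runtime.Caching;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerService
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Serve repeated reads from cache to reduce service load. The
    // five-minute window means callers may see data up to five minutes
    // stale -- exactly the trade-off discussed above.
    public Customer GetCustomer(int id)
    {
        string key = "customer:" + id;
        var cached = (Customer)Cache.Get(key);
        if (cached != null)
            return cached;

        Customer fresh = LoadCustomerFromDatabase(id);
        Cache.Set(key, fresh, DateTimeOffset.Now.AddMinutes(5));
        return fresh;
    }

    private Customer LoadCustomerFromDatabase(int id)
    {
        // Placeholder for the real data access layer.
        return new Customer { Id = id, Name = "(from database)" };
    }
}
```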
All in all, it depends. I think some things lend themselves to SOA very well. If you take a DDD approach, then the services that represent Domains would probably do so. In this way your UI talks to domain services, not to rows in a table, as the DB is abstracted behind domain services.
Don't use one methodology to solve all problems.
See this SO question too
It's a service oriented architecture, not a service exclusive architecture.
Presentation logic and plumbing have to live somewhere; it all depends on where it makes the most sense for it to live.
For example, let's say you have a UI component that relies on a highly chatty but efficient set of calls to a database to generate a complex analysis of something (take your pick). If your web browser is making all those calls, you introduce massive network latency and concurrency issues. If a web service makes all those calls, you are potentially putting presentation logic into it to format that result.
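Put differently, the service boundary is often where you coarsen the granularity. A hedged sketch (the analysis and type names are invented) of one call replacing the chatty sequence:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Order  { public decimal Amount { get; set; } }
public class Refund { public decimal Amount { get; set; } }

public class SalesSummary
{
    public decimal Total { get; set; }
    public decimal Refunded { get; set; }
}

// One coarse-grained operation replaces dozens of chatty round-trips
// from the browser: the server makes the many small database calls
// locally, and only the finished summary crosses the wire.
public class SalesAnalysisService
{
    public SalesSummary GetQuarterlySummary(int customerId, int year, int quarter)
    {
        IList<Order> orders = LoadOrders(customerId, year, quarter);
        IList<Refund> refunds = LoadRefunds(customerId, year, quarter);

        return new SalesSummary
        {
            Total = orders.Sum(o => o.Amount),
            Refunded = refunds.Sum(r => r.Amount)
        };
    }

    private IList<Order> LoadOrders(int customerId, int year, int quarter)
    {
        return new List<Order>(); // placeholder for the chatty DB calls
    }

    private IList<Refund> LoadRefunds(int customerId, int year, int quarter)
    {
        return new List<Refund>();
    }
}
```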
If you are using Session state (or web services period), you are essentially using ASP.Net anyway. Try uninstalling it and see if your web services still run.
If presentation logic needs to live on the server side, it is better for it to live within a framework intended for presentation rather than a web service, IMO. If you haven't looked at MVC 2, do so. It makes it incredibly easy to set up an application that melds browser and server UI support (for example, jQuery validator controls backed by server-side validation).
Conversely, the web browser provides an expressive platform. Assuming browser support and team knowledge, the AJAX/SOA architecture you describe is a good one. I'm using it more and more and trying to make my server pages cleaner and simpler but I have no plans to exclude ASP.Net from my toolkit any time soon.
Client implementation should be completely disconnected from the back-end web service in a SOA. The service should be able to be consumed by ANY client. If you are using .NET on the back end and front end just because they can be coded to communicate directly, then you are missing the point: now they are tightly coupled, and what you have is a stovepipe application. The client should have no idea how the server side is implemented; it shouldn't matter whether the back-end web service is built using .NET, Java, or whatever.
In a true SOA, you should be able to search for services in the services repository, perhaps tie the outputs in with other services or use XSLT to create alternative outputs that weren't necessarily considered when the original service was built, and consume it in a standard way in any client on the front end.
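That XSLT step, reshaping a service's XML output into a form the original authors never anticipated, is only a few lines in .NET (a sketch; the stylesheet and file names are hypothetical):

```csharp
using System.Xml.Xsl;

// Reshape a service's XML response into an alternative output that the
// original service authors never anticipated.
public static class ResponseReshaper
{
    public static void Reshape(string responseXmlPath, string outputPath)
    {
        var transform = new XslCompiledTransform();
        transform.Load("AlternativeOutput.xslt"); // hypothetical stylesheet
        transform.Transform(responseXmlPath, outputPath);
    }
}
```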
It sounds like what you're really asking is how to build a single application. The point of an SOA is to provide standard data sets through reusable interfaces that have no specific application or implementation in mind. Starting out by building a single application with the entire back-end comprised of SOA services would be a huge undertaking. In MY mind, each back-end service should be built for its intrinsic value all on its own and be provided to the entire SOA "domain". Then when you or I decide to make a client that does X, Y, and Z, we can just go find those capabilities in the SOA and ingest them.
I'm currently trying to convince management of the need to port one of our applications to .NET. The application has grown to be a bit of a monster in Access (backend in SQL), with 700 linked tables, 650 forms/subforms, 130 modules and 850 queries.
I pretty much know all the major benefits of doing this, but now need to look at how this can be achieved technically, so I can put a project plan together.
So, my plan was to convert the queries into stored procedures and/or views on the backend and re-write the forms in WPF or WinForms.
Now, the code is where I come unstuck. Is it possible to package up the code-behind and modules into DLLs and consume them while the application is slowly ported to VB/C#?
What we can't be left with is half an application in VB/C# and half in Access, it must 'appear' to all be one application, even half way through the migration.
Thanks in advance.
EDIT: Just some more info about what we do and why we're looking at moving away from Access.
We are essentially an ISV and the Access application is our main product. This application has been developed over a period of 15 years, by many, many developers on an ad hoc basis. There is no documentation for this application.
We also have problems with getting branching in SCC to work properly, so we've currently got 4 or 5 code bases for the half a dozen clients we have. On top of that, all the testing we do is completely manual, which you can imagine is very labour intensive, and only scratches the surface of what really needs to be tested.
We're currently looking to expand, and have a number of sales leads in the final stages. I'm worried that with these new sales we're going to be swamped with support and testing, and that this application is going to become even more entangled and buggy.
I'll also add that we're just about to enter the spec phase of a brand new product, which is almost certainly going to be built in .NET. If we were to rewrite the Access application in .NET, then the people we use for that could go straight on to this new development. If we were to stay in Access, then we'd have to bring in some new Access people, who would have to be retrained once we start the new development.
So essentially it has come down to two choices: major refactoring work in Access to try and 'organise' the code a bit better (and those of you who have suggested culling parts are most probably right; I'm sure there are parts that are no longer used), or a rewrite in .NET. However, I fear that if we stay in Access we still won't be able to build in effective testing and we still won't have proper SCC branching, which will lead to support continuing to be a nightmare, and any future development on this product making things worse. Either way there is a lot of work we're about to embark on, whether it's done in Access or .NET.
I've been involved in a lot of migration projects converting from one platform to another. I've also seen spectacular cost overruns, and spectacular underestimations of how difficult these types of projects can be.
In one of those projects, a system I had created and built for about $25,000 was handed to another team to replace and rewrite, and the resulting cost was in excess of $750,000.
You're also making an assumption that the current system needs to be replaced. You MUST be clear in your mind about the actual goals of moving off and replacing the current platform and software. Simply rewriting something and moving the functionality over to another platform yields you very little, except spending a lot of money that really doesn't benefit your business at all (but hey, those developers will take the money if they convince you of the need to do this).
You might want to read this wonderful article by Joel on Software (Joel, by the way, created this forum, Stack Overflow, and I was a moderator of some of his discussion forums for almost 10 years). In this article, Joel cautions against simply rewriting, out of the blue, perfectly good software that does not rust or wear out.
Things You Should Never Do, Part I
by Joel Spolsky
They did. They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch.
Article here: http://www.joelonsoftware.com/articles/fog0000000069.html
Joel notes that in the past 10 years, that article has remained one of his more popular (and somewhat controversial) ones.
It makes no sense to take a perfectly good application that's been running great for 10 years and doing its job, and simply rewrite it on another platform, especially if you don't have the manpower, expertise, and personnel available to maintain the new system. This is especially so if the new system is not going to accomplish anything more than what the previous system was doing. In fact, if you did have that manpower, they would likely have started converting this system already, over time. I mean, why all of a sudden, out of the blue, did someone throw a light switch and realize that new developers need to be brought in to rewrite a system that's already been running fine?
I'll also point out, having been in this business for a long time (as both a published author and a technical editor of Access books), that I have done migration projects from mainframe systems to desktops, and migrations of desktop database systems to large mainframe systems.
I can only say that it is rare to see an application with that many tables. In fact, this raises alarm bells right off the bat.
Because of such a large number of tables, I have to think there are likely either very many processes, or multiple applications cobbled together, representing this whole system. If that's not the case, then of course rewriting in .NET does not make sense unless you address the un-normalized nature of the system. The fact that the data is already in SQL Server helps, but that might just mean you had the horsepower, capacity, and infrastructure to scale something that was poorly designed in the first place.
A very big portion of software flexibility comes from having properly normalized data models. The problem is that you have the data in SQL Server, and it's very tempting to rewrite parts of the forms and functionality as .NET forms while continuing to use the existing data models. Unfortunately this puts you between a rock and a hard place, because you want to continue to use the existing data and to start rewriting functionality in .NET. However, rewriting functionality in .NET without addressing the data models is a very bad idea.
In an ironic twist of fate, this is a catch-22: if that system had really fantastically designed data models, you might not even need to redesign and move it into .NET. Access and SQL Server can scale to hundreds of users with ease anyway. And Access supports the use of class objects, and even source code control.
In other words, keep in mind that people might be asking to rewrite this in .NET because they believe the application will then magically have increased flexibility and be able to change faster than their changing business rules. In fact, often the opposite occurs, because Access is a very RAD tool. This means the frontline people can often make modifications, as business rules change, faster than the IT department and their team of developers working away on the next great version of the application. And worse, you don't want to saddle that IT department and those developers with a poor data model.
I mean, are you now going to have the IT department build every single spreadsheet in Excel for these people because the current business processes are not flexible enough? It would be wonderful if the IT department could go around to everybody's desk, hold their hand, and build the Excel sheets CORRECTLY for everybody, but it's not practical in the real world. So in addition to taking Access away from these people, you might as well take Excel away from them too.
I am just pointing out that my spider sense suggests the data models here are going to be a real challenge. Remember, I would always take poor code and great data models over the reverse (great code, but terrible data models). The reason is that with great data designs, the code and applications practically write themselves. And with great data models, the ease with which you can adapt to ever-changing business rules again favors great data designs over great code. You can also refactor the code over time WHEN you have good data models. So, with good data models you can move forms, functionality, and the UI over into .NET, and you can do this seamlessly and easily WHEN the existing data models make sense to keep.
Also, it makes no sense to move to these new technologies unless you're going to introduce possibilities like self-serve web portals for the existing business processes. Then customers can manipulate and use some of the information that is currently locked up in the system. This might be as simple as letting them check the status of their orders instead of wasting valuable customer phone time. Or it might be something like how a major package company in Canada saved an estimated $10,000,000 in the first year of implementing their package tracking system. Or it might be as simple as allowing the customer to look up their account balance.
Right now in the marketplace, these self-serve customer web portals allow customers to enter, use, and get at their information, instead of calling up someone within the organization who then launches the application and manipulates the information for the customer while on the phone. Might as well let the customer do this work! So from order status, to balances owing, to banking, or whatever it is, the real ticket today is to let the customer use a self-serve web portal that exposes all the valuable information the internal application is creating.
As mentioned, you have to ask where the manpower and personnel are going to come from to build and maintain this application. Obviously the existing system, with its enormous numbers of forms and tables, must somehow have been created, and represents a significant investment of time and effort. The key question is: where did the significant investment and resources come from to build that existing application, and who is going to maintain the new system? In other words, you need to design the new system to reduce maintenance costs. (New versions of my software can reduce maintenance by as much as 10 or 15 hours per year per customer.)
At the end of the day, good software development and good designs are good designs. Using Access, VB6, or VB.NET doesn't matter if the system is meeting your business needs now.
I should also point out that the new version, Access 2010, can create web forms. They are XAML (pronounced "zammel") .NET forms. I point this out because changing the front-end skin from Access to .NET yields you VERY little unless the underlying data structures and designs are also modified to take advantage of new business processes that can be accomplished with new technologies (such as those self-serve web portals). Simply repainting the front ends with .NET forms amounts to a waste of money, in my humble opinion, UNLESS other issues are being addressed, such as the data model or some type of web portal that will improve flexibility here.
You have some great advice here already. Keep in mind this really comes down to the goals and reasons these people have for wanting the software rewritten in .NET. Those new goals and desires had better not be based on the pretext of simply remaking the forms you have now in .NET, as that will accomplish nothing at all, and will not improve their ability to address the changing business needs that the current system has obviously been handling in the past.
Good luck with this. I don't think this is the kind of question that can be answered in a simple forum post, but at least you have lots to chew on here so you can get the ball rolling.
Instead of telling you how hard it is -- which I'm sure you already realize -- I'll try to toss out some hints:
1) Start by moving everything you can out of MS Access and into MS SQL. This means tables, stored procedures, views, etc. If you get this step right, your MS Access app will be a front-end for a real database, which is already a win (see the sketch after this list).
2) Consider giving up before you start. Instead of porting everything, it might make more sense to recognize which features can just be left alone, while new ones get WCF or MVC front-ends.
3) It's tempting to port from VBA to VB.NET, on the theory that it's more similar, but I don't recommend this.
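As a sketch of step 1's payoff: once the logic lives in SQL Server, any front-end, Access today or .NET tomorrow, calls the same stored procedure (dbo.usp_GetOrders here is hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class OrderQueries
{
    // A rewritten .NET form and the existing Access front-end can call
    // the same stored procedure, so both can coexist mid-migration.
    public static DataTable GetOrders(string connectionString, int customerId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.usp_GetOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);

            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table); // opens/closes the connection for us
            return table;
        }
    }
}
```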
I'm working in a department which is mainly responsible for replacing old Access applications with .NET solutions. In my company, Access applications are used in simple scenarios to fulfill the business needs of a single employee or a small group of employees.
Sometimes an Access application grows, the group of users grows, or too many changes are needed. In that case the department using that Access application can start a project to recreate it. When this begins we can be sure that the new application will be far away from the current Access application.
First, a business analyst is assigned to the project. His responsibility is to map the current solution and to discuss the problems with it and the expectations for the new solution. I haven't seen a project where the "customer" wanted only a replacement of the current solution. Every time, the customer also wants new features and extensions which were not possible in Access.
The business analyst creates an initial description of the expected solution, which is passed to architects. The architects decide what type of application will be built, what type of HW infrastructure will be needed, and how the application will be connected to other systems (if needed). After this initial phase, IT has the big picture of the application and the needed changes. An initial estimate is done so the project can be planned and resources can be allocated. This estimate is a boundary for the project. Then my team starts to do the job.
We use an agile approach, so our customer (an internal team) incrementally sees new features in the application. First we gather an initial set (backlog) of user stories (a special form of requirements), we estimate these user stories, and we let the customer prioritize them. We choose a subset of user stories for an iteration (usually 2-4 weeks). New user stories can be added to the backlog at any time, but the selected user stories can't change during an iteration. After the iteration we present the customer with a working part of the software. Based on that, the customer can decide to change priorities in the backlog or create new user stories. We repeat this approach until the customer says stop or the budget is consumed. The important point is that not all user stories have to be done. The project was planned with a budget, and some low-priority user stories may never be undertaken.
From the technical point of view it is a project like any other, except for a few differences:
You have an initial database, and you always have to be sure that the parts already implemented in your new solution also have a working migration of existing data.
You have an existing UI. Users may like or hate it. Make sure you understand it, so that you create a UI which is no worse than the existing one. I have created applications where the UI had to be completely different, and applications where the UI had to be exactly the same so that users didn't need additional training.
Try to add some new features so that the new application is reasonable. It is always easier to justify the need for the new application if you can describe new, needed features.
650 forms/subforms is large by any standards. That represents a major conversion project, and a 'slow' port will be a nightmare.
I would suggest developing a new .NET application 'spike' that contains the basic functionality that is absolutely required and then build upon it. At the same time, freeze the Access application from all but essential fixes.
There are a few tools that will convert MS Access forms to .NET, but they will likely fail on complex forms with sub-forms.
'Effortlessly' Convert Access Forms to VB Objects
So if a user is in Access and they take an action to open a form that happens to be a different executable written in C#, won't it 'appear' to be the same application?
There has to be a user group that would love a separate application that only has the 5-10 forms they use.
Getting rid of tables/forms/features that are not used is an increase in functionality. I don't know the level of documentation for this app, but start there. When users find out they have to document areas of an application and justify the need for parts they don't use, they'll volunteer to have it removed.
Currently I'm working with a big, old and extremely poorly written ASP.NET 1.1 application and the continuous maintenance is becoming quite a problem. Basically it's reaching breaking point and I'm reluctant to expand it any more than I have to as demanded by the business. Based on my experience creating other projects from scratch it would really suit an ASP.NET MVC based solution. Oh how I wish the world were that simple...
The fact is that I just cannot justify re-writing it from scratch and the business cannot afford it. The ideal solution would be to start writing an MVC-based application alongside it and begin a slow migration as new requirements arise.
I've read posts which state that this is entirely possible, but in my experiments I've not found it so easy. The current application contains several large data access and business logic layers shared by other applications that the company produces. These are also written in 1.1 and will not compile in 2.0 (and upgrading them would break the other projects if I tried!), so I cannot upgrade them. Since I can't do that, I'm stuck with an application that cannot even be opened in a .NET 3.5-capable Visual Studio. The new MVC app would also have to make use of these layers.
I am entirely open to suggestions. I'm desperate to find a solution that I can quickly demonstrate would allow me to improve the product immensely without taking too much time or affecting the rest of the business.
You could write a WCF service on top of the existing business layer and have your new app talk to that service instead of referencing the business layer directly.
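A hedged sketch of that wrapper (the contract and DTO names are hypothetical; the real implementation would delegate to the existing 1.1 business layer hosted in its own process):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

// A thin WCF facade over the legacy business layer. The new MVC app
// consumes this contract instead of referencing the 1.1 DLLs directly.
public class CustomerService : ICustomerService
{
    public CustomerDto GetCustomer(int id)
    {
        // In the real service this would delegate to the existing 1.1
        // business layer, hosted in its own process/app pool.
        return new CustomerDto { Id = id, Name = "(from legacy BLL)" };
    }
}
```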
You need to divide and conquer. Analyse the current app and its layers, and see if you can find a way to divide each significant piece of functionality into a discrete area with as few changes as possible.
Then make each area a unique service using the old technology.
Then you can rewrite each service slowly as you can fit it in and not affect the whole.
Otherwise you are going to have to come up with a convincing business case for your managers so that they allocate you the time to do the job properly. Sometimes our job is political as well as technical.
Note: When I refer to tier, I mean a physical tier. Many of the questions on this site relating to "tiers" are referring to logical layers, which is not what I'm asking about.
I am designing an app using a standard "3 layer" architecture, with presentation, business logic (BLL) and data access (DAL) layers. The technology is WPF, C#, LINQ and SQL Server 2008. My question relates to the physical architecture of this app.
I can place the BLL/DAL in a standard DLL which is loaded and run on the user machine, making a 2 tier architecture - client machine and database server. But it is not too difficult to turn the BLL/DAL into a WCF service which sits on an app server and is called from the user machine. This would give me a 3 tier architecture - client machine, app server and database server.
My question is this - what is the advantage of using a 3 tier architecture? I've often been told that 3 tiers add scalability, but it's not immediately apparent to me why this would be. And surely you are going to take a performance hit with the same data having to make two hops over the wire - from database server to app server, then from app server to client machine.
I would appreciate the advice of experienced architects and developers out there.
It depends on the use of your application and your requirement for security. If your application is being used over the Internet, and you're storing anything that is potentially sensitive in any way, adding the physical remove for the database is strongly recommended. Never, ever let anyone from the outside onto any machine with direct access to your database. People can and will attempt to break your security for no better reason than they have nothing better to do.
Scalability can be a factor as well, both in front of the presentation layer (in front of the web servers) and in the database. Placing a load balancer in front of the presentation layer allows incoming requests to be routed to an array of machines that can be managed independently. Machines can be added to the pool in times of need and removed for maintenance. Placing load balancers between the other layers can have the same impact. The idea is to provide a flexible, dynamic back-end environment that can be adjusted as demand requires.
Whenever you find yourself asking the question "Is X inefficient?" you should, immediately, ask yourself three precursor questions:
By "inefficient," what resource do you think it should be using efficiently and may not be? Time? Space? Bandwidth? Development hours?
Why do you care? No, seriously: If you're going to spend even one minute answering this question, there has to be a reason. What is that reason?
Compared to what?
As far as your comment about scalability is concerned: for a time, I had the unfortunate responsibility of maintaining a system whose architect had been told that minimizing round-trips to the database would make an application scalable. He took that insight and ran with it. You can read about this project here. It occurs to me that I ought to mention that at no point during the entire decade-plus lifetime of that application were there more than four users logged in simultaneously.
Inefficiency is in the eye of the beholder.
For example, having everything happening on the client may increase the memory footprint or CPU/network requirements of the client computers. If this work can be off-loaded to a server/server farm you may save having to do hardware upgrades of client PCs just to run your software. If more resources or upgrades are needed, they can be added/done in the business logic tier without impacting the clients.
Also, having the business logic on its own tier may be more efficient later (from a software development perspective) when you need to expose some of your application's functionality in a web-based system, or an Outlook add-in, or an iPhone app. You don't want to have to update all of these systems whenever the business logic changes slightly.
Security may be better as your users don't need direct access to the database server, they are isolated by the application server.
It also forces you to think about your application in a modular way with well defined interfaces which may have architectural benefits to the design of your application.
It can be. It depends on what has been implemented and how.
The driving force for creating a 3 tier physical architecture is not necessarily performance related.
The reason scalability is quoted is that a service might run on a server farm, but the clients would be unaware of this. The size of the server farm can be increased to meet demand if the architecture has been designed to support it.
The main advantage of 3-tier applications as you describe them is not scalability. Maintainability, maybe.
In order to make your architecture scalable you need one more technology you didn't mention:
- you need services. I would suggest WCF.
Making your BLL a WCF service (or multiple services) would make your application much more efficient and scalable, allowing your BLL to run on different/multiple machines.
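As a sketch of why that helps (the IOrderLogic contract and the address are hypothetical): the client reaches the BLL through a channel whose address is pure configuration, so moving the BLL onto another machine, or onto a load-balanced farm, changes no client code.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderLogic
{
    [OperationContract]
    decimal GetOrderTotal(int orderId);
}

public static class OrderLogicClient
{
    public static decimal GetOrderTotal(int orderId)
    {
        // The address is configuration, not code: point it at one app
        // server today, a load-balanced farm tomorrow.
        var factory = new ChannelFactory<IOrderLogic>(
            new BasicHttpBinding(),
            new EndpointAddress("http://appserver.example.com/OrderLogic.svc"));

        IOrderLogic proxy = factory.CreateChannel();
        try
        {
            return proxy.GetOrderTotal(orderId);
        }
        finally
        {
            ((ICommunicationObject)proxy).Close();
            factory.Close();
        }
    }
}
```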