Is a 3 (physical) tier architecture inefficient? - c#

Note: When I refer to tier, I mean a physical tier. Many of the questions on this site relating to "tiers" are referring to logical layers, which is not what I'm asking about.
I am designing an app using a standard "3 layer" architecture, with presentation, business logic (BLL) and data access (DAL) layers. The technology is WPF, C#, LINQ and SQL Server 2008. My question relates to the physical architecture of this app.
I can place the BLL/DAL in a standard DLL which is loaded and run on the user machine, making a 2 tier architecture - client machine and database server. But it is not too difficult to turn the BLL/DAL into a WCF service which sits on an app server and is called from the user machine. This would give me a 3 tier architecture - client machine, app server and database server.
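For concreteness, here is a minimal sketch of what that WCF contract might look like (the service and Customer type names are purely illustrative):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        Customer GetCustomer(int customerId);
    }

    // The same class that lived in the BLL/DAL DLL, now hosted on the app server.
    public class CustomerService : ICustomerService
    {
        public Customer GetCustomer(int customerId)
        {
            // The DAL call happens here, one hop from the database.
            return new Customer { Id = customerId, Name = "example" };
        }
    }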
My question is this - what is the advantage of using a 3 tier architecture? I've often been told that 3 tiers add scalability, but it's not immediately apparent to me why this would be. And surely you are going to take a performance hit with the same data having to make two hops over the wire - from database server to app server, then from app server to client machine.
I would appreciate the advice of experienced architects and developers out there.

It depends on the use of your application and your requirement for security. If your application is being used over the Internet, and you're storing anything that is potentially sensitive in any way, adding physical separation for the database is strongly recommended. Never, ever let anyone from the outside onto any machine with direct access to your database. People can and will attempt to break your security for no better reason than that they have nothing better to do.
Scalability can be a factor as well, both in front of the presentation layer (in front of the web servers) and in the database. Placing a load balancer in front of the presentation layer allows incoming requests to be routed to an array of machines that can be managed independently. Machines can be added to the pool in times of need and removed for maintenance. Placing load balancers between the other layers can have the same impact. The idea is to provide a flexible, dynamic back-end environment that can be adjusted as demand requires.

Whenever you find yourself asking the question "Is X inefficient?" you should, immediately, ask yourself three precursor questions:
By "inefficient," what resource do you think it should be using efficiently and may not be? Time? Space? Bandwidth? Development hours?
Why do you care? No, seriously: If you're going to spend even one minute answering this question, there has to be a reason. What is that reason?
Compared to what?
As far as your comment about scalability is concerned: for a time, I had the unfortunate responsibility of maintaining a system whose architect had been told that minimizing round-trips to the database would make an application scalable. He took that insight and ran with it. You can read about this project here. It occurs to me that I ought to have mentioned that at no point during the entire decade-plus-long lifetime of that application were there more than four users logged in simultaneously.

Inefficiency is in the eye of the beholder.
For example, having everything happen on the client may increase the memory footprint or CPU/network requirements of the client computers. If this work can be off-loaded to a server/server farm, you may avoid having to upgrade client PC hardware just to run your software. If more resources or upgrades are needed, they can be added in the business logic tier without impacting the clients.
Also, having the business logic on its own tier may be more efficient later (from a software development perspective) when you need to expose some of your application's functionality in a web-based system, or an Outlook add-in, or an iPhone app. You don't want to have to update all of these systems whenever the business logic changes slightly.
Security may be better, as your users don't need direct access to the database server; they are isolated from it by the application server.
It also forces you to think about your application in a modular way, with well-defined interfaces, which may have architectural benefits for the design of your application.

It can be. It depends on what has been implemented and how.
The driving force for creating a 3 tier physical architecture is not necessarily performance related.
The reason scalability is quoted is that a service might run on a server farm, but the clients would be unaware of this. The size of the server farm can be increased to meet demand if the architecture has been designed to support it.

The main advantage of a 3-tier application as you describe it is not scalability. Maintainability, maybe.
In order to make your architecture scalable, you need one more technology you didn't mention:
- you need services. I would suggest WCF.
Making your BLL a WCF service (or multiple services) would make your application much more efficient and scalable, allowing your BLL to run on different/multiple machines.
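As a minimal self-hosting sketch (reusing the hypothetical ICustomerService contract from the question above; the machine name and port are made up):

    using System;
    using System.ServiceModel;

    class Program
    {
        static void Main()
        {
            // Self-host the BLL service on the app server; clients call it
            // over net.tcp instead of loading the BLL DLL locally.
            using (var host = new ServiceHost(typeof(CustomerService),
                new Uri("net.tcp://appserver:8000/CustomerService")))
            {
                host.AddServiceEndpoint(typeof(ICustomerService),
                    new NetTcpBinding(), string.Empty);
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }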

Related

C# secure connection info for MySQL

I'm about to release a small tool which uses a database connection for storing data. The question is: how can I prevent people from reverse engineering my code and getting the username and password used to access the database?
For earlier projects (which were used only by myself), I defined the connection string just as a global variable inside my app. But that's highly unsafe, as it only takes minutes to get this string out of the exe.
Also, a lot of methods to obfuscate code can be reversed.
I am really a big fan of providing code, but I don't know what to post. This is more a question about the theory; coding is the part I'll take care of myself.
Here is a small idea from me which I don't really like that much:
I could place a second tool on the server. The real app would connect to this second tool and hand over the data, and the second tool would then connect to my database itself. This way the connection string would be stored inside the second app, where nobody can grab it.
The fact of the matter is that storing sensitive information on the client machine leaves your database highly vulnerable to attack. A suggestion you can look into is a three-tier architecture model for your application (http://en.wikipedia.org/wiki/Multitier_architecture#Three-tier_architecture). In a three-tier architecture, you have your presentation layer (your application), your logic tier (the central point through which all your clients access your database), and your database layer (the server where your database lives). With this architecture, you can ensure all data is stored and retrieved through a single source, with a high level of security.
In the past (and still in the present), programmers would have to create their own socket servers or do advanced network programming to develop a solution like this; however, Microsoft has developed a tool called Windows Communication Foundation (WCF) which takes away the pain of coding your own socket server and lets you focus on developing your own implementation. Be warned though: WCF is secure by default, but that is no excuse not to research ways of making your product robust against hackers (like knowing what protocol you are going to use, what security measures you are going to use (Transport vs Message, etc.), encrypting data on the client side so potential viruses don't uncover sensitive information, etc.). That said, WCF is a highly polished service and it is really easy to get something up and running.
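As a minimal sketch of the idea from the question (assuming WCF and the MySQL Connector/NET package; the service name, connection string name, and table are invented), the key point is that the credentials live only on the server:

    using System.Configuration;
    using System.ServiceModel;
    using MySql.Data.MySqlClient; // assumes the MySQL Connector/NET package

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        void SaveRecord(string payload);
    }

    public class DataService : IDataService
    {
        public void SaveRecord(string payload)
        {
            // The connection string lives only in the server's own config;
            // clients see the service endpoint, never the credentials.
            var cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
            using (var conn = new MySqlConnection(cs))
            using (var cmd = new MySqlCommand(
                "INSERT INTO records (payload) VALUES (@p)", conn))
            {
                cmd.Parameters.AddWithValue("@p", payload);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }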
A good beginner video tutorial on WCF can be found here: https://www.youtube.com/playlist?list=PLhq7kqloVlM-bI9W_7iDZhObAeyrFt1y_
EDIT: The playlist for the videos is gone, but the videos themselves are still there. Just search through all his videos looking for the keyword 'WCF'.
Here's the link: https://www.youtube.com/user/JesseDietrichson/featured

.NET distributed layered application

I have been developing n-tier applications using .NET for many years, but I still have no idea how to distribute the tiers/layers (DLLs) to other servers.
Let's say I have an MVC web application with 4 projects: MVC (UI), Business, Service, and Data. Everything works fine if all the class library DLLs are on one server.
If I want to scale out the application by distributing the Service layer (DLL) and Data layer (DLL) to 2 other servers, should I convert the class libraries to WCF Service Library projects (with TCP or pipes as the communication protocol, for better performance)? Or should I use another technology like .NET Remoting or Web API?
Will that be a lot of work?
Is that one of the purposes of creating a multi-tier application?
Thanks.
Update:
Do you have any links (from Microsoft) that explain in detail how to scale out an n-tier application to multiple servers by distributing the DLLs?
If I want to scale out the application by distributing the Service layer (DLL) and Data layer (DLL) to 2 other servers, should I convert the class libraries to WCF Service Library projects (with TCP or pipes as the communication protocol, for better performance)?
Yep: since they are on different machines, you need some kind of communication mechanism that goes beyond simple DLL invocation.
Or should I use another technology like .NET Remoting or Web API?
Which approach you choose depends on many factors like complexity, performance... There are many options, such as:
WCF web services
simple REST calls with Web API
a message bus, e.g. NServiceBus
...
Obviously remote calls will also be slower, with a potential impact on performance.
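For instance, a minimal Web API sketch of the REST option (the controller and Product type are hypothetical): the service/data tier exposes plain HTTP endpoints that the web tier, now on a different server, can call.

    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Hypothetical controller: the service/data layer sits behind plain
    // HTTP endpoints, so it can live on a different server from the UI.
    public class ProductsController : ApiController
    {
        // GET api/products/5 (default Web API routing)
        public Product Get(int id)
        {
            // Delegate to the service/data layer here.
            return new Product { Id = id, Name = "sample" };
        }
    }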
Will that be a lot of work?
It will be more work, and in my opinion that "more work" should really be justified. Keep your architecture as simple as possible, or better, only as complex as really needed.
An alternative approach could be to have a deployment pipeline that deploys your entire application to different server instances, with an intelligent load-balancing strategy in front. The only thing you need to pay attention to in that case is properly sharing sessions between your instances (stateless would be better ;) ).
My 50 cents...
As far as I know WCF replaced .NET Remoting (MSDN).
Anyway... someone before me said it: if you don't have to scale the application, don't do it. The communication cost alone between services of any kind will slow things down considerably, probably to the extent that it would be slower than it is now (which I am assuming is the reason for scaling).
Prior to scaling, I would first see where the bottleneck really is. For instance, if the problem is your DB server, then moving the service and data layers to another server is useless, as you will still be using the same database. So you need to first find out what your bottleneck is.
The easiest and least painful way to scale (in my opinion) would be to just add another IIS server and a load balancer that directs traffic to one of them. You would need to store sessions in a database or use a dedicated server, but that is about all the change you will need. Plus, if one of your servers fails, the other will still operate.
By default, avoid premature optimization.
If you have only a web site, I would keep it as simple as possible and only create logical layering. There are a number of options: typical 3-tier, onion architecture, etc. The key is that later, if really, really needed, you could still refactor your code and make your data layer a separate physical tier. But unless you are creating a new Amazon or something, this will probably not be the case.
If you are in the situation, for example, where you have a web site but also have to expose a web API, you could choose to have the web site consume the web API. In fact, your web site would then become a very thin layer (maybe not even using ASP.NET MVC), because most of the logic would be in the web API.
PS: .NET Remoting is old technology; consider WCF or Web API instead.

How to determine distributed architecture?

I'm trying to get my head around the thought process when designing a large scale application.
Let's say I have a client who needs a new customer website, and he is estimating 40,000 orders per day on top of an existing 25,000-user base. When designing the application, how do you go about determining if a distributed architecture is needed? Should I use a web farm? etc.
I've mostly built 2-tier (physical) applications in the past, and I really want to improve my understanding.
Any insight would be great!
Load Test your new application from the get-go.
Since doing a big design up-front will never give you the results that you expect from it (15+ years of experience), the best thing to do is to design for change and let the right architecture emerge from your requirements.
Given your description, adopt an agile methodology for this project, and use its practices to guide your project into a success. One of those vital practices is to have a 'Definition of Done' for all the work that you do. Clearly in your DoD you will have the item:
Needs to pass the Load Test (40,000 orders; 25,000 users per day).
As you start development, one of the first things to do is of course to set up the environment to be able to run such a load test. If that never happens, you already know very early in the project that you will be in trouble.
By running the load test as many times as needed (at least once every sprint), you will know whether your architecture handles the scale requirement.
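A crude, smoke-level sketch of such a load test (the endpoint URL is made up; a real load test would add ramp-up, think time, and realistic order scenarios):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LoadTestSketch
    {
        static async Task Main()
        {
            // Fire N concurrent requests at a test endpoint and time them.
            const int concurrentUsers = 100;
            using (var client = new HttpClient())
            {
                var sw = Stopwatch.StartNew();
                var tasks = Enumerable.Range(0, concurrentUsers)
                    .Select(_ => client.GetAsync("http://test-env/orders"));
                var responses = await Task.WhenAll(tasks);
                sw.Stop();
                Console.WriteLine(
                    $"{concurrentUsers} requests in {sw.ElapsedMilliseconds} ms; " +
                    $"{responses.Count(r => r.IsSuccessStatusCode)} succeeded");
            }
        }
    }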
HtH
It's going to depend on a lot of other factors than just the number of orders per day. Where will it be hosted? What does that physical architecture look like? What else does the application do besides ecommerce? Does it need to integrate with other applications (besides the payment gateways of course)? Etc.
A simple two tier application in the right cloud hosting environment (say VMware for instance) that can scale dynamically would work just fine for an ecommerce website. A simple two tier application in the right on-premises hosting environment (load balanced web farm) should also work fine for an ecommerce website. It's the difference between scaling up (potentially hidden with virtualization, which ends up being a scale out of sorts) and scaling out (adding more servers).
A distributed architecture would allow you to distribute the system load (say order processing) to 1:M servers that sit (perhaps) behind a load balancer. This is a very common approach, and would also work very well for an ecommerce website.
In my opinion, there isn't one architecture or system design that fits every mold. The closest architecture to fit every mold (again, my opinion) would be a service oriented architecture. If all business processes and logic are services (and designed correctly), then no matter how your requirements change, no matter what your hosting environment looks like or changes to, and no matter what integration requirements you have, your system can handle it with little or no changes.
Since you are coding .Net, put the application in Azure (da cLOUD man, DA CLOUD! ;).
It will help you get the processing power that you'll need. The application will work fine as long as you do not make any stupid mistakes.
Scaling can be addressed, at least partially, with load balancing.
Concurrency might be your true problem. Distributed transactions are a heavy-handed tool, and they won't solve every use case without additional thought.
Some financial companies have additional security requirements regarding the web servers having direct access to the database.

application completely SOA?

Is it wise to build a large application entirely based on SOA? Or just some portions? User account logins, accounting, GIS mapping, sales, etc.?
In other words, would it be wise to build the GUI for such an application in HTML & JavaScript, doing all its exchanges via AJAX to .NET web services on the back-end?
I can't see it being worth losing all the .NET .aspx functionality such as forms authentication, view state, etc. But my co-worker is saying that if we are going to go SOA, there is no need for .NET on the front end. I think there should be some sort of balance. Where do you draw the line? Should all calls to the database go through the web services?
I just want to say that "with SOA we’re building for change, while with Traditional systems engineering, we’re building for stability."
The problem with stability, of course, is that it only takes the business so far; if the organization requires business agility, then they're much better off implementing SOA.
So, it solely depends on what you want to achieve; you are the one who should draw the boundary.
I read this in an article on SOA a few days back, as I'm working on SOA too.
EDIT:
Meanwhile I came across this article and thought of sharing it with you.
The video explains the current state of SOA quite well, along with how different people view it.
I'm getting the words of the song 'If I Had a Hammer' coming to mind. SOA is an architectural approach to developing software as a series of services. In my opinion it is best suited to systems with less-than-immediate latency, limited bandwidth, high cost of access, and so on (these are all obviously highly subjective). You don't need full SOA; just get loose coupling between components, which I would argue is a good goal to achieve.
DB calls can go through a service (take ADO.NET Data Services, for example), but you really have to weigh up what the service is to provide. Take caching: a decent approach to SOA will consider that data may need to be cached to reduce service load. So can your data be stale in the UI? Are you allowing that use case? Is it right for login info to be stale? (A rough example, I know, but possibly something that needs to be addressed.)
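As a hedged illustration of that caching trade-off (the names are invented), a five-minute cache reduces service load at the price of exactly the staleness question above:

    using System;
    using System.Runtime.Caching;

    public class CachedLookupService
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Cache reference data for five minutes to reduce load on the
        // service/database; callers must accept five-minute-old data.
        public string GetCountryName(string code)
        {
            var cached = Cache.Get(code) as string;
            if (cached != null)
                return cached;

            var value = LoadFromService(code); // the expensive remote call
            Cache.Set(code, value, DateTimeOffset.Now.AddMinutes(5));
            return value;
        }

        private string LoadFromService(string code)
        {
            // Placeholder for the actual service call.
            return "Country-" + code;
        }
    }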
All in all, it depends. I think some things lend themselves to SOA very well. If you take a DDD approach, then the services that represent domains would probably do so. In this way your UI talks to domain services, not rows in a table, as the DB is abstracted behind domain services.
Don't use one methodology to solve all problems.
See this SO question too
It's a service-oriented architecture, not a service-exclusive architecture.
Presentation logic and plumbing have to live somewhere; it all depends on where it makes the most sense for it to live.
For example, let's say you have a UI component that relies on a highly chatty but efficient set of calls to a database to generate a complex analysis of something (take your pick). If your web browser is making all those calls, you introduce massive network latency and concurrency issues. If a web service makes all those calls, you are potentially putting presentation logic into it to format that result.
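A hedged sketch of the alternative (all names invented): one coarse-grained operation runs the chatty queries next to the database and returns raw data, so presentation logic stays out of the service.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class AnalysisResult
    {
        // Raw numbers only; formatting belongs in the presentation layer.
        [DataMember] public decimal Total { get; set; }
        [DataMember] public int ItemCount { get; set; }
    }

    [ServiceContract]
    public interface IAnalysisService
    {
        // One round-trip from the client; the chatty database calls
        // all happen server-side, close to the data.
        [OperationContract]
        AnalysisResult RunComplexAnalysis(int accountId);
    }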
If you are using Session state (or web services period), you are essentially using ASP.Net anyway. Try uninstalling it and see if your web services still run.
If presentation logic needs to live on the server side, it is better for it to live within a framework intended for presentation rather than a web service, IMO. If you haven't looked at MVC 2, do so. It makes it incredibly easy to set up an application that melds browser and server UI support (for example, jQuery validator controls backed by server-side validation).
Conversely, the web browser provides an expressive platform. Assuming browser support and team knowledge, the AJAX/SOA architecture you describe is a good one. I'm using it more and more and trying to make my server pages cleaner and simpler but I have no plans to exclude ASP.Net from my toolkit any time soon.
Client implementation should be completely disconnected from the back-end web service in a SOA. The service should be able to be consumed by ANY client. If you are using .NET on the back end and front end because they can be coded to communicate directly, then you are missing the point, because now they are tightly coupled and what you have is a stovepipe application. The client should have no idea how the server side is implemented; it shouldn't matter if the back-end web service is built using .NET, Java, or whatever.
In a true SOA, you should be able to search for services in the services repository, perhaps tie the outputs in with other services or use XSLT to create alternative outputs that weren't necessarily considered when the original service was built, and consume them in a standard way in any client on the front end.
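For instance (a hypothetical sketch; the file names are made up), reshaping a service's XML output with a stylesheet the original authors never anticipated:

    using System.Xml.Xsl;

    class TransformExample
    {
        static void Main()
        {
            // Apply a stylesheet to a service's XML output to produce
            // an alternative shape the service never had in mind.
            var xslt = new XslCompiledTransform();
            xslt.Load("ReshapeOrders.xslt");
            xslt.Transform("service-output.xml", "alternative-output.xml");
        }
    }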
It sounds like what you're really asking is how to build a single application. The point of a SOA is to provide standard data sets through re-usable interfaces that have no specific application or implementation in mind. To start out building a single application with the entire back-end comprised of SOA services would be a huge undertaking. In MY mind, each back-end service should be built because of its intrinsic value all on its own and be provided to the entire SOA "domain". Then when you or I decide to make a client that does X, Y, and Z, we can just go find those capabilities in the SOA and ingest them.

Moving from single user desktop applications to multiuser development [closed]

I'm trying to bootstrap a micro-ISV on my nights and weekends. I have an application at a very early stage of development. It is written in C# and consists mainly of a collection of classes representing the problem domain. At this point there's no UI or data persistence. (I haven't even settled on the .NET platform. It's early enough that I could change to Java or native executables.)
My goal for this application is that it will be a hybrid single-user / occasionally connected multiuser application. The single-user part will use an embedded database for local storage. This is a development model I'm familiar with.
The multiuser part is where I have no prior experience. I know each user will need two things:
IP based communication to a remote server on the public internet
User authentication and remote data storage
I have an idea of what services I want this server to provide (information lookup and user-to-user transactions), but beyond that I'm out of my element. The server will need to be hosted by a third party, since I don't have the resources to run my own server. Keeping in mind that I will be the sole developer for this project for the foreseeable future:
Which technologies would be the simplest way to implement the two things mentioned above? Is direct access to the datastore/database acceptable, or is it better to isolate it? Should I implement a web service? If so, SOAP or REST?
What other things do I need to consider when moving to a multiuser application?
I know security is a greater concern in a multiuser application, especially when you're dealing with any kind of banking information (which I will be). Performance can be an issue when dealing with a remote connection and large numbers of users. Anything else I'm overlooking?
Regarding moving to a multiuser application, centralising your data is the first step of course, and the simplest way to achieve it is often to use a cloud-based database, such as Amazon SimpleDB or MS Azure. You typically get an access key and a long 'secret' for authentication.
If your data isn't highly relational, you might want to consider Amazon SimpleDB. There are SDKs for most languages, which allow simple code to store/retrieve data in your SimpleDB database using a key and secret, anywhere in the world. You pay for the service based on your data storage and volume of traffic, so it has a very low barrier of entry, especially during development. It will also scale from a tiny home application up to something of the size of amazon.com.
If you do choose to implement your own database server, you should remember two key things:
Ensure no session state exists, i.e. the client makes a call to your web service, some action occurs, and the server forgets about that client (apart from any changed data in the database, of course). Similarly, the client should not hold any data locally that could change as a result of interaction from another user. Cache locally only data you know won't change (or that you don't care about changing).
For a web service, each call will typically be handled on its own thread, so you need to ensure that access to the database from multiple threads is safe. If you use the standard .NET or Java ways of talking to a SQL database, this should be handled for you. However, if you implement your own data storage, it is something you'd need to worry about.
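A minimal sketch of that stateless, thread-safe pattern (the table and connection string are invented):

    using System;
    using System.Data.SqlClient;

    public class OrderService
    {
        // No shared mutable fields, so concurrent calls handled on
        // separate threads cannot interfere. Each request opens (and
        // disposes) its own connection.
        public decimal GetOrderTotal(int orderId)
        {
            using (var conn = new SqlConnection(GetConnectionString()))
            using (var cmd = new SqlCommand(
                "SELECT SUM(Price * Quantity) FROM OrderLines WHERE OrderId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", orderId);
                conn.Open();
                var result = cmd.ExecuteScalar();
                return result == DBNull.Value ? 0m : (decimal)result;
            }
        }

        private static string GetConnectionString()
        {
            // In a real application, read this from server-side configuration.
            return "Server=.;Database=Shop;Integrated Security=true";
        }
    }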
Regarding the question of REST/SOAP etc., a key consideration should be what kinds of platforms/devices you want to use to connect to the database server. For example if you were implementing your server in .NET you might consider WCF for implementing your web services. However that might introduce difficulties if you later want to use non-.NET clients. SOAP is a mature technology for web services, but quite onerous to implement, and libraries to wrap up the handling of SOAP calls may not necessarily be available for a given client platform. REST is simple to implement (trivially easy if you use ASP.NET MVC on your server), accessible by any client that can handle HTTP POST/GET without the need for libraries, and easy to test, so REST would be my technology of choice.
If you are sticking with .net (my personal preference), I would expose data access calls via WCF. WCF configuration is really flexible and pretty easy to pick up and you'll want to hide your DB behind a service layer.
1. Direct access to the DB is the simplest, and the worst. Just think about how you'd authenticate the DB access... I would just write a remote-able API with serializable parameters and worry about which transport to use later (web services, IIOP, whatever); the communication details are all wrapped and hidden anyway. A sketch of what that API shape might look like follows item 2.
2. None.
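A hedged sketch of such a remote-able API (all names invented): the client codes against the interface, and the transport is chosen behind it later.

    using System;

    [Serializable]
    public class AccountBalanceRequest
    {
        public string AccountId { get; set; }
    }

    [Serializable]
    public class AccountBalanceResponse
    {
        public decimal Balance { get; set; }
        public DateTime AsOfUtc { get; set; }
    }

    // Clients depend only on this interface; the implementation can be
    // wired to web services, IIOP, or anything else without client changes.
    public interface IBankingApi
    {
        AccountBalanceResponse GetBalance(AccountBalanceRequest request);
    }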
