Why business logic? [closed] - c#

Let's imagine a common operation being executed on a website.
After the user presses a button, the application should:
Check whether the operation is allowed (user rights, some object's consistency, relations and other things)
Update DB record
Report
This is the business-logic layer's concern, as it's written in tons of books.
In fact, we first read data from the DB and then we write data back to the DB. And if the data were changed by any other user or process while we were doing our checks, we will put an invalid result into the database. It seems like the problem should be well known, but I still can't find a good solution for it.
The question is: why do we need a business logic layer if it gives us no way to maintain business transactions?
You would probably say TransactionScope. Well, how do you prevent the data you have read from being changed by external processes? How is it possible to take an UPDLOCK from the business layer? If it is possible, wouldn't it all be much more expensive than doing the transactions in stored procedures?
There's no way to move only a part of the logic into the DB; it's all or nothing. Both steps #1 and #2 must be executed in the same transaction, and moreover, the data that was read must stay locked until the update has been made.
Ideas?

I really think you are arguing this from the wrong angle. Firstly, in your specific example, there doesn't seem to be anything saying that a change in votes by one user invalidates another user's attempt at an upvote. So, if I opened the page, and there were 200 votes for the item, and I clicked upvote, I don't really care if 10 other people have done the same in the meantime. So, validations can be run by the business layer, and if the result is that the vote can go through, the update can be done in an atomic way using a single SQL statement (e.g. UPDATE Votes SET VoteCount = VoteCount+1 WHERE ID=#ID), or a select with UPDLOCK and an update wrapped in a transaction. The tendency for ORMs and developers to go with the latter approach is neither here nor there; you still have the option to change the implementation if you so choose.
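To make that concrete, here is a minimal ADO.NET sketch of the first option, the single atomic UPDATE. The Votes table and columns come from the example above; the class and method names and the connection-string handling are illustrative assumptions.

using System.Data.SqlClient;

public static class VoteStore
{
    // Sketch only: the counter is incremented in one statement, so the read and
    // the write cannot be interleaved with another user's vote.
    public static void Upvote(string connectionString, int itemId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE Votes SET VoteCount = VoteCount + 1 WHERE ID = @ID", connection))
        {
            command.Parameters.AddWithValue("@ID", itemId);
            connection.Open();
            command.ExecuteNonQuery(); // the increment happens atomically inside the database
        }
    }
}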
Now, if the requirement is that an update to the vote count actually invalidates my vote, then it's a completely different situation. In this case, we are absolutely correct to use either optimistic or pessimistic concurrency, but these are (obviously) not applicable to a website where hundreds of people may vote at the same time, for the same item. The issue here is not the implementation, it's the nature of allowing multiple people to work on the same item.
So, to summarise, there's nothing stopping you from having a business layer outside of the DB and keeping the increment atomic. At the same time, you hopefully enjoy the benefits of having your business logic outside of the DB (which is a post in itself, but I'd argue that it's a large benefit).

Related

using SignalR in my MVC application to get users latest information in Real Time [closed]

I have been searching for a service that could do something like notify a user (a specific user) that they have a new friend request. I came across SignalR and thought it may be something useful for my application. I see that a lot of the examples and live uses of SignalR are chat applications, which makes sense. Anyway, here is what I am trying to accomplish: I have an MVC social application that uses RavenDB as the datastore. A user may request friendship with another user, and I would like to update that client in real time that they have a new request (something that checks every X seconds). I am looking for either a good SignalR example, documentation (hopefully with an example) that may point me in the right direction, or a good service other than SignalR that would suit my app better. Thanks for any answers.
SignalR would definitely suit your app well. JabbR (http://jabbr.net/, https://github.com/davidfowl/JabbR), for instance, may be a chat room, but it is constantly reaching out to the database to update and retrieve its records.
For your case I'd recommend queuing up a command on database writes to notify other users, rather than checking periodically. Meaning, let's say user A requests to be friends with user B: first that request is written to the database, and then a message is broadcast via SignalR to all parties involved.
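As a rough sketch of that write-then-broadcast idea (the hub name, the client callback name and the group-per-user convention are illustrative assumptions, not anything from your app):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical hub: each connected user joins a group named after their user name,
// so individual users can be targeted when something happens that concerns them.
public class NotificationHub : Hub
{
    public override Task OnConnected()
    {
        Groups.Add(Context.ConnectionId, Context.User.Identity.Name);
        return base.OnConnected();
    }
}

// Called from the MVC action right after the friend request document is stored in RavenDB.
public static class FriendRequestNotifier
{
    public static void Notify(string targetUserName, string fromUserName)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
        // "newFriendRequest" is a client-side callback you would register in JavaScript.
        hub.Clients.Group(targetUserName).newFriendRequest(fromUserName);
    }
}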
However, if you still would like to implement a timer check every X seconds, this is possible too. See ShootR (shootr.signalr.net, https://github.com/NTaylorMullen/ShootR), a multiplayer game that utilizes a game timer and broadcasts collisions when it detects them. Granted, ShootR is doing calculations on the server at a much higher frequency (50+ times per second), but it's essentially the same.
Therefore, if you want to take the check-every-X-seconds approach, I'd suggest taking a hybrid of the two projects (JabbR & ShootR): implement a threaded timer (instead of the custom timer ShootR uses for high-frequency updates), retrieve data from the database, and use that data to send updates to users, as in the sketch below.
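A hedged sketch of that hybrid, using a plain System.Threading.Timer, the hub from the previous sketch, and a placeholder LoadPendingRequests method standing in for your RavenDB query:

using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

// Illustrative poller: every 'interval' it asks the data store for pending
// requests and pushes them out through the hub.
public class FriendRequestPoller
{
    private readonly Timer _timer;

    public FriendRequestPoller(TimeSpan interval)
    {
        _timer = new Timer(Check, null, TimeSpan.Zero, interval);
    }

    private void Check(object state)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
        foreach (var request in LoadPendingRequests()) // placeholder for a RavenDB query
        {
            hub.Clients.Group(request.TargetUserName).newFriendRequest(request.FromUserName);
        }
    }

    // Placeholder: load friend requests that have not yet been pushed to the target user.
    private static PendingRequest[] LoadPendingRequests()
    {
        return new PendingRequest[0];
    }

    public class PendingRequest
    {
        public string TargetUserName { get; set; }
        public string FromUserName { get; set; }
    }
}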
Hope this helps!

improve performance of a REST Service [closed]

I have a method which calls a stored procedure 300 times in a for loop, and each time the stored procedure returns 1200 records. How can I improve this? I cannot eliminate the 300 calls, but are there any other ways I can try? I am using a REST service implemented through ASP.NET and using IBATIS for database connectivity.
I cannot eliminate the 300 calls
Eliminate the 300 calls.
Even if all you can do is to just add another stored procedure which calls the original stored procedure 300 times, aggregating the results, you should see a massive performance gain.
Even better if you can write a new stored procedure that replicates the original functionality but is structured more appropriately for your specific use case, and call that, once, instead.
Making 300 round trips between your code and your database quite simply is going to take time, even where the code and the database are on the same system.
Once this bit of horrible is resolved, there will be other things you can look to optimise, if required.
Measure.
Measure the amount of time spent inside the server-side code. Measure the amount of that time that is spent in the stored procedure. Measure the amount of time spent at the client part. Do some math, and you have a rough estimate for network time and other overheads.
Returning 1200 records, I would expect network bandwidth to be one of the main issues; you could perhaps investigate whether a different serialization engine (with the same output type) might help, or perhaps whether adding compression (gzip / deflate) support would be beneficial (meaning: reduced bandwidth being more important than the increased CPU required).
Latency might be important if you are calling the REST service 300 times; maybe you can parallelize slightly, or make fewer big calls rather than lots of small calls.
You could batch the SQL code, so you only make a few trips to the DB (calling the SP repeatedly in each) - that is perfectly possible; just use EXEC etc (still using parameterization).
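For illustration only (you mention IBATIS, but this shows the idea with plain ADO.NET; the procedure name and parameter are invented), a batch like this keeps parameterization while making a single round trip:

using System.Data.SqlClient;
using System.Text;

public static class BatchedCalls
{
    // Builds one command containing many EXECs; each call stays parameterized.
    public static SqlCommand BuildBatch(SqlConnection connection, int[] ids)
    {
        var sql = new StringBuilder();
        var command = new SqlCommand { Connection = connection };

        for (int i = 0; i < ids.Length; i++)
        {
            sql.AppendLine("EXEC dbo.GetRecords @id" + i + ";");
            command.Parameters.AddWithValue("@id" + i, ids[i]);
        }

        command.CommandText = sql.ToString();
        return command; // execute once, then walk the result sets with reader.NextResult()
    }
}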
You could look at how you are getting the data from ADO.NET to the REST layer. You mention IBATIS, but have you checked whether this is fast or slow compared to, say, "dapper"?
Finally, the SP performance itself can be investigated; indexing or just a re-structuring of the SP's SQL may help.
Well, if you have to return 360,000 records, you have to return 360,000 records. But do you really need to return 360,000 records? Start there and work your way down.
Without knowing too much of the details, the architecture appears flawed. On one hand, it's considered unreasonable to lock the tables for the 6 seconds it takes to retrieve the 360,000 records using a single S.P. execution, but it's fine to return a possibly inconsistent set of 360,000 records that are retrieved via multiple S.P. executions. It makes me wonder what exactly you are trying to implement and whether there is a better way to design the integration between the client and the server.
For instance, if the client is retrieving a set of records that have been created since the last request, then maybe a paged ATOM feed would be more appropriate.
Whatever it is you are doing, 360,000 records is a lot of data to move between the server and the client, and we should be looking at the architecture and purpose of that data transfer to make sure the current approach is appropriate.

C# when do I need locks? [closed]

I have read something about thread safety, but I want to understand which operations I need to put a lock on.
For example, let's say I want a thread-safe queue.
If the dequeue operation returns the first element if there is one, when do I need a lock? Let's say I'm using an abstract linked list for the entries.
Should write actions be locked? Or reading ones? Or both?
I hope someone can explain this to me or give me some links.
Synchronization in concurrent scenarios is a very wide topic. Essentially whenever two or more threads have some shared state between them (counter, data structure) and at least one of them mutates this shared state concurrently with a read or another mutation from a different thread, the results may be inconsistent. In such cases you will need to use some form of synchronization (of which locks are a flavor).
Now going to your question, a typical code that does a dequeue is the following (pseudocode):
if(queue is not empty)
queue.dequeue
which may be executed concurrently by multiple threads. Although some queue implementations internally synchronize both the queue is not empty operation as well as the queue.dequeue operation, that is not enough, since a thread executing the above code may be interrupted between the check and the actual dequeue, so some threads may find the queue empty when reaching the dequeue even though the check returned true. A lock over the entire sequence is needed:
lock(locker)
{
if(queue is not empty)
queue.dequeue
}
Note that the above may be implemented as a single thread-safe operation by some data structures, but I'm just trying to make a point here.
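For completeness, here is a minimal C# version of that idea (a sketch only, wrapping a plain Queue<T> with a private lock object):

using System.Collections.Generic;

public class SafeQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly object _locker = new object();

    public void Enqueue(T item)
    {
        lock (_locker) { _queue.Enqueue(item); }
    }

    // The emptiness check and the dequeue happen under the same lock, so no other
    // thread can drain the queue between the two.
    public bool TryDequeue(out T item)
    {
        lock (_locker)
        {
            if (_queue.Count > 0)
            {
                item = _queue.Dequeue();
                return true;
            }
            item = default(T);
            return false;
        }
    }
}

In practice, System.Collections.Concurrent.ConcurrentQueue<T>.TryDequeue already gives you this check-and-remove as a single thread-safe operation.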
The best guide for locking and threading I have found is this page (it is the text I consult when working with locking and threading):
http://www.albahari.com/threading/
You want the paragraph "Locking and Thread Safety", but read the rest also; it is very well written.
For a basic overview see MSDN: Thread Synchronization. For a more detailed introduction I recommend reading Amazon: Concurrent Programming on Windows.
You need locks on objects that are subject to non atomic operations.
Add object to a list -> non atomic
Give value to a byte or an int -> atomic
As the simplest rule of thumb, all shared mutable data requires locking a lock while you access it.
You need a lock when writing, because you need to ensure that no two threads are writing the same fields at the same time.
You need a lock when reading, because another thread could be halfway through writing the data, so it could be in an inconsistent state. Inconsistent data can produce incorrect output, or crashes.
Locks have their own set of problems associated with them (Google for "dining philosophers"), so I tend to avoid using explicit locks whenever possible. Higher-level building blocks, like ConcurrentQueue<>, are less error-prone, but you should still read the documentation.
Another simple way to avoid locks is to make a copy of the input data for your background process. Or even better, use immutable input (data that can not change).
The basic rules of locking
Changing the same thing simultaneously does not fly
Reading a thing that is being changed does not fly
Reading the same thing simultaneously does fly
Changing different things simultaneously might fly
Locking needs to prevent the situations that do not fly. This can be done in many ways, and C# gives you a lot of tools for it: among others, the Concurrent collection types like ConcurrentDictionary and ConcurrentQueue, but also ReaderWriterLockSlim and more.
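As a small illustration of those rules, a ReaderWriterLockSlim lets any number of simultaneous readers fly while writers get exclusive access (the dictionary here is just an example of shared state):

using System.Collections.Generic;
using System.Threading;

public class SharedLookup
{
    private readonly Dictionary<string, int> _data = new Dictionary<string, int>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool TryGet(string key, out int value)
    {
        _lock.EnterReadLock();              // any number of readers may hold this at once
        try { return _data.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    public void Set(string key, int value)
    {
        _lock.EnterWriteLock();             // exclusive: blocks readers and other writers
        try { _data[key] = value; }
        finally { _lock.ExitWriteLock(); }
    }
}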
You might find this free PDF from Microsoft useful. It's called 'An Introduction to Programming with C# Threads':
http://research.microsoft.com/pubs/70177/tr-2005-68.pdf
Or this somewhat more humorous read:
http://www.codeproject.com/Articles/114262/6-ways-of-doing-locking-in-NET-Pessimistic-and-opt

Heavy load website [closed]

I have an aspx website with many pages which accept user input, pull data from a SQL database, do some heavy data processing, and finally present the data to the user.
The site is getting bigger and bigger and it is starting to put a lot of stress on the server.
What I want to do is to maybe separate things a bit:
Server-A will host the website, the site will accept input from users and pass those parameters to applications running on Server-B
Server-B will fetch data from SQL, do the heavy data processing, then pass a dataset or datatable object back to the website.
Is this possible?
Sure, this is called an N-Tier Architecture.
The most obvious thing to separate is one database server, tuned to meet the demands of a database (fast disks, lots of RAM) and one or more separate web servers.
You can expand on that by placing an application tier between the web server and the database server. The application tier can accept the user input that was collected in the web tier, interact with the database, do the heavy crunching, and return the result to the web tier. Most typically, you would use Windows Communication Foundation (WCF) to expose the functionality of the application tier to the web server(s). Application servers might often be tuned to have very fast CPU's and might have slower disks and possibly less memory than database servers, depending on exactly what they need to do. The beauty of this solution is that you can just add more and more identical application servers as the load on your application grows.
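A purely illustrative sketch of what that web-to-application-tier boundary might look like as a WCF contract (all names here are invented):

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // The web tier sends the collected user input; the application tier does the
    // heavy crunching against SQL and hands back a serializable result.
    [OperationContract]
    ReportResult RunHeavyReport(ReportRequest request);
}

[DataContract]
public class ReportRequest
{
    [DataMember]
    public int CustomerId { get; set; }
}

[DataContract]
public class ReportResult
{
    [DataMember]
    public string Summary { get; set; }
}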
Based on the business model, you need some caching strategy to prevent heavy calculation for each input.
Consider a stock website. Although there are many transactions each minute, the market trend is not updated for each of them. Updates can be scheduled based on something like defined intervals (hourly, daily...), a defined number of interactions (based on count or value), and so on.
The task should be done when the server load is low. This way visitors see a stock trend on the main page that is still accurate enough.
For such heavy-load scenarios, good design is everything, since sometimes even expensive hardware will not help much.
If you like, share some info about what is being done.
I would use a load balancer like an F5; that way your architecture does not change.
But I would also use the n-tier approach to split your site into a data layer and a presentation layer.
The load balancer will then direct each request to the server with the lightest load.

What is the best way to create a reminder service in .Net? [closed]

I am looking to build a web application that will allow users to manage simple tasks or todos. As part of this, I'd like to send users email reminders daily of their tasks. I'd like each user in the system to specify what time of day they get their reminder, though. For example, one user can specify to get their emails at 8AM while another user might choose to get their emails at 7PM. Using .Net, what is the best way to architect and build this "reminder" service? Also, as a future phase to this, I'd like to expand the reminders to SMS text and/or other forms of notification.
Any guidance would be appreciated.
Your question is a little broad, but there are two general approaches to handling recurring tasks on Windows:
Write a service, and design it to check for tasks periodically. You might have it poll a database every quantum, and execute any reminders due at that time (marking off those it's completed).
Run a program as a Windows scheduled task (the Microsoft equivalent of cron). Run it every hour, and check a database as above.
I'm assuming you know how to send email from .NET - that's pretty straightforward, and most carriers have mail-to-SMS gateways.
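A rough sketch of the first approach (a service polling the database), assuming a GetDueReminders helper that stands in for your own data access and SMTP settings coming from configuration:

using System;
using System.Net.Mail;
using System.Threading;

public class ReminderWorker
{
    private readonly Timer _timer;

    public ReminderWorker()
    {
        // Check once a minute; each user's preferred delivery hour is evaluated in the query.
        _timer = new Timer(SendDueReminders, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private void SendDueReminders(object state)
    {
        foreach (var reminder in GetDueReminders(DateTime.UtcNow))
        {
            using (var client = new SmtpClient()) // SMTP settings assumed to live in app.config
            {
                client.Send(new MailMessage("reminders@example.com", reminder.Email,
                    "Your tasks for today", reminder.Body));
            }
        }
    }

    // Placeholder for the real data access: reminders whose users want delivery at this time.
    private static Reminder[] GetDueReminders(DateTime nowUtc)
    {
        return new Reminder[0];
    }

    public class Reminder
    {
        public string Email { get; set; }
        public string Body { get; set; }
    }
}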
I suppose you could do it two ways:
Windows Service running in the background, which polls the DB, looks for items at the current time, sleeps if it finds nothing, loops
This is fine enough, but it may not scale well if suddenly there are thousands of items to process within 30 seconds or so, as it will take too long to get through them. You can get around this by building it with MSMQ in mind, which allows distribution over different machines, and so on.
Similar Windows Service, but have it interactable via some sort of pulse/wait system, which is fired each time a new DB entry gets made. Then it can legitimately sleep, but is kind of "brittle".
I'd probably go with the first approach.
I would create a service that runs all the time and checks once a minute for tasks to perform. You could then perform whatever actions you need. You can create a website for users that talks to a database the service also reads from. If you want to get a little fancier, add a WCF interface to your service, have the website go through the service, and store the needed reminders in the database.
