I am trying to learn RavenDB and am using its .NET client. If I am not wrong, the main reason to use a NoSQL database like RavenDB is speed. It seems to be very fast because it is not relational. However, playing with RavenDB's .NET client, I find that all calls are REST-based. Doesn't that slow things down? For each document added, it makes a call to HiLo, which lets the .NET client know the next unique number to use, and then it makes a second call to store the actual document.
You seem to be running RavenDB in a console app and checking what happens in a very short-lived fashion.
Run RavenDB in a real app over time, and you'll see that it is highly optimized for network usage.
You only see this HiLo call once per X number of updates, and that X changes based on your actual usage scenarios.
The more you use RavenDB, the faster it becomes.
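To make the HiLo behavior concrete, here is a minimal sketch using the classic RavenDB .NET client; the server URL and the Order entity are hypothetical. The first Store() triggers a single HiLo round trip to reserve a whole range of IDs; every store after that assigns its ID locally until the range runs out.

```csharp
using Raven.Client.Document; // classic RavenDB .NET client

public class Order
{
    public string Id { get; set; }        // assigned client-side via HiLo
    public string Customer { get; set; }
}

class Program
{
    static void Main()
    {
        // Hypothetical local server URL.
        var store = new DocumentStore { Url = "http://localhost:8080" };
        store.Initialize();

        using (var session = store.OpenSession())
        {
            for (int i = 0; i < 100; i++)
            {
                // Store() takes an ID from the cached HiLo range; a network
                // call happens here only when the range is exhausted.
                session.Store(new Order { Customer = "customers/" + i });
            }

            // All 100 documents are sent to the server in one batched request.
            session.SaveChanges();
        }
    }
}
```

The batching in SaveChanges() is the other half of the network optimization described above: the per-document REST overhead the question worries about is amortized across the whole session.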
I want to stress-test my .NET web service. It feels like something is limiting the number of concurrent connections I can have; even when trying from two different computers against the same server, the results were pretty much the same. (All of this is done locally; the server and clients are on a local network, so response time is very fast.)
So is there a setting I need to change on my server machine to allow more incoming connections?
There are various things that can limit the amount of processing possible, each of which requires research to see whether it applies, so you might want to add more to your question about what you have already verified.
Regardless, based on your information I would assume that session state is enabled. With default behavior, that limits processing to a single request at a time per client, because of the synchronization locks that guarantee read-write access to the session. I assume this is the root cause of what you are seeing. This StackOverflow post talks about this specifically.
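If session state does turn out to be the bottleneck, here is a minimal sketch of relaxing the lock, assuming an ASP.NET MVC service; the controller name is hypothetical. Actions that never write to the session can declare read-only (or disabled) session state so that concurrent requests from the same client are no longer serialized:

```csharp
using System.Web.Mvc;
using System.Web.SessionState;

// ReadOnly session state drops the exclusive per-session lock, so several
// requests from the same client can execute in parallel. Use Disabled if
// the actions touch no session data at all.
[SessionState(SessionStateBehavior.ReadOnly)]
public class StressTestController : Controller
{
    public ActionResult Ping()
    {
        return Content("ok");
    }
}
```

For a classic ASMX web service, the analogous switch is the WebMethod attribute's EnableSession property, which already defaults to false.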
Others have posted various details in the comments that can help also.
I have found, though, that load testing is best done from outside sources as well, to ensure your entire production pipeline is involved (network components, etc.).
I work for a small point-of-sale company, and we are working on an in-house tool to make our lives easier when it comes to ticketing and troubleshooting. Part of my task in this tool is to write a 'softphone' in C# WPF that we can use to accept incoming calls and make outgoing ones.
We currently use OnSIP as our SIP provider and are looking to build custom software that essentially allows us to auto-generate support tickets based on the phone number of the incoming call. In addition, we will need call transferring, recording, hold/wait, etc.
The question that seems to be causing me the most trouble is really where to begin on something like this. Thoughts?
I'm presuming this is a desktop application?
Look up pjsip.org; it's a portable C library which is very well proven. It will allow you to do everything you are asking, although it'll take you some time to write the wrapper code. You can find examples on the internet, but when we did this last year the examples just didn't work too well, so we wrote a wrapper ourselves, which I'll check on, as we had intended to open-source it. :-)
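As a starting point, here is a hedged P/Invoke sketch against the pjsua C API that ships with pjsip. The function names come from pjsua's documented API; the DLL name and the use of null pointers for default configs are assumptions to verify against your build and pjsip version.

```csharp
using System;
using System.Runtime.InteropServices;

static class PjsuaNative
{
    // Assumed name of your pjsip build output; adjust to match.
    const string Dll = "pjsua.dll";

    [DllImport(Dll, CallingConvention = CallingConvention.Cdecl)]
    public static extern int pjsua_create();

    // Per the pjsua docs, null configs mean "use defaults"; real code fills
    // in pjsua_config to register callbacks for incoming calls.
    [DllImport(Dll, CallingConvention = CallingConvention.Cdecl)]
    public static extern int pjsua_init(IntPtr uaCfg, IntPtr logCfg, IntPtr mediaCfg);

    [DllImport(Dll, CallingConvention = CallingConvention.Cdecl)]
    public static extern int pjsua_start();

    [DllImport(Dll, CallingConvention = CallingConvention.Cdecl)]
    public static extern int pjsua_destroy();
}

class SoftphoneBootstrap
{
    static void Main()
    {
        // Each pjsua call returns 0 (PJ_SUCCESS) on success.
        if (PjsuaNative.pjsua_create() != 0) throw new Exception("pjsua_create failed");
        if (PjsuaNative.pjsua_init(IntPtr.Zero, IntPtr.Zero, IntPtr.Zero) != 0)
            throw new Exception("pjsua_init failed");
        if (PjsuaNative.pjsua_start() != 0) throw new Exception("pjsua_start failed");

        Console.WriteLine("pjsua stack running; next: add a SIP account and call handlers.");
        PjsuaNative.pjsua_destroy();
    }
}
```

From here, account registration (pjsua_acc_add) and call control (pjsua_call_make_call, transfer, hold) follow the same wrapping pattern, though the config structs take real marshalling work.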
We have a .NET application using ADO.NET and Entity Framework, plus a lot of legacy stored procedures. Occasionally an operation will drive the database server's CPU to 100% for seconds or even minutes, and during this time no other operations can be executed against the database. Some of the culprit code is far too complex and business-critical to feasibly refactor in the short term, but the problem can also occur in newer code, depending on the situation.
I would like to prevent any one SQL operation from taking 100% of the CPU. Is there any way to configure MS SQL Server to give no more than, say, 20% of the CPU to any one query?
I know that ideally we would rewrite the code to be less intensive, but that is not feasible in the short term, so I'm looking for a general setting which ensures this can never happen.
Take a look at the Resource Governor (assuming you're using SQL Server 2008 or later). A good, simple overview of its usage is here. Though it won't necessarily work on a specific query, a reasonable classifier function will/should let you narrow it down pretty closely if you like. I don't have 10 rep yet so I can only post 2 links, but if you Google "SQL Server classifier function" you'll get some decent guidance.
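To make that concrete, here is a hedged sketch of a minimal Resource Governor setup, issued from C# via ADO.NET; the pool, group, function, login, and connection string names are all hypothetical, and the T-SQL follows the documented CREATE RESOURCE POOL / CREATE WORKLOAD GROUP / classifier pattern:

```csharp
using System.Data.SqlClient;

class ResourceGovernorSetup
{
    static void Main()
    {
        string[] batches =
        {
            // MAX_CPU_PERCENT is a soft cap applied under CPU contention;
            // SQL Server 2012+ also offers CAP_CPU_PERCENT as a hard cap.
            "CREATE RESOURCE POOL LimitedPool WITH (MAX_CPU_PERCENT = 20);",
            "CREATE WORKLOAD GROUP LimitedGroup USING LimitedPool;",
            "ALTER RESOURCE GOVERNOR RECONFIGURE;",

            // The classifier runs in master for every new session and returns
            // the workload group name; here it singles out one login.
            @"CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME
              WITH SCHEMABINDING AS
              BEGIN
                  IF SUSER_NAME() = N'HeavyBatchUser'
                      RETURN N'LimitedGroup';
                  RETURN N'default';
              END;",
            "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);",
            "ALTER RESOURCE GOVERNOR RECONFIGURE;"
        };

        // Resource Governor objects and the classifier live in master.
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            foreach (var sql in batches)
                using (var cmd = new SqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
        }
    }
}
```

Note the caveat in the comment: MAX_CPU_PERCENT only throttles when other work is competing for CPU, which matches the goal here of keeping one runaway query from starving everything else.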
Since the term Big Data is widely used for managing huge amounts of data, I want to try building a small Big Data application to understand how it is structured. How can I get started with ASP.NET technology?
Is it possible?
"Big data" is a marketing term for "highly scalable large load computing". So can you use ASP.NET for highly scalable large load computing...
Yes, and here is how (Scaling Strategies for ASP.NET Applications).
Adding to Scott's answer: apart from ASP.NET being capable of scaling to high loads with effective strategies, the .NET ecosystem also provides HDInsight in Azure, which implements the MapReduce programming model to query data across large clusters.
Azure HDInsight is the offering most closely related to marketing buzzwords like 'Hadoop' and 'Big Data'.
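For intuition about the programming model HDInsight exposes, here is a toy in-process word count in plain C#; Hadoop/HDInsight distributes the same map, shuffle, and reduce phases across a cluster instead of a single LINQ pipeline:

```csharp
using System;
using System.Linq;

class WordCount
{
    static void Main()
    {
        string[] lines = { "big data big load", "scalable load computing" };

        var counts = lines
            .SelectMany(line => line.Split(' '))                   // map: emit each word
            .GroupBy(word => word)                                 // shuffle: group by key
            .Select(g => new { Word = g.Key, Count = g.Count() }); // reduce: aggregate

        foreach (var c in counts)
            Console.WriteLine("{0}: {1}", c.Word, c.Count);
    }
}
```

The value of HDInsight (or any Hadoop distribution) is that the same two user-supplied phases, map and reduce, keep working when the input no longer fits on one machine.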
I have a requirement to load a file containing up to 1 million lines of string data. My first thought is to use C# 5.0 async to load the data without blocking the UI thread. If the user tries to access something that relies on the data, they will get a loading message.
Still, I would like the fastest possible method in order to improve the user's experience.
Is the speed of reading data from the disk purely a function of the disk speed, and thus File.ReadAllLines() as performant as any other C# code? Or is there something 'fancy' I can do to boost performance programmatically? This does not have to be described in detail. If so, what approximate percentage improvement might be achieved?
I am purely interested in read speed and not concerned with the speed of code that may process the data once loaded.
First of all, take a look at the file size; here are detailed performance measurements.
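For completeness, here is a minimal sketch of the async loading approach from the question, assuming a WPF code-behind; the file path and the StatusText control are hypothetical. Note that a sequential File.ReadAllLines is already close to disk-bound, so the async wrapper buys responsiveness rather than raw throughput:

```csharp
using System.IO;
using System.Threading.Tasks;
using System.Windows;

public partial class MainWindow : Window
{
    string[] lines;

    // async void is acceptable here because this is a UI event handler.
    async void LoadButton_Click(object sender, RoutedEventArgs e)
    {
        StatusText.Text = "Loading...";  // hypothetical TextBlock in the XAML

        // Task.Run moves the blocking read off the UI thread (C# 5 friendly:
        // File.ReadAllLinesAsync only arrived later, in .NET Core).
        lines = await Task.Run(() => File.ReadAllLines(@"C:\data\strings.txt"));

        StatusText.Text = lines.Length + " lines loaded";
    }
}
```

Anything fancier (larger StreamReader buffers, reading raw bytes and splitting manually) tends to yield only modest gains for sequential reads, since the disk remains the limiting factor.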