Application on client's machine with database operations called directly from the server [closed] - c#

I am having trouble phrasing this correctly, but is it possible to have a desktop-based application where all of the work is done on the server (specifically the database work) instead of on the client? The use case: for internal users on the same network as our server, the application works great. However, remote workers who have to use the application over our VPN really struggle with latency issues. Our current plan is to just turn it into a web application, but the problem with that is we feel we will lose efficiency in the application. Ideally we would build a desktop app, in something along the lines of WPF, but with all of the work (database connections/calls) done somewhere other than the client. It doesn't necessarily have to be Microsoft, but that is what we are aiming for. Does anyone have any insight into how this could be done? Thanks in advance!

Dan's comment: Well, it's really hard to diagnose without seeing the application itself. Since your application is connecting directly to the database server, the question is: is the DB server returning large datasets that are then processed by your desktop application? If yes, then moving that logic to a server application that connects to the database, performs the processing and returns a result would reduce the impact of network latency, provided that result is smaller than the source data it was computed from.
However, this quickly spirals into other questions. What data processing is being done? What is happening with the data would change your server-side architecture. If it's CPU intensive, then it will be important that your server-side application can scale horizontally with demand.
All that being said, this is only posted as an answer because it wouldn't fit in a comment. It's not really an answer, and your question really requires a good architect to sit with you and look at the needs of your users and application to address this fairly.
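To make that first point concrete, here is a rough, hypothetical sketch of the "move the processing to a server application" idea, assuming ASP.NET Core and an invented Orders table (none of these names come from the question):

// Hypothetical sketch: a small HTTP API that does the database work next to
// the database server, so only a compact result crosses the VPN.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/orders/summary/{customerId:int}", async (int customerId) =>
{
    // All database access happens here, on the server side.
    await using var conn = new Microsoft.Data.SqlClient.SqlConnection(
        builder.Configuration.GetConnectionString("MainDb"));
    await conn.OpenAsync();

    await using var cmd = conn.CreateCommand();
    cmd.CommandText =
        "SELECT COUNT(*), COALESCE(SUM(Total), 0) FROM Orders WHERE CustomerId = @id";
    cmd.Parameters.AddWithValue("@id", customerId);

    await using var reader = await cmd.ExecuteReaderAsync();
    await reader.ReadAsync();

    // Only this small summary is sent back to the desktop client.
    return Results.Ok(new { Count = reader.GetInt32(0), Total = reader.GetDecimal(1) });
});

app.Run();

The WPF client would then call something like httpClient.GetFromJsonAsync<OrderSummary>(...) over the VPN and bind the small result, instead of opening its own database connection.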
UPDATED
So based on your comments I can't be sure whether this would be better or worse in your scenario, but you could try VDI or application streaming solutions. The nice thing is that, whether or not it works out, you can test an application streaming solution with no changes to your application. However, depending on your network and security requirements, the real work would be getting your systems connected to the application streaming service.
You could try something like AWS AppStream 2.0 (https://aws.amazon.com/appstream2/) and see if it is any faster for your users. Above and beyond that, you would probably need to get another set of eyes on your solution and architecture to help you redesign/rearchitect the application to work within the constraints you are dealing with.
Talk to your VPN software vendor to see if you can better scale their solution.
Configure your VPN to use a split tunnel connection so that all of your users' internet traffic doesn't route over your VPN gateway, unless that's a requirement for you.

Related

What is the best way to handle server-to-server data transmission in an offline satellite environment with low bandwidth? [closed]

We have a customer that connects to our servers through satellite. However, they have a major concern that the facilities in which they want to run our applications often have connection issues. So, they want to run our applications locally (on specific PCs). Data from the host system would feed the local machines continuously. If the connection is lost, the PC still has enough data to conduct its business until the connection is restored. At that point, the data changes from the PC are reflected back to the host system and vice versa.
I guess we would be considering some type of replication (this is all new to me). This raises many questions, but here are the main ones:
If we replicate, then they need a copy of SQL Server on each PC. We are talking about 60 sites, which would be very expensive due to licensing, plus other support costs.
Is it better to always run replication or only in the event that the connection was lost?
How does the local system get in sync with the hosted system?
Just looking for a better/less expensive solution.
The way I see it, there are two ways to go about it (depending on your requirements).
If you think the problem will not persist you can use the circuit breaker pattern:
https://learn.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker
Handle faults that might take a variable amount of time to recover from, when connecting to a remote service or resource. This can improve the stability and resiliency of an application.
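As a rough sketch, this is what the pattern can look like with the Polly library's classic circuit-breaker policy (the endpoint URL and the thresholds below are made up for illustration):

using System;
using System.Net.Http;
using Polly;
using Polly.CircuitBreaker;

// Open the circuit after 3 consecutive failures and keep it open for 30 seconds.
var breaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(30));

var http = new HttpClient();

try
{
    // While the circuit is open, calls fail fast with BrokenCircuitException
    // instead of waiting on the flaky satellite link.
    var response = await breaker.ExecuteAsync(
        () => http.GetAsync("https://host.example.com/api/data"));
    response.EnsureSuccessStatusCode();
}
catch (BrokenCircuitException)
{
    // Fall back to locally cached data until the connection recovers.
}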
If you need to retry indefinitely and you can't afford to lose data then you will need a custom solution.
In a totally local environment you could go with either a local database like SQLite, where you can store items and retry them if they were not sent successfully, or store the calls in Microsoft Message Queuing (MSMQ). Then you can build a service that reads the database or the queue and retries.
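As a hedged illustration of the first option, a small local "outbox" table in SQLite plus a background retry loop might look roughly like this (the table, file and method names are invented, and the Microsoft.Data.Sqlite package is assumed):

using System;
using System.Collections.Generic;
using Microsoft.Data.Sqlite;

public class OutboxStore
{
    private const string ConnectionString = "Data Source=outbox.db";

    // Record a call that could not be sent (or that should be sent later).
    public void Enqueue(string payload)
    {
        using var conn = new SqliteConnection(ConnectionString);
        conn.Open();

        using (var create = conn.CreateCommand())
        {
            create.CommandText =
                "CREATE TABLE IF NOT EXISTS Outbox (Id INTEGER PRIMARY KEY AUTOINCREMENT, Payload TEXT)";
            create.ExecuteNonQuery();
        }

        using var insert = conn.CreateCommand();
        insert.CommandText = "INSERT INTO Outbox (Payload) VALUES ($payload)";
        insert.Parameters.AddWithValue("$payload", payload);
        insert.ExecuteNonQuery();
    }

    // Called periodically by a background service; sendToHost is whatever call
    // pushes the change to the remote system and returns true on success.
    public void RetryPending(Func<string, bool> sendToHost)
    {
        using var conn = new SqliteConnection(ConnectionString);
        conn.Open();

        var pending = new List<(long Id, string Payload)>();
        using (var select = conn.CreateCommand())
        {
            select.CommandText = "SELECT Id, Payload FROM Outbox ORDER BY Id";
            using var reader = select.ExecuteReader();
            while (reader.Read())
                pending.Add((reader.GetInt64(0), reader.GetString(1)));
        }

        foreach (var item in pending)
        {
            if (!sendToHost(item.Payload))
                break; // still offline, try again on the next run

            using var delete = conn.CreateCommand();
            delete.CommandText = "DELETE FROM Outbox WHERE Id = $id";
            delete.Parameters.AddWithValue("$id", item.Id);
            delete.ExecuteNonQuery();
        }
    }
}

The same shape works with MSMQ: replace the table with a queue and the retry loop with a message receiver.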

How can I reduce page load times in a *VERY UNCONVENTIONAL* C# ASP.NET web forms application? [closed]

(I have been at this problem for about 6 months now, so please forgive me if anything about this question does not fit Stack Overflow's guidelines. I'm pretty sure I'm ok though. I knew there was a chance of this being too vague, I just wasn't sure.)
Long story short: I have been tasked with improving an absolute demon of an ASP.NET web application that was developed in 2001. The original programmer did not use a single ASP.NET server control, but instead opted to print ALL HTML, JavaScript, and CSS via Response.Write(). All of the .aspx pages are, quite literally, blank - everything is done in the .aspx.cs.
Here is an example of what I'm working with, and what 90% of the codebehind looks like:
if (Common.IsSuperuser())
{
    Response.Write("<div style=\"float:left;padding:5px 5px 5px 25px;\">");
    Response.Write("<form action=\"users.aspx?id=" + strUserID + "&siteid=" + thisSite._siteGUID + "\" method=\"post\" style=\"display:inline;\">");
    Response.Write("<input type=\"hidden\" name=\"isClearEmpty\" value=\"true\" />");
    Response.Write("<input type=\"submit\" name=\"clearEmptyAccounts\" value=\"Clear Empty Users\" />");
    Response.Write("</form>");
    Response.Write("</div>");
}
Please end me.
One of the major goals is to reduce page load times, which are currently sitting between 4 and 8 seconds. However, when I set two breakpoints at the '{' and '}' of the Page_Load() function, the time elapsed between them is only ~475ms for any given .aspx page. Where are the other 3500-7500ms coming from?
According to Chrome's console, that time is spent simply waiting for the server. We host dozens of web stores on one server (50+), each store is its own ASP.NET application in its own directory, and furthermore each store's admin component is also its own ASP.NET application within its respective store's directory. Each store/admin combo also has its own Microsoft Access (yes, you read that right) database. It is, frankly, insane.
The kicker is that the actual store application loads just fine, but the admin-side application is always plagued with >5s load times. We also quite often get a plaintext response "Server Too Busy" while on the admin application, but never on the client-facing store. (Literally zero other information, just a blank, white HTML page that says "Server Too Busy".)
Our stores are hosted on a virtual semi-private server by a third party, so let's rule out server settings and hardware for now, as I do not have the ability to change these things. I'm just wondering if there is anything I can do within the boundaries of the application itself to alleviate our administrators' frustrations while I rewrite the entire thing from the ground up.
If there is anything that anyone can suggest, great, I'll try it. If I'm limited by my inability to control server settings, or by the vagueness of this question (I have no more information to go on, sorry), that's fine too.
Thank you all for your time!
Install YSlow for Firebug, or PageSpeed; either will tell you the key things that are slow.
Limit your HTTP requests
Minify, compress and combine your JavaScript and CSS files
Use CSS Sprites
Put script tags at the bottom of the page
Compress your images
Install Memcache to further reduce the database load.
Set up a separate domain or CDN to serve static files.
In my experience, Access databases are terrible for more than 4 or 5 concurrent users. Migrate to SQL Server Express ASAP; it has a per-database size cap of its own (4 GB in older versions, 10 GB since 2008 R2), but it doesn't use the Jet engine.
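To give a feel for the kind of change involved, here is a hedged sketch; the paths, database names and connection strings below are placeholders, not from the question:

using System.Data.OleDb;
using System.Data.SqlClient;

// Before: the Access .mdb file opened through the Jet OLE DB provider.
var accessConn = new OleDbConnection(
    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\stores\store1\shop.mdb;");

// After: the same queries pointed at a local SQL Server Express database.
var sqlConn = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=Store1;Integrated Security=True;");

The OleDbCommand/OleDbDataReader calls become SqlCommand/SqlDataReader, though some Access-specific SQL may need adjusting.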
I suspect the StringBuilder suggestion won't make much of a difference, because Response.Write internally appends strings to a reusable buffer, so it does not suffer the performance overhead of allocating memory and then cleaning that memory up.
Microsoft Access databases are not designed to scale. For example, it maxes out (in theory) at 255 users, but Microsoft's advice is to limit it to 10 users. Yes, ten.
The wait times may be related to threads blocking while they wait for Access resources to be released. So, step #1, migrate it to SQL Server or some other database solution that is designed for enterprise use.
Microsoft Jet has a read-cache that is updated every PageTimeout milliseconds (default is 5000ms = 5 seconds). It also has a lazy-write mechanism that operates on a separate thread to main processing and thus writes changes to disk asynchronously. These two mechanisms help boost performance, but in certain situations that require high concurrency, they may create problems.
Jet can support up to 255 concurrent users, but performance of the file-based architecture can prevent its use for many concurrent users. In general, it is best to use Jet for 10 or fewer concurrent users.
Article: What are the limitations of Microsoft Access?
I am not really sure about this, but I would try adding the HTML markup to a StringBuilder object and calling Response.Write just once with the content of that StringBuilder... I can imagine all of those individual writes could be causing the problems.
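For what it's worth, applied to the snippet from the question, that suggestion would look something like this (and, as noted above, since Response.Write already buffers, treat it mainly as a readability win rather than a guaranteed fix):

// Requires: using System.Text;
if (Common.IsSuperuser())
{
    var html = new StringBuilder();
    html.Append("<div style=\"float:left;padding:5px 5px 5px 25px;\">");
    html.AppendFormat(
        "<form action=\"users.aspx?id={0}&siteid={1}\" method=\"post\" style=\"display:inline;\">",
        strUserID, thisSite._siteGUID);
    html.Append("<input type=\"hidden\" name=\"isClearEmpty\" value=\"true\" />");
    html.Append("<input type=\"submit\" name=\"clearEmptyAccounts\" value=\"Clear Empty Users\" />");
    html.Append("</form></div>");

    // A single write instead of six.
    Response.Write(html.ToString());
}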

Huge performance drop after moving to Azure [closed]

We are currently working on a cloud migration project where we made the following changes:
SQL Server 2014 to Azure SQL PaaS
Redis cache [Windows port] to Azure Redis PaaS
Static files from shared drives to Azure File Service
Implemented Transient Fault Handling for database interactions.
HttpSessionState changed from SQL Server to custom [Redis PaaS]
The application has two web applications that use the same database:
One built in the classic ASP.NET Web Forms model.
The other built using ASP.NET MVC 4.
After we moved the applications from the existing Rackspace environment [2 servers, each with 4GB RAM] to Azure, we ran a load test and got the following results:
The MVC4 application is fractionally faster.
The Web Forms application started performing poorly: with the same load, response time increased from 0.46 seconds to 45.8 seconds.
The memory usage is the same, database utilization is around 30%-40%, and CPU utilization is nearly 100% (on all the web servers) at 1100 concurrent users (at Rackspace, it served 4500 concurrent users).
We tested the application on 2 D5 Azure VMs, which have more RAM and faster CPUs.
Can anyone explain how such a drastic performance drop (one application performing almost the same, the other one performing almost 100 times slower) is possible?
NB: One observation: the CPU utilization stays at 100% even 30 minutes after stopping the load test, then it drops quickly.
I will second the notion (emphatically) that you invest as much time and energy as you can in profiling your application to identify bottlenecks. Run profiles on-premises and in Azure and compare, if possible.
Your application clearly has many moving parts and a reasonably large surface area... that's no crime, but it does mean that it's hard to pinpoint the issue(s) you're having without some visibility into runtime behavior. The issue could lie in your Redis caching, in the static file handling, or in the session state loading/unloading/interaction cycle. Or it could be elsewhere. There's no magic answer here... you need data.
That said... I've consulted on several Azure migration projects and my experience tells me that one area to look closer at is the interaction between your ASP.NET Web Forms code and SQL. Overly-chatty applications (ones that average multiple SQL calls per HTTP request) and/or ones that issue expensive queries that perform lots of logic on the database or return large result sets, tend to exhibit poor performance in public clouds like Azure, where code and data may not be co-located, "noisy neighbor" problems can impact database performance, etc. These issues aren't unique to Web Forms applications or Azure, but they tend to be exacerbated in older, legacy applications that were written with an assumption of code and data being physically close. Since you don't control (completely) where your code and data live in Azure relative to each other, issues that may be masked in an on-prem scenario can surface when moving to the cloud.
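As one illustrative (and entirely hypothetical) example of reducing chattiness, several per-request queries can often be collapsed into a single round trip; the table names, connection string and id below are placeholders:

using System.Data.SqlClient;

string connectionString = "<your Azure SQL connection string>";
int customerId = 42;

using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    conn.Open();

    // Two result sets, one network round trip.
    cmd.CommandText = @"
        SELECT Id, Name  FROM Customers WHERE Id = @id;
        SELECT Id, Total FROM Orders    WHERE CustomerId = @id;";
    cmd.Parameters.AddWithValue("@id", customerId);

    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* bind the customer */ }

        reader.NextResult();
        while (reader.Read()) { /* bind the orders */ }
    }
}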
Some specifics to consider:
take a close look at the use of data binding in your Web Forms app... in practice it tends to encourage expensive queries and transfer of large result sets from database to application, something you might sometimes get away with on-premises but not in the cloud
take another look at your SQL configuration... you don't mention what tier you're using (Basic, Standard, Premium) but this choice can have a big impact on your overall app performance (and budget!). If it turns out (for example) that your Web Forms app does issue expensive queries, then use of a higher tier may help
Azure SQL DB tiers
familiarize yourself with the notion of "cloud native" vs. "cloud capable" applications... generally speaking, just because you can find a way to run an application in the cloud doesn't mean it's ideally suited to do so. From your description it sounds like you've made some effort to utilize some cloud-native services, so that's a good start. But if I had to guess (since we can't see your code) I'd think that you might need some additional refactoring in your Web Forms app to make it more efficient and better able to run in an environment you don't have 100% control over.
More on cloud-native
dated but still relevant advice on Azure migration
If you can give us more details on where you see bottlenecks, we can offer more specific advice.
Best of luck!
There is some loop in the code causing the 100% CPU.
When the problem occurs, take a memory dump (for example from the Kudu console) and analyze it in WinDbg:
1) list thread CPU times with !runaway
2) check the call stacks of the threads, specifically the greatest CPU consumer, with ~*e!clrstack and with ~*kb

SQLite vs. SQL Server [closed]

I created an application in C# using WinForms which handles about 2000 rows of transaction data per day. I'm currently using SQL Server 2012, but I'm considering SQLite because of its popularity and because many people recommend it.
So, can you give me some ideas about which one is better for my needs?
Thanks
SQLite integrates with your .NET application better than SQL Server does.
SQLite is generally a lot faster than SQL Server.
However, SQLite only supports a single writer at a time (meaning the execution of an individual transaction). SQLite locks the entire database when it needs a lock (either read or write) and only one writer can hold a write lock at a time. Due to its speed this actually isn't a problem for low to moderate size applications, but if you have a higher volume of writes (hundreds per second) then it could become a bottleneck. There are a number of possible solutions like separating the database data into different databases and caching the writes to a queue and writing them asynchronously. However, if your application is likely to run into these usage requirements and hasn't already been written for SQLite, then it's best to use something else like SQL Server that has finer grained locking.
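As a rough sketch of the "queue the writes, single background writer" workaround mentioned above (the table name, file name and Record type are invented, and the Microsoft.Data.Sqlite package is assumed):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Data.Sqlite;

public record Record(DateTime Timestamp, double Amount);

public sealed class SqliteWriter : IDisposable
{
    private readonly BlockingCollection<Record> _queue = new();
    private readonly Task _worker;

    public SqliteWriter()
    {
        // A single consumer task means SQLite never sees competing writers.
        _worker = Task.Run(WriteLoop);
    }

    // Producers (e.g. the WinForms UI) just drop records on the queue.
    public void Enqueue(Record record) => _queue.Add(record);

    private void WriteLoop()
    {
        using var conn = new SqliteConnection("Data Source=transactions.db");
        conn.Open();

        using (var create = conn.CreateCommand())
        {
            create.CommandText =
                "CREATE TABLE IF NOT EXISTS Transactions (Timestamp TEXT, Amount REAL)";
            create.ExecuteNonQuery();
        }

        foreach (var record in _queue.GetConsumingEnumerable())
        {
            using var insert = conn.CreateCommand();
            insert.CommandText =
                "INSERT INTO Transactions (Timestamp, Amount) VALUES ($ts, $amount)";
            insert.Parameters.AddWithValue("$ts", record.Timestamp.ToString("o"));
            insert.Parameters.AddWithValue("$amount", record.Amount);
            insert.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Wait();
    }
}

At roughly 2000 rows per day this is almost certainly unnecessary; it just shows the shape of the workaround if write volume ever grows.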
SQLite is a nice, fast database to use in standalone applications. There are dozens of GUIs around to create the schema you want, and interfaces for pretty much any language you would want (C#/C/C++/Java/Python/Perl). It's also cross-platform and is suitable for Windows, Linux, Mac, Android, iOS and many other operating systems.
Here are some advantages for SQLite:
Performance
In many cases at least 2-3 times faster than MySQL/PostgreSQL.
No socket and/or TCP/IP overhead - SQLite runs in the same process as your application.
Functionality
Sub-selects, Triggers, Transactions, Views.
Up to 281 TB of data storage.
Small memory footprint.
Self-contained: no external dependencies.
Atomic commit and rollback protect data integrity.
Easily movable database.
Security
Each user has their own completely independent database(s).

Articles on how to organize background queue operations [closed]

Now I'm thinking about how to organize the architecture of the system. The system will consist of a web site, where users can upload documents and get them back processed, and a background daemon with a queue of tasks that processes the uploaded documents.
My question is:
Should I implement the daemon described above as a WCF service exposed only over named pipes (no network access to this service is needed)?
Any suggestions/tips/advices on that?
The data a user can provide is just a bunch of XML files. The ASP.NET web site will expose functionality to receive these XML files and then somehow should be able to pass them on to the daemon.
Could you please point me to some articles on that topic?
Thanks in advance!
POST EDIT
After a few hours exploring MSMQ, as suggested here by several people, my impression is that this technology is aimed more at distributed architectures (processing nodes located on separate machines, exchanging messages between different computers over the network).
At the moment, separating onto independent machines is not needed. There will be just one machine running both the ASP.NET website and the processing program.
Is that using of MSMQ so necessary?
POST EDIT #2
As I am using the .NET Framework here, please suggest only options that are compatible with .NET. Anything else is really not an option here.
If your deployment will be on a single server, your initial idea of a WCF service is probably the way to go - see MSDN for a discussion regarding hosting in IIS or in a Windows Service.
As #JeffWatkins said, a good pattern to follow when calling the service is to simply pass it the location of the file on disk that needs processing. This will be much more efficient when dealing with large files.
I think the precise approach taken here will depend on the nature of files you are receiving from users. In the case of quite small files you may find it more efficient to stream them to your service from your website such that they never touch the disk. In this case, your service would then expose an additional method that is used when dealing with small files.
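A hedged sketch of that contract shape, with one operation that takes the path of an XML file the website has already saved and one that accepts a small document directly (all names are illustrative):

using System.ServiceModel;

[ServiceContract]
public interface IDocumentProcessor
{
    // The website saves the upload to disk, then hands over just the path.
    [OperationContract]
    void ProcessFile(string filePath);

    // Alternative for small documents: send the XML itself and skip the disk.
    [OperationContract]
    void ProcessDocument(string xmlContent);
}

public class DocumentProcessor : IDocumentProcessor
{
    public void ProcessFile(string filePath) =>
        Process(System.IO.File.ReadAllText(filePath));

    public void ProcessDocument(string xmlContent) => Process(xmlContent);

    private static void Process(string xml)
    {
        // Queue or process the document here.
    }
}

Hosted in a Windows Service (or in IIS/WAS), the endpoint could use NetNamedPipeBinding, matching the named-pipes-only constraint in the question.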
Edit
Introducing a condition where the file may be streamed is probably a good idea, but it would be valuable for you to do some testing so you can figure out:
Whether it is worth doing
What the optimal size is for streaming versus writing to disk
My answer was based on the assumption that you were deploying to a single machine. If you are wanting something more scalable, then yes, using MSMQ would be a good way to scale your application.
See MSDN for some sample code for building a WCF/MSMQ demo app.
I've designed something similar. We used a WCF service as the connection point, then RabbitMQ for queuing up the messages. Then a separate service works through the items in the queue, sending an async callback when the task is finished, thereby completing the WCF call (WCF has many built-in features for dealing with this).
You can set up timeouts on each side, or you can even choose to drop the WCF connection and use the async callback to notify the user that "processing is finished".
I had much better luck with RabbitMQ than MSMQ, FYI.
I don't have any links for you, as this is something our team came up with and it has worked very well (1000 TPS with a 4-server pool, 100% stateless) - just an idea.
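For orientation only, publishing a task message with the classic RabbitMQ.Client API (pre-7.x) looks roughly like this; the queue name and message body are invented:

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Durable queue so queued work survives a broker restart.
channel.QueueDeclare(queue: "document-tasks",
                     durable: true,
                     exclusive: false,
                     autoDelete: false,
                     arguments: null);

var body = Encoding.UTF8.GetBytes("{\"documentId\": 42}");
var props = channel.CreateBasicProperties();
props.Persistent = true;

channel.BasicPublish(exchange: "",
                     routingKey: "document-tasks",
                     basicProperties: props,
                     body: body);

The background worker would then consume from the same queue (for example with an EventingBasicConsumer) and do the actual document processing.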
I would give a serious look to ServiceStack. This functionality is built-in, and you will have minimal programming to do. In addition, ServiceStack's architecture is very good and easy to debug if you do run into any issues.
https://github.com/ServiceStack/ServiceStack/wiki/Messaging-and-redis
On a related note, my company does a lot of asynchronous background processing with a web-based REST api front end (the REST service uses ServiceStack). We do use multiple machines and have implemented a RabbitMQ backend; however, the RabbitMQ .NET library is very poorly-designed and unnecessarily cumbersome. I did a redesign of the core classes to fix this issue, but have not been able to publish them to the community yet as we have not released our project to production.
Have a look at http://www.devx.com/dotnet/Article/27560
It's a little bit dated, but it can give you a head start and a basic understanding.
