The company I work for makes a complex accounting application. This is a desktop app that connects to a local database server on the client's network. Some of our clients want to get e-commerce sites built but they will need access to this data.
Is it OK to host the web site at one location and feed it data from a web server in another location? I've built things like this in the past and I know it could potentially be slow. I'm hoping to mitigate that with heavy use of ASP.NET caching. Is this a reasonable architecture (for a small to medium-sized e-commerce site), or will it run like a dog? Due to much pain in the past, I'm trying to keep this simple and avoid any sort of database replication.
Cheers
Ma
Well, replication of the database might actually be the fastest option. Think about it: you can either fetch a whole bunch of data on each request, with some cache misses, or have a 'complete' local copy and thus no cache misses (none in transfer, anyway; your DB might still cache internally, of course).
Edit: so basically my answer would be: no, it's not OK to run the website and the database in two completely different locations. Two boxes in the same rack could be OK, but it would be preferable to have your web service and DB on the same (virtual) machine.
I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, I have my application running 40 websites. It's all the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then having a manual step that runs MSDeploy for each website, but this isn't ideal since I'd need to a) purchase a full license and b) always remember to add each new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First thing: it is possible to have a build configuration in TeamCity that builds and deploys to a specific location, whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly the same environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice. I don't know how you have it all set up, but if this A record is simply creating a subdomain, I'd shift to a real multi-tenant environment: one single website and one single app pool for all clients, with multiple bindings, each associated with a specific subdomain. This approach is far more scalable and uses far less memory. I've done some benchmark profiling in the past, and the amount of memory each process (app pool) consumed was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as:
Avoiding any per-client singleton components
Avoiding static variables
Ensuring the cache is not global and MUST have a client context associated with it
Paying special attention to how you save client files to the file system
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I can probably find some useful info.
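The cache-scoping rule above can be sketched roughly like this. This is a minimal illustration, not production code; the `TenantCache` class and its method names are hypothetical, and a real app would more likely wrap the ASP.NET cache the same way:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical sketch of the "no global cache" rule: every entry is
// keyed by tenant, so one client's data can never bleed into another's.
public class TenantCache
{
    private readonly ConcurrentDictionary<string, object> _cache =
        new ConcurrentDictionary<string, object>();

    // Compose the tenant id into the key instead of using a global key.
    private static string Key(string tenantId, string key)
    {
        return tenantId + ":" + key;
    }

    public void Set(string tenantId, string key, object value)
    {
        _cache[Key(tenantId, key)] = value;
    }

    public bool TryGet(string tenantId, string key, out object value)
    {
        return _cache.TryGetValue(Key(tenantId, key), out value);
    }
}
```

The same key-composition idea applies to file paths and any other per-client resource.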
I'd like to know my options for the following scenario:
I have a C# winforms application (developed in VS 2010) distributed to a number of offices within the country. The application communicates with a C# web service which lies on a main server at a separate location and there is one database (SQL Server 2012) at a further location. (All servers run Windows Server 2008)
Head Office (where we are) utilize the same front-end to manage certain information on the database which needs to be readily available to all offices - real-time. At the same time, any data they change needs to be readily available to us at Head Office as we have a real-time dashboard web application that monitors site-wide statistics.
Currently, the users are complaining about the speed at which the application operates. They say it is really slow. We work in a business-critical environment where every minute waiting may mean losing a client.
I have researched the following options, but do not come from a DB background, so not too sure what the best route for my scenario is.
Terminal Services/Sessions (which I've just implemented at Head Office, and they say it's a great improvement, although there's a terrible lag - like remoting onto someone's desktop, which is not nice to work on.)
Transactional Replication (sounds quite plausible for my scenario, but it would require all offices to have their own SQL Server database on their individual servers, and they have a tendency to "fiddle" and break everything they're left in charge of! Wish we could take over all their servers, but they are franchises, so they have their own IT people on site.)
I've currently got a whole lot of the look-up data being cached on start-up of the application, but this too takes 2-3 minutes to complete, which is just not acceptable!
Does anyone have any ideas?
With everything running through the web service, there is no need for additional SQL Servers deployed local to the client. The WS wouldn't be able to communicate with those databases unless the WS were also deployed locally.
Before suggesting any specific improvements, you need to benchmark where your bottlenecks are occurring. What is the latency between the various clients and the web service, and then from the web service and the database? Does the database show any waiting? Once you know the worst case scenario, improve that, and then work your way down.
Some general thoughts, though:
Move the WS closer to the database
Cache the data at the web service level to save on DB calls
Find the expensive WS calls, and try to optimize their throughput
If the lookup data doesn't change all that often, use a local copy of SQL CE to cache that data, and use the MS Sync Framework to keep the data synchronized to the SQL Server
Use SQL CE for everything on the client computer, and use a background process to sync between the client and WS
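The second suggestion above (caching lookup data at the web service) might look something like this sketch using `System.Runtime.Caching.MemoryCache` (available from .NET 4). `LookupCache`, `GetLookupData`, and the 10-minute expiry are all assumptions for illustration:

```csharp
using System;
using System.Runtime.Caching;

public static class LookupCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Hypothetical loader; in the real service this would query SQL Server.
    public static Func<string, object> LoadFromDatabase = key => null;

    public static object GetLookupData(string key)
    {
        var cached = Cache.Get(key);
        if (cached != null)
            return cached;

        var data = LoadFromDatabase(key);
        if (data != null)
        {
            // Keep lookup data for 10 minutes so repeated client
            // requests don't each pay the DB round trip.
            Cache.Set(key, data, DateTimeOffset.Now.AddMinutes(10));
        }
        return data;
    }
}
```

Because this lives in the web service, all the clients share one cache rather than each warming their own on start-up.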
UPDATE
After your comment, two additional thoughts. If your web service payload(s) is/are large, you can try adding compression on the web service (if it hasn't already been implemented).
You can also update your client to make the WS calls asynchronously, either on a thread or, if you are using .NET 4.5, with async/await. This would at least keep the UI usable, but it wouldn't necessarily fix any issues with data load times.
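A rough sketch of the async/await suggestion, assuming .NET 4.5; `CallServiceAsync` is a stand-in for whatever generated proxy method the real web service exposes:

```csharp
using System;
using System.Threading.Tasks;

public class Dashboard
{
    // Placeholder for the real web-service call (e.g. a WCF client proxy).
    public Func<Task<string>> CallServiceAsync =
        () => Task.Run(() => "stats");

    // Requires .NET 4.5 / C# 5. The UI thread stays free while the
    // call is in flight; await resumes on the UI thread when invoked
    // from a synchronization context (e.g. a WinForms event handler).
    public async Task<string> RefreshAsync()
    {
        var stats = await CallServiceAsync();
        // Update UI controls with `stats` here.
        return stats;
    }
}
```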
I have three applications running in three separate app pools. One of them is an administrative app that a few people have privileged access to. One of the functions the administrative app provides is creating downtime notices. So when a user goes into the administrative app and creates a downtime notice, the other two apps are supposed to pick up on the new notice and display it on the login page.
The problem is that these notices are cached, and because each app is in a separate app pool, the administrative app doesn't have any way to clear the downtime-notice cache in the other two applications.
I'm trying to figure out a way around this. The only thing I can think of is to insert a record in the DB that denotes that the cache needs to be cleared, and have the other two apps check the DB when loading the login page. Does anyone have an approach that might work a little more cleanly?
*Side note, this is more widespread than just the downtime notices, but I just used this as an example.
EDIT
Restarting the app pools is not feasible as it will most likely kill background threads.
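For what it's worth, the database-flag idea from the question can be kept cheap if the record is just a version number that each app compares against the version it cached. The class and the loader delegates below are hypothetical; a real implementation would read the version with a one-row SELECT:

```csharp
using System;

// Sketch of the DB-flag approach: each app remembers the version it
// cached against; the admin app bumps the number in the database.
public class NoticeCache
{
    private long _cachedVersion = -1;
    private object _notices;

    // Placeholder for a SELECT against a hypothetical version table.
    public Func<long> ReadVersionFromDb = () => 0;
    // Placeholder for the real downtime-notice query.
    public Func<object> LoadNotices = () => null;

    public object GetNotices()
    {
        long dbVersion = ReadVersionFromDb();
        if (dbVersion != _cachedVersion)
        {
            _notices = LoadNotices();   // cache is stale: reload
            _cachedVersion = dbVersion;
        }
        return _notices;
    }
}
```

The login page pays one cheap query per hit; the full notice query only runs when the admin app has actually changed something.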
If I understand correctly, you're basically trying to send a message from the administrative app to the other apps. Maybe you should consider creating a WCF service on these apps that could be called from the administrative application. That is a standard way to communicate between different apps if you don't want to use a shared medium such as a database, and it doesn't force you into a polling model.
Another way to look at this is that this is basically an inter-application messaging problem, which has a number of libraries already out there that could help you solve it. RabbitMQ comes to mind for this. It has a C# client all ready to go. MSMQ is another potential technology, and one that already comes with Windows - you just need to install it.
If it's database information you're caching, you might try your luck at setting up an SqlCacheDependency.
Otherwise, I would recommend not using the ASP.NET cache and instead finding a third-party solution that uses a distributed caching scheme; that way all applications share one cache instead of three separate ones.
I'm not saying this is the best answer or even the right answer; it's just what I did.
I have a series of e-commerce websites on separate servers and data centers that pull catalog data from a central back-office website and then cache it locally. In my first iteration, I simply used GET requests: the central location could ping the corresponding consuming website to initiate its own cache-refresh routine. I already had SSL set up on each of the e-commerce servers, so the back-office web app could send credentials via a GET over SSL to initiate the refresh securely.
At a later stage, we found it more efficient to use sockets instead, with each consuming website acting as a client of the back office and listening for changes in the data. The back-office website could then tell the corresponding website when a particular account changed, and communicate exactly what changed. This approach is much more granular, and we could update in small bits as needed rather than one large chunked update, but it was definitely more complicated than our first try.
I've read about various cross-machine caching mechanisms (Redis, Velocity, nMemCached, etc...). They all seem to require a central machine to manage the cache.
Is there such a thing as a cache engine that self-installs - e.g., if caching does not exist on the current subnet, it creates a node; if it does exist, it joins the machine to the caching pool?
Context: I have an app that deploys to around 100 users within the same subnet via ClickOnce. Each of these users access a resource via the WAN (across country and in some cases across the ocean) that performs very CPU-intensive computations and takes significant time to complete.
As a result, the app feels sluggish. I've done what I can to alleviate that by moving long-lived queries onto separate threads, but that only takes you so far. I've added local caching (via a SQL Compact DB), which works pretty well, but most users access similar information, and together they put real pressure on the computation server. I think I can take it to the next level if I can ship an in-memory cache with my app that seamlessly works with the other machines to create a network-wide caching mechanism.
You're the one who knows what will work best, but having a "server app" that coordinates the whole lot might be a good thing:
User1 asks Server "I need X".
Server tells User1 "Well, ask for it to DataBase"
User2 asks Server "I need X."
Server tells User2 "User1 got it."
...
User1 tells Server "I don't want X anymore."
You could also make some types of data "uncacheable" due to their volatile nature, or to avoid clogging one user's connection. Sure, the server will get a lot of requests, but compare that with a broadcast-across-the-network solution. If I didn't understand your problem correctly, just write a comment, disregard this, and I'll remove the answer so as not to mislead SO users.
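The dialogue above boils down to the server keeping a registry of which client currently holds each item. A minimal sketch, with hypothetical names (real code would also need to handle clients disappearing without releasing):

```csharp
using System.Collections.Generic;

// Sketch of the coordination protocol: the server remembers which
// client currently holds each cached item.
public class CacheCoordinator
{
    private readonly Dictionary<string, string> _owners =
        new Dictionary<string, string>();

    // "I need X" -> either the name of the peer that already has it,
    // or null, meaning "fetch it from the database yourself".
    public string RequestItem(string item, string requester)
    {
        string owner;
        if (_owners.TryGetValue(item, out owner))
            return owner;
        _owners[item] = requester;  // requester becomes the owner
        return null;
    }

    // "I don't want X anymore."
    public void ReleaseItem(string item)
    {
        _owners.Remove(item);
    }
}
```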
If you don't want a master machine, or you don't want to rely on a specific network layout/installation, you could consider Peer to Peer as an option. WCF has native peer to peer support. Here is a link that looks somewhat relevant to your need: How To Design State Sharing In A Peer Network
I'm going to develop a POS system for a medium-scale company, and the requirement is that data be up to date across all of their branches. My thinking is that moving the server from a local machine to the web would solve this, but I've never set up an online server for a Windows application.

What is the best option for a secure database? Can SQL Server handle this well? I've tried googling, but none of the results are what I'm after. What would you do when facing this problem?

My coding knowledge is just VB and C#, plus SQL for the database, though I'd like to learn something new if there is a better option. I want the database to be impossible for anonymous users to access and stored securely at the back end only.
What you probably want to do is create a series of services exposed on the internet and accessed by your application. All database access would be mediated by these services. For security you would probably want to build them in WCF and expose them through IIS. Then your Windows application would just call these services for most of its processing.
If you design it properly you could also have it work with a local database as well so that it could work in a disconnected manner if, for example, your servers go down.
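A minimal sketch of such a WCF service contract; `IPosService` and `GetProduct` are hypothetical names, and the real service would query the central database inside the operation:

```csharp
using System.ServiceModel;

// Hypothetical contract: the Windows app calls this service (hosted
// in IIS) instead of talking to the database directly.
[ServiceContract]
public interface IPosService
{
    [OperationContract]
    string GetProduct(int productId);
}

public class PosService : IPosService
{
    public string GetProduct(int productId)
    {
        // In the real service, query the central SQL Server here.
        return "product " + productId;
    }
}
```

Because the database is only reachable through the service, you can lock it down so nothing anonymous ever touches it directly.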
Typically you don't move the server off of the site premises.
The problem is that they will go completely down whenever your remote server is inaccessible. Causes include internet service interruptions (pretty common) and an overloaded remote server (common enough); basically, anything that stops the traffic between the store location and your remote server will bring them to their knees. The first time this happens they'll scream. The second time, they'll want your head over the lost sales.
Instead, leave a SQL Server at each location. Set up a master SQL Server somewhere, then set up a VPN connection between the stores and this central office. Finally, have the store SQL boxes do merge replication with the central office. Incidentally, don't use the built-in replication; use an off-the-shelf product that specializes in replicating SQL Server. The built-in one can be difficult to learn.
In the event their internet connection goes dark, the individual stores will still be able to function. It will also remain performant, as all of the desktop app traffic goes purely to the local SQL box.
Solving replication errors is much easier than dealing with a flaky ISP.
I would recommend you to check Viravis Platform out.
It is an application platform that can also be used simply as an online database for any .NET client via the provided SDK. It has its own generic Windows and web clients, plus some custom web solutions for specific applications.
You may be using it as a complete solution or as a secure online database backend.