We're migrating our databases to an offsite data center with newer, more robust servers. I have a process that imports data from my application to our local SQL Server, and it works great. However, since moving my database to the new server, I periodically receive RPC timeout errors, or errors stating that an RPC call couldn't be made.
The old SQL Server really only contained my database and a couple of other custom application databases. The new server, however, hosts other databases as well as our SharePoint database and Team Foundation Server database. While watching SQL Profiler, I noticed many frequent RPC calls from the TFSService account even though no one was using TFS at the time. Similarly, SharePoint constantly connects through RPC as well, but unlike TFS, people are actively using it.
To me, those databases should be on their own SQL Server, either individually or together. Am I wrong? Do you think the RPC calls from TFS and SharePoint could be hogging my connection? If that's the case, and if I'm not permitted to move the databases to another SQL Server, is there a way to configure TFS and SharePoint to tone down the amount of "needless" interaction with the database? Any other ideas on what I should look for?
By the way, I've received this error from my machine as well as from a virtual machine that lives in the data center, so I don't think it's a connection (distance) issue.
Thank you.
Team Foundation Server 2010 has a notifications system built-in (not to be confused with the events/alerts system that sends E-Mail or SOAP events).
Each application tier periodically polls a table in the Tfs_Configuration database, asking "have any notifications that I'm subscribed to happened since I last checked?" An example of a notification is a configuration change: when somebody changes a setting, all the application tiers pick up that change almost immediately without having to restart.
In SQL Profiler this will look like a lot of activity and load on your server, but it really isn't.
I have to implement functionality where client A sends some information to the server (which stores it in an SQL db), and client B wants to retrieve that piece of information. To do so, client B calls a WCF function, which waits for the information (for a long time, until either a timeout happens or the information arrives).
What is the best practice for implementing the WCF function? Polling the SQL db to see whether the information is there? Is there any functionality on the SQL DB side that could help?
The WCF service must be written in C# (.NET Core is preferred). The SQL server has not been chosen yet; MS SQL or an Azure solution is preferred, but it can be any SQL server that works with .NET Core.
Have a look at SQL Server's Query Notification.
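In .NET, Query Notifications are typically consumed through the SqlDependency class. Here is a minimal sketch, assuming a hypothetical dbo.Messages table and connection string (Service Broker must be enabled on the database):

```csharp
using System;
using System.Data.SqlClient;

class QueryNotificationDemo
{
    // Hypothetical connection string and table name.
    const string ConnString = "Server=.;Database=AppDb;Integrated Security=true";

    static void Main()
    {
        SqlDependency.Start(ConnString);   // opens the notification listener

        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand(
            // The query must follow Query Notification rules:
            // explicit column list, two-part table names, no SELECT *.
            "SELECT Id, Payload FROM dbo.Messages", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
                Console.WriteLine("Data changed: {0}", e.Info);

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Consume the initial result set to register the subscription.
                while (reader.Read()) { }
            }
        }

        Console.ReadLine();                // keep the process alive for the callback
        SqlDependency.Stop(ConnString);
    }
}
```

Note that a notification fires only once per subscription; the OnChange handler has to re-execute the query and subscribe again if it wants further notifications.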
You can actually do a lot of things. It's more of an architecture / project-management question.
You can connect client B to a group that pushes information to all its connected clients when it becomes available. Here are two options (there may be more); a rough sketch of the first follows the list.
SignalR - Real time communication between server and clients
Firebase - by Google, for real-time apps
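As a rough sketch of the SignalR route on ASP.NET Core (the hub, group key, and client method names are all hypothetical), client B joins a group and the server pushes the payload to that group when client A's data arrives, instead of holding a WCF call open:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Client B joins a group keyed by the piece of information it is waiting for.
public class InfoHub : Hub
{
    public Task WaitFor(string infoKey) =>
        Groups.AddToGroupAsync(Context.ConnectionId, infoKey);
}

// Injected (via IHubContext) wherever the server stores client A's data;
// pushes the payload to whoever is waiting for it.
public class InfoNotifier
{
    private readonly IHubContext<InfoHub> _hub;

    public InfoNotifier(IHubContext<InfoHub> hub) => _hub = hub;

    public Task PublishAsync(string infoKey, string payload) =>
        _hub.Clients.Group(infoKey).SendAsync("InfoArrived", payload);
}
```

With this shape there is no long-running server-side wait at all: client B simply reacts to the InfoArrived message whenever it shows up.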
I'm new to web development and I'm developing a web app in MVC 5 / C# where I want to access data on the SQL server from multiple devices (laptop, PC, iPad, etc.).
I've set up a small test website and SQL database on my Azure account and have been able to run CRUD operations from the website from a single device.
The problem I'm facing is when I try to access the data from another device: I constantly need to manually add new IP addresses to the SQL firewall. To make matters worse, my ISP has me on a dynamic IP.
Eventually I'm planning to provide a subscription service where clients can login via the website and access their data. Is there any way to allow multiple connections to an Azure SQL database without having to manually update the firewall?
Would setting up an Azure VPN and a VM running SQL Server be the way to go?
Regards,
Marc
It might be worth taking a look at Windows Azure Mobile Services. Mobile Services automatically provides a REST interface over your Windows Azure SQL Database. It could be a good option, especially if you're looking to access the database from multiple devices.
http://www.windowsazure.com/en-us/documentation/articles/mobile-services-windows-store-dotnet-get-started-data/
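For a flavor of what that buys you (the service name, table, and application key below are hypothetical), each table is exposed at a /tables/{name} endpoint that any device can call over plain HTTPS, so no device ever needs a hole in the SQL firewall:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class MobileServicesDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The application key travels in a header; the database itself
            // is only reachable from the Mobile Services backend.
            client.DefaultRequestHeaders.Add("X-ZUMO-APPLICATION", "your-application-key");

            string json = await client.GetStringAsync(
                "https://yourservice.azure-mobile.net/tables/TodoItem");
            Console.WriteLine(json);   // rows come back as JSON
        }
    }
}
```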
In general, under NO circumstances should you ever make your database server directly accessible to the general public. There are far too many security risks associated with doing so: by exploiting vulnerabilities in the SQL capabilities, a hacker could quite easily take full control of the instance. That's one reason why you have to constantly update your firewall settings.
To solve your issue with the ISP reassigning IP addresses, I would ask the ISP for a static address. It will probably cost you on the order of $10 per month, but it's worth the saved headache in my opinion. I am fortunate to have Comcast, which does not reassign IP addresses randomly, but I know several other ISPs that do.
The generally-accepted way to make your data available is through a REST-based web service.
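As a minimal sketch of that shape (controller, route, and model are all hypothetical), an ASP.NET Web API 2 controller keeps the database behind your web tier, so only the website itself needs access through the SQL firewall:

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class Order
{
    public int Id { get; set; }
    public string Item { get; set; }
}

// Devices call GET /api/orders/{id} over HTTPS; only the web app
// ever opens a connection to the Azure SQL database.
public class OrdersController : ApiController
{
    // Stand-in for your real data-access layer (EF, Dapper, ...).
    private static readonly Dictionary<int, Order> Store =
        new Dictionary<int, Order> { { 1, new Order { Id = 1, Item = "Widget" } } };

    public IHttpActionResult Get(int id)
    {
        Order order;
        if (!Store.TryGetValue(id, out order))
            return NotFound();
        return Ok(order);
    }
}
```

Authentication for your planned subscription service then happens at this web layer rather than at the database.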
I realise this is a much-discussed topic, but all the suggestions I see seem to involve direct access to the SQL Server, which in our case is not ideal.
Our scenario is a remote SQL Server database with (say) 100 tables. We are developing a lightweight desktop application which will use an SQL Server Compact database and sync a subset of (say) 20 tables with the remote server periodically.
I would like to have control over how the replication occurs, because speed is a major issue for us: the remote server is thousands of miles away.
Also I don't need to sync all the records in each table - only those relevant to each user.
I quite like the SQL merge facility; however, it requires that the client be connected to the remote SQL Server. This is currently not possible, so we were thinking of interfacing with the remote server through a web service accessed from our application, or some other method.
Any suggestions welcome.
UPDATE
Just to clarify: the internet connection will be intermittent, and that's the main reason why we need to sync the two databases.
The fact that you are using a compact DB for the client puts some pretty heavy limitations on your available options in this scenario.
Given those limitations and your performance requirements, you could consider implementing a service-based HTTP endpoint to keep the desired tables in sync. Doing the sync asynchronously would boost performance significantly, but that may not be viable depending on your architecture.
Something else to consider is using WebSockets rather than standard HTTP connections for a web service like the one mentioned above. That way you could keep the clients synced in real time, since WebSockets are true full-duplex, real-time connections. The major catch is that you would either have to ensure all clients are WebSocket-capable or provide a fallback that emulates a WebSocket connection for clients that aren't.
Not sure if this helps any.
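One common shape for such an endpoint (all names here are hypothetical) is to track changes with a rowversion column and let each client ask only for its own rows that it hasn't seen yet, which keeps payloads small over a slow link:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Web.Http;

public class CustomerChange
{
    public int Id { get; set; }
    public string Name { get; set; }
    public long Version { get; set; }   // projected from a SQL Server rowversion column
}

public class SyncController : ApiController
{
    // GET /api/sync/customers?userId=42&since=123456
    [HttpGet]
    public IEnumerable<CustomerChange> Customers(int userId, long since)
    {
        var rows = new List<CustomerChange>();
        using (var conn = new SqlConnection(
            "Server=remote;Database=AppDb;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            @"SELECT Id, Name, CAST(RowVer AS BIGINT) AS Version
                FROM dbo.Customers
               WHERE OwnerUserId = @userId                -- only this user's rows
                 AND CAST(RowVer AS BIGINT) > @since      -- only unseen changes", conn))
        {
            cmd.Parameters.AddWithValue("@userId", userId);
            cmd.Parameters.AddWithValue("@since", since);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    rows.Add(new CustomerChange
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1),
                        Version = reader.GetInt64(2)
                    });
        }
        return rows;   // the client applies these to SQL CE and remembers max(Version)
    }
}
```

Deletes need extra handling (for example a tombstone table), and the client has to persist the highest Version it has applied per table.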
You have the choice of either the Sync Framework (requires more coding and has some other limitations) or Merge Replication; both work over HTTP/HTTPS. See this blog post for a comparison: http://blogs.msdn.com/b/sqlservercompact/archive/2009/11/09/merge-replication-vs-sync-services-for-compact.aspx
Can you not use the MS Sync framework?
Pretty much designed for your scenario AFAIK.
A quick Google search turned up this tutorial:
http://social.technet.microsoft.com/wiki/contents/articles/2190.tutorial-synchronizing-sql-server-and-sql-server-compact-sync-framework.aspx
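In outline (the scope name and connection strings are hypothetical, and the scope has to be provisioned first, as the tutorial shows), a Sync Framework session between the SQL CE client and the remote server looks something like this:

```csharp
using System;
using System.Data.SqlClient;
using System.Data.SqlServerCe;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;
using Microsoft.Synchronization.Data.SqlServerCe;

class SyncDemo
{
    static void Main()
    {
        using (var localConn = new SqlCeConnection("Data Source=client.sdf"))
        using (var remoteConn = new SqlConnection(
            "Server=remote;Database=AppDb;Integrated Security=true"))
        {
            // "CustomersScope" is a hypothetical scope that defines which of the
            // ~20 tables (and, via filters, which rows) take part in the sync.
            var orchestrator = new SyncOrchestrator
            {
                LocalProvider  = new SqlCeSyncProvider("CustomersScope", localConn),
                RemoteProvider = new SqlSyncProvider("CustomersScope", remoteConn),
                Direction      = SyncDirectionOrder.DownloadAndUpload
            };

            SyncOperationStatistics stats = orchestrator.Synchronize();
            Console.WriteLine("Downloaded {0}, uploaded {1} changes.",
                stats.DownloadChangesTotal, stats.UploadChangesTotal);
        }
    }
}
```

Since you can't connect to the remote SQL Server directly, note that the Sync Framework also supports an n-tier arrangement where the remote provider sits behind a WCF service instead of a direct SqlConnection.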
We have a number of different old-school client-server C# WinForms apps that are essentially front ends for the database. Then there is a C# server-side Windows service that waits for the client apps to submit orders and then processes them.
The way the server-side service finds out whether there is work to do is by polling the database. Over the years, the logic for polling for waiting orders has gotten a lot more complicated due to the myriad of business rules, so the polling stored procedure itself uses quite a bit of SQL Server resources even if there is nothing to do. Add to this the requirement that orders be processed the moment they are submitted, and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for the latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service in the Windows service and connect to it from each client using a duplex channel. Each time there is a change, the service will notify the appropriate clients, or even all of them. This is more complex to code but also much more flexible; a minimal sketch appears below. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them that there are changes they need to pull. You can probably put enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes.
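As a minimal sketch of the duplex WCF option above (all contract and service names are hypothetical), the service keeps one callback channel per subscribed client and pushes a notification when an order is processed:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

public interface IOrderCallback
{
    [OperationContract(IsOneWay = true)]
    void OrderProcessed(int orderId);
}

[ServiceContract(CallbackContract = typeof(IOrderCallback))]
public interface IOrderEvents
{
    [OperationContract]
    void Subscribe();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderEventsService : IOrderEvents
{
    private readonly List<IOrderCallback> _subscribers = new List<IOrderCallback>();

    public void Subscribe()
    {
        lock (_subscribers)
            _subscribers.Add(
                OperationContext.Current.GetCallbackChannel<IOrderCallback>());
    }

    // Called by the order-processing code instead of letting clients poll.
    public void NotifyProcessed(int orderId)
    {
        IOrderCallback[] targets;
        lock (_subscribers)
            targets = _subscribers.ToArray();
        foreach (var cb in targets)
            cb.OrderProcessed(orderId);   // dead channels should be caught and removed
    }
}
```

Duplex callbacks require a binding that supports them, such as NetTcpBinding or WSDualHttpBinding, and you would need to prune subscribers whose channels have faulted.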
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you end up building. Rather, I mean you launch the changes and orders get screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under this kind of pressure, presuming your statement about load going "through the roof" in the near term is accurate.
If your risk exposure is such that it has to be 100% functional on day one with no hiccups (which is normal when you are expecting a large increase in orders), then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (at the same patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server, then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer, and configure), it offers a single story for your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you completely eliminate the need to poll, because the system itself will launch your processing code (will 'activate') when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).
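For a flavor of the no-polling receive from C# (the queue and database names are hypothetical, and the MESSAGE TYPE / CONTRACT / QUEUE / SERVICE objects must be created up front in T-SQL), WAITFOR (RECEIVE ...) blocks inside SQL Server until a message actually arrives:

```csharp
using System;
using System.Data.SqlClient;

class BrokerReceiver
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Server=.;Database=Orders;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                @"WAITFOR (RECEIVE TOP (1)
                        conversation_handle, message_type_name, message_body
                   FROM dbo.OrderQueue), TIMEOUT 60000;", conn))
            {
                cmd.CommandTimeout = 0;   // let the WAITFOR TIMEOUT govern the wait
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("Received message of type {0}",
                            reader.GetString(1));
                    // No rows means the 60-second TIMEOUT elapsed with no message.
                }
            }
        }
    }
}
```

With internal activation you wouldn't even need this client-side receiver; SQL Server starts the configured stored procedure itself when messages arrive on the queue.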
I want to create an application in C# with client and server sides. It will work over a local network. The client side must check for updates on a remote SQL Server. Let's say we've set the update interval to 2 seconds. If I have 20 client-side applications, then they'll each query the remote SQL Server every 2 seconds, which will load the server quite a lot. Now I want to know: is there any way to reduce the server load, or is this the only way to check for updates?
From my point of view, there is no need to allow clients to connect to the DB server directly. There should be one more tier here which is the only thing that connects to the server and caches information about the updates. Your clients should connect to this additional tier and work with the cached info.
UPDATE
As far as I understand, the problem appears because all your clients ping your DB server every two seconds. The solution is to create a special module which alone has access to the DB server and asks it for updates, for example every two seconds. If an update is ready, it should fetch it from the DB and store it. This is what I meant by the additional tier.
Now, let's return to your clients. They should be able to communicate with this module and get information from it about whether an update is ready (this information is cached, so it is really fast to obtain, and you needn't ping the DB server on every client request). If an update is ready, fetch it to the client side and work with it there.
As for the communication between this additional tier and the clients: since you are working with .NET, I would suggest that you take a look at WCF, which, from my point of view, has become the standard approach to inter-process communication in .NET. There is a lot of information about it on the net; I will post some links below.
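To make the idea concrete (all names here are hypothetical), the contract between the clients and the caching tier can be as small as one cheap version check, so your 20 clients hit an in-memory value every 2 seconds instead of the database:

```csharp
using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface IUpdateCache
{
    // Cheap call backed by a cached value, not by a query to SQL Server.
    [OperationContract]
    long GetLatestVersion();

    [OperationContract]
    byte[] GetUpdate();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class UpdateCacheService : IUpdateCache
{
    private long _latestVersion;                    // refreshed by the single DB poller
    private volatile byte[] _latestPayload = new byte[0];

    public long GetLatestVersion() => Interlocked.Read(ref _latestVersion);

    public byte[] GetUpdate() => _latestPayload;

    // The one component allowed to touch the DB calls this when it finds changes.
    public void Refresh(long version, byte[] payload)
    {
        _latestPayload = payload;
        Interlocked.Exchange(ref _latestVersion, version);
    }
}
```

Each client remembers the last version it saw, calls GetLatestVersion, and only calls GetUpdate when the number has moved; the DB server sees one poller instead of twenty.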
Here is my favorite WCF book:
Programming WCF Services
MSDN entry:
Windows Communication Foundation