I want to create a simple auto-update system in my program. It works fine, but I want it to proceed only when the user is connected to the internet.
I tried many approaches, and all of them worked, but when I'm disconnected from the internet the application takes around 10 seconds to load, which is really slow. My program checks for an update on load, so the connection test also runs on load, which I think is the problem, because if I run the same test inside a button click it completes quickly, even when disconnected from the internet.
If you are curious, I tried every connection test I could find, including System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable().
Your problem is that checking for a connection has a timeout. When there's a connection, it finds that out really fast (usually) and you don't notice the delay. When you don't have a connection, it has to do more checks and wait for responses. I don't see any way to adjust the timeout, and even if you could, you'd risk it failing to detect connections that are actually available.
You should run the check on a separate thread so that your GUI loading isn't disrupted.
Rather than checking at startup, check on a background thread while the application is running and update then. Any connection check can be slow even when the internet is up, if there are DNS issues or just general slowness.
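For illustration, a minimal sketch of such a background check (the probe URL and the 3-second timeout are assumptions, not something from the question):

using System;
using System.Net;
using System.Threading.Tasks;

static void StartUpdateCheckInBackground()
{
    Task.Run(() =>
    {
        // Quick local check first: returns immediately, no timeout involved.
        if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
            return;

        try
        {
            // Real reachability probe with a short, explicit timeout, so a
            // dead connection fails in ~3 s instead of ~10 s.
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/version.txt");
            request.Timeout = 3000;
            using (request.GetResponse())
            {
                // Connected: safe to run the update check here (marshal back
                // to the UI thread before touching any controls).
            }
        }
        catch (WebException)
        {
            // Offline or unreachable: skip the update silently.
        }
    });
}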
I want to rename a database file, and even though I wrap every connection in a using block, every time I have to call:
FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();
The problem is that this method doesn't block the thread, and I don't know how to check that all connections have been cleared, because if I read the value of:
FirebirdSql.Data.FirebirdClient.FbConnection.ConnectionPoolsCount
it is zero immediately after the method returns, yet I am still not able to rename the database file. If I wait for some time after the method (I tried 1 s), the file is not locked and I can rename it. The problem is that the right delay will surely differ between machines.
AFAIK, the only other way to check whether the file is locked is to attempt the rename in a loop with some timeout, but then I cannot be sure whether the lock comes from connections in my own application or from somewhere else.
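For reference, the retry loop I mean looks roughly like this (the attempt count and delay are arbitrary):

// Try the rename repeatedly, backing off briefly while the file is locked.
static bool TryRenameDatabase(string oldPath, string newPath)
{
    FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();

    for (int attempt = 0; attempt < 20; attempt++)
    {
        try
        {
            System.IO.File.Move(oldPath, newPath);
            return true; // the file was no longer locked
        }
        catch (System.IO.IOException)
        {
            // Still locked - by our own pool or by someone else, we can't tell.
            System.Threading.Thread.Sleep(100);
        }
    }
    return false;
}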
So is there a better way to wait until this method has actually cleared the connections?
Making it an answer for the sake of formatting lists.
@Artholl, you cannot safely rely upon your own disconnection, for a bunch of reasons.
1. There may be other programs connected, not only your running program. And unless you connect as SYSDBA, the database creator, or with the RDB$ADMIN role, you cannot query whether there are other connections at all. You can, however, query MON$ATTACHMENTS for the connections made with the same user as your CURRENT_CONNECTION, which might help you check the state of your application's own pool (see the sketch after this list) - though there is little practical value in it.
2. In Firebird 3, in SuperServer mode, there is the LINGER parameter: the server keeps the database open for some time after the last client disconnects, so that if some new client decides to connect again, the page cache for the DB file is already in place - much like on a moderately loaded WWW server.
3. Even in Firebird 2, every open database has caches, and how large they are is installation-specific (firebird.conf) and database-specific (gfix/gstat). After the engine sees that all clients have disconnected and decides the database is to be closed, it starts flushing the caches and asks the OS to flush its caches too (there is no general, hardware-independent way to make RAID controllers and the disks themselves flush their caches, or Firebird would try to do that as well). By default the Firebird caches are small, and handing them down to the hardware layer should be fast, but it is still not instant.
4. Even if you checked that all other clients had disconnected, then disconnected yourself, and then correctly guessed how long to wait for linger and caches, you would still not be safe: you are subject to race conditions. At the very moment you start doing something that requires exclusive ownership of the DB, some new client may concurrently open a fresh connection.
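A sketch of the MON$ATTACHMENTS query from point 1, assuming an already-open FbConnection called connection (the monitoring tables exist from Firebird 2.1 on):

// Lists other attachments made by the same user as the current connection -
// i.e. pooled or leaked connections from your own application.
using (var cmd = connection.CreateCommand())
{
    cmd.CommandText =
        @"SELECT MON$ATTACHMENT_ID, MON$REMOTE_PROCESS
          FROM MON$ATTACHMENTS
          WHERE MON$USER = CURRENT_USER
            AND MON$ATTACHMENT_ID <> CURRENT_CONNECTION";
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Each row is another attachment by the same user.
        }
    }
}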
So the correct approach is not merely to prove there is no database connection right now, but to ensure there cannot be any new connection in the future, until you re-enable them.
So, as Mark said above, you have to use the shutdown methods to bring the database into a no-connections-allowed state, and after you are done with the file renaming and other manipulations, switch it back to normal mode:
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
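For illustration, a minimal C# sketch of that shutdown/online cycle, shelling out to gfix (the paths, credentials and the 10-second grace period are placeholders; check the linked manual for the exact switches):

using System.Diagnostics;

static void RunGfix(string args)
{
    using (var p = Process.Start(new ProcessStartInfo("gfix", args) { UseShellExecute = false }))
    {
        p.WaitForExit();
    }
}

static void RenameWithShutdown()
{
    // Full shutdown: kicks existing connections after 10 s and denies new ones.
    RunGfix(@"-shut full -force 10 C:\data\mydb.fdb -user SYSDBA -password masterkey");
    System.IO.File.Move(@"C:\data\mydb.fdb", @"C:\data\renamed.fdb");
    // Bring the renamed database back to a normal, connectable state.
    RunGfix(@"-online C:\data\renamed.fdb -user SYSDBA -password masterkey");
}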
If I were responsible for maintaining the Firebird provider, I wouldn't want users to rely on such functionality.
Other applications could have the file open (you're only in control of connection pools in the current AppDomain), and the server might be running some kind of maintenance on the database.
So even if you could wait for the pools to be cleared, I'd argue that if you really, really have to mess with these files, a more robust solution is to stop the Firebird service instead (and wait for it to have fully stopped).
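A sketch of that approach using ServiceController (the service name is an assumption; it differs between Firebird versions and installs):

using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

using (var sc = new ServiceController("FirebirdServerDefaultInstance"))
{
    sc.Stop();
    // Blocks until the service has fully stopped, or throws on timeout.
    sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));

    // ... rename the database file here ...

    sc.Start();
    sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
}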
I've got an API written in C# (WebForms) that accepts JSON POST data, backed by a SQL Server 2008 database, all running on an AWS EC2 VM. My problem is that the "first" use of this API is rather slow to respond.
What I mean by "first" is that if I were to wait for an hour or so, then post some data, that would be the first. Subsequent posts would process rather quickly in comparison, and I would need to wait another hour or so before experiencing the slow "first" transaction again.
Since only the initial post is slow, it makes me wonder if something is "spinning down" after being idle for some time, and then spinning up again upon first use, adding the extra time.
Things I have tried -
Run the program through a performance profiler - this didn't really help. As far as I can see, the program itself doesn't have any obvious parts that run slowly or inefficiently.
Change the configuration to persist at least one connection to the database at all times. Again, no real change. I did this by adding "Min Pool Size=1;Max Pool Size=100" to my connection string.
Change the configuration to use named pipes instead of TCP. Once again, no real change. I did this by adding "np:" before the server specified in my connection string, e.g. server=np:MyServer;database=MyDatabase;
Is there anything else I can do to diagnose the problem? What else should I be looking for in this scenario?
Chances are your app pool is shutting down after a designated period of non-use. The first call after the shutdown forces everything to be loaded back into memory, which explains the lag.
You could play with these settings to see if you get the desired effect: http://technet.microsoft.com/en-us/library/cc771956%28v=ws.10%29.aspx - or set up a Task Scheduler job that makes at least one call every 10 minutes or so with a simulated post; a simple PowerShell script could handle that for you and will keep everything 'primed' for the next use.
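The PowerShell route is simple; for illustration, an equivalent C# console app you could run from Task Scheduler might look like this (the URL and payload are placeholders):

using System;
using System.IO;
using System.Net;
using System.Text;

class KeepAlive
{
    static void Main()
    {
        // A simulated post that keeps the app pool and connection pool warm.
        var request = (HttpWebRequest)WebRequest.Create("http://myserver/api/ping");
        request.Method = "POST";
        request.ContentType = "application/json";

        byte[] body = Encoding.UTF8.GetBytes("{}");
        request.ContentLength = body.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Keep-alive returned {0}", response.StatusCode);
    }
}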
A "CreateWindowEx failed" exception is thrown by my server, which uses the Overbyte ICS DLL in a .NET C# WinForms application.
I have a server that handles a large number of clients throughout the day, but when the total connection count (i.e. connections and disconnections altogether) reaches 10,000, the above error appears, the server stops accepting client connections, and the machine hangs.
I agree with Roger, but let's confirm it first. When this error occurs, run Spy++ from Microsoft Visual Studio\Tools in the Start menu and look through the window tree. Expand the branches and look for duplicates of some windows. Surely there will be many of them, but what you are interested in is hundreds or thousands of copies. If you hit that, then it's what Roger said... and there's almost no solution other than periodically restarting the connection-server process (or the whole machine, just in case) to be sure it doesn't hang (of course, a server restart will irritate the users almost as much), or fixing/patching/reimplementing the connection-server process to be more resource-friendly.
Note that while opening a hidden window per connection is a very wasteful approach, it still should not hang the machine; it should simply drop the connections that it cannot handle. Here, it seems no limits are implemented at all, which is a bug.
Edit: on pre-NT systems (i.e. Win9x) the limit is hardcoded. On NT-class systems you can try to tweak the pool:
http://weblogs.asp.net/israelio/archive/2007/02/07/max-num-of-open-windows-under-xp-2003-vista-resolved.aspx
But still, I'd consider that a last resort, as the problem will return when the number of connections rises again. First, try to ping the server's developers to fix it permanently.
You diagnosed it well. Yes, a CreateWindowEx() failure and 10,000 belong together. 10,000 is the default user32 object quota for a process. In other words, a single process isn't allowed to create more than 10,000 windows. This is a counter-measure against apps that leak window handles, a very common bug. The total number of windows that can be created in a session is a limited resource, having one process consume them all would cause outright failure, you couldn't shut down Windows anymore.
Clearly it is not a leak in your case. You can find temporary relief by changing a registry setting, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\USERProcessHandleQuota. Reboot to make it effective.
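If you want to see what the machine is currently configured with before changing anything, a small sketch that just reads the value:

using System;
using Microsoft.Win32;

// Reads the current quota; changing it requires admin rights and a reboot.
object quota = Registry.GetValue(
    @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows",
    "USERProcessHandleQuota",
    null);
Console.WriteLine("Current quota: {0}", quota ?? "(not set, default is 10000)");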
Increasing from 10,000 to the maximum of 18,000 should be okayish if the machine doesn't otherwise run processes that require a lot of windows. Something you can see with Taskmgr.exe, Processes tab. Choose View + Select Columns and tick USER objects. Also tick GDI objects and Handles while you are at it, other resources that have a quota.
Long term, this behavior does not scale well. You'll need to find the code that creates a window handle for every web request and fix it.
This is a pretty vague question and getting it answered seems like a long shot, but I don't know what else to do.
Ever since I made my website live, every now and then it will just freeze. You click on a link and the browser will just sit there, looking like it's trying to connect. The freezing can last up to 2 minutes or so, then everything is fine again. Then a little while later it will do the same thing.
I track all the exceptions that occur on my website in a log file.
I get these quite a bit:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
And the stack trace shows it leading to some method that's connecting to the database.
I'm assuming the freezing has to do with this timeout problem. My website is hosted on a shared server, and my database is on some other server with about a billion other databases as well.
But even being on a shared server, this freezing problem happens all the time, and it's extremely annoying. I can see this being a pretty catastrophic problem, considering my site is ecommerce based and people are doing transactions on it. The last thing I want is the site freezing when a user hits the 'Submit payment' button, leading them to hit it over and over again because the site froze, and then their credit card gets charged 10 extra times.
Does anyone have any suggestions on the best way to handle this?
I am guessing that it has to do with the database connections. Check that they are being released properly; if not, you will eventually use them all up.
Also check to see if your database has connection pooling configured.
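For illustration, the usual pattern that guarantees connections get released (the query and connection string are placeholders):

using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id, Name FROM Products", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each row
        }
    } // reader disposed here
} // connection returned to the pool here, even if an exception was thrown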
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
That's a SQL command timeout exception - it can be somewhat common if your database is under load. Make sure you're disposing of your SqlConnections and SqlCommands - though failing to do that would usually result in a pool timeout exception (unable to retrieve a connection from the connection pool) instead.
Odds are, someone is running queries that are badly tuned or otherwise sucking resources. It may be your site, but since you're on a shared db server, it could just as easily be someone else's. It could also be blocking, or open transactions - since those would be on your database, that'd be a coding issue. You'll probably need to get your hosting provider involved to track it down or move to a dedicated db server.
You can decrease the CommandTimeout of your SqlCommands - I know that sounds somewhat counter-intuitive, but I often find that it's better to fail early than to keep trying for 60 seconds while throwing additional load on the server. If your 0.5-second query isn't done in 5 seconds, odds are it won't be done in 60 either.
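A sketch of that, assuming an open SqlConnection named connection; the 5-second value is as arbitrary as in the reasoning above, and the query is a placeholder:

using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
{
    command.CommandTimeout = 5;             // seconds; fail early instead of piling on load
    object count = command.ExecuteScalar(); // throws a SqlException on timeout
}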
Alternatively, if you're the patient type, you can increase the CommandTimeout - but there's also an IIS timeout of 90 seconds that you'll need to modify if you bump it up too much.
The timeout errors are definitely the source of the freezing pages. When one happens, the page waits something like a minute for the database connection before returning the error message. As the web server only handles one page at a time from each user, the entire site will seem frozen to that user until the timeout error arrives. Even if this only happens to a few users once in a while, it will seem quite severe to them, as they can't access the site at all for a minute or so.
How severe the problem really is depends on how many errors you get. From your description, it sounds like you get a bit too many to be normal.
Make sure that all your data readers, command objects and connection objects get disposed properly, so that you don't leave connections open.
Also look for deadlock errors in the log, as they can cause timeouts. If you have queries that lock each other, you may be able to improve them by changing the order in which they use the tables.
Check the SQL Server logs, especially for deadlocks.
If you have multiple connections open, one might be waiting on a row that is locked by the other.
We have a very strange problem: one of our applications continually queries a server using .NET remoting, and every 100 seconds it stops querying for a short duration and then resumes. The problem is on the client and not on the server, because the application actually queries several servers at the same time and stops receiving data from all of them at the same time.
100 seconds is a giveaway number, as it's the default timeout for a WebRequest in .NET.
I've seen in the past that the PSI (Project Server Interface within Microsoft Project) didn't override the timeout, so the default of 100 seconds was applied and would terminate anything talking to it for longer than that.
Do you have access to all of the code, and are you sure you have set timeouts where applicable, so that no defaults are being applied unbeknownst to you?
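If any of the calls go through WebRequest (directly, or under the covers of a wrapper you control), setting the timeout explicitly rules the default out; a sketch with placeholder values:

using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://yourserver/endpoint");
request.Timeout = 300000;          // 5 minutes, in milliseconds; the default is 100,000
request.ReadWriteTimeout = 300000; // also covers reading the response body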
I've never seen that behavior before and unfortunately it's a vague enough scenario I think you're going to have a hard time finding someone on this board who's encountered the problem. It's likely specific to your application.
I think there are a few investigations you can do to help you narrow down the problem.
Determine whether it's the client or the server that is actually stalling. If you have trouble determining this, try installing a packet capture tool and monitor the traffic to see who sent the last data. You likely won't be able to read the binary payload, but at least you will get a sense of which side is lagging behind.
Once you figure out whether it's the client or the server causing the lag, attempt to debug into the application and set a breakpoint where the hang occurs. That should give you enough detail to track down the problem, or at least to ask a more focused question on SO.
How is the application coded to implement the continuous querying? Is it a continuous loop, a loop with a Thread.Sleep, or a timer?
It would first be useful to determine whether your system is executing this "trigger" in your code when you expect it to, or whether it is, and the remoting server is simply not responding. So...
If you cannot reproduce this issue in a development environment where you can debug it, then, if you can, I suggest you add code to this loop that writes to a log file (or some other persistence mechanism) each time it should be examining whatever conditions it uses to decide whether to query the remoting server, and then review those logs when the problem recurs.
If you can do the same in your remoting server, recording when the server receives a remoting request, that would help as well.
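A minimal sketch of that logging (the file path and format are placeholders):

using System;

// Append one timestamped line per polling decision; cheap enough to leave
// running in production until the problem recurs.
static void LogPoll(bool willQuery)
{
    System.IO.File.AppendAllText(@"C:\logs\remoting-poll.log",
        string.Format("{0:o} willQuery={1}{2}", DateTime.UtcNow, willQuery, Environment.NewLine));
}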
And oh yes, just a thought (I don't know how you have coded this), but if you are using a separate thread in the client to issue the remoting request, and the channel is being registered and unregistered on that separate thread, make sure you deconflict the requests, because you can't register the same port twice on the same machine at the same time (although, if this were the issue, it should probably have raised an exception in your client).