Hope you're all doing well.
OK, I am using the MySQL.Data client library to access and use a MySQL database. I had been using it happily for some time on quite a few projects, but I'm suddenly facing a new issue that has put my current project on hold. :(
The current project makes a lot of database queries, and I am facing the following exception:
Can't create more than max_prepared_stmt_count statements (current value: 16382)
I close and dispose the db engine/connection every time I am done with it, but I am thoroughly confused as to why I am still getting this error.
Here is some sample code, just to give you an idea (unnecessary parts trimmed out):
//this loop calls an API with pagination and gets the API response
while(ContinueSalesOrderPage(apiClient, ref pageNum, days, out string response, window) == true)
{
//this handles the API data for the current page; it's normally 500 entries per page, and it throws the error on the 4th page
KeyValueTag error = HandleSalesOrderPageData(response, pageNum, out int numOrders, window);
}
private KeyValueTag HandleSalesOrderPageData(string response, int pageNum, out int numOrders, WaitWindow window)
{
//json is the parsed API response (the parsing code is trimmed out)
numOrders = json.ArrayOf("List").Size;
//init db
DatabaseWriter dbEngine = new DatabaseWriter()
{
Host = dbHost,
Name = dbName,
User = dbUser,
Password = dbPass,
};
//connecting to database
bool pass = dbEngine.Connect();
//loop through all the entries for the page; generally it's 500 entries
for(int orderLoop = 0; orderLoop < numOrders; orderLoop++)
{
//this actually handles the queries; per iteration there can be 3 to 10+ insert/update queries using prepared statements
KeyValueTag error = InsertOrUpdateSalesOrder(dbEngine, item, config, pageNum, orderLoop, numOrders, window);
}
//here, as you can see, I disconnect from the db engine, and the following method also closes the db connection beforehand
dbEngine.Disconnect();
}
//code from the DatabaseWriter class; as you can see, this method closes and disposes the database connection properly
public void Disconnect()
{
_CMD.Dispose();
_engine.Close();
_engine.Dispose();
}
So, as you can see, I close/dispose the database connection for every page processed, but it still shows me that error on the 4th page. FYI, the 4th page's data is not the problem; I checked that. If I skip the other pages and process only the 4th page, it processes successfully.
After some more digging on Google, I found that prepared statements are stored on the database server and need to be closed/deallocated, but I can't find any way to do that using the MySQL.Data client. :(
The following page says:
https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html
A prepared statement is specific to the session in which it was created. If you terminate a session without deallocating a previously prepared statement, the server deallocates it automatically.
But that doesn't seem to hold here, as I am facing the error even though I close the connection on every loop.
So I am at a dead end and looking for some help here.
Thanks in advance,
best regards
From the official docs, the role of max_prepared_stmt_count is
This variable limits the total number of prepared statements in the server.
Therefore, you need to increase the value of that variable in your MySQL server's configuration to raise the maximum number of prepared statements allowed:
Open the my.cnf file.
Under the mysqld section there is a variable max_prepared_stmt_count. Edit the value accordingly (remember the upper limit for this value is 1048576).
Save and close the file, then restart the MySQL service for the change to take effect.
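For reference, a rough sketch of what that part of my.cnf might look like (the exact value and the rest of your file will differ):

[mysqld]
max_prepared_stmt_count = 1048576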
You're probably running into bug 77421 in MySql.Data: by default, it doesn't reset connections.
This means that temporary tables, user-declared variables, and prepared statements are never cleared on the server.
You can fix this by adding Connection Reset = True; to your connection string.
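For example (a sketch; the server, database, and credential values are placeholders):

using MySql.Data.MySqlClient;

// "Connection Reset=True" makes the pool reset the session whenever a pooled
// connection is reused, which clears server-side prepared statements.
string connectionString =
    "Server=myhost;Database=mydb;Uid=myuser;Pwd=mypass;Connection Reset=True;";

using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();
    // ... run your prepared statements as before ...
}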
Another fix would be to switch to MySqlConnector, an alternative ADO.NET provider for MySQL that fixes this and other bugs. (Disclaimer: I'm the lead author.)
I have an Azure Event Hub with readings from my smart electricity meter. I am trying to use an Azure Function to write the meter readings to an Azure SQL DB. I have created a target table in the Azure SQL DB and a stored procedure to parse a JSON document and store the contents in the table. I have successfully tested the stored procedure.
When I call it from my Azure Function, however, I am getting an error: The type initializer for 'System.Data.SqlClient.TdsParser' threw an exception. For testing purposes, I have tried to execute a simple SQL SELECT statement from my Azure Function, but that gives the same error. I am lost at the moment, as I have tried many options without any luck. Here is the Azure Function code:
#r "Microsoft.Azure.EventHubs"
using System;
using System.Text;
using System.Data;
using Microsoft.Azure.EventHubs;
using System.Data.SqlClient;
using System.Configuration;
using Dapper;
public static async Task Run(string events, ILogger log)
{
var exceptions = new List<Exception>();
try
{
if(String.IsNullOrWhiteSpace(events))
return;
try{
string ConnString = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_azure-db-connection-meterreadevents", EnvironmentVariableTarget.Process);
using(SqlConnection conn = new SqlConnection(ConnString))
{
conn.Execute("dbo.ImportEvents", new { Events = events }, commandType: CommandType.StoredProcedure);
}
} catch (Exception ex) {
log.LogInformation($"C# Event Hub trigger function exception: {ex.Message}");
}
}
catch (Exception e)
{
// We need to keep processing the rest of the batch - capture this exception and continue.
// Also, consider capturing details of the message that failed to process so it can be processed again later.
exceptions.Add(e);
}
// Once processing of the batch is complete if any messages in the batch failed process throw an exception so that there is a record of the failure.
if (exceptions.Count > 1)
throw new AggregateException(exceptions);
if (exceptions.Count == 1)
throw exceptions.Single();
}
The events coming in are in JSON form, as follows:
{
"current_consumption":450,
"back_low":0.004,
"current_back":0,
"total_high":13466.338,
"gas":8063.749,
"current_rate":"001",
"total_low":12074.859,
"back_high":0.011,
"timestamp":"2020-02-29 22:21:14.087210"
}
The stored procedure is as follows:
CREATE PROCEDURE [dbo].[ImportEvents]
@Events NVARCHAR(MAX)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
-- Insert statements for procedure here
INSERT INTO dbo.MeterReadEvents
SELECT * FROM OPENJSON(@Events) WITH (timestamp datetime2, current_consumption int, current_rate nchar(3), current_back int, total_low numeric(8, 3), back_high numeric(8, 3), total_high numeric(8, 3), gas numeric(7, 3), back_low numeric(8, 3))
END
I have added a connection string of type SQL Azure and replaced {your password} with the actual password in the string. Any thoughts on how to fix this issue, or on how to get more logging, as the error is very general?
I managed to fix this exception by re-installing the Microsoft.Data.SqlClient.SNI NuGet package, then cleaning and rebuilding the project.
I managed to fix the issue by changing the Runtime version to ~2 in the Function App Settings.
Does this mean this is some bug in runtime version ~3 or should there be another way of fixing it in runtime version ~3?
I might be late to the party, but in my case the cause of the error was the target runtime when publishing. I developed on a Windows machine but was transferring the files to Linux; the solution was to change the target runtime to the correct one. Initially it was win-x64, merely because I started off by deploying locally.
Try connecting to a local SQL Server, use SQL Profiler, and check what you are sending and what exactly SQL Server is trying to do with the command being executed.
It's very hard to replicate your code, because I obviously do not have your Azure SQL database. :)
So I would suggest trying to execute each step of the stored procedure as direct queries.
See if that works, then try wrapping the statements into stored procedures called back-to-back, and get that to work.
Then combine the commands into a single command, and fiddle with it until you get it to work. ;)
Get the simplest possible query to execute against the Azure SQL database, so you are sure your connection is valid (like a simple SELECT on something).
Without more information, it is very difficult to assist you.
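For example, a minimal connectivity check, reusing the connection string setting and the logger from the question, might look like this (a sketch, not the full function):

// simplest possible round trip to the Azure SQL database
string connString = Environment.GetEnvironmentVariable(
    "SQLAZURECONNSTR_azure-db-connection-meterreadevents", EnvironmentVariableTarget.Process);

using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
{
    conn.Open();
    object result = cmd.ExecuteScalar();
    log.LogInformation($"Connectivity check returned: {result}");
}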
Pretty silly, but I got this after installing the EntityFrameworkCore NuGet package but not the EntityFrameworkCore.SqlServer package. I had the SqlServer provider for Entity Framework 6 installed instead.
I had the same error with a VSTO application that was installed with a double-click in File Explorer. Windows did not copy all of the files to the automatically chosen location somewhere under ProgramData, so the application was simply not complete!
The solution was to register the VSTO application manually under HKEY_CURRENT_USER and point the "Manifest" entry to the complete directory with all the files (like Microsoft.Data.SqlClient.dll, Microsoft.Data.SqlClient.SNI.x64.dll, etc.).
Those installation directories chosen automatically by Windows can give unexpected behaviour. :(
Some web service methods ran fine and others would fail with this same error. I ran a failing method in the browser on the box the web service was being served from and got a lengthy, helpful error message. One item was an InnerException that said:
<ExceptionMessage>Failed to load C:\sites\TXStockChecker.xxxxxx.com\bin\x64\SNI.dll</ExceptionMessage>
I noticed that the file was right where it was expected in my development environment, so I copied it to the matching directory on the production web server, and now all the methods run as expected.
I do a query on a path, then add new data on the same path, then read again with the same query, and the new data is not in the result. I can see the new data in my Firebase console, and if I restart my app, it shows up. It's as if I'm reading from cached data. What is wrong?
public static void GetScores(string readDbPath)
{
FirebaseDatabase.DefaultInstance.GetReference(readDbPath).OrderByChild("score")
.LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT)
.GetValueAsync().ContinueWith(task =>
{
if (task.IsFaulted)
{
// Handle the error...
Debug.LogError("FirebaseDatabase task.IsFaulted" + task.Exception.ToString());
}
else if (task.IsCompleted)
{
DataSnapshot snapshot = task.Result;
// Do something with snapshot...
List<Score> currentScoreList = new List<Score>();
foreach (var rank in snapshot.Children)
{
var highscoreobject = rank.Value as Dictionary<string, System.Object>;
string userID = highscoreobject["userID"].ToString();
int score = int.Parse(highscoreobject["score"].ToString());
currentScoreList.Add(new Score(score, userID));
}
OnStatsDataQueryReceived.Invoke(currentScoreList); // caught by the leaderboard
}
});
}
It's very likely that you're using Firebase's disk persistence, which doesn't work well with Get calls. For a longer explanation of why that is, see my answer here: Firebase Offline Capabilities and addListenerForSingleValueEvent
So you'll have to choose: either use disk persistence, or use Get calls. The alternative to calling Get would be to monitor the ValueChanged event. In this case your callback will be invoked immediately when you change the value in the database. For more on this, see the Firebase documentation on listening for events.
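A rough sketch of the listener approach, reusing the path and limit from the question (the exact event wiring is assumed from the Firebase Unity SDK):

public static void SubscribeToScores(string readDbPath)
{
    var query = FirebaseDatabase.DefaultInstance.GetReference(readDbPath)
        .OrderByChild("score")
        .LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT);

    // fires once with the current value and again whenever the data changes on the server
    query.ValueChanged += (object sender, ValueChangedEventArgs args) =>
    {
        if (args.DatabaseError != null)
        {
            Debug.LogError(args.DatabaseError.Message);
            return;
        }
        DataSnapshot snapshot = args.Snapshot;
        // ... build the score list from snapshot.Children, as in GetScores ...
    };
}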
This post was deleted on the grounds that it was just additional info on my question. In fact, it is the solution, which is to use GoOffline() / GoOnline().
Thanks Frank for the detailed answer.
I tried with
FirebaseDatabase.DefaultInstance.SetPersistenceEnabled(false)
but the problem stayed the same. Using listeners is not what I want, since every time a player sent a score, every player on the same path would receive refreshed data, so I'm worried about bandwidth costs and performance.
The best solution I've found so far is to call this before doing a get:
FirebaseDatabase.DefaultInstance.GoOnline();
and then, right after I get a response, call:
FirebaseDatabase.DefaultInstance.GoOffline();
So far there is no performance hit I can notice, and I get what I want: fresh data on each get, plus persistence when working offline and then going back online.
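Roughly, wrapped around the existing GetScores logic, that looks like this (a sketch based on the calls above):

public static void GetScoresFresh(string readDbPath)
{
    // temporarily reconnect so the get reads from the server instead of the disk cache
    FirebaseDatabase.DefaultInstance.GoOnline();

    FirebaseDatabase.DefaultInstance.GetReference(readDbPath).OrderByChild("score")
        .LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT)
        .GetValueAsync().ContinueWith(task =>
        {
            // go back offline once the response has arrived
            FirebaseDatabase.DefaultInstance.GoOffline();
            // ... handle task.Result exactly as in GetScores above ...
        });
}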
I'm trying to create contacts in a user's mailbox programmatically (using Redemption), based on values from a database.
RDOContactItem rci = (RDOContactItem)session.GetDefaultFolder(rdoDefaultFolders.olFolderContacts).Folders["Contacts Subfolder"].Items.Add("IPM.Contact");
...
rci.Save();
As soon as I reach a limit of 250, I get the error:
Error in IMsgStore::OpenEntry(Inbox or Root): MAPI_E_TOO_BIG
ulVersion: 0
Error: Your server administrator has limited the number of items you can open simultaneously. Try closing messages you have opened or removing attachments and images from unsent messages you are composing.
Component: Microsoft Exchange Information Store
I read Dmitry Streblechenko's comments ("This is an indication that you have too many open objects. Do you open each and every message in a folder?") at http://www.microsoft-questions.com/microsoft/Plaform-SDK-Mapi/32731171/mapietoobig.aspx and even tried his suggestion ("Do you release all Exchange objects as soon as you are done with them?") with:
if (rci != null) Marshal.ReleaseComObject(rci);
I even tried casting to IDisposable to be able to dispose it, but that didn't work.
I haven't found a way to close a contact item after it has been saved.
Increasing the number of items that can be opened simultaneously on the server side is not a happy option either.
How can I solve this?
You are using multiple dot notation (5 if I am counting correctly), and that causes the compiler to create implicit variables that you cannot explicitly release. Try the following. You can also try to call GC.Collect() every once in a while, but that would be a sledgehammer of a solution...
RDOFolder contacts = session.GetDefaultFolder(rdoDefaultFolders.olFolderContacts);
RDOFolders folders = contacts.Folders;
RDOFolder subfolder = folders["Contacts Subfolder"];
RDOItems items = subfolder.Items;
RDOMail msg = items.Add("IPM.Contact");
RDOContactItem rci = (RDOContactItem)msg;
...
rci.Save();
Marshal.ReleaseComObject(rci);
Marshal.ReleaseComObject(msg);
Marshal.ReleaseComObject(items);
Marshal.ReleaseComObject(subfolder);
Marshal.ReleaseComObject(folders);
Marshal.ReleaseComObject(contacts);
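Applied to the import loop itself, the idea is to keep the folder objects alive for the whole run but release each contact as soon as it is saved. A rough sketch (contactRows stands in for however you read your database rows; Marshal is System.Runtime.InteropServices.Marshal):

RDOFolder contacts = session.GetDefaultFolder(rdoDefaultFolders.olFolderContacts);
RDOFolders folders = contacts.Folders;
RDOFolder subfolder = folders["Contacts Subfolder"];
RDOItems items = subfolder.Items;
foreach (DataRow row in contactRows)
{
    RDOMail msg = items.Add("IPM.Contact");
    RDOContactItem rci = (RDOContactItem)msg;
    // ... copy the fields from row onto rci ...
    rci.Save();
    // release the per-item objects before the next iteration so the number of
    // open objects on the Exchange store stays flat
    Marshal.ReleaseComObject(rci);
    Marshal.ReleaseComObject(msg);
}
Marshal.ReleaseComObject(items);
Marshal.ReleaseComObject(subfolder);
Marshal.ReleaseComObject(folders);
Marshal.ReleaseComObject(contacts);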
I have a particular situation where my client needs to periodically import an MS Access database into his MySQL website database (so it's a remote database).
Because the hosting plan is shared hosting (not a VPS), the only way to do it is through PHP with SQL queries, because I don't have ODBC support on the host.
My current idea is this (obviously the client has a Microsoft Windows OS):
Create a small C# application that converts the MS Access database into a big SQL script written to a file (see the sketch after this list)
The application then uses FTP credentials to upload the file to a specified directory on the website
A PHP script then runs periodically (say every 30 minutes), checks whether the file exists, and imports it into the database
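For step 1, a rough sketch of what the export tool could look like (the Access file path, table, and column names are placeholders; it assumes the ACE OLE DB provider is installed on the client's machine):

using System.Data.OleDb;
using System.IO;

string accessConnString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\data\\client.accdb;";
using (var conn = new OleDbConnection(accessConnString))
using (var writer = new StreamWriter("upload.sql"))
{
    conn.Open();
    var cmd = new OleDbCommand("SELECT Id, Name FROM Products", conn);
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // one INSERT per line, so the PHP side can execute the file line by line
            writer.WriteLine(
                "INSERT INTO products (id, name) VALUES ({0}, '{1}');",
                reader.GetInt32(0),
                reader.GetString(1).Replace("'", "''"));
        }
    }
}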
I know it's not the best approach, so I'm asking this question to find a different workaround for this problem. The client has already said that he wants to keep using his MS Access database.
The biggest problem I have is that PHP scripts can run for only 30 seconds, which is obviously a problem when importing data.
To work around the 30-second limit, call your script repeatedly, and keep track of your progress. Here's one rough idea:
// assumes a mysql_connect()/mysql_select_db() connection has already been made
if(!file_exists('upload.sql')) exit();
$max = 2000; // the maximum number of queries you want to execute per run
if(file_exists('progress.txt')) {
    $progress = file_get_contents('progress.txt');
} else {
    $progress = 0;
}
// load the file into an array, expecting one query per line
$file = file('upload.sql');
$done = true;
foreach($file as $current => $query) {
    if($current < $progress) continue; // skip the ones we've already done
    if($current - $progress >= $max) { // stop before we hit the max
        $done = false;
        break;
    }
    mysql_query($query);
}
// did we finish the file?
if($done) {
    unlink('progress.txt');
    unlink('upload.sql');
} else {
    file_put_contents('progress.txt', $current);
}
Last year I developed an ASP.NET application implementing the MVP pattern.
The site is not very large (about 9,000 views/day).
It is a common application which just displays articles and supports scheduling (via datetime), votes and views, sections and categories.
Since then I have created more than 15 sites with the same approach (the database mechanism was built with the same logic).
What I did was:
Every time a request arrives, I have to fetch articles, sections, categories, views and votes from my database and display them to the user... like all other web apps.
My database objects are something like the following:
public class MyObjectDatabaseManager{
public static string Table = DBTables.ArticlesTable;
public static string ConnectionString = ApplicationManager.ConnectionString;
public bool insertMyObject(MyObject myObject){/*.....*/}
public bool updateMyObject(MyObject myObject){/*.....*/}
public bool deleteMyObject(MyObject myObject){/*.....*/}
public MyObject getMyObject(int MyObjectID){/**/}
public List<MyObject> getMyObjects( int limit, int page, bool OrderBy, bool ASC){/*...*/}
}
Whenever I want to communicate with the database, I do something like the following:
MySqlConnection myConnection = new MySqlConnection(ConnectionString);
try
{
myConnection.Open();
MySqlCommand cmd = new MySqlCommand(myQuery,myConnection);
cmd.Parameters.AddWithValue(...);
cmd.ExecuteReader(); /* OR */ ExecuteNonQuery();
}catch(Exception){}
finally
{
if (myConnection != null)
{
myConnection.Close();
myConnection.Dispose();
}
}
Two months later I ran into trouble.
Performance started dropping, and the database started returning errors: max_user_connections.
Then I thought: "Let's cache the pages."
And I started using output caching for the pages
(not a very sophisticated idea...).
Twelve months later my friend asked me to create a "live" article...
an article that can be updated without any delay (i.e. without the delay introduced by the output cache...).
Then it occurred to me: "Why use a cache at all? Joomla etc. don't."
So... I removed the magic "OutputCache" directive...
And from then on I ran into the same problem again...
max_user_connections! :/
What am I doing wrong?
I know that my code communicates a lot with the database, but...
what about connection pooling?
Sorry for my English.
Please... help :/
I have no idea how to figure this out :/
Thank you.
I'm running on a shared hosting package.
My DB is over 60 MB in size.
I have more than 6,000 rows in some tables, like articles.
My hosting provider gives me 25 connections to the database (a very large number, in my opinion).
Your code looks fine to me, although from a style perspective I prefer "using" to try/finally/Dispose().
One thing to check is that the connection strings you're using are identical everywhere in your code. Most DB drivers do connection pooling based on comparing connection strings.
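As a sketch of the using form, reusing the names from the snippet in the question (ApplicationManager.ConnectionString and myQuery; the parameter name is just an example):

string connectionString = ApplicationManager.ConnectionString; // the same string everywhere

using (MySqlConnection myConnection = new MySqlConnection(connectionString))
using (MySqlCommand cmd = new MySqlCommand(myQuery, myConnection))
{
    myConnection.Open();
    cmd.Parameters.AddWithValue("@id", myObjectId);
    using (MySqlDataReader reader = cmd.ExecuteReader())
    {
        // ... read the results ...
    }
} // the connection is closed and returned to the pool here, even if an exception is thrown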
You may need to increase the max_connections variable in your MySQL config.
See:
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
Actually, the maximum number of connections is an OS-level configuration.
For example, under NT/XP, it was configurable in the registry, under HKLM, ..., TcpIp, Parameters, TcpNumConnections:
http://smallvoid.com/article/winnt-tcpip-max-limit.html
More importantly, you want to maximize the number of "ephemeral ports" available for opening new connections:
http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
Windows:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
On the Edit menu, click Add Value, and then add the following registry value:
Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534
Linux:
sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"