I've recently started to use SQLite and began to integrate it into a C# project I'm working on.
However, my project will randomly throw the exception:
Input array is longer than the number of columns in this table
I'm having a hard time trying to trace the problem because it seems to be thrown at random.
DataTable table = new DataTable();
// exception is thrown here
table = Global.db.ExecuteQuery("SELECT * FROM vm_manager");
Some of the data that gets returned from this query is as follows:
http://i.stack.imgur.com/9rlLN.png
If anyone has any advice, I'd be grateful.
EDIT: I'm unable to show the ExecuteQuery function as it resides inside a DLL from the following SQLite wrapper: http://www.codeproject.com/KB/database/cs_sqlitewrapper.aspx
EDIT 2: The problem stems from the new-record array function inside this particular SQLite wrapper.
Based on your SQLite wrapper's implementation, it is adding columns to its own internal DataTable before returning it to yours. I suspect the defect is in the wrapper, not in your code. I dug into the source of your SQLiteWrapper from CodeProject; here it is at PasteBin: http://pastebin.com/AjGaX0kL
I suspect the error is occurring in that helper method ExecuteQuery() or its helper ReadFirstRow(), not in your code. Wrap your code in a try/catch and inspect the exception: its StackTrace and TargetSite properties will tell you which method the exception is actually being created in.
Likely you're encountering a defect in this wrapper class.
try
{
    DataTable table = Global.db.ExecuteQuery("SELECT * FROM vm_manager");
}
catch (Exception ex)
{
    // Who threw this, from which method and line?
    Console.WriteLine(ex.TargetSite);
    Console.WriteLine(ex.StackTrace);
}
If this SQLite provider isn't working for you, I suggest evaluating:
System.Data.SQLite - An open source ADO.NET provider for the SQLite database engine
ADO.NET 2.0 Provider for SQLite at SourceForge -- and here's a nice tutorial by Mike Duncan on this SQLite provider.
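For comparison, here is a minimal sketch of the same query issued through System.Data.SQLite, bypassing the CodeProject wrapper entirely; the connection string and database file name are assumptions:

using System.Data;
using System.Data.SQLite;

// Fill a DataTable directly with the provider's own adapter.
using (var conn = new SQLiteConnection("Data Source=vm.db"))
using (var adapter = new SQLiteDataAdapter("SELECT * FROM vm_manager", conn))
{
    var table = new DataTable();
    adapter.Fill(table); // Fill opens and closes the connection itself.
}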
Related
I have a little problem with Entity Framework 6.
My application is a WPF C# application, and I use SQL Server 2012 Express.
I try to insert data into my Person table.
It had been working for a long time. Today I got an error: receiving an invalid column length from the client 46.
I searched and found some articles talking about column sizes etc., but in my case this is not the problem.
This code was working: dc.BulkInsert(listToInsert, options);
using EntityFramework.BulkInsert.Extensions;
// I have a list of Person objects to insert.
var listToInsert = PersonList.Where(ro => !ExistingPerson.Contains(ro.Pers_Code.ToLower())).ToList();

using (MyEntities dc = new MyEntities())
{
    // If I add items one by one, it works:
    foreach (var item in listToInsert)
    {
        dc.Person.Add(item);
    }
    dc.SaveChanges(); // Success.

    // But if I use BulkInsert, I get an error message:
    BulkInsertOptions options = new BulkInsertOptions();
    options.BatchSize = 1000;
    dc.BulkInsert<Person>(listToInsert, options); // At this moment I get: receiving an invalid column length from the client 46.
    dc.SaveChanges();
}
I checked the data length of the items and didn't see any problem.
Does anyone have an idea?
Thanks.
SaveChanges uses a SqlCommand. If a value is longer than the database limit, it will be silently truncated, so no error is thrown.
BulkInsert uses SqlBulkCopy; if a value is longer than the database limit, an error is thrown.
That explains why you get an error in one case and none in the other.
SqlBulkCopy doesn't raise this error for fun, so I would double-check your value lengths against your column sizes.
Perhaps the type is char(xyz) and there is a space at the end?
Perhaps there is some space at the start?
etc.
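As a quick sanity check before the bulk insert, you could scan the list for over-long values. A minimal sketch; the Pers_Code property is from the question, but the length limit is an assumption to adapt to your schema:

// Hypothetical limit: adjust to the actual column definition, e.g. nvarchar(50).
const int persCodeMax = 50;
var tooLong = listToInsert
    .Where(p => p.Pers_Code != null && p.Pers_Code.Length > persCodeMax)
    .ToList();
// Any hits here are candidates for the SqlBulkCopy length error.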
.NET Fiddle supports Entity Framework and NuGet packages, so if you can reproduce it online, it may be possible to tell exactly why this happens.
Example: https://dotnetfiddle.net/35mQ0W
I know that you can get the data of a table on an SAP server with the function RFCDestination.Repository.GetTableMetadata(string tablename). Unfortunately, I get an error when I try to execute the command. The weird thing is that I get a different error when I pass an existing table name than when I try something random as the table name.
Existing table:
var x = dest.Repository.GetTableMetadata("TFTIT");
Error:
SAP.Middleware.Connector.RfcInvalidStateException: "cannot find TABLE specified by TFTIT"
Random tablename:
var x = dest.Repository.GetTableMetadata("Test123");
Error:
SAP.Middleware.Connector.RfcInvalidStateException: "metadata for TableOnly TEST123 not available: NOT_FOUND: No active nametab exists for TEST123"
I know there is a way to get the data of a table with the help of a function module but I need to use the GetTableMetadata function.
There is not much one can do wrong when calling RfcRepository.GetTableMetadata(string). Does the user ID you use have the required RFC authorizations for repository queries, as listed in SAP note 460089 (scenario 3)? If yes, this may be a bug in the NCo3 library or even in the ABAP backend. Do you use NCo's latest patch level? This is currently NCo 3.0.20.
If not, try updating the library first.
Otherwise, I recommend creating an SAP support ticket for the first error message. The second error is normal when the specified table name does not exist.
Alternatively, you may also try what happens when calling RfcRepository.GetStructureMetadata(string) for this table instead. The metadata for tables and structures is quite similar, and the same remote function modules are used for the DDIC queries. Maybe this works. However, I think RfcRepository.GetTableMetadata(string) should work here in the first place.
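A minimal sketch of that fallback, assuming dest is an already-configured RfcDestination:

using SAP.Middleware.Connector;

try
{
    var tableMeta = dest.Repository.GetTableMetadata("TFTIT");
}
catch (RfcInvalidStateException)
{
    // Table and structure metadata are served by the same DDIC function
    // modules, so this may succeed where GetTableMetadata fails.
    var structMeta = dest.Repository.GetStructureMetadata("TFTIT");
}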
I hope this helps.
I have a .NET WinForms application which uses a SQLite DB via the System.Data.SQLite DLL.
This is how I load a DataTable:
DataTable table = new DataTable();
SQLiteDataAdapter m_readingsDataTableDataAdapter;
SQLiteCommandBuilder m_readingsDataTableCommandBuilder;
m_readingsDataTableCommandBuilder = null;
m_readingsDataTableDataAdapter = null;
// Create fresh objects
m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
return table;
This data table has one primary key and no other constraints.
I set it as the data source for a DataGridView, and after all the edits I update the DataTable like this:
m_readingsDataTableDataAdapter.Update(table);
Occasionally, the update throws an error, and then at some point it stops throwing errors, probably after a system restart (not sure). The updates then go fine until another situation where the update throws an error again. When the error occurs, it happens for all updates from then on, even after an application restart.
Error:
An unhandled exception of type 'System.Data.DBConcurrencyException' occurred in System.Data.dll
Additional information: Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.
I would appreciate any help or questions as this is quite a critical section of my project.
Thanks.
Update:
Based on suggestions, to ensure no other part of the program is editing the row, I loaded the DataTable and updated a row immediately, as in the following code, and I still got the error. There is no other program running on my machine, so the row is not updated by anything except my application:
DataTable table = new DataTable();
SQLiteDataAdapter m_readingsDataTableDataAdapter;
SQLiteCommandBuilder m_readingsDataTableCommandBuilder;
m_readingsDataTableCommandBuilder = null;
m_readingsDataTableDataAdapter = null;
// Create fresh objects
m_readingsDataTableDataAdapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
m_readingsDataTableCommandBuilder = new SQLiteCommandBuilder(m_readingsDataTableDataAdapter);
m_readingsDataTableDataAdapter.Fill(table);
// Update immediately
table.Rows[0]["RS485_ADDRESS"] = "400";
m_readingsDataTableDataAdapter.Update(table); // - Still throws error
return table;
I had a similar issue that drove me nearly mad; I even debugged into System.Data.SQLite. My issue was caused by the dynamic typing of SQLite:
Dynamic typing means that the column type is just a hint for the type of the stored value, not an enforcement. Here is what happened in my application: an integer column contained an empty string. Fetching the data triggered an implicit conversion to the column type, so the empty string was converted to zero. The DbDataAdapter then used this zero value in the WHERE clause of an UPDATE or DELETE statement, which failed miserably because converting zero back to a string does not yield an empty string. I changed the affected integer columns to contain NULL and everything is fine now.
The sad fact about that: it is not a bug in SQLite or ADO.NET. DbDataAdapter and DbCommandBuilder expect strong typing, and SQLite doesn't provide that by design.
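A minimal sketch of the pitfall and the NULL cleanup described above; the table name t and column val are hypothetical, and conn is assumed to be an open SQLiteConnection:

using (var cmd = new SQLiteCommand(conn))
{
    // SQLite's dynamic typing stores the empty string even though the
    // declared column type is INTEGER.
    cmd.CommandText = "INSERT INTO t (val) VALUES ('')";
    cmd.ExecuteNonQuery();

    // The fix: replace such values with NULL so the adapter-generated
    // WHERE clause can match the row again.
    cmd.CommandText = "UPDATE t SET val = NULL WHERE typeof(val) = 'text'";
    cmd.ExecuteNonQuery();
}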
I've seen this problem; it seemed to appear in some cases and not others. I resolved it a different way. Since this is primarily a data issue, the fix involves changing the UPDATE and DELETE statements. There may be cases where your code needs to be validated; I'm not referring to those. This is for when all else seems to be in order and the "Concurrency Violation" persists. A good starting place, before you spend hours debugging, is to address how the updates are handled.
Open the DataSet Designer.
Click on the TableAdapter.
In the Properties window, expand the UpdateCommand.
Edit the SQL so the WHERE clause refers only to the primary key.
Example: suppose the column EmployeeID is the primary key in our table.
Change to:
update ..... WHERE EmployeeID=@EmployeeID
By default it's usually constructed as:
update ... where ID=@ID AND col1=@col1 AND col2=@col2 AND col3=@col3
The WHERE clause is formed with all the columns to handle multi-user updates, enforcing implicit optimistic concurrency: if one user has a copy of the record and another user updates it before the first user saves, the first user's version is no longer current and the update matches 0 rows. Otherwise, the primary key alone is enough to refer to a specific record.
But this is an update/delete policy decision that the development and database teams need to coordinate. For a single developer/DBA-in-one, have a little meeting with yourself. Don't take the Microsoft WHERE clause without your own review, understanding, and explicit implementation. You need to specifically address what happens when two users have a copy of the same record, at both the application and database levels. Database "Transaction Isolation" levels in SQL Server are one way to address it; ADO, ADO.NET, OLE DB, and ODBC all have isolation-level settings. More on SQL Server transactions and isolation levels.
In some cases multiple users are not involved, or the application is updating internally and concurrency is fully controlled; then the WHERE clause can be formed with the primary key.
In my case, the update is done without user involvement, so it makes sense to use the primary key. If you understand the reasoning and the cause of the behavior, it's a matter of adjusting to fit your environment.
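Since the question builds its commands with SQLiteCommandBuilder rather than the DataSet Designer, here is a sketch of the equivalent tweak there: DbCommandBuilder exposes a ConflictOption property, and OverwriteChanges makes the generated WHERE clauses use the primary key only. Treat this as an option to weigh against your concurrency policy, not a blanket fix:

var adapter = new SQLiteDataAdapter(sql, Database.getInstance().connectionObj);
var builder = new SQLiteCommandBuilder(adapter)
{
    // Generate UPDATE/DELETE statements whose WHERE clause uses the
    // primary key only, instead of comparing every original column value.
    ConflictOption = ConflictOption.OverwriteChanges
};
adapter.Fill(table);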
G'Day everybody.
I need help finalizing the answer to a similar question. Unfortunately, since I do not have enough points, I cannot ask the people involved for the final solution. The original question is here: "Passing SQL Server exception to .net"
So,
PROBLEM:
TRY/CATCH statement in SQL Server 2012 swallows the exception that I want to pass or re-throw back to EF 5.0.
As long as the exception is raised outside a TRY/CATCH, the .NET code gets it without problems.
Attempts to solve:
1. On the SQL Server side: we tried THROW / RAISERROR, raising the exception both inside and outside of the TRY/CATCH block, and recording the original error and re-throwing it.
2. On the .NET side: we tried the overloaded "ExecuteSprocAccessor" as planned, then changed back to basic code:
"...
SqlDataAdapter dataadap = new SqlDataAdapter(command);
dataadap.Fill(dt);
"
Nothing worked.
3. Another option, a guess, is that some SQL Server parameter might be blocking it; we do not know where to look as yet.
Solution:
We are looking for help and advice on how to get the original exception past the TRY/CATCH behavior.
Thanks!
We have spent another day in a further attempt to get to the bottom of it and, accidentally, found a temporary solution.
Since there was no initial response, I am posting this as the solution, though it's more of a workaround.
Using db.ExecuteDataSet(dbc) worked fine, providing the relevant handling of exceptions passed from the SQL Server 2012 CATCH block. For us it shows that there are some tricks or problems we do not yet understand with EL 6.0 and/or the new version of ADO.NET.
using (DbCommand dbc = db.GetStoredProcCommand("[Personnel].[uspWebLogin]"))
{
    db.AddInParameter(dbc, "UserLogin", DbType.String, user.UserLogin);
    db.AddInParameter(dbc, "UserPassword", DbType.String, user.Password);
    DataSet ds = db.ExecuteDataSet(dbc);
    DataTableReader dtr = ds.CreateDataReader();
    // String count, just to check that the core query actually worked when there is no exception.
    string count = dtr.FieldCount.ToString();
    ......
    return new WebUser();
}
The problem now is how best to map objects and the result set without writing heaps of code, which is what "ExecuteSprocAccessor" and "IRowMapper" were doing for us.
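In the interim, a minimal hand-rolled mapping sketch; the WebUser property and column name are hypothetical placeholders, not the actual schema:

// Requires System.Data.DataSetExtensions for AsEnumerable().
var users = ds.Tables[0].AsEnumerable()
    .Select(row => new WebUser
    {
        // Hypothetical property/column names; substitute the real ones.
        UserLogin = row.Field<string>("UserLogin")
    })
    .ToList();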
Thanks for reading. Any further comments or suggestions are still welcome. Cheers.
I'm using the following code to do a batch insert using the C# driver. I have a unique index, and I want it to fail silently if I try to insert a record that isn't unique.
Even though I have InsertFlags.ContinueOnError set, I still get an error on the InsertBatch call. If I swallow the error as I have shown below, everything works ok. But this certainly feels wrong.
var mio = new MongoInsertOptions { Flags = InsertFlags.ContinueOnError };

// newImages is a list of POCO objects.
try
{
    _db.GetCollection("Images").InsertBatch(newImages, mio);
}
catch (WriteConcernException)
{
    // Swallow the duplicate-key error.
}
Are you using version 1.8 of the C# Mongo driver?
If so, try upgrading to version 1.8.1, which contains a fix for the following two issues:
InsertBatch fails when large batch has to be split into smaller sub batches
InsertBatch throws duplicate key exception with too much data...
So your inserts could be succeeding while the driver still throws an exception on bulk insert operations because of the bug above.
This exception doesn't originate from the database itself, which explains why the inserts succeed but you still need to catch the exception: the db is in fact respecting your ContinueOnError flag, but the driver throws anyway afterwards.
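If you want to confirm that before upgrading, a small sketch: compare document counts around the swallowed exception (the "Images" collection name is from the question; the rest is an assumption):

long before = _db.GetCollection("Images").Count();
try
{
    _db.GetCollection("Images").InsertBatch(newImages, mio);
}
catch (WriteConcernException)
{
    // Ignored: possibly the 1.8 driver bug rather than a real write failure.
}
long inserted = _db.GetCollection("Images").Count() - before;
// inserted should equal the number of genuinely new, unique documents.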