Not all rows are getting imported to a DataTable using OleDbConnection - C#

I am stuck at this point. I searched a lot on Google but did not find anything.
My problem is:
I have an Excel file which I want to export to a DataTable, and from the DataTable I want to save it to an Oracle DB.
The Excel file contains multiple columns, and each column holds large data (approx. 20,000 characters/numbers).
Using OleDbConnection, Excel columns with such large data are not copied to the DataTable (columns with small data are copied fine).
Can anyone suggest a workaround for my problem?
Thanks in advance.

Check the data types and their lengths, e.g. nvarchar(3000).
If that doesn't work, test with a small set of data, maybe 5 rows; you should be able to see a trend there.
Also check the data types in your application: for large numbers you may need long, and for large strings you may want to pass the data using a StringBuilder instead of a plain string.
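As a hedged sketch of the usual workaround (the file path, sheet name and provider string below are assumptions, adjust them to your setup): read the sheet through an OleDbDataAdapter with IMEX=1 in the extended properties, and see the TypeGuessRows registry note further down this page, since the provider guesses column types from the first few rows and can silently drop or truncate long values.

using System;
using System.Data;
using System.Data.OleDb;

class ExcelImport
{
    static DataTable LoadSheet(string path)
    {
        // IMEX=1 asks the provider to treat mixed-type columns as text;
        // combined with the TypeGuessRows setting this is the usual fix
        // for long cell values coming through empty or truncated.
        string connStr =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + path +
            ";Extended Properties=\"Excel 12.0 Xml;HDR=Yes;IMEX=1\";";

        var table = new DataTable();
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
        {
            adapter.Fill(table);   // Fill opens and closes the connection itself
        }
        return table;
    }
}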

Related

How to read the header row (assuming the first row is the header) in Excel without loading the entire Excel file

I am combining multiple large Excel files with different columns and numbers of columns.
Before starting to combine them, I want to collect all the header rows so I can build a DataTable that has every column in advance.
I know there is a DataTable.Merge method in C# which can add missing columns while combining.
But there are many big Excel files, and the maximum number of rows per sheet in Excel is about one million. So when I reach that limit, I must save part of the combined result to Excel, clear the contents, and keep combining after that. This means the parts saved early in the process would not have the same schema as the final result.
That is why I must collect all the headers in advance.
As far as I can tell, C# libraries like EPPlus or ExcelDataReader load the entire content of the Excel file. This takes very long, and I don't need all the content at once.
Does anybody here know how to load only the Excel header row?
Thank you so much.
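One possible approach, sketched with the ExcelDataReader package (the exact factory method differs between versions, so treat this as an assumption rather than the library's guaranteed API): the reader advances one row at a time, so you can stop after the first Read() and collect only the header cells.

using System.Collections.Generic;
using System.IO;
using ExcelDataReader;   // assumes the ExcelDataReader NuGet package

static class HeaderReader
{
    public static List<string> ReadHeaderRow(string path)
    {
        var headers = new List<string>();
        using (var stream = File.Open(path, FileMode.Open, FileAccess.Read))
        using (var reader = ExcelReaderFactory.CreateReader(stream))
        {
            if (reader.Read())                        // read only the first row
            {
                for (int i = 0; i < reader.FieldCount; i++)
                    headers.Add(reader.GetValue(i)?.ToString());
            }
        }
        return headers;
    }
}

How much of the file gets parsed up front still depends on the format (.xls vs .xlsx) and the library version, but at least nothing beyond the first row is materialised into a DataSet.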

Strange error when adding row to datatable

I'm hoping you guys can help me figure out why this is happening. I've been tearing my hair out trying to figure this out.
Here's an example directly from my code (with the boring bits cut out)
...(Set up the connection and command, initialize a datatable "dataTable")...
using (SqlDataReader reader = cmd.ExecuteReader())
{
    // Query storage object
    object[] buffer = new object[reader.FieldCount];
    // Set the datatable schema to the schema from the query
    dataTable = reader.GetSchemaTable();
    // Read the query
    while (reader.Read())
    {
        reader.GetValues(buffer);
        dataTable.Rows.Add(buffer);
    }
}
The error is
Input string was not in a correct format. Couldn't store in NumericScale Column. Expected type is Int16.
The specific column data types as returned by the schema are (ordered by column)
System.Data.SqlTypes.SqlInt32
System.Data.SqlTypes.SqlInt32
System.Data.SqlTypes.SqlByte
System.Data.SqlTypes.SqlMoney
System.Data.SqlTypes.SqlString
System.Data.SqlTypes.SqlGuid
System.Data.SqlTypes.SqlDateTime
It would appear that the data that should be in column #5 is actually appearing in column #3. But that is pure speculation.
What I know is that, in order to use a DataTable "dynamically" with a query that can contain any number of different types of data, the best route is to use GetSchemaTable() to retrieve the schema.
What I Saw In The Debugger
When I dropped into the debugger I took a look at dataTable's types built from the schema vs. the types returned to the object from reader.GetValues(). They are exactly the same.
It seems like dataTable.Rows.Add(buffer) is adding the values a few columns off from where they should be. But this shouldn't be possible, especially with the schema being built directly from the reader. I've played with options such as "CommandBehavior.KeyInfo" within ExecuteReader() and still had the same error occur.
Note: I need to run the query this way to enable the end-user to halt the query mid-read. Please do not suggest I scrap this and use an SqlDataAdapter or DataTable.Load() solution.
I'd really appreciate any help. Thank you!
The method DbDataReader.GetSchemaTable() returns a metadata table containing the column names and types of the result set. It is not the empty, identically-shaped table one might expect. For more details see MSDN.
I'm sorry, but GetSchemaTable retrieves the schema of the result set, while GetValues retrieves the actual row data. E.g. your dataTable will contain columns like column name, column type, etc. (see the MSDN reference), while your buffer will contain the actual data, whose representation differs from what the dataTable holds.
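To make the pattern in the question work, the usual fix is to build the target table's columns from the schema rows rather than using the schema table itself as the target. A minimal sketch, continuing from the reader in the question ("ColumnName" and "DataType" are standard schema columns, but treat the rest as an outline rather than exact code):

DataTable schema = reader.GetSchemaTable();
dataTable = new DataTable();
// Create one DataColumn per row of the schema table
foreach (DataRow schemaRow in schema.Rows)
{
    dataTable.Columns.Add(
        (string)schemaRow["ColumnName"],
        (Type)schemaRow["DataType"]);
}
object[] buffer = new object[reader.FieldCount];
while (reader.Read())
{
    reader.GetValues(buffer);
    dataTable.Rows.Add(buffer);   // values now line up with the real columns
}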
Why don't you just use:
dataTable.Fill();
Why are you manually loading it one row at a time?
Please check what you are inserting into the column which has the GUID data type. I got the same error and found that, while inserting records read from a CSV file, there were a couple of empty spaces in the CSV file.

c# Excel import query

I've been asked to import an Excel spreadsheet, which is fine, but I'm having a problem importing a certain cell that contains both numeric and alphanumeric characters.
Excel example (columns A, B, C):
Row 1:  0123    8 Fake Address    CF11 1XX
Row 2:  XX123   8 Fake Address    CF11 1XX
As per the example above, when the dataset is being loaded it treats Row 2, Col (A) as a numeric field, resulting in an empty column in the array.
My connection for the OleDb is
var dbImportConn = new OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dataSource
    + @";Extended Properties=""Excel 8.0;HDR=No;IMEX=1"";");
In this connection I have set IMEX=1, which should parse all contents into the dataset as strings. Also, if I change Row 1, Col (A) to 'XX123', the entire Col (A) successfully parses as strings! Unfortunately this is not going to help my scenario, as the Excel file is passed from an external client, who have also advised that they do not have the means to supply the file with a header row, which would solve my issue.
My one thought at this point is to edit the file programmatically when I receive it and insert a header, but as the client may change how many columns the file contains, this would not be a safe option for me.
So basically I need to find a solution for dealing with the current format of the spreadsheet and for passing all cells through into the array. Has anyone come across this issue before, or know how to solve it?
I await your thoughts
Thanks
Scott
ps If this is not clear just shout
Hi. There is a registry setting called TypeGuessRows that you can change to make the engine scan the whole column before deciding its type. Currently it seems this is set to read a certain number (x) of rows in a column and then decide the column's type, e.g. if your first x rows are integers and row x+1 is a string, the import will fail because it has already decided that this is an integer column. You can change the registry setting so the whole column is read before deciding.
Please see this also:
http://jingyangli.wordpress.com/2009/02/13/imex1-revisit-and-typeguessrows-setting-change-to-0-watch-for-performance/
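If you prefer to flip the setting from code rather than regedit, a hedged sketch is below; the key path shown is the one commonly cited for the Jet 4.0 provider, the ACE provider keeps the setting under a different (Office-version-specific) key, and writing to HKLM needs administrator rights, so treat the details as assumptions.

using Microsoft.Win32;

// Hypothetical key path for the Jet 4.0 Excel ISAM; ACE providers keep the
// setting under an Office-version-specific "Access Connectivity Engine" key.
using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
           @"SOFTWARE\Microsoft\Jet\4.0\Engines\Excel", writable: true))
{
    if (key != null)
    {
        // 0 tells the engine to scan the whole column (up to an internal
        // limit) before guessing its type, at some cost in performance.
        key.SetValue("TypeGuessRows", 0, RegistryValueKind.DWord);
    }
}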
This isn't a direct answer, but I would recommend you use Excel Data Reader, which is open source under the LGPL licence: a lightweight, fast library written in C# for reading Microsoft Excel files ('97-2007).
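For completeness, a minimal usage sketch (assuming the current ExcelDataReader NuGet packages; older releases exposed CreateBinaryReader/CreateOpenXmlReader instead):

using System.Data;
using System.IO;
using ExcelDataReader;   // plus ExcelDataReader.DataSet for AsDataSet()

static class WorkbookLoader
{
    public static DataSet LoadWorkbook(string path)
    {
        using (var stream = File.Open(path, FileMode.Open, FileAccess.Read))
        using (var reader = ExcelReaderFactory.CreateReader(stream))
        {
            // Cells keep their stored types; there is no per-column type
            // guessing as with the Jet/ACE OleDb providers, so mixed
            // numeric/alphanumeric columns come through intact.
            return reader.AsDataSet();
        }
    }
}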

how to limit rows in dataGridView?

I have a data file (CSV) consisting of 2 columns and 1000 rows. When I load it into my DataGridView it takes a lot of time. I just want to show the first 6 rows as a preview of the file to the user. Is there any way I can show only the first 6 rows in my DataGridView? The following is the code I'm using to display the data in the DataGridView.
DataTable csvDataTable = CSVReader.ReadCSVFile(textBoxCsv.Text, true);
dataGridViewCsvData.DataSource = csvDataTable;
dataGridViewCsvData.SelectionMode = DataGridViewSelectionMode.FullColumnSelect;
CSVReader is an open source project, isn't it? Try adding a ReadTopLines method to that class that reads only the top N lines, with N given as a parameter.
Every DataTable has its own DefaultView.
http://msdn.microsoft.com/en-us/library/system.data.datatable.defaultview.aspx
You can then get a table back from the view with DataView.ToTable(), and you can manipulate the data in your view the way you want: filter it, query it, and so on.
You can find out more about expressions here:
http://msdn.microsoft.com/en-us/library/system.data.datacolumn.expression.aspx
OR, since CSVReader is an open-source project, you can simply change
public DataTable CreateDataTable(bool headerRow)
Add a number-of-lines parameter to this method, and you will get what you need without reading the whole file.
I didn't read the whole source, so there might be a solution without even changing the code.
Use open source to the fullest: change it, customize it, send your patches! People do appreciate it, and you will gain experience, knowledge and new friends who might help you in the future :)
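If changing CSVReader isn't an option, a quick alternative is to keep the loaded table but bind only a copy of its first few rows; note this only speeds up the grid binding, not the file read itself. A sketch using the question's variable names:

// Build a small preview table from the already loaded csvDataTable.
DataTable preview = csvDataTable.Clone();                 // same columns, no rows
int rowsToShow = Math.Min(6, csvDataTable.Rows.Count);
for (int i = 0; i < rowsToShow; i++)
    preview.ImportRow(csvDataTable.Rows[i]);

dataGridViewCsvData.DataSource = preview;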

Copying data from one DataTable to another

What is the fastest way of transferring a few thousand rows of data from one DataTable to another? It would be great to see some sample code snippets.
Edit: I need to explain a bit more. There is a filtering condition for copying the rows. So, a plain Copy() will not work.
You can't copy the whole table; you need to copy the rows one at a time. From http://support.microsoft.com/kb/308909 (there is sample code if you follow the link):
"How to Copy DataRows Between DataTables
Before you use the ImportRow method, you must ensure that the target table has the identical structure as the source table. This sample uses the Clone method of DataTable class to copy the structure of the DataTable, including all DataTable schemas, relations, and constraints.
This sample uses the Products table that is included with the Microsoft SQL Server Northwind database. The first five rows are copied from the Products table to another table that is created in memory."
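A sketch of that Clone/ImportRow pattern combined with a filter, which covers the edit in the question; the filter expression is a hypothetical placeholder, not part of the linked sample:

// Copy only the rows that satisfy a condition; the filter string is hypothetical.
DataTable target = source.Clone();                        // copies structure only
foreach (DataRow row in source.Select("UnitPrice > 100"))
    target.ImportRow(row);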
What is wrong with DataTable.Copy?
Copying rows to a table raises some flags for me. I've seen people try this before, and in every single case what they really wanted was a System.Data.DataView. You really should check whether the RowFilter property will do what you need it to do.
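A short sketch of that DataView route; RowFilter takes the same expression syntax, and ToTable() materialises the filtered view as a new DataTable (again, the filter string is just a placeholder):

// Filter through the view instead of copying rows by hand.
DataView view = new DataView(source) { RowFilter = "UnitPrice > 100" };
DataTable filtered = view.ToTable();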
