I have a table with 6 columns, one of which is a date column with a default value.
I want to import a 5-column CSV file, letting that date column take its default.
I get the error "Invalid character value for cast specification".
I also tried a format file, but it doesn't help: the number of table columns and the number of CSV columns do not match.
How can I fix this?
Create a View with the columns that match the data file, then BCP into the View. Make sure any other columns in the table will allow Null values and/or have Default values.
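As a minimal sketch, assuming a hypothetical table dbo.Orders whose sixth column CreatedDate carries the default:

-- dbo.Orders has five data columns plus CreatedDate DEFAULT GETDATE()
CREATE VIEW dbo.OrdersImport AS
SELECT OrderId, CustomerId, ProductId, Quantity, Price
FROM dbo.Orders;

bcp dbo.OrdersImport in orders.csv -S yourserver -d yourdb -T -c -t,

CreatedDate isn't in the view, so each imported row picks up its default.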
If you're still having issues:
You may need a format file to tell bcp the file-to-table (or view) mapping; see BCP Format Files.
I would generate the format file and edit it to see what BCP thinks the mapping is. BCP may be mapping one of your CSV columns to the wrong field, or it's getting thrown off in some other way.
Just modify the file to map the correct CSV columns to the correct table or view columns and you should be good.
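As a sketch (all names here are hypothetical, and the header version depends on your bcp release), generate the file with

bcp dbo.Orders format nul -c -t, -f orders.fmt -S yourserver -d yourdb -T

and then trim it down to the five CSV fields, mapping each to its server column:

14.0
5
1  SQLCHAR  0  12  ","     1  OrderId     ""
2  SQLCHAR  0  12  ","     2  CustomerId  ""
3  SQLCHAR  0  12  ","     3  ProductId   ""
4  SQLCHAR  0  12  ","     4  Quantity    ""
5  SQLCHAR  0  30  "\r\n"  5  Price       ""

A table column that isn't referenced in the format file receives no data, so it falls back to its default.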
So I am new to OLEDB and I have a project that requires me to grab data from an Excel file using a console application. The Excel file has around 500 columns and 55 rows. How can I get data from the columns past 255?
In order to read columns 256 onward, you just need to modify the Select statement. By default, both the Microsoft.ACE.OLEDB.12.0 and Microsoft.Jet.OLEDB.4.0 drivers read columns 1-255 (A -> IU), but you can ask for the remaining columns by specifying them in the Select statement.
To read the next 255 columns, taking "Sheet1" as your sheet name, you would specify...
Select * From [Sheet1$IV:SP]
This will work even if there aren't another 255 columns. It will simply return the next chunk of columns if there are 1...255 additional columns.
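A minimal C# sketch of that query, assuming a hypothetical workbook path:

using System.Data;
using System.Data.OleDb;

var connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;" +
              @"Data Source=C:\data\book.xlsx;" +
              @"Extended Properties=""Excel 12.0;HDR=YES""";

var table = new DataTable();
using (var conn = new OleDbConnection(connStr))
using (var adapter = new OleDbDataAdapter("Select * From [Sheet1$IV:SP]", conn))
{
    adapter.Fill(table); // columns 256-510, or whatever exists in that range
}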
Incidentally, the Microsoft.ACE.OLEDB.12.0 driver will read both .xls and any variant of .xlsx, .xlsm etc without changing the extended properties from "Excel 12.0". There is no need to if...then...else the connection string depending on the file type.
The OLEDB driver is pretty good for the most part, but it really does rely on well-formed sheets. Mixed data types aren't handled terribly well, and it does weird things if the first columns/rows are empty, but aside from that it's fine. I use it a lot.
I am reading CSV files and dynamically creating database tables based on the headers of the CSV files.
But I am having a problem reading the CSV data and dumping it into the relevant tables. The problem comes when a column has a decimal datatype and the CSV field is blank, so it is treated as a string, and I get the error below.
Error:
Failed to convert parameter value from a String to a Decimal.
I am using CsvHelper, and perhaps I could achieve this through a class map, but I am creating the structure dynamically based on the CSV files, so I can't define a fixed class.
Here is one answer I found which might be helpful in the class-map case: Using CsvHelper can I translate white space to a nullable?
Sample Record:
The problem occurs when SqlBulkCopy tries to dump the data into the SQL Server database table.
I think that you might need to do column mapping. If you are using SqlBulkCopy, you can use some of what is described in this post:
Mapping columns in a DataTable to a SQL table with SqlBulkCopy
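A minimal sketch, assuming a DataTable whose columns are strings read from the CSV and a destination table using the same column names (connectionString, dataTable, and the table name are all hypothetical):

using System;
using System.Data;
using System.Data.SqlClient;

// Blank CSV fields arrive as empty strings; convert them to DBNull
// so decimal columns load as NULL instead of failing the cast.
foreach (DataRow row in dataTable.Rows)
    foreach (DataColumn col in dataTable.Columns)
        if ((row[col] as string) == "") row[col] = DBNull.Value;

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.MyCsvTable";

    // Map by name so ordinal mismatches don't cause conversion errors.
    foreach (DataColumn col in dataTable.Columns)
        bulk.ColumnMappings.Add(col.ColumnName, col.ColumnName);

    bulk.WriteToServer(dataTable);
}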
Sample code is
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=E:\Application\PASpready\Files\NK\NKAll.xlsx;HDR=YES;',
    'SELECT * FROM [All$]')
SELECT ..... FROM table
All the data comes over as text, and decimal numbers lose their format. How can I keep the data formats in Excel?
Try building that NKAll.xlsx file prior to exporting, as a template with a dummy row of data in the correct formats.
That might help Excel infer the types correctly. If that works, you can first update the dummy row and then insert all the rest; a rough sketch follows.
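A sketch of that flow (the ISAM driver supports UPDATE and INSERT against Excel, but not DELETE; the column names and the dummy-row marker here are hypothetical):

-- 1) Overwrite the dummy row, which exists only to carry the cell formats
UPDATE OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=E:\Application\PASpready\Files\NK\NKAll.xlsx;HDR=YES;',
    'SELECT * FROM [All$]')
SET Amount = 1.23
WHERE Id = 'dummy';

-- 2) Then insert the remaining rows as before
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=E:\Application\PASpready\Files\NK\NKAll.xlsx;HDR=YES;',
    'SELECT * FROM [All$]')
SELECT ..... FROM table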
I am parsing a CSV via the Microsoft.Jet.OLEDB.4.0 provider, which has been working fine for most of our tasks, but recently I've noticed an issue.
I have a CSV with a column called Rating. This is generally an integer, but occasionally it will be "1-2" or a date, e.g. "1/1/2010". The DataTable I am importing it into has had its columns explicitly set to strings, but when a non-integer field is read it comes through as null instead.
Any ideas how I can get around this?
Use a schema.ini file (in the folder that contains your .csv) and specify the column data types correctly.
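For example, a schema.ini along these lines, placed next to the CSV (the file and column names here are hypothetical):

[ratings.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=Id Long
Col2=Rating Text Width 50
Col3=Comment Text Width 255

Declaring Rating as Text stops the driver from guessing integer and nulling out values like "1-2".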
Likely what is happening is that the first few values in the column are being sniffed to determine its data type, and then any later values of a different type are dropped.
I believe you can turn off this behavior by adding IMEX=1 to the Extended Properties in your connection string. This sets the reader to Intermixed Mode, which reads the fields as text; you can then make another pass and set the types yourself.
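A sketch of such a connection string, assuming the CSV lives in C:\data (the Jet text driver takes the folder, not the file, as its Data Source; the file name goes in the Select):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data;
Extended Properties="text;HDR=Yes;FMT=Delimited;IMEX=1"

Select * From [ratings.csv]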
I'm using ExcelQueryFactory to retrieve the column names of a worksheet using C# 4.0. I'm able to see the list of column names, but why do I get an additional set of column names like F12, F100, F20, etc.? It happens with any reader I use to read my Excel file.
It's not a C# issue; it's an Excel issue.
If your columns do not have names, Excel provides names in the format F###, where the number is the column number and F is (I guess) short for Field.
Now, if any columns in the file at any time contained data, are referenced in formulas, or Excel just thinks they are important, you'll get them in the column list.
Just filter out any columns in that format, of course with the caveat that real columns genuinely named something like F7 will get the short end of the stick there.
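A minimal C# sketch of that filter (columnNames here is a hypothetical IEnumerable&lt;string&gt; coming from your reader):

using System.Linq;
using System.Text.RegularExpressions;

// Drop auto-generated names like F12 or F100; note that a real
// column that happens to be named F7 would be dropped too.
var autoName = new Regex(@"^F\d+$");
var realColumns = columnNames.Where(n => !autoName.IsMatch(n)).ToList();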