I'm trying to load multiple CSV files into a SQL database using SSIS, but I'm having issues reading the CSV files.
This is how the files come in:
,,,,
ID, Name, Amount, Important_Dates, Company
101, Mark,"157,500",11/18/19, Amazon, Inc
102,Tom, "388,000",11/14/19, Ebay Corp
103,Tim, "484,000",11/25/19, Wish
104,Richard,"384,750",10/5/19, NBA. INC
Every time I try to open the file with SSIS, the data gets shifted into the wrong columns because of the commas inside the values. I'm trying to read the file so that every value stays in its own column.
Note:
I'll be running this once or twice a day, which is why I'm trying to automate it.
And I already tried the code from this page, but it didn't work:
https://radacad.com/problem-with-comma-values-in-comma-delimited-file
A Script Task with C# code before the data flow, to handle the commas across all columns, might help, but I have no idea how to do that.
Any help or ideas would be really appreciated.
Thank you!
You should have a connection manager item for whatever the source of this data is, and the connection manager will allow you to set a text qualifier property. Do that, and SSIS should handle this data properly.
Your CSV is not quite clean. I have edited it so it reads:
ID,Name,Amount,Important_Dates,Company
101,Mark,"157,500",11/18/19,"Amazon,Inc"
102,Tom,"388,000",11/14/19,Ebay Corp
103,Tim,"484,000",11/25/19,Wish
104,Richard,"384,750",10/5/19,NBA. INC
Then you can change the separator using a classic command-line tool such as Miller (https://github.com/johnkerl/miller) to convert it from CSV to TSV, running
mlr --c2t cat input.csv >output.tsv
to get
ID Name Amount Important_Dates Company
101 Mark 157,500 11/18/19 Amazon,Inc
102 Tom 388,000 11/14/19 Ebay Corp
103 Tim 484,000 11/25/19 Wish
104 Richard 384,750 10/5/19 NBA. INC
And then change the import separator in SSIS, choosing Tab.
I am using ctreeACE to create a local database, and I was given a CSV file that contains 1000 entries of data. I want to know if there is a way to import it without hard-coding it.
Right now I am having to insert line by line with:
INSERT INTO testdata VALUES
('1ZE83A545192635139','2018-06-19 00:00:00',etc)
Note that ctreeACE only allows single row inserts with INSERT...VALUES (Source)
I can't find a way to do this directly, but you could use this tool to create your insert statements.
First input your data. You can load the csv directly, I just hardcoded two sample lines:
Next set your input options as needed. I used comma separators and ' as a quoting character in the example:
Third, set your output options. This would be a huge screenshot and is pretty self-explanatory so I'm leaving it out.
Last, click CSV to SQL Insert, and it will generate formatted insert statements (one line per insert) for you:
Hope that helps.
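If you'd rather script the generation than use the tool, a rough sketch in C# could look like this; it assumes a plain comma-delimited file with no embedded commas or quotes, reuses the testdata table name from the question, and emits every value as a quoted string (adjust for numeric/date columns as needed):

using System;
using System.IO;
using System.Linq;

class CsvToInserts
{
    static void Main()
    {
        // Placeholder input file; each non-empty line becomes one INSERT statement.
        foreach (var line in File.ReadLines("data.csv"))
        {
            if (string.IsNullOrWhiteSpace(line)) continue;

            // Naive split: only safe when no value contains an embedded comma or quote.
            var values = line.Split(',')
                             .Select(v => "'" + v.Trim().Replace("'", "''") + "'");

            Console.WriteLine("INSERT INTO testdata VALUES (" + string.Join(", ", values) + ");");
        }
    }
}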
I've tried researching this question, but most of the answers are for .csv files, which does not help me much.
I have a couple of large .dat files containing quite a lot of data (each file around 700MB), and I am trying to develop software in C# where I will be able to search for a specific string and locate the line where it occurs (duplicates will occur, so a ListView/ListBox might be a good idea).
Every line follows the exact same data format and the starting index/length of each datatype is well documented.
Example:
Line 1: ZATIXIZ20SWEDENSTACKOVERFLOWCHROME
Documented like this:
Username: 0-6
Age: 7-8
Country: 9-14
Website: 15-27
Browser: 28-33
My guess is that the best approach would be to do some kind of BULK INSERT on the data files into a database and then index it for faster searching later on. I am not quite sure how to do this though, nor what the best approach would be. (It also needs to search through all of the files so maybe it could be a good idea to insert them all into the same table?)
So far I have only tried to read one of the files into memory and then do a simple Regex which of course was not a good idea. Unfortunately I am a bit inexperienced with SQL queries which is why I have not tried a lot yet.
Thanks in advance!
Insert all data of the same type into a table with indexed columns.
If the properties vary between each file, use multiple tables.
If you want to be able to trace the match back to the original file, use a table with columns:
Key - Internal key from a sequence
FileName - So you know where it came from
Line - The line number
Username
Age
Country
Website
Browser
Where FileName, Line is a unique key.
Here is a link to an article on full-text searching in MSSQL, as we don't know which RDBMS you are using: http://msdn.microsoft.com/en-us/library/ms142571.aspx#queries
From your example, the line 'ZATIXIZ20SWEDENSTACKOVERFLOWCHROME' becomes:
| Key | FileName    | Line | Username  | Age | CountryKey | Website         | BrowserKey |
| 1   | 'Data1.dat' | 1    | 'ZATIXIZ' | 20  | 46         | 'STACKOVERFLOW' | 4          |
In this example, you'd need two more tables: Countries and Browsers. These are optional, as you could just include the information directly in the main table.
I must stress though, that it really depends on how you wish to query this data. The above structure gives you the opportunity to search for 'all Swedish users between 20 and 25' by performing the following query:
select * from TABLENAME where Age < 25 and Age >= 20 and CountryKey = 46
In regards to how you import a fixed-width file, it depends greatly on your RDBMS. If you're using Oracle, you can use SQL*Loader. Remember that it does not necessarily have to be a single-stage process: you can load the data into the tables and then look up the keys internally after the initial import.
For MSSQL, here is another answer from the stack: Bulk insert fixed width fields
You can also preprocess it in .NET. Again, it depends on your scenario. If you are piping these into your system at a rate of one 900MB file every 10 minutes, you're looking at some serious optimization of the bulk load process (and some serious hardware). But if you only need to load this file once a month, the .NET approach is absolutely fine, even though it may take a few hours.
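To illustrate the .NET preprocessing route, here is a minimal sketch (not from the original answer) that parses the documented fixed-width positions and bulk loads the rows with SqlBulkCopy; the connection string, file name and FixedWidthData table name are placeholders:

using System.Data;
using System.Data.SqlClient;
using System.IO;

class FixedWidthLoader
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Username", typeof(string));
        table.Columns.Add("Age", typeof(int));
        table.Columns.Add("Country", typeof(string));
        table.Columns.Add("Website", typeof(string));
        table.Columns.Add("Browser", typeof(string));

        // Positions from the documented layout:
        // Username 0-6, Age 7-8, Country 9-14, Website 15-27, Browser 28-33.
        foreach (var line in File.ReadLines("data1.dat"))
        {
            if (line.Length < 34) continue; // skip short or blank lines
            table.Rows.Add(
                line.Substring(0, 7).Trim(),
                int.Parse(line.Substring(7, 2)),
                line.Substring(9, 6).Trim(),
                line.Substring(15, 13).Trim(),
                line.Substring(28, 6).Trim());
        }

        // Placeholder connection string and destination table.
        using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
        using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "FixedWidthData" })
        {
            conn.Open();
            bulk.WriteToServer(table);
        }
    }
}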
Problem.
I regularly receive feed files from different suppliers. Although the column names are consistent, the problem comes when some suppliers send text files with more or fewer columns in their feed files.
Furthermore, the arrangement of the columns in these files is inconsistent.
Other than the Dynamic Data Flow Task provided by CozyRoc, is there another way I could import these files? I am not a C# guru, but I am leaning towards using a Script Task in the control flow or a Script Component in the data flow.
Any suggestions, samples or direction will be greatly appreciated.
http://www.cozyroc.com/ssis/data-flow-task
Some forums
http://www.sqlservercentral.com/Forums/Topic525799-148-1.aspx#bm526400
http://www.bidn.com/forums/microsoft-business-intelligence/integration-services/26/dynamic-data-flow
Off the top of my head, I have a 50% solution for you.
The problem
SSIS really cares about metadata, so variations in it tend to result in exceptions. DTS was far more forgiving in this sense. That strong need for consistent metadata makes using the Flat File Source troublesome.
Query based solution
If the problem is the component, let's not use it. What I like about this approach is that conceptually it's the same as querying a table: the order of columns does not matter, nor does the presence of extra columns.
Variables
I created 3 variables, all of type string: CurrentFileName, InputFolder and Query.
InputFolder is hard wired to the source folder. In my example, it's C:\ssisdata\Kipreal
CurrentFileName is the name of a file. During design time, it was input5columns.csv but that will change at run time.
Query is an expression: "SELECT col1, col2, col3, col4, col5 FROM " + @[User::CurrentFileName]
Connection manager
Set up a connection to the input file using the JET OLEDB driver. After creating it as described in the linked article, I renamed it to FileOLEDB and set an expression on the connection manager's ConnectionString of "Data Source=" + @[User::InputFolder] + ";Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=\"text;HDR=Yes;FMT=CSVDelimited;\";"
Control Flow
My Control Flow looks like a Data flow task nested in a Foreach file enumerator
Foreach File Enumerator
My Foreach File Enumerator is configured to operate on files. I put an expression on the Directory property of @[User::InputFolder]. Notice that at this point, if the value of that folder needs to change, it'll correctly be updated in both the connection manager and the file enumerator. In "Retrieve file name", instead of the default "Fully qualified", choose "Name and extension".
In the Variable Mappings tab, assign the enumerated value to our @[User::CurrentFileName] variable.
At this point, each iteration of the loop will change the value of @[User::Query] to reflect the current file name.
Data Flow
This is actually the easiest piece. Use an OLE DB source and wire it as indicated.
Use the FileOLEDB connection manager and change the data access mode to "SQL command from variable". Use the @[User::Query] variable there, click OK, and you're ready to work.
Sample data
I created two sample files, input5columns.csv and input7columns.csv. All of the columns of 5 are in 7, but 7 has them in a different order (col2 is the third column in one and the seventh in the other). I negated all the values in 7 to make it readily apparent which file is being operated on.
col1,col3,col2,col5,col4
1,3,2,5,4
1111,3333,2222,5555,4444
11,33,22,55,44
111,333,222,555,444
and
col1,col3,col7,col5,col4,col6,col2
-1111,-3333,-7777,-5555,-4444,-6666,-2222
-111,-333,-777,-555,-444,-666,-222
-1,-3,-7,-5,-4,-6,-2
-11,-33,-77,-55,-44,-666,-222
Running the package results in these two screen shots
What's missing
I don't know of a way to tell the query based approach that it's OK if a column doesn't exist. If there's a unique key, I suppose you could define your query to have only the columns that must be there and then perform lookups against the file to try and obtain the columns that ought to be there and not fail the lookup if the column doesn't exist. Pretty kludgey though.
Our solution: we use parent-child packages. In the parent package we take the individual client files and transform them to our standard format files, then call the child package to process the standard import using the file we created. This only works if the client is consistent in what they send, though; if they try to change their format from what they agreed to send us, we return the file.
I am currently working on a project for traversing an excel document and inserting data into a database using C#.
The relevant data for this project is:
The excel sheet has 14 rows at the top that I do not care about. (sometimes 15, see Russia/Siberia below)
The data is grouped by name into 2 columns (date and value), such as:
Sheet 1
USA China Russia
Date Value Date Value Siberia
1/1/09 4.3654 1/1/09 2.7456 Date Value
1/2/09 3.5545 1/3/09 9.3214 2/5/09 0.2454
1/3/09 3.2322 1/21/09 5.2234 2/6/09 0.5557
The name I need to acquire is whichever is listed directly above "Date".
I only care about data from dates we do not have in the database. Before each column set is parsed, I will acquire the max date for any given name from the database, and skip anything at or before it.
There is no guarantee that the columns will be in a constant order or have constant spacing.
I do not want data for all names, rather only those in a list I put together before the file is acquired.
My current plan is this:
For each column, if the date field is at row 16, save the name as the value in row 15 above it, check the database for the last date for that name, only insert data where the date is greater than the acquired date.
If the date field is at row 17, do the same thing, but start the for loop through each row at 18.
If the name is not in the list, skip the column. If it is, make sure to grab the column next to it for the necessary values.
My problem is:
I am currently trying to use the ExcelDataReader from CodePlex (http://www.codeplex.com/ExcelDataReader). This only likes CSV-like sheets, which this project does not have.
I do not know of any alternative Excel readers.
To the best of my knowledge, a straight FileStream traversal of this file can only go row-by-row, rather than column-by-column.
To anyone still reading, thank you for your time. Any recommendations on how to proceed? Please ensure that solutions can traverse each column, not each row.
Also, please don't worry about the database stuff, or the list of names that precedes the traversal.
Addendum: What I'd really like to end up with is some type of table that I can just traverse with a nested loop, making column-centric traversal much, much easier. Because there is so much garbage near the top of the sheet (14+ rows), most simple solutions are not feasible.
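A minimal sketch of the column-centric traversal being described, with a small in-memory array standing in for whatever the chosen reader returns (in the real sheet the headers start around row 15 or 16 rather than row 2):

using System;

class ColumnTraversalSketch
{
    static void Main()
    {
        // Stand-in for the loaded sheet: the country name sits directly above its "Date" header.
        object[,] cells =
        {
            { "USA",    null,    "Russia", null    },
            { "Date",   "Value", "Date",   "Value" },
            { "1/1/09",  4.3654, "1/1/09",  2.7456 },
            { "1/2/09",  3.5545, "1/3/09",  9.3214 },
        };

        for (int col = 0; col < cells.GetLength(1); col++)
        {
            for (int row = 1; row < cells.GetLength(0); row++)
            {
                if (Equals(cells[row, col], "Date"))
                {
                    string name = Convert.ToString(cells[row - 1, col]);
                    Console.WriteLine("Column {0} belongs to {1}", col, name);
                    // ...check the name against the list, then walk the date/value
                    // pairs downward from row + 1, skipping dates already in the database...
                    break;
                }
            }
        }
    }
}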
If you want to read from Excel in C#, I've used this library with great success; it gives you the flexibility to parse columns/rows however you'd like:
http://sourceforge.net/projects/koogra/ (read-only)
Other open-source libraries I haven't used but could be good:
http://nexcel.sourceforge.net/ (read-only)
http://npoi.codeplex.com/ (can read and write)
http://developer.novell.com/wiki/index.php/Poi.Net (this project is dead)
Alternatively, you can use one of the many good Java libraries, and convert it into a C# assembly using IKVM:
http://jxls.sourceforge.net/
http://www.andykhan.com/jexcelapi/
http://poi.apache.org/ (this one's the grand-daddy of java XLS libraries)
I've covered how to do the IKVM Java -> C# conversion here (it's really not as horrible an option as you think):
http://splinter.com.au/blog/?p=207
Not a straight answer to your question but an alternative idea:
Your data looks like a pivot-ish table. I'd recommend "unpivoting" it into a simple table.
Example:
Russia USA
Q1 123 323
Q2 456 321
Q3 567 843
Becomes:
Quarter Country Value
Q1 Russia 123
Q1 USA 323
Q2 Russia 456
....
If that is the case (not sure if I got this right from your question), then processing the data using an OLE DB driver or whatever CSV kind of tool should become much less painful.
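Not part of the original suggestion, but a small sketch of the unpivot idea in C#, assuming the pivot data is already in memory as column headers plus rows:

using System;
using System.Collections.Generic;

class UnpivotSketch
{
    static void Main()
    {
        string[] countries = { "Russia", "USA" };
        var rows = new Dictionary<string, double[]>
        {
            { "Q1", new[] { 123.0, 323.0 } },
            { "Q2", new[] { 456.0, 321.0 } },
            { "Q3", new[] { 567.0, 843.0 } },
        };

        // Emit one (Quarter, Country, Value) record per cell of the pivot table.
        Console.WriteLine("Quarter,Country,Value");
        foreach (var row in rows)
            for (int i = 0; i < countries.Length; i++)
                Console.WriteLine("{0},{1},{2}", row.Key, countries[i], row.Value[i]);
    }
}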
You can access Excel directly using ADO.NET via the ODBC driver. See http://www.davidhayden.com/blog/dave/archive/2006/05/26/2973.aspx or Google for more info on how to do that. You may wish to try HDR=No in your connection string, since your first rows aren't really proper headers by the looks of it.
I haven't done this for a while, but I remember that it is a bit "temperamental" and takes some playing around with to get the column names right, but it should work. Try SELECT * FROM [Sheet1$] and see what you get.
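A rough sketch of that query approach, shown here with the Jet OLE DB provider (the ODBC Excel driver works along similar lines); the file path and sheet name are placeholders:

using System;
using System.Data.OleDb;

class ExcelQuerySketch
{
    static void Main()
    {
        // HDR=No because the first rows aren't real headers; columns come back as F1, F2, ...
        string connect = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\book.xls;" +
                         "Extended Properties=\"Excel 8.0;HDR=No;IMEX=1\"";

        using (var conn = new OleDbConnection(connect))
        using (var cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                        Console.Write(reader[i] + "\t");
                    Console.WriteLine();
                }
            }
        }
    }
}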
I highly recommend saving this Excel document in CSV format before doing anything else with it. You can do that with a bit of code.
After you have a CSV, you can either parse it using that library, or write your own parser for it.
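The linked code isn't reproduced here, but the conversion itself can be done with a few lines of Excel interop, roughly like this sketch (it assumes a reference to Microsoft.Office.Interop.Excel, Excel installed, C# 4+ optional parameters, and placeholder paths):

using Excel = Microsoft.Office.Interop.Excel;

class SaveAsCsvSketch
{
    static void Main()
    {
        var app = new Excel.Application { DisplayAlerts = false };
        try
        {
            // Open the workbook and save its active sheet as CSV.
            Excel.Workbook wb = app.Workbooks.Open(@"C:\data\Book1.xls");
            wb.SaveAs(@"C:\data\Book1.csv", Excel.XlFileFormat.xlCSV);
            wb.Close(false);
        }
        finally
        {
            app.Quit();
        }
    }
}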
As I did before, I prefer to use an OLE DB connection to connect to an Excel document.
By the way, you can take a look at the following article for more information:
http://www.codeproject.com/KB/office/excel_using_oledb.aspx
SpreadsheetGear for .NET can load workbooks and access any cells on any sheet in any order. You can get the formatted text of the cell (such as "1/1/09") or the underlying value ("1/1/09" is stored as the double 39814.0 in Excel or SpreadsheetGear).
You can see some live ASP.NET samples here and download the free trial here if you want to try it yourself.
Disclaimer: I own SpreadsheetGear LLC
How do I import data into Excel from a CSV file using C#? Actually, what I want to achieve is similar to what we do in Excel: you go to the Data tab, select the From Text option, then use the Text to Columns option, select CSV, and it does all that magic. I want to automate it.
If you could point me in the right direction, I'd really appreciate it.
EDIT: I guess I didn't explain it well. What I want to do is something like:
Excel.Application excelApp;
Excel.Workbook excelWorkbook;
// open excel
excelApp = new Excel.Application();
// something like
excelWorkbook.ImportFromTextFile(); // is what I need
I want to import that data into Excel, not into my own application. As far as I know, I shouldn't have to parse the CSV myself and then insert the values into Excel; Excel does that for us. I simply need to know how to automate that process.
I think you're overcomplicating things. Excel automatically splits data into columns by comma delimiters if it's a CSV file. So all you should need to do is ensure your file extension is .csv.
I just tried quickly opening a file in Excel and it works fine. So what you really need is just to call Workbooks.Open() with a file that has a .csv extension.
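In interop terms that is roughly the following sketch (assuming a reference to Microsoft.Office.Interop.Excel and a placeholder path):

using Excel = Microsoft.Office.Interop.Excel;

class OpenCsvSketch
{
    static void Main()
    {
        var excelApp = new Excel.Application { Visible = true };

        // Excel splits the comma-delimited file into columns as it opens it.
        Excel.Workbook workbook = excelApp.Workbooks.Open(@"C:\data\import.csv");

        // Optionally re-save it as a regular workbook afterwards:
        // workbook.SaveAs(@"C:\data\import.xlsx", Excel.XlFileFormat.xlOpenXMLWorkbook);
    }
}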
You could open Excel, start recording a macro, do what you want, then see what the macro recorded. That should tell you what objects to use and how to use them.
I believe there are two parts: one is the split operation for the CSV, which another responder has already picked up on and which I don't think is essential, but I'll include it anyway. The big one is writing to the Excel file, which I was able to get working, but only under specific circumstances, and it was a pain to accomplish.
CSV is pretty simple: you can do a string.Split on a comma separator if you want. However, this method is horribly broken, albeit I'll admit I've used it myself, mainly because I also have control over the source data and know that no quotes or escape characters will ever appear. I've included a link to an article on proper CSV parsing; however, I have never tested the source or fully audited the code myself. I have used other code by the same author with success. http://www.boyet.com/articles/csvparser.html
The second part is a lot more complex and was a huge pain for me. The approach I took was to use the Jet driver to treat the Excel file like a database and then run SQL queries against it. There are a few limitations which may cause this to not fit your goal. I was looking to use prebuilt Excel file templates to basically display data and some preset functions and graphs. To accomplish this I have several tabs of report data, and one tab which is raw_data. My program writes to the raw_data tab, and all the other tabs' calculations point to cells in this table. I'll go into some of the reasoning for this behavior after the code:
First off, the imports (not all may be required, this is pulled from a larger class file and I didn't properly comment what was for what):
using System.IO;
using System.Diagnostics;
using System.Data.Common;
using System.Globalization;
Next we need to define the connection string. My class already has a FileInfo reference at this point to the file I want to use, so that's what I pass in. You can search on Google for what all the parameters mean, but basically it uses the Jet driver (which should be available on any Windows install) to open an Excel file as if you were referring to a database.
string connectString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={filename};Extended Properties=""Excel 8.0;HDR=YES;IMEX=0""";
connectString = connectString.Replace("{filename}", fi.FullName);
Now let's open up the connection to the DB, and be ready to run commands on the DB:
DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.OleDb");

using (DbConnection connection = factory.CreateConnection())
{
    connection.ConnectionString = connectString;
    using (DbCommand command = connection.CreateCommand())
    {
        connection.Open();
Next we need the actual logic for the DB insertion. Basically, throw queries into a loop or whatever your logic is, and insert the data row by row.
        string query = "INSERT INTO [raw_aaa$] (correlationid, ipaddr, somenum) VALUES ('abcdef', '1.1.1.1', 10)";
        command.CommandText = query;
        command.ExecuteNonQuery();
Now here's the really annoying part: the Excel driver tries to detect your column types before the insert, so even if you pass a proper integer value, if Excel thinks the column type is text, it will insert all your numbers as text, and it's very hard to get them treated as numbers afterwards. As such, Excel must already have the column typed as a number. In order to accomplish this, for my template file I fill in the first 10 rows with dummy data, so that when you load the file in the Jet driver, it can detect the proper types and use them. Then all my formulas that point at my raw data table will operate properly, since the values are of the right type. This may work for you if your goals are similar to mine and you use templates that already point to this data (just start at row 10 instead of row 2).
Because of this, my raw_aaa tab in excel might look something like this:
correlationid ipaddr somenum
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
abcdef 1.1.1.1 5
Note that row 1 contains the column names that I referenced in my SQL queries. I think you can do without this, but that will require a little more research. By already having this data in the Excel file, the somenum column will be detected as a number, and any data inserted will be properly treated as such.
Another note that makes this annoying: the Jet driver is 32-bit only, so in my case, where I had an explicitly 64-bit program, I was unable to execute this directly. So I had the nasty hack of writing to a file, then launching a program that would insert the data from that file into my Excel template.
All in all, I think the solution is pretty nasty, but so far I haven't found a better way to do this, unfortunately. Good luck!
You can take a look at TakeIo.Spreadsheet .NET library. It accepts files from Excel 97-2003, Excel 2007 and newer, and CSV format (semicolon or comma separators).
Example:
var inputFile = new FileInfo("Book1.csv"); // could be .xls or .xlsx too
var sheet = Spreadsheet.Read(inputFile);

foreach (var row in sheet)
{
    foreach (var cell in row)
    {
        // do something
    }
}
You can remove leading and trailing empty rows, and also leading and trailing empty columns, from the imported data using the Normalize() function:
sheet.Normalize();
Sometimes you can find that your imported data contains empty rows between data, so you can use another helper for this case:
sheet.RemoveEmptyRows();
There is a Serialize() function to convert any input to CSV too:
var outfile = new StreamWriter("AllData.csv");
sheet.Serialize(outfile);
If you like to use comma instead of the default semicolon separator in your CSV file, do:
sheet.Serialize(outfile, ',');
And yes, there is a ToString() function too...
This package is available at NuGet too, just take a look at TakeIo.Spreadsheet.
You can use ADO.NET
http://vbadud.blogspot.com/2008/09/opening-comma-separate-file-csv-through.html
Well, importing from CSV shouldn't be a big deal. I think the most basic method would be to do it using string operations. You could build a pretty decent parser using the simple Split() method and get the values into arrays.
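For example, a bare-bones version of that, with the usual caveat that a plain Split() breaks on quoted values containing commas:

using System;
using System.IO;

class NaiveCsvParse
{
    static void Main()
    {
        foreach (string line in File.ReadAllLines("input.csv"))
        {
            // Works only when no field contains an embedded comma or quote.
            string[] fields = line.Split(',');
            Console.WriteLine(string.Join(" | ", fields));
        }
    }
}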