I'm creating a package to load an .xlsx file into a SQL Server destination table. After a Lookup against a reference table, I used a Derived Column to write the name of each unmatched column to a reject table. Everything works up to this point.
Next, I need to use the same source file again, this time as a destination, and append a new column after the existing ones, where I will put the column name that I stored in my reject table (the one that didn't match).
My issue: when I go to the Advanced Editor for the Excel Destination => Input and Output Properties => External Columns => Add Column, add a new column, and go back to Column Mappings, I can do the mapping, but after validation the new column disappears.
I use a variable sheet name, and I've also tried a SQL command; even when I select a wider range, A1:AC1 (my data occupies A1:AB1), no extra column becomes available.
Using EntityFramework Reverse POCO Generator v2.26.0, I cannot find where to change the .tt file to stop the column renaming when generating the POCOs. I suspect it is in UpdateColumn, which I've reduced to the single line:
UpdateColumn = (Column column, Table table) => column;
But still the columns get renamed from e.g. "Batch_ID" to "BatchId".
Without stopping the column rename, I'm getting the error:
The data reader is incompatible with the specified 'DocumentExport.DataAccess.Databases.Batches.Batch'. A member of the type, 'BatchId', does not have a corresponding column in the data reader with the same name.
How does one stop column renaming during POCO generation?
In database.tt, set:
UsePascalCase = false; // This will rename the generated C# tables & properties to use PascalCase. If false, table & property names are left alone.
While this suppressed the column renaming, it also affected table names and possibly other things.
I have a C# application with a SQL Server database.
One feature of the application reads a big Excel file and goes row by row, trying to match an ID column against an ID in the database. Once a row matches, some of its values are stored in a table.
Later the user can change the values in that table, but I would like a way to audit the original values, so I was thinking of creating an XML column in that table to store the original values of the whole Excel row.
I don't need a full audit trail on the table; I just want to keep the original Excel row in my SQL Server table for future audits.
Any advice?
External storage of the old values is too hard to maintain in the long run.
All you need to do is add an OriginalValue_[fieldname] column to the SQL table for each value the user can change. When you populate the table, put the same value in [fieldname] and OriginalValue_[fieldname]. When the user makes changes, you only change [fieldname]; the old value stays right there in the table.
If this won't work (e.g. you're not allowed to make schema changes), make another table with the ID fields and the original values. Same approach: no external data.
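A minimal sketch of the shadow-column approach, assuming a table named ImportedRows with one user-editable field, Amount (all names here are hypothetical):

```sql
-- One shadow column per user-editable field (names are hypothetical).
ALTER TABLE ImportedRows ADD OriginalValue_Amount decimal(18, 2) NULL;

-- On the initial load, write the same value to both columns.
INSERT INTO ImportedRows (ExternalId, Amount, OriginalValue_Amount)
VALUES (@ExternalId, @Amount, @Amount);

-- Later edits touch only the live column; the original is preserved.
UPDATE ImportedRows
SET Amount = @NewAmount
WHERE ExternalId = @ExternalId;
```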
I need to transform the values of a single column (columnA) in TABLE A in one database into column B in TABLE B in another database.
The transferred columnA contains, for example, employee names, but in the destination TABLE B it should hold the employee ID. I have a lookup table that maps each employee name to its employee ID.
Conditions:
Only one column should be updated in the destination TABLE B, without affecting any other columns. (Is it possible to insert that way? As far as I know, insertion involves all the columns.)
I know this can be done in SSIS, and I have created:
* OLE DB Source
* Lookup Transform
* OLE DB Destination
But at the destination, TABLE B ends up with NULL values in the unmatched columns.
Can someone please guide me on the best way to do this?
Similarly, I need to add flows into the destination table from various other databases.
You need to use a SQL command as your destination and have it run an UPDATE statement that sets the EmployeeID column in your destination table to the ID from the lookup table. If both tables are on the same database, you can do the join directly in the command; otherwise, use a Lookup transformation to get the ID based on the name.
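As a sketch, the per-row version can be an OLE DB Command in the data flow, and the same-database version a single set-based UPDATE. The table and column names (TableB, EmployeeLookup, RowKey, EmployeeName, EmployeeID) are assumptions, and the joined version assumes TABLE B still carries the name column to join on:

```sql
-- Per-row: OLE DB Command destination in the data flow; the parameters
-- are the looked-up EmployeeID and the row's key column.
UPDATE dbo.TableB SET EmployeeID = ? WHERE RowKey = ?;

-- Same database: one joined, set-based UPDATE instead of a data flow.
UPDATE b
SET    b.EmployeeID = l.EmployeeID
FROM   dbo.TableB AS b
JOIN   dbo.EmployeeLookup AS l
  ON   l.EmployeeName = b.EmployeeName;
```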
How do you change the data type of a column in SQL Server when the column contains millions of rows?
I tried this, which should work:
alter table employee
alter column dob datetime
but I get an error.
That is, when the table contains a huge amount of data, you may not be able to change the data type in place.
Please take the following steps:
1. Create a new column with the desired data type and a new name.
2. Update that column with converted values from the old one.
3. Remove the old column.
4. Rename the new column to the old name.
If you go this way, you can see where the problem is:
step 1: you have no rights?
step 2: there is some problem with the conversion
step 3: I guess there is a foreign key or index holding the column in place
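A sketch of those steps for the dob example above; the old varchar type and the use of TRY_CONVERT (SQL Server 2012+) are assumptions:

```sql
-- 1. Create a new column with the desired data type.
ALTER TABLE employee ADD dob_new datetime NULL;

-- 2. Copy values across, converting; TRY_CONVERT yields NULL for values
--    that cannot be converted instead of failing the whole UPDATE.
UPDATE employee SET dob_new = TRY_CONVERT(datetime, dob);

-- 3. Drop the old column (drop any index or constraint on it first).
ALTER TABLE employee DROP COLUMN dob;

-- 4. Rename the new column to the old name.
EXEC sp_rename 'employee.dob_new', 'dob', 'COLUMN';
```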
I am using a SQLite database. Since dropping, renaming, and reordering columns are not supported in SQLite via ALTER TABLE commands, I am writing custom methods for the following tasks (taking a backup of the existing table, then creating a new table matching the requirements, and so on), as described in several other threads.
The following are the DB operations:
DROP columns
RENAME columns (and change datatypes)
ADD columns
REORDER columns.
I am wondering in what order these operations should be done. My confusion is mainly about whether DROP should come before RENAME, or the other way around.
I also need some pointers on how to rename columns (including changing their datatypes) and move the data around.
Any thoughts?
First, create a backup of your database in case something goes wrong during the following instructions.
The order of operations should be CREATE newtable, INSERT INTO newtable, DROP oldtable.
Create a new table with the correct column names and datatypes for each column. Then just do something like this:
INSERT INTO new_table (column1, column2, ...)
SELECT column1, column2, ...
FROM old_table;
You may need to perform casts on different datatypes if the new datatype isn't directly compatible with the old datatype.
You will need to make sure that the columns you select from your old table are in the same order as the columns listed in your INSERT INTO statement.
After data is inserted into your new table, you can then verify that all data has been inserted correctly. Be sure to update any foreign key references in your other tables to avoid any issues with foreign key constraints.
Then you can drop the old table and rename the new table to the old name:
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
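Putting it all together, a sketch of a full rebuild for a hypothetical table t (dropping obsolete, renaming old_name to new_name with a type change, and adding extra), run inside a transaction with foreign keys disabled so the intermediate states don't trip constraint checks (the PRAGMA must be issued outside the transaction to take effect):

```sql
PRAGMA foreign_keys = OFF;
BEGIN TRANSACTION;

-- New schema: the drop, rename/retype, add, and reorder all happen here.
CREATE TABLE t_new (
    id       INTEGER PRIMARY KEY,
    new_name TEXT,              -- was old_name INTEGER
    extra    TEXT DEFAULT ''
);

INSERT INTO t_new (id, new_name, extra)
SELECT id, CAST(old_name AS TEXT), ''
FROM t;

DROP TABLE t;
ALTER TABLE t_new RENAME TO t;

COMMIT;
PRAGMA foreign_keys = ON;
```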