I have two script components that extract data from result set objects, say User::AllXData and User::AllYData.
Each is run through a foreach loop and the data is stored in a data table.
Next, I'm adding the data into an Excel sheet using an Excel destination. When I do that, all the data corresponding to column A (i.e., the data from User::AllXData) is added to the Excel sheet, but column B is filled with null values until the end of column A's data.
Then column B's data is added, leaving column A with nulls. The two columns are supposed to be aligned.
Is there a workaround for this?
Edit:
After a lot of grinding and running many tests, I finally came across a solution.
The answer to this is pretty simple. Instead of using two objects as result sets, it's better to use only one.
If you're going to query from a single source, include all the required columns in your SQL query, store the result in one object result set, and use that as a read-only variable in the script component.
Create a single data table that includes all the required columns, and add its rows to your Excel destination row by row, without any null values.
Here's an article that has a good example.
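For illustration, here is a rough sketch of that pattern, assuming a single object variable User::AllData holds the combined result set (the variable and column names are placeholders, not from the original post):

    using System.Data;
    using System.Data.OleDb;

    // Shred the single ADO recordset into one DataTable so the X and Y
    // values stay paired on the same row. "User::AllData" is an assumed
    // name for the combined result set variable.
    DataTable table = new DataTable();
    OleDbDataAdapter adapter = new OleDbDataAdapter();

    // Script Task syntax shown here; a data-flow script component exposes
    // the variable through its ReadOnlyVariables property instead.
    adapter.Fill(table, Dts.Variables["User::AllData"].Value);

    // Every DataRow now holds both columns, so the Excel destination
    // receives aligned rows rather than two staggered column blocks.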
I have an Excel file (.xlsx) that I will get from a client. After a certain number of data rows it will have a blank row and then a generic comment; the starting word of that phrase will be the same every time, but not the whole phrase.
I want SSIS to process only up to row 5 (including the header as column names) and not process lines 6 and 7. The data changes every time, so I can't even take a fixed range in this case.
I have the flexibility to do it either in on-prem SSIS or in ADF.
I tried using filters, but since the text in the cell is split across 4-5 lines, it only ignores the first line.
I would appreciate any help.
Using an Azure Data Factory data flow, you can use a filter transformation to ignore certain rows based on conditions.
Example:
Excel input:
ADF Data flow:
Create a source Excel dataset. If your first row is not a header, do not enable the First row as header property in the dataset. By default, the empty rows will be skipped while reading.
Add a source transformation and connect it to the Excel dataset. If you do not have a header row in the data, default column names (like _col0_) will be given to your columns.
Add a filter transformation after the source to filter out the unwanted rows.
Expression: not(like({_col0_},'This%'))
Filter output:
Using SSIS, in your Excel source you can use the SQL command data access mode and write a query to pull the data from Excel. If you have any mandatory columns, use a WHERE condition to pull only non-null rows from the file (e.g. SELECT * FROM [Sheet1$] WHERE [column1] IS NOT NULL). Alternatively, use a conditional split transformation to filter the required rows from the Excel source.
I am trying to create an Excel file via C# in which the cells that contain the same data are merged. This is the easy part; the hard part is to make Excel display all the rows of the merged cells with the same value when filtering on that value.
For example, for this data:
I expect this result after filtering the value "bla":
The result I would normally get after filtering merged cells is only the first row with this value (in this case, the row that contains the value "1").
I have found this solution, which is inelegant and inefficient:
Create two copies of the data; in the first copy merge cells with the same values, and in the second copy don't merge them.
Use the format painter to copy the format of the merged version of the data onto the unmerged version (apparently this keeps the data of the cells but displays them as merged).
Delete the merged copy of the data.
I guess I can program this solution, but there must be a better way to solve this problem.
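If you do program it, a rough interop sketch of those three steps might look like this (the range addresses and method name are illustrative assumptions):

    using Excel = Microsoft.Office.Interop.Excel;

    // Sketch of the copy-format workaround: paste only the formatting of
    // a merged scratch range onto the unmerged data, then delete the
    // scratch copy. Range addresses are placeholders.
    static void ApplyMergedLook(Excel.Worksheet sheet)
    {
        Excel.Range mergedCopy = sheet.Range["D1", "E4"]; // scratch copy, cells merged
        Excel.Range realData = sheet.Range["A1", "B4"];   // unmerged data to keep

        // Copy only the formats (the merged-cell look) onto the unmerged
        // cells; each row keeps its own value, so filtering still works.
        mergedCopy.Copy();
        realData.PasteSpecial(Excel.XlPasteType.xlPasteFormats);

        // Remove the merged scratch copy.
        mergedCopy.Clear();
    }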
I have a data table
and it contains some int-type columns, some double-type columns, and some date-type columns.
What I am trying to do is:
I want to do double.TryParse for each double column, and if a value fails to parse, store DBNull in the corresponding row.
I will do the same for the date and int columns.
Since my data table could have 100,000 records, I don't want to run a loop over each row.
Is this possible through LINQ, or with any other method?
Thank You
LINQ is not good for batch operations. You should create a stored procedure in your DB and import it into your model (if you are using EF, that is an import function; if you are using LINQ to SQL, a simple drag and drop will do it).
LINQ is no silver bullet for all problems where you need to loop over a (maybe very large) set of data. So if you want to go over each row and change the values depending on some condition, a foreach loop is your friend.
LINQ is a query language to retrieve data, not some kind of super-fast way to alter large lists or other enumerable objects. It comes in handy if you want to get data from a given object applying some conditions, or do a GroupBy, without ending up in a 20-line unreadable mash of foreach loops and if statements.
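For illustration, a minimal foreach sketch of the double-column check from the question, assuming the values arrive as strings (the method and column names are placeholders):

    using System;
    using System.Data;

    // Walk one column and replace values that don't parse as double with
    // DBNull; the same pattern applies to the int and date columns.
    static void CleanDoubleColumn(DataTable table, string columnName)
    {
        foreach (DataRow row in table.Rows)
        {
            object value = row[columnName];
            if (value == DBNull.Value)
                continue;

            double parsed;
            if (!double.TryParse(Convert.ToString(value), out parsed))
                row[columnName] = DBNull.Value;
        }
    }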
It doesn't matter whether you do it in a loop or with LINQ; you'll still need to iterate over the entire data table ...
There's no silver bullet that will save you from doing the checks and inserts, I'm afraid.
I am using a dataset and a table adapter to populate a DataGridView. In my SQL statement I am using the RTrim function on two of the columns, and for both of them I am aliasing the result to the same name as the original column.
This works, but then I cannot update the data using the dataset, because the trimmed values are read-only.
What I want is to fill a datagridview with trimmed values, and then be able to update using the same dataset. This seems simple, yet it will not allow me to do this. Everything updates except the two columns that I used Trim on.
Here is the SQL statement I am using.
SELECT
PK, RTRIM(Description) AS Description, ContractNumber,
RTRIM(Status) AS Status, Active
FROM
ConstructionProjects
ORDER BY
CASE WHEN ContractNumber > 0
THEN ContractNumber
ELSE 99999
END
I know I can easily trim the cells on the client side in the Windows app, but I was looking for a way to do this on the SQL side, in the query. Is there an easy way to do this and still be able to call the Update method?
Thanks,
Matt Fomich
A possible workaround: try loading the Description and Status columns without the trimming operation, then hide them in your grid view. When you update a trimmed (and visible) column, copy the value back to the untrimmed (and hidden) column in the same row. Then the update should work as usual.
For this you would need to change the names of the trimmed Description and Status columns so they don't collide with the untrimmed originals.
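For illustration, a rough sketch of that copy-back step in a CellEndEdit handler, assuming the trimmed columns keep the original names and the hidden untrimmed ones are aliased DescriptionRaw and StatusRaw (all names here are assumptions):

    using System.Windows.Forms;

    // Wired up as dataGridView1.CellEndEdit. Mirrors edits from the
    // visible trimmed columns into the hidden untrimmed columns so the
    // table adapter's Update sees the change. Column names are assumed.
    private void dataGridView1_CellEndEdit(object sender, DataGridViewCellEventArgs e)
    {
        var grid = (DataGridView)sender;
        string name = grid.Columns[e.ColumnIndex].Name;

        if (name == "Description")
            grid.Rows[e.RowIndex].Cells["DescriptionRaw"].Value =
                grid.Rows[e.RowIndex].Cells["Description"].Value;
        else if (name == "Status")
            grid.Rows[e.RowIndex].Cells["StatusRaw"].Value =
                grid.Rows[e.RowIndex].Cells["Status"].Value;
    }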
I am stuck at this point. I searched a lot on Google but did not find anything.
My problem is:
I have an Excel file that I want to export to a DataTable, and from the DataTable I want to save it to an Oracle DB.
The Excel file contains multiple columns, and some columns hold large data (approximately 20,000 characters/numbers per cell).
Using OleDbConnection, Excel columns with such large data are not copied to the DataTable (columns with small data get copied).
Can anyone suggest a workaround for my problem?
Thanks in advance.
Check the data types and their lengths, e.g. nvarchar(3000).
If that doesn't work, test with a small set of data, maybe 5 rows; you should be able to see a trend there.
Also check the data types in your application: sometimes for large numbers you may need long, and if they are large strings, maybe use a StringBuilder to pass the data instead of just a string...
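For reference, a minimal sketch of the OleDb read the question describes (the file path and sheet name are placeholders, and the IMEX=1 hint is an assumption worth testing against the truncation):

    using System.Data;
    using System.Data.OleDb;

    // Minimal sketch: read an .xlsx sheet into a DataTable via OleDb.
    // IMEX=1 asks the ACE driver to treat mixed-type columns as text,
    // which is worth testing when long values come through truncated
    // or empty. Path and sheet name are placeholders.
    string connStr =
        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\input.xlsx;" +
        "Extended Properties=\"Excel 12.0 Xml;HDR=YES;IMEX=1\";";

    var table = new DataTable();
    using (var conn = new OleDbConnection(connStr))
    using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
    {
        adapter.Fill(table); // Fill opens and closes the connection itself
    }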