This is an add-on to this question, which was asked previously with no answer.
The problem I have is a need to occasionally update a set of 35 child SSIS packages that share one parent. They are all the same, differing only in what data they process. When I make a change, I delete all the children and paste them again into the same folder, updating the value of a variable that tells each package which child it is so it knows which data to process (a value of 1-35).
My goal is to find a solution that lets the packages somehow be aware of who they are (by file name, variable, configuration, etc.), which would cut down on maintenance and setup for production after an update.
The file names of the packages keep the appended number after the paste (packagename 1, packagename 2, ... packagename X) in the same folder. I am using package deployment in SSIS 2012, so I don't have access to the file name as a parameter like I would with project deployment. All of the packages are in an SSDT solution with a parent package calling all 35 children. With package deployment, I'm using configurations in a SQL table to change the file path as the solution is promoted from server to server.
I'd love to automate other things related to the children, but I can't until I get this part solved. Also, I need to add another 15 children or so, and this would save a LOT of time.
Any help is appreciated.
Have you tried using environment variables, and starting the packages with different parameters?
Packages_with_Parameter_from_Environments
(Sorry I am not allowed to comment.)
"update a set of 35 child SSIS packages with one parent. They are all the same, differing only in what data they process."
It seems like you shouldn't be using 35 different copies of the same package as children; instead, you should just use parameters to solve the problem.
If the processing option is encoded in the filename, you can use a filename parameter with a mask to pull out the values in a Foreach Loop and feed those parameters into the package being called (see the Script Task sketch below). If not, you can store the processing options in a SQL table along with the file name and parameters, and have the parent package pull that information out and use it to call the child packages.
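To make the filename route concrete, here is a minimal Script Task sketch. The variable name User::ChildNumber is an assumption, and the trailing-number convention is taken from the question's "packagename 1 ... packagename X" scheme:

// Script Task sketch (C#): recover the child number from the package's own
// name ("packagename 1" ... "packagename 35") so each copy knows who it is.
// System::PackageName goes in ReadOnlyVariables and User::ChildNumber in
// ReadWriteVariables in the task editor.
using System.Text.RegularExpressions;

string packageName = Dts.Variables["System::PackageName"].Value.ToString();
Match match = Regex.Match(packageName, @"(\d+)\s*$");  // trailing number
if (match.Success)
{
    Dts.Variables["User::ChildNumber"].Value = int.Parse(match.Groups[1].Value);
}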
Related
I'm developing in C# in Visual Studio. I was able to configure TortoiseSVN to use the $Rev$ keyword inside the main .cs file, and as I tested, every time this .cs file is changed, the latest revision number is updated in the file.
But further tests showed that if any other file is changed and committed, this .cs file doesn't get the latest revision number.
I'm now thinking of using Visual Studio's standard \Properties\AssemblyInfo.cs file to store SVN's $Rev$ value, and reading it through the assembly metadata as I already do with FileVersion. This also has the advantage of keeping version/revision metadata separate from normal source code.
But then, is there a way to set up Subversion/TortoiseSVN to always update the $Rev$ value in a given file where it is present, even when that file hasn't changed?
Update: what I'm looking to achieve is to automatically store the last revision at which the SVN folder was committed. This number will then be part of the compiled binary (which is generated after the commit), shown in logs together with the app version, and used across the DEV, TQS, and PRD environments. This way each log records which revision a given execution referred to, and I can see what has changed between that execution and the current HEAD.
Of course, this is app-wide information, not something specific to each source file. But I think SVN won't support updating a specific file every time a commit happens.
My best idea, a very ugly one, would be to create a static class and, during startup, have each object call it as it is instantiated and report its revision. The class would store the maximum value, which would then be the app's compiled revision.
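If the AssemblyInfo.cs route works out, reading the baked-in revision back at runtime could look roughly like this sketch (it assumes the SVN revision lands in the fourth field of the file version):

using System.Diagnostics;
using System.Reflection;

// Reads the file version baked in via AssemblyInfo.cs, assuming the SVN
// revision was written into the fourth (private part) field.
static string GetCompiledRevision()
{
    string location = Assembly.GetExecutingAssembly().Location;
    FileVersionInfo info = FileVersionInfo.GetVersionInfo(location);
    return info.FilePrivatePart.ToString();  // the <rev> in 1.2.0.<rev>
}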
I am new to app development and am doing a proof-of-concept app in Visual Studio 2017 Community. The aim is to have Android and iOS versions.
The app's purpose is to record five exercise results per test and give a pass/fail result based on a set of targets per exercise. The test targets depend on the age, gender, and level of the person completing the test.
For example, the test targets for a 25-year-old male at level 1 may be:
Push-ups: target 22
Shuttle runs in 60 seconds: target 20
And so on.
I expect to have about two hundred rows of targets to cover all the variations of user: age, gender, and level. Since I am already using SQLite in the project to store student info and results, I am thinking about how to seed the initial table of targets. If I ship a static resource file, I can read it (XML/CSV/JSON) on first start to seed the table of targets, and later replace the file and use an app setting to signal that a re-seed is required. But I am concerned about "bloating" the app size, and I wonder which format is more efficient to read in.
These targets will not change very often, but they may be reviewed once a year and changed.
In WPF I would create a CSV or JSON file with this data as a resource and read it into a plain C# class that models the targets. However, from my reading there is concern that such static files "bloat" the size of the finished app, that building the list of targets at runtime is slow, and that there is no native CSV library.
I would also like to be able to import new data (targets) into the resource file.
What is the most efficient way to achieve this, please?
If I understand your scenario correctly and you need to save five targets for each user, then Xamarin.Essentials: Preferences might be worth considering. It uses SharedPreferences on Android and NSUserDefaults on iOS.
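Usage is just a couple of lines; the key names here are made up for illustration:

using Xamarin.Essentials;

// Store a target, then read it back; 0 is the fallback default.
Preferences.Set("PushUpsTarget", 22);
int pushUps = Preferences.Get("PushUpsTarget", 0);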
If you have more information to save, then SQLite seems like a good option.
Consider the option of shipping an "initial" DB file as a resource, prepared with both the schema and the seed data. At first startup, the app just needs to copy it to the documents directory and use it (a simple binary copy of the resource as-is).
Later on, you will need some kind of code snippet that merges changed data from a new "initial" DB file into the local one in the documents directory.
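A first-startup copy could look roughly like this sketch; the file names are placeholders, and how the bundled resource is opened differs between Android assets and the iOS bundle:

using System;
using System.IO;

// Copy the bundled seed database to the app's local folder once,
// then open the local copy with SQLite from there on.
string dbPath = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "targets.db3");

if (!File.Exists(dbPath))
{
    // On Android the seed would come from Assets.Open("seed.db3");
    // on iOS, from the main bundle. Shown here as a plain file copy.
    using (var seed = File.OpenRead("seed.db3"))
    using (var dest = File.Create(dbPath))
    {
        seed.CopyTo(dest);
    }
}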
I have lots of requirements to generate and deliver files with different extensions, such as .xlsx, .txt, and .csv.
I am not good at C#, but I am assuming there is an SSIS package with a Script Task where you just have to change variable values such as Extension, StoredProcedureName, etc., and it will dynamically create a file with the given extension and insert the output of that stored procedure into the file.
Has anybody encountered an SSIS package template that does something like that?
Thank you
Because of how an SSIS data flow works, what you are looking for doesn't exist. The metadata (the types, number of columns, etc.) is tightly bound to the design-time experience of an SSIS package. Attempting to change it at run time results in a validation error (VS_NEEDSNEWMETADATA).
Even if you could get changing metadata to work, the output becomes the next problem. You can change the file name, the target table name, or any of a host of other things at run time. What you cannot change, though, is the target itself: a package can't emit a flat file on one run and then, with a flag flipped, generate an Excel file on the next.
If you have coding skills, you could create the package on the fly based on metadata and then run it, but that's not a fun task due to the mix of COM and .NET objects involved.
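For reference, loading and executing an existing package from C# is the easy part; generating one with the right metadata is where the pain lives. A rough sketch (the path and variable name are placeholders):

using Microsoft.SqlServer.Dts.Runtime;

// Load a saved .dtsx from disk, set a variable, and execute it.
// Building a package from scratch uses this same object model plus
// the pipeline COM wrappers, with far more plumbing.
var app = new Application();
Package pkg = app.LoadPackage(@"C:\Packages\Export.dtsx", null);
pkg.Variables["User::Extension"].Value = ".csv";
DTSExecResult result = pkg.Execute();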
If it were me, I would look at something like Biml. Use the metadata you have to build packages that address all the possible permutations (source query A to Excel, flat file, CSV, etc.). Once you have the pattern down, you make a package for every possible source query. Then you've reduced your problem to orchestration: which package do I run?
I have to make the same program for two different companies (different logo, different name).
I would like to have this done automatically when I do a release.
I am thinking of using a resource file or a satellite assembly, but I am not sure how to proceed.
Which way do you think is the best?
Thanks.
EDIT:
The differences are only one image and one string.
I would like to have everything generated in one click.
We might get more clients in the future.
I must also add that I use SmartAssembly to merge all my dependencies into one exe file.
You could create a class library that contains 99% of the code, and then have two projects which each reference the common library with the 1% that differs for each company. Then a build of the whole solution will generate an executable for each company. Use this if the differences between what each company wants is just enough that you need to have [slightly] different code.
Alternatively, you could make the sections that vary depend on data, not code. In this case, that might mean loading the logo from a file rather than embedding it in the executable, and having an XML configuration file with the company name. This would let you ship a single executable to both companies.
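In code, the data-driven variant might look like this sketch (the file and element names are assumptions):

using System.Drawing;
using System.Xml.Linq;

// Read branding from files sitting next to the executable
// instead of compiling it into the binary.
XDocument branding = XDocument.Load("branding.xml");
string companyName = (string)branding.Root.Element("CompanyName");
Image logo = Image.FromFile("logo.png");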
A resource string in a separate assembly would be the easiest to distribute.
But honestly, I'd make it a customization feature.
The last thing you want is to maintain everyone's logo changes due to legal reasons, copyright cases, whimsical artistic license, etc.
Which is short for: have them provide a formatted image, and have them assign the company name during installation, storing it in the registry or in a meta file of some type (XML, manifest, etc.).
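Reading that back at startup is short; the registry key path and value name below are invented for the example:

using Microsoft.Win32;

// Read the company name that the installer stored in the registry.
using (var key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\MyCompany\MyApp"))
{
    string companyName = key?.GetValue("CompanyName") as string ?? "Default";
}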
The best I can think of is a batch script.
Set your project up to reference files (images (logo), text (company name), etc.) from a folder such as C:\MyProject\Resources, so that when the project builds it compiles them into the application/installer.
This way, you can write a script (.bat file) that copies in the resources needed per company.
Step 1 - Delete all files in the Resources folder
Step 2 - Use MSBuild.exe to build your project
Step 3 - Copy the files needed from the bin/release folder to a directory (e.g. C:\Release\CompanyA)
Step 4 - Change the variables in the script to load the details for the next company, and repeat from step 1 so it copies over the needed resource files and rebuilds
I have an SSIS package that copies the data in a table from one SQL Server 2005 to another SQL Server 2005. I do this with a "Data Flow" task. In the package config file I expose the destination table name.
The problem is that when I change the destination table name in the config file (via Notepad) I get the error "VS_NEEDSNEWMETADATA". I think I understand the problem: the destination table's column mapping is fixed when I first set up the package.
Question: what's the easiest way to do the above with an SSIS package?
I've read online about setting up the metadata programmatically, but I'd like to avoid this. I also wrote a C# console app that does everything just fine (all tables etc. are specified in the app.config), but apparently this solution isn't good enough.
Have you set DelayValidation to False in the Data Flow destination's properties? If not, try that.
Edit: Of course that should be DelayValidation set to True, so the package just goes ahead and tries rather than validating first. Also, instead of altering your package in Notepad, why not put the table name in a variable, apply the variable via an Expression on the destination, and then expose the variable in a .dtsConfig configuration file? Then you can change it without danger.
Matching the source and destination columns case-sensitively did the trick for me.
In my case, the column was SrNo_prod in dev and we developed the .dtsx against it, while it had been created as SrNo_Prod in prod. After changing the case from P to p, the package executed successfully.
Check if the new destination table has the same columns as the old one.
I believe the error occurs if the columns are different, and the destination can no longer map its input columns to the table columns. If two tables have the same schema, this error should not occur.
If all you are doing is copying data from one SQL 2005 server to another, I would just create a linked server and use a stored proc to copy the data. An SSIS package is overkill.
How to Create linked server
Once the linked server is created, you would just write something like...
INSERT INTO server1.database1.dbo.table1 (id, name)
SELECT id, name FROM server2.database1.dbo.table1
As far as the SSIS package goes, I have always had to reopen and rebuild the package so that the metadata gets updated after modifying the tables' column properties.