The MST created using WiX does not have updated summary information stream values.
// The temp MSI (a copy of the original MSI) has the updated summary info values.
Database d2 = new Database(tempmsiPath, DatabaseOpenMode.Direct);

// origDatabase is a Database object for the original MSI.
d2.GenerateTransform(origDatabase, mstPath);

// This code is used to create the MST.
d2.CreateTransformSummaryInfo(origDatabase, mstPath,
    TransformErrors.None, TransformValidations.None);
Please let me know how I can write the updated summary values to the MST using C#.
If I open an MSI in Orca, create a new transform, and then go to Summary Information, all of the fields are greyed out.
If I then go to Transform | Transform Properties (in Orca), I get a screen titled "Transform SummaryInfo". It has a series of checkboxes for suppressing errors and validation. This maps to the arguments available in CreateTransformSummaryInfo. The DTF help topic for the same method says:
Creates and populates the summary information stream of an existing
transform file, and fills in the properties with the base and
reference ProductCode and ProductVersion.
There is also a TransformInfo class in the ....WindowsInstaller.Package assembly, but it only supports reading transform information. Rob might be able to tell you more, but it seems pretty much by design not to give unrestricted access, probably because the transform has to be compatible with the base MSI.
Maybe if I understood exactly what/why you are updating I could give a better answer.
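For reference, here is a minimal sketch of the end-to-end DTF flow the question describes (the paths are assumptions; both databases are disposed once the transform has been written):

using Microsoft.Deployment.WindowsInstaller;

string originalMsi = @"C:\temp\original.msi";   // base package (assumed path)
string modifiedMsi = @"C:\temp\modified.msi";   // copy with the edited tables/summary info
string mstPath     = @"C:\temp\changes.mst";

using (var origDatabase = new Database(originalMsi, DatabaseOpenMode.ReadOnly))
using (var modifiedDatabase = new Database(modifiedMsi, DatabaseOpenMode.Direct))
{
    // The transform captures the differences between the two databases.
    modifiedDatabase.GenerateTransform(origDatabase, mstPath);

    // This writes the transform's own summary information stream, including the
    // base/reference ProductCode and ProductVersion and the error/validation flags.
    modifiedDatabase.CreateTransformSummaryInfo(origDatabase, mstPath,
        TransformErrors.None, TransformValidations.None);
}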
I'm trying to read task details from an MPP file using the net.sf.mpxj library. However, when trying to read custom fields, I get a byte array which I do not know what to do with! It is not the actual value of the custom field for that specific task. Can anyone tell me what to do?
ProjectReader reader = new MPPReader();
ProjectFile project = reader.read(@"C:\EPM\test2.mpp");
foreach (net.sf.mpxj.Task task in project.Tasks)
{
    var Value = task.GetFieldByAlias("My Custom Field Name");
}
The "Value" will be a byte array and I do not know how to get the real value from it.
UPDATED ANSWER:
As of MPXJ 10.7.0 you can retrieve correctly typed values for enterprise custom fields. You'll also find a CustomFieldDataType attribute as part of the CustomField class which indicates what type you'll be retrieving.
(One interesting "gotcha" is that if your MPP file contains an enterprise custom field which is based on a lookup table, i.e. the user can only select from a fixed set of values, the user-visible text is NOT stored in the MPP file. You'll only get back a GUID representing the value the user has selected. Microsoft Project itself has this same issue... if you open the MPP file when you're not connected to Project Server, these values will appear as blanks...)
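With that in mind, here is a hedged sketch against MPXJ 10.7+ using the same calls as the question (the file path is an assumption, and exact property names can vary between MPXJ releases):

ProjectReader reader = new MPPReader();
ProjectFile project = reader.read(@"C:\EPM\test2.mpp");

foreach (net.sf.mpxj.Task task in project.Tasks)
{
    // In 10.7+ this is the typed value (text, number, date, or a GUID for
    // lookup-based enterprise custom fields) rather than a raw byte array.
    var value = task.GetFieldByAlias("My Custom Field Name");
    Console.WriteLine(task.Name + ": " + value);
}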
ORIGINAL ANSWER:
The main problem is unfortunately that MPXJ doesn't currently offer the same level of support for Enterprise Custom Fields as it does for other fields. While it is able to identify Enterprise Custom Fields and the aliases they've been given, at the moment it is only able to read the raw bytes representing the field data.
Enterprise Custom Fields are not as commonly used as other field types so there hasn't been as much time invested in locating the definitions of these fields in the MPP file. The field definition will contain the type information necessary to convert from the raw bytes to the expected data type.
Improved support for Enterprise Custom Fields is on the "to do" list for MPXJ.
I'm new to both UniData and UniObjects, so if I ask something that's obvious, I apologize.
I'm trying to write a tool that will let me export contacts from our ERP (Manage2000) that runs on UniData (v. 6.1) and can then import them into AD/Exchange.
The primary issue I'm having is that I don't know which fields (columns?) in the table (file?) are for what. I know that that there is a dictionary that has this information in it but I'm not sure how to get what I want out of it.
I found that there is a command LIST.METADATA in the current UniData documentation from Rocket, but it seems that either the version of UniData we are using is so old that it doesn't have this command, or it was removed from the VOC file for some unknown reason.
Does anyone know how or have any tips to pull out the structure of a table so that I can know which fields are for what data?
Thanks in advance!
At TCL:
LIST DICT contact.master
Please note that the database file name (e.g. contact.master) is case-sensitive. I don't have a UniData instance at the moment to provide example output; however, it should be similar to UniVerse's output:
Field......... Type & Field........ Conversion.. Column......... Output Depth &
Name.......... Field. Definition... Code........ Heading........ Format Assoc..
Number
AMOUNT.WEBB A 1 MR22 Amt WEBB 10R M
PANDAS.COST A 3 MD2Z Pandass Cost 10R M
CREDIT.EXP.DT A 6 D4/ Cred Exp Date 10R M
For the example above, you can generally tell the "data type" of the field by looking at the conversion code. "D4/" is the conversion code for a date. "MD2Z" is a numeric conversion code, which we can guess is for monetary amounts. I'm glossing over the power of conversion codes, so please make sure to reference Rocket's documentation for these codes to truly understand what these fields would output. If you don't have the documentation in front of you, you can also reference this site:
http://www.koretech.com/kr_help/KU2/30/en/KMK_Prog_Conversions.htm
If you wanted to use UniObjects and C# to retrieve the field names in a file, you could use the following code:
// Select every field name defined in the dictionary of contact.master.
UniCommand fieldSelectCommand = activeSession.CreateUniCommand();
fieldSelectCommand.Command = "SELECT DICT contact.master";
fieldSelectCommand.Execute();

// Select list 0 now holds the dictionary item IDs, i.e. the field names.
UniSelectList resultList = activeSession.CreateUniSelectList(0);
String[] allFieldNames = resultList.ReadListAsStringArray();
Having answered your question, I would also like to make a recommendation that you check out Rocket's U2 Toolkit for .NET if you're mostly going to be selecting data from the database instead of reading and manipulating individual records:
http://www.rocketsoftware.com/products/rocket-u2-toolkit-net
Not only does it present an ADO.NET way of accessing the database, it also has a better-performing version of the UniObjects library under the U2.Data.Client.UO namespace.
The dictionary, in my opinion, is a recommendation of how the schema should behave. However, there are cases when it's not 100% accurate. You could run "LIST CONTACT.MASTER TOXML TO MYFILE.XML", which would create an XML file that you could parse.
See https://u2devzone.rocketsoftware.com/accelerate/articles/u2-xml/u2-xml#section-0 for more information.
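If you go the TOXML route, here is a small hedged sketch of reading the generated file from C# (the output path and the record/field element layout are assumptions; inspect the file UniData actually produces first):

using System;
using System.Linq;
using System.Xml.Linq;

class DumpUniDataXml
{
    static void Main()
    {
        // File produced by: LIST CONTACT.MASTER TOXML TO MYFILE.XML
        XDocument doc = XDocument.Load(@"C:\exports\MYFILE.XML");

        // Collect the distinct element names used under each record element;
        // these correspond to the dictionary items selected for output.
        var fieldNames = doc.Root
            .Elements()                      // one element per record (assumption)
            .SelectMany(r => r.Elements())
            .Select(e => e.Name.LocalName)
            .Distinct();

        foreach (string name in fieldNames)
        {
            Console.WriteLine(name);
        }
    }
}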
I'm looking for a way to set the _CheckinComment. If I try it like this:
Microsoft.SharePoint.Client.File myUploadFile = myList.RootFolder.Files.Add(fileCreationInformation);
ListItem myItem = myUploadFile.ListItemAllFields;
myItem["Title"] = Path.GetFileName(sDocPath);
myItem["_CheckinComment"] = "This is the comment";
myItem.Update();   // required to send the field changes to the server
myClientContext.Load(myItem);
myClientContext.Load(myUploadFile);
myClientContext.ExecuteQuery();
I get Microsoft.SharePoint.Client.ServerException: Invalid data has been used to update the list item. The field you are trying to update may be read only.
I want to set the _CheckinComment (internal name) directly and not use this:
myUploadFile.CheckIn("This is the comment", CheckinType.OverwriteCheckIn)
Who can help?
Per Microsoft, "_CheckinComment" is a read-only server field. So, that explains your error.
Although you didn't specify what you are attempting to do, I think I know, as I had my own problem related to this. I think you were annoyed that you couldn't put a check-in comment on upload, and that when you use the CheckIn() method it creates a new version. So your upload spans two versions (the first is the upload itself with no check-in comment and the second is the addition of the check-in comment), which is kind of messy.
The key for me was to use the Publish(string) [https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.client.file.publish.aspx] and UnPublish(string) methods. These allow you to set the check-in comment for the current file while promoting/demoting the current version to a major or minor version. Assuming your document library has major and minor versions enabled, you can apply them as follows (see the sketch after the two flows below):
Upload -> Publish(strComment) to create a major version with a comment
Upload -> Publish("") -> UnPublish(strComment) to create a minor version with a comment
I've got a question about adding a data map to my current map in MapPoint while importing data into a dataset.
So, I have an Excel file that has the following columns, in order: ID, Name, Address, City, Country, PostalCode, Service, MoneyImport.
I'm creating a dataset to be used for the datamap:
object missing = System.Reflection.Missing.Value;
MapPoint.DataSet dataset = map.DataSets.ImportData(filename, missing,
    MapPoint.GeoCountry.geoCountryItaly,
    MapPoint.GeoDelimiter.geoDelimiterDefault,
    MapPoint.GeoImportFlags.geoImportExcelSheet);
I'm using the "missing" value cause the MapPoint Application when running through the normal interface*(importing from the same excel file i use here)* recognises perfectly the datafields, so i don't have the need to specify their types by myself.
Then i'm tryin' to use this dataset in order to create the datamap i need. This map is supposed to display as shaded areas the "MONEYIMPORT field" on the map based on zoomlevel.
When using the normal mappoint interface it does it smoothly with no problem and no errors at all.
Object Import = 8;
MapPoint.Field GainedMoney = dataset.Fields.get_Item(Import);
These two lines are meant to pull the values of the 8th column of the Excel sheet into the GainedMoney field by extracting them from the dataset.
So further i add the datamap:
MapPoint.DataMap datamap = dataset.DisplayDataMap(MapPoint.GeoDataMapType.geoDataMapTypeShadedArea, GainedMoney,
    ShowDataBy: MapPoint.GeoShowDataBy.geoShowByZoomLevel,
    DataRangeType: MapPoint.GeoDataRangeType.geoRangeTypeDefault,
    DataRangeOrder: MapPoint.GeoDataRangeOrder.geoRangeOrderDefault,
    ColorScheme: 13,
    CombineDataBy: MapPoint.GeoCombineDataBy.geoCombineByAdd);
MapPoint then throws an error saying that the type of area I'm trying to add to the map cannot be recognized, so it was impossible to add it to the map.
I've checked several times whether the attributes I pass to DisplayDataMap are correct, and they are identical to the ones I choose when creating the data map through the MapPoint user interface, and still no result. I really don't know how to fix this anymore.
If any of you would be able to help me and provide me a hint, please do so!
Thanks in advance,
George.
There are some articles on MP2Kmag.com to help with DisplayDataMap. In particular, the arrays you pass in as parameters are tricky. Also, the book Programming MapPoint in .NET was a big help to me in dealing with the DisplayDataMap method.
Problem.
I regularly receive feed files from different suppliers. Although the column names are consistent, the problem comes when some suppliers send text files with more or fewer columns in their feed file.
Furthermore, the arrangement of the columns in these files is inconsistent.
Other than the Dynamic Data Flow Task provided by CozyRoc, is there another way I could import these files? I am not a C# guru, but I am driven towards using a "Script Task" in the control flow or a "Script Component" data flow task.
Any suggestion, samples or direction will greatly be appreciated.
http://www.cozyroc.com/ssis/data-flow-task
Some forums
http://www.sqlservercentral.com/Forums/Topic525799-148-1.aspx#bm526400
http://www.bidn.com/forums/microsoft-business-intelligence/integration-services/26/dynamic-data-flow
Off the top of my head, I have a 50% solution for you.
The problem
SSIS really cares about metadata, so variations in it tend to result in exceptions. DTS was far more forgiving in this sense. That strong need for consistent metadata makes using the Flat File Source troublesome.
Query based solution
If the problem is the component, let's not use it. What I like about this approach is that, conceptually, it's the same as querying a table: the order of columns does not matter, nor does the presence of extra columns.
Variables
I created 3 variables, all of type String: CurrentFileName, InputFolder and Query.
InputFolder is hard-wired to the source folder. In my example, it's C:\ssisdata\Kipreal.
CurrentFileName is the name of a file. During design time it was input5columns.csv, but that will change at run time.
Query is an expression: "SELECT col1, col2, col3, col4, col5 FROM " + @[User::CurrentFileName]
Connection manager
Set up a connection to the input file using the JET OLEDB provider. After creating it as described in the linked article, I renamed it to FileOLEDB and set an expression on the ConnectionManager of "Data Source=" + @[User::InputFolder] + ";Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=\"text;HDR=Yes;FMT=CSVDelimited;\";"
Control Flow
My Control Flow looks like a Data Flow Task nested in a Foreach File Enumerator.
Foreach File Enumerator
My Foreach File Enumerator is configured to operate on files. I put an expression on the Directory of @[User::InputFolder]. Notice that at this point, if the value of that folder needs to change, it'll correctly be updated in both the Connection Manager and the file enumerator. In "Retrieve file name", instead of the default "Fully qualified", choose "Name and extension".
In the Variable Mappings tab, assign the value to our @[User::CurrentFileName] variable.
At this point, each iteration of the loop will change the value of @[User::Query] to reflect the current file name.
Data Flow
This is actually the easiest piece. Use an OLE DB source and wire it as indicated.
Use the FileOLEDB connection manager and change the Data Access mode to "SQL command from variable." Use the @[User::Query] variable there, click OK, and you're ready to work.
Sample data
I created two sample files, input5columns.csv and input7columns.csv. All of the columns of 5 are in 7, but 7 has them in a different order (col2 is at ordinal position 2 in one and 6 in the other). I negated all the values in 7 to make it readily apparent which file is being operated on.
col1,col3,col2,col5,col4
1,3,2,5,4
1111,3333,2222,5555,4444
11,33,22,55,44
111,333,222,555,444
and
col1,col3,col7,col5,col4,col6,col2
-1111,-3333,-7777,-5555,-4444,-6666,-2222
-111,-333,-777,-555,-444,-666,-222
-1,-3,-7,-5,-4,-6,-2
-11,-33,-77,-55,-44,-666,-222
Running the package processes both sample files through the same data flow.
What's missing
I don't know of a way to tell the query based approach that it's OK if a column doesn't exist. If there's a unique key, I suppose you could define your query to have only the columns that must be there and then perform lookups against the file to try and obtain the columns that ought to be there and not fail the lookup if the column doesn't exist. Pretty kludgey though.
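One way to soften that limitation, along the lines of the Script Task the question mentions, is to build @[User::Query] yourself: read the header row of the current file and select missing expected columns as NULL so the downstream metadata stays stable. This is only a sketch: it assumes the Query variable no longer carries the design-time expression, that the three variables are listed in the Script Task's ReadOnly/ReadWrite variables, that the task sits inside the Foreach loop ahead of the Data Flow, and that the Jet text driver types the NULL placeholders acceptably for your destination.

// Script Task Main() sketch (requires "using System.Linq;" in ScriptMain).
public void Main()
{
    string folder = Dts.Variables["User::InputFolder"].Value.ToString();
    string fileName = Dts.Variables["User::CurrentFileName"].Value.ToString();

    // The columns the Data Flow expects, in the order it expects them.
    string[] expected = { "col1", "col2", "col3", "col4", "col5" };

    // First line of the CSV is the header row (HDR=Yes in the connection string).
    string header = System.IO.File.ReadAllLines(System.IO.Path.Combine(folder, fileName))[0];
    var actual = new System.Collections.Generic.HashSet<string>(
        header.Split(',').Select(c => c.Trim()), StringComparer.OrdinalIgnoreCase);

    // Present columns are selected by name; missing ones become NULL placeholders.
    string selectList = string.Join(", ",
        expected.Select(c => actual.Contains(c) ? c : "NULL AS " + c).ToArray());

    Dts.Variables["User::Query"].Value = "SELECT " + selectList + " FROM " + fileName;
    Dts.TaskResult = (int)ScriptResults.Success;
}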
Our solution: we use parent-child packages. In the parent package, we take the individual client files and transform them into our standard format files, then call the child package to process the standard import using the file we created. This only works if the client is consistent in what they send, though; if they try to change their format from what they agreed to send us, we return the file.