How can I get the following information after a git pull with libgit2sharp:
Which files have been moved
Which files have been created
Which files have been deleted
The git pull call itself works perfectly:
var result = repo.Network.Pull(new LibGit2Sharp.Signature("admin", "mail#......net", new DateTimeOffset(DateTime.Now)), options);
I already looked at the result of the Pull method, but it does not seem to contain the needed information.
Thank you very much!
The MergeResult type exposes a Commit property which is not null when the merge was successful.
In order to find out what files have changed, one just has to leverage the repo.Diff.Compare() method to compare this Commit with its first parent.
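For example, a minimal sketch of that comparison, assuming result is the MergeResult returned by the Pull call above and that the pull actually produced a new commit (rename detection may additionally require a CompareOptions with similarity detection enabled):

using System.Linq;
using LibGit2Sharp;

if (result.Status != MergeStatus.UpToDate && result.Commit != null)
{
    Commit newCommit = result.Commit;
    Tree oldTree = newCommit.Parents.First().Tree; // first parent = the previous tip

    TreeChanges changes = repo.Diff.Compare<TreeChanges>(oldTree, newCommit.Tree);

    var created = changes.Added.Select(c => c.Path).ToList();
    var deleted = changes.Deleted.Select(c => c.Path).ToList();
    var moved   = changes.Renamed.Select(c => c.OldPath + " -> " + c.Path).ToList();
}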
I have one app that I wrote that already reads from and writes to iCloud. I am essentially using the same code in my new app to do the same thing, but for some reason it will not work, giving me the following error: "Couldn't get container configuration from the server". Let me clarify: with this new app it does put an entry in iCloud under Manage Storage, but instead of being under the name of my app, it is under adhoc.
Here is the line in my Info.plist:
Here is the line from my Entitlements.plist:
Lastly, here is my identifier defined on the Apple developer site:
I have verified and reverified that everything is pointing to the correct thing so I am baffled. Any help would be much appreciated.
EDIT: I guess what it is doing is writing the file to my phone, but when it goes to save data to it, it fails with this message. Here is my call to save the data:
CKRecordID recordID = new CKRecordID(strDate);
await Task.Delay(200);
// Save it to iCloud
await pvc.SaveToiCloud(newRecord);
Here is my code to save the record:
public async Task SaveToiCloud(CKRecord newRecord)
{
    ThisApp.PrivateDatabase.SaveRecord(newRecord, (record, err) =>
    {
Edit:
I was thinking that possibly the number of nodes I had was too many, so I took out the "dist" one you see below, but that did not help. I thought maybe that was why I was seeing the module name of adhoc under iCloud on my phone, but I guess I was wrong.
Old:
New:
Edit
I have been doing more digging and found that this line of code is actually the one throwing the error.
File.WriteAllText(Path.Combine(filePath.Path, name + date), "Test");
The name and date contain correct values, and the path looks fine to me... I guess... I don't actually know how it should look. Here is how the file path is getting set right above this call:
NSUrl filePath = NSFileManager.DefaultManager.GetUrlForUbiquityContainer(null).Append("Documents", true);
If anyone could offer any advice, I would be most appreciative.
So, I finally figured out the issue and it works like a champ now. The issue was the casing of the bundle ID in my Info.plist. In CloudKit the DB name was all lower case, but my bundle ID in my Info.plist had AdHoc as its last node instead of adhoc as it was in CloudKit. You might ask what the bundle ID has to do with the iCloud name, and I am not really sure, but I noticed that it was taking the case of that last node from my bundle ID, not the case specified in the iCloud definition as I have shown above. Hope this helps someone who is struggling with a similar issue. Have a great day!
I'm trying to update the SDMPackageXML property of an AppModel application through C# code. SDMPackageXML is an XML property. I have to update only one node, named AutoInstall, in the SDMPackageXML XML property. Here is my code:
ObjectGetOptions opt = new ObjectGetOptions(null, System.TimeSpan.MaxValue, true);
var path = new ManagementPath("SMS_Application.CI_ID=16777568");
ManagementObject obj = new ManagementObject(scope, path, opt);
obj.Get();
foreach (PropertyData property in obj.Properties)
{
    if (property.Name == "SDMPackageXML")
    {
        // change the value of the AutoInstall node
        XmlDocument xml = new XmlDocument();
        xml.LoadXml(property.Value.ToString());
        var autoInstallTag = xml.GetElementsByTagName("AutoInstall");
        autoInstallTag[0].InnerText = "false";
        property.Value = xml.OuterXml;
    }
}
obj.Put();
The problem is that obj.Put() updates nothing on the SCCM server. Can someone help me, please?
Similar to what I talked about in this answer, the main problem here is that Microsoft uses a special method to serialize their XML. Deserialization still works with the default classes, but there is no documentation on how to serialize it again (I'm pretty sure it is possible, but I am not knowledgeable enough to do it).
Instead of documentation, they provide wrapper classes for this, which are shipped with the SCCM console (located in the bin directory of the console's installation folder).
In this case that would be Microsoft.ConfigurationManagement.ApplicationManagement.dll. Unlike in PowerShell, where the dependencies in the same path seem to be loaded automatically, you also have to reference at least Microsoft.ConfigurationManagement.ApplicationManagement.TaskSequenceInstaller.dll.
There are also further DLLs with names like Microsoft.ConfigurationManagement.ApplicationManagement.MsiInstaller.dll present; however, at least in my tests the two above were the only ones needed. If you notice the deserialization failing with "InvalidPropertyException" errors, you might need the DLL matching your specific application type.
With those two DLLs referenced you can write something like this (note that I deserialized using the DLL as well, because why not if it is already loaded, and it creates a nice application object on which to modify the properties directly; this is technically not necessary, though, and you could deserialize as in your example and only use the serialization part):
ManagementObject obj = new ManagementObject(@"\\<siteserver>\root\SMS\site_<sitecode>:SMS_Application.CI_ID=<id>");
Microsoft.ConfigurationManagement.ApplicationManagement.Application app = Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer.DeserializeFromString(obj["SDMPackageXML"].ToString(), true);
app.AutoInstall = true;
obj["SDMPackageXML"] = Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer.SerializeToString(app, true);
obj.Put();
Now one thing to keep in mind is that it can be a little tricky referencing applications by their CI_ID, because if you update the application, the ID for the currently valid version of the app changes (the old ID can still be used to reference the older revision). So if you change the application retrieved using the ID and then change it back with the same ID, it will look like only the first change worked. I don't know if this is problematic for you (if you just get all IDs and then change every application only once, it should not matter), but if it is, you can search for the application using its name plus IsLatest = 'true' in the WQL query to always get the current one, as sketched below.
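For illustration, a rough sketch of such a lookup (the site server, site code, and application name are placeholders; LocalizedDisplayName and IsLatest are properties of the SMS_Application WMI class):

using System;
using System.Management;

var scope = new ManagementScope(@"\\<siteserver>\root\SMS\site_<sitecode>");
var query = new ObjectQuery(
    "SELECT * FROM SMS_Application " +
    "WHERE LocalizedDisplayName = 'My App' AND IsLatest = 'true'");

using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject app in searcher.Get())
    {
        // CI_ID here always refers to the current revision of the application
        Console.WriteLine(app["CI_ID"]);
    }
}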
I have been given a task to use the TFS API in order to check which build has which changeset number, after deployment. I haven't worked with TFS before, so mainly I've been trying to Google things, to find the answer. I've been at it for 2 days now, so I'm hoping someone can nudge me in the right direction...
Here is what I have done so far:
Uri collectionUri = new Uri("mytfs/tfs/");
var server = TfsConfigurationServerFactory.GetConfigurationServer(collectionUri);
server.Authenticate();
server.EnsureAuthenticated();
var service = server.GetService<TswaClientHyperlinkService>();
var projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("mytfs/tfs/collection"));
var cssService = projectCollection.GetService<ICommonStructureService3>();
var project = cssService.GetProjectFromName("project");
WorkItemStore workItemStore = projectCollection.GetService<WorkItemStore>();
WorkItemCollection workItemCollection = workItemStore.Query("SELECT * FROM WorkItems");
So with the workItemCollection object I tried a few queries, but it seems it doesn't allow me to change databases, use joins, etc., just a simple select/from statement.
Am I on the right track - is this how I should be getting the build and changeset number? If yes, where can I see what tables I need to query?
The problem here is that you're thinking of this as a database. It's not a database. It's an object model that allows you to programmatically access various aspects of TFS through a well-defined API.
Work item queries are not SQL, they are WIQL (work item query language). The work item object will definitely have a link to the associated changeset, but it won't have a link to a build. Some work item types have a field for "fixed in" that will be automatically updated with the build, but not all of them, so it's not necessarily reliable.
To find particular builds, you'll need to use the IBuildServer service and query builds using a build spec.
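A rough sketch of that approach, with placeholder project and definition names (the changeset a build was produced from is typically available through IBuildDetail.SourceGetVersion, e.g. "C12345"):

using System;
using Microsoft.TeamFoundation.Build.Client;

IBuildServer buildServer = projectCollection.GetService<IBuildServer>();

IBuildDetailSpec spec = buildServer.CreateBuildDetailSpec("MyProject", "MyBuildDefinition");
spec.MaxBuildsPerDefinition = 10;
spec.QueryOrder = BuildQueryOrder.FinishTimeDescending;

IBuildQueryResult queryResult = buildServer.QueryBuilds(spec);
foreach (IBuildDetail build in queryResult.Builds)
{
    // BuildNumber plus the source version (changeset) the build came from
    Console.WriteLine("{0} -> {1}", build.BuildNumber, build.SourceGetVersion);
}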
After searching Google for a couple of hours I found an answer to my question. I know the post Undo checkout TFS answers my question; however, it doesn't answer all the questions I have. I want to achieve the same objective that the post asked about: how to only revert files that have been checked out if nothing was modified in those files? The answer to my question shouldn't be too hard to give.
So what I'm doing is copying files from a server and overwriting them in my local workspace. I am checking out all the files being copied. However, if a file that was copied is not modified in any way (the server file and destination file are exactly the same), I'd like to undo the checkout of that file.
I know I'm to use the workspace.Undo() method, and the gentleman said it worked for him. However, he didn't show how he implemented it.
Here is the code I have with help from the link:
public static void CheckOutFromTFS(string filepath)
{
    var workspaceInfo = Workstation.Current.GetLocalWorkspaceInfo(filepath);
    if (workspaceInfo == null)
    {
        return;
    }
    var server = new TfsTeamProjectCollection(workspaceInfo.ServerUri);
    var workspace = workspaceInfo.GetWorkspace(server);
    workspace.PendEdit(filepath);
}
The answer given was to use the workspace.Undo() method. Do I add this method as the last line in CheckOutFromTFS() like so?
public static void CheckOutFromTFS(string filepath)
{
    var workspaceInfo = Workstation.Current.GetLocalWorkspaceInfo(filepath);
    if (workspaceInfo == null)
    {
        return;
    }
    var server = new TfsTeamProjectCollection(workspaceInfo.ServerUri);
    var workspace = workspaceInfo.GetWorkspace(server);
    workspace.PendEdit(filepath);
    workspace.Undo();
}
Or is it done differently? I'm not sure whether this Undo() will only revert files that have no changes, or revert the checkout entirely and render the PendEdit() useless. Can someone help clarify this for me?
If you use a local workspace, then all files that have no changes will automatically revert to not checked out. You don't need to do anything at all. This works with VS 2012 or better with TFS 2012 or better. You'll need to convert your workspace to a local workspace first, like this.
So I found the answer to my question in various posts. I kind of took bits and pieces and combined them to get my working solution. Using the Undo() function and passing in the file path actually does undo the checkout of the file regardless of whether it was modified or not. My workspace was also local, but VS and TFS couldn't automatically revert those unmodified files for me, so I took the approach below.
What I decided to do was just use the Team Foundation Power Tools "uu" command to undo the changes to unchanged files in the workspace. I created a batch file and entered the following command: echo y | tfpt uu . /noget /recursive. Since we will not show the shell during execution, I used the "echo y" command to automatically answer the question, "Do you wish to undo these redundant pending changes? (Y/N)". Including /noget is highly recommended since it prevents a forced 'get latest' of all your project's files, which depending on the total number can take an extremely long time.
var startInfo = new System.Diagnostics.ProcessStartInfo
{
    WorkingDirectory = projectRoot,
    FileName = projectRoot + @"\undoUnchanged.bat",
    UseShellExecute = false,
    CreateNoWindow = true
};
Process process = Process.Start(startInfo);
process.WaitForExit();
process.Close();
After the script runs and process.Close() executes, you can double-check whether your unmodified files actually were unchecked out by hitting the refresh button on the Team Explorer window in your project. Hope someone else can find some use in this.
If I understand the question correctly and you actually need to undo through C# code-behind, I believe this should help you:
Undo checkout TFS
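For what it's worth, here is a hedged sketch of that (UndoCheckout is just a hypothetical helper name; Undo has to be told which items to revert, and unlike the Power Tools "uu" command it undoes the pending edit on the given path whether or not the file was modified):

public static void UndoCheckout(string filepath)
{
    var workspaceInfo = Workstation.Current.GetLocalWorkspaceInfo(filepath);
    if (workspaceInfo == null)
    {
        return;
    }
    var server = new TfsTeamProjectCollection(workspaceInfo.ServerUri);
    var workspace = workspaceInfo.GetWorkspace(server);

    // Undo the pending change on just this one file
    workspace.Undo(new[] { filepath });
}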
I started with the solution here: http://social.technet.microsoft.com/wiki/contents/articles/20547.biztalk-server-dynamic-schema-resolver-real-scenario.aspx
It matches my scenario perfectly except for the send port, but that isn't necessary. I need the receive port to choose the file and apply a schema to disassemble it. From there the orchestration does the mapping, some of it custom, etc.
I've done everything in the tutorial but I keep getting the following error.
"There was a failure executing the receive pipeline... The body part is NULL"
The things I don't get from the tutorial but don't believe they should be an issue are:
I created a new solution and project to make the custom pipeline component (reference figure 19) and thus the DLL file, meaning it is in its own namespace. However, it looks like in the tutorial they created the project within the main BizTalk solution (i.e. the one with the pipeline and the orchestration), and thus the namespace has "TechNetWiki.SchemaResolver." in it. Should I make the custom pipeline component have the namespace of my main solution? I'm assuming this shouldn't matter, because I should be able to use this component in other solutions, as it is meant to be generic to the business rules that are associated with the BizTalk application.
The other piece I don't have is Figure 15: under the "THEN Action" they have it equal the destination schema they would like to disassemble to, but then they put #Src1 at the end of "http://TechNetWiki.SchemaResolver.Schemas.SRC1_FF#Src1". What is the #Src1 for?
In the sample you've linked to, the Probe method of the pipeline component is pushing the first 4 characters from the filename into a typed message that is then passed into the rules engine. It's those 4 characters that match the "SRC1" in the example.
string srcFileName = pInMsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties").ToString();
srcFileName = Path.GetFileName(srcFileName);

//Substring the first four characters to get the source code used to call the BRE API
string customerCode = srcFileName.Substring(0, 4);

//create an instance of the XML document
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.LoadXml(string.Format(@"<ns0:Root xmlns:ns0='http://TechNetWiki.SchemaResolver.Schemas.SchemaResolverBRE'>
    <SrcCode>{0}</SrcCode>
    <MessageType></MessageType>
</ns0:Root>", customerCode));

//retrieve the source code in case it is in our cache dictionary
if (cachedSources.ContainsKey(customerCode))
{
    messageType = cachedSources[customerCode];
}
else
{
    TypedXmlDocument typedXmlDocument = new TypedXmlDocument("TechNetWiki.SchemaResolver.Schemas.SchemaResolverBRE", xmlDoc);
    Microsoft.RuleEngine.Policy policy = new Microsoft.RuleEngine.Policy("SchemaResolverPolicy");
    policy.Execute(typedXmlDocument);
So the matching rule is based on the first 4 characters of the filename. If one isn't matched, the Probe returns false - i.e. unrecognised.
The final part is that the message type is pushed into the returned message - this is made up of the namespace and the root schema node with a # separator - so your #Src1 is the root node.
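Purely as an illustration (this line is not taken from the article), promoting that resolved value as the standard BTS MessageType context property inside the pipeline component would look something like this, using the namespace#root combination discussed above:

pInMsg.Context.Promote("MessageType",
    "http://schemas.microsoft.com/BizTalk/2003/system-properties",
    "http://TechNetWiki.SchemaResolver.Schemas.SRC1_FF#Src1");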
You need to implement IProbeMessage on the pipeline component class.
I forgot to add IProbeMessage in the code of the article. It is updated now,
but it is there in the sample source code.
Src1 is the root node name of the schema. I mentioned in the article that the message type is TargetNamespace#Root.
I recommend downloading the sample code.
I hope this will help you.