Using the SharePoint.Client version 16 package, we are trying to create a migration job in C# and subsequently want to see the status and logs of that job. We managed to provision the containers and queue using the ProvisionMigrationContainers and ProvisionMigrationQueue methods on the Site object, and we managed to upload some files and manifest XMLs. These XMLs still contain some errors in the IDs and structure, so we expect the job to fail. However, we still expect the job to be created and to output some messages and logs. Unfortunately, the message queue seems to be empty and the logs are nowhere to be found (at least we can't find them). The Guid of the created migration job is the null guid: 00000000-0000-0000-0000-000000000000.
According to https://learn.microsoft.com/en-us/sharepoint/dev/apis/migration-api-overview the logs should be saved in the manifest container as a blob. But how would you actually find the name of the log file? The problem is that everything has to be encrypted, and listing the blobs in the blob storage is not allowed (trying this leads to a 403 error).
So the main question is: how are we supposed to access the log files? The bonus question: assuming that the command to create the migration job is correct, why are we getting the null guid? And the last one: why is the queue empty? We could speculate that the migration job is never created and that's why the guid is all zeroes, but how are we supposed to find out what is preventing the job from being created?
Here is the code that creates the Migration Job:
public ClientResult<Guid> CreateMigrationJob()
{
    // Wrap the AES-256-CBC key that is also used to encrypt the uploaded blobs.
    var encryption = new EncryptionOption
    {
        AES256CBCKey = encryptionProvider.Key
    };

    // Queue the migration job against the previously provisioned containers and queue.
    return context.Site.CreateMigrationJobEncrypted(
        context.Web.Id,
        dataContainer.Uri.ToString(),
        metadataContainer.Uri.ToString(),
        migrationQueue.Uri.ToString(),
        encryption
    );
}
context, dataContainer and metadataContainer have all been properly instantiated as members and have been used successfully in other methods. migrationQueue and encryption also look fine, but have not been used elsewhere. The encryption key has, however, been used to upload and download files, and works perfectly fine there.
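For what it's worth, this is how the result is read (a minimal sketch; note that with CSOM, a ClientResult&lt;Guid&gt; is only populated after the pending request has been executed, so reading .Value before ExecuteQuery() always yields Guid.Empty):
var jobId = CreateMigrationJob();
context.ExecuteQuery(); // without this call, jobId.Value stays Guid.Empty
Console.WriteLine(jobId.Value); // should now be the real migration job id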
For completeness' sake, here is the code we tried to use to check whether there is anything in the queue:
public void GetMigrationLog()
{
    // Debug code, this should be done async.
    // Note: ApproximateMessageCount is a locally cached value that is only
    // populated after a call to FetchAttributes().
    migrationQueue.FetchAttributes();
    while (migrationQueue.ApproximateMessageCount > 0)
    {
        Console.WriteLine(migrationQueue.GetMessage().AsString);
        migrationQueue.FetchAttributes(); // refresh the cached count
    }
}
It outputs nothing because the queue is empty. We would expect there to be at least an error message, or a message saying that the logs were created (including the name of the log file).
PS: we realise that it should be possible to download the logs using DownloadToFileEncrypted(encryptionProvider, targetFile.ToString(), System.IO.FileMode.Create), but only if you already know the file name, which you cannot find out, so that seems a bit silly.
When you call context.Site.CreateMigrationJobEncrypted in your code, it returns a Guid. The name of the log file will be Import-{TheGuidThatWasReturned}-{ANumberThatStartsAt1AndIncrements}.log.
So the first log file might be called:
Import-AE9525D9-3CF7-4D1A-A9E0-8AB0DF4F09B2-1.log
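Putting that together, here is a rough sketch, assuming metadataContainer is the CloudBlobContainer used for the manifest uploads and jobId holds the already-executed ClientResult&lt;Guid&gt;; the uppercase blob name is inferred from the example above:
// Hypothetical sketch: build the first log blob's name from the job id and
// download it with the encrypted helper mentioned in the question.
string logName = $"Import-{jobId.Value.ToString().ToUpperInvariant()}-1.log";
var logBlob = metadataContainer.GetBlockBlobReference(logName);
logBlob.DownloadToFileEncrypted(encryptionProvider, "migration.log", System.IO.FileMode.Create);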
Using encryption should not stop you from reading the queue. You will only be unable to read the queue if you have configured it that way, or if you are using the tenancy default queue rather than your own.
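For reference, a minimal sketch of draining the reporting queue with the WindowsAzure.Storage CloudQueue API that the question's code appears to use:
// GetMessage() returns null once the queue is empty, so no
// ApproximateMessageCount bookkeeping is needed here.
CloudQueueMessage msg;
while ((msg = migrationQueue.GetMessage()) != null)
{
    Console.WriteLine(msg.AsString);
    migrationQueue.DeleteMessage(msg); // otherwise it reappears after the visibility timeout
}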
Related
I have two BizTalk applications. The first one polls SQL Server and sends to an MQ queue; it works fine. The second processes a file and uses a dynamic send port. In the orchestration I have updated:
TOMQ(Microsoft.XLANGs.BaseTypes.Address) = QueuePath;
TOMQ(Microsoft.XLANGs.BaseTypes.TransportType) = "MQSeries";
It grabs the file, processes it, and sends the results to an output directory (this works fine), then to MQ. MQ throws the error: Error encountered on opening Queue Manager name = .... Reason code = 2354. I checked, and it is getting the correct queue path, but it fails. Does anyone have any suggestions? I've checked everything I can think of.
I answered this question on this post: I have a BizTalk application with a dynamic send port that is set to "MQSeries". Can I programmatically set its properties?
Basically, I had to create a copy of my message and add MQSeries.dll as a reference to my project. Then I set the property like this:
DBMSGMQ = DBMSGOUT; // construct a copy of the outbound message
DBMSGMQ(MQSeries.TransactionSupported) = "False";
I am trying to retrieve the deleted UUIDs from an OpenLDAP server using a .NET Core console application.
I was able to see, by using a Perl script and dumping the whole response, that a Sync Info Message was indeed sent by my OpenLDAP server and that it contained the UUIDs of the present entries.
I set up an OpenLDAP server with the syncprov overlay (see my previous question Can't get deleted items from OpenLDAP Server using Content Synchronization Operation (syncrepl)).
After re-reading RFC 4533 multiple times, consulting the OpenLDAP syncrepl documentation, and analysing the response, it appears that with my current configuration (no accesslog) it is impossible to retrieve deleted entries, only a list of present entries, which is contained in the Sync Info Message. I wish to retrieve that information anyway, so I can compute a delta between what is sent and what is on my client.
Do you know how to catch the message in C#?
I tried using the DirectoryServices.Protocols and the Novell.Directory.Ldap libraries (separately). I must have missed something, but I don't know what exactly...
I used the Novell code sample (the SearchPersist one, with the corresponding control added) available at https://www.microfocus.com/documentation/edirectory-developer-documentation/ldap-libraries-for-c-sharp/.
I can retrieve added/modified entries but not the Sync Info Message containing the present entries.
By digging a bit into the Novell library, I found some useful classes for decoding ASN.1 objects.
Using the following code, I am able to determine the type of the intermediate Sync Info Message:
var decoder = new LBERDecoder();
var syncInfo = (Asn1Tagged)decoder.decode(intermediateResponse.getValue());
Then, depending on the tag, I am able to decode the message (using the .decode(valueToDecode) method).
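As a rough sketch of how this fits together (the tag handling follows RFC 4533; the method names follow the older, Java-style Novell surface used above and may differ in newer builds):
// Hypothetical sketch: classify a Sync Info Message (OID 1.3.6.1.4.1.4203.1.9.1.4)
// arriving as an LdapIntermediateResponse, using the Novell ASN.1 decoder.
var decoder = new LBERDecoder();
var syncInfo = (Asn1Tagged)decoder.decode(intermediateResponse.getValue());

// RFC 4533: syncInfoValue ::= CHOICE { newcookie [0], refreshDelete [1],
//                                      refreshPresent [2], syncIdSet [3] }
switch (syncInfo.getIdentifier().Tag)
{
    case 3: // syncIdSet: carries the set of entryUUIDs (present or deleted)
        // decode the inner SEQUENCE with decoder.decode(...) here
        break;
    case 2: // refreshPresent: entries not listed as present can be treated as deleted
        break;
}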
I'm having a weird issue with my Azure Function App and I can't find anything on this.
I republished my function without changing its code, but suddenly the function stopped working, and I'm getting this message as soon as I navigate to the function's page on Azure:
Error:
Error retrieving master key.
If I navigate to the function's settings, I can see that no keys have been generated and that the host.json file is empty. Browsing my function's files using Kudu, however, shows that the file contents are correct.
Two more things make this weirder:
The function works correctly locally
If I take the code of another function and deploy it on this one, the function works correctly, meaning that the issue is not related to my function's configuration but rather to its code
Do you guys have any pointers on this?
EDIT:
Let me add more details on this.
Let's say I have 2 solutions, A.sln and B.sln.
I also have 2 App Functions on Azure, let's say F_1 and F_2.
A.sln and B.sln have the very same structure; the only difference is in the business logic.
The same applies to F_1 and F_2; their only differences are the related storage accounts, as each function has its own.
Currently A.sln is deployed on F_1 and B.sln on F_2, and the only one working is F_1.
If I deploy A.sln on F_2, F_2 starts working, so my idea is that there's something wrong in B.sln's code, because A.sln works with the very same configuration.
The Function App has a reference to a storage account in the application settings AzureWebJobsDashboard, AzureWebJobsStorage and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING (if you are running on a consumption plan). Either clearing out this storage or simply recreating it fixed the problem.
I would also recommend creating separate storage accounts for every Function App, at least as long as these hard-to-find bugs are present. It is a lot easier to fix these kinds of issues when they only affect a single Function App.
I don't know if this is the case here, but I found out that in my case (a new deployment of a Function App v3) host.json ends up empty on Azure if there is a comment line in it. Removing the comments solved my problem, and the host.json file is now deployed properly.
One of the reasons could be that the key of the storage account has been rotated. In that case, the connection strings referenced in the AzureWebJobsDashboard and AzureWebJobsStorage settings of the Azure Function will no longer match.
Solution: go to the storage account referenced in AzureWebJobsDashboard and AzureWebJobsStorage -> Access Keys -> copy the connection string under key1 and use it for AzureWebJobsDashboard and AzureWebJobsStorage.
I am downloading files from a client's SFTP.
When I do it from FileZilla, it always succeeds in the standard way.
On the other hand, when I do it from our app, which uses the Tamir SharpSSH library for SFTP communication, there are recurring periods during which all our download attempts for a file fail.
I know the app works, as that code has not been changed for several months and it worked far more often than not, but the periods keep re-emerging: for a whole day or more, all file downloads fail, and only for the app.
The exception I get is Tamir.SharpSsh.jsch.SftpException. Obviously not very helpful.
My guess is that the client is making modifications on their side, or changing permissions, as their side is not live yet, but with this exception message I cannot tell.
Does anybody have any suggestions? Where could I look for the solution? What should I test/try?
Thank you for your time!
The real message was 'No such file'. The reason was that a slash had been omitted from the root folder path in one of our config files.
When you open the exception variable in the VS Watch window, you will see that all the info properties of the standard exception are null or simply set to 'Tamir.SharpSsh.jsch.SftpException'.
But an additional property was apparently added to the Tamir.SharpSsh.jsch.SftpException class, "message", and that is where the real message is stored, while Exception.Message is quite often set to just "Tamir.SharpSsh.jsch.SftpException".
The issue is that this additional field is private and is only visible in VS Watch or similar.
Since our exception propagation mechanism is based on logging Exception.Message, most of the time I was only getting "Tamir.SharpSsh.jsch.SftpException".
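If you need that text in your logs, a small reflection workaround is possible (a sketch, assuming the private field really is named "message", as the Watch window suggests):
// Pull the private "message" field out of Tamir.SharpSsh.jsch.SftpException
// so the real error text ("No such file", ...) ends up in the logs.
// Requires: using System.Reflection;
static string GetRealSftpMessage(Exception ex)
{
    var field = ex.GetType().GetField("message",
        BindingFlags.Instance | BindingFlags.NonPublic);
    return field != null ? (string)field.GetValue(ex) : ex.Message;
}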
I've written a Windows service in C#/VS2008/.NET 3.5 that monitors some FTP directories with FileSystemWatchers and moves the files to another location for processing. I noticed today that it throws errors stating "The parameter is incorrect" soon after the service has started up, but if we wait a few minutes the file gets copied without incident. I saw that this error message is often related to incorrect permissions, but I verified that the permissions on the directories (target and source) are correct, and as I said, the file move works just a few minutes later.
Here's a snippet of the code that gets called when the file is finished copying into the FTP directory being monitored:
// Found the correct source path.
string targetDir = dir.TargetDirectory;
string fileName = Path.GetFileName(e.FullPath);
errorlocation = "move file";
string targetFilePath = Path.Combine(targetDir, fileName);

// Replace any existing file at the destination, then move the new one in.
if (File.Exists(targetFilePath))
{
    File.Delete(targetFilePath);
}
File.Move(e.FullPath, targetFilePath);
dir refers to an object with information about the directory the file was loaded into. e is the FileSystemEventArgs. targetDir is read from the directory's settings in a custom configuration block in the app.config that tells the service where to copy the new files.
I didn't include the code here, but I know it's failing on the File.Move (the last line above), thanks to some EventLog entries I added to trace the steps.
Any idea as to why the move fails soon after the service startup, but works fine later?
Basic overview of the process in case it sheds some light: external vendors FTP us a number of files each day. When a file comes in, my code identifies who the file is coming from based on the FTP directory and then loads settings to pass on to the SSIS jobs that parse and save the files. There are maybe a dozen or so directories being monitored right now, each of which has its own configuration setting for the SSIS job. Is it possible that the system gets confused at startup and just needs some time to populate all the settings? Each source directory does have its own FileSystemWatcher on it.
Thanks for your help.
The first question I'd answer is: what are the values of these when it fails?
e.FullPath
targetDir
fileName
Chances are one of those values isn't what you expect.
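A quick way to capture them is to log the exact inputs at the point of failure (a sketch; the event source name is hypothetical, and string.Format is used since the project targets .NET 3.5):
try
{
    File.Move(e.FullPath, targetFilePath);
}
catch (Exception ex)
{
    // Record the exact inputs alongside the failure so the bad value is visible.
    // Requires: using System.Diagnostics;
    EventLog.WriteEntry("FtpDirectoryMonitor", string.Format(
        "Move failed: FullPath='{0}', targetDir='{1}', fileName='{2}': {3}",
        e.FullPath, targetDir, fileName, ex));
    throw;
}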
I'm marking this answered because the problem went away. We haven't changed anything in the code, but it now works immediately after a restart. The best theory we have is this: since I posted this, the client I was working for moved offices, and as part of the migration a lot of system and network policies were updated and server settings were tweaked for the new environment. It's likely that one (or more) of those changes fixed this issue.
Further support for this theory: prior to the move, my development VM could not run web browsers (I'd click to launch a browser and nothing would happen; sometimes it would appear briefly in Task Manager and then disappear). After the office move, this problem no longer occurs.
So it was likely some network setting somewhere that caused issues. Sorry I can't be more specific.