I'm trying to create a DataDisk in Azure with C#, but nothing seems to work. I was, however, able to do it with PowerShell.
I'm using the Azure SDK.
Below is the piece of code that is supposed to create a disk and attach it:
var vm = azure.VirtualMachines.List().Where(x => x.Name.ToLower() == vmName.ToLower()).FirstOrDefault();
var disk = new DataDisk(
    vm.StorageProfile.DataDisks.Count + 1,
    DiskCreateOptionTypes.Empty,
    vmName + "_disk");
disk.DiskSizeGB = 128;
disk.Validate();
vm.StorageProfile.DataDisks.Add(disk);
vm.StorageProfile.Validate();
vm.Update();
I don't get any errors, but nothing is created.
Could someone tell me what I'm doing wrong?
You're missing the Apply() call after vm.Update(). Use vm.Update().Apply() instead of vm.Update() in your code.
Here is my test code, which works fine:
// your other code
DataDisk disk = new DataDisk(vm.StorageProfile.DataDisks.Count + 1,
    DiskCreateOptionTypes.Empty,
    "ivandisk222");
disk.DiskSizeGB = 15;
disk.Validate();
vm.StorageProfile.DataDisks.Add(disk);
vm.StorageProfile.Validate();
vm.Update().Apply();
After the code completes, I can see that the data disk has been added to the VM.
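As an aside, the fluent SDK can also add the disk in a single chain. A minimal sketch, assuming the VM uses managed disks (128 is the size in GB):

var vm = azure.VirtualMachines
    .List()
    .FirstOrDefault(x => x.Name.Equals(vmName, StringComparison.OrdinalIgnoreCase));

vm.Update()
    .WithNewDataDisk(128)
    .Apply();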
I am using FireSharp for my C# WinForms app. The database is also connected to my ReactJS website, which deals with the same information.
I have noticed that when I make .SetAsync calls from the website and then log in to my WinForms app, the WinForms app automatically re-runs the last action I performed on the website, which is a .SetAsync() call that adds some user information to a list of other users' information. Now it will not stop: any time I log on to my C# app, it runs again.
It makes me think there is some kind of queue of pending operations in FireSharp?
Here is my React code. From what I can tell, it is nothing out of the ordinary:
async function handleSubmit(e) {
    e.preventDefault()
    var date = moment().format("MM/DD/YYYY")
    setError("")
    setLoading(true)
    // grab user info first then use that.
    await firebaseApp.database().ref("Users/" + currentUser.uid + "/UserData").on('value', snapshot => {
        if (snapshot.val() != null) {
            setContactObjects({
                ...snapshot.val()
            })
            firebaseApp.database().ref("Projects/" + projectGUIDRef.current.value + "/queueList/" + userIdRef.current.value).set({
                "EntryTimeStamp": date + " " + moment().format("hh:mm:ss a"),
                "IsSyncing": false,
                "UserId": userIdRef.current.value,
                "UserName": usernameRef.current.value,
            })
        }
    })
    history.push("/Demo")
    setLoading(false)
}
Here is the C# WinForms code where this executes. For some reason, when it runs, it also updates the EntryTimeStamp field written by the React code and re-sets all the information, even if I delete it. The same thing happens if I use .Set().
updateLastLogin2(authLink);

private async void updateLastLogin2(FirebaseAuthLink authLink)
{
    IFirebaseConfig config = new FireSharp.Config.FirebaseConfig
    {
        AuthSecret = this.authLink.FirebaseToken,
        BasePath = Command.dbURL,
    };
    IFirebaseClient client = new FireSharp.FirebaseClient(config);
    string newDateTime = DateTime.Now.ToString();
    if (authLink.User.DisplayName.Contains(adUserId) && authLink.User.DisplayName.Contains(adUserId))
    {
        await client.SetAsync("Users/" + this.authLink.User.LocalId + "/UserData/DateLastLogin", newDateTime);
    }
}
Any and all help is appreciated, I've been at this for a day and a half now.
I have never used FireSharp, but this is my guess:
You are calling await firebaseApp.database().ref("Users/" + currentUser.uid + "/UserData").on('value', ...) in your React code, and then in your C# you are calling client.SetAsync("Users/" + this.authLink.User.LocalId ...).
What happens is that the two ends keep triggering each other: the .on() listener stays attached and fires again every time that path is written, causing a loop.
In that case it's probably better to use once instead of on if you only need to read the value a single time.
In cases where you cannot use .once, you should use .off to detach the listener once you are done.
firebaseApp.database().ref("Users/" + currentUser.uid + "/UserData").once('value', ...)
You also shouldn't be using await here, since ref().on() registers a listener; it doesn't return a promise.
You should also move history.push("/Demo") into your Firebase database callback so it only runs after you have set the data.
I'm developing an API in one of my C# libraries to upload test result documents (such as log files, screenshots, or any zip file) from a local folder to the desired test case number under a test plan in the Test hub.
I'm working with TFS 2018.
Can anyone please help me with a code snippet to implement this functionality?
At present I'm able to establish the connection with the server using the lines of code below:
VssClientCredentials vssClientCred = new VssClientCredentials();
vssClientCred.Storage = new VssClientCredentialStorage();
VssConnection connection = new VssConnection(new Uri("TestHubServerURL"), vssClientCred);
TestManagementHttpClient tManageHttp = connection.GetClient<TestManagementHttpClient>();
TestResultDocument tdoc = new TestResultDocument();
TestResultDocument tRun = tManageHttp.PublishTestResultDocumentAsync(tdoc, ProjectName, TestRunID).Result;
But now I'm stuck: I can't find a way to implement PublishTestResultDocumentAsync. Or do I need to use some other way to implement this functionality?
I tried Googling but didn't find any helpful examples.
Thank you all in advance.
I guess you meant to upload the test result documents to the test run result under the Runs tab in the Test hub.
You can use the CreateTestRunAttachmentAsync method to upload the test result documents to their test run. See the example below:
string teamProjectCollectionUrl = "http://tfs2018:8080/tfs/DefaultCollection";
string Project = "projectName";
int runId = 1;  // the ID of the test run to attach to (placeholder)
// winCred: your VssCredentials (e.g. Windows credentials for on-prem TFS)
VssConnection _connection = new VssConnection(new Uri(teamProjectCollectionUrl), winCred);
TestManagementHttpClient tManageHttp = _connection.GetClient<TestManagementHttpClient>();
string path = "C:\\test\\image.png";
string stream = Convert.ToBase64String(File.ReadAllBytes(path));
TestAttachmentRequestModel att = new TestAttachmentRequestModel(stream, "pic.png", "", null);
var res = tManageHttp.CreateTestRunAttachmentAsync(att, Project, runId).Result;
If you want to upload the test result document to the result of a specific test case, you can use the CreateTestResultAttachmentAsync method:
int resultId = 1;  // the ID of the test result within the run (placeholder)
var res = tManageHttp.CreateTestResultAttachmentAsync(att, Project, runId, resultId).Result;
If you were trying to upload test result documents directly to the desired test case under the test plan in the Test hub, you probably need to use the CreateAttachmentAsync and UpdateWorkItemAsync methods in Microsoft.TeamFoundation.WorkItemTracking.WebApi, since a Test Case is a type of work item.
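A rough sketch of that approach (untested against TFS 2018; the test case ID, file name, and path below are placeholders, and connection is the VssConnection from your snippet):

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.WebApi.Patch;
using Microsoft.VisualStudio.Services.WebApi.Patch.Json;

var witClient = connection.GetClient<WorkItemTrackingHttpClient>();
int testCaseId = 1234;  // the work item ID of the test case (placeholder)

// Upload the file to the attachment store.
AttachmentReference attachment;
using (var fileStream = System.IO.File.OpenRead("C:\\test\\log.txt"))
{
    attachment = witClient.CreateAttachmentAsync(fileStream, fileName: "log.txt").Result;
}

// Link the uploaded attachment to the test case work item.
var patch = new JsonPatchDocument
{
    new JsonPatchOperation
    {
        Operation = Operation.Add,
        Path = "/relations/-",
        Value = new
        {
            rel = "AttachedFile",
            url = attachment.Url,
            attributes = new { comment = "Test result document" }
        }
    }
};
var updated = witClient.UpdateWorkItemAsync(patch, testCaseId).Result;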
I am trying to test some methods in my library that use the Google API, more specifically the Cloud Vision API. When I reference the library in LINQPad I get an error:
FileNotFoundException: Error loading native library. Not found in any of the possible locations: C:\Users\\AppData\Local\Temp\LINQPad5_dgzgzeqb\shadow_fxuunf\grpc_csharp_ext.x86.dll,C:\Users\\AppData\Local\Temp\LINQPad5_dgzgzeqb\shadow_fxuunf\runtimes/win/native\grpc_csharp_ext.x86.dll,C:\Users\\AppData\Local\Temp\LINQPad5_dgzgzeqb\shadow_fxuunf../..\runtimes/win/native\grpc_csharp_ext.x86.dll
I have tried copying the DLL into all of these locations, as well as into my LINQPad Plugins and LINQPad folders. I have tried "Cancel and Clear Query", thinking I needed to reset it. I have also closed and reopened LINQPad, thinking maybe it re-scans the directory on load. None of this has worked. Has LINQPad changed where to put DLLs, or am I missing something?
I am using Google.Cloud.Vision.V1
var file = new byte[128];
var _settingsCon = new SettingConnector();
var apiKey = Task.Run(() => _settingsCon.Get("Google:Key")).Result.Value;
var credential = Google.Apis.Auth.OAuth2.GoogleCredential.FromJson(apiKey);
var channel = new Grpc.Core.Channel(
    ImageAnnotatorClient.DefaultEndpoint.ToString(),
    credential.ToChannelCredentials());
var builder = new StringBuilder();
var image = Image.FromBytes(file);
var client = ImageAnnotatorClient.Create(channel);
var response = client.DetectDocumentText(image);
foreach (var page in response.Pages)
{
    foreach (var block in page.Blocks)
    {
        foreach (var paragraph in block.Paragraphs)
        {
            builder.Append(paragraph);
        }
    }
}
builder.ToString().Dump();
This is essentially the function. The file is a dummy byte array that would be passed in; it shouldn't matter, because the code can't make the request anyway. Dump() is used instead of a return.
I am trying to copy a blob from one location to another, and it seems like the method I was using (StartCopyFromBlob) is obsolete. Everything I've read says I should use StartCopy. However, when I try this it doesn't copy the blob; I just get a 404 error at the destination.
I don't seem to be able to find any documentation for this. Can anyone advise me on how to do this in the latest version of the API, or point me in the direction of some docs?
Uri uploadUri = new Uri(destinationLocator.Path);
string assetContainerName = uploadUri.Segments[1];
CloudBlobContainer assetContainer =
    cloudBlobClient.GetContainerReference(assetContainerName);
string fileName = HttpUtility.UrlDecode(Path.GetFileName(model.BlockBlob.Uri.AbsoluteUri));
var sourceCloudBlob = mediaBlobContainer.GetBlockBlobReference(fileName);
sourceCloudBlob.FetchAttributes();
if (sourceCloudBlob.Properties.Length > 0)
{
    IAssetFile assetFile = asset.AssetFiles.Create(fileName);
    var destinationBlob = assetContainer.GetBlockBlobReference(fileName);
    destinationBlob.DeleteIfExists();
    destinationBlob.StartCopyFromBlob(sourceCloudBlob);
    destinationBlob.FetchAttributes();
    if (sourceCloudBlob.Properties.Length != destinationBlob.Properties.Length)
        model.UploadStatusMessage += "Failed to copy as Media Asset!";
}
I'm just posting my comment as the answer to make it easier to see.
It wasn't the access level of the container, and it wasn't anything to do with StartCopy either. It turned out to be these lines of code:
var mediaBlobContainer = cloudBlobClient.GetContainerReference(cloudBlobClient.BaseUri + "temporarymedia");
mediaBlobContainer.CreateIfNotExists();
Apparently I shouldn't be supplying cloudBlobClient.BaseUri, just the container name temporarymedia:
var mediaBlobContainer = cloudBlobClient.GetContainerReference("temporarymedia");
There was no relevant error message, though. Hopefully this will save another Azure newbie some time in the future.
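For anyone else hitting this, here is a minimal sketch of the corrected flow with the same legacy storage client. Note that StartCopy only starts a server-side copy, so the destination may not be usable until the copy state reports success:

var mediaBlobContainer = cloudBlobClient.GetContainerReference("temporarymedia");  // container name only, no BaseUri
mediaBlobContainer.CreateIfNotExists();

var sourceCloudBlob = mediaBlobContainer.GetBlockBlobReference(fileName);
var destinationBlob = assetContainer.GetBlockBlobReference(fileName);

destinationBlob.StartCopy(sourceCloudBlob);

// Poll the copy state before relying on the destination blob.
destinationBlob.FetchAttributes();
while (destinationBlob.CopyState.Status == CopyStatus.Pending)
{
    System.Threading.Thread.Sleep(500);
    destinationBlob.FetchAttributes();
}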
var speechEngine = new SpVoiceClass();
SetVoice(speechEngine, job.Voice);
var fileMode = SpeechStreamFileMode.SSFMCreateForWrite;
var fileStream = new SpFileStream();
try
{
    fileStream.Open(filePath, fileMode, false);
    speechEngine.AudioOutputStream = fileStream;
    speechEngine.Speak(job.Script, SpeechVoiceSpeakFlags.SVSFPurgeBeforeSpeak | SpeechVoiceSpeakFlags.SVSFDefault); //TODO: Change to XML
    //Wait for 15 minutes only
    speechEngine.WaitUntilDone((uint)new TimeSpan(0, 15, 0).TotalMilliseconds);
}
finally
{
    fileStream.Close();
}
This exact code works in a WinForms app, but when I run it inside a web service I get the following:
System.Runtime.InteropServices.COMException was unhandled
Message="Exception from HRESULT: 0x80045003"
Source="Interop.SpeechLib"
ErrorCode=-2147201021
Does anyone have any ideas what might be causing this error? The error code means
SPERR_UNSUPPORTED_FORMAT
For completeness, here is the SetVoice method:
void SetVoice(SpVoiceClass speechEngine, string voiceName)
{
    var voices = speechEngine.GetVoices(null, null);
    for (int index = 0; index < voices.Count; index++)
    {
        var currentToken = (SpObjectToken)voices.Item(index);
        if (currentToken.GetDescription(0) == voiceName)
        {
            speechEngine.SetVoice((ISpObjectToken)currentToken);
            return;
        }
    }
    throw new Exception("Voice not found: " + voiceName);
}
I have given full access to USERS on the folder C:\Temp where the file is to be written. Any help would be appreciated!
I don't think System.Speech works in a Windows service. It looks like there is a dependency on the Shell, which isn't available to services. Try interop with SAPI's C++ interfaces; the classes in System.Runtime.InteropServices may help with that.
Our naming convention requires us to use a non-standard file extension. This works fine in a WinForms app, but failed on our web server. Changing the file extension back to .wav solved this error for us.
Make sure you explicitly set the format on the SpFileStream object. ISpAudio::SetState (which gets called in a lower layer from speechEngine.Speak) will return SPERR_UNSUPPORTED_FORMAT if the format isn't supported.
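For example, something along these lines before opening the stream (the 22 kHz 16-bit mono format is just an illustration; pick whatever your voices support):

var fileStream = new SpFileStream();

// Explicitly set the output format before opening the stream for writing.
var format = new SpAudioFormat();
format.Type = SpeechAudioFormatType.SAFT22kHz16BitMono;
fileStream.Format = format;

fileStream.Open(filePath, SpeechStreamFileMode.SSFMCreateForWrite, false);
speechEngine.AudioOutputStream = fileStream;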
I just got the web service to spawn a console app to do the processing. PITA :-)
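In case it helps someone else, the hand-off can be as simple as the sketch below, where SpeechWorker.exe is a hypothetical console app and scriptPath, outputWavPath, and voiceName are placeholder variables:

var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\Tools\SpeechWorker.exe",  // hypothetical worker exe
    Arguments = "\"" + scriptPath + "\" \"" + outputWavPath + "\" \"" + voiceName + "\"",  // placeholder arguments
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var process = System.Diagnostics.Process.Start(psi))
{
    // Give the worker the same 15-minute budget the in-process call used.
    if (!process.WaitForExit((int)TimeSpan.FromMinutes(15).TotalMilliseconds))
    {
        process.Kill();
    }
}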