Get total VRAM using the OpenHardwareMonitor NuGet package - C#

I am currently trying to get the value of a GPU's total VRAM using the NuGet package OpenHardwareMonitor.
I know that it is possible to get the value through the package; however, I have been trying for quite a while and have not found the specific code for completing the task.
I am not looking for an answer that gets the total VRAM by any other means, such as WMI. I am only looking for an answer using OpenHardwareMonitor.
If you have a solution to this problem, it would be greatly appreciated!

The problem is that the NuGet package is built from an older version of the code. In the meantime, additional sensors have been added that include details about total, free and used GPU memory (at least for NVIDIA GPUs). See this diff.
If that package ever gets updated, you should be able to find the memory details in the list of sensors:
var computer = new Computer();
computer.GPUEnabled = true;
computer.Open();
// Find the first NVIDIA GPU and read its "GPU Memory Total" sensor
var gpu = computer.Hardware.First(x => x.HardwareType == HardwareType.GpuNvidia);
var totalVideoRamInMB = gpu.Sensors.First(x => x.Name.Equals("GPU Memory Total")).Value / 1024;
computer.Close();
Until then, a workaround would be to extract the memory information from the GetReport() result, where the GPU memory information looks like this:
Memory Info
Value[0]: 2097152
Value[1]: 2029816
Value[2]: 0
Value[3]: 8221004
Value[4]: 753168
Where Value[0] is the total GPU memory and Value[4] is the amount of free GPU memory, both in kB. So with some regex magic (using System.Text.RegularExpressions), we can extract that information:
var pattern = @"Memory Info.*Value\[0\]:\s*(?<total>[0-9]+).*Value\[4\]:\s*(?<free>[0-9]+)";
var computer = new Computer();
computer.GPUEnabled = true;
computer.Open();
var gpu = computer.Hardware.First(x => x.HardwareType == HardwareType.GpuNvidia);
var report = gpu.GetReport();
var match = Regex.Match(report, pattern, RegexOptions.Singleline);
var totalVideoRamInMB = float.Parse(match.Groups["total"].Value) / 1024;
var freeVideoRamInMB = float.Parse(match.Groups["free"].Value) / 1024;
computer.Close();
Note that OpenHardwareMonitor only implements GPU memory information for NVIDIA GPUs.
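If you are unsure which sensors the package version you have actually exposes, a quick way to check is to dump every sensor name and value. A minimal sketch using the same Computer API as above (the names printed are whatever your OpenHardwareMonitor build reports, so don't hard-code against this output):
var computer = new Computer();
computer.GPUEnabled = true;
computer.Open();
foreach (var hardware in computer.Hardware)
{
    hardware.Update(); // refresh sensor values before reading them
    foreach (var sensor in hardware.Sensors)
    {
        // Prints e.g. "GpuNvidia / GPU Core: 45 (Temperature)"
        Console.WriteLine($"{hardware.HardwareType} / {sensor.Name}: {sensor.Value} ({sensor.SensorType})");
    }
}
computer.Close();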

Related

Google Text-To-Speech latency

I am creating a real-time voice application that uses the Google Text-To-Speech service. However, I am getting latencies of between 600 and 1,100 ms, which is far too slow for my application. The audio is only around 3 seconds long; how can I improve this? (That latency is a measure of how long it takes from sending the request to receiving the audio.)
UPDATE
The code I am using is:
// I call this at the start of my program
TTSclient = TextToSpeechClient.Create();

// This is the method that I call every time I make a TTS call in my program
public static Google.Protobuf.ByteString MakeTTS(string text)
{
    SynthesisInput input = new SynthesisInput
    {
        Text = text
    };
    VoiceSelectionParams voice = new VoiceSelectionParams
    {
        LanguageCode = "en-AU",
        Name = "en-AU-Wavenet-A"
    };
    AudioConfig config = new AudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        SpeakingRate = 0.9
    };
    var TTSresponse = TTSclient.SynthesizeSpeech(new SynthesizeSpeechRequest
    {
        Input = input,
        Voice = voice,
        AudioConfig = config
    });
    return TTSresponse.AudioContent;
}
Thanks
I recommend first checking the latency median by API method on the metrics page of the TTS API. If you see there that the latency is between 600 and 1,100 ms, then I don't see much you can do, because all requests are handled synchronously and, since this is a shared resource, the SLA for those APIs only covers availability, not latency.
If the results you get there are much lower, then I can only think of two things that may be slowing down your results: the network's own latency, or some additional processing being done. If the latter is the case, you would have to use trial and error with different request settings (for example, I would check whether specifying a device profile, a feature that is currently in beta, results in a slightly slower response).
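To separate those two causes, it can help to time the call on the client and compare it against the server-side median from the metrics page. A minimal sketch, reusing the MakeTTS method from the question:
using System.Diagnostics;

var watch = Stopwatch.StartNew();
var audio = MakeTTS("Hello, this is a latency test.");
watch.Stop();

// If this end-to-end number is consistently much higher than the
// server-side median, the gap is mostly network latency rather than synthesis time.
Console.WriteLine($"TTS round trip: {watch.ElapsedMilliseconds} ms");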

Octopus client, getting version from project name in C#

First off, I am completely new to the Octopus client; I used it for the first time just before posting this.
So, I've been landed with this project to update the version number on a webpage monitoring some of our Octopus-deployed projects. I have been looking around the Octopus client and not really gotten anywhere. The best I have so far is:
OctopusServerEndpoint endPoint = new OctopusServerEndpoint(server, apiKey);
OctopusRepository repo = new OctopusRepository(endPoint);
var releases = repo.Releases.FindAll();
From these releases I can get the ProjectId and even the Version; the issue is that releases is 600 items strong and I am only looking for 15 of them.
The existing code I have to work from used to parse the version from local files, so that is all out the window. Also, the existing code only deals with the actual names of the projects, like "AWOBridge", not their ProjectId, which is "Projects-27".
Right now my only option is to manually write up a key list or map to correlate the names I have with the IDs in the Octopus client, which I would of course rather not do, since it is not very extensible or good code practice in my opinion.
So if anyone has any idea how to use the names directly with the Octopus client and get the version number from that, I would very much appreciate it.
I'll be getting down into octopus client while waiting. Let's see if I beat you to it!
Guess I beat you to it!
I'll just leave an answer here if anyone ever has the same problem.
I ended up using the dashboard to get what I needed:
OctopusServerEndpoint endPoint = new OctopusServerEndpoint(server, apiKey);
OctopusRepository repo = new OctopusRepository(endPoint);
DashboardResource dash = repo.Dashboards.GetDashboard();
List<DashboardItemResource> items = dash.Items;
List<DashboardProjectResource> projs = dash.Projects;

// Resolve the project ID from its name, then find the current deployment for it
var projID = projs.Find(x => x.Name == projectName).Id;
DashboardItemResource item = items.Find(x => x.ProjectId == projID && x.IsCurrent);
The dashboard is great since it contains all the info that the web dashboard shows, so you can use Project, Release, Deployment and Environment with all the information they contain.
Hope this helps someone in the future!
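To pull the actual version number out of the item found above (the original goal), the dashboard item carries it directly; assuming the Octopus.Client version in use exposes the ReleaseVersion property on DashboardItemResource:
// The version of the release currently deployed for that project
var version = item.ReleaseVersion;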
I'm using LINQPad to run C# snippets for Octopus automation with the Octopus Client library, and I came up with the following to get any version of a project using a regular expression pattern. It works quite well if you use pre-release semantic versioning.
For example, to get the latest release for a project:
var project = Repo.Projects.FindByName("MyProjectName");
var release = GetReleaseForProject(project);
To get a specific release that has 'rc1' in the version, for example (also useful if you use the source code branch name in the version published to Octopus):
var release = GetReleaseForProject(project, "rc1");
public ReleaseResource GetReleaseForProject(ProjectResource project, string versionPattern = "")
{
    // Create a compiled regex expression to use for the search
    var regex = new Regex(versionPattern, RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);
    var releases = Repo.Projects.GetReleases(project);
    if (!string.IsNullOrWhiteSpace(versionPattern) && !releases.Items.Any(r => regex.IsMatch(r.Version)))
    {
        return null;
    }
    // The Any() guard above guarantees First() won't throw when a pattern is given
    return !string.IsNullOrWhiteSpace(versionPattern)
        ? releases.Items.First(r => regex.IsMatch(r.Version))
        : releases.Items.First();
}

OleDbAdapter performance issue

We are currently using OleDb in an older application at our company.
I have started profiling the application, and dotTrace tells me that this code is one of the bottlenecks. In total it takes about 18 s to execute (about 6 ms on average per call).
m_DataSet = new DataSet("CommandExecutionResult");
m_DataAdapter.SelectCommand = m_OleDbCommand;
m_DataAdapter.Fill(m_DataSet, "QueryResult"); // <-- bottleneck
ReturnValue = m_DataSet.Tables[0].Copy();
m_InsertedRecordId = -1;
m_EffectedRecords = m_DataSet.Tables[0].Rows.Count;
I know there are probably ways to reduce the number of queries. BUT is there a way to get a DataTable from an Access database without using the DataAdapter?
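If the goal is just to avoid the DataAdapter, you can load a DataTable straight from a data reader. A minimal sketch, reusing m_OleDbCommand from the snippet above (whether this is actually faster would need profiling, but it skips the intermediate DataSet and the Copy() call):
// Load the result set directly from an OleDbDataReader, no DataAdapter/DataSet involved
var returnValue = new DataTable("QueryResult");
using (var reader = m_OleDbCommand.ExecuteReader())
{
    returnValue.Load(reader); // infers the schema and pulls all rows
}
m_EffectedRecords = returnValue.Rows.Count;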

Roslyn slow startup time

I've noticed that the startup time for Roslyn parsing/compilation is a fairly significant one-time cost. EDIT: I am using the Roslyn CTP MSI (the assembly is in the GAC). Is this expected? Is there any workaround?
Running the code below takes almost the same amount of time with 1 iteration (~3 seconds) as with 300 iterations (~3 seconds).
[Test]
public void Test()
{
    var iters = 300;
    foreach (var i in Enumerable.Range(0, iters))
    {
        // Parse the source file using Roslyn
        SyntaxTree syntaxTree = SyntaxTree.ParseText(@"public class Foo" + i + @" { public void Exec() { } }");
        // Add all the references we need for the compilation
        var references = new List<MetadataReference>();
        references.Add(new MetadataFileReference(typeof(int).Assembly.Location));
        var compilationOptions = new CompilationOptions(outputKind: OutputKind.DynamicallyLinkedLibrary);
        // Note: using a fixed assembly name, which doesn't matter as long as we don't expect cross references of generated assemblies
        var compilation = Compilation.Create("SomeAssemblyName", compilationOptions, new[] { syntaxTree }, references);
        // Generate the assembly into a memory stream
        var memStream = new MemoryStream();
        // If we comment out from this line and down, the runtime drops to ~0.5 seconds
        EmitResult emitResult = compilation.Emit(memStream);
        // ToArray (not GetBuffer) so we don't hand the stream's unused capacity to Assembly.Load
        var asm = Assembly.Load(memStream.ToArray());
        var type = asm.GetTypes().Single(t => t.Name == "Foo" + i);
    }
}
I think one issue is using a memory stream; instead, you should try using a dynamic module and ModuleBuilder. Overall the code executes faster but still has a heavier first-load scenario. I'm pretty new to Roslyn myself, so I'm not sure why this is faster, but here is the changed code.
var iters = 300;
foreach (var i in Enumerable.Range(0, iters))
{
    // Parse the source file using Roslyn
    SyntaxTree syntaxTree = SyntaxTree.ParseText(@"public class Foo" + i + @" { public void Exec() { } }");
    // Add all the references we need for the compilation
    var references = new List<MetadataReference>();
    references.Add(new MetadataFileReference(typeof(int).Assembly.Location));
    var compilationOptions = new CompilationOptions(outputKind: OutputKind.DynamicallyLinkedLibrary);
    // Note: using a fixed assembly name, which doesn't matter as long as we don't expect cross references of generated assemblies
    var compilation = Compilation.Create("SomeAssemblyName", compilationOptions, new[] { syntaxTree }, references);
    var assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
        new System.Reflection.AssemblyName("CustomerA"),
        System.Reflection.Emit.AssemblyBuilderAccess.RunAndCollect);
    var moduleBuilder = assemblyBuilder.DefineDynamicModule("MyModule");
    System.Diagnostics.Stopwatch watch = new System.Diagnostics.Stopwatch();
    watch.Start();
    // If we comment out from this line and down, the runtime drops to ~0.5 seconds
    var emitResult = compilation.Emit(moduleBuilder);
    watch.Stop();
    System.Diagnostics.Debug.WriteLine(watch.ElapsedMilliseconds);
    if (emitResult.Diagnostics.LongCount() == 0)
    {
        var type = moduleBuilder.GetTypes().Single(t => t.Name == "Foo" + i);
        System.Diagnostics.Debug.Write(type != null);
    }
}
Using this technique, the first compilation took just 96 milliseconds; subsequent iterations take around 3-15 ms. So I think you could be right about the first-load scenario adding some overhead.
Sorry I can't explain why it's faster! I'm just researching Roslyn myself and will do more digging later tonight to see if I can find any more evidence of what the ModuleBuilder provides over the MemoryStream.
I came across the same issue using the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package of ASP.NET. It turns out this package launches csc.exe, which uses VBCSCompiler.exe as a compilation server. By default the VBCSCompiler.exe server lives for 10 seconds and its boot time is about 3 seconds. This explains why it takes about the same time to run your code once or multiple times. It seems Microsoft uses this server in Visual Studio as well, to avoid paying an extra boot time each time you run a compilation.
With this package running, you can monitor your processes and will find a command line that looks like csc.exe /keepalive:10.
The nice part is that if this server stays alive (even between two sessions of your application), you get fast compilation every time.
Unfortunately, the Roslyn package is not really customizable, and the easiest way I found to change this keepalive constant is to use reflection to set non-public field values. On my side, I set it to a full day, since that keeps the same server alive even if I close and restart my application.
/// <summary>
/// Force the compiler to live for an entire day to avoid paying for the boot time of the compiler.
/// </summary>
private static void SetCompilerServerTimeToLive(CSharpCodeProvider codeProvider, TimeSpan timeToLive)
{
    const BindingFlags privateField = BindingFlags.NonPublic | BindingFlags.Instance;
    var compilerSettingField = typeof(CSharpCodeProvider).GetField("_compilerSettings", privateField);
    var compilerSettings = compilerSettingField.GetValue(codeProvider);
    var timeToLiveField = compilerSettings.GetType().GetField("_compilerServerTimeToLive", privateField);
    timeToLiveField.SetValue(compilerSettings, (int)timeToLive.TotalSeconds);
}
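A hypothetical call site, for illustration (since the method pokes at private fields, it may break with other versions of the package):
var codeProvider = new CSharpCodeProvider();
// Keep the VBCSCompiler.exe compilation server alive for a day instead of the default 10 seconds
SetCompilerServerTimeToLive(codeProvider, TimeSpan.FromDays(1));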
When you call Compilation.Emit(), that is the first time you actually need metadata, so the metadata file access occurs then. After that, it's cached. Though that should not account for 3 seconds just for mscorlib.
tl;dr: NGEN-ing the Roslyn DLLs shaves 1.5 s off the initial compilation/execution time (in my case, from ~2 s to ~0.5 s).
Investigated this just now.
With a brand new console application and a NuGet reference to Microsoft.CodeAnalysis.Scripting, the initial execution of a small snippet ("1+2") took about 2 s, while subsequent ones were a lot faster, around 80 ms (still a bit high for my taste, but that's a different topic).
PerfView revealed that the delay was predominantly due to jitting:
Microsoft.CodeAnalysis.CSharp.dll: 941 ms (3,205 methods jitted)
Microsoft.CodeAnalysis.dll: 426 ms (1,600 methods jitted)
I used ngen on Microsoft.CodeAnalysis.CSharp.dll (making sure to specify /ExeConfig:MyApplication.exe because of the binding redirects in app.config) and got a nice performance improvement: the first-execution time fell to ~580 ms.
This of course would need to be done on end-user machines. In my case, I'm using WiX as the installer for my software, and it has support for NGEN-ing files at install time.

Counter is single instance, instance name 'WebDev.WebServer40' is not valid for this counter category

I'm trying to use a Memory Performance Counter:
System.Diagnostics.PerformanceCounter theMemCounter =
    new System.Diagnostics.PerformanceCounter("Memory", "Available MBytes",
        System.Diagnostics.Process.GetCurrentProcess().ProcessName, true);
var memStart = theMemCounter.NextValue();
But on the second line I'm getting the following error:
Counter is single instance, instance name 'WebDev.WebServer40' is not valid for this counter category.
What is the problem?
Ottoni, I don't think you can specify a process for this particular performance counter, since it monitors the available memory of the whole system.
Maybe the perf counter you're looking for is "# Bytes in all Heaps" (or some other counter in the ".NET CLR Memory" category), which can monitor memory usage for all .NET applications or for a specific one.
More info on this category here: http://msdn.microsoft.com/en-us/library/x2tyfybc.aspx
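For illustration, a minimal sketch reading that counter for the current process (assuming the process appears under its plain process name in the category's instance list):
using System.Diagnostics;

// ".NET CLR Memory" is a multi-instance category, so a process instance name is valid here
var clrMemCounter = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps",
    Process.GetCurrentProcess().ProcessName, true);
var managedHeapMB = clrMemCounter.NextValue() / 1024 / 1024;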
--EDIT
Solution:
System.Diagnostics.PerformanceCounter theMemCounter =
    new System.Diagnostics.PerformanceCounter("Process", "Working Set",
        System.Diagnostics.Process.GetCurrentProcess().ProcessName);
var memStart = theMemCounter.NextValue() / 1024 / 1024;
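For completeness: the original error occurs because "Memory" is a single-instance category, so if you do want the machine-wide available memory, just omit the instance name entirely:
// No instance name: "Memory" / "Available MBytes" is system-wide
var availableMemCounter = new System.Diagnostics.PerformanceCounter("Memory", "Available MBytes");
var availableMB = availableMemCounter.NextValue();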
