I am creating a real-time voice application that uses the Google Text-to-Speech service. However, I am getting latencies of between 600-1100 ms, which is far too slow for my application. The audio is only around 3 seconds long; how can I improve this? (That latency is a measure of how long it takes from sending the request to receiving the audio.)
UPDATE
The code I am using is:
// I call this at the start of my program
TTSclient = TextToSpeechClient.Create();

// This is the method that I call every time I make a TTS call in my program
public static Google.Protobuf.ByteString MakeTTS(string text)
{
    SynthesisInput input = new SynthesisInput
    {
        Text = text
    };
    VoiceSelectionParams voice = new VoiceSelectionParams
    {
        LanguageCode = "en-AU",
        Name = "en-AU-Wavenet-A"
    };
    AudioConfig config = new AudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        SpeakingRate = 0.9
    };
    var TTSresponse = TTSclient.SynthesizeSpeech(new SynthesizeSpeechRequest
    {
        Input = input,
        Voice = voice,
        AudioConfig = config
    });
    return TTSresponse.AudioContent;
}
Thanks
I recommend first checking the latency median by API method on the metrics page of the TTS API. If you see there that the latency is between 600 and 1,100 ms, then I don't see much you can do, because all requests are done synchronously, and since this is a shared resource, the SLA for those APIs only covers availability, not latency.
If the results you get there are much lower, then I can only think of two things that may slow down your results: the network's own latency or any additional processing being done. If the latter is the case, then you would have to trial-and-error different settings for your request (for example, I would wonder whether specifying a device profile, since this feature is currently in beta, would result in a slightly slower response).
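If you want to measure whether a particular setting adds latency, something along these lines may help. It is only a sketch that reuses TTSclient from the question, and the "handset-class-device" effects profile is just an example of a setting to toggle:

// Minimal sketch for A/B-testing request settings, reusing TTSclient and the
// building blocks from the question. "handset-class-device" is only an example
// effects profile; call the method with and without it and compare the timings.
public static long TimeTTS(string text, bool useDeviceProfile)
{
    var config = new AudioConfig
    {
        AudioEncoding = AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        SpeakingRate = 0.9
    };
    if (useDeviceProfile)
        config.EffectsProfileId.Add("handset-class-device"); // the beta feature under test

    var sw = System.Diagnostics.Stopwatch.StartNew();
    TTSclient.SynthesizeSpeech(new SynthesizeSpeechRequest
    {
        Input = new SynthesisInput { Text = text },
        Voice = new VoiceSelectionParams { LanguageCode = "en-AU", Name = "en-AU-Wavenet-A" },
        AudioConfig = config
    });
    sw.Stop();
    return sw.ElapsedMilliseconds;
}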
I do a query on a path, then add new data on the same path, then read again with the same query, and the new data is not in the result. I can see the new data in my Firebase console, and if I restart my app, it will show. It's like I'm reading from cached data. What is wrong?
public static void GetScores(string readDbPath)
{
    FirebaseDatabase.DefaultInstance.GetReference(readDbPath).OrderByChild("score")
        .LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT)
        .GetValueAsync().ContinueWith(task =>
        {
            if (task.IsFaulted)
            {
                // Handle the error...
                Debug.LogError("FirebaseDatabase task.IsFaulted" + task.Exception.ToString());
            }
            else if (task.IsCompleted)
            {
                DataSnapshot snapshot = task.Result;
                // Do something with snapshot...
                List<Score> currentScoreList = new List<Score>();
                foreach (var rank in snapshot.Children)
                {
                    var highscoreobject = rank.Value as Dictionary<string, System.Object>;
                    string userID = highscoreobject["userID"].ToString();
                    int score = int.Parse(highscoreobject["score"].ToString());
                    currentScoreList.Add(new Score(score, userID));
                }
                OnStatsDataQueryReceived.Invoke(currentScoreList); // caught by the leaderboard
            }
        });
}
It's very likely that you're using Firebase's disk persistence, which doesn't work well with Get calls. For a longer explanation of why that is, see my answer here: Firebase Offline Capabilities and addListenerForSingleValueEvent
So you'll have to choose: either use disk persistence, or use Get calls. The alternative to calling Get would be to monitor the ValueChanged event. In this case your callback will be invoked immediately when you change the value in the database. For more on this, see the Firebase documentation on listening for events.
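A minimal sketch of that listener approach, assuming the same readDbPath, Constants, and Debug helpers as in the question:

// Minimal sketch of the ValueChanged alternative. The callback fires once
// immediately with the current data and again whenever the data at the path changes.
FirebaseDatabase.DefaultInstance.GetReference(readDbPath)
    .OrderByChild("score")
    .LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT)
    .ValueChanged += (object sender, ValueChangedEventArgs args) =>
    {
        if (args.DatabaseError != null)
        {
            Debug.LogError(args.DatabaseError.Message);
            return;
        }
        // args.Snapshot always reflects the latest data at the path.
        Debug.Log("Children: " + args.Snapshot.ChildrenCount);
    };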
This post was deleted with the note that it was just additional info on my question. In fact it is the solution, which is to use GoOffline() / GoOnline().
Thanks Frank for the detailed answer.
I tried with
FirebaseDatabase.DefaultInstance.SetPersistenceEnabled(false)
but the problem stayed the same. Using listeners is not what I want, since every time a player sent a score, every player on the same path would receive refreshed data, so I'm worried about bandwidth costs and performance.
The best solution I just found is to call this before doing a get:
FirebaseDatabase.DefaultInstance.GoOnline();
then right after I get a response I set
FirebaseDatabase.DefaultInstance.GoOffline();
So far there is no performance hit that I can notice, and I get what I want: fresh data on each get. Plus, persistence still works if I go offline and then come back.
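For reference, a minimal sketch of the pattern, reusing the query from GetScores above (the method name is just for illustration):

// Minimal sketch of the GoOnline()/GoOffline() pattern around a one-shot get.
// Going online just before the read forces a fresh fetch; going offline once
// the response arrives keeps bandwidth usage low.
public static void GetFreshScores(string readDbPath)
{
    FirebaseDatabase.DefaultInstance.GoOnline();
    FirebaseDatabase.DefaultInstance.GetReference(readDbPath)
        .OrderByChild("score")
        .LimitToLast(Constants.FIREBASE_QUERY_ITEM_LIMIT)
        .GetValueAsync().ContinueWith(task =>
        {
            FirebaseDatabase.DefaultInstance.GoOffline(); // back offline after the response
            if (task.IsCompleted && !task.IsFaulted)
            {
                // ...parse task.Result exactly as in GetScores above...
            }
        });
}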
This program is an audio visualizer for an RGB keyboard that listens to Windows' default audio device. My audio setup is a bit more involved, and I use way more than just the default audio device. For instance, when I play music from Winamp it goes through the device Auxillary 1 (Synchronous Audio Router) instead of Desktop Input (Synchronous Audio Router), which I have set as Default. I'd like to be able to change the device that the program listens to for the visualization.
I found in the source where the audio device is declared; Lines 32-36 in CSCoreAudioInput.cs:
public void Initialize()
{
    MMDevice captureDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;

    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);
}
The way that I understand it from the documentation, the section MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console) is where Windows gives the application my default IMMEndpoint "Desktop Input."
How would I go about changing DefaultAudioEndpoint?
Further Reading shows a few ways to get an IMMDevice, with DefaultAudioEndpoint being one of them. It seems to me that I'd have to enumerate the devices and then separate out Auxillary 1 (Synchronous Audio Router) using PKEY_Device_FriendlyName. That's a bit much for me, as I have little to no C# experience. Is there an easier way to go about choosing a different endpoint? Am I on the right track, or am I missing the mark completely?
Also, what is the difference between MMDevice and IMMDevice? The source only seems to use MMDevice while all the Microsoft documentation references IMMDevice.
Thanks.
I DID IT!
I've found why the program uses MMDevice rather than IMMDevice. The developer has chosen to use the CSCore Library rather than Windows' own Core Audio API.
From continued reading of the CSCore MMDeviceEnumerator Documentation, it looks like I'll have to make a separate program that outputs all endpoints and their respective Endpoint ID Strings. Then I can substitute the DefaultAudioEndpoint method with the GetDevice(String id) method, where String id is the ID of whichever Endpoint I chose from the separate program.
To find the Endpoint I wanted, I wrote this short program to list all the info I wanted:
static void Main(string[] args)
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
    Console.WriteLine($"\nNumber of active Devices: {collection.GetCount()}");
    int i = 0;
    foreach (MMDevice device in collection)
    {
        Console.WriteLine($"\n{i} Friendly name: {device.FriendlyName}");
        Console.WriteLine($"Endpoint ID: {device.DeviceID}");
        i++;
    }
    Console.ReadKey();
}
This showed me that the Endpoint I wanted was item number 3 (2 in an array) on my list, and instead of using GetDevice(String id) I used ItemAt(int deviceIndex).
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render,DeviceState.Active);
MMDevice captureDevice = collection.ItemAt(2);
However in this case, the program was not using captureDevice to bring in the audio data. These were the magic lines:
_capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, i));
_capture.Initialize();
I found that WasapiLoopbackCapture uses Windows' default device unless changed, and the code was using DefaultAudioEndpoint to get the properties of the default device. So I added
_capture.Device = captureDevice;
//before
_capture.Initialize();
And now the program properly pulls the audio data off of my non-default audio endpoint.
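For reference, a minimal sketch of how the modified Initialize() could look, pieced together from the snippets above (the index 2 is specific to my machine, and using deviceFormat.Channels in place of the project's own channel variable is an assumption):

// Minimal sketch of the modified Initialize(), assembled from the snippets above.
// The index 2 is specific to my machine; pick yours from the listing program.
public void Initialize()
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
    MMDevice captureDevice = collection.ItemAt(2);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;

    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);

    _capture = new WasapiLoopbackCapture(100,
        new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, deviceFormat.Channels));
    _capture.Device = captureDevice; // point the loopback capture at the non-default endpoint
    _capture.Initialize();
}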
I had been asked to solve a similar type of problem this week. Although there are a few libraries that do this, I was specifically asked to do it for "non-ish" programmers, so I developed this in PowerShell.
Powershell default audio device changer - Github
Maybe you can alter it to your needs.
Hi, I have written a C# client/server application using the ZeroC Ice communication library (v3.4.2).
I am transferring a sequence of objects from the server, which are then displayed in the client in a tabular format. Simple enough.
I defined the following Slice types:
enum DrawType { All, Instant, Raffle };

struct TicketSoldSummary {
    int scheduleId;
    DrawType dType;
    string drawName;
    long startDate;
    long endDate;
    string winningNumbers;
    int numTicket;
    string status;
};

sequence<TicketSoldSummary> TicketSoldSummaryList;

interface IReportManager {
    [..]
    TicketSoldSummaryList getTicketSoldSummary(long startTime, long endTime);
};
When I call this method it usually works fine, but occasionally (approx. 25% of the time) the caller gets an Ice::MemoryLimitException. We are usually running 2-3 clients at a time.
I searched on the Internet for answers and was told to increase Ice.MessageSizeMax, which I did. I have increased MessageSizeMax right up to 2,000,000 KB, but it made no difference; I just did a test with 31,000 records (approximately 1.8 MB of data) and still get Ice::MemoryLimitException. 1.8 MB is not very big!
Am I doing something wrong or is there a bug in Zeroc Ice?
Thanks so much to anyone that can offer some help.
I believe MessageSizeMax needs to be configured on the client as well as the server side. Also, enable tracing with its maximum value (3) and check the size of the messages on the wire.
Turn on Ice.Warn.Connections on the server side and check the logs. Also make sure the client-side max message size gets applied correctly. I set Ice.MessageSizeMax on the client as below:
Ice.Properties properties = Ice.Util.createProperties();
properties.setProperty("Ice.MessageSizeMax", "2097152"); // 2 GB, expressed in KB
Ice.InitializationData initData = new Ice.InitializationData();
initData.properties = properties;
Ice.Communicator communicator = Ice.Util.initialize(initData);
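For completeness, a minimal sketch of the server side, assuming the server communicator is also built from InitializationData; the tracing and warning properties are only for diagnostics:

// Minimal server-side sketch: the same MessageSizeMax must be set before the
// server communicator is created, plus the diagnostic properties mentioned above.
Ice.Properties serverProps = Ice.Util.createProperties();
serverProps.setProperty("Ice.MessageSizeMax", "2097152"); // must match (or exceed) the client
serverProps.setProperty("Ice.Warn.Connections", "1");     // log connection warnings
serverProps.setProperty("Ice.Trace.Network", "3");        // maximum network tracing
Ice.InitializationData serverInitData = new Ice.InitializationData();
serverInitData.properties = serverProps;
Ice.Communicator serverCommunicator = Ice.Util.initialize(serverInitData);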
So I'm running this code
public static void ConvertToWma(string inFile, string outFile, string profileName)
{
    // Create a WMEncoder object.
    WMEncoder encoder = new WMEncoder();
    ManualResetEvent stopped = new ManualResetEvent(false);
    encoder.OnStateChange += delegate(WMENC_ENCODER_STATE enumState)
    {
        if (enumState == WMENC_ENCODER_STATE.WMENC_ENCODER_STOPPED)
            stopped.Set();
    };

    // Retrieve the source group collection.
    IWMEncSourceGroupCollection srcGrpColl = encoder.SourceGroupCollection;
    // Add a source group to the collection.
    IWMEncSourceGroup srcGrp = srcGrpColl.Add("SG_1");
    // Add an audio source to the source group.
    IWMEncSource srcAud = srcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
    srcAud.SetInput(inFile, "", "");

    // Specify a file object in which to save encoded content.
    IWMEncFile file = encoder.File;
    file.LocalFileName = outFile;

    // Choose a profile from the collection.
    IWMEncProfileCollection proColl = encoder.ProfileCollection;
    proColl.ProfileDirectory = AssemblyInformation.GetExecutingAssemblyDirectory();
    proColl.Refresh();
    IWMEncProfile pro;
    for (int i = 0; i < proColl.Count; i++)
    {
        pro = proColl.Item(i);
        if (pro.Name == profileName)
        {
            srcGrp.set_Profile(pro);
            break;
        }
    }

    // Start the encoding process.
    // Wait until the encoding process stops before exiting the application.
    encoder.SynchronizeOperation = false;
    encoder.PrepareToEncode(true);
    encoder.Start();
    stopped.WaitOne();
}
And I get a COMException (0x80004005) when encoder.PrepareToEncode gets executed.
Some notes:
1) The process is spawned by an ASP.NET web service so it runs as NETWORK SERVICE
2) inFile and outFile are absolute local paths and their extensions are correct, in addition inFile definitely exists (this has been a source of problems in the past)
3) The program works when I run it as myself but doesn't work in the ASP.NET context.
This says to me it's a security/permission issue, so in addition I've granted Full Control on the directory containing the program AND the directories containing the audio files to NETWORK SERVICE. So I really don't have any idea what more I can do on the security front. Any help?
Running a WM Encoder SDK based app in a Windows service is not supported. It uses hidden windows for various reasons, and there isn't a desktop window in a service session. DRM would certainly fail with no user profile. Besides, even when you make your service talk to a WME instance on a user's desktop, Microsoft only supports 4 concurrent requests per machine because of the global lock in WME (I know, not pretty programming, but WME is old). For more scalable solutions, consider the Windows Media Format SDK.
You may want to move your WM Encoder based app to the Expression Encoder SDK, as WM Encoder's support is ending.
I am working on an application which reads event logs (Application) from remote machines. I am making use of the EventLog class in .NET and then iterating over the log entries, but this is very slow. In some cases, some machines have 40,000+ log entries and it takes hours to iterate through them.
What is the best way to accomplish this task? Are there other classes in .NET which are faster, or another technology altogether?
Man, I feel your pain. We had the exact same issue in our app.
Your solution will branch depending on which Windows version you're running on and which version your "target" machine is running.
If you're both on Vista or Windows Server 2008, you're in luck. You should look at System.Diagnostics.Eventing.Reader.EventLogQuery and System.Diagnostics.Eventing.Reader.EventLogReader. These are new in .net 3.5.
Basically, you can build a query in XML and ship it over to run on the remote computer. Maybe you're just searching for events of a specific type, or maybe just new events from a specific point in time. The search runs on the remote machine, and then you just get back the matching events. The new classes are much faster than the old .net 2.0 way, but again, they are only supported on Vista or Windows Server 2008.
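As a rough illustration, a minimal sketch of a remote query (the machine name and the XPath filter are placeholders; there is also an EventLogSession overload that takes credentials):

// Minimal sketch: run an XPath-filtered query against a remote machine's
// Application log. Only the matching events come back over the wire.
using System;
using System.Diagnostics.Eventing.Reader;

var session = new EventLogSession("remoteMachine");
var query = new EventLogQuery("Application", PathType.LogName, "*[System/Level=2]")
{
    Session = session
};

using (var reader = new EventLogReader(query))
{
    for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
    {
        Console.WriteLine("{0}: {1}", record.TimeCreated, record.Id);
    }
}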
For our app when the target is NOT on Vista/Win2008, we downloaded the raw .evt file from the remote system, and then parsed the file using its binary format. There are several sources of data about the event log format for .evt files (pre-Vista), including link text and an article I recall on codeproject.com that had some c# code.
Vista and Windows Server 2008 machines use a new .evtx format, so you can't use the same binary parsing approach across all versions. But the new EventLogQuery and EventLogReader classes are so fast that you won't have to; it's now perfectly speedy to just use the built-in classes.
Event Log Reader is horribly slow... too slow. WTF Microsoft?
Use LogParser 2.2 - Search for C# and LogParser on the Internet (or you can use the log parser commands from the command line). I don't want to duplicate the work already contributed by others.
I pull the log from the remote system by having it exported as an EVTX file and then copying the file from the remote system. This process is really quick - even over a network that spans the planet (I had issues with having the log exported to a network resource). Once you have it locally, you can do your searches and processing.
There are multiple reasons for having the EVTX - I won't get into the reasons why we do this.
The following is a working example of the code to save a copy of the log as an EVTX:
(Notes: "device" is the network host name or IP. "LogName" is the name of the log desired: "System", "Security", or "Application". outputPathOnRemoteSystem is the path on the remote computer, such as "c:\temp\%hostname%.%LogName%.%YYYYMMDD_HH.MM%.evtx".)
static public bool DumpLog(string device, string LogName, string outputPathOnRemoteSystem, out string errMessage)
{
    bool wasExported = false;
    string errorMessage = "";
    try
    {
        System.Diagnostics.Eventing.Reader.EventLogSession els = new System.Diagnostics.Eventing.Reader.EventLogSession(device);
        els.ExportLogAndMessages(LogName, PathType.LogName, "*", outputPathOnRemoteSystem);
        wasExported = true;
    }
    catch (UnauthorizedAccessException e)
    {
        errorMessage = "Unauthorized - Access Denied: " + e.Message;
    }
    catch (EventLogNotFoundException e)
    {
        errorMessage = "Event Log Not Found: " + e.Message;
    }
    catch (EventLogException e)
    {
        errorMessage = "Export Failed: " + e.Message + ", Log: " + LogName + ", Device: " + device;
    }
    errMessage = errorMessage;
    return wasExported;
}
A good Explanation/Example can be found on MSDN.
EventLogSession session = new EventLogSession(Environment.MachineName);
// [System/Level=2] selects only the error events.
// "Log" is the log you want to get data from.
EventLogQuery query = new EventLogQuery("Log", PathType.LogName, "*[System/Level=2]");
EventLogReader reader = new EventLogReader(query);
for (EventRecord eventInstance = reader.ReadEvent();
     null != eventInstance;
     eventInstance = reader.ReadEvent())
{
    // Output or save your event data here.
}
Where the old code took 5-20 minutes, this one does it in less than 10 seconds.
Maybe WMI can help you:
WMI with C#
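For example, a minimal sketch of reading a remote Application log through WMI with System.Management; the machine name and the WQL filter are placeholders, and you may need to pass ConnectionOptions with credentials:

// Minimal sketch: query Win32_NTLogEvent on a remote machine via WMI.
// "remoteMachine" and the WQL filter are placeholders.
using System;
using System.Management;

var scope = new ManagementScope(@"\\remoteMachine\root\cimv2");
scope.Connect();

var query = new ObjectQuery(
    "SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application' AND Type = 'Error'");

using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject logEvent in searcher.Get())
    {
        Console.WriteLine("{0}: {1}", logEvent["TimeGenerated"], logEvent["Message"]);
    }
}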
Have you tried using the remoting features in powershell 2.0? They allow you to execute cmdlets (like ones to read event logs) on remote machines and return the results (as objects, of course) to the calling session.
You could place a program on those machines that saves the log to a file and sends it to your web application. I think that would be a lot faster, since you can do the looping locally, but I'm not sure how to do it, so I can't give you any code :(
I recently did such a thing via a WCF callback interface. My clients already interacted with the server through WCF, so adding a WCF callback was easy in my project; full code with examples is available here.
Just had the same issue and want to share my solution. It makes a search through the Application, System, and Security event logs about 100 times faster (using EventLogQuery) than the 260 seconds it took using EventLog.
And it does this in a way that makes it possible to check whether the event message contains a pattern, or run any other check, without needing FormatDescription().
My trick is to use the same mechanism as PowerShell's Get-WinEvent and then pass each event through the result check.
Here is my code to find all events within the last 4 days where the event message contains a filter pattern.
// Requires: using System.Linq; and using System.Diagnostics.Eventing.Reader;
string[] eventLogSources = { "Application", "System", "Security" };
var messagePattern = "*Your Message Search Pattern*";
var timeStamp = DateTime.Now.AddDays(-4);
var matchingEvents = new List<EventRecord>();

foreach (var eventLogSource in eventLogSources)
{
    var i = 0;
    var query = string.Format("*[System[TimeCreated[@SystemTime >= '{0}']]]",
                              timeStamp.ToUniversalTime().ToString("o"));
    var elq = new EventLogQuery(eventLogSource, PathType.LogName, query);
    var elr = new EventLogReader(elq);
    EventRecord entryEventRecord;
    while ((entryEventRecord = elr.ReadEvent()) != null)
    {
        if ((entryEventRecord.Properties)
            .FirstOrDefault(x => (x.Value.ToString()).Contains(messagePattern)) != null)
        {
            matchingEvents.Add(entryEventRecord);
            i++;
        }
    }
}
Maybe the remote computers could do a little bit of the computing, so that your server would only deal with relevant information. It would be a kind of cluster, using the remote computers to do some light filtering while the server does the analysis part.