Debug.Log() is too slow to output a lot of variables quickly - c#

I'm used to outputting variables to the console in Java; it's a quick and dirty way to debug that often shows me what's wrong with my code. Unfortunately, Debug.Log in Unity somehow takes a lot of resources. I'm currently generating a simple map in about 80 ms, but since there are some problems I'd like to output some variables, and doing so makes my map generate in a minute or two. Doing something like this in the Update method makes Unity unresponsive and can crash it without saving scene edits.

Debug.Log() is incredibly slow in Unity, and can really throw a wrench in performance if you're printing data at high rates (such as in a loop). A big part of the problem is all the extra processing that goes into generating those messages - aside from converting the given value to a string (which has negligible impact on performance), it also has to generate a stack trace so it can link back to the line of code where the logging occurred (see this Unity Answer), which is comparatively slow.
In games, it's common practice to create an in-game debug output (think of the console in Bethesda games, where you can see game events and enter developer commands). Writing the text you need into that output display skips the overhead of Debug.Log(), saving a lot of processing power, and also lets you view debug output with less hassle on mobile devices.
The simplest solution here (short of just not logging so frequently) is to make a Canvas object, add a Text object to it for your debug output, then add a script with a few static methods to append to the Text content so you can reference them (and write to your display) from anywhere in your code.
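A minimal version of that script might look like this (a sketch, not a drop-in solution; the class and field names are illustrative, and the Text reference is assigned in the Inspector):

using UnityEngine;
using UnityEngine.UI;

public class ScreenLogger : MonoBehaviour
{
    public Text output;                 // the Text object on your Canvas
    static ScreenLogger instance;       // lets the static methods reach the Text

    void Awake()
    {
        instance = this;
    }

    // Call ScreenLogger.Log("x = " + x) from anywhere instead of Debug.Log().
    public static void Log(string message)
    {
        if (instance != null && instance.output != null)
            instance.output.text += message + "\n";
    }

    public static void Clear()
    {
        if (instance != null && instance.output != null)
            instance.output.text = "";
    }
}

Appending to a Text is far cheaper than Debug.Log's stack-trace capture, though a UI Text has rendering limits for very long strings, so clear it periodically or keep only the last few lines.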
Alternatively, you can write the output to a file at regular intervals if the text will be too lengthy/generated too quickly to read in real-time, then review it after the code runs.
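A buffered file logger along those lines might look like this (again just a sketch; the file path and flush policy are arbitrary):

using System.Collections.Generic;
using System.IO;

public static class FileLogger
{
    static readonly List<string> buffer = new List<string>();

    public static void Log(string message)
    {
        buffer.Add(message);            // cheap: no stack trace, no console I/O
    }

    // Call at regular intervals, or once after the expensive work finishes.
    public static void Flush(string path = "debug_log.txt")
    {
        File.AppendAllLines(path, buffer);
        buffer.Clear();
    }
}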

Related

Media Foundation: ReadSample - Access Violation Exception

Context: I'm looking at the effects of down sampling, then up sampling video files. I'm using Media Foundation .NET to expose MF in C#. The program currently goes through the following process:
Take a high res video and read in each frame (SourceReader & ReadSample)
Down sample using custom code that manipulates at the byte level
Write the down sampled data to a new, lower res video file (using SinkWriter)
Repeat for a range of resolutions supported by Media Foundation
Read down sampled videos back in and up sample to the next higher resolution in the list below, again using custom code that manipulates each byte
Write the new data to a higher res file (again using SinkWriter)
Resolutions I'm using are:
2560,1440
2346,1320
2134,1200
1920,1080
1706,960
1494,840
1280,720
1068,600
854,480
640,360
428,240
214,120
Current situation: This works almost perfectly. I run through the down sample process and have 11 down sampled video files (one at each resolution in the list above), plus the original 1440p video. I then read in each of those 11 videos and up sample. It works for 10 of them.
The problem: when I try to take the (1280,720) video to up sample to (1494,840), I get:
System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
... when I try to read in the first frame. I can't figure out why. The SourceReader configures fine (at least, no error is returned). I do things like Marshal.Copy to get the sampled frame data into managed memory, which I initially assumed was the problem. The code doesn't get that far, though; it errors as soon as I try to read the first frame sample. ReadSample is in a try...catch block, but the exception remains unhandled, so no other error information is returned.
I don't want to just start pasting in unhelpful code, so please let me know what would be useful to see and I'll add it to the question. Most of the code has been taken from the MS tutorials for SourceReader and SinkWriter. It's also worth keeping in mind that this works in most situations, so the code isn't 'broken' as such.
I've tried compiling in Release and Debug, x86 and x64. Also tried Suppressing JIT Optimisation in Visual Studio options.
Any ideas on where to look next?
Turns out this is a problem with the Media Foundation .NET interface, not the underlying MF framework. I built a small test program in C++ that implemented key parts of the code and it went through fine.
Not sure why Media Foundation .NET was causing problems, but the solution was just to set the attribute:
MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING
rather than
MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING
With advanced processing on, it behaves properly.
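For reference, setting that attribute with the Media Foundation .NET wrapper looks roughly like this (a sketch based on that library's MFExtern/MFAttributesClsid names, so verify against your version; inputFile is your source path):

IMFAttributes attributes;
MFExtern.MFCreateAttributes(out attributes, 1);

// Enable the advanced video processor instead of the basic one.
attributes.SetUINT32(MFAttributesClsid.MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, 1);

IMFSourceReader reader;
MFExtern.MFCreateSourceReaderFromURL(inputFile, attributes, out reader);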

DirectShow - Stop and close a file (but reuse graph and some filters)

I am building a multistream video player and am currently having issues trying to close a file. I may have 1 to 4 video files playing at any one time. When I am playing 4 files and the next sequence has only one, I can't seem to repaint the video panel correctly after removing the source file filter.
I should say that I am building and managing the graph manually (to get some extra speed), including connecting all filters/renderers etc. I have looked into GMFBridge, but ultimately I ran into issues keeping the render graph and file graph in sync all the time, such as fast playback (catching up due to timecoding) and having to run/pause/stop/step the media control on both render and file graphs simultaneously (which sometimes failed playback). From memory, the render graph needs to be configured correctly and my scenario didn't fit exactly with the sample provided (I needed to play back seamlessly but still required the individual timecoding for each file, not merged into one large file).
I reuse the IFilterGraph2/VMR/DirectSound objects throughout the lifetime of the application. The only things that change are the source filter and the necessary decoders/demuxes.
So the process is:
Build graph
Add Renderer
Attempt to play a file - according to file type, add the source filter and demux/decoders etc (remove any obsolete filters)
Connect the filters together (manually connect the pins)
Seek/play etc
Once done, unload the current source file by calling Graph.RemoveFilter(), but leave the renderers in the graph and disconnect all pins.
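That unload step, sketched with DirectShowLib (the variable names are illustrative and error handling is omitted):

// Stop the graph before modifying it.
mediaControl.Stop();

// Pull out the per-file filters; the renderer stays in the graph.
graph.RemoveFilter(sourceFilter);
graph.RemoveFilter(demuxFilter);

// Disconnect the renderer's input pin so the next file can reconnect it.
IPin rendererInput = DsFindPin.ByDirection(rendererFilter, PinDirection.Input, 0);
rendererInput.Disconnect();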
I have experienced the following error:
COM+ exception when closing the file (and calling VMR.RepaintVideo())
EDIT: The error is this:
COM+ is required for this operation, but is not installed (Exception from HRESULT: 0x8004020C)
I do call VMR.SetVideoClippingWindow() once when adding the renderer to the graph.
Is there any way to unload the file without disposing the filter graph, and repaint/clear the video window? For that matter, is there any way to repaint the video when there is no source file filter in the graph?
I don't think you gain any significant speed by stopping the graph, or even by disconnecting pins.
The error is not really COM+; the code ranges overlap and this error has a different meaning (what is the code exactly?).
The only way to eliminate all artifacts, swap the files smoothly, and make it quick is to split the pipeline into parts and keep the video renderer in a filter graph you never stop or disconnect. This takes you back to bridging, or alternatively to the similar technique of synchronizing streams between an upstream file graph and a downstream presentation graph.
UPD: The error is 0x8004020C VFW_E_BUFFER_NOTSET, "No buffer space has been set." Use ShowHresult to decode codes; in particular, the tool gives priority to DirectShow codes when it hits overlapping code ranges.

Tapi3Lib Adding a new line at runtime

I am having some trouble with the interop.tapi3lib.dll (which can be downloaded here: dllLink).
For a reporting program I'm writing, I want to monitor all of the devices made available by TAPI for their calls. This works nicely when I fire up the program: although I suspect the DLL was written with the purpose of modifying calls on a single extension, with very little code I can see all of the activity perfectly.
The problem arises when a user logs out of (or in to) a phone (I'm using this with a Cisco CallManager). At that point I am able to capture the tapi_object, which in turn can be used to determine which line was removed and added (old number and new number), but I can't register the new address for receiving events.
The exception when I try:
Value does not fall within the expected range.
I suspect this is because the TAPIClass was created before this address was available.
At the moment I have tested creating a single TAPIClass for each individual line plus one TAPIClass for monitoring the TAPI object event, but this eats 10 times the memory for our company's configuration (20 phones), so I don't even want to test it at the target site (300+ phones). The other option I can think of is to dispose the 'old' TAPIClass and create a new one afterwards, but I'm a bit concerned about losing events in between, getting double events in between, and ping-ponging when multiple users log in/out (creating the class takes a couple of seconds in my program).
So, what I would really like is the option to call
tapi.RegisterCallNotifications(ad, true, true, TAPI3Lib.TapiConstants.TAPIMEDIATYPE_AUDIO, 2);
for newly available lines.
A bit of background for answers :)
I am fairly new to C#, completely new to COM interop, and I know the principles of C++ but have never written anything in it.
Any help would be greatly appreciated (as would any comments about interop and such).
Hmm, turns out I was wrong. Adding the line for notification is possible and does not throw the exception. I think I didn't remove the old line before adding the new one in my old sample.
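For anyone hitting the same thing, wiring this up inside the ITTAPIEventNotification handler looks roughly like this (a sketch: `tapi` is the existing TAPIClass instance, and the enum/interface names are per TAPI3Lib, so verify against your interop version):

public void Event(TAPI3Lib.TAPI_EVENT tapiEvent, object pEvent)
{
    if (tapiEvent == TAPI3Lib.TAPI_EVENT.TE_TAPIOBJECT)
    {
        var objectEvent = (TAPI3Lib.ITTAPIObjectEvent)pEvent;
        if (objectEvent.Event == TAPI3Lib.TAPIOBJECT_EVENT.TOE_ADDRESSCREATE)
        {
            // Register the newly created line the same way as at startup.
            TAPI3Lib.ITAddress newAddress = objectEvent.Address;
            tapi.RegisterCallNotifications(newAddress, true, true,
                TAPI3Lib.TapiConstants.TAPIMEDIATYPE_AUDIO, 2);
        }
    }
}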

Text files to test the functionality of a search engine

To practice for an upcoming programming contest, I'm making a very basic search engine in C# that takes a query from the user (e.g. "Markov Decision Process") and searches through a couple of files to find the most relevant one to the query.
The application seems to be working (I used a term-document matrix algorithm).
But now I'd like to test the functionality of the search engine to see if it really is working properly. I tried taking a couple of Wikipedia articles, saving them as .txt files, and testing on those, but I just can't tell if it's working fast enough (even with some timers).
My question is: is there a website that provides a set of files to test a search engine on (along with the logically expected result)?
I'm testing with common sense so far, but it would be great to be sure of my results.
Also, how can I get a collection of .txt files (maybe 10 000+ files) about various subjects to see if my application runs fast enough?
I tried copying a few Wikipedia articles, but it would take way too much time to do. I also thought about making a script of some sort to do it for me, but I really don't know how to do that.
So, where can I find a lot of files with separated subjects?
Otherwise, how can I benchmark my application?
Note: I guess a simple big .txt file where each line represents a "file" about a subject would do the job too.
One source of text files would be Project Gutenberg. They supply CD/DVD images if you want to download thousands of files at once. (The page doesn't state it, but I would imagine they are in txt format inside the CD/DVD iso.)
You can get Wikipedia pages by using a recursive function and loading the HTML from every page linked to by one starting page.
If you have some experience with C#, this should help you:
http://www.csharp-station.com/HowTo/HttpWebFetch.aspx
Then loop through the text, collect all instances of the text "<a href=\"", and recursively call the method on each link. You should use a counter to limit the number of recursions.
Also, to prevent OutOfMemory exceptions, you should stop every so many iterations, write everything accumulated so far to a file, and then flush the old data from the string; a sketch follows below.
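A sketch of that approach (the start URL, page limit, and flush interval are placeholders):

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

class WikiGrabber
{
    const int MaxPages = 100;      // recursion limit
    const int FlushEvery = 10;     // write to disk periodically to keep memory low
    static int pagesFetched = 0;
    static readonly List<string> buffer = new List<string>();

    static void Main()
    {
        Fetch("https://en.wikipedia.org/wiki/Markov_decision_process");
        File.AppendAllLines("corpus.txt", buffer);   // flush whatever is left
    }

    static void Fetch(string url)
    {
        if (pagesFetched >= MaxPages) return;
        pagesFetched++;

        string html;
        using (var client = new WebClient())
            html = client.DownloadString(url);

        buffer.Add(html);
        if (pagesFetched % FlushEvery == 0)
        {
            File.AppendAllLines("corpus.txt", buffer);
            buffer.Clear();                          // drop old data from memory
        }

        // Collect every "<a href=\"..." link and recurse into same-wiki pages.
        int index = 0;
        while ((index = html.IndexOf("<a href=\"", index)) != -1)
        {
            index += "<a href=\"".Length;
            int end = html.IndexOf('"', index);
            if (end < 0) break;
            string link = html.Substring(index, end - index);
            if (link.StartsWith("/wiki/") && !link.Contains(":"))
                Fetch("https://en.wikipedia.org" + link);
        }
    }
}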
You can use the datasets from GroupLens Research's site.
Some samples: movies, books

.NET program ends suddenly

Currently I'm programming an application to record data. The data is stored in clusters in a file.
This data can be analyzed by the user with the program displaying it. When analyzing a large amount of data, the program ends suddenly: there is no exception or other error message, and no process left in Task Manager; the program is simply gone.
Analyzing the program with perfmon, I found lots of I/O (460 events/s and 15 MB/s) at that moment, as expected. Is there any limit to reading data from different places in a file? (I'm seeking to positions and reading complete clusters, roughly as in the sketch below.)
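The read pattern is essentially this (simplified; the cluster size and offsets are illustrative):

using System.IO;

byte[] ReadCluster(FileStream stream, long offset, int clusterSize)
{
    stream.Seek(offset, SeekOrigin.Begin);   // jump to an arbitrary cluster
    byte[] cluster = new byte[clusterSize];
    stream.Read(cluster, 0, clusterSize);
    return cluster;
}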
Make sure you're wrapping your code with a try..catch, then set a breakpoint in the catch. (@Paolo makes a good point: be sure the try..catch is in the thread that is doing the work.)
Also, you could try setting Visual Studio to break on all exceptions: "Debug" / "Exceptions" / select the relevant "Thrown" check boxes.
Also, try checking the Event Viewer for hints.
Finally, you can also call Debug.WriteLine or Trace.WriteLine in certain places (especially if running on a system without Visual Studio) and monitor the output with Sysinternals DebugView.
Note: Be sure to make the code production quality (i.e., add logging, program defensively, etc.) after/while finding the source of the issue.
Use try..catch.
Subscribe to AppDomain.CurrentDomain.UnhandledException.
Use NLog.
Watch the process' working set.
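A minimal sketch of the first two points (the log file name and RunAnalysis entry point are placeholders):

using System;
using System.IO;

static class Program
{
    static void Main()
    {
        // Catches anything that escapes every try..catch, just before the process dies.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            File.AppendAllText("crash.log",
                DateTime.Now + ": " + e.ExceptionObject + Environment.NewLine);

        try
        {
            RunAnalysis();   // placeholder for the data-analysis work
        }
        catch (Exception ex)
        {
            File.AppendAllText("crash.log", DateTime.Now + ": " + ex + Environment.NewLine);
            throw;
        }
    }

    static void RunAnalysis() { /* ... */ }
}

Note that some failures (a stack overflow, or the process being killed by the OS) never reach these handlers, which is one more reason to also check the Event Viewer.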
