Mono .NET/NUnit projects, mock of interface with auto-property prevents debugging - c#

Using Mono 4.6.1 and Xamarin Studio [Community] running on a MacBook Pro with OS X 10.11.6.
I have a solution with two projects: one is the .NET project, the other is an NUnit test project for the first. When I mock an interface that uses auto-properties (using Moq or NSubstitute), it causes Mono to crash (SIGSTOP) during debugging.
using System;
using NSubstitute;
using NUnit.Framework;

public interface IExample
{
    string Name { get; }
}

[TestFixture]
public class Test
{
    [Test]
    public void TestCase()
    {
        var example = Substitute.For<IExample>();
        example.Name.Returns("Hat");
        Console.WriteLine(example.Name);
    }
}
If I put a breakpoint on the first line, which creates the example substitute, the debugger pauses as expected. If I step over that line, the system crashes after about 1-2 seconds (details below). NB: simply running the test passes, because it doesn't take long enough for the background failure to trigger.
Application Output of failure:
Loaded assembly: /Applications/Xamarin Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.UnitTesting/NUnit2/NUnitRunner.exe
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System/4.0.0.0__b77a5c561934e089/System.dll
Loaded assembly: /Applications/Xamarin Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.UnitTesting/NUnit2/nunit.core.interfaces.dll
Loaded assembly: /Applications/Xamarin Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.UnitTesting/NUnit2/nunit.core.dll
Loaded assembly: /Applications/Xamarin Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.UnitTesting/NUnit2/nunit.framework.dll
Loaded assembly: /Applications/Xamarin Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.UnitTesting/NUnit2/nunit.util.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.Runtime.Remoting/4.0.0.0__b77a5c561934e089/System.Runtime.Remoting.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.Configuration/4.0.0.0__b03f5f7f11d50a3a/System.Configuration.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.Xml/4.0.0.0__b77a5c561934e089/System.Xml.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/Mono.Security/4.0.0.0__0738eb9f132ed756/Mono.Security.dll
Thread started: #2
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.Core/4.0.0.0__b77a5c561934e089/System.Core.dll
Thread started: #3
Thread started: <Thread Pool> #4
Thread started: <Thread Pool> #5
Thread started: <Thread Pool> #6
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.Drawing/4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/nunit.util/2.4.8.0__96d09a1eb7f44a77/nunit.util.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/nunit.core.interfaces/2.4.8.0__96d09a1eb7f44a77/nunit.core.interfaces.dll
Thread started: #7
Loaded assembly: /Users/jeremy.connor/dev/MonoAPBug/MonoAPBug.Tests/bin/Debug/MonoAPBug.Tests.dll
Thread started: EventPumpThread #8
Thread started: TestRunnerThread #9
Resolved pending breakpoint at 'Test.cs:13,1' to void MonoAPBug.Tests.Test.TestCase () [0x00001].
Loaded assembly: /Users/jeremy.connor/dev/MonoAPBug/MonoAPBug.Tests/bin/Debug/MonoAPBug.exe
Loaded assembly: /Users/jeremy.connor/dev/MonoAPBug/MonoAPBug.Tests/bin/Debug/NSubstitute.dll
Loaded assembly: /Library/Frameworks/Mono.framework/Versions/4.6.1/lib/mono/gac/System.ServiceModel/4.0.0.0__b77a5c561934e089/System.ServiceModel.dll
Loaded assembly: DynamicProxyGenAssembly2
Loaded assembly: DynamicProxyGenAssembly2
mono(10800,0xb0319000) malloc: *** error for object 0x77c2a0: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Stacktrace:
Native stacktrace:
0 mono 0x00171e46 mono_handle_native_sigsegv + 342
1 mono 0x001c5091 sigabrt_signal_handler + 145
2 libsystem_platform.dylib 0x91d4579b _sigtramp + 43
3 ??? 0xffffffff 0x0 + 4294967295
4 libsystem_c.dylib 0x98fdcc38 abort + 156
5 libsystem_malloc.dylib 0x96e51292 free + 433
6 mono 0x0032b266 mono_error_cleanup + 102
7 mono 0x001a7bda type_commands_internal + 2970
8 mono 0x0019b97d debugger_thread + 5261
9 mono 0x003340ca inner_start_thread + 474
10 libsystem_pthread.dylib 0x96e06780 _pthread_body + 138
11 libsystem_pthread.dylib 0x96e066f6 _pthread_body + 0
12 libsystem_pthread.dylib 0x96e03f7a thread_start + 34
Debug info from gdb:
(lldb) command source -s 0 '/tmp/mono-gdb-commands.TQtGVg'
Executing commands in '/tmp/mono-gdb-commands.TQtGVg'.
(lldb) process attach --pid 10800
Process 10800 stopped
* thread #1: tid = 0x211a96, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'tid_50b', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
frame #0: 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10
libsystem_kernel.dylib`__psynch_cvwait:
-> 0x9386f3ea <+10>: jae 0x9386f3fa ; <+26>
0x9386f3ec <+12>: calll 0x9386f3f1 ; <+17>
0x9386f3f1 <+17>: popl %edx
0x9386f3f2 <+18>: movl 0xf8cdc2f(%edx), %edx
Executable module set to "/Library/Frameworks/Mono.framework/Versions/4.6.1/bin/mono".
Architecture set to: i386-apple-macosx.
(lldb) thread list
Process 10800 stopped
* thread #1: tid = 0x211a96, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'tid_50b', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
thread #2: tid = 0x211a98, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'SGen worker'
thread #3: tid = 0x211a9a, 0x938684d6 libsystem_kernel.dylib`semaphore_wait_trap + 10, name = 'Finalizer'
thread #4: tid = 0x211a9b, 0x9386fd5e libsystem_kernel.dylib`__workq_kernreturn + 10
thread #5: tid = 0x211a9c, 0x938707fa libsystem_kernel.dylib`kevent_qos + 10, queue = 'com.apple.libdispatch-manager'
thread #6: tid = 0x211a9d, 0x9386fcee libsystem_kernel.dylib`__wait4 + 10, name = 'Debugger agent'
thread #7: tid = 0x211aa1, 0x9386fd5e libsystem_kernel.dylib`__workq_kernreturn + 10
thread #8: tid = 0x211aa5, 0x9386fd5e libsystem_kernel.dylib`__workq_kernreturn + 10
thread #9: tid = 0x211aa8, 0x9386e852 libsystem_kernel.dylib`__accept + 10, name = 'tid_2507'
thread #10: tid = 0x211aa9, 0x9386f646 libsystem_kernel.dylib`__recvfrom + 10, name = 'tid_2b0b'
thread #11: tid = 0x211aaa, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'tid_330b'
thread #12: tid = 0x211aab, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'Threadpool worker'
thread #13: tid = 0x211aac, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'Threadpool worker'
thread #14: tid = 0x211aad, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'Timer-Scheduler'
thread #15: tid = 0x211aaf, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'EventPumpThread'
thread #16: tid = 0x211ab0, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'TestRunnerThread'
(lldb) thread backtrace all
* thread #1: tid = 0x211a96, 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'tid_50b', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
* frame #0: 0x9386f3ea libsystem_kernel.dylib`__psynch_cvwait + 10
frame #1: 0x96e07538 libsystem_pthread.dylib`_pthread_cond_wait + 757
frame #2: 0x96e09276 libsystem_pthread.dylib`pthread_cond_wait$UNIX2003 + 71
frame #3: 0x0030a955 mono`mono_os_cond_timedwait [inlined] mono_os_cond_wait(cond=0x7a138238, mutex=0x7a13820c) + 12 at mono-os-mutex.h:107 [opt]
frame #4: 0x0030a949 mono`mono_os_cond_timedwait(cond=<unavailable>, mutex=<unavailable>, timeout_ms=<unavailable>) + 185 at mono-os-mutex.h:122 [opt]
frame #5: 0x0030a6cb mono`_wapi_handle_timedwait_signal_handle(handle=0x00001600, timeout=<unavailable>, alertable=<unavailable>, poll=<unavailable>, alerted=0xbff5e390) + 507 at handles.c:1555 [opt]
frame #6: 0x0030a4c8 mono`_wapi_handle_timedwait_signal(timeout=4294967295, poll=0, alerted=0xbff5e23c) + 56 at handles.c:1476 [opt]
frame #7: 0x0031eb5f mono`wapi_WaitForMultipleObjectsEx(numobjects=<unavailable>, handles=<unavailable>, waitall=<unavailable>, timeout=<unavailable>, alertable=<unavailable>) + 1775 at wait.c:620 [opt]
frame #8: 0x002673d4 mono`mono_thread_manage [inlined] wait_for_tids_or_state_change(timeout=4294967295) + 82 at threads.c:3053 [opt]
frame #9: 0x00267382 mono`mono_thread_manage + 322 at threads.c:3258 [opt]
frame #10: 0x00138dc7 mono`mono_main(argc=<unavailable>, argv=<unavailable>) + 8855 at driver.g.c:2187 [opt]
frame #11: 0x000a4141 mono`main [inlined] mono_main_with_options(argc=6, argc=6, argc=6, argv=0xbff5e944, argv=0xbff5e944, argv=0xbff5e944) + 33 at main.c:28 [opt]
frame #12: 0x000a4120 mono`main(argc=6, argv=0xbff5e944) + 1184 at main.c:177 [opt]
frame #13: 0x000a3c75 mono`start + 53
Got a SIGABRT while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application.

Related

System.Timers.Timer sometimes stops and then resumes after a while

Let's have a class that refreshes a key in Redis every 5 seconds (we call it a "dead man's switch").
The problem is that once every several days it just stops emitting the Elapsed event for a brief period of time, from a few seconds to 1-2 minutes.
using System;
using System.Timers;
using Microsoft.Extensions.Logging;
using StackExchange.Redis;

namespace My
{
    public class Test : IDisposable
    {
        private readonly ILogger<Test> _logger;
        private readonly IDatabase _redis;
        private readonly System.Timers.Timer _timer;

        public Test(ILogger<Test> logger, IDatabase redis)
        {
            _logger = logger;
            _redis = redis;
            _timer = new Timer { Interval = 5000 };
            _timer.Elapsed += Beat;
            _timer.Start();
        }

        private void Beat(object sender, ElapsedEventArgs e)
        {
            _logger.LogInformation("Pushing DMS.");
            _redis.StringSet("1234", "OK", TimeSpan.FromSeconds(10), When.Always, CommandFlags.FireAndForget);
            _logger.LogInformation("DMS Pushed.");
        }

        public void Dispose()
        {
            Console.WriteLine("Disposing.");
            _timer?.Dispose();
        }
    }
}
Log:
...
[08:08:40 INF] Pushing DMS.
[08:08:40 INF] DMS pushed.
[08:08:45 INF] Pushing DMS.
[08:08:45 INF] DMS pushed.
[08:08:50 INF] Pushing DMS.
[08:08:50 INF] DMS pushed.
[08:09:17 INF] Pushing DMS. #<-- Here's a 27s gap
[08:08:17 INF] DMS pushed.
[08:08:23 INF] Pushing DMS.
[08:08:23 INF] DMS pushed.
...
These events do not queue up - it just creates a gap.
This class is part of a bigger project running in a Kubernetes cluster.
I've googled up a theory that it could be caused by thread-pool starvation, so I added debug logging of the thread-pool state, but IMO it doesn't bring up any relevant information:
private void Beat(object sender, ElapsedEventArgs e)
{
    var ptc = Process.GetCurrentProcess().Threads.Count;
    ThreadPool.GetMaxThreads(out var maxWt, out var maxCpt);
    ThreadPool.GetAvailableThreads(out var wt, out var cpt);
    var current = ThreadPool.ThreadCount;
    var pending = ThreadPool.PendingWorkItemCount;
    var threadId = Thread.CurrentThread.ManagedThreadId;
    _logger.LogDebug(
        $"Pushing DMS.\nTP state: Thread#: {threadId} \t#Current: {current}\t #Pending: {pending}\t#WorkerT: {wt}/{maxWt}\t #CompletionT: {cpt}/{maxCpt}\n#ProcessT: {ptc}");
    _redis.StringSet("1234", "OK", TimeSpan.FromSeconds(5), When.Always, CommandFlags.FireAndForget);
    _logger.LogInformation("DMS Pushed.");
}
Resulting log:
[02:31:13 INF] Pushing DMS.
TP state: Thread#: 92 #Current: 8 #Pending: 0 #WorkerT: 32760/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:31:13 INF] DMS Pushed.
[02:31:18 INF] Pushing DMS.
TP state: Thread#: 47 #Current: 11 #Pending: 0 #WorkerT: 32760/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:31:18 INF] DMS Pushed.
[02:31:23 INF] Pushing DMS.
TP state: Thread#: 47 #Current: 11 #Pending: 0 #WorkerT: 32761/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:31:23 INF] DMS Pushed.
[02:31:28 INF] Pushing DMS.
TP state: Thread#: 92 #Current: 8 #Pending: 0 #WorkerT: 32760/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:31:28 INF] DMS Pushed.
# HERE COMES THE GAP
[02:33:01 INF] Pushing DMS.
TP state: Thread#: 103 #Current: 8 #Pending: 0 #WorkerT: 32760/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:33:01 INF] DMS Pushed.
[02:33:06 INF] Pushing DMS.
TP state: Thread#: 55 #Current: 10 #Pending: 0 #WorkerT: 32762/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:33:06 INF] DMS Pushed.
[02:33:11 INF] Pushing DMS.
TP state: Thread#: 103 #Current: 10 #Pending: 0 #WorkerT: 32762/32767 #CompletionT: 1000/1000
#ProcessT: 46
[02:33:11 INF] DMS Pushed.
Does anyone have an idea what can cause this behavior?
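For context on the thread-pool starvation theory mentioned above: System.Timers.Timer raises its Elapsed event on a ThreadPool thread, so a saturated pool can delay the callback even though the timer itself keeps ticking. Below is a minimal, self-contained sketch (not taken from the project above; the blocking work is simulated with Thread.Sleep) that reproduces a similar gap:

using System;
using System.Threading;
using Timer = System.Timers.Timer;

class TimerStarvationDemo
{
    static void Main()
    {
        var timer = new Timer { Interval = 1000 };
        timer.Elapsed += (sender, e) =>
            Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff} Elapsed on thread {Thread.CurrentThread.ManagedThreadId}");
        timer.Start();

        Thread.Sleep(3000); // the first few ticks fire roughly on time

        // Flood the pool with blocking work. Once the soft limit is reached,
        // new worker threads are injected slowly, so the queued Elapsed
        // callbacks are delayed and a gap appears in the output, similar to
        // the gap in the log above.
        for (int i = 0; i < Environment.ProcessorCount * 4; i++)
            ThreadPool.QueueUserWorkItem(_ => Thread.Sleep(10000));

        Thread.Sleep(20000);
    }
}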

Getting stuck when started in the editor with no logs (perhaps because of time string parsing)

I'm making a 3D game in Unity and want to add an energy-increasing feature.
I tried the code below, which makes my game get stuck when I try to test it in the editor:
void Start()
{
    // to calculate time since player left app to be used for energy increment
    currentTime = DateTime.Now.ToString();
    lastTime = PlayerPrefs.GetString("lastTime", currentTime);
    if (Application.platform == RuntimePlatform.Android || Application.platform == RuntimePlatform.IPhonePlayer)
    {
        timeDT = Convert.ToDateTime(currentTime, null);
        CurrentTimeDT = Convert.ToDateTime(lastTime, null);
        timeSpan = timeDT.Subtract(CurrentTimeDT);
        timeDifference = int.Parse(timeSpan.TotalSeconds.ToString());
    }
    else if (Application.platform == RuntimePlatform.WindowsEditor)
    {
        lastTime = lastTime.Substring(10);
        currentTime = currentTime.Substring(10);
        timeDT = DateTime.ParseExact(currentTime, "h:mm:ss tt", null);
        CurrentTimeDT = DateTime.ParseExact(lastTime, "h:mm:ss tt", null);
        timeSpan = timeDT.Subtract(CurrentTimeDT);
        timeDifference = int.Parse(timeSpan.TotalSeconds.ToString());
    }
    energy = PlayerPrefs.GetInt("energy", 5); // get last saved energy
    energy += Mathf.Abs(timeDifference / 300); // add one energy every 5 minutes since the player left app
    if (energy > 5) // the maximum energy amount
    {
        energy = 5;
    }
}
I expected it to increase energy by 1 for every 5 minutes since the player left the game, but the Unity editor got stuck with no output at all. When I built the game for Android I got some logging from logcat:
08-21 04:15:52.153 1747-1760/? I/ActivityManager: START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity bnds=[8,618][149,848]} from uid 1000 on display 0
08-21 04:15:52.153 1747-1760/? V/WindowManager: addAppToken: AppWindowToken{2efb58cd token=Token{24ade64 ActivityRecord{c6b2ff7 u0 com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity t39}}} to stack=1 task=39 at 0
08-21 04:15:52.157 2042-2042/? W/ContextImpl: Calling a method in the system process without a qualified user: android.app.ContextImpl.sendBroadcast:1341 android.content.ContextWrapper.sendBroadcast:382 com.vphone.launcher.Stats.recordLaunch:129 com.vphone.launcher.Launcher.c:3766 com.vphone.launcher.Launcher.onClickAppShortcut:3718
08-21 04:15:52.161 1747-1770/? V/WindowManager: Adding window Window{28d195fc u0 Starting com.Serv4Me.SlidingBall} at 3 of 6 (after Window{759b336 u0 com.vphone.launcher/com.vphone.launcher.Launcher})
08-21 04:15:52.201 2042-2252/? W/ContextImpl: Calling a method in the system process without a qualified user: android.app.ContextImpl.bindService:1770 android.content.ContextWrapper.bindService:539 com.google.android.gms.common.stats.zza.zza:-1 com.google.android.gms.common.stats.zza.zza:-1 com.google.android.gms.ads.identifier.AdvertisingIdClient.zzc:-1
--------- beginning of main
08-21 04:15:52.202 2042-2042/? D/yeshen: launcher onpause
08-21 04:15:52.231 1747-2657/? I/ActivityManager: Start proc 10837:com.Serv4Me.SlidingBall/u0a47 for activity com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity
08-21 04:15:52.242 10837-10837/? D/houdini: [10837] Initialize library(version: 5.0.7b_x.48396 RELEASE)... successfully.
08-21 04:15:52.543 10837-10837/? D/houdini: [10837] Added shared library /data/app/com.Serv4Me.SlidingBall-1/lib/arm/libmain.so for ClassLoader by Native Bridge.
08-21 04:15:52.619 1747-2032/? V/WindowManager: Adding window Window{5e10f3d u0 com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity} at 3 of 7 (before Window{28d195fc u0 Starting com.Serv4Me.SlidingBall})
08-21 04:15:52.654 1747-1760/? V/WindowManager: Adding window Window{3ef00883 u0 SurfaceView} at 3 of 8 (before Window{5e10f3d u0 com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity})
08-21 04:15:52.728 1747-1770/? I/ActivityManager: Displayed com.Serv4Me.SlidingBall/com.unity3d.player.UnityPlayerActivity: +526ms
08-21 04:15:52.728 1747-2034/? W/ContextImpl: Calling a method in the system process without a qualified user: android.app.ContextImpl.sendBroadcast:1327 com.android.server.InputMethodManagerService.hideCurrentInputLocked:1992 com.android.server.InputMethodManagerService.windowGainedFocus:2082 com.android.internal.view.IInputMethodManager$Stub.onTransact:221 com.android.server.InputMethodManagerService.onTransact:873
08-21 04:15:52.738 2042-2042/? D/yeshen: launcher onstop
08-21 04:15:52.738 2042-2042/? D/Tinker.DefaultAppLike: onTrimMemory level:20
08-21 04:15:52.741 2042-2277/? W/DebugConnManager: getNetworkInfo() on networkType 1
08-21 04:15:52.823 10837-10855/? I/Unity: SystemInfo CPU = ARMv7 VFPv3 NEON, Cores = 2, Memory = 2022mb
08-21 04:15:52.823 10837-10855/? I/Unity: SystemInfo ARM big.LITTLE configuration: 2 big (mask: 0x3), 0 little (mask: 0x0)
08-21 04:15:52.825 10837-10855/? I/Unity: ApplicationInfo com.Serv4Me.SlidingBall version 1.0 build 52251d08-2db4-4bc0-b627-11ed2dc44951
08-21 04:15:52.825 10837-10855/? I/Unity: Built from '2019.2/staging' branch, Version '2019.2.1f1 (ca4d5af0be6f)', Build type 'Release', Scripting Backend 'mono', CPU 'armeabi-v7a', Stripping 'Disabled'
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: [eglGetConfigAttrib] Bad attribute idx 12513
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: tid 10855: eglGetConfigAttrib(761): error 0x3004 (EGL_BAD_ATTRIBUTE)
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: [eglGetConfigAttrib] Bad attribute idx 12514
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: tid 10855: eglGetConfigAttrib(761): error 0x3004 (EGL_BAD_ATTRIBUTE)
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: [eglGetConfigAttrib] Bad attribute idx 1
08-21 04:15:53.267 10837-10855/? E/EGL_emulation: tid 10855: eglGetConfigAttrib(761): error 0x3004 (EGL_BAD_ATTRIBUTE)
08-21 04:15:53.307 10837-10855/? D/Unity: GL_EXT_debug_marker GL_OES_EGL_image GL_OES_EGL_image_external GL_OES_depth24 GL_OES_depth32 GL_OES_element_index_uint GL_OES_texture_float GL_OES_texture_float_linear GL_OES_compressed_paletted_texture GL_OES_compressed_ETC1_RGB8_texture GL_OES_depth_texture GL_EXT_texture_format_BGRA8888 GL_APPLE_texture_format_BGRA8888 GL_OES_texture_half_float GL_EXT_robustness GL_OES_texture_half_float_linear GL_OES_packed_depth_stencil GL_OES_vertex_half_float GL_OES_texture_npot GL_OES_rgb8_rgba8 GL_EXT_color_buffer_float ANDROID_gles_max_version_3_1 GL_OES_vertex_array_object
08-21 04:15:54.693 7816-7869/? E/PlayCommon: [290] afxf.d(308): Failed to connect to server for server timestamp: java.net.UnknownHostException: Unable to resolve host "play.googleapis.com": No address associated with hostname
08-21 04:15:54.753 2690-2690/? W/ChimeraUtils: Non Chimera context
08-21 04:15:54.845 7816-7869/? I/PlayCommon: [290] afxf.d(124): Connecting to server: https://play.googleapis.com/play/log?format=raw&proto_v2=true
08-21 04:15:54.847 7816-7869/? E/PlayCommon: [290] afxf.d(287): Failed to connect to server: java.net.UnknownHostException: Unable to resolve host "play.googleapis.com": No address associated with hostname
08-21 04:15:58.283 1747-1817/? D/ConnectivityService: releasing NetworkRequest NetworkRequest [ id=58, legacyType=-1, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED] ]
08-21 04:15:58.290 2042-2344/? D/ConnectivityManager.CallbackHandler: CM callback handler got msg 524296
08-21 04:15:58.902 1747-2032/? W/SensorService: sensor 00000000 already enabled in connection 0xa15fb460 (ignoring)
I solved the problem by deleting the PlayerPrefs from both the editor and the Android device, which held the DateTime string that failed to parse. Hope this helps someone else.
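If you do want to keep storing the timestamp as a string, one way to avoid the culture- and platform-dependent parsing (the Substring/ParseExact branch above) is to persist it in the invariant round-trip format and parse it back the same way on every platform. A minimal sketch of that idea (the field names follow the snippet above; the component itself is hypothetical):

using System;
using System.Globalization;
using UnityEngine;

public class EnergyTimer : MonoBehaviour
{
    int energy;

    void Start()
    {
        // Store and parse with the invariant culture and the "o" round-trip
        // format so the same code works in the editor and on device.
        string nowText = DateTime.UtcNow.ToString("o", CultureInfo.InvariantCulture);
        string lastText = PlayerPrefs.GetString("lastTime", nowText);

        DateTime now = DateTime.Parse(nowText, CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
        DateTime last = DateTime.Parse(lastText, CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);

        int secondsAway = (int)(now - last).TotalSeconds;

        energy = PlayerPrefs.GetInt("energy", 5);
        energy = Mathf.Min(5, energy + secondsAway / 300); // +1 energy per 5 minutes, capped at 5
    }

    void OnApplicationQuit()
    {
        PlayerPrefs.SetString("lastTime", DateTime.UtcNow.ToString("o", CultureInfo.InvariantCulture));
        PlayerPrefs.SetInt("energy", energy);
    }
}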

Can MassTransit scale automagically?

I am planning to use RabbitMQ in one of our projects at work.
I am evaluating different kinds of clients and wondering whether MassTransit can answer one of our needs regarding scalability.
I wrote the simple code below:
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MassTransit;

public class MyMessage
{
    public string SomeText { get; set; }
}

public static class Program
{
    public static async Task Main(params string[] args)
    {
        var counter = 0;
        var bus = Bus.Factory.CreateUsingRabbitMq(busFactoryConfigurator =>
        {
            var host = busFactoryConfigurator.Host(new Uri("rabbitmq://localhost"), hostConfigurator =>
            {
                hostConfigurator.Username("guest");
                hostConfigurator.Password("guest");
            });
            busFactoryConfigurator.ReceiveEndpoint(host, "test_queue", endpointConfigurator =>
            {
                endpointConfigurator.Handler<MyMessage>(async context =>
                {
                    var countDoku = counter;
                    counter++;
                    await Console.Out.WriteLineAsync(countDoku.ToString() + ": started " + Thread.CurrentThread.ManagedThreadId);
                    await Task.Delay(500);
                    await Console.Out.WriteLineAsync(countDoku.ToString() + ": done " + Thread.CurrentThread.ManagedThreadId);
                    await Console.Out.WriteLineAsync($"Received: {context.Message.SomeText}");
                });
            });
        });
        await bus.StartAsync();
        Parallel.ForEach(Enumerable.Repeat(42, 5000), async _ => await bus.Publish(new MyMessage { SomeText = "Hi" }));
        Console.ReadKey();
        await bus.StopAsync();
    }
}
It's far from being a perfect benchmark (e.g. Console output, and Parallel.ForEach is just used to throw as many async operations in parallel as possible), but putting that aside I've noticed something a bit embarrassing:
0: started 14
1: started 4
2: started 15
3: started 18
4: started 4
5: started 14
6: started 6
7: started 15
12: started 15
13: started 6
10: started 4
11: started 5
8: started 18
9: started 14
14: started 15
15: started 6
0: done 6
Received: Hi
5: done 14
6: done 5
Received: Hi
3: done 13
Received: Hi
12: done 8
Received: Hi
Received: Hi
7: done 18
Received: Hi
2: done 4
Received: Hi
4: done 15
Received: Hi
1: done 18
Received: Hi
11: done 18
Received: Hi
8: done 5
Received: Hi
9: done 14
Received: Hi
10: done 6
Received: Hi
13: done 4
Received: Hi
14: done 8
Received: Hi
15: done 15
Received: Hi
16: started 15
17: started 15
18: started 8
19: started 4
20: started 8
21: started 15
22: started 4
23: started 5
24: started 18
25: started 5
26: started 6
31: started 14
28: started 18
29: started 5
30: started 4
27: started 13
18: done 14
Received: Hi
17: done 13
Received: Hi
16: done 5
The handling part cannot process more than around 15 items at the same time...
I was wondering whether this is an issue with my benchmark code or a limitation in the MassTransit configuration. Should I use an actor framework to better dispatch the load of items received from the queue, in order to process more items at the same time?
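One thing that might explain the ceiling of roughly 15-16 concurrent handlers is the endpoint's prefetch count rather than the thread pool: the handler can only be running for as many messages as RabbitMQ has delivered to the endpoint. Assuming the RabbitMQ endpoint configurator in this MassTransit version exposes PrefetchCount (treat this as a sketch to verify against your version, not a confirmed fix), the configuration above could be adjusted like this:

busFactoryConfigurator.ReceiveEndpoint(host, "test_queue", endpointConfigurator =>
{
    // Assumption: the RabbitMQ endpoint configurator exposes PrefetchCount;
    // a higher value lets more messages be in flight (and handled) at once.
    endpointConfigurator.PrefetchCount = 64;

    endpointConfigurator.Handler<MyMessage>(async context =>
    {
        await Task.Delay(500);
        await Console.Out.WriteLineAsync($"Received: {context.Message.SomeText}");
    });
});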

C# thread-safe logging issue not using a singleton

I'm creating a logging class in C# and I need it to be thread-safe. I've implemented TextWriter.Synchronized and locks, but I'm getting a very strange issue with the locks where they seem to not work.
I don't want to use a singleton or a static class because I want to be able to have multiple instances of this logging class at any given time and I want to synchronize the threads based on the log's filename. So if I have 30 threads with 3 different instances of the Log class all using the same log file, it will synchronize properly and not have any issues. Below is what I've come up with so far. I've left out some of the code that is irrelevant, like the constructor and close/dispose.
public class Log : IDisposable
{
    public enum LogType
    {
        Information,
        Warning,
        Error
    }

    private FileStream m_File;
    private TextWriter m_Writer;
    private string m_Filename;

    // this is used to hold sync objects per registered log file
    private static SortedList<string, object> s_SyncObjects = new SortedList<string, object>();
    // this is used to lock and modify the above variable
    private static readonly object s_SyncRoot = new object();

    public void WriteLine(Log.LogType MsgType, string Text)
    {
        // this is the problem i think, the lock isn't functioning correctly
        // see below this code for an example log file with issues
        lock (Log.s_SyncObjects[this.m_Filename])
        {
            this.m_Writer.WriteLine(DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss:fffffff") + " " + MsgType.ToString() + ": " + Text);
        }
        return;
    }

    public void Open(string Filename)
    {
        // make filename lowercase to ensure it's always the same
        this.m_Filename = Filename.ToLower();
        this.m_File = new FileStream(Filename, FileMode.Append, FileAccess.Write, FileShare.ReadWrite);
        this.m_Writer = TextWriter.Synchronized(new StreamWriter(this.m_File) { AutoFlush = true });
        // lock the syncroot and modify the collection of sync objects
        // this should make it so that every instance of this class no matter
        // what thread it's running in will have a unique sync object per log file
        lock (Log.s_SyncRoot)
        {
            if (!Log.s_SyncObjects.ContainsKey(this.m_Filename))
                Log.s_SyncObjects.Add(this.m_Filename, new object());
        }
    }
}
To test this I'm creating 3 instances of the logger pointing to the same log file, creating 30 threads and assigning each thread one of the loggers (in order 1,2,3,1,2,3), then I run all 30 threads until I press q.
This works great for writing line by line to a log file and keeping the write timestamps in the correct order, but here is what I get in the log file. It seems that a thread overwrites a portion of the log file, and it seems to happen with different instances of the logger on different threads, never with the same instance of the logger on different threads. The log file below has the time the entry was created, the logger ID (1-based), the thread ID (0-based) and the message "test".
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3469116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3479116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3479116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3479116 Information: LOGID: 1, THREADID: 9, MSG: test
08/27/2012 11:47:34:3479116 Information: LOGID08/27/2012 11:47:34:3479116 Information: LOGID: 3, THREADID: 23, MSG: test
08/27/2012 11:47:34:3479116 08/27/2012 11:47:34:3509118 Information: LOGID: 1, THREADID: 0, MSG: test
08/27/2012 11:47:34:3509118 Information: LOGID: 1, THREADID: 0, MSG: test
08/27/2012 11:47:34:3509118 Information: LOGID: 1, THREADID: 0, MSG: test
08/27/2012 11:47:34:3509118 Information: LOGID: 1, THREADID: 0, MSG: test
08/27/2012 11:47:34:3509118 Information: LOGID: 1, THREADID: 0, MSG: test
Notice that 2 of the lines have been mangled. I'm guessing this is due to the locks not working properly, or my misuse of the locks. I would also prefer not to use queuing or any kind of singleton. This behavior doesn't seem to happen if I change the lock within WriteLine to the m_SyncRoot variable and make it non-static. I have no idea why that works but to me it seems like that is not what I want to do. I also don't want to lock on a static m_SyncRoot alone because then if I have 3 instances of the logger pointing to 3 different log files then each one will block the other for no reason.
I'm so lost on this, am I completely screwing this up?
In case anyone needs it, here is the test class for generating the threads
public class LogTest
{
    private Log m_Log1;
    private Log m_Log2;
    private Log m_Log3;
    private Thread[] m_Threads;
    private const int THREAD_COUNT = 30;
    private bool m_Done;

    public LogTest()
    {
        this.m_Log1 = new Log();
        this.m_Log2 = new Log();
        this.m_Log3 = new Log();
        this.m_Log1.Open("test.txt");
        this.m_Log2.Open("test.txt");
        this.m_Log3.Open("test.txt");
        this.m_Threads = new Thread[THREAD_COUNT];
        this.m_Done = false;
    }

    public void run()
    {
        for (int i = 0; i < THREAD_COUNT; i++)
        {
            Thread th = new Thread(new ParameterizedThreadStart(this.LogThread));
            this.m_Threads[i] = th;
        }
        for (int i = 0; i < THREAD_COUNT; i++)
        {
            int logId = 1;
            Log temp = this.m_Log1;
            if ((i % 3) == 1)
            {
                temp = this.m_Log2;
                logId = 2;
            }
            else if ((i % 3) == 2)
            {
                temp = this.m_Log3;
                logId = 3;
            }
            this.m_Threads[i].Start(new object[] { logId, i, temp });
        }
        ConsoleKeyInfo key = new ConsoleKeyInfo();
        while ((key = Console.ReadKey()).KeyChar != 'q')
            ;
        this.m_Done = true;
    }

    private void LogThread(object state)
    {
        int loggerId = (int)((object[])state)[0];
        int threadId = (int)((object[])state)[1];
        Log l = (Log)((object[])state)[2];
        while (!this.m_Done)
        {
            l.WriteLine(Log.LogType.Information, String.Format("LOGID: {0}, THREADID: {1}, MSG: {2}", loggerId, threadId, "test"));
        }
    }
}
EDIT: edited to change static m_ to s_ as suggested and added the AutoFlush property to the StreamWriter; setting it to true... still does not work.
I figured out the problem!
The thread synchronization works as it should and so does the TextWriter.Synchronized() so the problem isn't really the threads at all. Take this into account:
I create 3 instances of the Log class and point them all to "test.txt"
Log log1 = new Log();
Log log2 = new Log();
Log log3 = new Log();
log1.Open("test.txt"); //new file handle as instance member
log2.Open("test.txt"); //new file handle as instance member
log3.Open("test.txt"); //new file handle as instance member
In each call to Open() I am opening a new file handle to the same file, so I have 3 unique and separate file handles. Each file handle or Stream has its own file pointer which seeks along the stream as I read or write.
So, if we have the following:
log1.WriteLine("this is some text"); //handled on thread 1
log2.WriteLine("testing"); //handled on thread 2
If Thread 1 starts to write to the file and completes, the file contents will be
this is some text
When Thread 2 starts to write, because the file handles and streams are unique, the current location of log1's file pointer is at 16 while log2's is still at 0, so after log2 is done writing the resulting log file will read:
testing some text
So, all I need to do is make sure I open only 1 unique FileStream per log file and do the synchronization like I have been. Works great now!
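To make that concrete, here is a rough sketch of the idea (an illustration only, not the exact code from the project): share one synchronized writer per lower-cased filename in a static dictionary, so every instance pointed at the same file writes through the same stream and therefore the same file pointer.

using System;
using System.Collections.Generic;
using System.IO;

public class SharedLog : IDisposable
{
    // One synchronized writer per (lower-cased) filename, shared by every instance.
    private static readonly Dictionary<string, TextWriter> s_Writers = new Dictionary<string, TextWriter>();
    private static readonly object s_SyncRoot = new object();

    private TextWriter m_Writer;

    public void Open(string filename)
    {
        string key = filename.ToLower();
        lock (s_SyncRoot)
        {
            if (!s_Writers.TryGetValue(key, out m_Writer))
            {
                var stream = new FileStream(filename, FileMode.Append, FileAccess.Write, FileShare.Read);
                m_Writer = TextWriter.Synchronized(new StreamWriter(stream) { AutoFlush = true });
                s_Writers.Add(key, m_Writer);
            }
        }
    }

    public void WriteLine(string text)
    {
        // All instances for this file share one stream, so there is a single
        // file pointer and lines cannot overwrite each other.
        m_Writer.WriteLine(DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss:fffffff") + " " + text);
    }

    public void Dispose()
    {
        // Intentionally empty in this sketch: the shared writers live for the
        // lifetime of the process.
    }
}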
I think your lock is working fine, but according to the documentation, TextWriter.Flush doesn't actually do anything, so it isn't actually flushing the buffer before you release the lock. Here's the [link].
It looks like you can fix the problem by using AutoFlush on the streamwriter in the Open method.
this.m_Writer = TextWriter.Synchronized(new StreamWriter(this.m_File) { AutoFlush = true });

What does MaxDegreeOfParallelism do?

I am using Parallel.ForEach to do some database updates. Without setting MaxDegreeOfParallelism, a dual-core processor machine results in SQL client timeouts, whereas a quad-core processor machine somehow does not time out.
Now I have no control over what kind of processor cores are available where my code runs, but are there some settings I can change with MaxDegreeOfParallelism that will run fewer operations simultaneously and not result in timeouts?
I can increase the timeouts, but that isn't a good solution; if on a weaker CPU I can process fewer operations simultaneously, that will put less load on the CPU.
OK, I have read all the other posts and MSDN too, but will setting MaxDegreeOfParallelism to a lower value make my quad-core machines suffer?
For example, is there any way to do something like: if the CPU has two cores, then use 20; if the CPU has four cores, then 40?
The answer is that it is the upper limit for the entire parallel operation, irrespective of the number of cores.
So even if you don't use the CPU because you are waiting on IO, or a lock, no extra tasks will run in parallel, only the maximum that you specify.
To find this out, I wrote this piece of test code. There is an artificial lock in there to stimulate the TPL to use more threads. The same will happen when your code is waiting for IO or a database.
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        var locker = new Object();
        int count = 0;
        Parallel.For(
            0,
            1000,
            new ParallelOptions { MaxDegreeOfParallelism = 2 },
            (i) =>
            {
                Interlocked.Increment(ref count);
                lock (locker)
                {
                    Console.WriteLine("Number of active threads:" + count);
                    Thread.Sleep(10);
                }
                Interlocked.Decrement(ref count);
            }
        );
    }
}
If I don't specify MaxDegreeOfParallelism, the console logging shows that up to around 8 tasks are running at the same time. Like this:
Number of active threads:6
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:6
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
Number of active threads:7
It starts lower, increases over time and at the end it is trying to run 8 at the same time.
If I limit it to some arbitrary value (say 2), I get
Number of active threads:2
Number of active threads:1
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Number of active threads:2
Oh, and this is on a quadcore machine.
For example, is there anyway to do something like, if CPU has two cores, then use 20, if CPU has four cores then 40?
You can do this to make parallelism dependent on the number of CPU cores:
var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount * 10 };
Parallel.ForEach(sourceCollection, options, sourceItem =>
{
    // do something
});
However, newer CPUs tend to use hyper-threading to simulate extra cores. So if you have a quad-core processor, Environment.ProcessorCount will probably report it as 8 cores. I've found that if you set the parallelism to account for the simulated cores, it actually slows down other threads such as UI threads.
So although the operation will finish a bit faster, an application UI may experience significant lag during this time. Dividing `Environment.ProcessorCount` by 2 seems to achieve the same processing speeds while still keeping the CPU available for UI threads.
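As a concrete illustration of that suggestion (just a sketch; sourceCollection stands in for whatever is being processed):

// Leave roughly half of the logical cores free for the UI and other threads.
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount / 2)
};
Parallel.ForEach(sourceCollection, options, sourceItem =>
{
    // do something
});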
It sounds like the code that you're running in parallel is deadlocking, which means that unless you can find and fix the issue that's causing that, you shouldn't parallelize it at all.
Something else to consider, especially for those finding this many years later, is that depending on your situation it's usually best to collect all the data in a DataTable and then use SqlBulkCopy toward the end of each major task.
For example, I made a process that runs through millions of files, and I ran into the same errors when each file transaction made a DB query to insert the record. I instead moved to storing it all in an in-memory DataTable for each share I iterated through, dumping the DataTable into my SQL Server and clearing it between each separate share. The bulk insert takes a split second and has the benefit of not opening thousands of connections at once.
EDIT:
Here's a quick & dirty working example
The SQLBulkCopy method:
private static void updateDatabase(DataTable targetTable)
{
    try
    {
        DataSet ds = new DataSet("FileFolderAttribute");
        ds.Tables.Add(targetTable);
        writeToLog(targetTable.TableName + " - Rows: " + targetTable.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
        writeToLog(@"Opening SQL connection", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
        Console.WriteLine(@"Opening SQL connection");
        SqlConnection sqlConnection = new SqlConnection(sqlConnectionString);
        sqlConnection.Open();
        SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlConnection, SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.UseInternalTransaction, null);
        bulkCopy.DestinationTableName = "FileFolderAttribute";
        writeToLog(@"Copying data to SQL Server table", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
        Console.WriteLine(@"Copying data to SQL Server table");
        foreach (var table in ds.Tables)
        {
            writeToLog(table.ToString(), logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
            Console.WriteLine(table.ToString());
        }
        bulkCopy.WriteToServer(ds.Tables[0]);
        sqlConnection.Close();
        sqlConnection.Dispose();
        writeToLog(@"Closing SQL connection", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
        writeToLog(@"Clearing local DataTable...", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
        Console.WriteLine(@"Closing SQL connection");
        Console.WriteLine(@"Clearing local DataTable...");
        targetTable.Clear();
        ds.Tables.Remove(targetTable);
        ds.Clear();
        ds.Dispose();
    }
    catch (Exception error)
    {
        errorLogging(error, getCurrentMethod(), logDatabaseFile);
    }
}
...and for dumping it into the datatable:
private static void writeToDataTable(string ServerHostname, string RootDirectory, string RecordType, string Path, string PathDirectory, string PathFileName, string PathFileExtension, decimal SizeBytes, decimal SizeMB, DateTime DateCreated, DateTime DateModified, DateTime DateLastAccessed, string Owner, int PathLength, DateTime RecordWriteDateTime)
{
    try
    {
        if (tableToggle)
        {
            DataRow toInsert = results_1.NewRow();
            toInsert[0] = ServerHostname;
            toInsert[1] = RootDirectory;
            toInsert[2] = RecordType;
            toInsert[3] = Path;
            toInsert[4] = PathDirectory;
            toInsert[5] = PathFileName;
            toInsert[6] = PathFileExtension;
            toInsert[7] = SizeBytes;
            toInsert[8] = SizeMB;
            toInsert[9] = DateCreated;
            toInsert[10] = DateModified;
            toInsert[11] = DateLastAccessed;
            toInsert[12] = Owner;
            toInsert[13] = PathLength;
            toInsert[14] = RecordWriteDateTime;
            results_1.Rows.Add(toInsert);
        }
        else
        {
            DataRow toInsert = results_2.NewRow();
            toInsert[0] = ServerHostname;
            toInsert[1] = RootDirectory;
            toInsert[2] = RecordType;
            toInsert[3] = Path;
            toInsert[4] = PathDirectory;
            toInsert[5] = PathFileName;
            toInsert[6] = PathFileExtension;
            toInsert[7] = SizeBytes;
            toInsert[8] = SizeMB;
            toInsert[9] = DateCreated;
            toInsert[10] = DateModified;
            toInsert[11] = DateLastAccessed;
            toInsert[12] = Owner;
            toInsert[13] = PathLength;
            toInsert[14] = RecordWriteDateTime;
            results_2.Rows.Add(toInsert);
        }
    }
    catch (Exception error)
    {
        errorLogging(error, getCurrentMethod(), logFile);
    }
}
...and here's the context, the looping piece itself:
private static void processTargetDirectory(DirectoryInfo rootDirectory, string targetPathRoot)
{
    DateTime StartTime = DateTime.Now;
    int directoryCount = 0;
    int fileCount = 0;
    try
    {
        manageDataTables();
        Console.WriteLine(rootDirectory.FullName);
        writeToLog(@"Working in Directory: " + rootDirectory.FullName, logFile, getLineNumber(), getCurrentMethod(), true);
        applicationsDirectoryCount++;
        // REPORT DIRECTORY INFO //
        string directoryOwner = "";
        try
        {
            directoryOwner = File.GetAccessControl(rootDirectory.FullName).GetOwner(typeof(System.Security.Principal.NTAccount)).ToString();
        }
        catch (Exception error)
        {
            //writeToLog("\t" + rootDirectory.FullName, logExceptionsFile, getLineNumber(), getCurrentMethod(), true);
            writeToLog("[" + error.Message + "] - " + rootDirectory.FullName, logExceptionsFile, getLineNumber(), getCurrentMethod(), true);
            errorLogging(error, getCurrentMethod(), logFile);
            directoryOwner = "SeparatedUser";
        }
        writeToRawLog(serverHostname + "," + targetPathRoot + "," + "Directory" + "," + rootDirectory.Name + "," + rootDirectory.Extension + "," + 0 + "," + 0 + "," + rootDirectory.CreationTime + "," + rootDirectory.LastWriteTime + "," + rootDirectory.LastAccessTime + "," + directoryOwner + "," + rootDirectory.FullName.Length + "," + DateTime.Now + "," + rootDirectory.FullName + "," + "", logResultsFile, true, logFile);
        //writeToDBLog(serverHostname, targetPathRoot, "Directory", rootDirectory.FullName, "", rootDirectory.Name, rootDirectory.Extension, 0, 0, rootDirectory.CreationTime, rootDirectory.LastWriteTime, rootDirectory.LastAccessTime, directoryOwner, rootDirectory.FullName.Length, DateTime.Now);
        writeToDataTable(serverHostname, targetPathRoot, "Directory", rootDirectory.FullName, "", rootDirectory.Name, rootDirectory.Extension, 0, 0, rootDirectory.CreationTime, rootDirectory.LastWriteTime, rootDirectory.LastAccessTime, directoryOwner, rootDirectory.FullName.Length, DateTime.Now);
        if (rootDirectory.GetDirectories().Length > 0)
        {
            Parallel.ForEach(rootDirectory.GetDirectories(), new ParallelOptions { MaxDegreeOfParallelism = directoryDegreeOfParallelism }, dir =>
            {
                directoryCount++;
                Interlocked.Increment(ref threadCount);
                processTargetDirectory(dir, targetPathRoot);
            });
        }
        // REPORT FILE INFO //
        Parallel.ForEach(rootDirectory.GetFiles(), new ParallelOptions { MaxDegreeOfParallelism = fileDegreeOfParallelism }, file =>
        {
            applicationsFileCount++;
            fileCount++;
            Interlocked.Increment(ref threadCount);
            processTargetFile(file, targetPathRoot);
        });
    }
    catch (Exception error)
    {
        writeToLog(error.Message, logExceptionsFile, getLineNumber(), getCurrentMethod(), true);
        errorLogging(error, getCurrentMethod(), logFile);
    }
    finally
    {
        Interlocked.Decrement(ref threadCount);
    }
    DateTime EndTime = DateTime.Now;
    writeToLog(@"Run time for " + rootDirectory.FullName + @" is: " + (EndTime - StartTime).ToString() + @" | File Count: " + fileCount + @", Directory Count: " + directoryCount, logTimingFile, getLineNumber(), getCurrentMethod(), true);
}
As noted above, this is quick & dirty, but it works very well.
Because of memory-related issues I ran into once I got to around 2,000,000 records, I had to create a second DataTable and alternate between the two, dumping the records to SQL Server on each alternation. So my SQL connections amount to 1 per 100,000 records.
I managed that like this:
private static void manageDataTables()
{
    try
    {
        Console.WriteLine(@"[Checking datatable size] toggleValue: " + tableToggle + " | " + @"r1: " + results_1.Rows.Count + " - " + @"r2: " + results_2.Rows.Count);
        if (tableToggle)
        {
            int rowCount = 0;
            if (results_1.Rows.Count > datatableRecordCountThreshhold)
            {
                tableToggle ^= true;
                writeToLog(@"results_1 row count > 100000 # " + results_1.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                rowCount = results_1.Rows.Count;
                logResultsFile = "FileServerReport_Results_" + DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".txt";
                Thread.Sleep(5000);
                if (results_1.Rows.Count != rowCount)
                {
                    writeToLog(@"results_1 row count increased, # " + results_1.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                    rowCount = results_1.Rows.Count;
                    Thread.Sleep(15000);
                }
                writeToLog(@"results_1 row count stopped increasing, updating database...", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                updateDatabase(results_1);
                results_1.Clear();
                writeToLog(@"results_1 cleared, count: " + results_1.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
            }
        }
        else
        {
            int rowCount = 0;
            if (results_2.Rows.Count > datatableRecordCountThreshhold)
            {
                tableToggle ^= true;
                writeToLog(@"results_2 row count > 100000 # " + results_2.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                rowCount = results_2.Rows.Count;
                logResultsFile = "FileServerReport_Results_" + DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".txt";
                Thread.Sleep(5000);
                if (results_2.Rows.Count != rowCount)
                {
                    writeToLog(@"results_2 row count increased, # " + results_2.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                    rowCount = results_2.Rows.Count;
                    Thread.Sleep(15000);
                }
                writeToLog(@"results_2 row count stopped increasing, updating database...", logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
                updateDatabase(results_2);
                results_2.Clear();
                writeToLog(@"results_2 cleared, count: " + results_2.Rows.Count, logDatabaseFile, getLineNumber(), getCurrentMethod(), true);
            }
        }
    }
    catch (Exception error)
    {
        errorLogging(error, getCurrentMethod(), logDatabaseFile);
    }
}
Where "datatableRecordCountThreshhold = 100000"
The Parallel.ForEach method starts internally a number of Tasks, and each of these tasks repeatedly takes an item from the source sequence and invokes the body delegate for this item. The MaxDegreeOfParallelism can set an upper limit to these internal tasks. But this setting is not the only factor that limits the parallelism. There is also the willingness of the TaskScheduler to execute the tasks that are spawned by the Parallel.ForEach.
The spawning mechanism works by each spawned task replicating itself. In other words, the first thing that each task does is to create another task. Most TaskSchedulers have a limit on how many tasks can execute concurrently, and when this limit is reached they queue the next incoming tasks without executing them immediately. So eventually the self-replicating pattern of Parallel.ForEach will stop spawning more tasks, because the last task spawned will be sitting idle in the TaskScheduler's queue.
Let's talk about the TaskScheduler.Default, which is the default scheduler of the Parallel.ForEach, and schedules the tasks on the ThreadPool. The ThreadPool has a soft and a hard limit. The soft limit is when the demand for work is not satisfied immediately, and the hard limit is when the demand for work is never satisfied until an already running workitem completes. When the ThreadPool reaches the soft limit, which is Environment.ProcessorCount by default, it spawns more threads to satisfy the demand at a frequency of one new thread per second¹. The soft limit can be configured with the ThreadPool.SetMinThreads method. The hard limit can be found with the ThreadPool.GetMaxThreads method, and is 32,767 threads in my machine.
So if I configure the Parallel.ForEach in my 4-core machine with MaxDegreeOfParallelism = 20, and the body delegate keeps the current thread busy for more than one second, the effective degree of parallelism will start with 5, then it will gradually increase during the next 15 seconds until it becomes 20, and it will stay at 20 until the completion of the loop. The reason that it starts with 5 instead of 4 is because the Parallel.ForEach uses also the current thread, along with the ThreadPool.
If I don't configure the MaxDegreeOfParallelism, it will be the same as configuring it with the value -1, which means unlimited parallelism. In this case the ThreadPool availability will be the only limiting factor of the actual degree of parallelism. As long as the Parallel.ForEach runs, the ThreadPool will be saturated, in other words it will be in a situation where the supply is constantly surpassed by the demand. Each time a new thread is spawned by the ThreadPool, this thread will pick the last task scheduled previously by the Parallel.ForEach, which will immediately replicate itself, and the replica will enter the ThreadPool's queue. Provided that the Parallel.ForEach runs for sufficiently long, the ThreadPool will reach its maximum size (32,767 in my machine), and will stay at this level until the completion of the loop. That is assuming the process has not already crashed because of a lack of other resources, like memory.
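To see the soft limit and the slow thread injection in practice, here is a small sketch (assumptions: a plain console app, blocking work simulated with Thread.Sleep; the numbers will vary by machine). Raising the minimum worker-thread count lets an explicitly configured loop reach its full concurrency almost immediately, instead of ramping up at roughly one new thread per second:

using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadPoolSoftLimitDemo
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
        Console.WriteLine($"soft limit: {minWorkers}, hard limit: {maxWorkers}");

        // Raise the soft limit so blocking work does not have to wait for the
        // slow thread injection described above.
        ThreadPool.SetMinThreads(64, minIo);

        int concurrent = 0;
        Parallel.For(0, 200, new ParallelOptions { MaxDegreeOfParallelism = 64 }, i =>
        {
            int now = Interlocked.Increment(ref concurrent);
            Console.WriteLine("concurrently running: " + now);
            Thread.Sleep(100); // simulate blocking work
            Interlocked.Decrement(ref concurrent);
        });
    }
}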
The official documentation for the MaxDegreeOfParallelism property states that "generally, you do not need to modify this setting". Apparently it has been this way since the introduction of the TPL with .NET Framework 4.0 (2010). At this point you may have started questioning the validity of this advice. So do I, so I posted a question on the dotnet/runtime repository, asking if the given advice is still valid or it's outdated. I was surprised to receive the feedback that the advice is as valid as ever. Microsoft's argument is that limiting the MaxDegreeOfParallelism to the value Environment.ProcessorCount may cause performance regression, or even deadlocks in some scenarios. I responded with a couple of examples demonstrating the problematic behavior that might emerge when an unconfigured Parallel.ForEach runs in an async-enabled application, where other things are happening concurrently with the parallel loop. The demos were dismissed as unrepresentative, because I used the Thread.Sleep method for simulating the work inside the loop.
My personal suggestion is: whenever you use any of the Parallel methods, always specify the MaxDegreeOfParallelism explicitly. In case you buy my arguments that saturating the ThreadPool is undesirable and unhealthy, you can configure it with a suitable value like Environment.ProcessorCount. In case you buy Microsoft's arguments, you can configure it with -1. In any case, everyone who sees your code will be hinted that you made a conscious and informed decision.
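In code, that suggestion is nothing more than always passing an explicit value, whichever side of the argument you take (a minimal sketch; the collection and body are placeholders):

// Either bound the parallelism explicitly to the core count...
var bounded = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };

// ...or state explicitly that unlimited parallelism is intended.
var unlimited = new ParallelOptions { MaxDegreeOfParallelism = -1 };

Parallel.ForEach(Enumerable.Range(0, 100), bounded, i =>
{
    // placeholder body
});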
¹ The injection rate of the ThreadPool is not documented. The "one new thread per second" is an experimental observation.
It sets the number of threads to run in parallel...
