I am trying to monitor the number of Remote Procedure Call (RPC) interfaces in my process so I can stop my program before it hits the infamous 256-interface limit.
Right now I am trying to use the following code to get all referenced assemblies, but I cannot figure out which property gives the actual count, or which one tells me that an interface is taking up one of the crucial 256 slots.
foreach (var assemblyName in System.Reflection.Assembly.GetExecutingAssembly().GetReferencedAssemblies())
{
    // load the referenced assembly
    var assembly = System.Reflection.Assembly.Load(assemblyName.ToString());
    // get all types defined in the assembly (ToList needs using System.Linq)
    var types = assembly.DefinedTypes.ToList();
}
The types look right (I think): I can see all the classes and interfaces, but there is no property telling me whether one of them takes a spot against the interface limit.
Can I get the information I am looking for from the list of assemblies somehow, or is there another way to get it?
Here is the RPC call limit documentation from Microsoft.
I am aware that this error can happen when there is not enough RAM available, but as far as I know there is around 10-11 GB free on the computers when the issue occurs, and they all have access to a 32 GB page file.
After more tests
I have finally managed to use the ComVisible attribute to figure out which DLLs are applicable, and it worked. Now that I know which DLLs can be problematic, I am still stuck on where to look to find how many RPC interfaces are currently loaded into my program.
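For reference, this is roughly how I check the referenced assemblies now (a sketch of my ComVisible check, not the exact production code; it needs using System.Linq and using System.Runtime.InteropServices):

foreach (var assemblyName in System.Reflection.Assembly.GetExecutingAssembly().GetReferencedAssemblies())
{
    var assembly = System.Reflection.Assembly.Load(assemblyName.ToString());
    // flag assemblies that define at least one type explicitly marked [ComVisible(true)]
    bool exposesComVisibleTypes = assembly.DefinedTypes.Any(t =>
        t.GetCustomAttributes(typeof(ComVisibleAttribute), false)
         .Cast<ComVisibleAttribute>()
         .Any(a => a.Value));
    if (exposesComVisibleTypes)
        Console.WriteLine(assemblyName.Name + " exposes ComVisible types");
}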
Taking a step back made me think that this kind of information should live somewhere in the current process rather than in the DLL references themselves, since the count must be the number of currently instantiated distinct interfaces. There is so little information about this kind of problem that it is very difficult to figure out.
On the bright side, I figured out how to replicate the issue easily, and it is not a specific DLL that throws the error. I made dummy DLLs with a single class holding a string value, and when one is instantiated at the right moment it throws the error.
It should be replicable in a simple test project by creating 260 small unmanaged DLLs, each with a different class: Class1 in dll1.dll, Class2 in dll2.dll, and so on. Then add a reference to each of the 260 DLLs and instantiate the class in each of them. It will crash on the instantiation of Class256 in dll256.dll. Change the order however you want and it will still crash on the 256th instantiation.
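If someone wants to try, the repro loop would look something like this (purely illustrative; it assumes each dllN.dll registers a COM class with a ProgID like "DllN.ClassN", which is not what my real DLLs are called):

for (int i = 1; i <= 260; i++)
{
    Type comType = Type.GetTypeFromProgID("Dll" + i + ".Class" + i);
    object instance = Activator.CreateInstance(comType);
    // in my tests the crash happens on the 256th distinct interface instantiated
    Console.WriteLine("Created instance " + i);
}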
I use DLLs from multiple manufacturers that each have between 10 and 100 classes, both managed and unmanaged, and I can only call a few of them before the application crashes completely. So far, .NET DLLs (even those that are not mine) do not seem to count toward this limit. Do not take this as "it actually does not count": I am not sure, it just does not seem to affect the count in my initial tests.
New things came up
We absolutely needed to bypass this limit, and still no one with a project of this scale has figured it out. Out of desperation we decided to temporarily create separate EXEs and pass batches of commands, unsafely, through a local text file; the EXE runs and reads the text file to know what to do. That EXE references a couple of the DLLs the main project uses, so it has its own RPC limit of 256 that does not count toward the main project's limit.
The EXE then runs what it needs to and saves the result in another text file, which we wait for and then read. This is ugly and required months of coding, but it works~ish. The main problem is that anti-virus software and Windows Defender block all these EXEs, especially those that access the web and download files, so we had to contact each of our clients and manually whitelist all these new EXE files, and boy, that is long and difficult.
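Stripped down, the hand-off works roughly like this (paths and the helper EXE name are placeholders, not our real ones):

// 'commands' is the string[] of work items for the helper to run
System.IO.File.WriteAllLines(@"C:\Temp\commands.txt", commands);
var helper = System.Diagnostics.Process.Start(@"C:\OurApp\RpcHelper.exe");
helper.WaitForExit();
string[] results = System.IO.File.ReadAllLines(@"C:\Temp\results.txt");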
We are still looking for a solution where we can raise the limit in the Windows registry, or count the number of RPC interfaces instantiated so we can force the user to shut down and restart the application as it gets used up. Why are we still looking for a solution? It is simple: this has completely killed our agile estimates for each bug. On our charts, a usual 2-point task went to 8 or 13 points.
Related
I have set up a local WCF service, self-hosted in a console application, using NetNamedPipeBinding for client access.
To make calls to the service I reference a library.dll where I have the following method:
public static string GetLevel(Point p)
{
    ChannelFactory<IService> pipeFactory = new ChannelFactory<IService>(new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/PTS_Service"));
    IService pipeProxy = pipeFactory.CreateChannel();
    string result = pipeProxy.GetLevel(p);
    ((IClientChannel)pipeProxy).Close();
    pipeFactory.Close();
    return result;
}
The GetLevel() command returns a string from a list stored in the service, based on the Z coordinate of Point(X,Y,Z) p.
This works and gives speeds of 8ms total if the method is called from the above console app.
However, when the same method from library.dll is called from another app.exe or from plugin.dll (loaded by an external program), the times increase drastically. I've stopwatched the above five lines of code (cumulative times in ms):
consoleHost.exe : 0 - 3 - 6 - 7 - 8
app.exe : 89 - 155 - 248 - 259 - 271
plugin.dll : 439 - 723 - 1210 - 1229 - 1245
Shouldn't the times be the same, not dependent on who makes the call to library.dll?
EDIT
Since I've stripped everything down to just retrieving a string from the running service, I believe the problem lies in the first creation of the ChannelFactory; all subsequent calls within the same app/plugin run take equal time.
I understand the first call is slower, but since it is around 30ms in a new app and around 900ms in my plugins, I believe something else is causing this.
I have found a question with similar delays:
First WCF connection made in new AppDomain is very slow, to which the solution was to set LoaderOptimizationAttribute to MultiDomain. Could it be that every time the plugin runs it has to JIT-compile instead of using native code?
I tried adding this attribute above Main in consoleHost.exe, but see no gain in the plugin run time. Could this be because of the external program in between, and is there a way around this? For example, could my plugin create a new AppDomain whenever it wants to access the service and call the above method from my library.dll from within that new AppDomain, or does this make no sense?
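For clarity, this is roughly what I added (just the attribute on Main; the rest of the host code is unchanged):

[LoaderOptimization(LoaderOptimization.MultiDomain)]
static void Main(string[] args)
{
    // ... start the self-hosted net.pipe service as before ...
}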
EDIT2
I recorded the time spent in JIT compiling with a profiler as suggested in the comments; this gives 700ms of JIT compilation out of a total execution time of 800ms for the plugin.
I used ngen to precompile library.dll and create a native image (.ni.dll). I can see in Process Explorer that this image is loaded by the external program, yet there is no time gain in the plugin. As I understand it, there should be no reason for the plugin to still JIT-compile, or am I doing something wrong?
I also noticed while debugging in VS that the console and app only load some assemblies once, whereas the plugin loads and unloads them every time it creates or modifies a plugin instance. I believe this is just the way plugins work and should not explain the difference in first execution time?
The communication time should not depend on the caller, but on the way the calls are made.
The most time-expensive operation is creating a channel.
Once the proxy is created, every subsequent call should run at a similar average speed (provided, of course, that the callers use the service from the same place, i.e. the same machine on the network; in your case that holds, since you use localhost).
Some performance increase can also be achieved through service configuration (SingleInstance should be faster than PerCall).
Another point to pay attention to is possible locks in your service method. It can happen that some service clients are waiting for a call while the service is busy.
If the service call is not async, try making it async and use that.
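For example, a rough sketch of reusing the factory and proxy from the question's code (illustrative only, no fault handling):

private static readonly ChannelFactory<IService> pipeFactory =
    new ChannelFactory<IService>(new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/PTS_Service"));
private static readonly IService pipeProxy = pipeFactory.CreateChannel();

public static string GetLevel(Point p)
{
    // the channel-creation cost is now paid once, not on every call
    return pipeProxy.GetLevel(p);
}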
After some further investigation: the external program prevented the sharing of loaded assemblies/JIT-compiled code through a setting in an .ini file read when the process starts. Fortunately this could be disabled, so sharing also becomes possible in the plugin.
After altering this setting (one line in the .ini changed to No instead of Yes!), the first call dropped to 30ms and every subsequent call to 3ms or less.
It is entirely possible I'm going about this the wrong way, but here is what I'm doing:
I have a device that is communicating with a DLL over a COM port. This works fine for a single program, however I need multiple programs to be running, each updating with the state of the device.
Since I can't work out how to share access to a COM port, my solution is that each DLL checks for the existence of a timestamped file. If the file is there, the DLL goes into 'slave' mode and just reads the state of the device from the file. If the file doesn't exist or is over 30ms old, the DLL appoints itself 'master', claims the COM port, and writes the file itself.
(The device can also be sent instructions, so the master will have to handle collecting and sending the slaves' requests somehow, but I can deal with that later. Also I might want the DLLs to be aware of each other, so if one is causing problems then the user can be told about it - again, will get to this later.)
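The election itself is simple; roughly like this (sharedPath and currentDeviceState are placeholders for whatever we end up using):

var statusFile = new System.IO.FileInfo(sharedPath);
bool fileIsFresh = statusFile.Exists &&
    (DateTime.UtcNow - statusFile.LastWriteTimeUtc) < TimeSpan.FromMilliseconds(30);
if (fileIsFresh)
{
    // slave: read the device state another instance is writing
    string state = System.IO.File.ReadAllText(statusFile.FullName);
}
else
{
    // master: claim the COM port and start writing the state file ourselves
    System.IO.File.WriteAllText(statusFile.FullName, currentDeviceState);
}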
My immediate problem now is where to store this file (or possible collection of files). It must:
be somewhere that doesn't force the programs using the DLLs to have admin privileges or anything else that means they can't "just run".
always be the same place, not somewhere that might change based on some outside factor. Googling has shown me things like Environment.SpecialFolder.CommonDocuments (in C#), but if that location is sometimes C:\ProgramData and other times C:\Users\Public then we'll get two parallel masters, only one of which can claim the COM port (see the sketch after this list).
work on as many Windowses as possible (all the way back to XP if I can get away with it) because no one wants to maintain more versions than necessary.
preferably, be somewhere non-technical users won't see it. Quite apart from confusing/scaring them, it just looks unprofessional, doesn't it?
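To be concrete, the kind of lookup I mean is below; whether something like CommonApplicationData actually satisfies all of the above on every machine is exactly what I'm unsure about (folder and file names are placeholders):

string sharedDir = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
string statusPath = System.IO.Path.Combine(sharedDir, "MyDeviceDll", "device-state.txt");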
Two sidenotes:
I looked at using the Registry and learned there's a time cost to reading it. The DLLs need to be reading with a period of maybe 10ms, so I assume the Registry is a bad idea.
After we get this working on Windows, we need to turn around and address Android, OS X, Linux, etc., so I'm trying to bear that in mind when deciding how we structure everything.
EDIT:
I should add for context that we're releasing the DLL and letting other devs create apps that use it. We want them to work as frictionlessly as possible, without for example requiring the user install anything first, or the dev needing to jump through a bunch of hoops to be sure it will work.
I've been working on an internal developer tool on and off for a few weeks now, but I'm running into an ugly stumbling block I haven't managed to find a good solution for. I'm hoping someone can offer some ideas or guidance on the best ways to use the existing frameworks in .NET.
Background: the purpose of this tool is to load multiple different types of log files (Windows Event Log, IIS, SQL trace, etc.) to the same database table so they can be sorted and examined together. My personal goal is to make the entire thing streamlined so that we only make a single pass and do not cache the entire log either in memory or to disk. This is important when log files reach hundreds of MB or into the GB range. Fast performance is good, but slow and unobtrusive (allowing you to work on something else in the meantime) is better than running faster but monopolizing the system in the process, so I've focused on minimizing RAM and disk usage.
I've iterated through a few different designs so far trying to boil it down to something simple. I want the core of the log parser--the part that has to interact with any outside library or file to actually read the data--to be as simple as possible and conform to a standard interface, so that adding support for a new format is as easy as possible. Currently, the parse method returns an IEnumerable<Item> where Item is a custom struct, and I use yield return to minimize the amount of buffering.
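To give an idea of the shape, here is a minimal sketch (Item's fields and the type names are placeholders, not the tool's real types):

using System;
using System.Collections.Generic;

public struct Item
{
    public DateTime Timestamp;
    public string Source;
    public string Message;
}

public interface ILogParser
{
    // each parser streams items lazily, so callers never hold the whole log in memory
    IEnumerable<Item> Parse(string path);
}

public class PlainTextParser : ILogParser
{
    public IEnumerable<Item> Parse(string path)
    {
        foreach (string line in System.IO.File.ReadLines(path))
            yield return new Item { Timestamp = DateTime.UtcNow, Source = path, Message = line };
    }
}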
However, we quickly run into some ugly constraints: the libraries provided (generally by Microsoft) to process these file formats. The biggest and ugliest problem: one of these libraries only works in 64-bit. Another one (Microsoft.SqlServer.Management.Trace TraceFile for SSMS logs) only works in 32-bit. As we all know, you can't mix and match 32- and 64-bit code. Since the entire point of this exercise is to have one utility that can handle any format, we need to have a separate child process (which in this case is handling the 32-bit-only portion).
The end result is that I need the 64-bit main process to start up a 32-bit child, provide it with the information needed to parse the log file, and stream the data back in some way that doesn't require buffering the entire contents to memory or disk. At first I tried using stdout, but that fell apart with any significant amount of data. I've tried using WCF, but it's really not designed to handle the "service" being a child of the "client", and it's difficult to get them synchronized backwards from how they want to work, plus I don't know if I can actually make them stream data correctly. I don't want to use a mechanism that opens up unsecured network ports or that could accidentally crosstalk if someone runs more than one instance (I want that scenario to work normally--each 64-bit main process would spawn and run its own child). Ideally, I want the core of the parser running in the 32-bit child to look the same as the core of a parser running in the 64-bit parent, but I don't know if it's even possible to continue using yield return, even with some wrapper in place to help manage the IPC. Is there any existing framework in .NET that makes this relatively easy?
WCF does have a P2P mode; however, if all your processes are on the local machine you are better off with IPC such as named pipes, since the latter runs in kernel mode and does not have the messaging overhead of the former.
Failing that, you could try COM, which should not have a problem talking between 32- and 64-bit processes.
In case anyone stumbles across this, I'll post the solution that we eventually settled on. The key was to redefine the inter-process WCF service interface to be different from the intra-process IEnumerable interface. Instead of attempting to yield return across process boundaries, we stuck a proxy layer in between that uses an enumerator, so we can call a "give me an item" method over and over again. It's likely this has more performance overhead than a true streaming solution, since there's a method call for every item, but it does seem to get the job done, and it doesn't leak or consume memory.
We did follow Micky's suggestion of using named pipes, but still within WCF. We're also using named semaphores to coordinate the two processes, so we don't attempt to make service calls until the "child service" has finished starting up.
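Roughly, the inter-process contract ended up shaped like the sketch below (names are placeholders and error handling is omitted; in real WCF code, Item would also need to be a [DataContract] or otherwise serializable type):

[ServiceContract]
public interface IParserPump
{
    [OperationContract]
    bool MoveNext();     // advances the 32-bit child's enumerator

    [OperationContract]
    Item GetCurrent();   // returns the item the child is currently positioned on
}

// on the 64-bit side, 'pump' is a channel proxy for IParserPump over the named pipe,
// and the pull loop reads like ordinary enumeration:
while (pump.MoveNext())
{
    Item item = pump.GetCurrent();
    // insert the item into the database, etc.
}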
What I am trying to achieve is to protect a customer-developed video game executable (built with Unity3D) with a license that expires after a certain period of time. The thing is, my team no longer has any access to the game executable, so recompilation of the game executable is out of the question. That is why I decided to embed the game executable as a resource into a "wrapper" program that first checks whether the client license is valid (through on-line verification; I used System.Net and System.Net.Http to achieve that). After the license server returns an "OK" response to my desktop "wrapper", I would like to start the game executable in memory, without writing it to disk.
Here is the sample code for loading my executable from a resource, but it throws a BadImageFormatException at runtime:
Stream stream = GetType().Assembly.GetManifestResourceStream("LicenseServerClient.testBinary.exe");
byte[] bytes = new byte[(int)stream.Length];
stream.Read(bytes, 0, bytes.Length); // note: Read is not guaranteed to fill the whole buffer in one call
Assembly assembly = Assembly.Load(bytes); // the debugger stops here, throwing a BadImageFormatException
assembly.EntryPoint.Invoke(null, new object[0]);
Now, if you think there is a better way to achieve the whole licensing architecture under these constraints, please feel free to comment and give feedback.
Can you use your Unity3d executable as a reference in your project? First, attempting to do so would answer the question of whether the binary is even a valid assembly as far as .NET is concerned. Based on the error message, I doubt it is and nothing you can do will change that (Unity is based on Mono, but as I understand it, standalone Unity programs are not managed code assemblies).
If you can reference it from your own project, then you ought to be able to load the assembly in-memory from a resource. Hence my skepticism about that being possible.
Given the assumed incompatibility, a better approach is probably to write the binary back to disk (e.g. as a temp file) and then just execute it as an external process. Hosting a non-managed-code binary inside your managed process is likely to be infeasible.
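A minimal sketch of that approach, reusing the resource name from your snippet (error handling and cleanup of the temp file are omitted):

string tempPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "testBinary.exe");
using (Stream resource = GetType().Assembly.GetManifestResourceStream("LicenseServerClient.testBinary.exe"))
using (FileStream file = File.Create(tempPath))
{
    resource.CopyTo(file);   // write the embedded game back to disk
}
System.Diagnostics.Process.Start(tempPath);   // run it as a normal external process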
Of course, doing so will make it easier for people to bypass your wrapper. If they examine the running processes, it won't be hard to trace back the game process to the file you wrote, and they can just make a copy of the file at that point.
But then, your current intended approach isn't much harder to hack. It won't be hard for someone with just a little more computing skill to use the right tool to find the Assembly.Load() call, find the resource you're running the game from, and then extract that resource as the standalone game without the wrapper.
Bottom line: without modifying the game code itself, if you intend to execute the game's code on the client's machine, there's not much you can do to enforce your license programmatically. You'll be able to slow down the most naïve users, but that's about it. (Of course, that's true for even the most sophisticated copy protection, with only a relatively small degree of difference, so I guess you might consider any license-enforcement scheme that has any effect at all to be "good enough" :) ).
When I try to compile an assembly in VS 2008, I occasionally get (usually after 2-3 hours of work on the project) the following error:
Metadata file '[name].dll' could not be opened --
'Not enough storage is available to process this command.
Usually, to get rid of it, I need to restart Visual Studio.
The assembly I need to use in my project is quite big (> 70 MB), and this is probably the reason for the bug; I've never seen anything like it in my previous projects. OK, if this is the reason, my question is why this happens and what I need to do to stop it.
I have enough free space on my drives and 2 GB of RAM (only ~1.2 GB is utilized when the exception happens).
I googled for answers to questions like this.
Suggestions usually relate to:
the number of user handles, which is limited in WinXP...
the physical limit of memory available per process
I don't think either could explain my case.
As for user handles and other GUI resources: I don't think this could be the problem. The big 70 MB assembly is actually GUI-less code that works with sockets and implements parsers for proprietary protocols. In my current project I have only 3 GUI forms, with fewer than 100 GUI controls in total.
I suppose my case is closer to the fact that in Windows XP the process address space is limited to 2 GB (and, taking into account memory segmentation, it is possible that I don't have a free segment large enough for the allocation).
However, it is hard to believe that segmentation could get that bad after just 2-3 hours of working with the project in Visual Studio. Task Manager shows that VS consumes about 400-500 MB (OM + VM). During compilation, VS only needs to load the metadata.
Well, there are a lot of classes and interfaces in that library, but I would still expect 1-2 MB to be more than enough to allocate the metadata the compiler uses to find all public classes and interfaces (though this is only my assumption; I don't know what exactly happens inside the CLR when it loads assembly metadata).
In addition, I would say that the entire assembly is only so big because it is a C++/CLI library that has other unmanaged libraries statically linked into one DLL. I estimated (using Reflector) that the .NET (managed) code is approximately 5-10% of this assembly.
Any ideas how to find the real reason for this bug? Are there any restrictions or recommendations as to .NET assembly size? (Yes, I know it is worth thinking about refactoring and splitting the big assembly into several smaller pieces, but it is a 3rd-party component and I can't rebuild it.)
The error is misleading. It really should say "A large enough contiguous space in virtual memory could not be found to perform the operation". Over time, allocations and deallocations of virtual memory space lead to it becoming fragmented. This can lead to situations where a large allocation cannot be satisfied despite there being plenty of total space available.
I think this is what your "segmentation" is referring to. Without knowing all the details of everything else that needs to load, and the other activity that occupies the 2-3 hour period, it's difficult to say whether this really is the cause. However, I would not put it in the category of unlikely; in fact, it is the most likely cause.
In my case the following fix helped:
http://confluence.jetbrains.net/display/ReSharper/OutOfMemoryException+Fix
As Anthony pointed out, the error message is a bit misleading. The issue is less about how big your assembly is and more about how much contiguous memory is available.
The problem is likely not really the size of your assembly. It's much more likely that something inside Visual Studio is fragmenting memory to the point that a build cannot complete. The usual suspects for this type of problem are:
Too many projects in the solution.
Third party add-ins
If you have more than, say, 10 projects in the solution, try breaking up the solution and see if that helps.
If you have any 3rd-party add-ins, try disabling them one at a time and see if the problem goes away.
I am getting this error on one of my machines and, surprisingly, the problem is not seen on other dev machines. Maybe something is wrong with the VS installation.
But I found an easier solution.
If I delete the solution's .suo file and re-open the solution, it starts working smoothly again.
Hope this will be useful for somebody in distress.
If you are just interested in making it work, restart your computer and it will work like a charm. I had the same kind of error in my application, and after reading all of the answers here on Stack Overflow, I decided to restart my computer before making any other modifications. It saved me a lot of time.
Another cause of this problem can be using too many typed datasets via the designer, or other types that are instantiated by a designer, like lots of data-bound controls on lots of forms.
I imagine you're the sort of hardcore programmer, though, who wouldn't drag n' drop a DS! :D
In relation to your problem, Bogdan, have you tried to reproduce the problem without your C++ component loaded? If you can't, then maybe it's this. How are you loading the component? Have you tried other techniques like late binding, etc.? Any difference?
Additional:
Yes, you are right, the other culprits are lots of controls on the forms. I once saw this same issue with a dev who had imported a VB6 app over to .NET; he had literally hundreds of forms. He would get periodic crashes of the IDE after a couple of hours. I'm pretty sure it was thread exhaustion. It might be worth setting up a vanilla box with no add-ins loaded just to rule add-ins out, but my guess is you are just hitting the wall in terms of a combined limitation of VS and your box specs. Try running Windows Vista 64-bit and installing some extra RAM modules.
If memory usage and VM size are small for devenv:
Explicitly kill ALL running instances of devenv.exe.
I had 3 devenv.exe processes running, whereas I had only two instances of Visual Studio open in front of me.
That was the solution in my case.
I know it has been a long time since this was commented on, but I ran into this exact issue today with a Telerik DLL in VS2010. I had never seen this issue before until today, when I was making some setting changes in IE.
There is a setting under Tools / Folder Options / View, in the Files and Folders section, called "Launch folder windows in a separate process".
I am not sure how much memory each window uses with this setting on, but until today I had never had it checked. After checking this option for miscellaneous reasons, I started getting the "not enough storage is available to process this command" error. The Telerik DLL is an 18 MB DLL located in our library folder and referenced by our project.
Unchecking this resolved the problem.
Just passing this along as another possible solution.
I also faced the same problem.
Make sure the Windows OS is 64-bit.
I switched from 32-bit Windows to 64-bit Windows and the problem got solved.
I had this same issue and, in my case, the exception was very misleading. The actual problem was that the DLL couldn't be loaded at all due to an invalid path; the exception I was getting was still the "not enough storage" message.
I used the DllImport attribute in a C# ASP.NET application with a declaration like the one below, and it was causing the exception:
[DllImport(#"Calculation/lib/supplier/SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
Below is the working code snippet:
[DllImport(#"Calculation\lib\supplier\SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
The actual problem was using forward slashes in the path instead of backslashes. This cost me way too much time to figure out; hope this will help others.