I am working on a cookbook application in WPF that consists of one window and several user controls that replace each other via RelayCommands, using messages from MVVM Light.
The application works with a database generated by Entity Framework. The problem is that on every execution after the first, the build shows many warnings and errors such as this one:
Warning 1 Could not copy "...\cookbook\Cookbook.Services\Database1.mdf" to "bin\Debug\Database1.mdf". Beginning retry 1 in 1000ms. The process cannot access the file '...\cookbook\Cookbook.Services\Database1.mdf' because it is being used by another process. Cookbook.Services
In the ViewModelLocator I have this:
public ViewModelLocator()
{
    ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
    SimpleIoc.Default.Register<MainWindowViewModel>();
    SimpleIoc.Default.Register<MainViewModel>();
    SimpleIoc.Default.Register<FoodTypeViewModel>();
    SimpleIoc.Default.Register<ShoppingCartViewModel>();
    SimpleIoc.Default.Register<MenuViewModel>();
    SimpleIoc.Default.Register<MenuListViewModel>();
    SimpleIoc.Default.Register<MenuCalendarViewModel>();
    SimpleIoc.Default.Register<ChooseFoodWindowViewModel>();
}
The messages I use to switch the user controls also create new instances of the ViewModels, for example:
BackToMainCommand = new RelayCommand(() =>
    {
        Messenger.Default.Send<ViewModelBase>(new MainViewModel());
    },
    () => true);
I have toyed with making the ViewModels singletons to ensure there is only a single copy of each in the system, but SimpleIoc requires public constructors for registration, and I don't know whether that would even help. I should also mention that the ViewModelLocator is used only in XAML, so I don't even have an instance of it on which to clean things up. (I am probably using it wrong, but I don't know how it should be used.)
The problem is that I don't know how and where to clean up all the ViewModels, since they are being created in the many places I've mentioned, and some of them are probably holding the *.mdf file open.
As mentioned in the comments, you are getting the
Warning 1 Could not copy "...\cookbook\Cookbook.Services\Database1.mdf" to "bin\Debug\Database1.mdf". Beginning retry 1 in 1000ms.
The process cannot access the file '...\cookbook\Cookbook.Services\Database1.mdf' because it is being used by another process. Cookbook.Services
warning (and, after sufficient retries, error) message from the compiler during a build because the process created for the application you were running/debugging:
- has not yet completed, or
- has not closed all connections to the database file.
So when you build again, that process's file handle is still open and the build cannot copy over the open file.
It is difficult to establish from the code you have posted in your question what the direct cause of this is, but this line:
Messenger.Default.Send<ViewModelBase>(new MainViewModel());
is clearly problematic, because it creates a new instance instead of using the singleton-lifecycle instance from the SimpleIoc container. Although still ugly from a proper DI perspective, you could change it to:
Messenger.Default.Send<ViewModelBase>(ServiceLocator.Current.GetInstance<MainViewModel>());
So it will not create a new instance of your MainViewModel, but return the one from the IoC container.
Furthermore, you may want to make sure that your database context is registered in your container, and injected into the view models that need it. Illustrating this (assuming your database context/service class is called MyDbContext, implements IMyDbContext, and takes a connection string as its constructor argument):
SimpleIoc.Default.Register<IMyDbContext>(() => new MyDbContext(GetMyConnectionString()));
Now, you must also ensure that on application exit, proper cleanup is performed so that Dispose is called on the IMyDbContext instance, and on any other resources in your application that require disposal. If this is not already done through MVVM Light, you can do it by reacting to the Application.Exit event on your Application:
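A minimal sketch of that cleanup, assuming the IMyDbContext registration shown above (the override and the cast are illustrative, not the only way to wire this up):

```csharp
// App.xaml.cs — sketch: dispose the container-held context on exit.
public partial class App : Application
{
    protected override void OnExit(ExitEventArgs e)
    {
        // Resolve the singleton instance from SimpleIoc and dispose it,
        // closing any open connection to Database1.mdf before the
        // process ends.
        var context = SimpleIoc.Default.GetInstance<IMyDbContext>();
        (context as IDisposable)?.Dispose();
        base.OnExit(e);
    }
}
```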
Your problem is probably caused by the way you use your DbContext. You did not show in your question how you handle it, so I will guess at what happens on your side. You should always make sure that after using a DbContext you dispose it immediately; it should not be kept alive for the application's whole lifetime. I do not see you registering it with your IoC container, so I assume you just instantiate it somewhere within your ViewModels. In that case you should always wrap your DbContext objects in a using() block to ensure they are disposed. If you do that, you should not have any connection open to your database when you close your application normally.
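For example (a sketch — MyDbContext, Recipe, and the Recipes set are placeholder names, not from your code):

```csharp
// Sketch: create the context per operation and dispose it immediately.
public List<Recipe> GetRecipes()
{
    using (var db = new MyDbContext())
    {
        // The connection is closed and the .mdf file handle released
        // as soon as the using block ends.
        return db.Recipes.ToList();
    }
}
```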
The other case is connected to debugging your application in VS. By default this is done with the VS hosting process, so when you hit the "stop debugging" button, DbContexts with open connections are not disposed and the VS hosting process is not killed. To avoid this, I would recommend disabling the VS hosting process: go to project properties -> Debug -> and uncheck Enable the Visual Studio hosting process. Note that this may slightly increase the time it takes your application to start when you debug it.
Related
We use multiple micro-service applications, each with its own specific DbContext. A few entities are necessarily duplicated across those contexts, for example the Region entity. I want to use a single service to perform CRUD on Regions in all DB contexts. That's why I made this service depend on the base DbContext, relying on the IoC container in each micro-service app to provide its own specific context, let's call it ExampleDbContext. Then I use Set<Region> to manipulate the entity.
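Roughly, the shared service looks like this (a sketch to make the setup concrete; class and method names are illustrative, and ToListAsync assumes EF Core's async extensions):

```csharp
// Sketch: a CRUD service that works against whatever derived context
// the micro-service's IoC container supplies as the base DbContext.
public class RegionService
{
    private readonly DbContext _context; // resolved as e.g. ExampleDbContext

    public RegionService(DbContext context) => _context = context;

    // Set<Region>() works on any context that maps the Region entity.
    public Task<List<Region>> GetAllAsync() =>
        _context.Set<Region>().ToListAsync();

    public async Task AddAsync(Region region)
    {
        _context.Set<Region>().Add(region);
        await _context.SaveChangesAsync();
    }
}
```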
I get an inconsistent behavior - sometimes it works, while most of the time I get:
InvalidOperationException: An attempt was made to use the context while it is being configured. A DbContext instance cannot be used inside OnConfiguring since it is still being configured at this point. This can happen if a second operation is started on this context before a previous operation completed...
From what I can gather, the usual culprit is simultaneous use of the same DbContext instance. All our methods are async, so it is certainly possible, but I find it unlikely, since the code that throws is called once at application start, inside the Program.Main method. Either way, I am not sure how to confirm or rule that out.
What I find strange is that if I replace DbContext with an explicit ExampleDbContext, this error is no longer thrown. I am thus led to believe that there is some difference in how the default DbContext is instantiated.
As a further note, I think ExampleDbContext is instantiated three times, because a breakpoint inside its OnConfiguring method gets hit three times.
Edit
Turns out using the specific ExampleDbContext also throws, however much more rarely.
Edit 2
Alright, after another day of debugging and commenting out code to pinpoint the issue, I can confirm that it was indeed parallel calls to the same DbContext instance.
One piece is still missing from the puzzle:
- Why did using DbContext instead of ExampleDbContext throw more frequently?
Overall, I guess the moral of the story is: take your time to eliminate the obvious cause of the issue. Only when that's off the table should you consider other causes, and only then post asking for help.
I have an application (C# + WPF) that attempts to wrest control of the graphical interface of any process passed to it as input and resize/reposition it for my own purposes.
It does its job rather well, I think. Upon expected termination (the base class inherits from IDisposable) the "captured" process is released: its parent is set back to the original, its window style is reset, etc.
In fact, on testing, I can capture, release, recapture, and so on, the same process as many times as I want with no issues.
However, upon unexpected termination (say, another process forcefully kills mine), the captured process never regains its graphical interface! I can tell it's still running, but I can never set it back to its original state.
It almost seems like the process no longer responds to window-based Win32 API calls that set specific window features (for example, I can get information with GetParent, GetWindowThreadProcessId, etc., but calling ShowWindow or related functions does nothing).
Any suggestions on why this is happening? I'm guessing that since I set the parent of the process to my WPF application (which then unexpectedly closes) it causes some issue in trying to recover the initial interface?
So that is why it's happening (or at least an indication of why I had so much difficulty finding the issue on my own); can I recover from it? And if so, how?
Edit -
IInspectable makes a good point in the comments, question adjusted to make better sense for this particular application.
It seems I've gotten my answer; so, for the sake of completeness I'll post what I've gotten here in case anyone else has a similar issue.
According to the information provided by IInspectable in here and here (with more context in the comments), it seems that what I'm trying to do here (assign a new parent cross-process) is essentially unsupported behavior.
My Solution:
Recovering (at least at the point I'm talking about, i.e. after unexpected crashes or exits) probably isn't feasible, as we've already wandered into undetermined/unknown behavior. So I've decided to go the preventative route.
Our current project already makes use of the Nancy framework to communicate across servers/processes so I'm going to "refine" our shutdown procedure a bit for my portion of the program to allow it to exit more gracefully.
In the case of a truly unexpected termination, I'm still at a loss. I could just restart the processes (actually services with a console output, in our case, but w/e), but my application is just a GUI/interface and isn't very important compared to the function these processes serve. I may create some sort of semaphore file that indicates whether a successful shutdown occurred, and branch my code so that it reports the processes as no longer visible until the next time they are restarted.
In my Event Sourced System, I have an endpoint that administration can use to rebuild read side Databases in the case of some read side inconsistency or corruption.
When this endpoint is hit, I would like to stall (or queue) the regular system commands so they cannot be processed. I want this so that events are not emitted and read-side updates are not made while the data stores are being rebuilt. Otherwise, new (live) event updates could be processed in the middle of the rebuild and leave the read-side DB in an inconsistent state.
I was going to use a static class with some static properties (essentially mocking a global variable), but have read this is bad practice.
My questions are:
Why is this bad practice in OO design and C#?
What other solutions can I use to accomplish this class communication in place of this global variable?
Why is this bad practice in OO design and C#? (using global variables)
There is a lot of discussion about this in the community, but very briefly: it makes program state unpredictable.
What other solutions can I use to accomplish this class communication in place of this global variable?
You should not stop command processing if you only need to rebuild a Readmodel. The write side should proceed as usual, because it doesn't need data from the read side (unless there are Sagas involved). The clients need their commands processed, so the rebuild should happen transparently.
Instead, you should create another instance of the Readmodel that uses a separate (temporary) persistence (another database/table/collection/whatever) and use it to rebuild the state. Then, when the rebuilding is done, replace the old/faulty instance with this new one.
For the transition to be as smooth as possible, the fresh Readmodel should subscribe to the event stream even before the rebuilding starts, but it should not process any incoming event yet. Instead, it should put the events in a ring buffer, along with the events fetched from the Event store, Event log, or whatever event source you are using.
Events in this ring buffer should be processed oldest first. This way, events generated by new commands are not processed until the old events (those generated before the rebuilding started) have been processed.
Now that you have a clean Readmodel processing the latest events (a caught-up Readmodel), you just need to replace the faulty Readmodel somehow, i.e. swap it in the composition root of your application (the dependency injection container). The faulty Readmodel can then be discarded.
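The rebuild-while-buffering idea can be sketched like this. All the types here (ReadModel, IEventStore, IEventBus, IEvent) are assumptions for illustration, and for simplicity the sketch replays history first and then drains the buffer, omitting the deduplication by event sequence number that a real implementation would need:

```csharp
// Sketch: rebuild into a fresh read model while buffering live events.
public async Task RebuildAsync(ReadModel fresh, IEventStore store, IEventBus bus)
{
    var buffer = new ConcurrentQueue<IEvent>();

    // 1. Subscribe before the replay starts, so no live event is lost;
    //    incoming events are only buffered, not applied yet.
    using var subscription = bus.Subscribe(ev => buffer.Enqueue(ev));

    // 2. Replay historical events into the fresh read model.
    await foreach (var historical in store.ReadAllAsync())
        fresh.Apply(historical);

    // 3. Drain the buffer oldest-first, so live events are applied only
    //    after everything that preceded the rebuild.
    while (buffer.TryDequeue(out var buffered))
        fresh.Apply(buffered);

    // 4. The caller can now swap this caught-up read model in for the
    //    faulty one in the composition root.
}
```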
This may be a long shot but I'm out of ideas.
I've got a VS C# solution with three projects in it. There's a class library project and then two application projects that depend on that class library. The class library in turn depends on a few other DLLs including the avalonedit dll from the sharpdevelop project.
One of the applications is building and running fine, including a use of my own control that wraps the avalonedit control. The other application is failing to run and it seems to be failing at the point when the avalonedit control is initialised via the XAML in my wrapping control.
The problem is that I don't see any errors in the debug output at all; all I see is the DLL-loaded message and then nothing. If I step into the constructor of my control, the step never completes. The debugger says the app is running, but it is apparently spinning somewhere in the AvalonEdit DLL when the underlying edit control is constructed from the XAML side.
I have to assume there's some difference in environment between the two projects, but I'm kind of stumped as to how to proceed in tracking the problem down. Am I going to have to somehow arrange matters so that I can put a breakpoint in the AvalonEdit source?
Edit: If I pause/break all it just drops back to the line calling my control constructor.
Sounds like a deadlock. Take a close look at all threads, their stack traces, and synchronization primitives (locks, semaphores, etc.). Keep in mind that contended resources may not be explicit: for example, if you are inside a static constructor waiting on something that tries to access a static field of the type being constructed, you get a deadlock.
There are many ways to introduce a deadlock and no simple advice for handling them. You could also enable breaking on all exceptions in Visual Studio (Debug -> Exceptions... and tick CLR Exceptions).
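The static-constructor case mentioned above can be reproduced in a few lines (a deliberately contrived sketch; do not write code like this):

```csharp
// Sketch: the static constructor waits on a task that reads a static
// field of the same type. The task thread blocks on the CLR's
// type-initialization lock, which the constructing thread holds while
// it waits for the task — a guaranteed deadlock.
static class Holder
{
    public static readonly int Value;

    static Holder()
    {
        // The lambda cannot read Holder.Value until this static
        // constructor finishes — but we block here waiting for it.
        var t = Task.Run(() => Holder.Value);
        t.Wait(); // never returns
        Value = 42;
    }
}
```

In the debugger this looks exactly like the symptom described: the app appears to be running but a constructor never completes.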
If this does not help you could provide stack traces here and maybe somebody could spot the problem.
Here is the scenario:
On application open the user is asked to open a "Project"
Inside a project there are processes, categories, tasks, documents and notes.
There are a few more, but let's keep it simple. Most of the items above are lists that are bound to grids or tree views.
Each of them have a separate database table and are linked with values in the database.
So after a user opens a project I could do something like this:
Get Processes that belong to Project 'A'
Get Categories that belong to each process
Get Tasks that belong to each category
Get Documents that belong to Project 'A'
Get Notes that belong to Project 'A'
and so on
The issue is that if the user closes this Project, I have to clear every bound control and every related variable, and then be ready to open another Project.
So what I am looking for is some advice for the way to handle this type of situation efficiently.
I am using C# .Net, and the application is a Windows Forms application.
Well, any way you slice it, the memory allocations done on load will have to be cleaned up at some point.
Depending on how you do your data access, you may have to do it manually, or .NET's garbage collector may take care of it for you (most likely).
For this type of application (given the limited requirements you've written), I would normally implement it as an MDI application so you could have multiple projects open at once. All of the project loading/disposing code would be run through the child window, so any time a child window is closed, the memory gets cleaned up automatically.
Even if you don't/can't do MDI, I would strongly recommend that you follow the dispose-on-unload model instead of keeping the same form open and reinitializing it with new data. The latter model is very prone to erroneous behaviour because of lingering data and/or state (which is programmer error, but can be really difficult or even impossible to track down when trying to reproduce a client's issue). I've seen this pattern a lot in WinForms, and it isn't pretty once the form controls and business logic start to get complicated. Dump the form and start again.
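The dispose-on-unload model can be as simple as the following sketch (ProjectForm and OpenProject are illustrative names, not from your code):

```csharp
// Sketch: one child form per opened project. Closing the child form
// disposes it along with everything it bound (grids, tree views,
// lists), so the next project starts from a clean state instead of a
// reinitialized one.
private void OpenProject(string projectId)
{
    var child = new ProjectForm(projectId) { MdiParent = this };
    child.Show(); // WinForms disposes a non-modal form when it closes
}
```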