Unable to view values of variables while debugging - C#

I'm trying to debug portions of the application I'm currently working on, but when I try to check the value of a property or variable I get this error:
Cannot evaluate expression because a thread is stopped at a point where garbage collection is impossible, possibly because the code is optimized.
This is just a regular ASP.NET project. In some portions of the application I can view the properties and variables perfectly fine. I haven't figured out what's different between the blocks of code where I can see variable values and the blocks where I can't.

The problem was documented on an MSDN blog as a size limitation affecting certain types in certain situations; more details are in the link. I believe the limit was 256 bytes, and/or the total size or count of the arguments passed to a function. Sorry to say, there does not seem to be a quick fix, but hopefully the MSDN blog entry will help you identify a way to solve your problem.

This article, Rules of Funceval, gives a number of reasons why this can occur. If debugging is already turned on and optimisation turned off, there doesn't seem to be much else you can do about this problem.

Are you making release builds? Try changing the configuration to "debug" and see if it improves.
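If you do have to debug something where the optimizer is still involved, one narrowly-targeted workaround (a sketch, not a guaranteed fix; the class and method names here are hypothetical) is to ask the JIT to leave a specific method unoptimized, which often makes its locals evaluable in the debugger again:

    using System.Runtime.CompilerServices;

    public class OrderProcessor // hypothetical example class
    {
        // Asks the JIT to skip inlining and optimization for this one
        // method so the debugger can evaluate its locals.
        [MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
        public decimal CalculateTotal(decimal price, int quantity)
        {
            decimal total = price * quantity; // should now be inspectable
            return total;
        }
    }

Remember to take the attribute out again afterwards; the clean fix is still a Debug-configuration build with optimization turned off.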

We have the same problem in two of our WinForms user controls. In both cases the user controls contain a lot of business logic (2,000 and 3,000 lines of code respectively) and make use of multiple fairly heavy objects (they have 30+ properties that get populated automatically from the database the first time one of the properties is accessed). When you try to step through the (somewhat complicated) validation and saving methods, you get this same message when trying to access object properties.
We have come to the conclusion that the size and complexity of the user control, combined with the size and complexity of the objects used and the conditional database access, just becomes too much for the debugger to handle, and that we should probably do some major refactoring to move most of the business logic out of the user control. It would be interesting to know whether your problem arises from the same kind of situation, and whether that kind of refactoring actually makes a difference (we have not had the time and/or courage :) to do so).

Related

How to efficiently make changes to code (removing defunct code)

I've got an ASP.NET web application that is essentially our intranet site. I made a lot of progress on the administration office's employee-management pages. It ties into a SQL Server database, and I'm using a three-layered design (Objects, Logic, DataAccess). It was all reviewed and all of it was accepted, except (!) for the part that manages vacations and vacation histories.
My question, before I go into details is, how does one efficiently "untangle" code that is no longer necessary?
For example: previously I was treating each VacationDay as its own entity with its own history, such that I could track the history of an individual day. To help with tracking, I have an enum called VacationDayAction, which includes options such as .Submitted, .RequestDenied, .CancellationRequested, and so on. This was an attempt to provide meticulous detail for each day. It was then determined that we no longer need that. We do, however, still need VacationDays and all the basic functions around them (saving days, getting days, etc.), but we no longer need any of the "history"-related classes.
My problem is, when I right-click a class that I no longer need in VS and choose "Find All References," I get a ton of results scattered across several pages. I need to get rid of all of them without breaking the rest of the application. Is there not some kind of "smart" technique or method for easily untangling the parts that are no longer necessary? This is particularly difficult because 90% of what I did was just fine and needs to stay as it is, yet scattered through that 90% is the 10% that is no longer needed. I can't just go storming through with the delete key either, because with the removal of each reference I need to be sure that any dependencies on that reference are also fixed so that they don't call things that aren't there anymore. And I still need the application in a compilable state, so that I can test along the way that the rest of the application didn't fall apart as a result of some deletion.
To give you an idea of my low level of experience: I started two years ago, having never used C#, ASP.NET, or Visual Studio. It blew my mind when, well after starting and as I was learning, someone taught me that I could use breakpoints. And then it really, really blew my mind when I learned about multi-layered design. I'm wondering if there is some technique, trick, or feature that can help in scenarios like this, where you have to "untangle" and throw away unnecessary stuff.
This is not a simple question. In fact, I would say this is one of the major challenges for any systems developer: how to handle and get rid of old code which is not in use. There is lots of literature on this, and few really excellent answers. A good book is "Working Effectively with Legacy Code" by Michael Feathers, which deals with many related problems. It is no light read, though, and will probably take some time to get through, but it will likely help you become a better coder, and better at these kinds of tasks in particular.
Maybe you can have a look at the ReSharper tool (http://www.jetbrains.com/resharper/)? It is a productivity tool which, among other things, shows "dead" (unused) code in gray and lets you remove it. It will also help you remove unused references from each class (again, they will be grayed out and can be removed automatically).
Drawing diagrams where each major piece of code/component is a box, with a line linking it to any related component, might help you get a better overview; try to draw a hierarchy showing how the different parts of the code are related and dependent.
The bottom line, as far as I know, is that you just have to muddle through it, commenting out code a little at a time, then recompiling and testing. If it still works, fine; now you can remove the commented-out code completely. This would be easier if you had unit tests covering your code, but I take it as a given that you don't, as is unfortunately often the case.
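One compile-time trick that complements the comment-out-and-recompile cycle (a sketch; VacationDayHistory is a hypothetical name standing in for one of the history classes being removed): mark the doomed members with [Obsolete] so the compiler lists every remaining call site as a warning, then flip the error flag to true so any stragglers fail the build instead of compiling silently:

    using System;

    // Stand-in for one of the "history"-related classes to be deleted.
    // While the flag is false, every usage produces a compiler warning,
    // giving you a complete, navigable list of call sites to clean up.
    [Obsolete("VacationDay history is being removed; delete this usage.", false)]
    public class VacationDayHistory
    {
        public DateTime ChangedOn; // example member
    }

Once the warning list is empty, delete the type itself; the application stays compilable at every step.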

What is the professional / preferred way to handle input validation?

I am new to C# and SQL, but over the last few years of learning both in college, a question has really begun to burn inside me. Here it is:
It seems to me that there are really two very generic ways to handle input validation (i.e. checking for required fields, and that data is in the correct ranges, etc.).
The first, the way traditionally shown, is at the user interface: once you have developed your UI and connected it to a database back end in some manner, you check for correct input there, such as blank text boxes, number ranges, or ensuring a radio button or check box is selected, etc.
The second, the way shown in database development, is to set check constraints on fields: no nulls allowed, unique values, and even ranges and required fields.
My dilemma is this: in modern languages like C# you can do general exception handling, and major-league fault tolerance is built into most databases like SQL Server, which commit data changes on an all-or-nothing basis. Details like this, and to this level, would be hard to program into anything but the simplest of programs.
So my question is: why not build all the requirements directly into the table at the database back end? Take advantage of the aforementioned fault tolerance, forget about programming if statements to ensure correct data is input, and instead just use a generic catch-all exception handler for when the data is not committed.
Perhaps that is how it is done; if so, I would really like to know for sure. If not, why? My preference is to avoid writing code whenever possible: less code, less debugging, and fewer problems when it comes to updating. So I would tend to go with the approach of letting the DB back end do the work. Is this the generally correct thing to do?
I know that general exception handling is considered "expensive" in terms of resources. But surely, once you get past 5 or 10 if statements handling different fields and their constraints, it must be more efficient code-wise to just use a general exception handler. It certainly seems easier to understand overall (at least the way I do it).
Thanks for your help with this.
OK, here is why you need it in both places.
First, the integrity of the data should be paramount, and data can be changed directly in database tables (deliberately, through a script to, say, update a million prices; by accident; or even by disgruntled or criminal employees trying to disrupt the database or steal from the company). Therefore it is reckless to avoid using constraints directly in the database, and doing so leads to bad data.
Now, at the user interface level, you want to prevent the user from wasting his time submitting bad data, and you want to prevent the servers and networks from wasting their time trying to process it, so you write checks at that level too. Plus, you don't want the data in an inconsistent state if you need to insert into several tables and aren't using a transaction (which you should be using, but I suspect that happens less often than it should). Plus, users hate it when the insert fails and tells them X is wrong; they fix X, and now Y is wrong, even though Y was wrong all along and the process just didn't get as far as Y before.
You do both.
Create constraints at the DB level, and check for those constraints at the client level as well.
The validation in the DB makes sure that no invalid data gets into your DB, no matter how the data is entered.
The validation on the client side improves the user experience.
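As a minimal sketch of "both places" (the table name, column rules, and SQL Server error number are illustrative assumptions, not anything from the question): the client check gives fast feedback, while matching NOT NULL/CHECK constraints in the database remain the last line of defense, surfacing as a SqlException if anything slips through:

    using System;
    using System.Data.SqlClient;

    public static class EmployeeSaver
    {
        public static void Save(string name, int age, string connectionString)
        {
            // Client-side validation: immediate feedback, no round trip.
            if (string.IsNullOrEmpty(name))
                throw new ArgumentException("Name is required.", "name");
            if (age < 16 || age > 120)
                throw new ArgumentOutOfRangeException("age", "Age must be 16-120.");

            // The table is assumed to carry matching constraints, e.g.
            // Name NOT NULL and CHECK (Age BETWEEN 16 AND 120), so bad data
            // is rejected even if it arrives via some other application.
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "INSERT INTO dbo.Employee (Name, Age) VALUES (@name, @age)", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.Parameters.AddWithValue("@age", age);
                conn.Open();
                try
                {
                    cmd.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    if (ex.Number == 547) // CHECK/FK constraint violation
                        throw new InvalidOperationException(
                            "The database rejected the record as invalid.", ex);
                    throw;
                }
            }
        }
    }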
You generally can't build all the checking logic into the database. Also, failing to validate user input sufficiently is a good way to open yourself up to attack.
One way to write less guard code in every method is Code Contracts, a product of Microsoft Research.
All input should be validated both client and server side. Always.
Also, with one giant catch it would be hard to tell which field was in error, so you would end up writing a lot of "which field exploded?" code at the other end.
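For reference, a Code Contracts guard looks roughly like this (a sketch with an invented class and method; note that runtime enforcement of the plain Contract.Requires(bool) overload needs the Code Contracts binary rewriter enabled on the project, otherwise the calls do nothing at runtime):

    using System.Diagnostics.Contracts;

    public class AccountService // hypothetical example
    {
        public void Withdraw(decimal amount, decimal balance)
        {
            // Preconditions replace hand-written guard ifs; the tooling
            // can also check them statically at build time.
            Contract.Requires(amount > 0);
            Contract.Requires(balance >= amount);
            // ... business logic ...
        }
    }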
While I generally advocate putting as much in the database as possible (which means you can have as high a degree of confidence in the "raw" data as possible), that isn't always achievable, even with the powerful constraints and triggers available in SQL.
In addition, there are high-level "integrity" rules which may change over time, and it is not realistic to always encode temporally-dynamic conditions in constraints, e.g. all HR records since 2007 must have a non-NULL birthdate, earlier ones are allowed to remain NULL, but no row may ever be set back to NULL.
My point is you can almost never put it all in the database.
Put in the things you can, and put the others at higher levels in the system. The database is a very important part of any system, but it isn't the only part. As long as its design helps it protect its perimeter, provide reliable service, and guarantee what it says it will guarantee, so that other parts of the system can rely on those assumptions, that's about the most you can ask for.
In addition to all the answers made here, such as that UI validation dramatically improves the user experience and can completely change the "image" of your app: validation on the DB ensures that correct data gets into the DB, but validation on the client has to catch incorrect data at the moment it is entered.
Consider the example of a standalone enterprise app. A client works from home; he fills in 20 invoices late at night on his notebook in Mongolia. The day after, he comes back and syncs them with his office SAP server. If the errors only come to light during that sync, you can imagine how awful the situation is.
That is just one example; there are plenty of others, I'm sure.
Good luck.
It's two years later and I have a decent amount of experience now. I am not going to accept my own answer as the right one, as many here have done a great job and I am very happy with their answers. But I want to add another important consideration that, looking back over my experience, has not been highlighted here. I also use Stack Overflow for reference as I progress, and I always find myself looking back over my questions and answers, which is another reason I wanted to add this: like a note to my future self.
While working at that company, I was asked to build an app that would do job abc. With this I also had to build part of the database. As I was leaving the company, I learned that they were writing another app which would use my database. Effectively my point is that, as many have pointed out, data is paramount and you don't know how it is going to be accessed once you're gone.
I have also learned that there are 3 places that data needs to be verified:
on the actual database as explained
in the server-side code-behind, which is not the same as the DB or client-side validation
on the client side
There is another worry: with the advent of new tech like tablets and smartphones, there is yet another place where validation has to be implemented, the same rules for a fourth time (unless it's a web app).
I later learned that prior to MVC we had CGI forms, which had something to do with handling data over the network (I humbly admit ignorance on the hardware side), but from what was explained to me there may even be a fifth place to do validation (although I am open to being totally wrong about that).
I think the next guru in computer science will make a name for himself if he can find a way to abstract all that verification and validation to one place so that such rules don't have to be altered in a bunch of places.
Worst case:
DB
Server side code
Client side code for web apps
What about if:
There may be a native client app (i.e. Windows, Linux, or Mac; at least 6 places now)
There may be various phone apps (Android, iPhone, and Windows Phone to name three; at least 9 now)
There may be some CGI or whatever
That totals 10+ places without much exaggeration, and there are other operating systems.
Even for a simple age range this is getting messy; but what if they bring out some new email format, or some other complicated validation, or you have to change a bunch of validation rules? Now you have to modify them in at least 3 or 4 places, which in itself is bad.
The major problem with that is that you are modifying a lot of code and infrastructure that has been invested in, tested, and usually proven to work and delivered to the market...
As the number of client sides grows, modifying well-tested code can't be a good thing. I think this is going to be a major headache in the future. I wonder if there will be a design pattern or best practice to resolve it; if anyone knows of one, please tell me.
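There is no full solution that I know of, but one mitigation (a sketch with invented names) is to define every rule exactly once, in a small shared library that the web tier, the service tier, and any .NET clients all reference, so at least the application-side copies of each if statement collapse into one:

    using System.Text.RegularExpressions;

    // Compiled into a shared assembly referenced by every .NET tier/client.
    public static class ValidationRules
    {
        public const int MinAge = 16;
        public const int MaxAge = 120;

        public static bool IsValidAge(int age)
        {
            return age >= MinAge && age <= MaxAge;
        }

        public static bool IsValidEmail(string email)
        {
            // Deliberately simple pattern; if the rule changes, it
            // changes here and nowhere else.
            return !string.IsNullOrEmpty(email)
                && Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
        }
    }

The database constraints still have to be kept in sync by hand, and non-.NET clients still need their own copies, but the count of places to edit drops considerably.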

Checking designer-generated code into TFS: issues

I just had a conversation with my manager about check-in/check-out policies on a project I'm currently working on. Basically, I tried to edit a file that was already checked out by another developer and I couldn't. I asked my manager why we couldn't edit the same class at the same time, and he gave this reason for turning that functionality off: "We had a lot of problems with developers editing the same Form (or anything visual done in the designer) and then checking it in. Merging the changes in the designer-generated code was a lot of hassle..."
As I'm writing this, I'm struggling to see what problem they were having; surely they were getting the latest code before trying to check something in?
Have any of you come across problems with editing the same Form (or something in the designer) as another developer and then checking into TFS? If so, how did your team get around the problem? Did you also turn off the ability for developers to work on the same class?
EDIT: The following post (found here) describes exactly the problem my manager was talking about. Does anyone know of a simpler way to resolve the issue than the ones in that post?
I would argue that the solution to your problem would be to establish best practices for source code modification.
Discourage people from going into UI code and arbitrarily jiggling the components around in the designer. Any reasonable UI modification should be easily mergeable. Your best bet is to try to educate people on the best way to merge in your source control system. Also, as helpful as the designer is, ignorance of what code is being automatically generated in the background will be significantly detrimental in the long term.
People who insist on locking checked-out files for the reasons you stated typically wait long periods of time to check their code in. Naturally, the more time passes, the more code gets modified, and the more difficult merging becomes for them. Checking in early, often, and incrementally requires people to think about their changes in stages, and for some coders this is a rather painful cultural/psychological adjustment.
I've just checked back through the histories of some of my .designer.cs files and I can't see any changes that would cause merge problems. There were no wholesale rearrangements of code, for example.
Another thing to consider is making sure that everyone does a "get latest" at regular intervals; then no individual merge/resolution is going to be that large, minimising the chances of anything going wrong.
It might also be worth investigating a 3rd party merge tool. There are plenty around.
Now it could be that the changes I've done are simple compared to the ones you've got so you should take my anecdotal data with a pinch of salt.
It can cause problems (in general) when a lot of people edit UI concurrently. The merge logic will do a fine job merging things, but in a lot of cases the UI is drawn according to the order in which things were added to the form, so your UI can get messed up quickly.
I don't know if I would use this as an excuse to enforce exclusive checkouts across the board, though. I might adopt a (non-programmatic) policy that says shared checkout for business logic, but exclusive checkout for UI changes.
I would couple that with a strong MVP, MVC, or MVVM approach, though, which should limit the number of people that have to touch the UI concurrently.
As others have alluded to, keep one of the seminal rules of SCM in mind: merge early and often, and your problems are reduced (along with that goes "always get latest before you start working on the code").
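To make the MVP suggestion above concrete, here is a bare-bones sketch (all names invented): the form implements a thin view interface and a presenter owns the logic, so day-to-day edits land in an ordinary class file that merges cleanly, not in the form or its .designer.cs:

    // The only thing the presenter knows about the form.
    public interface ICustomerView
    {
        string CustomerName { get; set; }
        void ShowError(string message);
    }

    // Plain class file: no designer ever rewrites it, so concurrent
    // edits here merge like any other code.
    public class CustomerPresenter
    {
        private readonly ICustomerView _view;

        public CustomerPresenter(ICustomerView view)
        {
            _view = view;
        }

        public void Save()
        {
            if (string.IsNullOrEmpty(_view.CustomerName))
            {
                _view.ShowError("Customer name is required.");
                return;
            }
            // ... persist the customer ...
        }
    }

The form itself then shrinks to implementing ICustomerView and forwarding button clicks to the presenter.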

How can I see who in my team is writing the most code, and who is writing the least, using TFS?

I am having a problem with one of my team members' output. He seems to be always 'busy', yet I am unable to see exactly what code he has done; he seems to deliver very little and to take a long time doing it. I'd like to investigate further using TFS, and was wondering if there is any functionality in TFS that shows what has been written by an individual, or similar?
Just to clarify: I am NOT spying, I am trying to resolve a situation, and this is only a starting point. I understand that quantity of code does not equate to the best programmer.
Thanks for any answers.
Your best programmer may in fact write less code than your worst programmer; really good programmers often write less code. Be careful about using this information to evaluate performance. Since you are using TFS, I assume you are also using the work item tracking; that is really a better way to evaluate performance than lines of code. See which check-ins cause the most problems, which fix the most defects, and how many rounds it takes for something to be truly fixed.
For me the simplest thing is to set up email alerts for check-ins. You get the check-in comment, some work item info (assuming they are associating/resolving on check-in), and the list of changed files, as they happen. It lets you see what part of the code that dev is in, and after a while you get a sense of when "it's quiet. Too quiet." because someone isn't checking in. It doesn't take the place of forensics on what he did all month, but it keeps me feeling connected. It also gives me intuitive feelings like "he's in the reports, so I'll be able to show those to the user earlier in the cycle", or "jeez, he's doing all the no-thinking things like typos in error messages, and not tackling his real hard stuff", or even "he's doing his pri 2 stuff while he has a large pile of pri 1". All of these enable a 30-second hallway conversation that delivers a course correction as close in time to the problem as possible.
See the following blog post I put together a while ago:
Getting Started with the TFS Data Warehouse
This one talks you through getting code churn for each area of your codebase, but it would be easy to extend it to get a breakdown by the team member who did the check-in.
But I agree with the caveat in your question: this is not a good way to check on a colleague's productivity. Instead, I would talk with them to raise your concerns.
I am away from TFS right now, but you can view a list of check-ins by user in Team Explorer, and for each of these you can see the files which were changed and look at the diffs.
You can get this from the TFS Cube, if you have it set up. There are a large number of dimensions within Code Churn. Some of this is also available in the TfsWarehouse relational database.
If you do have the cube set up, just point Excel at it and have some fun playing around. Keep in mind, though, that the numbers can point you in the wrong direction, so use discretion.

C# penalty for number of lines of code?

Are there limits or performance penalties on the amount of code inside my home.cs form?
I am writing a database application front end in C# in Visual Studio 2008. The way things are lining up, I am using a tab-page approach to changing the info shown to end users, instead of opening new forms.
Coming from VBA/MS Access, I remember that if you went over a certain number of lines of code, it would produce an error and refuse to compile. Will C# do this in Visual Studio 2008, or will I suffer a performance hit? I know code readability could be a problem because everything would be in one place, but I can also see that as an advantage in some situations.
It's not the lines of code in your .cs files that you need to be worried about with regards to performance - it's the number of controls on your form at runtime that might cause problems. If it's just a few controls on a few tabs, you will have no problems. If it's hundreds of controls on lots of tabs, you may have performance problems (not to mention usability problems - I personally hate tab controls with more than one row of tabs).
Also, I don't think tabs are appropriate if the purpose of the UI is more wizard-like where you want the user to interact with all of the tabs in succession. Tabs are meant for presenting sets of options to the user, without requiring them to see all the options at once.
Finally, if the purpose of each tab is significantly different, I find it's easier to encapsulate each bit of functionality as a separate form. With tabs, you could at least encapsulate each piece as a UserControl, and then have each tab on your form host one instance of it.
The only problem I would foresee is that in the future it's going to be very hard to maintain.
Try to break the logic of that main form up as much as possible into classes, so that when you need to add something you can actually do it without having a fit.
If you are using tabs, you can still create custom user controls that will hold the content that goes in the tabs. Make a control per tab, and then you can keep your code for the different tabs separate. There is a walk-through on MSDN here.
In response to your comment above about not showing the tabs, I would really rethink how you're approaching this. Why not simply have all of your user controls sitting on your main form (in a Panel if necessary), have them all set to Dock = DockStyle.Fill, and then change the Visible and Enabled properties based on which one you want to show? You may be making this harder on yourself than it needs to be.
More responses to comments: you may be looking for something like the CardLayout in Java. The source for the GNU Classpath version can be found here; it might give you some ideas on how to implement this.
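A minimal sketch of that panel-swapping idea (all names invented; in practice the pages would be your designer-built UserControls rather than blank ones):

    using System.Windows.Forms;

    public class HomeForm : Form
    {
        private readonly Panel _contentPanel = new Panel { Dock = DockStyle.Fill };
        private readonly UserControl[] _pages;

        public HomeForm()
        {
            Controls.Add(_contentPanel);

            // One UserControl per logical "tab"; substitute your own types.
            _pages = new UserControl[] { new UserControl(), new UserControl() };

            foreach (UserControl page in _pages)
            {
                page.Dock = DockStyle.Fill;
                page.Visible = false;
                _contentPanel.Controls.Add(page);
            }

            ShowPage(0);
        }

        // CardLayout-style switching: exactly one page visible at a time.
        private void ShowPage(int index)
        {
            for (int i = 0; i < _pages.Length; i++)
            {
                _pages[i].Visible = (i == index);
            }
        }
    }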
"I know code readability could be a problem because everything would be in one place, but I can also see that as an advantage in some situations."
In my experience, this attitude will ultimately leave anyone who has to maintain your code with quite a headache, as it's accepted practice to modularize your code so that the pieces that may change, or that serve distinctly different purposes, are separated from each other.
With that said, I don't think there is a limit imposed by VS on the length of your files, but I think you will run into some seriously frustrating performance degradation as your files become longer, especially while switching between design and code views.
I would urge you to save your future self and his/her sanity and break your code up logically into separate files. You'll thank yourself later!
It shouldn't be a problem.
Just keep good coding practices in mind and modularise your code for readability and maintainability.
On the other hand, if you put too many controls on your form, then it will probably take longer to load. Factor that into your design if you want a snappy interface.
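On that note, one small, well-known mitigation when a form hosts many controls (a sketch; the panel and the 50-text-box loop are invented for illustration) is to suspend layout while adding them, so the form performs one layout pass instead of one per control:

    using System.Windows.Forms;

    public static class FormHelpers
    {
        // Adds many controls with a single layout pass at the end.
        public static void PopulatePanel(Panel panel)
        {
            panel.SuspendLayout();
            try
            {
                for (int i = 0; i < 50; i++)
                {
                    TextBox box = new TextBox();
                    box.Top = i * 26;
                    box.Width = 200;
                    panel.Controls.Add(box);
                }
            }
            finally
            {
                panel.ResumeLayout(true); // perform the pending layout once
            }
        }
    }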
It sounds horrible, but I don't see any reason why it would be a problem.
I inherited a form at my new job that had over 30,000 lines. It is completely cancerous. Please think before you code, and modularize!
