Unity ScriptableObject references lost when using Git - C#

I'm currently working on a project where we use ScriptableObjects to store all of the game's configs. The problem comes when using Git, since we sometimes lose references stored in those SOs.
For example, I have a config called A with a GameObject variable. On my branch "Branch1" the variable is null, and on my branch "Branch2" it is assigned to a prefab. I'm working on Branch1 and I switch to Branch2, but the SO is not updated and the variable is still null, even though on Branch2 that variable actually has a value.
We have tried reimporting the asset, but that does not solve anything. The only thing that seems to work is a weird process: if we make a change to the SO after switching to Branch2, save, and then discard the changes in Git (we use Sourcetree), the SO config is then correct for the current branch.
It's weird behaviour, with an even weirder workaround, and it is costing us a lot of time and causing several errors.
Does anyone have an idea of what is happening and how to solve it?
We are using Unity 2019.1.14f1, and we use Odin for a custom inspector on the SO (maybe that is relevant).

For anyone stumbling upon this question:
Save the project via File -> Save Project, and you will see your SO show up as a change to commit in your preferred source control software.
Kurt-Dekker (thread linked below) goes into detail as to why plain "Save" and "Save As" don't work but "Save Project" does.
First answer by Kurt-Dekker in this thread: https://forum.unity.com/threads/scriptableobjects-are-not-updating-when-changed-on-the-filesystem-via-git.834109/
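If you want to trigger the same thing from code (for example, just before switching branches), here is a minimal editor-script sketch. The class name and menu path are made up, and this only covers the asset-saving side of Save Project plus a forced refresh; only standard UnityEditor APIs are assumed. Place it in an Editor folder:

using UnityEditor;

public static class AssetSyncHelper
{
    // Hypothetical helper: writes dirty in-memory assets (including SOs)
    // to disk, then re-imports files that changed on disk, e.g. after a
    // branch switch. Menu path and class name are placeholders.
    [MenuItem("Tools/Flush And Reload Assets")]
    public static void FlushAndReload()
    {
        AssetDatabase.SaveAssets();                            // flush in-memory asset state to disk
        AssetDatabase.Refresh(ImportAssetOptions.ForceUpdate); // re-import files changed on disk
    }
}

Note that even a forced refresh may not update objects Unity already holds in memory, which is exactly the behaviour described in the question, so saving before switching branches is the safer direction.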

Related

Recovering inspector values from Unity build

I made the stupid mistake of moving a C# script out of the project and back in. This, of course, reset all of the important inspector values back to the defaults defined in the script.
All I need is to see those values again so I can re-enter them. I attempted decompiling the Assembly-CSharp.dll from my last build using dotPeek. While this did recover the correct classes and their fields, the field values are not there. Where in a Unity build are these values stored, and is it possible to decompile them from there?
Thanks in advance!
As far as I know, they are stored in the scene file (.unity) or, if it's a prefab, in the prefab file (.prefab) (and if it's a prefab instance in a scene, they're stored in the prefab file with a list of modifications in the scene file). You might have success finding some values in there, but they are serialized, and you can only really read them with the Asset Serialization Mode set to "Force Text". It might also be that they lost their values if you opened Unity between moving the script out and moving it back in.
Edit:
I missed the part about wanting to read them from the build. I don't think that's possible, as they are normally serialized (unless they are in StreamingAssets). Scene files (which I think would contain the data) are also just data files, not scripts that get compiled, so I think the values end up in one of the serialized data files in the build.
Also: you don't happen to have any kind of version control? Because then you could roll back to an old commit that contains the data or, if that's not possible, look at just the specific file in question.

Recovering code deleted by SQL Server Data Tools (Visual Studio)

I'm making a small package in SSIS (Integration Services), and in my control flow I have a couple of script tasks and some data flows reading data from XML files into the database.
I made some edits to the C# code in a script task in the built-in Visual Studio editor and hit save. The star by the file name disappeared, indicating the file was saved. I closed the Visual Studio editor, saved the package, right-clicked the script task and chose "Execute task". It ran without errors, but the XML files it was supposed to create never appeared, so I opened the script in the Visual Studio editor again and, to my horror, only the default script was there (i.e. only the Main method, containing nothing but the Dts.TaskResult = (int)ScriptResults.Success; statement)!
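For reference, the "default script" it reverted to was just the stock template, roughly this (from memory):

public void Main()
{
    // TODO: Add your code here
    Dts.TaskResult = (int)ScriptResults.Success;
}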
I have been unable to find the code I had in that script task, and when I open the .dtsx file in a text editor my code is gone! The code in the other script task is there though.
I was paying careful attention, so there's no way I mistakenly deleted everything before closing the editor and saving the package.
So my question is: has anyone else encountered this totally insane bug, and is there a way to recover the code? Or do I just have to bite the bullet and recreate it from memory?
We rarely use any significant amount of C# code in these SSIS jobs, so we don't have an integrated VCS. I have been copying the code to a new file and manually adding it to my own Git repo just to be safe, but I hadn't done that yet with this particular code.
And I just have to restate my frustration with such an amazingly bad bug in Visual Studio...
I am having a similar issue, and it is reproducible. The code exists in a script task inside a Foreach container. You can go in and view the code to confirm it exists, as well as the variables set for read and read/write access. Next, go to the Foreach (parent) container and change a variable mapping. Go back to the script task and the code is gone. I have made it as simple as using no variables inside the script task and only putting in some comments, and it still happens: the task reverts to the default code.

Deploying branches and maintaining configs in VS2010 / VSOnline

I'm trying to get one of our internal C# ClickOnce applications into VSOnline for source control, to allow access for an external developer.
I think I've got it set up and working in Source Control Explorer, but am having trouble working out how to actually use the setup day to day.
I've got some Git experience but zero TFS experience; I went with the TFS option as I thought developers are more likely to be familiar with it than with Git.
What I'm trying to achieve is three branches (Main/Trunk, Dev and Release), and to be able to deploy at least Release and Main. Release is for external clients, Main for internal clients.
At the moment my Source Control Explorer looks like this:
DefaultCollection
-->Name of project
---->(Branch icon) Dev (created as a Branch from Main)
---->(Branch icon) Main
---->(Branch icon) Release (created as a Branch from Main)
Two things:
In terms of use I'm not really sure how to swap between the branches for coding / making changes? Do I just open the solution file for the branch I want to work on then save all changes as I go, then commit that as a changeset? Or is it a matter of manually checking the file out, working on it, then checking it back in again?
Given it's a ClickOnce app, each branch is deployed to a different IIS site, meaning different app identities, paths and settings. Am I right in using branches for this, or is there a better way? I'm worried about someone committing the wrong file and causing a mandatory uninstall/reinstall of the app.
Any pointers / docco greatly appreciated; just note I'm using VS2010.
Thanks,
Liam
How do I swap between branches
If you're used to Git, the 'heavyweight' branching in TFVC can be a bit confusing. There is no real "switching between branches" as you've encountered: you map each branch to a local folder, and by opening the files there you're "working on that branch".
As Lee points out, you can create separate workspaces for each branch, which will isolate the work areas for each. If you're using a Local Workspace, each workspace gets its own hidden "$tf" folder, the TFVC equivalent of the ".git" folder.
There are a couple of documents on MSDN that explain this in a little more detail:
Set up TFVC
Create one or more workspaces
Optimize your workspaces
How do I check in
A changeset in TFVC is the equivalent of a commit in Git: it's a logical set of changed files that is committed/pushed as a whole, or not at all. But just as in Git, you can commit all the changes in your local work area at once, or you can exclude certain changes from the first commit and put them in a second.
In TFVC you'd normally try to check in a logical set of files that fixes a bug, achieves some goal, etc. Though it's still possible to check out/check in files individually, the chances are much higher that you'll leave the sources in the main repository in an inconsistent state that way.
See:
What is a Changeset
Check in your work
Shelving your work
As for your second question:
Depending on how far you want to go, you could set up Team Build to build the application and pick up the configuration from a specific location during the build. That way you wouldn't have to store the configuration for your production environment alongside the development settings. Configuration files can contain sensitive information, so you might not want them in source control, except for the development versions.
You can also store the config files in a special folder in each branch and make sure that each time you merge, they're updated accordingly.
And you can, as Lee mentions, look into Config Transformations, which apply a transformation to your config file during the build. That way you can have multiple config files stored in each branch, and your "Configuration" selection in Visual Studio defines what the final config looks like.
See:
Tricks with app.config files and click once
The _PublishedApplication Nuget package
SlowCheetah
In terms of use I'm not really sure how to swap between the branches for coding / making changes?
I recommend creating separate workspaces for each branch. This way you won't accidentally check in release code when you are trying to check in dev code. Also, when you want to switch which branch you are working on, you switch workspaces. This should keep things "cleaner" and easier to work with.
Do I just open the solution file for the branch I want to work on then save all changes as I go, then commit that as a changeset? Or is it a matter of manually checking the file out, working on it, then checking it back in again?
You shouldn't have to check files out manually. If I remember correctly, TFS defaults to automatically checking out a file when you start to make changes. You can check code in in chunks as big as you want, but make sure that if you are checking in changes to ClassA.cs that depend on changes in ClassB.cs, you check ClassB.cs in as well. You don't want to leave the source code in a broken state for the other developers.
If you start working on something and have to suspend it for another task that has risen in importance, shelve your work instead of letting your workspace get cluttered with half-done work that makes check-ins difficult to manage.
Given it's a ClickOnce app; each branch is deployed to a different IIS site, meaning diff app identies, paths and settings. Am I right in using branches for this or is there a better way?
I'd look into using web.config transformations for this. You'll still want multiple branches, but to separate tested/completed/in-development code from each other.

Visual Studio not updating project immediately

I have a very odd situation whereby, when I update my local copy of the software with changes committed to the repository by my colleagues, Visual Studio doesn't recognise them immediately and reload. The result (and this is very odd) is that most of the time I save my changes without the projects having been reloaded, and so I overwrite my colleagues' changes. It is so embarrassing that sometimes I am asked why I changed a piece of code when in reality I hadn't.
Another thing: when project-level changes are checked in, like someone adding a new class or form, and I continue to work in Visual Studio, it takes at least 5 to 10 minutes before I get the warning that there were changes and am asked to reload the project.
I think there should be a setting somewhere in Visual Studio to trigger an automatic reload, but I can't find it.
This affects me and one other person so far, but my case is the strangest, as it can take up to 30 minutes before a project starts to reload.
Any ideas welcome.
These are my settings:
If you are working with source control, you will need to synchronise your local workspace with the server ("get" the latest code) before any changes by your colleagues are copied to your PC.
If you don't "get" the latest code before you make changes, then you may have to merge your changes with somebody else's, which can be a difficult, time-consuming or even dangerous process, especially if you use the default Visual Studio automatic merge, which often does the wrong thing and produces essentially corrupt code (making it look like you deleted your colleague's work, just as you are describing).
The best way to work with source control is the "little and often" approach:
Get the latest source code before you start any new work, so that your PC is as up to date as possible.
It's good practice to "get" the latest code frequently (e.g. I do it first thing every morning) so that any merge conflicts are flagged up and dealt with as early as possible. The longer you wait before merging, the worse the merge process tends to get. (Caveat: check with your build system that the current version of the code on the server is working before you get it; you don't want broken code on your PC, as it may stop you being able to work at all.)
Arrange your work as many small incremental steps that can be safely checked in as each is completed (rather than working for three months on hundreds of files and then dumping it all on the system as one massive change).
When you are ready to check in, get the latest code, rebuild, and re-test your changes to be sure they still work when integrated with the latest program code. Only check in if everything works well.
Also be aware that when you try to edit a file, the source control provider may automatically "get" the latest version of that file for you. This could cause Visual Studio to tell you it has reloaded the file, and perhaps explains why it sometimes takes a while to "update": nothing happens until you start editing a file that was recently changed by someone else. If this is the case, then you have not actually updated the entire source tree, only that one file. You really need to get all the latest changes; if you don't, you may find the code uncompilable or, even worse, that it compiles but behaves unpredictably because only part of it is up to date.
Lastly, a very good practice when checking in is to go through the list of files you are checking in and diff each one against the latest server code to see what you have changed. This may sound laborious, but it confers several benefits:
It reminds you what you did, which can sometimes be helpful for filling in the check-in comment to clearly describe all your changes and make sure you don't miss an important note.
You will easily spot anything that has been screwed up in the merge process - there will be chunks of code that appear to be created or deleted that you know you didn't touch. So you'll be able to discover and fix these problems before you check in rather than annoying your colleagues by "deleting" their changes.
I find this very useful for finding temporary debugging code that I have forgotten to take out before I check in.
Sometimes you may even do a double-take on a bit of code you are about to check in and think "huh? why did I do that?". And then you might decide to re-examine and possibly even rewrite the code you thought was good to go.
Final note: the options you show in your edit only relate to changes made to the files on your PC by another program on your PC. If another user makes a change and checks it in to source control, these options have no effect. It is only when your source control system copies those changes to your PC's hard drive that Visual Studio might react to them (depending on how well your source control system is integrated with VS).
If you're sure the problem is Visual Studio (e.g. the file really has changed on disk but you don't see the change in Visual Studio):
Make sure that the "Detect when file is changed" option is checked:
Tools > Options > Environment > Documents > Detect when file is changed outside the environment
Since you sometimes get an alert to reload your project due to external changes, you already have the settings required to detect file changes in Visual Studio.
However, a reload of the project/solution will only be triggered if the .csproj (or .vbproj) or .sln file itself was changed.
By the way, are you using a version control system at all? It sounds like you are just sharing the solution and editing it simultaneously.

When editing Resources.resx file, Resources.Designer.cs fails to update because TFS doesn't check it out

I'm using TFS source control.
When I add a new resource key to my resource file - Resources.resx - and hit save, TFS checks out Resources.resx but doesn't check out Resources.Designer.cs. This causes the update to Resources.Designer.cs to fail with error:
The command you are attempting cannot be completed because the file 'Resources.Designer.cs' that must be modified cannot be changed. If the file is under source control, you may want to check it out; if the file is read-only on disk, you may want to change its attributes.
The error is correct in that the file IS read-only and IS NOT checked out. I don't want to have to manually check out the designer file every time I add or edit a resource key. Does anybody know of a solution or workaround for this issue?
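(For context on why the designer file must change: adding a key to the .resx causes the single-file generator to rewrite Resources.Designer.cs with a matching strongly-typed property, along these lines, where "Greeting" is a made-up key name:

public static string Greeting {
    get {
        return ResourceManager.GetString("Greeting", resourceCulture);
    }
}

That regeneration is what fails when the file is read-only and not checked out.)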
Note that I have TFS set up to "check out on save" as opposed to "check out on edit". This is deliberate, to reduce the number of unedited checkouts.
EDIT:
This happens with other file types too. For example, I am using RazorGenerator to create compiled MVC views, and the same problem occurs if I try to edit the .cshtml without checking out the .generated.cs first.
UPDATE:
This issue occurs on all (as far as I've seen) files that have an autogenerated code-behind: .resx, .edmx, .aspx, .cshtml (when using RazorGenerator for compiled views), etc. I've decided that it's not worth the pain just to keep "on edit: do nothing" set, and have reset it to "on edit: check out automatically". Thanks to everybody for your input. No thanks to the TFS team for this fail.
Well, I didn't think this counted as an answer, so I wrote it as a comment.
"Check out on save" is only triggered when you save the file; it does not trigger when the file is autogenerated (autogeneration is not a save, so it doesn't trigger the checkout, as the file is edited by the custom tool assigned to the .resx).
I'm afraid you will not get a proper answer (one which solves your problem) beyond "it is by design", but it may be worth opening a case on Connect and asking for this behaviour to be changed.
Why do you want to reduce the number of unedited checkouts? If a file is checked in without changes, TFS notices, and it will not show in the check-in history of the file.
You can test this yourself by checking out a single file and immediately checking it in. TFS will tell you there were no changes, and the checkout is undone.
So maybe consider setting it back to "check out on edit"? As mentioned in the other answer, this will solve your problems...
I think this is the problem:
Note that I have TFS set up to "check out on save" as opposed to "check out on edit". This is deliberate, to reduce the number of unedited checkouts.
To avoid the above problem, revert to the default settings. Then download the TFS Power Tools.
Then use this command ("uu" stands for "undo unchanged") to revert changes which are checked out but contain no edits:
tfpt uu /noget
Update: after changing the above setting, the issue no longer occurs. For details, refer to the discussion in the comments below.
I have to work with TFS at work. I've seen too many miracles, and we've spent a lot of time figuring out where the problems are. TFS is my company's choice, but it's not my favourite.
TFS (especially when the server is slow and you have regular network problems) is a disaster for me as a developer. VS looks for modifications only in the files in the solution and, as you can see, not even all of those. When you use third-party tools (Fitnesse for integration tests, or custom build steps) that need to modify files outside VS, you'll probably get the same error you are seeing.
But we found a solution. On my machine I use Git, and we've installed git-tfs.
All you need to remember is three magic commands:
git tfs fetch
git merge remotes/tfs/default
git tfs ct
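(Roughly: "git tfs fetch" pulls the latest TFS changesets into the remotes/tfs/default branch, the merge integrates them into your local branch, and "ct" is short for checkintool, which opens the TFS check-in dialog for your local commits. Check the git-tfs documentation for the exact behaviour of your version.)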
That's it. You will never break company rules, and at the same time you will be free of this kind of weird problem. We've forgotten about that nightmare.
EDIT: local workspaces in the upcoming TFS 2012 will solve several issues, and TFS 2012 will move closer to SVN, but it will not be a DVCS. MS is investing in integration with external DVCSs; please welcome Git-TF.
