Check if all files in TFS folder are "latest" - C#

I'd like to check whether all files in a specific TFS folder are "latest". If they are not (see picture), a get-latest will be executed. If they are all "latest", no get-latest will be performed.
The reason I'd like to avoid always executing a get-latest is that it takes a lot of time.
So what I need to know is whether all files are latest. I tried to retrieve this information programmatically; getting the latest-state worked well, but it took even more time than a get-latest itself.
Is there any way to get information about the latest-state of files hosted in TFS without performing time-consuming operations?

If you want to know whether you're up-to-date without downloading files, then you can do a "preview get":
tf get /version:T /preview
This will tell you whether you're up-to-date or not, and if not, what you would need to download to become up-to-date.
But this strategy will not provide a benefit for speeding up your updates; it will be (net) slower than if you'd just done a get latest in the first place.
Get Latest (with preview) will ask the server what's necessary to do a get: the server will compute the differences between your workstation version and the server version and provide you with the list of files that you need to download.
Now if you were to turn around and do a Get Latest, then that will... ask the server what's necessary to do a get. It will recompute that list, and give you the list of files that you need to download. Now you'll actually download them and "complete" the Get by telling the server that you've updated your local files.
You've added an unnecessary round-trip and server computation to the mix.
Doing a "Get Latest" is the fastest possible route to updating your latest version information, whether you're already up-to-date or not.


Prevent other processes and users from accessing a file

I have a Windows service written in C# running on a machine, and it creates and uses a number of files. Is there a way to prevent a user on the machine, administrators included, from messing with these files (moving, editing, renaming, deleting) from the code?
I know that StreamWriter can achieve this, but I don't want to keep the files open all the time when I don't actually need to access the data in them, and I can't seem to find any other way.
EDIT: Let me rephrase the question based on the comments below. Is there a way to set up ACLs so that only my service can access the files? I would also accept it if only services could access the files (I have seen mention of an "All Services" security group in the Microsoft Docs, but I can't seem to actually find it on the system or in .NET).
You can do it by changing access privileges, BUT I strongly suggest simply keeping the files open (just be careful to flush the stream after each batch write).
In the first part I address your question directly ("How to prevent...") and in the second part I outline a different approach (make your application resilient: keep a backup).
How to prevent...
Assuming that you're running on Windows, to prevent other users from messing with the files you should:
Set the hidden attribute. Hidden files are not shown by default and many users won't even see them. If you can do it at the directory level, even better.
Change the ACL to deny Full Control to the Users and Administrators groups. Better if you cherry-pick and just leave Read permissions. Windows picks the most restrictive policy, even when a user belongs to two groups, so this will effectively stop everyone from writing to that file (if you also deny Read permissions they won't even be able to see its content, but see later). A rough sketch follows this list.
Create a special group (with the required permissions, and only those) with one single user. Be sure that user isn't automatically added to the Users group.
Change your application to impersonate that user when writing those files. If you left the Read permissions in place, the code for reading isn't affected.
Don't forget to check with different versions and editions of Windows (the HomeUsers group keeps bouncing around in my mind). If your application is a Windows Service then things may be slightly easier; see eryksun's comment.
You can experiment with all these things simply using Windows Explorer. Just find the right balance, but don't forget that each single installation is a different world and only God knows what the environment is (but he doesn't know why).
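To illustrate the ACL step in code, here is a minimal sketch using the managed security classes (the path is a placeholder, and this is a starting point rather than a hardened setup):

using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

string path = @"C:\ProgramData\MyService\data.bin"; // placeholder file

// Add a rule denying write and delete to the built-in Users group.
// An explicit deny wins over any allow entry when Windows evaluates the ACL.
FileSecurity security = File.GetAccessControl(path);
var users = new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null);
security.AddAccessRule(new FileSystemAccessRule(
    users,
    FileSystemRights.Write | FileSystemRights.Delete,
    AccessControlType.Deny));
File.SetAccessControl(path, security);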
A few obvious drawbacks:
An administrator can ALWAYS do what he wants, so they may find those files and revert the permissions. I think (but I'm not sure) that the System Installer has some special privileges to prevent this (and I can't imagine how it would be done).
Installation is way more complicated (and you will need an installer if you don't have one). You may do the setup the first time the application is executed, but then you will need administrative privileges (just once, but that's probably worse).
Your code is more complex.
More setup means more things that may go wrong; balance this against the effort required of your technical support team.
Updates (and the tech support job) will be more complicated.
Users with certain privileges won't be affected (see another comment), but this is really a good thing and you shouldn't ever try to circumvent it.
Backup is the key!
Don't forget that if they really want to break your application then they will just delete the application directory...
I think, though I don't know your specific use case, that maybe you're approaching the problem from the wrong angle. If what you want is to prevent the user from corrupting your data files (intentionally or not), then what you need is a BACKUP. Save a copy in a different location each time you write them, mark it as hidden, and live happy. If they're not too big you may even save the content directly inside the Windows Registry. For encrypted/hashed/checksummed files your application can easily detect when they're broken or missing: just restore the backup and you're done.
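A minimal sketch of that backup idea (file names and locations are placeholders): copy on every save, keep a checksum, and restore when the original fails verification:

using System;
using System.IO;
using System.Security.Cryptography;

static string HashOf(string path)
{
    using (var sha = SHA256.Create())
    using (var stream = File.OpenRead(path))
        return Convert.ToBase64String(sha.ComputeHash(stream));
}

static void SaveBackup(string dataFile, string backupFile)
{
    File.Copy(dataFile, backupFile, true);
    File.SetAttributes(backupFile, FileAttributes.Hidden);
    File.WriteAllText(backupFile + ".sha", HashOf(dataFile)); // checksum for later verification
}

static void RestoreIfBroken(string dataFile, string backupFile)
{
    string expected = File.ReadAllText(backupFile + ".sha");
    if (!File.Exists(dataFile) || HashOf(dataFile) != expected)
        File.Copy(backupFile, dataFile, true); // data file is missing or corrupt: restore
}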
I don't want to keep the files open all the time
But keeping them open is a good way that closely follows your intent and requirements.
As long as it's not about hundreds of files or more, this seems the best option.
The other way is to set the security properties (ACL) but that is messy and requires a higher privilege.
Excluding the Admin is not totally possible and you should not really want that. Avoiding accidental delete or rename is doable, total control is not.
Two other options are:
Set permissions on the locations where the files are so that no one else can access them.
If all of the files in question will be created by your application, you could check the options in CreateFile, where you can set the sharing options to 0x00000000 to "Prevent other processes from opening a file or device if they request delete, read, or write access."
If you want to use CreateFile, you will have to P/Invoke it.
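A minimal P/Invoke sketch of that option (note that in purely managed code, new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.None) requests the same zero share mode; the path here is a placeholder):

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

const uint GENERIC_READ = 0x80000000;
const uint GENERIC_WRITE = 0x40000000;
const uint OPEN_ALWAYS = 4; // open the file, creating it if it doesn't exist

[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern SafeFileHandle CreateFile(
    string fileName, uint desiredAccess, uint shareMode,
    IntPtr securityAttributes, uint creationDisposition,
    uint flagsAndAttributes, IntPtr templateFile);

// shareMode = 0: no other process can open the file while this handle is held.
SafeFileHandle handle = CreateFile(@"C:\MyService\data.bin", // placeholder path
    GENERIC_READ | GENERIC_WRITE, 0, IntPtr.Zero, OPEN_ALWAYS, 0, IntPtr.Zero);
if (handle.IsInvalid)
    throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());

using (var stream = new FileStream(handle, FileAccess.ReadWrite))
{
    // read/write here; the exclusive lock lasts until the handle is closed
}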

Microsoft Sync Framework Change Detection Only

I am working with the Microsoft Sync Framework version 2.1 in C# and am trying to do what I believe to be a simple task. I would like to use the framework to detect changes to a folder full of files since the last run. I do not need to synchronize/replicate the folder to any destination, I merely want to know which files were added, modified, and removed since the last run. I believe the metadata file associated with the FileSyncProvider keeps track of this data, but I am having a hard time figuring out how to collect the changes, as all examples I have found show how to synchronize between two directories.
Is there a way to get a list of changes? Any guidance is appreciated.
Best,
Brett
You don't have to explicitly read the metadata; the API supports explicit change detection. Have a look at the sample here.
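For reference, a minimal sketch of explicit change detection with FileSyncProvider (the folder path is a placeholder, and the replica GUID should be persisted between runs so the metadata is reused):

using System;
using Microsoft.Synchronization.Files;

Guid replicaId = Guid.NewGuid(); // persist this; it identifies the folder replica
var filter = new FileSyncScopeFilter();

using (var provider = new FileSyncProvider(
    replicaId, @"C:\WatchedFolder", filter, FileSyncOptions.ExplicitDetectChanges))
{
    // Updates the metadata file with adds, modifications and deletes since the last run.
    provider.DetectChanges();
}

To actually enumerate what changed, you would still drive a sync session and listen to the provider's ApplyingChange/AppliedChange events, as the linked sample does.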

Hash of source codes at compile time in C#

Having a server that other devs use, I currently log the version of the DLL they use. I do that by having the client use Reflection to retrieve its version:
Assembly.GetEntryAssembly().GetName().Version.ToString();
It's nice, but since it comes from devs who use TFS and do the builds themselves, I cannot tell whether they have the latest version of the sources. Is there a trick, like a compilation tag, that would easily produce a hash of the originating source code?
Note: I have tried sending the MD5 of the DLL (using assembly.Location), but it is useless since the hash value changes between two compilations (I suppose there is a compilation timestamp inside the generated DLL).
This is more a collaboration issue than a coding one.
The moment you find out that the version is an old one, notify them about it.
If the real version is not an old one, that means the developers did not increment the version ID before making the build, which is a mistake.
In other words, organize it among people, and don't rely on these kinds of tools (if any exist). You are trying to create a complicated tool to help you avoid mistakes, but humans will find a way to make them again.
So it's better to build a solid working arrangement among yourselves, IMO.
Create a tool that runs on the pre-build event to hash (or take the last-write time of) your code files.
Write the result to a .cs file or an embedded resource file.
The result file must be excluded from the hashing step above.
To keep the skip-build (up-to-date) feature working, compare against the existing file before writing it.
And if you have the file open in the IDE, you will get a "changed from outside" prompt when you build.
There seems to be no easy way around that. A rough sketch of such a tool is below.
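A minimal sketch of such a tool (the generated file name SourceHash.cs is an arbitrary choice):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class PreBuildHasher
{
    static void Main(string[] args)
    {
        string projectDir = args[0];
        string outputFile = Path.Combine(projectDir, "SourceHash.cs");

        using (var sha = SHA256.Create())
        {
            // Hash every .cs file except the generated one, in a stable order.
            var combined = new StringBuilder();
            foreach (string file in Directory
                .GetFiles(projectDir, "*.cs", SearchOption.AllDirectories)
                .Where(f => !f.EndsWith("SourceHash.cs", StringComparison.OrdinalIgnoreCase))
                .OrderBy(f => f, StringComparer.OrdinalIgnoreCase))
            {
                combined.AppendLine(Convert.ToBase64String(
                    sha.ComputeHash(File.ReadAllBytes(file))));
            }

            string hash = Convert.ToBase64String(
                sha.ComputeHash(Encoding.UTF8.GetBytes(combined.ToString())));
            string code = "static class SourceHash { public const string Value = \""
                + hash + "\"; }";

            // Compare before writing so unchanged sources don't dirty the project
            // and defeat the up-to-date check.
            if (!File.Exists(outputFile) || File.ReadAllText(outputFile) != code)
                File.WriteAllText(outputFile, code);
        }
    }
}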

Seeking Advice on Design for Multiple Users working in the same file

I am at the earliest stage of writing a desktop application for use by multiple users. I am looking for advice on what is the best way to approach this.
The Spec
I will persist my Model in a file which would often be used on a mapped network drive. (It is for the design of roadways and other linear features like railways and streams.)
The various end users need to be able to connect to and edit the file simultaneously. For example, Billy Bob is working on the road named US321 while Rupert is working on I40. The models for each road live in the same file. End users can "claim" any road name, in which only the claimant can edit the given road. Rupert can't edit US321 while Billy Bob has it claimed, but Rupert can read US321 for reference. Once a user is finished editing the road data, he can release the claim and someone else could edit it.
Limitations on Serialization?
My understanding of Serialization is quite limited (see my profile). But it looks to me like there is a one-to-one correlation between objects and serialization files. So if I use serialization to implement this, it would not be possible to claim just a part of it nor would it be possible to update only a part of it. (Is this correct? If not, then I can use Serialization, right?)
The Solution I am Considering
I am considering using SQL Server Express, and I am interested in the community's warnings, corrections, or affirmations on this.
The end users would not have to know that I am using SQL Server Express in the background. (I would even change the file extension to something suitable to my app.) I would load roads into a list, and each road would be "claimable". Claiming a road would mark it in the database for the other instances of the app to react to accordingly, kind of like it is a shared MS Excel file that multiple people can edit simultaneously, but (in the analogy to Excel) being able to lock individual worksheets.
[Edit] See Micah Armantrout's very informative response, below. So now I am wondering about using Microsoft Access as the intermediating db app.
[Edit]
Conclusion
Thanks to everyone for their helpful answers and comments. Micah's answer was very helpful since I did not realize I would be limited to the file being controlled by only one server. Although it makes perfect sense now, I had not anticipated it, and if I had gone that route, I would have run aground on it after many hours of working in that direction.
When I first read urbadave's idea, I dismissed it as something I had already considered and not liked. But after thinking it over, it is clearly the simplest approach. I just use a directory as if it were a file, but with user transparency down to my top-level sub-objects. Still, there clearly is an appeal to having my whole model encapsulated in a single file.
So this is what I have decided to do: start by just writing to a directory, as urbadave suggests. Then later test out putting it all in a zip archive and using the ZipPackage class to pluck out and insert the individual serialized files (or XML files -- another decision I have to make some day). A rough sketch of the ZipPackage idea is below.
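For what it's worth, a minimal sketch of that idea (System.IO.Packaging lives in WindowsBase.dll; the file and part names are placeholders), replacing one road's data while leaving the rest of the package untouched:

using System;
using System.IO;
using System.IO.Packaging;

string modelFile = @"C:\Projects\Corridor.roads"; // one file holding the whole model
Uri partUri = PackUriHelper.CreatePartUri(new Uri("roads/US321.xml", UriKind.Relative));

using (Package package = Package.Open(modelFile, FileMode.OpenOrCreate))
{
    // Replace (or create) just this road's part.
    if (package.PartExists(partUri))
        package.DeletePart(partUri);
    PackagePart part = package.CreatePart(partUri, "application/xml");
    using (Stream stream = part.GetStream(FileMode.Create))
    {
        // serialize the US321 road object into 'stream' here
    }
}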
Paul
SQL Server will work for what you are looking for, but if you're going to have multiple users you need a machine set up as a server. It will not do you any good to have SQL Server Express installed on each machine. The server might be one of the users' machines or an actual server running SQL Server Express; either way, you are going to need to set it up to be accessible from outside that machine. To do this, follow this tutorial.
If you are using anything past Windows XP SP2, you will also need to open up the ports in the firewall; follow these instructions. This is also covered in the link below.
http://blogs.msdn.com/b/sqlexpress/archive/2005/05/05/415084.aspx
As far as sharing data goes, I mean seeing other people's work. If you do not want to install SQL Server on a server, you can use MS Access. I would refer you to an article on which one to use when:
http://www.techrepublic.com/article/should-you-use-sql-server-express-edition-or-microsoft-access-for-your-small-business-applications/6140859
While I have access to a nice database at work, most of my personal programming does not use a database. One of the tricks I've used in the past is that file extensions are meant to carry meaning. In your case, you can exploit file extensions to indicate claims and control writing to the master file.
You're right, you would want to serialize each road object to its own file. The Master File would be the serialization of a collection object that holds all of these individual road objects.
The users select and open these road files. Before opening a file, the user's app renames it, adding an extension (perhaps the user's id). This way, you can use directory scans to find claimed and unclaimed files.
The master file is only written to when a user releases their claim on the road they are working on. The user's app opens all the road files, assembles a master object from the road objects, and then serializes this object into the master file. When finished, the user's app releases the claim on the road file by renaming it back.
Before writing to the master file, the user's app renames it, indicating it is about to be written to. If another user's app needs to write, it can check whether the file is renamed and wait for the file name to be restored to a writable name.
This is a sketch of how I would attack this spec. Good luck.
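A minimal code sketch of the claim-by-rename idea (the extension convention is made up):

using System;
using System.IO;

class RoadClaims
{
    // Claim a road file by renaming it with the user's id. A rename either
    // succeeds or throws, so two users cannot both claim the same file.
    public static bool TryClaim(string roadFile, string userId)
    {
        try
        {
            File.Move(roadFile, roadFile + "." + userId); // e.g. US321.road.billybob
            return true;
        }
        catch (IOException)
        {
            return false; // someone else claimed (renamed) it first
        }
    }

    public static void Release(string roadFile, string userId)
    {
        File.Move(roadFile + "." + userId, roadFile); // restore the unclaimed name
    }

    // A directory scan shows which roads are currently claimed.
    public static string[] ClaimedFiles(string dir)
    {
        return Directory.GetFiles(dir, "*.road.*");
    }
}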

What files generated by Visual Studio should I commit?

The problem I'm facing is that it seems that some of the files generated by Visual Studio are not necessary for commits.
Aside from the obvious things not to commit, what other files should I not commit? Do I need to commit .manifest files, etc.?
A different way of saying it: what files are needed to recreate the project I'm working on, and what files can be auto-generated?
Thanks!
The files I usually don't commit are: *.suo and *.user. I commit most other files.
Binary files can be committed or not depending on your company policy. In theory you should be able to recreate them again from the source code, but in practice it is a good idea to have an exact copy of anything you have sent out to a customer. So at least for releases the binaries should be committed.
In general, it's a bit difficult to specifically list the files, as it depends a lot on what kind of project you have and what tools, if any, you use for autogeneration of code.
In general, the .suo file is something that is user-specific and shouldn't be checked in.
However, the easiest way that I can suggest is to:
Don't check in any file that you aren't sure you need.
Take a copy of all files from your source control into a fresh location.
Build the solution.
If it builds, great. If not, add files until it does.
It is a bit of trial and error, but most likely it's going to be a one-time thing.
The other option is to find out, for each type of unknown file, exactly what it does, and then decide whether it is needed and include or exclude it accordingly.
For this, if you post the extensions of the files you aren't sure of, either Google or SO can help!
Personally, I don't believe in committing binaries at all, even for releases. It seems unnecessary to me since, in our case, every release has a label associated with it, so getting the exact code that was released is just a question of getting the code associated with the label and building it.
Also, since deployment is usually via setup files, as long as you have the setup MSI/EXE (and as long as you are keeping backups of those for your releases), having all the binaries checked into source control seems a bit of overkill.
