GitHub Continues Integration C# Windows

I'm just starting my exam project at my school. I'm studying Systems Development with C#.
We're using GitHub as our online repository, and it just recommended setting up "Continues Integration". I looked into it, and the idea seemed nice. Our course is based around Test-Driven Development, so we already have tests in place.
I first looked at Travis; unfortunately, I cannot figure out how to get it to work with Windows, nor with unit tests.
My question is: is there a tool we can use to achieve continuous integration with C# on Windows platforms, for free?

It is "continuous" integration, and yes, it is a good idea to learn about it.
Unfortunately, the question is too broad to answer directly. There are many solutions to get a working CI running, both locally and in the cloud; both for free and not-so-free.
It is, for a hobby/uni project, also perfectly possible to roll your own, with a few cron jobs or commit hooks.
CI simply means that your test/build scripts run automatically, ideally after each commit and for each separate branch, so you get early warning of bugs creeping in. The more expensive or complex systems just add a load of customizability, reports, history, notifications, etc. on top.
I would suggest googling a bit more. You really should be able to find something, including documentation on how to use it. Wikipedia has a handy comparison table which you can filter down by "free" and "Windows".
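A minimal sketch of what "rolling your own" could look like: a small console loop that pulls, builds, and runs the tests on a schedule. This assumes git, msbuild, and mstest are on the PATH; the solution and test assembly names are hypothetical.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class MiniCi
    {
        static void Main()
        {
            while (true)
            {
                // Fetch the latest commits; the working directory is assumed
                // to be a clone of the repository.
                Run("git", "pull");

                // Build the solution, then run the tests only if the build passed.
                bool built  = Run("msbuild", "MySolution.sln /t:Rebuild");
                bool passed = built && Run("mstest", "/testcontainer:MyTests.dll");

                Console.WriteLine(passed ? "green: build and tests OK"
                                         : "red: build or tests FAILED");

                // Poll every five minutes; a commit hook would be event-driven instead.
                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
        }

        static bool Run(string exe, string args)
        {
            using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
            {
                p.WaitForExit();
                return p.ExitCode == 0;
            }
        }
    }

Everything beyond that (history, reports, notifications, dashboards) is exactly what the off-the-shelf CI servers add on top.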

Related

How to avoid rewriting a VB6 application from scratch when migrating to .NET

In our company we develop and sell a VB6 application and we think it's about time to migrate it to .NET.
The main reasons are:
We expect VB6 runtime support to end at some point in time, and we do not want to start the migration only then, since it's probably going to be a lengthy process.
There is just 1 1/2 VB6 developers left. The half one being me.
More and more customers are asking for features like cloud and mobile device support.
I know that rewriting an application from scratch is the least recommended way of migrating to .NET. I totally agree with that! Throwing away over a decade of code feels just wrong and would be such a waste of the money spent that I have a hard time recommending and justifying it to our management.
But right now I don't see another way to do it.
Let me tell you a little bit about the application:
Like I said, it has been developed for over a decade. There have been numerous developers working on it, most of them rather inexperienced at the time. We have one developer left from the initial team. The application has been his first and biggest software project, and by now he realizes that many of the architectural decisions made over the last 15 years were horribly wrong; others were right at the time but were never refactored to meet changes made in other parts of the application, and so became wrong at some point. This application seems to be a showcase example of code rot.
We are talking about an application of about 150 KSLOC, all in one single executable. It uses about 15 external DLLs, some of them third-party ActiveX controls, some of them our own .NET assemblies.
Adding new features to the application is still possible and being done, but it takes ages compared to our other .NET applications. The reason is that every little change in the codebase requires changes all over the place. The only reason changes are possible at all is that this one developer simply knows most of the dependencies and quirks of the application. As you might have guessed, the rate of unexpected side effects and bugs is quite high.
My first thought about migrating that application was to first clean up and refactor, then migrate/convert possibly using tools from Artinsoft/Microsoft/WhoEver and then refactor again to get a nice and clean .NET application.
But I see some problems:
There seems to be no way of refactoring the old application. There is no automated testing whatsoever, not even a formal method for manual testing. Every little change requires manual testing by experienced users who just know where defects might hide (see the sketch after this list for one way out of that trap).
On the other hand, I have established a process and set of tools for testing our .NET applications, which gives us a solid base for making refactorings.
Converting that code to .NET without major refactoring feels like garbage in, garbage out. Even though I hate calling the old application garbage, because somehow it works and has proven itself useful.
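One commonly suggested way out of the "no tests, so no safe refactoring" trap is characterization tests: before touching anything, you pin down what the current code actually does (right or wrong), so later refactorings can be checked against recorded behavior. A minimal NUnit sketch; InvoiceCalculator and the pinned value are entirely hypothetical:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceCalculatorCharacterizationTests
    {
        [Test]
        public void TotalForKnownInput_MatchesCurrentBehaviour()
        {
            // InvoiceCalculator is a hypothetical stand-in for whatever the
            // (migrated) legacy code exposes.
            var calculator = new InvoiceCalculator();
            decimal total = calculator.Total(customerId: 42, year: 2011);

            // 1234.56m is not a specified value; it is whatever the current
            // code produced when this test was first written. If a later
            // change alters it, the test flags the change for a human to judge.
            Assert.AreEqual(1234.56m, total);
        }
    }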
Our management has a habit of explicitly demanding quick-and-dirty solutions, disregarding the effects on productivity and ignoring all recommendations from the development team, which at some point started to deny the existence of quick-and-dirty solutions in order to be able to do things right. That does not mean that we polish features, but we do include the time to write tests and do refactoring in our estimates. Knowing this, I suspect that once the code is converted to .NET and fixed to the point where the application starts and seems to work, the refactoring phase will be canceled and the application will be shipped to some customers.
So. What I think is that, despite the fact that rewriting from scratch will take a lot of time and resources, it might still be our only option.
Am I missing an option? Do you see possibilities of not having to rewrite that application?
I suggest that you take a step back and read this paper by Brian Foote & Joseph Yoder (University of Illinois). It provides some architectural insight into the problem you have and options to solve it. It's titled 'Big Ball of Mud' (please don't laugh, it is a serious paper). Here is the abstract:
While much attention has been focused on high-level software architectural patterns, what is, in effect, the de-facto standard software architecture is seldom discussed. This paper examines the most frequently deployed architecture: the BIG BALL OF MUD. A BIG BALL OF MUD is a casually, even haphazardly, structured system. Its organization, if one can call it that, is dictated more by expediency than design. Yet, its enduring popularity cannot merely be indicative of a general disregard for architecture.

These patterns explore the forces that encourage the emergence of a BIG BALL OF MUD, and the undeniable effectiveness of this approach to software architecture. In order to become so popular, it must be doing something right. If more high-minded architectural approaches are to compete, we must understand what the forces that lead to a BIG BALL OF MUD are, and examine alternative ways to resolve them.

A number of additional patterns emerge out of the BIG BALL OF MUD. We discuss them in turn. Two principal questions underlie these patterns: Why are so many existing systems architecturally undistinguished, and what can we do to improve them?
BTW, I think your best option is to use the current application as your requirements and rewrite everything in VB.NET or C# using a proper design.
There are four main options when you have an application like this:
Do nothing: this is always an option; as everybody knows, if it ain't broke, don't fix it. However, this might not be an option for several reasons, such as needing to comply with some security requirements at the company, or simply because one of the components doesn't work on new platforms.
Rewrite: This would be the dream, right? Being able to get rid of all the bad practices and duplicated code and so on? Well, it might be that way; however, you have to consider all the risks involved in developing a new application from scratch. Do you have all the formal requirements? What about test cases? Does your team know every little detail in the code, or would you need to go line by line trying to figure out why that if is there? Also, how many new bugs would a rewrite introduce along the way?
Buy something off-the-shelf: Since you are an ISV, this won't be an option.
Migrate: Of course you'll be bound by the programming practices you used for the original development, but you'll get to a new platform faster, all your business logic will be automatically migrated, you can actually hire developers for the new platform, and you can get rid of the legacy elements. From there you can also take advantage of all the tools available for refactoring code, continuous integration, unit testing, etc.
Also, with an automatic migration you can actually go further than just WinForms. There are also tools that can take your C# code all the way to the web using a modern architecture.
Of course, I work for Mobilize.Net (previously Artinsoft) and this is my biased perspective.
We've been working on this for around 15 years and have seen dozens of clients who come to us after trying to rewrite their application and failing, after months or even years of struggling, without being able to deliver a working application.

Why is Coded UI Test Automation important?

I am new to the subject of testing an app. I've spent the past few days surfing the Internet to find useful tutorials. Honestly, I could only find some good videos giving me a big picture of how to do Coded UI Test automation, database unit testing, and unit testing using MS Visual Studio 2010.
But there are still lots of questions on my mind. For example, an automated CUIT just records what I did while running the application and testing it on my own. So what..?
It just records my actions. It is actually me who tests the application, the traditional way.
I know and am sure that there must be some reason I'm not aware of!
Can anyone please explain to me how an automated Coded UI Test helps me?
On the other hand, there is a similar question about database unit testing.
I've found a video tutorial on YouTube explaining an example of this. It simply checks whether a stored procedure works properly! Obviously, when I run my application (pressing F5) I will simply see whether an Insert SP is working perfectly or not!
So again, I can't see what the role of database unit testing is.
I would appreciate any idea or useful link.
Thank you,
One of the big advantages of having automated tests is that they give you the confidence to change and fix things and to add features without worrying about the possibility that something will break, or that there will be unexpected side effects. It's easy and cheap to run pre-coded tests after every change, so you can make even the riskiest changes late in the development cycle and still be confident that your application is good.
Another advantage is this: suppose some new code you write does break or change some existing functionality. Then you have an easy way to discover exactly what has changed (just run the automated tests and see the results!), and you can reason about those changes and classify them as bugs, actually desired side effects/changes, etc. Otherwise, development quickly becomes a mess of "one step forward, two steps back": every check-in might fix one problem and introduce two new ones. Even if the developer is aware of the two new problems (which isn't always the case), despite best intentions they will simply forget to address those new problems later on.
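For concreteness, here is a minimal sketch of such a pre-coded check, using NUnit; ShoppingCart and its members are hypothetical names, not something from the question:

    using NUnit.Framework;

    [TestFixture]
    public class ShoppingCartTests
    {
        [Test]
        public void RemovingTheLastItem_LeavesTheCartEmpty()
        {
            // ShoppingCart is hypothetical; the point is that this check runs
            // in milliseconds after every change, with no manual clicking.
            var cart = new ShoppingCart();
            cart.Add("book");
            cart.Remove("book");
            Assert.AreEqual(0, cart.ItemCount);
        }
    }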
You say that you can "just run an SP" or "just run" the UI yourself. But that doesn't scale... Again, ideally you'd want to be able to run thousands of tests automatically after every change at no cost to you, which means they have to be automated. Also, you say that you know whether an SP is working just by running the application... but as your database and your application get more and more complex, it becomes less and less obvious what you have to do in your application to actually test your database properly. Also, what if you later need to create a second application that uses the same database (e.g. you now have a website, and later need to create some command-line tool for admins)?
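And a sketch of what a database unit test can look like independently of any application, assuming NUnit and plain ADO.NET against SQL Server; the connection string, stored procedure, and table are hypothetical:

    using System.Data;
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class InsertCustomerSpTests
    {
        // Hypothetical test database.
        const string ConnStr = @"Server=(local);Database=AppDb;Integrated Security=true";

        [Test]
        public void InsertCustomer_AddsExactlyOneRow()
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                // Run the whole test inside a transaction and roll it back,
                // so the database is left untouched whatever happens.
                using (var tx = conn.BeginTransaction())
                {
                    var insert = new SqlCommand("dbo.InsertCustomer", conn, tx);
                    insert.CommandType = CommandType.StoredProcedure;
                    insert.Parameters.AddWithValue("@Name", "Test Customer");
                    insert.ExecuteNonQuery();

                    var count = new SqlCommand(
                        "SELECT COUNT(*) FROM dbo.Customers WHERE Name = 'Test Customer'",
                        conn, tx);
                    Assert.AreEqual(1, (int)count.ExecuteScalar());

                    tx.Rollback();
                }
            }
        }
    }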
All this becomes much more important when there are multiple people working on the same piece of code, for obvious reasons. Without good automated test coverage, complicated pieces of code quickly become one-person's-domain ("Don't touch that code without talking to Joe!").
Of course, this does not mean that you should blindly apply all available test technologies to all projects, especially relatively "expensive" ones like CUIT (possibly expensive because, if your UI changes a lot during the course of a project, this type of test can be harder to keep updated). Instead, you should do a proper assessment of the real risk areas in your project (the "bug farms", if you will) and the right time in the cycle to introduce each type of testing; i.e., have an actual test plan. This last paragraph is my opinion; obviously there are different approaches to selecting what/how/when to test.
Regarding the note about the testing software recording your actions: this can be quite handy when trying to replicate a bug, particularly when you first start writing the tests.
As Eugene noted, when you get more than one coder things get more complicated. I would also add that when the software has to interact with other components (e.g. a server, other software packages) it gets very complicated very quickly; it is not always a safe bet to assume the other component is perfect. So the idea of automating your tests is that, while you keep writing the software, you can test against everything done before without doing any extra work. For example, if I write a program that connects using Bluetooth and then add WiFi, I could reuse most (if not all) of those Bluetooth test cases against the WiFi code (see the sketch below). In a UI example, imagine you add a new button and in the process accidentally break an old button. If you have 10 buttons and the old one has no relation to the new one, you won't bother manually testing it, but an automated test suite would pick the breakage up straight away.
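A minimal sketch of that reuse idea: one shared set of NUnit test cases, run against each transport. All the type names here are hypothetical:

    using NUnit.Framework;

    // Hypothetical abstraction both transports implement.
    public interface IConnection
    {
        bool Open();
        void Send(byte[] data);
    }

    // The test cases are written once against the interface...
    public abstract class ConnectionContractTests
    {
        protected abstract IConnection CreateConnection();

        [Test]
        public void OpenThenSend_Succeeds()
        {
            IConnection conn = CreateConnection();
            Assert.IsTrue(conn.Open());
            Assert.DoesNotThrow(() => conn.Send(new byte[] { 1, 2, 3 }));
        }
    }

    // ...and each transport only supplies its own factory method.
    [TestFixture]
    public class BluetoothConnectionTests : ConnectionContractTests
    {
        protected override IConnection CreateConnection() { return new BluetoothConnection(); }
    }

    [TestFixture]
    public class WifiConnectionTests : ConnectionContractTests
    {
        protected override IConnection CreateConnection() { return new WifiConnection(); }
    }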
If you need more justification for testing, I would highly suggest reading Continuous Integration, which demonstrates why you should test and what the benefits are, as well as giving examples of how to go about it. It also has some examples about DB testing!
I hope this helps, I am also new to testing but have learnt a lot in a short period.
I can't remember where I saw this, but it sums up unit testing very well.
"A well written unit test cannot find a bug... But it sure as heck can find a regression"

Continuous Integration for a stack with Visual C++ and C#

Please recommend a good continuous integration solution that would build and integrate with the .NET stack as well as Visual C++.
Some recommendations I have got are
Jenkins
CruiseControl
TeamCity
Because of the polyglot nature of the project, which continuous integration solution would you recommend?
I have used all three over several years. Some of the answers below state that most of the work will be producing your own build scripts; this has been true in my experience as well. We use a combination of MSBuild and PowerShell scripts for our build process, which can be run under just about any CI tool, so picking one comes down to what you're looking for in terms of customization, integration with other systems, performance, and ease of use.
Short answer:
I recommend Jenkins. So far it seems to be the best combination of the above qualities. It has a ton of plugins, some localization and is actively developed by the OSS community.
Long answer:
I started with CruiseControl.NET. It was easily configurable with a text file and I found it highly reliable. However, we moved away from it because ThoughtWorks was moving toward a paid product (Cruise, now Go) and future development was in question. A new team has since forked the project, but there has been little word about further development since.
We moved to TeamCity, which is free and has a great ajax-y UI. It is easy to set up and get going, and it has a lot of features for distributed builds. We quit using TeamCity for several reasons. The server does a ton of stuff, and it was a bit of overkill for our basic needs. Even so, it was not very customizable (see time zones and notification contents) and we often found the administration UI confusing. That was all still okay, but we also had steadily worsening performance problems. We started with the standard HSQLDB out of the box, moved our installation to SQL Server when we started experiencing degraded performance, then had to quit using the server at all as performance continued to degrade over time. I'm not sure what the culprit was, but I couldn't find any cleanup that would explain the constantly worsening performance as the Tomcat web server fought with SQL Server for resources, even when there were no active builds running. I am sure it's my fault and I was missing some crucial setting or needed to feed the server more memory, but this is a shared utility box, we did not have these issues with CC.Net, and most of all, I am not a Java/Tomcat guy and don't have a lot of extra time to keep fighting with these issues.
We've moved to Jenkins now. It seems to be working fine so far, but we've only been with it a short while. It was easy to set up, does not seem to be taking nearly as many resources as TeamCity, and has a ridiculous number of plugins. The only downside so far is that, like many OSS products, it does not seem to have the best documentation, and it does so much that I may be tweaking knobs for a while to get it set up the way we want.
Between CruiseControl and TeamCity, TeamCity is faster and easier to set up, but you may need to check on licensing for it. I can't speak to Jenkins, never having used it.
Jenkins has the big advantage of being very extensible (currently over 400 plugins), which allows you to combine it with a huge number of other tools, so it gives you complete freedom in your other tool choices. I recently read that this is one problem with TeamCity: you can get locked into using its whole stack of tools (e.g. using SVN or Git as the version control system will not be possible).
I am using Jenkins myself for our projects, which have both Java and C++ code, and I am very happy with the tool. We had CruiseControl before, and have not once regretted the switch.
I have tried both CruiseControl and Jenkins, and Jenkins impressed me with its very fast and user-friendly setup.
The three you list are all sensible choices, and the main problem will be producing the build script(s) needed to produce the build artifact(s). If you manage to make them do everything needed, changing CI systems shouldn't be a big issue.
After implementing all three in different shops, I'd choose all of the above. Pick one.

Should I start to use Windows Azure?

I have just gotten Visual Studio 2010 Pro Academic Version today with the MSDN Free Azure service.
I was wondering whether I should start developing with it now or start at a later stage. I have just started to program with .NET and C#, but should I learn Azure now or wait until it is mainstream?
Should I buy more data than the Pro MSDN allowance, or just use the default data?
Thanks in advance.
Benny.
If you are just beginning with .NET and C#, then I would say no, it's not the time for you to move on to using Azure... but then others might disagree with me, so it's your own decision.
However, you don't need to wait for it to become mainstream; many are already using it extensively. Even if you think it's not much in demand (from a job perspective) in your locality, learning something that's going to be used anyway and will become mainstream will make you an expert by then... It totally depends on the business situation. Another answer has shed some light on this:
If you learn something today that isn't yet mainstream, then when it does go mainstream, you might just be an expert that people will pay handsomely for some consulting work.
Also, consider where your product is. Do you really need something super stable and super secure for something you haven't even built yet? I find that by the time an application I've built finally comes to market, the tools I used that were "beta" are now widely used, mainstream, and stable.
Keep in mind that technology moves fast. I've seen developers use tools in a new project that became obsolete by the time the project went to production.
With that said, I agree with Shekar_Pro: Azure, and many other cloud-based services, are already widely used and adopted.

Ultra-Simple LightWeight Source Control for Visual Studio Projects?

I am using TortoiseSVN with AnkhSVN. I really have spent too much time tweaking and cleaning up messes, and I have lost hope of educating each and every developer on how to use things properly. I am sorry, but I am fed up and tired of restoring the repository, reverting, and fixing merges manually, sometimes even having to write some code again.
So here's my question: is there a chimpanzee-friendly solution for source control privileging simplicity over flexibility? Projects and teams are small, and I figured out that we just need VERY simple and basic checkout/checkin mechanisms, with no frills and limited functionality and features. That would help me stop being paranoid about project integrity.
I know that there is no easy way to do this and that a minimum of technicality and discipline is required, but I ended up wondering whether we really need all that in our case, as in the long run it causes more trouble than it helps.
Your problem sounds like it has more to do with process and branching strategies than anything else.
If your developers know to always get the latest code before checking in and resolving conflicts locally, running all tests etc, you will already have a leg up.
Educate your developers instead of trying to use a dumbed-down SCM (which in the future will probably not be adequate to your needs).
As for branching strategy, I have found that branch-per-feature is the most natural way to work and mostly avoids merge conflicts.
Changing SCMs will not help with your issues if you don't tackle process and branching.
First, I would suggest that you force developers to clean up their own messes, not do it for them. By doing it for them, you are only encouraging them to stay ignorant. By all means, be a resource and provide help for them, but make them do it themselves. They will quickly learn what they have to.
Second, there are few options that have the kind of integration with VS that most developers would like. SVN is one of them. Team System is another (but a much more expensive and complicated solution). Visual SourceSafe is also an option, but it's really an old, out-of-date system that hasn't been updated since 2005 (and even that was largely a patch job to a system that hadn't been updated in the 7 years before).
If you want free, there is nothing worth using that is simpler than Subversion. Everything else will be ancient technology (like CVS) that will have even more problems. There are several free SCMs that are more powerful, like Git and Mercurial, but you would have even more problems with them. If you're willing to pay, then many third-party tools have better merge and visualization tools. One I like is AccuRev.
There are also some better commercial SVN plug-ins for visual studio that may help as well. I've not used any of them, but they may improve the developers use of SVN.
Try the combination of Mercurial and TortoiseHg as a GUI.
You can also use it from Visual Studio with VisualHG.
Every developer is free to clone and manage her own repository.
Once you reach an agreement you can push up to a colleague's repository or a central location.
To aid with adoption, you might convince others to watch the DVCS video on the FogCreek Kiln page.
See what-makes-merging-in-dvcs-easy and similar SO discussions regarding the relative ease of merging.
I would say that every developer that works in a team should have a strong understanding of source control principles. Maybe you should get better developers! :-)
To answer your question I have always found Team System wonderful and very flexible. With such good IDE integration, it can be configured to ensure best practice in source control. However, it is quite a big source control system so may be over the top for your purposes.
I believe the issue is more one of process than product.
Strict written documentation and process might work.
Keep it as simple as possible.
You might make adherence to the process a contractual obligation.
That said, I have had very good luck with VisualSVN for Visual Studio. It is easy to use and integrates well.
If that is too hard, you might revert to TortoiseSVN, which is pretty idiot-proof.
As for an alternative super-simple product, I know of no such thing, but if you really need something lightweight, then datestamped and named zip files are the poor man's form of source control. Merging and restoring are a bitch, though.
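For what it's worth, even that approach can be scripted. A minimal sketch, assuming .NET 4.5+ with a reference to System.IO.Compression.FileSystem; the paths and names are hypothetical:

    using System;
    using System.IO.Compression;

    class Snapshot
    {
        static void Main(string[] args)
        {
            // Hypothetical default; pass your own project path as the first argument.
            string projectDir = args.Length > 0 ? args[0] : @"C:\Projects\MyApp";

            // A sortable timestamp in the name keeps snapshots in chronological order.
            string zipName = string.Format("MyApp-{0:yyyy-MM-dd_HHmmss}.zip", DateTime.Now);
            ZipFile.CreateFromDirectory(projectDir, zipName);
            Console.WriteLine("Snapshot written to " + zipName);
        }
    }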
