(This is about local builds on dev machines, NOT builds being done externally on a build server as blessed builds.)
In normal Visual Studio/C# development, the local build system "just works." However, on rare occasions*** the built code behaves erratically (erratically = not doing what the source code says): it produces weird errors, produces output that shouldn't be possible, or acts as though recent code changes never happened. These builds can be remedied by doing a clean/rebuild. However, since many solutions take drastically longer to do a full clean/rebuild than to trust the standard incremental build and simply "Build",
doing a full clean/rebuild every time isn't a practical option.
The problem comes when one of those "erratic" builds gets made and issues are seen. Typically, the bugs get spotted, and developer resources start focusing on how this bizarre behavior could be happening based on the source code used to create the build. Since the initial developer can't figure out how the erratic behavior could come from the source code, more resources get called in to hunt in futility for a bug that isn't actually in the source code.
Once a "non-clean" build goes erratic on a certain machine, it will typically continue to build erratically until a clean/rebuild is performed. As a result, huge amounts of developer time can get sucked into this futility before someone throws their hands up, says "I'm going to try some low-probability thing, because we're grasping at straws here", tries cleaning and rebuilding, and suddenly the erratic behavior goes away.
Assuming:
Because of speed, "build" will be chosen over clean/rebuild except when there is an explicit reason for clean/rebuild
The erratic builds happen rarely enough that convincing devs to habitually clean/rebuild daily would be a fruitless endeavor
Making builds and finding errors is a routine part of development, and therefore nothing "stands out" about spotting a build bug that needs to be fixed when the true cause is a corrupt build. Similarly, bringing in additional resources to debug a hard bug is also routine and doesn't stand out either.
Are there ways to defend against this? Are there ways to detect when the temp/cached files that "cleaning" deletes are out of date and problematic?
Are there ways (beyond telling devs to "remember to try cleaning and rebuilding") to prevent a huge snowball of resources from being spent checking the source code for issues that don't exist there? (A rough sketch of the kind of staleness check I have in mind appears at the end of this post.)
***In the last 12 months, I've seen 3 or 4 occurrences of erratic builds each wasting well over one developer-day before the cause was found. In the same time, there were probably 3,000-5,000 developer builds that worked normally.
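For reference, here is a rough C# sketch of the kind of staleness check I'm imagining. The project layout and output path are hypothetical, and the heuristic (flag the output assembly if it is older than the newest source file) only hints at staleness; it doesn't prove the incremental build is corrupt.

// Staleness heuristic sketch - the paths below are hypothetical examples.
using System;
using System.IO;
using System.Linq;

class StaleBuildCheck
{
    static void Main(string[] args)
    {
        string projectDir = args.Length > 0 ? args[0] : ".";
        // Hypothetical output path; adjust to the real bin folder and assembly name.
        string outputDll = args.Length > 1 ? args[1]
            : Path.Combine(projectDir, "bin", "Debug", "MyProject.dll");

        // Newest source file, ignoring anything under obj\ (intermediate build outputs).
        DateTime newestSource = Directory
            .EnumerateFiles(projectDir, "*.cs", SearchOption.AllDirectories)
            .Where(f => !f.Contains(Path.DirectorySeparatorChar + "obj" + Path.DirectorySeparatorChar))
            .Select(File.GetLastWriteTimeUtc)
            .DefaultIfEmpty(DateTime.MinValue)
            .Max();

        DateTime outputTime = File.Exists(outputDll)
            ? File.GetLastWriteTimeUtc(outputDll)
            : DateTime.MinValue;

        Console.WriteLine(outputTime >= newestSource
            ? "Output is newer than every source file (no obvious staleness)."
            : "WARNING: output assembly is older than at least one source file.");
    }
}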
Hello, I have recently been profiling a Mono application. A few weeks ago the Mono profiler produced some simple reports, but then we did some major refactoring, and now during a profiler run half of the file is filled with errors like:
unmatched leave at stack pos: 4 for method unknown method 0xb4b1d580
sometimes with the actual method name when it finds symbols. My question is: what does this mean? Can I somehow fix these errors? Are the profiling results affected?
I've been looking into this bug for a week or so, and after all my attempts to figure out how to fix it, I still have not done so. The issue is the one in the title of this post: when I go into my C# project and increment the AssemblyVersion to a newer version, I get a small performance boost. When I decrement the AssemblyVersion, I get a large, noticeable performance boost.
At this time I am working on a 64-bit machine with an AMD processor, and if I switch to an Intel machine (also 64-bit), this issue does not occur. The project I am working on relies on 2 or 3 Microsoft DLLs and 1 DLL created by someone who worked on this project in the past whom I cannot contact (that DLL is built for x86).
In Visual Studio 2013 I analyzed the performance when the assembly version number was incremented, decremented, and kept the same. From what I could see, it looked like the application on average was using fewer threads on the slower version and having more collisions.
I am going to be honest and say I am picking up a project someone else worked on in the past, and I am definitely a C# amateur. Because of this I have spent the past week or so trying to research exactly what AssemblyVersion and AssemblyFileVersion are used for, how AssemblyInfo.cs is built with the project, and how DLLs are built with the project, though I am still a little hazy on the facts. Here are a few places I have looked in my research:
DLL hell
Differences between assemblyversion assemblyfileversion and assemblyinformationalversion
Process Interoperability
Cross assembly causes performance hit
I have also run the Visual Studio Performance and Diagnostics tool to try to visualize the CPU load graphically and to see the number of times a function was called during the application's lifetime. In all 3 cases (increment, decrement, stay the same), the function that drives the application (a timer that goes off every 50 ms) ran more often on a slower version and less often on a faster version.
I have tried rebuilding the DLL used by our project for x64 and building the project as Any CPU, but that did not work either. After that I hit a brick wall and have absolutely no idea where to look for more help/info about the problem I am encountering.
I am really sorry if this is difficult to answer from what I have given, or if anything is unclear. If anybody needs a clearer explanation, I will attempt to reply and provide one. After 4 PM Eastern, though, I won't be able to reply to questions until tomorrow morning.
Thanks everyone
Edit: Performance measurements were made with the Stopwatch class (bad idea, I know). The performance difference is noticeable in how fast the GUI refreshes the results on a page (there are around 3-10 messages which can be displayed per second on the GUI).
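For what it's worth, the timing harness I've been using is shaped roughly like the sketch below. The work method is just a placeholder, not the real code; the warm-up runs and averaging are there because a single Stopwatch reading is heavily skewed by JIT compilation and GC/scheduler noise.

// Minimal Stopwatch harness sketch - WorkUnderTest is a placeholder, not the real code.
using System;
using System.Diagnostics;

class TimingHarness
{
    static void WorkUnderTest()
    {
        // Placeholder for the code whose speed seems to change with AssemblyVersion.
        long sum = 0;
        for (int i = 0; i < 1000000; i++) sum += i;
    }

    static void Main()
    {
        // Warm-up so JIT compilation isn't included in the timed runs.
        for (int i = 0; i < 5; i++) WorkUnderTest();

        const int iterations = 50;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) WorkUnderTest();
        sw.Stop();

        // Averaging many iterations smooths out GC and scheduler noise.
        Console.WriteLine("Average: {0:F3} ms per run",
            sw.Elapsed.TotalMilliseconds / iterations);
    }
}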
I have some C# performance tests, basically running two different methods and checking that one runs much faster than the other.
When I run them locally in NUnit, one of the tests runs ten times as fast as the other, so I've got an NUnit test that uses Stopwatch to check that it is at least twice as fast (to guard against regression). But when I run the tests in TeamCity, the fast method is only about 1.5 times as fast as the slow one. I would expect hardware differences to have some effect, but not this much. What could be causing this?
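For context, the test is shaped roughly like the sketch below (the method names are placeholders, not the real ones): it times both methods with Stopwatch and asserts the 2x ratio.

// Rough shape of the ratio test (method names are placeholders, not from the real code).
using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class RelativeSpeedTests
{
    static void FastMethod() { /* placeholder for the faster implementation */ }
    static void SlowMethod() { /* placeholder for the slower implementation */ }

    static long TimeIt(Action action, int iterations)
    {
        action(); // warm-up run so JIT cost isn't counted
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) action();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    [Test]
    public void Fast_method_is_at_least_twice_as_fast()
    {
        const int iterations = 1000;
        long slow = TimeIt(SlowMethod, iterations);
        long fast = TimeIt(FastMethod, iterations);

        // Assert only a 2x margin (not the 10x seen locally) to allow for slower hardware.
        Assert.That(fast * 2, Is.LessThanOrEqualTo(slow),
            string.Format("Expected fast ({0} ms) to be at least twice as fast as slow ({1} ms).", fast, slow));
    }
}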
To answer my own question, the problem turned out to be that code coverage was turned on for the test build in TeamCity, so the overhead of this brought the two method runtimes closer together. Hopefully this answer helps someone else in future.
First - this is not meant to be a 'which is better' flame-war thread... Rather, I genuinely need help in making an architecture decision/argument to put forward to my boss.
Skipping the details - I simply would love to know the results from anyone who has done performance comparisons of shell vs. [insert general-purpose interpreted programming language here], such as C# or Java...
Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons in different use cases: hitting a database in some number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops, and also not hitting a database, just a regular 50k-iteration loop doing different types of calculations, and things of that nature?
In particular - right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions/etc...
Before you ask 'why not just write a quick test yourself?': the answer is that I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;).
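To make the C# side of the comparison concrete, this is roughly the kind of loop I would time. It is only a sketch: the connection string, the query, and the use of the Oracle.ManagedDataAccess provider are assumptions, not code we actually have.

// Sketch of timing N simple Oracle queries from C# (connection string and query are placeholders).
using System;
using System.Diagnostics;
using Oracle.ManagedDataAccess.Client; // assumption: Oracle's managed ADO.NET provider

class OracleLoopBenchmark
{
    static void Main()
    {
        string connStr = "User Id=scott;Password=tiger;Data Source=MyOracleDb"; // hypothetical
        const int iterations = 1000;

        using (OracleConnection conn = new OracleConnection(connStr))
        {
            conn.Open(); // open once; connecting per iteration would measure connection cost instead

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                using (OracleCommand cmd = new OracleCommand("SELECT 1 FROM DUAL", conn))
                {
                    cmd.ExecuteScalar();
                }
            }
            sw.Stop();

            Console.WriteLine("Executed {0} queries in {1:F2} s", iterations, sw.Elapsed.TotalSeconds);
        }
    }
}

The shell-side equivalent would presumably loop over sqlplus or similar, which is exactly the part I don't know how to write efficiently.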
Once upon a time, ye olde The Great Computer Language Shootout did include some shell scripts.
So, courtesy of the Internet Archive, from 2004 -
Note the shell scripts didn't have programs for many of the tests.
Language   Score   Missing-Tests
Java         20          1
Perl         16          0
Python       16          0
gawk         12          6
mawk         10          6
bash          7         12
Note shell scripts can sometimes be small and fast :-)
"Reverse a file"
Language   CPU (sec)   Mem (KB)   Lines of Code
bash          0.0670       1464        1
C gcc         0.0810       4064       59
Python        0.3869      13160        6
It is highly dependent on what the script is doing. I've seen poorly written shell scripts sped up by one, two, even three orders of magnitude by making simple changes.
Typically, a shell script is simply some glue logic that runs utilities that are usually compiled C or C++. If that's the case, there may not be much that can be done to speed things up. If the grunt work is being done by a poorly written utility that's compiled, it's just doing a lot of wasted effort really fast.
That said, Python or Perl are going to be much faster than a shell script, but a VM or native code will be faster yet.
Since you can't tell us any details, we can't really provide specific help.
If you want to see a simple demonstration for comparison, try my pure-Bash implementation of hexdump and compare it to the real thing:
$ time ./bash-hexdump /bin/bash > /dev/null
real 7m17.577s
user 7m2.570s
sys 0m14.745s
$ time hexdump -C /bin/bash > /dev/null
real 0m2.459s
user 0m2.260s
sys 0m0.176s
Part of the reason the Bash version is slow is that it reads the file character by character, which is necessary to handle null bytes (shells aren't very good at handling binary data), but the primary reason is simply the speed of execution of interpreted shell code. Here is an example of a Python script I found:
$ time ./hexdump.py /bin/bash > /dev/null
real 0m11.694s
user 0m11.605s
sys 0m0.040s
"I simply just would love to know and find the results of anyone who has done some performance comparisons of..."
The abiding lesson of such comparisons is that the particular details matter - a lot.
Not only the particular details of the task, but (shouldn't we know this as programmers) the particular details of how the shell script is written.
So can you find someone who understands that shell language and can check that the shell script was written in an efficient way? (Wouldn't it be nice if changing a couple of lines took it from 40 minutes to 5 minutes?)
Just did this very simple benchmark on my system, and the results are as expected.
Add up all integers between 1 and 50,000 and output answer at each step
Bash: 3 seconds
C: 0.5 seconds
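For comparison with the shell number above, a C# version of the same toy loop might look like the sketch below. My assumption is that "output answer at each step" means printing the running total on every iteration; the console I/O dominates the runtime, as it presumably does in the shell version too.

// C# counterpart of the toy benchmark: sum 1..50,000, printing the running total each step.
using System;
using System.Diagnostics;

class SumBenchmark
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        long total = 0;
        for (int i = 1; i <= 50000; i++)
        {
            total += i;
            Console.WriteLine(total); // the printing, not the addition, is the expensive part
        }
        sw.Stop();
        Console.Error.WriteLine("Elapsed: {0:F2} s", sw.Elapsed.TotalSeconds);
    }
}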
If you are writing code and you have concerns about the speed of processing, you should be writing code that is either compiled directly to assembly or compiled for a modern VM.
But... with Moore's Law kicking up processing power every 18 months, I wonder: are the performance requirements really necessary? Even interpreted code runs incredibly fast on most modern systems, and it's only going to get better with time. Do you really need the kind of speed improvements that compiled code would give you?
If the answer is no, then write in whatever makes you happy.
While this doesn't include "shell" (aka sh/bash/ksh/PowerShell) languages, it is a relatively large list of "language [implementation] performance" comparisons - packed full of generalities and caveats. In any case, someone may enjoy it.
http://benchmarksgame.alioth.debian.org/
As mentioned above, you won't be able to do SQL queries from the shell. Languages which run on a VM will take a little time upfront because of VM startup, but otherwise the difference should be negligible.
If the question really is how to decrease it from 40 minutes to 5, then I would try to find out which piece is taking the majority of the time. If the query itself runs for most of that time, then switching languages won't help you much.
Again (without much detail in the question), I would start by looking into the different components of the system to see which one is the bottleneck.