I have some simple code, derived from an example, that does a quick write to the Cassandra DB, then loops back and reads all current entries; everything worked fine. When 0.6 came out, I upgraded Cassandra and Thrift, which threw errors in my code (http://www.copypastecode.com/26760/). I was able to iron out the errors by converting the necessary types, but the version that now compiles only seems to read one item back; I'm not sure whether it isn't saving the DB changes or is only reading back one entry. The "fixed" code is here: http://www.copypastecode.com/26752/. Any help would be greatly appreciated.
First of all, let me say that you should definitely use TBufferedStream instead of a bare TSocket for the TBinaryProtocol; that will have a huge impact on your application's performance.
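A minimal sketch of the wiring, assuming the Thrift C# bindings of that era — note the buffered transport class is named TBufferedTransport in most Thrift revisions (TBufferedStream in some older ones), so check your generated bindings:

```csharp
using Thrift.Protocol;
using Thrift.Transport;

// Wrap the raw socket in a buffered transport so each Thrift call
// isn't flushed to the network field-by-field.
var socket = new TSocket("localhost", 9160);
var transport = new TBufferedTransport(socket);
var protocol = new TBinaryProtocol(transport);
var client = new Cassandra.Client(protocol);   // Thrift-generated client

transport.Open();
try
{
    // ... issue reads/writes through `client` here ...
}
finally
{
    transport.Close();
}
```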
According to the Apache Thrift API documentation, the batch_insert method is deprecated, so it may have introduced a misleading bug where the operation only inserts the first column. That said, why don't you try batch_mutate instead?
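A hedged sketch of a batch_mutate call against the Cassandra 0.6 Thrift API — the keyspace, column family, key, and column names below are placeholders, and the exact generated type names depend on your Thrift/Cassandra versions:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// In 0.6, column names/values are byte[] and row keys are strings.
var column = new Column
{
    Name = Encoding.UTF8.GetBytes("colName"),
    Value = Encoding.UTF8.GetBytes("colValue"),
    Timestamp = DateTime.UtcNow.Ticks
};
var mutation = new Mutation
{
    Column_or_supercolumn = new ColumnOrSuperColumn { Column = column }
};

// mutation map: row key -> (column family -> list of mutations)
var cfMap = new Dictionary<string, List<Mutation>>();
cfMap.Add("Standard1", new List<Mutation> { mutation });
var mutationMap = new Dictionary<string, Dictionary<string, List<Mutation>>>();
mutationMap.Add("myRowKey", cfMap);

client.batch_mutate("Keyspace1", mutationMap, ConsistencyLevel.ONE);
```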
By the way, why are you using Thrift directly? There are some nice C# clients for Cassandra that perform really well. You can find the whole list at http://wiki.apache.org/cassandra/ClientOptions.
I'm the author of one of them; it is kept up to date with Apache releases and is being used by some companies in production environments. Take a look at my homepage.
Just need a point in the right direction with this one.
I've generated a client for the Cisco Unified Call Manager API following the instructions provided by Cisco; the API for CUCM is called AXL.
It's currently in my C# WPF project and works just fine (I've retrieved some phone data successfully). The issue is that the API lives in a single .cs file that is 345K lines long, which causes an extremely long delay the first time I use the API (after it has compiled).
As one user on the Cisco forum advised:
There is a very high chance that your problem is the time it takes the .NET Framework to generate the XML serialization assembly.
Pre-generate the XML serialization assembly when using AXL on .NET and your first response will be MUCH faster.
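Pre-generation is typically done with the sgen tool that ships with the Windows SDK. A sketch, assuming your AXL client is compiled into a hypothetical AxlClient.dll:

```shell
# Run from a Developer Command Prompt; /force overwrites any existing output.
sgen.exe /assembly:AxlClient.dll /force
# Produces AxlClient.XmlSerializers.dll, which must be deployed
# alongside AxlClient.dll so the runtime can find it.
```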
I've tried to pre-generate it using the instructions from user brain backup in this thread. Unfortunately the first use of the API still takes around ~45 seconds (pre-generating did reduce it by about a minute). I'm not very savvy with the debugging tools in Visual Studio, so I'm unsure how to check what exactly is causing the delay (but it certainly looks like an issue related to generating the XML serializers).
I was wondering if anyone could recommend a way to remove the unnecessary methods from the .cs file (99% of it won't be used anyway) without having to re-create it manually. Any tool that can pull/delete methods and their dependencies from a C# file would be absolutely brilliant.
There is a way to check whether a method is used or not — and, if it is used, how many times and where. Check this out:
https://visualstudiomagazine.com/Blogs/Tool-Tracker/2014/12/Finding-Method-Property-Variable.aspx
It might make sense to pare down the AXL WSDL itself and re-generate the client — as mentioned, it's unlikely you'll ever use anywhere near the whole schema.
You should be able to just edit AXLAPI.wsdl and remove all of the elements except for the items you are actually using.
I had the same issue — the delay made it almost unusable. Two things I've found to get around this, with almost instant results:
Don't use the WSDL. Write your own methods to handle the SOAP requests. This takes time and can be error-prone, but the results are almost instant.
Use a tool that can handle large text files, like Notepad++, to open the WSDL-generated code file and pull out only the methods you need. This is the method I've chosen, and it works great.
Also, I believe you could just use the executeSQLQuery methods and cut out a good portion of the rest of the code, but I've yet to try it. I tried each method above without pre-generating the XML serialization assembly; I found the problem to be the size of the generated C# AXL code file.
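The first option above (hand-rolled SOAP) can be sketched roughly as follows — the host, credentials, AXL version string, and device name are all placeholders you would substitute for your environment:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// Build a raw AXL SOAP request without any WSDL-generated code.
var request = (HttpWebRequest)WebRequest.Create("https://cucm-host:8443/axl/");
request.Method = "POST";
request.ContentType = "text/xml; charset=utf-8";
request.Headers.Add("SOAPAction", "CUCM:DB ver=10.5 getPhone");
request.Credentials = new NetworkCredential("axluser", "axlpassword");

const string soapBody = @"<?xml version=""1.0""?>
<soapenv:Envelope xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""
                  xmlns:ns=""http://www.cisco.com/AXL/API/10.5"">
  <soapenv:Body>
    <ns:getPhone><name>SEP001122334455</name></ns:getPhone>
  </soapenv:Body>
</soapenv:Envelope>";

using (var stream = request.GetRequestStream())
{
    var bytes = Encoding.UTF8.GetBytes(soapBody);
    stream.Write(bytes, 0, bytes.Length);
}

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());   // raw XML response; parse as needed
}
```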
I'm making an IRC bot in C# and want Lua to be executable via a command. I already have this working and have overcome some basic obstacles, but now I'm having a larger problem with a StackOverflowException. My friend gave me some Lua code to run, which seems to cause a StackOverflowException every time, no matter how hard I try to prevent it:
print(string.find(string.rep("a", 2^20), string.rep(".?", 2^20)))
So, when this is executed using LuaInterface (LuaInterface 2.0.0.16708, to be precise), I get a StackOverflowException in my code, and even after looking at some previous questions I don't seem to be able to fix it.
I know that parsing code before executing it to predict stack overflows is hard, so I don't know how I would circumvent this. I have already tried multi-threading (which solved a previous problem where yielding code wouldn't return control back to C#), but it does not seem to help here.
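One mitigation worth trying, since a StackOverflowException cannot be caught in .NET 2.0+: run the script on a dedicated thread with a larger stack. A sketch, assuming `lua` is your LuaInterface.Lua instance and `script` holds the user-supplied chunk (this raises the ceiling; it cannot make pathological input safe):

```csharp
using System.Threading;

// The Thread(ThreadStart, int) overload lets you request a larger stack
// than the default 1 MB — here, 64 MB — giving deeply recursive Lua
// pattern matching more headroom before the process dies.
var worker = new Thread(() => lua.DoString(script), 64 * 1024 * 1024);
worker.Start();
worker.Join();
```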
To get around that particular error, use Lua 5.2.2 or newer. It is a reported bug that was fixed in version 5.2.2; newer versions raise a "pattern too complex" error instead.
And as far as sandboxing is concerned, why not fashion it after the Lua live demo, as suggested in this SO answer? I don't know how secure it is, but I'd presume the authors have both the incentive and the capability to make it as secure as possible. The sources can be found here.
I tried to Google but didn't find a decent tutorial with snippet code.
Has anyone used typed DataSets/DataTables in C#?
Are they available in .NET 3.5 and above?
To answer the second parts of the question (not the "how to..." from the title, but the "does anyone..." and "is it...") - the answer would be a yes, but a yes with a pained expression on my face. For new code, I would strongly recommend looking at a class-based model; pick your poison between the many ORMs, micro-ORMs, and raw ADO.NET. DataTable itself does still have a use, in particular for processing and storing unpredictable data (where you have no idea what the schema is in advance). By the time you are talking about typed data-sets, I would suggest you obviously know enough about the type that this no longer applies, and an object-model is a very valid alternative.
It is still a supported part of the framework, and it is still in use as a technology. It has some nice features like the diff-set. However, most (if not all) of that is also available against an object-based design, with classes and properties (without the added overhead of the DataTable abstraction).
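For completeness, a minimal untyped DataTable sketch — a typed dataset wraps the same machinery in strongly-typed row classes generated from an .xsd schema:

```csharp
using System;
using System.Data;

// Build a small in-memory table with a declared schema.
var table = new DataTable("People");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));

table.Rows.Add(1, "Alice");
table.Rows.Add(2, "Bob");

foreach (DataRow row in table.Rows)
{
    Console.WriteLine("{0}: {1}", row["Id"], row["Name"]);
}
// A typed dataset would instead expose row.Id and row.Name as
// compile-time-checked properties, rather than string-indexed lookups.
```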
MSDN has guidance. It really hasn't changed since typed datasets were first introduced.
http://msdn.microsoft.com/en-us/library/esbykkzb(v=VS.100).aspx
There are tons of videos available here: http://www.learnvisualstudio.net/series/aspdotnet_2_0_data_access_and_databinding/
And I found one more tutorial here: http://www.15seconds.com/issue/031223.htm
Sparingly.... Unless you need to know to maintain legacy software, learn an ORM or two, particularly in conjunction with LINQ.
Some of my colleagues have them, the software I work on doesn't use them at all, on account of some big mouth developer getting his way again...
I'm using Lucene.Net, and I'm importing ~255k documents with ~6 fields each. I've tried a few things, but the process takes a very long time (~1 day). I'm not using any exotic analyzer — just the StandardAnalyzer — and I'm tokenizing only one of the fields. I tried changing the max merge docs, and nothing changed.
Has anyone bumped into this problem?
Thanks and best regards
I took a different approach and decided to post the result, so that anyone facing the same problem may find this alternative way to go.
Lucene.Net has an interesting feature that allows merging two indexes, so my idea was to index my content into several smaller indexes and join them using the merge feature.
This has worked for me. I tested this solution by indexing WordNet and running queries against it, and it worked flawlessly.
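The merge step can be sketched roughly like this against Lucene.Net 2.9, assuming the partial indexes already exist on disk at the hypothetical paths shown (exact constructor overloads vary slightly by release):

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;

// Merge several smaller on-disk indexes into one target index.
var target = FSDirectory.Open(new System.IO.DirectoryInfo("index-merged"));
var writer = new IndexWriter(target, new StandardAnalyzer(),
                             true, IndexWriter.MaxFieldLength.UNLIMITED);

var parts = new Directory[]
{
    FSDirectory.Open(new System.IO.DirectoryInfo("index-part1")),
    FSDirectory.Open(new System.IO.DirectoryInfo("index-part2"))
};

writer.AddIndexesNoOptimize(parts);  // merge without forcing a full optimize
writer.Optimize();                   // optional final compaction pass
writer.Close();
```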
Assuming you don't have access to a profiler (Redgate ANTS is very good), then:
Work out your bottleneck: is it the Lucene code or your data reader? Comment out the Lucene indexing code, leaving just your data reader. It should be easy to tell on which side your problem lies.
Make sure you're using Lucene as built from SVN. The 2.9.x version from Subversion is much better than earlier releases, especially with regard to indexing speed.
Use the default merge factors etc. Lucene seems to be much better at this than my attempts at tweaking.
Lastly (and perhaps most importantly!): does it matter that indexing is slow? If you're only ever going to do this once or twice a year, I'd say don't worry about it (unless this is a learning exercise or some such).
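To illustrate the points above, a minimal indexing-loop sketch with Lucene.Net 2.9 defaults — `GetRecords()` is a hypothetical stand-in for your data reader, and the RAM buffer size is the one knob that usually matters:

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;

var dir = FSDirectory.Open(new System.IO.DirectoryInfo("index"));
var writer = new IndexWriter(dir, new StandardAnalyzer(),
                             true, IndexWriter.MaxFieldLength.UNLIMITED);
writer.SetRAMBufferSizeMB(64);  // flush segments by RAM use, not doc count

foreach (var record in GetRecords())   // hypothetical data reader
{
    var doc = new Document();
    doc.Add(new Field("id", record.Id, Field.Store.YES,
                      Field.Index.NOT_ANALYZED));
    doc.Add(new Field("body", record.Body, Field.Store.NO,
                      Field.Index.ANALYZED));   // only this field is tokenized
    writer.AddDocument(doc);
}

writer.Close();
```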
Hope this helps,
Relating to another question I asked yesterday about logging, I was introduced to TraceListeners, which I'd never come across before and sorely wish I had. I can't count the number of times I've needlessly written loggers to do this, and nobody ever pointed it out or asked me why I didn't use the built-in tools. This leads me to wonder what other features I've overlooked and written into my applications needlessly because of features of .NET that I'm unaware of.
Does anyone else have features of .NET that would've completely changed the way they wrote applications or components of their applications had they only known that .NET already had a built in means of supporting it?
It would be handy if other developers posted scenarios where they frequently come across components or blocks of code that are completely needless in hindsight had the original developer only known of a built in .NET component - such as the TraceListeners that I previously noted.
This doesn't necessarily include newly added features of 3.5 per se, but could if pertinent to the scenario.
Edit - As per previous comments, I'm not really interested in the "Hidden Features" of the language, which I agree have been documented before. I'm looking for often-overlooked framework components that, through my own (or the original developer's) ignorance, led to components/classes/methods being written or rewritten needlessly.
The yield keyword changed the way I wrote code. It is an AMAZING little keyword that has a ton of implications for writing really great code.
Yield creates deferred execution over the data, which lets you string together several operations but only ever traverse the list once. In the following example, with yield, you would only ever create one list and traverse the data set once:
FindAllData().Filter("keyword").Transform(MyTransform).ToList()
The yield keyword is what most of the LINQ extension methods were built on, and it is what gives LINQ its performance.
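A minimal sketch of that deferral, using hypothetical Filter/Transform stand-ins for LINQ's Where/Select:

```csharp
using System;
using System.Collections.Generic;

// Nothing inside these methods runs until the caller enumerates;
// each element is produced on demand, so no intermediate list is built.
static IEnumerable<int> Filter(IEnumerable<int> source, Func<int, bool> keep)
{
    foreach (var item in source)
        if (keep(item))
            yield return item;
}

static IEnumerable<string> Transform(IEnumerable<int> source, Func<int, string> map)
{
    foreach (var item in source)
        yield return map(item);
}

// One pass over the data; only the final ToList-equivalent materializes.
var data = new[] { 1, 2, 3, 4, 5 };
var result = new List<string>(Transform(Filter(data, n => n % 2 == 0),
                                        n => "item " + n));
// result holds: "item 2", "item 4"
```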
Also:
Hidden Features of ASP.NET
Hidden Features of VB.NET?
Hidden Features of F#
The most frequently overlooked feature I came across is the ASP.net Health Monitoring system. A decent overview is here: https://web.archive.org/web/20210305134220/https://aspnet.4guysfromrolla.com/articles/031407-1.aspx
I know I personally recreated it on several apps before I actually saw anything in a book or on the web about it.
I spoke to someone at a conference one time and asked about it. They told me the developer at MS had bad communication skills so it was largely left undocumented :)
I re-wrote the System.Net.WebClient class a while back. I was doing some web scraping and started my own class to wrap HttpWebRequest/HttpWebResponse; then I discovered WebClient part-way through. I finished my class anyway because I needed functionality that WebClient does not provide (control of cookies and the user agent).
Something I'm thinking about re-writing is the String.Format() method. I want to reflect over the code used to parse the input string and mimic it to build my own "CompiledFormat" class that you can use in a loop without having to re-parse your format string on each iteration. The result would allow efficient code like this:
var PhoneFormat = new CompiledFormat("({0}){1}-{2}x{3}");
foreach (PhoneNumber item in MyPhoneList)
{
    Console.WriteLine(PhoneFormat.Apply(item.AreaCode, item.Prefix, item.Number, item.Extension));
}
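A hedged sketch of the idea (not the author's eventual implementation): parse the "{n}" holes once in the constructor, then have Apply() stitch literals and arguments together without re-parsing:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;

var phoneFormat = new CompiledFormat("({0}){1}-{2}x{3}");
var s = phoneFormat.Apply("555", "123", "4567", "89");
// s == "(555)123-4567x89"

class CompiledFormat
{
    private readonly string[] literals;   // text between the holes
    private readonly int[] argIndexes;    // which argument fills each hole

    public CompiledFormat(string format)
    {
        // One-time parse: split the format into literal runs and hole indexes.
        var lits = new List<string>();
        var idxs = new List<int>();
        int pos = 0;
        foreach (Match m in Regex.Matches(format, @"\{(\d+)\}"))
        {
            lits.Add(format.Substring(pos, m.Index - pos));
            idxs.Add(int.Parse(m.Groups[1].Value));
            pos = m.Index + m.Length;
        }
        lits.Add(format.Substring(pos));
        literals = lits.ToArray();
        argIndexes = idxs.ToArray();
    }

    public string Apply(params object[] args)
    {
        // Per-call work is pure concatenation — no parsing.
        var sb = new StringBuilder(literals[0]);
        for (int i = 0; i < argIndexes.Length; i++)
        {
            sb.Append(args[argIndexes[i]]);
            sb.Append(literals[i + 1]);
        }
        return sb.ToString();
    }
}
```

(This simple version ignores alignment/format specifiers like {0:N2}, which String.Format also handles.)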
Update:
This prompted me to finally go do it. See the results here: Performance issue: comparing to String.Format