Flesch-Kincaid Readability in ASP.NET - C#

I'm working on an ASP.NET content management system and we need to make Flesch-Kincaid grade level stats available to the users. I've done quite a bit of searching and I haven't found any viable way to implement this. The closest I've come is the MS Word ReadabilityStatistics property. I can get this to work great in a console app, but the DLL isn't supported in ASP.NET and I get an access denied error whenever I try to use it from my ASP.NET application. I spent 8 hours the other day trying to get that to work from ASP.NET with no luck.
Does anyone know of another DLL or method to get an FK value? We do tell our content users that the FK value is from MS Word, so that'd be best, but really anything close would be appreciated at this point. Even a JS version would be adequate.

If you're happy translating PHP, the work has been done for you.
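If you'd rather skip the translation, the Flesch-Kincaid grade level formula itself is simple: 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. Below is a minimal C# sketch using a naive vowel-group syllable counter; the syllable heuristic is an assumption on my part, so the score won't match Word's exactly, but it should be close.

    using System;
    using System.Linq;
    using System.Text.RegularExpressions;

    public static class Readability
    {
        // Flesch-Kincaid grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        public static double FleschKincaidGrade(string text)
        {
            string[] words = Regex.Split(text, @"\s+")
                                  .Where(w => w.Any(char.IsLetter))
                                  .ToArray();
            int sentences = Regex.Matches(text, @"[.!?]+").Count;
            if (sentences == 0) sentences = 1;   // avoid division by zero
            if (words.Length == 0) return 0;

            int syllables = words.Sum(CountSyllables);
            return 0.39 * ((double)words.Length / sentences)
                 + 11.8 * ((double)syllables / words.Length)
                 - 15.59;
        }

        // Naive heuristic: count runs of vowels, ignoring a trailing silent 'e'.
        private static int CountSyllables(string word)
        {
            word = word.ToLowerInvariant().TrimEnd('e');
            int count = Regex.Matches(word, "[aeiouy]+").Count;
            return Math.Max(count, 1);
        }
    }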

Related

C# Sass Compiler

So I have a C# (ASP.NET) based dashboard for a proprietary content management system. One of the things the dashboard allows is for the user to go in and add custom CSS/Sass to their site. When they do this, my controller calls a program that compiles the Sass using NSass.Core.
Up until now I have been using Foundation 5 as my responsive framework. Yesterday, when attempting to update my controller to allow for Foundation 6 compilation, it started throwing errors. The errors occurred every time the compiler attempted to parse a Sass map (associative array).
I did some research into the problem and found out that Sass maps are a relatively new feature in Sass, and the last time NSass was updated was three years ago, so I am assuming that is the problem.
Has anyone had a similar experience? If so, what was your solution? If not, does anyone use anything else that would work for me? I have tried installing a couple of other packages, but started receiving various other errors, such as libsassnet not being able to find the 32-bit DLL. Hopefully someone here can give me an answer that saves me some time.
The errors I received when using NSass were all along the lines of "error reading values after primary", where primary is the first value in the first map the compiler comes across. When I take that map out, it just moves to the next one and gives the same error.
To narrow my question down: I just want to know what other people are using out there to compile Sass in C#.
There is a NuGet package for this: Bundle Transformer: Sass and SCSS, a provider for Bundle Transformer. Bundle Transformer is in turn an extension of System.Web.Optimization that could allow you to add code to your CMS to compile user-generated SCSS into CSS files.
An example of this can be found in the Optimus package for the Umbraco CMS. Looking through that code could give you a good basis for creating your own system. If you speak with the author of the package (a really nice guy), he might be able to help you create your own targeted package that isn't dependent on Umbraco.
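As a very rough illustration of the bundling side, the registration could look something like the sketch below. The bundle path and SCSS file name are placeholders, and StyleTransformer / NullOrderer are the BundleTransformer.Core class names as I remember them, so check them against the current package.

    using System.Web.Optimization;
    using BundleTransformer.Core.Orderers;
    using BundleTransformer.Core.Transformers;

    public static class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Assumes the BundleTransformer.Core and BundleTransformer.SassAndScss
            // NuGet packages are installed and have added their web.config sections.
            var userStyles = new Bundle("~/bundles/user-css");
            userStyles.Include("~/Content/user/custom.scss"); // placeholder path
            userStyles.Transforms.Add(new StyleTransformer()); // runs the Sass/SCSS translator
            userStyles.Orderer = new NullOrderer();            // keep the file order as written

            bundles.Add(userStyles);
        }
    }

Rendering the bundle (for example with Styles.Render("~/bundles/user-css") in the layout) would then serve the compiled CSS.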
Hope that helps.

Implement memcached on ASP.NET 4.0

I am new to the memcached concept. I have searched everywhere but I couldn't find anything on how to implement it in ASP.NET 4.0. Can anyone explain the right approach?
I successfully installed the memcached server as a Windows service (visible in services.msc).
Now, what do I do after this step?
Does anyone have a good example in ASP.NET? If so, please share it.
Or please walk me through the code step by step.
I also read this article:
http://rsuharta.wordpress.com/2011/04/27/memcached-provider-in-the-net-web-application/
But I didn't understand it. Please point me toward the best solution.
Thanks.
Here is a CodeProject article walking you through using memcached in an ASP.NET application.
However, let me first say that if you don't already understand the concept behind a framework like memcached, it's quite likely you don't need it.
Let me try to make this as clear as possible so you can make the right decision. For some reason, as of late, data caching has become the new "golden hammer" and all kinds of frameworks have popped up. But the problem is that most developers don't understand the real driving forces behind implementing data caching, and they don't understand that it's really not a trivial matter. I'm going to give you the same example I gave someone else just yesterday on SO, in a paraphrased version.
Imagine, if you will, an application stack (i.e. more than one application) that accesses a shared set of data at a rate of more than, and I'm going to give you the real number, 40M+ transactions per day. Now, when I use the term transaction here I really mean a read or a write, which only complicates things, by the way, because now I have to optimize for both.
Alright, so now we have a set of applications accessing this shared data at a ridiculous rate per day - how do we ensure reasonable response times for both reads and writes? Data caching. But if you're not sitting in that boat, you probably don't need data caching and should spend your time learning other things that are more relevant to what you're doing.
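If it turns out you do need it, a commonly used .NET client is EnyimMemcached (available on NuGet). Here is a minimal read-through cache sketch; it assumes the client is configured against your local memcached instance in web.config (the enyim.com/memcached section the package adds), and the Product type and database call are placeholders for your own code.

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public class ProductCache
    {
        // MemcachedClient is relatively expensive to create; reuse one instance.
        private static readonly MemcachedClient Client = new MemcachedClient();

        public Product GetProduct(int id)
        {
            string key = "product:" + id;

            // Try the cache first.
            Product cached = Client.Get<Product>(key);
            if (cached != null)
                return cached;

            // Cache miss: load from the database and cache the result for 10 minutes.
            Product product = LoadProductFromDatabase(id);
            Client.Store(StoreMode.Set, key, product, TimeSpan.FromMinutes(10));
            return product;
        }

        private Product LoadProductFromDatabase(int id)
        {
            // Hypothetical data access; replace with your own repository or ORM call.
            return new Product { Id = id, Name = "Sample" };
        }
    }

    [Serializable] // the default transcoder serializes cached objects
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }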

Obtaining Google search results' site position

I want to write an algorithm or parser that gets a site's position in Google search results. The issue is that every time Google's page layout changes, I have to correct or change the algorithm. How often do you think it will really change? Are there any techniques, advice, or tricks for determining a site's position in Google?
How can I make a robust position-detection algorithm?
I want to use C#, .NET 2.0 and HtmlAgilityPack for this purpose. Any advice or proposals would be very much appreciated. Thanks in advance, guys!
POST UPDATE
I know that Google will show a CAPTCHA to prevent automated queries. I have a special service for that which will recognise any CAPTCHA. Could you guys tell me about your experience with scraping the exact results?
Google offers a plethora of APIs for accessing its services. For searching there's the Custom Search API.
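A minimal sketch of calling the Custom Search JSON API from C# (YOUR_KEY and YOUR_CX are placeholders for your API key and custom search engine ID):

    using System;
    using System.Net;

    class CustomSearchExample
    {
        static void Main()
        {
            // Placeholders: supply your own API key and custom search engine ID (cx).
            string url = "https://www.googleapis.com/customsearch/v1"
                       + "?key=YOUR_KEY&cx=YOUR_CX&q=" + Uri.EscapeDataString("example query");

            using (WebClient client = new WebClient())
            {
                // The response is JSON; parse it with your JSON library of choice
                // and walk the "items" array to find your site's position.
                string json = client.DownloadString(url);
                Console.WriteLine(json);
            }
        }
    }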
I asked about this a year ago and got some good answers. Definitely the Agility Pack is the way to go.
In the end we did code up a rough scraper which did the job and ran without any problems. We were hitting Google relatively lightly (about 25 queries per day). We took the precaution of randomising 1) the order of queries, 2) the time of day, and 3) the pause between queries. I don't know if any of that helped, but we were never hit by a CAPTCHA.
We don't bother with it much now.
Its main weaknesses were/are:
we only bothered to check the first page (we could perhaps have coded an enhanced version which looked at the first X pages, but maybe that would have been a higher risk in terms of being detected by Google).
its results were unreliable and jumped around. You could be 8th every day for weeks, except for a single random day when you were 3rd. Perhaps ... the whole idea of carefully taking a daily or weekly reading and logging our ranking is too flawed.
To answer your question about Google breaking your code: Google didn't make a fundamentally breaking change in all the months we ran it, but they did change something which broke the "snapshot" we were saving of the result (maybe a CSS change?), which did nothing to improve the credibility of the results.
We went through this process a few months back. We tried the APIs mentioned above and the results were not even close to the actual search results (Google for this; there's lots of information).
Scraping the page is a problem: Google seems to change the markup every few months and also has checks in place to work out whether or not you are human.
We eventually gave up and went with one of the commercially available (and frequently updated) bits of kit.
I've coded a couple of projects on this, parsing organic results and AdWords results. HTML Agility Pack is definitely the way to go.
I was running a query every 3 minutes, I think, and that never triggered a CAPTCHA.
In regard to the formatting changing, I was picking up on the ID of the UL (talking from memory here), and that only changed once in around a year (organic and AdWords).
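As an illustration only, a parsing pass with the Agility Pack looks roughly like the sketch below. The //h3/a selector is an assumption about Google's result markup at the time, and it is exactly the part that breaks when the layout changes.

    using System;
    using HtmlAgilityPack;

    class RankChecker
    {
        // Returns the 1-based position of the first organic result whose URL
        // contains the given domain, or -1 if it isn't on the page.
        static int FindPosition(string html, string domain)
        {
            HtmlDocument doc = new HtmlDocument();
            doc.LoadHtml(html);

            // Assumed selector: each organic result title is an <h3> wrapping a link.
            HtmlNodeCollection links = doc.DocumentNode.SelectNodes("//h3/a[@href]");
            if (links == null)
                return -1;

            int position = 0;
            foreach (HtmlNode link in links)
            {
                position++;
                string href = link.GetAttributeValue("href", "");
                if (href.IndexOf(domain, StringComparison.OrdinalIgnoreCase) >= 0)
                    return position;
            }
            return -1;
        }
    }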
As mentioned above though, Google don't really like you doing this! :-)
I'm pretty sure that you will not easily get access to Google search results. They are constantly trying to stop people doing it.
If you've thought about screen scraping, be aware that they will start displaying a CAPTCHA and you won't be able to get anything.

Facebook Work History Unavailable

I am relatively new to FB development, but I have managed to do what I wanted, which was to get a list of friends and, for each of them, get their work history. I accomplished this by using Facebook's own C# SDK and calling the Get method on each of my friends, basically doing: _fa.Get("/").
This worked perfectly up until a couple of days ago, when it suddenly stopped working, and now the work history (and education, for that matter) is no longer available to me in the JSONObject returned from the Get method. One other thing of note: a couple of my friends who installed the app I am developing (as a means of testing) do return their work history, but other friends who have not installed the app, and who do have a visible work history (which I can see if I browse to their profile), do not return it in my Get call...
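To illustrate, the call boils down to something like the following (simplified, shown with the SDK's FacebookClient class and an explicit field list; the field names are from memory rather than copied from my code):

    using System;
    using Facebook;

    class WorkHistoryExample
    {
        static void PrintFriendWork(string accessToken, string friendId)
        {
            var fb = new FacebookClient(accessToken);

            // Ask explicitly for the fields of interest (field names assumed).
            dynamic friend = fb.Get(friendId, new { fields = "name,work,education" });

            Console.WriteLine((string)friend.name);
            if (friend.work != null)
            {
                foreach (dynamic job in friend.work)
                {
                    Console.WriteLine("  " + (string)job.employer.name);
                }
            }
        }
    }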
The obvious explanation is that FB changed something, and now applications can no longer access this information unless installed on a specific user profile (hence the odd behavior).
Has anyone else encountered the same thing? Am I doing something wrong?
Any help would be appreciated!
Many thanks,
As you said, the answer appears obvious and is probably some change in default privacy settings that has been rolled out. Note that Facebook introduced a couple of new features this week, most notably the "places" stuff. Most likely, work history is no longer shared by default. You probably only had access before because the work history was publicly visible anyway.
Update
It seems to me the best places to check for changes are the developer blog and the developer roadmap.

Migrating C# code from Cassandra .5 to .6

I have some simple code, derived from an example, that is meant to do a quick write to the Cassandra DB, then loop back and read all current entries; everything worked fine. When .6 came out, I upgraded Cassandra and Thrift, which threw errors in my code (www[dot]copypastecode[dot]com/26760/). I was able to iron out the errors by converting the necessary types; however, the version that compiles now only seems to read one item back, and I'm not sure whether it's not saving DB changes or whether it's only reading back one entry. The "fixed" code is here: http://www.copypastecode.com/26752/. Any help would be greatly appreciated.
First of all, let me say that you should definitely use TBufferedStream instead of TSocket for the TBinaryProtocol; that will make a huge impact on your application's performance.
Per the Apache Thrift API documentation, the BATCH_INSERT method is deprecated, so it could have a misleading bug whereby that operation actually only inserts the first column. That said, why don't you try BATCH_MUTATE instead?
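For reference, a connection sketch with a buffered transport and the generated client is below; TBufferedTransport is the buffered class name in the Thrift C# builds I've used, and the batch_mutate call is left as a commented outline because the mutation map depends on your column family layout.

    using Thrift.Protocol;
    using Thrift.Transport;
    using Apache.Cassandra; // namespace of the generated Thrift bindings; may differ in your build

    class CassandraConnectionExample
    {
        static void Main()
        {
            // Buffer the socket so each Thrift call isn't written byte-by-byte.
            TTransport transport = new TBufferedTransport(new TSocket("localhost", 9160));
            TProtocol protocol = new TBinaryProtocol(transport);
            Cassandra.Client client = new Cassandra.Client(protocol);

            transport.Open();
            try
            {
                // Prefer batch_mutate over the deprecated batch_insert:
                // build a Dictionary<rowKey, Dictionary<columnFamily, List<Mutation>>>
                // and pass it with the keyspace and a consistency level, e.g.
                // client.batch_mutate("Keyspace1", mutationMap, ConsistencyLevel.ONE);
            }
            finally
            {
                transport.Close();
            }
        }
    }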
By the way, why are you trying to use Thrift directly? There are some nice C# clients for Cassandra that actually perform really well. You can find the whole list at http://wiki.apache.org/cassandra/ClientOptions.
I'm the author of one of them; it is kept pretty much up to date with Apache and is being used by some companies in production environments. Take a look at my homepage.
