It seems that this question has been asked many times before; however, none of the questions I have read really reached any general consensus or substantiated conclusion. So...
I have a .NET 3.5 application framework that is made up of the following nodes:
NET.TCP WCF Listener Service (OS Level Service)
A request processing framework that sits under the listener
An SQL 2008 database that contains configuration data
An SQL 2008 database that contains a store of processed requests (i.e. message data in, message data out, times, status logs etc)
The framework works exactly as required and I am getting great/acceptable response times. However I need to get these response times down as low as they possibly can be. One of the obvious solutions is to implement caching of the, rarely changing, configuration data (node #3).
The key requirements of the caching model would be:
Ability to have cached objects expire after they have not been used for X timespan (i.e. sliding expiration)
Ability to have cached objects expire after X timespan regardless of their last use.
Ability to clear the cache, either entirely or selectively, in a thread-safe manner, without impacting any process that may be querying the cache at the time the clear methods are called
As the framework is not a web app, System.Web.Caching cannot be used (well, that's the general advice I have read). It seems a little overkill to add the MS Enterprise Library to the project just for the Caching Application Block functionality (that, and I have heard that MS is deprecating it now that .NET 4 has System.Runtime.Caching). Moving to .NET 4, and therefore System.Runtime.Caching, isn't viable.
Then there is another aspect to consider. The configuration data is coming out of a SQL database, which performs its own caching of commonly used data. So maybe caching isn't required here at all? That said, the DB and the service reside on separate servers, so caching in memory on the service's server would remove the overhead of the network comms between server and DB.
So the question is what caching model, if any, would you suggest for this? At the moment I am leaning towards writing my own, however if this can be avoided I am all ears.
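For what it's worth, "writing my own" for these three requirements does not have to be a big job on .NET 3.5. Below is a minimal sketch of what I have in mind (type and member names are made up, and a single lock is assumed to be acceptable for rarely changing configuration data); on a miss the caller would load from the database and call Set.

```csharp
using System;
using System.Collections.Generic;

// Minimal thread-safe cache with sliding and absolute expiration (illustrative only).
public class SimpleCache<TKey, TValue>
{
    private class Entry
    {
        public TValue Value;
        public DateTime LastAccessUtc;
        public DateTime AbsoluteExpiryUtc;
    }

    private readonly Dictionary<TKey, Entry> _entries = new Dictionary<TKey, Entry>();
    private readonly object _gate = new object();
    private readonly TimeSpan _slidingExpiration;
    private readonly TimeSpan _absoluteExpiration;

    public SimpleCache(TimeSpan slidingExpiration, TimeSpan absoluteExpiration)
    {
        _slidingExpiration = slidingExpiration;
        _absoluteExpiration = absoluteExpiration;
    }

    public void Set(TKey key, TValue value)
    {
        lock (_gate)
        {
            _entries[key] = new Entry
            {
                Value = value,
                LastAccessUtc = DateTime.UtcNow,
                AbsoluteExpiryUtc = DateTime.UtcNow + _absoluteExpiration
            };
        }
    }

    public bool TryGet(TKey key, out TValue value)
    {
        lock (_gate)
        {
            Entry entry;
            if (_entries.TryGetValue(key, out entry))
            {
                DateTime now = DateTime.UtcNow;
                bool slidingExpired = now - entry.LastAccessUtc > _slidingExpiration;
                bool absoluteExpired = now > entry.AbsoluteExpiryUtc;
                if (!slidingExpired && !absoluteExpired)
                {
                    entry.LastAccessUtc = now;   // sliding expiration: touch on read
                    value = entry.Value;
                    return true;
                }
                _entries.Remove(key);            // lazily evict expired items
            }
            value = default(TValue);
            return false;
        }
    }

    // Clearing does not disturb readers: they simply block briefly on the lock,
    // then miss and fall back to the database.
    public void Clear() { lock (_gate) { _entries.Clear(); } }
    public void Remove(TKey key) { lock (_gate) { _entries.Remove(key); } }
}
```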
EDIT
After looking at the MS Enterprise Library a little more, it seems that this may be a viable solution. It meets most of the requirements and is not overly complicated. This afternoon I intend to do some threading tests to make sure that it works as expected. The question that I do have, though, is: is this library production-ready, or is it more of a technology demonstration?
If you are on a single server, then I would go with the MS Enterprise Library Caching Block. It works fine in a production environment. If you get into multiple servers, then I would move to something like memcached.
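For reference, adding and expiring items with the Caching Application Block looks roughly like the sketch below (written from memory of EntLib 4.x; the key names are made up, the cache manager itself is configured in app.config, and the EntLib documentation has the exact expiration overloads):

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

static class ConfigCache
{
    // The cache manager defined in app.config's <cachingConfiguration> section.
    static readonly ICacheManager Cache = CacheFactory.GetCacheManager();

    public static void Put(string key, object value)
    {
        // Both sliding and absolute expiration, covering requirements #1 and #2.
        Cache.Add(key, value,
                  CacheItemPriority.Normal,
                  null,                                        // no refresh action
                  new SlidingTime(TimeSpan.FromMinutes(20)),   // expire when unused
                  new AbsoluteTime(DateTime.Now.AddHours(4))); // expire regardless of use
    }

    public static object Get(string key) { return Cache.GetData(key); } // null on miss
    public static void Evict(string key) { Cache.Remove(key); }         // selective clear
    public static void Clear()           { Cache.Flush(); }             // full clear
}
```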
My case is this: I'm developing a system that helps a car service communicate with its customers and vice versa. There will be two products - a mobile app written in Xamarin.Forms and a web page in ASP.NET - both doing the same things, more or less, as alternatives. The database will be hosted on SQL Server. And now I'm confused about the "middle" layer. I have been reading a lot about WCF and Web API and I still can't figure out which is better for me. Any suggestions for this scenario?
"Better" is always a hard question to answer. So, that's my disclaimer.
Web API is currently pretty standard, and quite easy. It allows for simple REST APIs - although these are very doable with WCF as well.
The main differences between WCF and Web API:
Web API
is, well, web (HTTP) - almost every language supports it, and it's relatively lightweight.
WCF
is big - HTTP is just one of the binding options. It's ideal for enterprise-wide connectivity solutions, for example reusing your logic across HTTP bindings and/or message queueing.
One nice feature of WCF is that, at least for C#, it generates client libraries and models for you. It comes with Visual Studio (note: see warning). For Web API, you might need to create the client libraries yourself - which will basically be a lot of HTTP calls.
If you want it simple, Web API has very good support and can be implemented easily from any language - the clients and models are pretty straightforward - but usually you do need to code them yourself, unless you use the OpenAPI spec and some toolkits.
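For example, a hand-written client for a Web API endpoint is usually little more than an HttpClient wrapper plus a shared model class. A sketch, with an assumed endpoint URL and a made-up Customer model:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Model shared (or duplicated) between the ASP.NET site, the API, and the Xamarin app.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerApiClient
{
    private readonly HttpClient _http = new HttpClient();

    // GET /api/customers/{id} and deserialize the JSON body into the model.
    public async Task<Customer> GetCustomerAsync(int id)
    {
        string json = await _http.GetStringAsync(
            "https://example.com/api/customers/" + id);   // assumed endpoint
        return JsonConvert.DeserializeObject<Customer>(json);
    }
}
```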
Warning
The generated WCF libraries might or might not be compatible with the framework (Mono, Xamarin, Core, etc.) you are using. As #Dai mentions, the WCF client library generation might be outdated, although I do not know whether there are other open-source tools available to extract clients from the WSDL. So, you should check whether your client is compatible first.
For Web API client generation, you can look at tools like SwaggerHub. Do note: you need to define the spec in your application (or provide it explicitly).
See: https://swagger.io/tools/swaggerhub/
The advice (obviously just an opinion)
If you don't need the full package of WCF, the extensive binding capabilities and such, I would go for the WebAPI variant.
If you combine it with Swagger (OpenAPI spec), you'll get a pretty open and easy to use API available for a broad variety of languages.
For more info on swagger/swashbuckle: https://learn.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-3.1&tabs=visual-studio
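Following that tutorial, wiring Swashbuckle into an ASP.NET Core 3.1 project looks roughly like this (the API title is made up; the package is Swashbuckle.AspNetCore):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        // Generates the OpenAPI (Swagger) document from your controllers.
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "CarService API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();   // serves /swagger/v1/swagger.json
        app.UseSwaggerUI(c =>
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "CarService API v1"));
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```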
I am trying to develop a data management solution for a commercial product that meets several criteria. The criteria and my reasoning are as follows:
The solution should be in C# or support C#
Manage data as JSON
No external schema maintenance
Be able to cache some or all data in memory and persist to disk
Not require an additional installation
If the solution involves third-party software, the license must support no-cost commercial use
Requirement #1: My application is written in C# and I would prefer that any solution does not involve integrating with applications, libraries, or code in another language.
Requirement #2: There are several JSON-based tools and libraries that I would like to utilize, so I need a solution where data is either in or easily converts to/from JSON.
Requirement #3: I want to avoid the schema maintenance that comes with using relational database solutions. I prefer to manage mismatched data-to-object mappings in code and have code update older data instead of managing schema updates separately.
Requirement #4: I require some or all data to be loaded into memory at all times, and all data to be persisted to disk. Whether data persists in memory or not should be optional per data type.
Requirement #5: When installing my product I don't want to have any secondary installations or have any external services running other than my application. A completely self-contained solution is best.
Requirement #6: The intended use is for a distributed commercial product. I would prefer to avoid any additional fees or licensing issues that come with many third-party solutions.
To date I have tried several solutions. Originally I did not have as many constraints and went with SQLite.NET, and its use wasn't unpleasant, but the overhead from schema maintenance and the data format was more than I would like. I investigated a lot of NoSQL solutions (such as RavenDB), other third-party solutions (Karvonite), and simple JSON file storage implementations, but I'm not satisfied with any of them.
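For comparison, the "simple JSON file storage" route I tried can be as small as a dictionary serialized with Json.NET. A sketch, with made-up type names, that keeps everything in memory and rewrites the whole file on every save (fine for small data sets, not for large ones):

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Minimal embedded JSON store: all records of one type live in memory
// and are persisted to a single file. Names are illustrative only.
public class JsonStore<T>
{
    private readonly string _path;
    private readonly object _gate = new object();
    private Dictionary<string, T> _items = new Dictionary<string, T>();

    public JsonStore(string path)
    {
        _path = path;
        if (File.Exists(path))
            _items = JsonConvert.DeserializeObject<Dictionary<string, T>>(
                File.ReadAllText(path)) ?? new Dictionary<string, T>();
    }

    public void Upsert(string key, T value)
    {
        lock (_gate)
        {
            _items[key] = value;
            File.WriteAllText(_path,
                JsonConvert.SerializeObject(_items, Formatting.Indented));
        }
    }

    public bool TryGet(string key, out T value)
    {
        lock (_gate) { return _items.TryGetValue(key, out value); }
    }
}
```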
Is there a custom approach or solution that I am missing, that someone else has used successfully? I'm hoping that I am simply overlooking the option(s) that I am after, and that some NoSQL and .NET experts out there have enough experience in this area to point me in the right direction.
EDIT: In case any original commentators are confused, I updated the question and title to better adhere to SO's policies.
Fluent NHibernate Automapping on top of SQLite would meet all your requirements except edit #2 - "NoSQL, preferably all data is a JSON document"
It automaps a relational DB schema from your object model... it does not use JSON. Edit: you might be able to save JSON data as a BLOB, however (caveat: I know almost nothing about JSON).
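Roughly, and from memory (the entity and file names are made up), the automapping setup looks like this:

```csharp
using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

// Illustrative only: automap every entity in the assembly containing Product
// onto a local SQLite file - no hand-written schema or mapping files.
public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.UsingFile("app.db"))
            .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Product>()))
            .ExposeConfiguration(cfg =>
                new NHibernate.Tool.hbm2ddl.SchemaUpdate(cfg).Execute(false, true))
            .BuildSessionFactory();
    }
}

public class Product   // example entity; an Id property is required for automapping
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}
```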
Have you taken a look at the Karvonite Framework? The Karvonite Framework provides a strongly-typed embedded database system that includes a portable library implementation for .NET / Windows Store / Silverlight / Windows Phone / Xbox development. I have only used this for small database implementations but so far it has met every one of my needs.
Recently, a project came my way with requirements to ...
1. Build a C# console app that continuously checks website availability.
2. Save website status somewhere so that different platforms can access the status.
The console app is completed but I'm wrestling with where I should save the status. I'm thinking a SQL record.
How would you handle where you save the status so that it's extensible, flexible and available for x number of frameworks or platforms?
UPDATE: Looks like I'll go with DB storage fronted by a RESTful service. I'll also save the status to an XML file as a fallback in case the service is down.
The availability of the websites could be POSTed to a second web service which returns a JSON/XML result on the availability of said website(s). This pretty much means any platform/language that is capable of making a web-service call can check the availability of the website(s).
Admittedly, this does give a single point of failure (the status web service), but inevitably you'll end up with that kind of thing anyway unless you want to start having fail-over web services, etc.
You could save it as XML, which is platform independent. And then to share it, you could use a web server and publish it there. It seems ironic to share website availability on another website, but just like websites, other types of servers/services can have downtime too.
You could create a web service; you will probably need to open fewer unusual ports on the firewall to connect to an HTTP server than to connect to a SQL Server database. You can also extend that service layer to add business rules more easily than at the database level.
I think a web service is the best option. Just expose a RESTful API that returns a simple JSON response with the server status. Fast and cheap on resources.
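A sketch of what that could look like with ASP.NET Web API (the model and controller names are made up, and the returned status would really come from wherever the console app writes it):

```csharp
using System;
using System.Web.Http;

// Illustrative model for the latest status check of a monitored site.
public class SiteStatus
{
    public string Url { get; set; }
    public bool IsUp { get; set; }
    public DateTime CheckedUtc { get; set; }
}

public class StatusController : ApiController
{
    // GET api/status?url=https://example.com
    public SiteStatus Get(string url)
    {
        // In practice this would read the row the console app last wrote.
        return new SiteStatus { Url = url, IsUp = true, CheckedUtc = DateTime.UtcNow };
    }
}
```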
Don't re-invent the wheel. Sign up for Pingdom, Montastic, AlertBot, or one of the plethora of other pre-existing services that will do this for you.
But, if you really must, a database table would be fine.
We're going to develop an ASP.NET MVC 4 web application which needs to be workflow based.
The scenario is something like this:
The Scenario
Users request a bank loan by submitting a form. Operators find the requests in a grid on their dashboard and review the details; if a request is okay they send it to the boss, and if not they send it back to the user to fix or complete it. The boss decides whether to pay the loan or not; if yes and the amount is below a certain threshold it goes to the fund section, and if it is above that threshold the request goes to another boss, and so on.
Requirements
In each state there might be some additional relevant data attached, for example the user's points calculated at the time the request was submitted.
A process manager (admin) exists who can cancel any request wherever it is or pass the request to anyone he wishes.
There might be multiple transitions available along which a state can move; the state should check the conditions and choose one transition.
Meanwhile, operators can:
Pass requests between each other (if they're allowed to), for example if they are too busy or they're going on vacation (substitute)
See the history of requests and what data changed in each round-trip (versioning)
Write notes before sending the request on to the next person or returning it to someone.
The Question
In the above scenario, which technology is more suitable, and why?
Workflow Foundation
BizTalk
or libraries like:
Simple State Machine
Jazz
stateless (see the sketch after this list)
State Machine Compiler
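For a sense of scale, here is a sketch of part of the loan flow using the stateless library listed above (state, trigger, and threshold values are made up):

```csharp
using Stateless;

public enum LoanState { OperatorReview, BossReview, SeniorBossReview, Funded, Rejected, ReturnedToUser }
public enum LoanTrigger { Approve, Return, Reject, Fund }

// Illustrative loan workflow: guarded transitions express the
// "below/above a certain amount" routing from the scenario.
public class LoanWorkflow
{
    private readonly StateMachine<LoanState, LoanTrigger> _machine;
    public decimal Amount { get; set; }

    public LoanWorkflow(LoanState initial)
    {
        _machine = new StateMachine<LoanState, LoanTrigger>(initial);

        _machine.Configure(LoanState.OperatorReview)
            .Permit(LoanTrigger.Approve, LoanState.BossReview)
            .Permit(LoanTrigger.Return, LoanState.ReturnedToUser);

        _machine.Configure(LoanState.BossReview)
            .PermitIf(LoanTrigger.Fund, LoanState.Funded, () => Amount <= 10000m)
            .PermitIf(LoanTrigger.Approve, LoanState.SeniorBossReview, () => Amount > 10000m)
            .Permit(LoanTrigger.Reject, LoanState.Rejected);
    }

    public void Fire(LoanTrigger trigger) { _machine.Fire(trigger); }
    public LoanState State { get { return _machine.State; } }
}
```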
I would not use BizTalk for this, even though I was a BizTalk developer for a number of years, and implemented similar workflows using it.
The reason is that I have come to the conclusion that modelling complex business workflows in BizTalk is anathema to what BizTalk really does well, which is high-performance message routing, transformations, and host integration.
However, neither would I use WF for this. I think that MS have made WF needlessly difficult to work with. I worked with WF3, which was the first version, so perhaps things have improved. But as far as I know, MS removed state machine workflows from WF4 onward and now only supports sequential workflows.
So in answer to your question, I think neither are suitable for this purpose.
Why not start with NO technology stack except for ASP.NET MVC, jQuery, and SQL Server? This seems to be the MS web development standard at the moment, and you're likely already licensed for it.
Even though you seem to have your requirements up front, you'll likely find that some or even most of the requirements you have listed are subject to change or even removal.
So start with one or two core user stories which can be delivered quickly in small iterations, and then continue to add features like that. When you get to the point where you need to start looking at other technologies or frameworks, that is the time to reassess the decision. At that point I would personally look at using NServiceBus sagas as another option to manage your long-running processes.
I think making a decision about tech stack too early in the planning process can work against you in many ways.
Sorry, this does not address your original question directly.
I've been looking for a simple key/license system for our users. It's partly to stop piracy (to keep users from sharing the application around) and partly to track the number of "licensed users" we have. I have already read a few good suggestions on SO, but I'm curious how people have implemented the 30-day evaluation criterion.
Do you generate a key that stores the date somewhere and do a comparison each time, or is it a little more complicated? Deleting the file or removing the registry entry shouldn't reset the evaluation period.
Are there any example implementations out there that can give me a head start? The irony is that our PM doesn't want to license a third-party system to do it for us.
This is for a Windows Forms application.
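One common shape for the date-comparison approach is to issue a signed expiry token rather than a bare date, so that editing the stored value invalidates it. A sketch, assuming an RSA key pair you generate yourself (where the token is stored, and clock rollback, are separate problems this does not solve):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative 30-day trial check: the installer (or an activation server) issues
// "expiryTicks|signature" signed with your private RSA key, and the application
// only trusts a date it can verify with the embedded public key.
public static class TrialLicense
{
    public static string Issue(RSACryptoServiceProvider privateKey, DateTime expiryUtc)
    {
        byte[] payload = Encoding.UTF8.GetBytes(expiryUtc.Ticks.ToString());
        byte[] signature = privateKey.SignData(payload, new SHA1CryptoServiceProvider());
        return expiryUtc.Ticks + "|" + Convert.ToBase64String(signature);
    }

    public static bool IsValid(RSACryptoServiceProvider publicKey, string token)
    {
        string[] parts = token.Split('|');
        if (parts.Length != 2) return false;

        byte[] payload = Encoding.UTF8.GetBytes(parts[0]);
        byte[] signature = Convert.FromBase64String(parts[1]);
        if (!publicKey.VerifyData(payload, new SHA1CryptoServiceProvider(), signature))
            return false;                                   // tampered token

        return DateTime.UtcNow < new DateTime(long.Parse(parts[0]), DateTimeKind.Utc);
    }
}
```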
Have you checked out the Rhino Licensing project by Ayende Rahien? You can also see his blog post about licensing a commercial product, which led him to develop this solution.
There are two separate challenges: (i) how do you prevent a copied app from running, and (ii) how do you prevent users from ripping out or bypassing your prevention scheme? The first one is usually done by taking a hard-to-copy signature of the user's system (e.g. hard drive ID + processor ID + RAM, etc.), using it as the seed/key, AND activating it online by calling "home".
The second issue is harder to address in .NET, since the source code can in some way be extracted and recompiled to exclude your protection system. The key here is to make it cheaper to buy the license than to remove the protection at the user's end. You may find that for most products, the suggestion to use a customized engine to encrypt your product libraries (which also contain your copy protection) and decrypt them at initial run-time might be enough.
I am not sure you can actually protect a .NET application - there may be commercial solutions that do the trick. The reason is that .NET code can be decompiled with Red Gate's Reflector (formerly written by Lutz Roeder - thanks Jasonh for the heads up). The best way to deal with it is to look into code obfuscation, which makes reflection trickier; I can point you to one place I know of that does this for free - Phoenix at NtCore.com.
The more esoteric solution would be to create a .NET hosting environment in C++, load the binary image (which could be encrypted), and have the hosting environment decrypt it in memory - I have heard of that in theory but am not sure how it would be done in practice. Please do not roll your own protection scheme, as there could be a weakness.
Someone once said - "Security through obscurity"....
Hope this helps,
Best regards,
Tom.
I worked on a project that handled this by putting some critical functionality (for example data storage, reporting, or payments) on an external server we ran, and requiring the user to log in to this server to get the functionality.
Customers can make backups, share, or run the application locally, but to access this critical function they have to type a password in to our application and connect to our server. Customers knew the password allowed changing their data, so they would not want to share the password with other people.
This was handy because we do not care how many copies of the application are out in the wild; we only track server connections. We included machine-identifying data like the MAC address in the connection data, so we can track which machines are connecting.
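A sketch of collecting such a fingerprint in .NET (this hashes the first active NIC's MAC address; which identifiers you combine, and how, is up to you):

```csharp
using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Security.Cryptography;
using System.Text;

// Illustrative machine fingerprint sent along with each server connection.
public static class MachineFingerprint
{
    public static string Compute()
    {
        string mac = NetworkInterface.GetAllNetworkInterfaces()
            .Where(n => n.OperationalStatus == OperationalStatus.Up)
            .Select(n => n.GetPhysicalAddress().ToString())
            .FirstOrDefault(a => !string.IsNullOrEmpty(a)) ?? "unknown";

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(mac));
            return Convert.ToBase64String(hash);   // opaque identifier, not the raw MAC
        }
    }
}
```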
I'm not just saying this because my company sells the OffByZero Cobalt software licensing solution for .NET: your PM should know that software licensing is very hard to get right, and if you roll your own, you'll be supporting it for the foreseeable future.
Take a look at the article Developing for Software Protection and Licensing; it explains how to choose a solution, why you should obfuscate your application and gives a number of tips for structuring your code to be harder to crack.
In particular it makes the point that the vast majority of companies should outsource their software licensing, as it makes no sense to spend developer time on building and maintaining a complex system that isn't your core business.
What is more important to your company: adding an important new feature to your product, or tracking down a peculiar permission behaviour on an ancient version of Windows that's clobbering your licensing system?