Is it possible to cache a response once it has been produced on the server side, and then redeliver it in response to the same request?
Let me explain:
I have an endpoint that takes about 5 seconds to generate a response. This includes going to the database and fetching data, processing it, performing some computations on it, and serializing and gzipping the response; the entire thing takes 5 seconds.
Once this is done for the first time I want the result to be available for all the requests coming from all the users.
In my view, client-side caching (where you either cache the result on the client and do not hit the server at all for some time, or hit the server but get a 304 Not Modified instead of the data) is not good enough.
What I want is to hit the server and, if this endpoint (with the same set of parameters) was already called by anyone, get the full response. Is it possible at all?
You have a number of options for this.
One option is API-level caching: you create a key from the parameters required to generate the response, fetch the data, and save the key/response pair in the cache. The next time a request comes in, you recreate the key and check your cache first. If it's there, happy days, return it; if not, go fetch it and store it.
This of course depends on the amount of data you have; too much data, or data that is too big, and this will not work. You could also store it only for a while, say 10 minutes, 1 hour, etc.
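The cache-aside flow described above can be sketched in a few lines (shown in Python for brevity; the function names and the in-memory dict standing in for a real cache store are illustrative, not from any specific framework):

```python
import hashlib
import json
import time

_cache = {}         # key -> (expiry timestamp, response); stands in for a real cache
TTL_SECONDS = 600   # e.g. 10 minutes, as suggested above

def make_cache_key(endpoint, params):
    # Build a deterministic key from the endpoint and its parameters.
    canonical = json.dumps({"endpoint": endpoint, "params": params}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def get_response(endpoint, params, produce):
    """Return a cached response if present and fresh; otherwise produce and cache it."""
    key = make_cache_key(endpoint, params)
    entry = _cache.get(key)
    now = time.time()
    if entry is not None and entry[0] > now:
        return entry[1]                     # cache hit: skip the expensive work
    response = produce(params)              # the expensive 5-second work
    _cache[key] = (now + TTL_SECONDS, response)
    return response
```

Any subsequent request with the same parameters, from any user, is served from the stored response until the TTL expires.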
If you have a lot of data and caching like this isn't possible, then consider something else. You could build your own NoSQL cache store (using something like MongoDB, for example), store the response there, and retrieve it without any further processing, so it's a straight retrieve and thus very quick.
You could also use something like Redis Cache.
Lots of options, just choose whatever is appropriate.
I am currently working on a task where I need to synchronize data. For example:
A client makes a request to the API to get data (the API has access to the database), receives the data, and saves it to a list. That list is used for some time. In the meanwhile, another client accesses the same API and database and makes some changes to the same table. After a while the first client wants to update its current data. Since the table is quite big, say 10 thousand records, grabbing the entire table again is inefficient. I would like to grab only the records that have been modified, deleted, or newly created, and then update the list the first client has. If the client has no records, it classifies all of them as newly created (at start-up) and just grabs them all. I would like to do as much checking as possible on the client's side.
How would I go about this? I do have fields such as Modified, LastSync, and IsDeleted, so I can find the records I need, but the main issue is how to do it efficiently with minimal repetition.
At the moment I get all the rows at first; then, when I want to update (synchronize), I fetch the minimal required info (LastSync, Modified, IsDeleted, Key) from the API, compare it with what I have on the client, and send only the keys of the rows that don't match back to the server to get the full values for those keys. But I am not sure about the efficiency of this either, and I'm not sure how to update the current list with those values efficiently. The only way I can think of is a loop within a loop to compare keys and update the list, but I know that's not a good approach.
This will never work as long as you do not do the checks on the server side. There is always a chance that someone posts between your GET call to the server and your POST call to the server. Whatever check you can do on the client side, you can also do on the server side.
Depending on your DB and other setup, you could accomplish this by adding triggers on the tables/fields that you want to track in the database, and setting up a cache (you could use Redis, Memcached, Aerospike, etc.) with a cache refresh service. If something in the DB is added/updated/deleted, your trigger can write to a separate archive table in the DB. You can then set up a job from, e.g., Jenkins (or have a Kafka connector; there are many ways to accomplish this) to poll the archive tables and the original table for changes based on an ID, a date, or whatever criteria you need. Anything that has changed is refreshed and then written back to the cache. Then your API wouldn't be accessing the DB at all: it would just call the cache for the most recent data whenever a client requests it. Your separate service would be responsible for the synchronization of data, database access, and keeping the cache up to date.
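For the "loop in a loop" concern in the question, indexing the client's list by key turns the merge into a single pass over each collection. A language-agnostic sketch in Python (the Key and IsDeleted field names come from the question; everything else is illustrative):

```python
def merge_changes(client_rows, changed_rows):
    """Merge server-side changes into the client's list without a nested loop.

    Each row is a dict with at least 'Key' and 'IsDeleted'. Indexing by key
    makes this O(n + m) instead of the O(n * m) loop-in-a-loop approach.
    """
    by_key = {row["Key"]: row for row in client_rows}
    for row in changed_rows:
        if row.get("IsDeleted"):
            by_key.pop(row["Key"], None)   # drop records deleted on the server
        else:
            by_key[row["Key"]] = row       # update existing or add newly created
    return list(by_key.values())
```

The same dictionary-based merge applies regardless of language; in C# a `Dictionary<TKey, TRow>` plays the role of `by_key`.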
I have business logic with a lot of DB fetch operations and some fairly complex rules.
The data fetched rarely changes within a user's session.
For each and every action on the form (button click, value change in a textbox, etc.) we need to run the business logic to check whether it's a valid change.
Currently we are using an ASP.NET Web Forms application, and this business logic is in InSessionScope().
We are now working on migrating to a RESTful API (Web API).
Can we use sessions (InSessionScope()) in a RESTful service?
If not, how can we avoid extra database calls, reuse the same object on subsequent calls, and improve performance?
Based on my personal experience, never use Session in a REST application such as ASP.NET Web API, even though you can. Instead, use tokens for authorization and user profiles (with ASP.NET Identity). For performance (i.e. not hitting the DB too many times), I suggest some approaches I have used myself:
1 - Use cache! There are some great frameworks and libraries for caching, and you can cache at different layers: the query, the Web API response, etc. For example, I cache the entire API response (JSON) and automatically invalidate it on POST/PUT/DELETE requests. In .NET you can use https://github.com/filipw/Strathweb.CacheOutput
You can also use Redis for caching (if you don't want to cache locally on the server but want a distributed cache instead).
2 - Try to think in a NoSQL way. In our application we use a mix of databases: SQL Server, but also MongoDB (especially for large amounts of data). For example, we use SQL Server to manage ASP.NET Identity, but we use MongoDB to store our products (we have about 6 million of them), and a query takes about 1 second (even with aggregation!).
3 - If you can, use localStorage on the front end to store some information, and then sync it when you need to.
Hope this helps. Enjoy Web API, enjoy REST! (And leave Web Forms as soon as you can, in my opinion!)
You can use tokens implemented by yourself, or a JWT.
If you choose to implement a custom token, your login method must return a token to your app; then, on every API call, you pass this token as a header or query string, and the server verifies it to validate the request.
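If you do implement a custom token rather than using a JWT library, the important point is to sign the payload so the server can verify it statelessly. A minimal sketch in Python (the secret and the payload fields are illustrative; a real implementation would also embed an expiry time):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # illustrative; keep the real secret out of source control

def issue_token(payload):
    """Return '<base64 payload>.<hmac signature>' for the login response."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate_token(token):
    """Recompute the signature; return the payload if it matches, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(body))
```

This is the same idea a JWT encodes in a standard format, which is usually the safer choice.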
I have two different calls to a controller in Web API. Let's say the route for the first one is http://localhost/1 and for the second http://localhost/2.
My goal is to do something like a transaction. In the first call I want to send data to the server. This is where my first question comes in: how can I save data on the server for a short time, without saving it to the database? What's the best practice for this?
In the second call I send a verification code. If the code is OK, then I do something with the data the client sent in the previous call.
This sounds like a transaction to me: commit if the code is OK, or roll back if code verification failed. But I'm not sure whether it's possible to use transactions in this kind of scenario, with two different POST methods.
Can someone help me think about this a little bit more?
Don't save anything temporarily in the server's memory; that's bad practice. Web API is stateless, so it's better to persist every detail instead.
In the first POST call, return a unique transaction reference number (use SQL Server to save this information).
E.g. POST to http://localhost/requestVerificationNumber/ which returns a GUID
In the second POST call, cross-check the verification code by matching it with the unique transaction number stored before. It is the responsibility of the second POST call to send that reference number.
E.g. POST to http://localhost/verifyCode/ along with the GUID sent before.
The advantage of this method is that all the transactions are stored in SQL Server and can be inspected or manipulated later.
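The two-call flow above can be sketched as follows (Python for brevity; the in-memory dict stands in for the SQL Server table, and all names are illustrative):

```python
import uuid

_pending = {}  # transaction GUID -> (payload, expected code); stands in for a SQL table

def start_transaction(payload, expected_code):
    """First POST: store the data with a reference number and return the GUID."""
    ref = str(uuid.uuid4())
    _pending[ref] = (payload, expected_code)
    return ref

def verify_and_commit(ref, code):
    """Second POST: cross-check the code against the stored transaction."""
    entry = _pending.get(ref)
    if entry is None or entry[1] != code:
        return None  # unknown reference or wrong code: nothing is committed
    payload, _ = _pending.pop(ref)
    return payload   # "commit": hand the data on for real processing
```

In practice you would also record a timestamp with each row so stale, never-verified transactions can be purged.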
I have a page that executes a long process, parsing over 6 million rows from several CSV files into my database.
My question: when the user clicks "GO" to start processing and parsing the 6 million rows, I would like to set a session variable that is immediately available to the rest of my web application, so that any user of the site knows that a user with a unique ID number has started parsing files, without having to wait until all 6 million rows have been processed.
Also, with jQuery and JSON, I'd like to get feedback on a web page about which CSV file is being processed and how many rows have been processed.
There could be other people parsing files at the same time. How could I track all of this and prevent any mix-ups with other users, even though there is no login or user authentication on the site?
I'm developing in C# with .NET 4.0, Entity Framework 4.0, jQuery, and MS SQL 2008 R2.
I was thinking of using session variables; however, in the static [WebMethod] for my jQuery JSON calls I am not able to access the session unless I use HttpContext.Current.Session, and I am not sure whether that solution would work.
Any guidance or ideas would be mostly appreciated.
Thanks
First of all: session variables are not supposed to be visible to every user everywhere.
When a client connects to the server, a session is created for them by the server, and the next time the same user makes a request (within the session's expiration time), the session (and its variables) are usable.
You can use a static class for this if you want application-wide state. For example:
using System.Collections.Concurrent;

public static class MyApplicationStateBag
{
    // Shared across all requests, so use a thread-safe dictionary
    // initialized up front rather than an unassigned property.
    public static readonly ConcurrentDictionary<string, object> Objects =
        new ConcurrentDictionary<string, object>();
}
For your progress report, you can use an asp:Timer to check the progress percentage every second or two.
Here is some sample code that I have written for an asp:Timer within an UpdatePanel:
Writing a code trigger for the updatepanel.
I suggest you use a Guid identifying the current operation as the key into your state bag.
The correct way of doing this is via services, for example WCF services. You don't want to put an immense load on the web server, which is not designed for that.
The usual scenario:
User clicks on GO button
Web server creates a job and starts this job on a separate WCF service
Each job has ID and metadata (status, start time, etc.) that is persisted to the storage
Web server returns response with job ID to the user
The user, via AJAX (jQuery), queries the job in the storage; once it has completed, you can retrieve the results
You can also save Job ID to the session
P.S. It's not a direct answer to your question, but I hope it helps.
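The job pattern in the steps above can be sketched like this (Python for brevity; a background thread stands in for the separate WCF service, and the dict for persistent storage; all names are illustrative):

```python
import threading
import time
import uuid

_jobs = {}  # job ID -> metadata; stands in for persistent storage

def start_job(work):
    """Create a job record, run the work on a separate worker, return the job ID."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"status": "running", "started": time.time(), "result": None}

    def run():
        result = work()                      # the long-running parse/import
        _jobs[job_id]["result"] = result
        _jobs[job_id]["status"] = "completed"

    threading.Thread(target=run).start()
    return job_id                            # returned to the user immediately

def query_job(job_id):
    """What the AJAX poll would call: return the current status and result."""
    return _jobs.get(job_id)
```

The web request returns the job ID right away; the client then polls `query_job` until the status flips to completed.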
I'm making an application with server-side variables that change every second. Every second, those new values need to be shown to all the clients that have the web page open.
Most people have told me to go with Comet because I need to push/pull the data every second. Now I've got a few questions:
Given that I need the new data every second, what would be the better solution: pulling from the client or pushing from the server?
Also, the item IDs on the server side (with the variables each ID has) can change, and when the client refreshes the page he needs to get the oldest (still living) IDs. This means my jQuery/JavaScript on the client side must know which IDs it has on the page. What is the best way to do this?
Lastly, I can't find a good (not too expensive) Comet library/API for ASP.NET (C#). Has anyone used a Comet library with good results? We're looking at a site that should be able to handle 2000 Comet connections at any moment.
There is a SendToAll function in the PokeIn ASP.NET Ajax library.
WebSync by Frozen Mountain is a full-fledged scalable comet server for IIS and ASP.NET. It integrates seamlessly into your application stack to support real-time data communications with thousands of clients per server node.
Check it out, there's a Community edition freely available.
What would be a better solution looking at the fact that I need the new data EVERY SECOND, pulling from the client or pushing with the server?
I don't think it matters much, as the time between requests and the time new data becomes available is rather short. I would just instantiate a new XMLHttpRequest at the client after the previous one succeeds. You could send the server the last received data (if it is not too big) so it can compare that data with the data currently available on the server, and only send something back when new data is available.
Also the item ID's that are on the server side (with the variable's that ID got) can change and when the client refreshes the page he needs to get the oldest (and living) ID's. This would mean that my jquery/javascript on the client side must know which ID's he got on the page, what is best way to do this?
I'm not totally sure I understand what you mean, but if I'm right you can just store every name/value pair in an object. When a new variable arrives at the client, it doesn't replace the whole object; a variable that is already present is simply updated with the latest value. It could look like:
{ first_variable: 345,
second_one: "foo",
and_the_third: ["I", "am", "an", "array,", "hooray!"]
}
and when a new state of second_one arrives, e.g. "bar", the object is updated to:
{ first_variable: 345,
second_one: "bar",
and_the_third: ["I", "am", "an", "array,", "hooray!"]
}
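In code, that update is just a shallow merge of the incoming pairs into the existing state object (a Python sketch; in the browser this would be a plain JavaScript object updated the same way):

```python
state = {"first_variable": 345, "second_one": "foo",
         "and_the_third": ["I", "am", "an", "array,", "hooray!"]}

def apply_update(state, incoming):
    # Keys already present are updated in place, new keys are added,
    # and everything else in the object survives untouched.
    state.update(incoming)
    return state

apply_update(state, {"second_one": "bar"})
```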
Last thing is that I can't find a good (not to expensive) comet library/api for asp.net (C#). Anyone ever used a comet library with good results?
I don't have any experience with ASP.NET, but do you really need a library for this? Can't you just write the server-side code yourself so that, as I said, it leaves the connection open and periodically (or continually) compares the current state with the previously sent state?
UPDATE: To show that it's not that difficult to keep a connection open on the server side, here is a long-polling simulation I wrote in PHP:
<?php
sleep(5);
?>
<b>OK!</b>
Instead of letting the process sleep for a few seconds, you can easily test for changes of state in a loop. And instead of sending an arbitrary HTML element, you can send the data back, e.g. in JSON notation. I can't imagine it would be hard to do this in ASP.NET/C#.
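Replacing the fixed sleep with the state-checking loop described here might look like this (sketched in Python rather than PHP or C#; `get_current_state` is an assumed accessor for the server-side state):

```python
import time

def long_poll(get_current_state, last_seen_state, timeout=30.0, interval=0.5):
    """Hold the request open until the state changes or the timeout elapses."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        current = get_current_state()
        if current != last_seen_state:
            return current          # respond immediately with the new data
        time.sleep(interval)        # nothing new yet; keep the connection open
    return last_seen_state          # timeout: the client simply re-issues the request
```

The client includes the last state it saw with each request, so the server only answers early when something has actually changed.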