I am looking for the best way to cache a JSON proxy in .NET with C#.
The web service we are using restricts the number of hits per IP and is periodically unavailable. Luckily the data isn't very volatile, but it does change.
Ideally I'd like to cache the web response only when the request is valid. We would determine a valid response by two factors: the HTTP response code, and whether the JSON body contains a "too many requests" error.
I'd like to store the proxy request in cache using the proxied URL and query string so we could use the same proxy for multiple external resources and multiple queries.
I'd also like to store the request either on disk as a file or in the database, in case the URL is unavailable and the cache is empty.
Here is the proposed process:
Check the cache for results using the query string
If a cached version exists, use it. If not, make a new request.
If the request succeeds, store the new set of results in the cache and write the results to disk or the database.
If that request fails use the file or database cached version.
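Here is a rough sketch of what I have in mind, using MemoryCache with a disk fallback. The key scheme, the validity check, and the file naming are just placeholders, not a fixed design:

```csharp
using System;
using System.IO;
using System.Net;
using System.Runtime.Caching;
using System.Text;

// Rough sketch of the proposed flow; names and policies are placeholders.
public static class ProxyCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;
    static readonly string FallbackDir =
        Path.Combine(Path.GetTempPath(), "proxy-cache");

    // Build the cache key from the proxied URL plus its query string.
    public static string CacheKey(string url, string query)
    {
        return url + "?" + query;
    }

    // Treat a response as invalid when the JSON carries a rate-limit error.
    public static bool IsValid(string json)
    {
        return json != null && !json.Contains("too many requests");
    }

    public static string GetJson(string url, string query)
    {
        string key = CacheKey(url, query);

        // 1. Check the in-memory cache first.
        string cached = Cache.Get(key) as string;
        if (cached != null)
            return cached;

        // 2. Cache miss: make a new request.
        string json = TryFetch(key);
        if (IsValid(json))
        {
            // 3. Valid response: cache it and persist a disk fallback.
            Cache.Set(key, json, DateTimeOffset.Now.AddMinutes(30));
            Directory.CreateDirectory(FallbackDir);
            File.WriteAllText(FallbackPath(key), json);
            return json;
        }

        // 4. Request failed or was rate-limited: fall back to disk.
        string path = FallbackPath(key);
        return File.Exists(path) ? File.ReadAllText(path) : null;
    }

    // Returns null on any HTTP failure; WebClient throws on non-2xx codes.
    static string TryFetch(string fullUrl)
    {
        try
        {
            using (var client = new WebClient())
                return client.DownloadString(fullUrl);
        }
        catch (WebException)
        {
            return null;
        }
    }

    // Derive a filesystem-safe name from the cache key.
    static string FallbackPath(string key)
    {
        string safe = Convert.ToBase64String(Encoding.UTF8.GetBytes(key))
                             .Replace('/', '_');
        return Path.Combine(FallbackDir, safe + ".json");
    }
}
```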
I have read the MSDN articles and a few other blog posts, but it would be great to get a specific example on how to do this efficiently.
Thanks!
Related
What is the correct method to consume a resource, passing a token, after I am correctly authenticated? For example, is it right to do a GET with a Bearer Authorization header to get an array of JSON objects, or should I make a POST request?
According to Wikipedia
GET requests a representation of the specified resource. Note that GET
should not be used for operations that cause side-effects, such as
using it for taking actions in web applications. One reason for this
is that GET may be used arbitrarily by robots or crawlers, which
should not need to consider the side effects that a request should
cause.
and
POST submits data to be processed (e.g., from an HTML form) to the
identified resource. The data is included in the body of the request.
This may result in the creation of a new resource or the updates of
existing resources or both.
So, it does not depend on passing a token. It depends on whether your request merely retrieves a resource or creates/updates one.
To just retrieve a resource, use GET.
To create a new resource or update an existing one, use POST.
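As a minimal sketch (the URL and token below are placeholders), a GET with a Bearer Authorization header using HttpClient looks like:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Retrieving a protected resource with a bearer token: GET is the right
// verb because the request has no side effects.
class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Attach the token as an Authorization: Bearer header.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "your-token-here");

            HttpResponseMessage response =
                await client.GetAsync("https://api.example.com/items");
            response.EnsureSuccessStatusCode();

            // e.g. an array of JSON objects
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```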
This is my first time developing this kind of system, so many of these concepts are very new to me. Any and all help would be appreciated. I'll try to sum up what I'm doing as efficiently as possible.
Background: I have a web application running AngularJS with Bootstrap. The app communicates with the server and database through a web service written in C#. On the site, users can upload files and reference them later using direct links. There's no restriction on file type (yet), so just about anything is allowed.
My Goal: Having direct links creates a big security problem for me, since the documents/images are supposed to be private data. What I would prefer to do is validate a user's credentials when the link is clicked, then load the file in the browser using a more generic url path.
--Example--
"mysite.com/attachments/1" ---> (Image)
--instead of--
"mysite.com/data/files/importantImg.jpg"
Where I'm At: Not very far. My first thought was to add a page that sends the server request and receives a file byte stream along with the MIME type, which I can reassemble and present to the user. However, I have no idea if this is possible using a web service that sends JSON requests, nor do I have a clue how the reassembly process would work client-side.
Like I said, I'll take any and all advice. I'd love to learn more about this subject for future projects as well, but for now I just need to be pointed in the right direction.
Your first thought is correct; for it you need to use the Response object, and more specifically its AddHeader and Write functions. Of course this will be a different page that only handles file downloads, so it will sit perfectly fine alongside your JSON web service.
I don't think you want to do this with a web service. Just use a regular IHttpHandler to perform the validation and return the data. So you would have the URL "attachments/1" get rewritten to "attachments/download.ashx?id=1". When you've verified access, write the data to the response stream. You can use the Content Disposition header to set the file name.
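A minimal sketch of such a download.ashx handler follows; UserCanAccess and LookupFileName are placeholders for your own validation and lookup logic:

```csharp
using System.IO;
using System.Web;

// Sketch of the download.ashx handler described above.
public class DownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        int id = int.Parse(context.Request.QueryString["id"]);

        if (!UserCanAccess(context.User, id))
        {
            context.Response.StatusCode = 403; // forbidden, no file leaked
            return;
        }

        // The real path stays server-side and is never exposed to the client.
        string path = context.Server.MapPath("~/data/files/" + LookupFileName(id));

        context.Response.ContentType = MimeMapping.GetMimeMapping(path);
        context.Response.AddHeader("Content-Disposition",
            "inline; filename=\"" + Path.GetFileName(path) + "\"");
        context.Response.WriteFile(path);
    }

    // Placeholder: check the authenticated user's rights to attachment `id`.
    static bool UserCanAccess(System.Security.Principal.IPrincipal user, int id)
    {
        return user != null && user.Identity.IsAuthenticated;
    }

    // Placeholder: map an attachment id to its stored file name.
    static string LookupFileName(int id)
    {
        return "importantImg.jpg";
    }
}
```

With URL rewriting in place, "attachments/1" maps to "attachments/download.ashx?id=1" and the browser renders the stream using the Content-Type, so no client-side reassembly is needed.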
I have a requirement where I need to get all the issues for a particular project in JIRA. For this I have created a console application with a REST client class that makes a GET request call. For testing purposes the REST API URL is
"https://jira.atlassian.com/rest/api/latest/issue/JRA-9"
Using this URL I make an HttpWebRequest and get the response back as a JSON-formatted string. This JSON string contains all the issue-specific information, but my actual requirement is to get all the issues for a project.
I tried to find a project-specific URL for testing that returns JSON, and found http://kelpie9:8081/rest/api/2/search?jql=project=QA+order+by+duedate&fields=id,key, but for this I get the "The remote name could not be resolved: 'kelpie9'" error.
Could you please help me in this?
JIRA's REST API does not appear to currently support any project-based queries separate from their search API.
You can specify a specific project in the search by using JQL. Given that you know the project key (e.g., "JRA" in "JRA-9"), you can quickly search through all of its issues:
Working result: https://jira.atlassian.com/rest/api/latest/search?jql=project=JRA
One important note is that the response reports the actual total number of matching issues, which can be far larger than the number actually returned:
"startAt":0,"maxResults":50,"total":30177
You can add query string variables to the request to get more (or less) results. You can also control the fields related to issues to retrieve as well: https://jira.atlassian.com/rest/api/latest/search?jql=project=JRA&startAt=75&maxResults=75 (slower the more you request, and probably not nice to hit their public servers with big numbers).
You can even POST a JSON object that represents the query (slightly tweaked from the linked search docs):
{"jql":"project = JRA","startAt":75,"maxResults":75,"fields":["id","key"]}
Of interest, and as part of the JQL, you can sort the results by any field. Just add " order by id" to the project name, as in "jql=project=JRA+order+by+id" in the query string or "jql": "project = JRA order by id" in the POSTed JSON body.
Note: the above is the actual answer to the real question. The literal question, however, concerns the "The remote name could not be resolved: 'kelpie9'" error.
Their documentation shows kelpie9 as an example server name that they test on internally, running on port 8081. Your computer is not aware of a server named kelpie9, because it does not publicly exist. Replace kelpie9 with your JIRA server's internal hostname, and 8081 with whatever port it uses (or remove it if you do not see one when you view JIRA on your intranet site, which means port 80 for http and port 443 for https). For example, many companies run it at "https://jira/". You would then replace the example link with https://jira/rest/api/2/search?jql=project=QA+order+by+duedate&fields=id,key.
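Putting it together, here is a minimal console sketch of the GET search call. The URL targets Atlassian's public instance; swap the host for your own JIRA server, and keep maxResults small when hitting their servers:

```csharp
using System;
using System.IO;
using System.Net;

// Fetch a page of issues for a project via the JIRA search resource.
class JiraSearch
{
    static void Main()
    {
        string url = "https://jira.atlassian.com/rest/api/latest/search"
                   + "?jql=project=JRA+order+by+id&fields=id,key&maxResults=5";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The body is a JSON object with startAt, maxResults, total
            // and an "issues" array; page through using startAt.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```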
Suppose I have an ASP.NET MVC 3 application with an endpoint named /getdata; different users connect to the server through my PC client software and get private data using this endpoint. Different users are identified by their own well-encrypted tokens.
Now the problem is that ClientA told us he received another user's data. From ClientA's log we found he got ClientB's (but they don't know each other and can't share accounts). I looked through the code of my web application but couldn't find any way their data could get mixed.
So I wonder can this happen:
(1) ClientB starts an HTTP request to http://mysite.com/getdata, with his token in the HTTP header, via a web proxy.
(2) The web proxy forwards the request to my web server.
(3) My web server approves the request and returns ClientB's data, since everything is correct.
(4) ClientB gets his data and it displays correctly.
(5) Almost immediately afterwards, ClientA starts the same request, with ClientA's token in the header.
(6) The web proxy finds that the URL ClientA is requesting is the same as ClientB's, and the result is still in its cache, so it returns ClientB's cached data. ClientA therefore gets another user's data.
In my web app, at the very beginning I already set all responses to no-cache, max-age=0 and so on to prevent client-side caching. My questions are:
Can the scenario above happen?
If yes, how can I prevent the web proxy from returning cached data? I can't modify the PC client program, and the web proxy servers are out of my control.
If no, what is the possible reason that A is getting B's data?
Can the scenario above happen?
Yes, this is possible if the clients are using the GET verb to access the /getdata endpoint.
If yes, how can I prevent the web proxy from returning cached data? I can't modify the PC client program, and the web proxy servers are out of my control.
Decorate the controller action that is serving the GetData endpoint with a [NoCache] attribute to ensure that no data gets cached downstream.
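MVC has no built-in attribute with that name; [NoCache] is the usual hand-rolled action filter, which looks roughly like this:

```csharp
using System;
using System.Web;
using System.Web.Mvc;

// A custom [NoCache] action filter for ASP.NET MVC 3 that marks the
// response as uncacheable for browsers and intermediate proxies alike.
public class NoCacheAttribute : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
        cache.SetExpires(DateTime.UtcNow.AddDays(-1));   // already expired
        cache.SetValidUntilExpires(false);
        cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
        cache.SetCacheability(HttpCacheability.NoCache); // Cache-Control: no-cache
        cache.SetNoStore();                              // Cache-Control: no-store
        base.OnResultExecuting(filterContext);
    }
}

// Usage on the action serving /getdata:
// [NoCache]
// public ActionResult GetData() { ... }
```

Well-behaved proxies honor Cache-Control: no-store and no-cache; a misconfigured proxy that ignores them can still serve stale responses, which is one more reason the endpoint should key responses on the token, not just the URL.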
I'm reading this article that compares XML to JSON, and in the comments section, a user mentions the need to use a "local proxy" to access XML.
Can someone explain what a local proxy means in this context? I'm assuming he means JavaScript, but I'm open to understanding what parsers are available in other languages (C#, etc.).
JavaScript has a Same-Origin Policy which keeps you from accessing content from other domains. This prevents the XMLHttpRequest object from retrieving the contents of an XML file from another domain.
A local proxy is just a simple file on your own domain that reroutes the request to the other domain and fetches the content. This way the same-origin policy is satisfied.
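As a minimal sketch in C#, a local proxy can be a tiny server-side handler on your own domain. The remote URL below is a placeholder; in practice you would hard-code or whitelist trusted hosts rather than forward arbitrary URLs:

```csharp
using System.Net;
using System.Web;

// Minimal "local proxy": fetches the remote XML server-side, so the
// browser's XMLHttpRequest only ever talks to your own origin.
public class LocalProxy : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Placeholder for the trusted cross-domain resource.
        string remote = "http://other-domain.example/data.xml";

        using (var client = new WebClient())
        {
            string xml = client.DownloadString(remote);
            context.Response.ContentType = "text/xml";
            context.Response.Write(xml);
        }
    }
}
```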
The reason JSON does not run into this restriction is that JavaScript, image, and CSS files can be referenced from other domains. Because JavaScript files can be loaded from other domains, we can use JSONP (JSON with Padding) to get the content.
Most people agree that JSONP is not secure, since any content can be injected into the JavaScript file. You just have to trust your source not to inject any bad content (ads, popups, tracking scripts, etc.) into the web page.
This is related to JSONP (as the user states in his comment), which is basically JavaScript's ability to execute whatever is provided by a remote source via <script src="http://url.com/file"></script>, giving the browser a way to retrieve data from remote origins.
I don't like the JSONP term myself, since you could deliver XML the same way, so the user's comment is actually wrong. Your server could return something like run('<some xml></some xml>'), and you could then use the built-in JavaScript XML parser to extract the data you need - it doesn't need to be JSON.