Add rel="nofollow" automatically to all outbound links in ASP.NET - c#

Have any of the geniuses on StackOverflow ever made a solution which automatically adds rel="nofollow" to all external links?
I'd just like to apologise in advance: I'm very new to backend coding, and my attempts have gone nowhere, which is why I haven't posted them.
I've found some solutions in php, but nothing in ASP.NET.
I have a solution in jQuery, but the problem is that the attribute is only added after the page loads. That's no good for telling Googlebot to ignore those links on my pages.
The jQuery solution is:
$("div.content a[href^='http']:not([href*='mysite.co.uk'])").attr("rel", "nofollow");

One way would be to create your own custom HttpModule that sets the response to use a derived stream class which filters the HTTP body. GitHub or NuGet may have a filter class that someone has already written to modify the output stream when its content type is text/html, which you might be able to adapt for your needs.
To build one on your own, you will essentially need to attach to the BeginRequest event and assign a filter to the HttpApplication's Response.Filter property. That filter is responsible for reading the response that the page/control/IHttpHandler has created, modifying it, and then writing the result to the client.
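As a sketch of the rewriting step such a filter would perform (the class name, regex, and host check below are my own assumptions, not a production-ready HTML parser; a real module would buffer the body in a Stream subclass assigned to Response.Filter and call something like this before flushing):

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical rewriter: adds rel="nofollow" to anchors whose host is
// not ours. A regex is a blunt instrument for HTML; treat this as a
// starting point, not a robust parser.
public static class NofollowRewriter
{
    public static string AddNofollow(string html, string ownHost)
    {
        return Regex.Replace(
            html,
            "<a\\s+([^>]*?)href=\"(https?://[^\"]+)\"([^>]*)>",
            m =>
            {
                string href = m.Groups[2].Value;
                // Leave internal links, and anchors that already carry a rel, alone
                if (new Uri(href).Host.EndsWith(ownHost, StringComparison.OrdinalIgnoreCase)
                    || m.Value.Contains("rel="))
                    return m.Value;
                return "<a " + m.Groups[1].Value + "href=\"" + href
                     + "\" rel=\"nofollow\"" + m.Groups[3].Value + ">";
            },
            RegexOptions.IgnoreCase);
    }
}
```

If the pages are complex, an actual HTML parser library would be a safer bet than a regex, but the shape of the filter is the same either way.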

Related

How to write delete REST API that accepts a long list of items to delete?

I'm writing RESTful APIs and am getting used to the recommended protocols for using HTTP verbs for different operations.
However, I'm not sure how those protocols handle the case where you are deleting a potentially long list of items.
It appears that, like GET, the DELETE verb has no body and so is limited to the length of a URL. So how could you support accepting an arbitrarily long list of items to be deleted?
From the top....
HTTP is our standard for self-descriptive messages, which is subject to the uniform interface constraint. That in turn means that everyone on the web understands HTTP requests the same way.
In other words
DELETE /api/users/5b45eda8-067c-42c1-ae1b-e0f82ad736d6
has the same meaning as
DELETE /www/home.html
In both cases, we're asking the server to enact a change to its resource model.
Because everyone understands these requests the same way, we can create general purpose components (ex: caches) that understand the meaning of messages in the transfer of documents over a network domain and can therefore do intelligent things (like invalidating previously cached responses).
And we can do this even though the general purpose components know nothing about the semantics of the resource, and nothing about the underlying domain model hidden behind the resource.
DELETE, in HTTP, always specifies a single target URI; "bulk delete" is not an option here.
(I haven't found any registered HTTP methods that describe a bulk delete to general purpose components. It's possible that one of the WebDAV methods could express those semantics, but the WebDAV standard also has a lot of other baggage - I wouldn't try repurposing those methods for a "normal" API.)
So if you are trying to DELETE three resources in your API, then you are going to need three requests to do it -- just like you would if you were trying to DELETE three pages on your web site.
That said, if deleting a bunch of resources on your web site using a single HTTP request is more important than letting general purpose components understand what is going on: you have the option of using POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding, 2009
General purpose components will understand that the resource identified by the target URI is changing in some way, but it won't understand what is happening in the payload.
In theory, you could standardize a payload that means "we're deleting all of these resources", and then general purpose components could be implemented to recognize that standard. In practice, good luck.
Now, if instead what you want is a bulk delete of entities in your domain model, you have some options available.
On the web, we would normally use something like a form - perhaps with a check box for each entity. You select the entities that you want to delete, submit the form, and the HTTP request handler parses the message, then forwards the information to your domain model.
You could achieve something similar with a remote authoring idiom - here's a resource whose representation is a list of entities. You PUT to the server a copy of that document with entities removed, and then on the server you make changes to the domain model to match.
It's a very declarative approach: "change the domain model so that the representation of the resource will look like this".
This is analogous to how you would use HTTP to fix a spelling error in a web page: send a PUT request with the new HTML (including the spelling correction) in the request body.
PATCH is very much the same idea: we describe changes to the representation of the resource, and the server passes that information to the domain model. The difference here being that instead of sending the entire representation, we just send a patch document that describes the correction.
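For example, assuming the list resource's representation is a JSON array of entities (the URI and indices here are invented), a PATCH using the JSON Patch format (RFC 6902) could remove two entries in one request:

```http
PATCH /api/users HTTP/1.1
Content-Type: application/json-patch+json

[
  { "op": "remove", "path": "/4" },
  { "op": "remove", "path": "/1" }
]
```

JSON Patch operations apply in order, so removing the higher index first keeps the second path from shifting.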
If you want an imperative approach - just use POST
POST /Bob
Content-Type: text/plain

Bob,
Please delete domain entities 1, 2, 5, 7
General purpose components won't understand how you are trying to modify the target resource, but they'll at least know that much.
Where things get messy is when there are lots of resources whose representations depend on the same underlying data. The standards don't offer us much in the way of affordances to announce "here are all the resources that have changed".
Cache invalidation is one of the two hard problems. HTTP has some affordances that work in the simple cases, but trade offs become necessary when things get more complicated.

Where to use GetBufferlessInputStream?

Where do I set HttpContext.Request.GetBufferlessInputStream(true)? I am trying to allow the user to upload files larger than 2GB and obviously I am running into the "maxRequestLength" int type restriction. I have tried to create a StreamReader the following way:
var reader = new StreamReader(HttpContext.Request.GetBufferlessInputStream(true));
But I'm doing it in a controller and I end up getting the following error:
"This method or property is not supported after HttpRequest.Form, Files, InputStream, or BinaryRead has been invoked."
So I'm guessing I have to make this change before the controller method gets called. I've searched stack overflow and many other websites for answers, but all I've found is how to use it not where to use it.
Thank you for your time and helping me out with this.
You are going to have to implement your logic in an HttpModule -
see this similar question: How should I use HttpRequest.GetBufferlessInputStream?
Update - actually you are better off writing your own HttpHandler instead of a Module
Using GetBufferlessInputStream is basically telling ASP.NET that you are going to handle the entire request yourself, instead of only a portion of it. As you've seen, when you use this in a module, the request continues through the remainder of the pipeline after your module completes, and ASP.NET still expects to be able to read part of the input.
By writing your own HttpHandler, you are responsible for the entirety of the processing - so this would only be for handling the large upload.
I found a link with a sample that looks pretty relevant for you https://blogs.visoftinc.com/2013/03/26/streaming-large-files-asynchronously-using-net-4-5/
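To illustrate the handler approach, here is a minimal sketch (the target file path and the CopyInChunks helper are my own assumptions; you would map the handler to an endpoint such as an .ashx in web.config):

```csharp
using System.IO;
using System.Web;

// Dedicated handler for the large upload. GetBufferlessInputStream with
// disableMaxRequestLength: true skips the maxRequestLength check, and
// reading in fixed-size chunks keeps memory usage flat for 2GB+ bodies.
public class LargeUploadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        using (Stream input = context.Request.GetBufferlessInputStream(disableMaxRequestLength: true))
        using (Stream output = File.Create(context.Server.MapPath("~/App_Data/upload.bin")))
        {
            CopyInChunks(input, output);
        }
        context.Response.StatusCode = 200;
    }

    // Split out so the chunked copy can be exercised with plain streams.
    public static long CopyInChunks(Stream input, Stream output)
    {
        byte[] buffer = new byte[64 * 1024];
        long total = 0;
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```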

c# soap and logmein

As LogMeIn Rescue doesn't support batch user creation using a simple CSV upload or similar, and instead offers the ability to create user accounts using HTTP POST/GET or SOAP, I thought I would look into this.
Unfortunately, following the code example in this link, I have been unable to work out how to use the SOAP aspect of the code, as I have no previous experience with it.
So far I have written a fairly basic program that reads in the CSV with all the user account data needed for creation, and it would loop through and assign the values needed.
If anyone could assist it would be greatly appreciated. I have tried to look up some documentation regarding C# and SOAP, but I was unable to find anything that really helped me configure it for LogMeIn.
The C# example at the bottom of the documentation you link to looks helpful but flawed. APISoapClient does not have a CookieContainer property, so I would try it without the line of code that tries to set it. Later in the code there is a call to sAPI.createUsers, but the example has not defined sAPI; I think they meant to re-use proxy.
To get started the easy way, right-click your project References and select Add Service Reference. Enter their endpoint and click GO:
https://secure.logmeinrescue.com/api/API.asmx
If you change the namespace from Service1 to match the example (APIServiceReference) your code will look like the example. From then on you can perform API operations basically just like you were dealing with classes, the SOAP mess is abstracted away for you.

Responding to HTTP POST

I'm not sure if I'm asking the right question.
We have a web app that we're trying to have a 3rd party POST to. We're creating a special landing page for them to which they can submit the data we need via POST.
I'm not sure how to respond to their request, which I assume I handle as an incoming HttpRequest. Do I process their data in PageLoad or some other event? Where/How is this data contained?
Do I have to use HttpListener or the ProcessRequest handler, or what?
Doing a search here or on Google turns up a lot of results on how to POST to another site, but can't seem to find a relevant site on how to be that "other" site and handle the incoming POST data.
Again, I'm not sure I'm asking this right.
EDIT: I found the Page.ProcessRequest Method in the MSDN library, but the Remarks say "You should not call this method"
Thanks!
You really need to look at the basics of ASP.NET. Even if this were a case where an IHttpHandler would be best-suited, I'd suggest using an .aspx page in this case as it's the best place to begin learning, and you can move to an IHttpHandler later on.
If the data is posted as application/x-www-form-urlencoded or multipart/form-data (the two formats used by forms on web pages; if they haven't told you what format they are using, it's probably one of those two), the Request.Form property will act as a dictionary of the data sent. For example, if they have a field called "foo", then Request.Form["foo"] will return its value as a string. Otherwise you'll want to read from Request.InputStream, which is a tiny bit more involved.
Best would be to use an IHttpHandler, but it is possible to do what you want using a standard ASP.NET Page. Using PageLoad is fine, you have access to the Request and Response properties, which give you everything you need to process an HTTP request. For example, to obtain form parameters, you can use Request["input1"] to get the form input value (either query string, form post, or cookie) with the name "input1".
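A minimal sketch of the Page_Load approach described above (the field name "foo" and the response text are placeholders for whatever the third party actually sends and expects):

```csharp
using System;
using System.Web.UI;

// Hypothetical landing page code-behind; "foo" is a made-up field name.
public partial class LandingPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!string.Equals(Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
            return; // a plain GET of the landing page; nothing to process

        string foo = Request.Form["foo"];   // posted form field
        // ... validate and hand the data to your application here ...

        Response.ContentType = "text/plain";
        Response.Write(BuildResponse(foo));
    }

    // Split out so the response decision is testable without a web context.
    public static string BuildResponse(string foo)
    {
        return foo == null ? "missing foo" : "OK";
    }
}
```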
What is it you need to do in response to this post request? What sort of data do you need to return? Until that is answered, hard to help further.

Maintaining Consistency Between JavaScript and C# Object Models

I'm working on an ASP.NET web application that uses a lot of JavaScript on the client side to allow the user to do things like drag-drop reordering of lists, looking up items to add to the list (like the suggestions in the Google search bar), deleting items from the list, etc.
I have a JavaScript "class" that I use to store each of the list items on the client side, as well as information about what action the user has performed on the item (add, edit, delete, move). The only time the page is posted to the server is when the user is done; right before the page is submitted, I serialize all the information about the changes into JSON and store it in hidden fields on the page.
What I'm looking for is some general advice about how to build out my classes in C#. I think it might be nice to have a C# class that matches the JavaScript one, so I can just deserialize the JSON into instances of it. It seems a bit strange, though, to have classes on the server side that directly duplicate the JavaScript classes and exist only to support the JavaScript UI implementation.
This is kind of an abstract question. I'm just looking for some guidance from others who have done similar things, in terms of keeping client-side and server-side object models in sync.
Makes perfect sense. If I were confronting this problem, I would consider using a single definitive description of the data type or class, and then generating code from that description.
The description might be a javascript source file; you could build a parser that generates the appropriate C# code from that JS. Or, it could be a C# source file, and you do the converse.
You might find more utility in describing it in RelaxNG, and then building (or finding) a generator for both C# and Javascript. In this case the RelaxNG schema would be checked into source code control, and the generated artifacts would not.
EDIT: Also there is a nascent spec called WADL, which I think would help in this regard as well. I haven't evaluated WADL. Peripherally, I am aware that it hasn't taken the world by storm, but I don't know why that is the case. There's a question on SO regarding that.
EDIT2: Given the lack of tools (WADL is apparently stillborn), if I were you I might try this tactical approach:
Use the [DataContract] attributes on your c# types and treat those as definitive.
build a tool that slurps in your C# type from a compiled assembly and instantiates it by running a JSON serializer over a sample JSON document that provides a sort of de facto "object model definition". The tool should somehow verify that the instantiated type can round-trip into equivalent JSON, maybe with a checksum or CRC on the result.
run that tool as part of your build process.
To make this happen, you'd have to check that "sample JSON document" into source control, and you'd also have to make sure it is the form you are using in the various JS code in your app. Since Javascript is dynamic, you might also need a type verifier or something that would run as part of jslint or some other build-time verification step, checking your Javascript source to see that it uses your "standard" object model definitions.
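As a sketch of the round-trip check described above (the type, its member names, and the sample document are all invented), a [DataContract] type run through DataContractJsonSerializer might look like:

```csharp
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

// Hypothetical definitive type for one list item; the Name attributes
// pin down the JSON member names the JavaScript side must also use.
[DataContract]
public class ListItem
{
    [DataMember(Name = "id")]     public int Id { get; set; }
    [DataMember(Name = "text")]   public string Text { get; set; }
    [DataMember(Name = "action")] public string Action { get; set; } // add/edit/delete/move
}

public static class RoundTrip
{
    // Deserialize the sample document into the C# type, then serialize
    // it back out so a build-time tool can compare the two documents.
    public static string Check(string sampleJson)
    {
        var serializer = new DataContractJsonSerializer(typeof(ListItem));
        ListItem item;
        using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(sampleJson)))
            item = (ListItem)serializer.ReadObject(ms);

        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, item);
            return Encoding.UTF8.GetString(ms.ToArray());
        }
    }
}
```

Note that DataContractJsonSerializer may emit members in a different order than the sample, so a byte-for-byte comparison is too strict; comparing parsed values is the safer check.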
