File upload from one domain to another domain - C#

I have created a website which has two modules:
ADMIN
USER
They are hosted on different domains. When a user opens the USER domain (say abc.com), they can register their company and upload a photo, which is saved in the Company_Logo folder.
Now suppose the ADMIN domain is xyz.com. I want the ADMIN to open xyz.com, see the photo that was uploaded on abc.com, and be able to replace that photo in abc.com's Company_Logo folder.
In short: a photo is uploaded from the USER side on abc.com and should be replaceable from the ADMIN side on xyz.com. How can I do that?

So you have two different sites, hosted on different domains and perhaps even on different servers, and you want site A to notify site B when some file has been uploaded. You then want to be able to alter that file on site A from site B.
Seems to me you need to create some sort of API on site A that lets users (admins) from site B check recently uploaded files and also lets them overwrite them.
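For example, once such an API exists, site B could replace a logo on site A with a single HTTP call. A rough sketch, where the endpoint, query parameter and header name are all assumptions, not an existing API:

// Site B (admin) pushing a replacement logo to site A's hypothetical API
using (var client = new WebClient())
{
    // shared secret so site A can reject strangers (header name is made up)
    client.Headers.Add("X-Api-Key", "your-shared-secret");

    // UploadFile performs a multipart/form-data POST with the file contents
    client.UploadFile(
        "http://abc.com/api/company-logo?companyId=42",
        @"C:\temp\new-logo.jpg");
}

(WebClient lives in System.Net; site A's API still has to authenticate the call and write the bytes into the Company_Logo folder.)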

Okay, this can be done but you'll need to use an HttpHandler. You can find a good example here, but I'll spell out the important parts. I cannot feasibly write the entire handler for you here.
First, let's build a class in the web project and call it ImageHandler ...
public class ImageHandler : IHttpHandler
{
}
... next let's implement the interface ...
public bool IsReusable
{
get { return false; }
}
public void ProcessRequest(HttpContext context)
{
// find out what we're trying to do first
string method = context.Request.HttpMethod;
switch (method)
{
case "GET":
// read the query string for the document name or ID
// read the file in from the shared folder
// write those bytes to the response, ensuring to set the Response.ContentType
// and also remember to issue Response.Clear()
break;
case "PUT":
// read the Request's InputStream to get the byte[] of the file to CREATE
// write those bytes to disk
// construct a 200 response
break;
case "POST":
// read the Request's InputStream to get the byte[] of the file to UPDATE
// write those bytes to disk
// construct a 200 response
break;
case "DELETE":
// read the query string or headers to identify the file to DELETE
// delete that file from disk
// construct a 200 response
break;
}
}
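For illustration, here is roughly what the GET branch might look like, assuming the query string carries a file name and the images live in a single shared folder (the parameter name and folder are assumptions):

case "GET":
{
    // e.g. GET /images/handler.ashx?file=logo.jpg (parameter name is made up)
    string fileName = Path.GetFileName(context.Request.QueryString["file"]);
    string fullPath = Path.Combine(
        context.Server.MapPath("~/Company_Logo"), fileName);

    context.Response.Clear();
    context.Response.ContentType = "image/jpeg"; // or detect from the extension
    context.Response.WriteFile(fullPath);
    break;
}

(Path.GetFileName strips any directory part, which guards against path traversal; you'll need using System.IO for it.)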
... finally we need to setup the handler in the web.config ...
<configuration>
<system.web>
<httpHandlers>
<!-- remember that you need to replace the {YourNamespace} with your fully qualified -->
<!-- namespace and you need to replace {YourAssemblyName} with your assembly name -->
<!-- EXCLUDING the .dll -->
<add verb="*" path="*/images/*" type="{YourNamespace}.ImageHandler, {YourAssemblyName}" />
</httpHandlers>
</system.web>
</configuration>
Finally, something you're also going to want to do is pass in some kind of session key that can be validated when you get into the handler, because otherwise this is open to everybody. It wouldn't matter if you didn't need the PUT, POST and DELETE verbs, but you do.
Technically you wouldn't need to check the session key on GET if you didn't care that everybody could access the GET, but you must check it on the others.
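A minimal sketch of such a check at the top of ProcessRequest, assuming the key travels as a query-string parameter and the expected value lives in appSettings (both names are made up):

public void ProcessRequest(HttpContext context)
{
    // reject callers that don't present the shared secret
    string key = context.Request.QueryString["apiKey"];
    if (string.IsNullOrEmpty(key) ||
        key != ConfigurationManager.AppSettings["SharedApiKey"])
    {
        context.Response.StatusCode = 403;
        context.Response.End();
        return;
    }
    // ... then fall through to the verb switch shown earlier ...
}

(ConfigurationManager is in System.Configuration.)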

You have two options.
If both of your sites are hosted on the same machine or in a shared hosting environment, chances are that one site can access the other's directories. In that case you will easily be able to place the images in the desired folder.
Now the second case, where one of your sites does not have access to the folder of the other, is rather complicated. You will have to create a proxy, whereby the admin site accepts the image and in turn puts it in the main site's folder. I do not recommend this though.

You can do this in 2 steps:
1) Upload the image to your server using the standard file upload mechanism.
2) Use the HttpWebRequest class to upload the image to the other server on the server side, right after the original upload.
Please refer to this article: Upload files with HTTPWebrequest (multipart/form-data)
see this for reference:
http://forums.asp.net/t/1726911.aspx/1
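A minimal sketch of step 2 using WebClient, which wraps HttpWebRequest and already speaks multipart/form-data; the receiving URL is a placeholder:

// runs right after the standard FileUpload control has saved the file
string savedPath = Server.MapPath("~/Company_Logo/logo.jpg");

using (var client = new WebClient())
{
    // posts the file as multipart/form-data to the other server
    client.UploadFile("http://xyz.com/receive-upload.ashx", savedPath);
}

(The receiving side still needs something like the handler described above to accept and store the bytes.)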

Related

Authenticate GET requests to files in a folder C# MVC

I have a web site (IIS, C#.Net, MVC4) where users are (forms-)authenticated and they upload media files (mostly .mp4) and authorize set of users to play back on demand. I store these files on local storage.
I play these files using jwplayer back to the authorized users on demand.
jwplayer expects I pass the url directly for it to play, but I didn't want to expose a direct url.
I really have to restrict unauthorized access to these files as they are private files.
I tried implementing a controller method to handle https://mysite/Video/Watch?VideoId=xyz, and return FileStream of the actual file. It works on a browser directly. (Though not sure how efficient it is for large files.)
But the problem is, jwplayer looks for urls of pattern http(s)://domain/path/file.mp4[?parameter1=value1&parameter2=value2 and so on.]
When I give a url like https://mysite/Video/Watch?VideoId=xyz, it says 'No playable sources found' without even sending a HEAD request.
If I expose the urls directly, the files are available for anybody to download, which will break the privacy.
Worst case, I would at least want to avoid hot links which will live for ever.
I have also looked at www.jwplayer.com/blog/securing-your-content/ but did not find the solutions suitable.
My questions are,
Is there a way I can retain the pattern of the url http(s)://domain/path/file.mp4 and still control the access to the file?
If (1.) is not possible, how do I leverage the parameters that could be passed on the URL? With parameters, I can think of signed URLs. What should I do on the server if I have to provide and handle/validate signed URLs?
Just so as not to hinder performance: after any validation, can I somehow get IIS to handle the file stream rather than my code?
I implemented an HTTPModule to allow/block access to the file. This addresses my questions 1 & 3.
Code snippet below.
void context_PreRequestHandlerExecute(object sender, EventArgs e)
{
HttpApplication app = sender as HttpApplication;
//Get the file extension
string fileExt = Path.GetExtension(app.Request.Url.AbsolutePath);
//Check if the extension is mp4
bool requestForMP4 = fileExt.Equals(".mp4", StringComparison.InvariantCultureIgnoreCase);
//If the request is not for an mp4 file, we have nothing to do here
if (!requestForMP4)
return;
//Initially assume no access to media
bool allowAccessToMedia = false;
//....
// Logic to determine access
// If allowed set allowAccessToMedia = true
// otherwise, just return
//....
if(!allowAccessToMedia)
{
//Terminate the request with HTTP StatusCode 403.2 Forbidden: Read Access Forbidden
app.Response.StatusCode = (int)HttpStatusCode.Forbidden;
app.Response.SubStatusCode = 2;
app.CompleteRequest();
}
}
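For this module to actually run, it has to be registered in web.config; a minimal sketch for the IIS7+ integrated pipeline (the module name and type here are placeholders for whatever you called yours):

<configuration>
  <system.webServer>
    <modules>
      <add name="MediaAuthModule" type="MyApp.MediaAuthModule, MyApp" />
    </modules>
  </system.webServer>
</configuration>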

Accurate Session Tracking in ASP.NET MVC

Whenever a user hits a page on my website, I run the following code to track user hits, page views, where they are going, etc...
public static void AddPath(string pathType, string renderType, int pageid = 0, int testid = 0)
{
UserTracking ut = (UserTracking)HttpContext.Current.Session["Paths"];
if (ut == null)
{
ut = new UserTracking();
ut.IPAddress = HttpContext.Current.Request.UserHostAddress;
ut.VisitDate = DateTime.Now;
ut.Device = (string)HttpContext.Current.Session["Browser"];
if (HttpContext.Current.Request.UrlReferrer != null)
{
ut.Referrer = HttpContext.Current.Request.UrlReferrer.PathAndQuery.ToString();
ut.ReferrerHost = HttpContext.Current.Request.UrlReferrer.Host.ToString();
ut.AbsoluteUri = HttpContext.Current.Request.UrlReferrer.AbsoluteUri.ToString();
}
}
//Do some stuff including adding paths
HttpContext.Current.Session["Paths"] = ut;
}
In my Global.asax.cs file when the session ends, I store that session information. The current session timeout is set to 20 minutes.
protected void Session_End(object sender, EventArgs e)
{
UserTracking ut = (UserTracking)Session["Paths"];
if (ut != null)
TrackingHelper.StorePathData(ut);
}
The problem is that I'm not getting accurate storage of the information. For instance, I'm getting thousands of session stores that look like this within a couple minutes.
Session #1
Time: 2014-10-21 01:30:31.990
Paths: /blog
IP Address: 54.201.99.134
Session #2
Time: 2014-10-21 01:30:31.357
Paths: /blog-page-2
IP Address: 54.201.99.134
What it should be doing is storing only one session for these instances:
What the session should look like
Time: 2014-10-21 01:30:31.357
Paths: /blog,/blog-page-2
IP Address: 54.201.99.134
Clearly, this seems like a search engine crawl, but the problem is, I'm not sure if this is the case.
1) Why is this happening?
2) How can I get an accurate # of sessions to match Google analytics as closely as possible?
3) How can I exclude bots? Or how to detect that it was a bot that fired it?
Edit: Many people are asking "Why"
For those of you asking why we are doing this instead of just using analytics: to make a very long story short, we are building user profiles to mine data from. We look at what users are viewing, how long they view it, and their click paths; we also have A/B tests running for certain pages, we detect which pages fire throughout the user's viewing cycle, and we track other custom information that we cannot push into the Google Analytics API and pull back out. Once a visitor has navigated the site, we use this information to build a user profile for every session. We then need to detect which of these sessions are actually real, and give the site owners the ability to view the data, alongside our data-mining application, which analyzes it and provides feedback on criteria that help them improve their website. If you have a better way of doing this, we're all ears.
1) The ASP.NET session is tracked with the help of the ASP.NET session cookie,
but it is disabled for anonymous (not logged-on) users.
You can activate session id creation for anonymous users in the web.config:
<configuration>
<system.web>
<anonymousIdentification enabled="true"/>
</system.web>
</configuration>
A much better place to hook up your tracking is a global MVC ActionFilterAttribute.
The generated session id is stored in the HttpRequest and can be accessed via
filterContext.RequestContext.HttpContext.Request.AnonymousID
2) You should create a feed of tracking paths and analyze it asynchronously, or not even in the same process. Maybe you want to store the tracking on disk, like a server log, to re-analyze it later.
Geo-location and database lookups need some processing time, and most likely you can't get an accurate geo-location from the IP address anyway.
A much better source is the user profile / user address later on (after the order is submitted).
Sometimes the ASP.NET session cookie doesn't work because the user has a no-tracking plugin activated; Google Analytics would fail here too. You can increase the tracking accuracy with a custom AJAX client callback.
To make the AJAX callback happen globally for all pages, you can again use an ActionFilterAttribute to inject some script content at the end of the HTML response, as sketched below.
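A crude sketch of such an injecting filter (the beacon URL is a placeholder; a real implementation would also want to skip AJAX and non-page requests):

public class InjectTrackingScriptAttribute : ActionFilterAttribute
{
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        base.OnResultExecuted(filterContext);
        var response = filterContext.HttpContext.Response;

        // append a tracking beacon after the rendered HTML
        if (response.ContentType == "text/html")
        {
            response.Write("<script>new Image().src='/track/ping';</script>");
        }
    }
}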
Mapping an IPv4 address to a session can help, but it should only be a hint.
Nowadays a lot of ISPs support IPv6 and map their clients to a small IPv4 pool, so one user can switch IPv4 addresses very quickly, and there is a high probability that visitors of the same page use the same ISP and therefore share an IPv4 address.
3) Most robots identify themselves by a custom user agent in the request headers.
There are good ones and bad ones. See http://www.affiliatebeginnersguide.com/articles/block_bots.html
But with the AJAX callback you can at least verify that a real browser is present, i.e. a costly HTML DOM with a JavaScript environment.
X) To simplify the start and concentrate on the analysis, implement a simple ActionFilterAttribute and register it globally in RegisterGlobalFilters:
filters.Add(new OurTrackingActionFilterAttribute(ourTrackingService));
In the filter, override OnActionExecuting:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
base.OnActionExecuting(filterContext);
OnTrackingAction(filterContext);
}
public virtual void OnTrackingAction(ActionExecutingContext filterContext)
{
var context = filterContext.RequestContext.HttpContext;
var track = new OurWebTrack(context);
trackingService.Track(track);
}
So as not to delay the server response with tracking processing, take a look at the Reactive Extensions package: http://msdn.microsoft.com/en-us/data/gg577609.aspx
It's a good way to split the capture from the processing.
Create a "Subject" in the TrackingService and simply push the tracking objects into it.
You can write observers to transmit, save or process the tracking objects.
By default the observers receive only one object at a time, so you don't need to synchronize/lock your state variables, dictionaries or memory cache, and you can persist the raw data and reprocess it with a new version of your application later on (for example while debugging).
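A small sketch of that capture/processing split, reusing the names from the filter above (everything else is an assumption):

using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Subjects;

public class TrackingService
{
    private readonly Subject<OurWebTrack> _tracks = new Subject<OurWebTrack>();

    public TrackingService()
    {
        // the observer runs on a background scheduler, one item at a time,
        // so the request thread never waits on storage or analysis
        _tracks.ObserveOn(Scheduler.Default)
               .Subscribe(track => Store(track));
    }

    // called from the action filter; capture is just a push
    public void Track(OurWebTrack track)
    {
        _tracks.OnNext(track);
    }

    private void Store(OurWebTrack track)
    {
        // write to disk / database / log for later (re)processing
    }
}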

WIF and Subdomains

We have an existing ASP.NET application (WebForms) that uses home-grown authentication. We've been tasked with implementing a single sign-on solution and have chosen to use WIF.
We have a single instance of the application running and we identify the client by using a subdomain (e.g. client1.ourapp.com, client2.ourapp.com, etc). In the application code we strip off the first subdomain and that identifies the client.
We've been working with a WIF proof-of-concept to figure out how to get the user redirected back to the correct subdomain once they've authenticated. The out-of-the-box behavior seems to be that the STS redirects the user to whatever realm is identified in the config file. The following is the PoC config file. I'm using my hosts file to fake different clients (i.e. 127.0.0.1 client1.ourapp.com, 127.0.0.1 client2.ourapp.com).
<federatedAuthentication>
<wsFederation
passiveRedirectEnabled="true"
issuer="http://ourapp.com/SSOPOCSite_STS/"
realm="http://client1.ourapp.com"
requireHttps="false" />
</federatedAuthentication>
Obviously this isn't going to work because we can't redirect everyone to the same subdomain.
We think we've figured out how to handle this but would like some outside opinions on whether we're doing it the right way or whether we just got lucky.
We created an event handler for the FAM's RedirectingToIdentityProvider event. In it we get the company name from the request URL, build a realm string using the company name, set the Realm and HomeRealm of the SignInRequestMessage, then let the FAM do its thing (i.e. redirect us to the STS for authentication).
protected void WSFederationAuthenticationModule_RedirectingToIdentityProvider( object sender, RedirectingToIdentityProviderEventArgs e )
{
// this method parses the HTTP_HOST and gets the first subdomain
var companyName = GetCompanyName();
var realm = GetRealm( companyName );
e.SignInRequestMessage.Realm = realm;
e.SignInRequestMessage.HomeRealm = companyName;
}
string GetRealm( string companyName )
{
return String.Format( "http://{0}.ourapp.com/SSOPOCSite/", companyName );
}
Does this seem like a reasonable solution to the problem?
Are there any problems we might experience as a result?
Is there a better approach?
Your solution sounds good (explicitly passing along the information you need); the only other solution that comes to mind is using Request.UrlReferrer to determine which subdomain the user came from.

prevent from linking css from others sites

I have a commercial CSS font on my site. I use IIS, and the vendor says that others can use my CSS fonts because they know the URL. Is it possible to configure the server so that only my site can use them? It is about Cufón.
Things you can do:
Give up. If your users can see it, they can steal it. Similarly, don't expect to protect your site from users viewing its source code.
If the font is a vector font, rasterize the font for all the font sizes you support, but no others. This may have a negative impact on browsing experience of your users. This makes stealing your font give less useful data, but doesn't actually stop the theft.
Replace all use of the font with bitmaps. Much more work to steal in that case, and it only gives the user a rasterized version of the font (and not necessarily all the letters). You can create a special text UserControl that sticks a bitmap wherever you put it, so this isn't actually that much work to do or maintain. It does increase the bandwidth requirements for your page, though. It also forces you to do some of the layout by hand that is normally handled by the browser, which could add heavy or minimal maintenance costs, depending on how your site's layout works. And as with #2, it can have a negative impact on the browsing experience of your users. It also hurts accessibility, though not absurdly so, since your UserControl will presumably use alt text to duplicate the text.
I strongly recommend #1.
If you are on IIS7 or greater you can perform a Referer check without writing any custom code, simply by using IIS URL Rewrite in the manner discussed here. However, as it is simply a Referer check, it has the shortcomings discussed in the other answers.
(For introduction to IIS URL Rewrite see here.)
Excerpt from the first link:
Let me now explain what we have done on this property page:
Specified the name of the rule as "Prevent Leeching". This must be a unique rule name.
Every requested URL will be matched, as the pattern is ".*" and is a regular expression.
Added two conditions and specified that both conditions must be satisfied (see "Logical Grouping" is "Match All"):
HTTP_REFERER does not match empty, as it can be a direct reference to the image
HTTP_REFERER does not match my own site http://www.contoso.com
If the above two conditions are satisfied (apparently meaning the request is coming from some other site), we just redirect it to pick up some other image, which can be anything. And that's it. So without writing even a single line of code we are able to prevent hot-linking.
I would probably tailor your Rewrite configuration so that it is only performed on your font URLs (and other static assets of concern) rather than every single incoming request.
If you don't have remote desktop access or are just editing web.config, your rewrite rule will probably look something like:
<rule name="block font leaching" stopProcessing="true">
<match url="myFontFile.woff" />
<conditions logicalGrouping="MatchAny">
<add input="{HTTP_REFERER}" pattern="^$" /><!-- no referrer -->
<add input="{HTTP_REFERER}" pattern="yourdomain.com" negate="true" /><!-- or not your site -->
</conditions>
<action type="AbortRequest" /><!-- block the request -->
</rule>
In this example I chose to block the request entirely (through AbortRequest); however, you could just as well redirect to a page with a friendly notice.
Not reliably. In order to serve up the embedded fonts they need to be readable by the public, and referable by your CSS.
What you could do is create an ASP.NET page, or a handler, which takes the name of the font file as a parameter, reads the file from somewhere in your web site (APP_DATA is a good place to put them, since you can't browse to APP_DATA) and spits it out. In the script you would check the HTTP_REFERER server-side variable: if it is either blank or comes from your site you serve the file; if it doesn't, you don't.
MSDN has an example of how to serve up a binary file in C#. You'll need to ensure you get the MIME type right; however, be aware this would probably break any caching provided by the browser or proxies. This also wouldn't stop people downloading the fonts by typing the URL into their browser and saving them locally, but if bandwidth is the concern that's not really going to be a problem.
If you're on IIS7 you could write an HTTP Module which would do the referrer check for you. Scott Hanselman wrote one for image-leeching prevention quite a while ago; you could adapt it to your purposes.
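A rough sketch of such a module, adapted to the font case (the guarded extension and the allowed host are assumptions to adjust):

public class FontLeechModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;

            // only guard the font files
            if (!ctx.Request.Path.EndsWith(".woff", StringComparison.OrdinalIgnoreCase))
                return;

            Uri referrer = ctx.Request.UrlReferrer;
            if (referrer == null ||
                !referrer.Host.EndsWith("yourdomain.com", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.StatusCode = 403;
                ctx.ApplicationInstance.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}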
You could make an http handler to serve up css files. In your custom http handler, check that the request.Url.Host equals request.UrlReferrer.Host. If they don't match, set the response to 404 or serve up an empty css file.
This is untested but should be close to what you would need.
You would add a link to css like:
<link rel="Stylesheet" href="CustomCSSHandler.ashx?file=site.css" />
public class CustomCSSHandler : IHttpHandler
{
public void ProcessRequest(HttpContext ctx)
{
HttpRequest req = ctx.Request;
//Get the file from the query string
string file = req.QueryString["file"];
//Find the actual path
string path = ctx.Server.MapPath(file); //Might need to modify location of css
//Limit to only css files
if (Path.GetExtension(path) != ".css")
{
ctx.Response.End();
}
if (req.UrlReferrer != null && req.UrlReferrer.Host.Length > 0)
{
if (CultureInfo.InvariantCulture.CompareInfo.Compare(req.Url.Host, req.UrlReferrer.Host, CompareOptions.IgnoreCase) != 0)
{
path = ctx.Server.MapPath("~/thiswontexist.css");
}
}
//Make sure file exists
if(!File.Exists(path))
{
ctx.Response.Status = "404 Not Found";
ctx.Response.StatusCode = 404;
ctx.Response.End();
}
ctx.Response.StatusCode = 200;
ctx.Response.ContentType = "text/css";
ctx.Response.WriteFile(path);
}
}

how to know which Image has been requested C#,ASP.Net

I am developing a web app which will generate a random link pointing to an image on my server, something like http://dummy.com/Images/Image1.jpg?id=19234.
This link can then be used by anybody on their site; now I just want to know how many sites are using my links, without anybody clicking on those links.
Can it be done using an HttpModule?
Is this as simple as Googling? Search for
link:http://dummy.com/Images/Image1.jpg?id=19234
If you want to do this programmatically, you'll need to use the Google API.
The issue you'd have with an HttpHandler is that it will generally only kick in for requests that are being handled by the ASP.Net engine - the image requests will normally be handled by IIS without going through the handler.
Your web logs should be able to tell you who the referrers for any given item on your servers are - assuming that you have them, and you have something to process them - and this will be more accurate than using Google.
Going forward, one of the ways I've done this in the past is to have the image generated by an HttpHandler (implementing IHttpHandler).
This will return the image as a stream (setting the content type to "image/jpeg"), and you can add further processing (such as logging where the request (referer) came from, etc).
The limitation I found with the HttpHandler is that some services (PBBS for example) require an image link to have an image extension. I got around this by processing all 404s with an ASP.Net page that checks for the .jpg extension in the request; if it finds one, instead of returning the usual 404 page, it returns the requested image. You'll need to configure the 404 handler in IIS though, as the web.config error handler only kicks in for ASP.Net requests (web services and .aspx type pages).
Example handler:
// Sample from the ASP.Net Personal Web Site Starter Kit
public class Handler : IHttpHandler
{
public bool IsReusable
{
get { return true; }
}
public void ProcessRequest(HttpContext context)
{
// Set up the response settings
context.Response.ContentType = "image/jpeg";
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.BufferOutput = false;
// QueryString parameters are available here:
// context.Request.QueryString["QueryStringKey"]
// You can also access the Referrer object, and log the requests here.
Stream stream = null; // must be initialized, or the null check below won't compile
// Read your image into the stream, either from file system or DB
if (stream == null)
{
stream = PhotoManager.GetPhoto();
}
// Write image stream to the response stream
const int buffersize = 1024 * 16;
var buffer = new byte[buffersize];
int count = stream.Read(buffer, 0, buffersize);
while (count > 0)
{
context.Response.OutputStream.Write(buffer, 0, count);
count = stream.Read(buffer, 0, buffersize);
}
}
}
You can have similar code (or better yet, refactor the main image streaming code into a shared class) in the 404 page, that checks for the existence of the image extension, and renders the image out that way (again, setting the content type, etc).
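Note that on IIS7+ you also have to map the image path to your handler in web.config, otherwise IIS serves the .jpg itself and the handler never runs. A sketch, reusing the placeholder convention from earlier:

<system.webServer>
  <handlers>
    <add name="TrackedImages" verb="GET" path="Images/*.jpg"
         type="{YourNamespace}.Handler, {YourAssemblyName}" />
  </handlers>
</system.webServer>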
Oddthinking is right. See http://code.google.com/intl/en/apis/ajaxsearch/documentation/#fonje_snippets or Google's API. They give examples for PHP and Java, but there are also AJAX frameworks for ASP.NET (http://www.asp.net/ajax/), and I'm sure C# as well.
You can change your image extension to an .aspx extension (http://dummy.com/Images/Image1.aspx?id=19234). There is no problem with this, because the only thing the page does is write the image to Response.OutputStream. That is to say, it behaves like a .jpg but with the advantage that you can run some other code while processing the request.
In this .aspx (before outputting the image), we would look at the HTTP_REFERER and store it in a data table if that entry does not already exist.
This is really useful if, for example, you want to restrict access to the images: you could add some logic to forbid access if the user is not logged in.
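A sketch of what the code-behind of such an Image1.aspx might contain (the logging helper is hypothetical):

protected void Page_Load(object sender, EventArgs e)
{
    // record who embedded the image, when the browser sends a referrer
    if (Request.UrlReferrer != null)
        LogReferrer(Request.UrlReferrer.Host); // hypothetical data-table insert

    // then stream the actual image back as if it were a plain .jpg
    Response.Clear();
    Response.ContentType = "image/jpeg";
    Response.WriteFile(Server.MapPath("~/Images/Image1.jpg"));
    Response.End();
}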
