This is a rather weird question, so please bear with me.
There's a SWF file on one page that, when you click a 'Start' button, loads an image. The URL of this image is passed in as a SWFObject variable. I would like to change this image to one I have uploaded to my own host.
I have tried setting a breakpoint on the line that pushes the image URL variable to the object and substituting my own URL, but the image wouldn't load because of the cross-domain policy.
Now I am thinking of writing a simple C# proxy that returns my image instead of the real one when Flash requests it...
Do you perhaps have any better ideas on how this could be done?
To sum it up, I want to replace an image that the SWF loads from a fixed URL with a custom image of mine. Decompiling is not an option.
EDIT: I figured out the image-not-loading problem; the image was cached after all.
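Roughly what I have in mind for the proxy is something like this (just a sketch; the local port and image path are placeholders, and the SWF would have to be pointed at this endpoint instead of the real one):
using System;
using System.IO;
using System.Net;

class ImageStubProxy
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");   // placeholder local endpoint
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // Whatever image the SWF asks for, answer with my custom image instead.
            // Note: depending on how the SWF loads the image, Flash may also request
            // /crossdomain.xml before accepting content from another host.
            byte[] image = File.ReadAllBytes(@"C:\images\myImage.jpg");   // placeholder path
            context.Response.ContentType = "image/jpeg";
            context.Response.ContentLength64 = image.Length;
            context.Response.OutputStream.Write(image, 0, image.Length);
            context.Response.OutputStream.Close();
        }
    }
}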
Too long for a comment:
It's entirely possible that your SWF file has the image embedded in it. However, you say a number of things that sound conflicting to me.
First off, you mention putting a breakpoint on the line that "pushes the image URL variable to the object". What, exactly, does this? Is that the C# code?
You also state that it doesn't load due to cross-domain policy. Have you resolved that? Also, why do you think that's the problem?
Why did you try replacing the image with other tools? Are you trying to manually get around a cross-domain policy restriction?
Finally, you state that you "can't figure out why the image packets are not being received." Before we go that far, are they even being requested? You mention that Wireshark doesn't even see the request going out.
I guess all of this boils down to:
1. Do you have control over where the SWF loads the image from?
2. If so, is that request being made? You should be able to see this from your server logs.
Related
I am trying to "download" a file using the AxWebBrowser in VB.Net (though the answer could be in C#; it doesn't matter to me). The browser is already logged in, so I tried to catch the download, but for now the test is done with a PDF and I think it is opened directly in the browser. OK, let me put you in context:
1. The AxWebBrowser goes to a login page, which I fill in using JavaScript before continuing.
2. It navigates to a message page which includes links to attached files.
3. I try to download those files (in fact, to get their bytes so I can convert them to Base64 and include them in the HTML I return... so the only problem is getting the bytes; after that, the conversion and inclusion is something I know how to do).
So I tried passing the URL directly to the browser, detecting the download, and catching the bytes, but I was not able to.
I also tried using a WebClient on which I set the cookies, but it doesn't work. Comparing its cookies with Chrome's, I see they aren't the same (in fact, I strongly suspect an important one is missing).
So: either why don't I get all the cookies, or how else could I get the bytes of these files?
I got it! I finally thought to search for how to "save as" a file, and that led me to it.
https://www.codeproject.com/tips/659004/download-of-file-with-open-save-dialog-box
Using GetGlobalCookies got me the proper cookies, and the file now downloads correctly using WebClient. Oh yeah!
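For anyone landing here later, the pattern is roughly this (untested sketch; GetGlobalCookies here is a stand-in for the helper from the linked article, built on wininet's InternetGetCookieEx so that HttpOnly cookies from the browser session are included):
using System;
using System.Net;
using System.Runtime.InteropServices;
using System.Text;

static class BrowserDownload
{
    const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

    [DllImport("wininet.dll", SetLastError = true)]
    static extern bool InternetGetCookieEx(
        string url, string cookieName, StringBuilder cookieData,
        ref int size, int flags, IntPtr reserved);

    // Returns the full cookie header (including HttpOnly cookies) for the given URL.
    static string GetGlobalCookies(string url)
    {
        int size = 4096;
        var sb = new StringBuilder(size);
        if (InternetGetCookieEx(url, null, sb, ref size, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero))
            return sb.ToString();
        return string.Empty;
    }

    // Downloads the bytes of a file using the browser session's cookies.
    public static byte[] DownloadWithBrowserCookies(string fileUrl)
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Cookie] = GetGlobalCookies(fileUrl);
            return client.DownloadData(fileUrl);   // bytes, ready for Convert.ToBase64String
        }
    }
}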
I'm teaching myself .NET C# through books and self-assigned projects, for fun. I thought it would be a good experience to try to create my own image-click captcha control from scratch: the kind where you identify "the right image" from a few options and click it (the cat, or something) to identify yourself as human.
As I was trying to think of all the ways a script might learn its way around whatever I create, I considered the possibility that it could simply learn the right answers through trial and error by saving the filename of each image. Eventually it would learn which filenames were the "right ones".
I can't think of any way to actually hide an image filename from the browser or the source code, but renaming the files every time isn't practical either. Is there some way I could "render" the images under some sort of custom MIME type (is that the right question? I'm new, sorry) each time they're requested, instead of just throwing out IMG SRCs?
This might just be impossible, but figured I'd try asking the experts. Thanks for your time!
What you do is provide a proxy for the images:
<img src="imageServer.aspx?id=12345" />
What you do at your end is send the MIME header, then stream out the file. This way there is no direct relation between the image that is served and a particular filename.
You can also create a Bitmap from the loaded Image and save a record somewhere of which image was used to generate it, so you can validate the user's input against it.
There's a good example of dynamic image creation on the MSDN site here
http://msdn.microsoft.com/en-us/library/system.web.httpresponse.outputstream.aspx
I would recommend using a generic handler (*.ashx) for this. It has much less overhead and is a cleaner way to serve an image than calling an .aspx page.
I've used generic handlers for image purposes countless times. For instance, you can provide server-side resizing. Another cool feature is access to session values, e.g. "only show the image if the user is logged in".
Looking at http://www.codeproject.com/Articles/34084/Generic-Image-Handler-Using-IHttpHandler might be a good start.
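A minimal sketch of such a handler (the id-to-file mapping below is just a placeholder; in practice you would keep it in session state or a database so the real filename never reaches the browser):
using System.Collections.Generic;
using System.Web;

// ImageHandler.ashx, declared with <%@ WebHandler Language="C#" Class="ImageHandler" %>
public class ImageHandler : IHttpHandler
{
    // Placeholder mapping from opaque ids to server-side files.
    private static readonly Dictionary<string, string> ImageMap = new Dictionary<string, string>
    {
        { "12345", "~/Images/cat.jpg" },
        { "12346", "~/Images/toaster.jpg" }
    };

    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];
        string virtualPath;

        if (id == null || !ImageMap.TryGetValue(id, out virtualPath))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "image/jpeg";                        // send the MIME header...
        context.Response.WriteFile(context.Server.MapPath(virtualPath));    // ...then stream out the file
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
The markup then becomes <img src="ImageHandler.ashx?id=12345" />, and the ids can be rotated per request if you want to prevent the trial-and-error learning described in the question.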
I'm currently under the impression it may have to do with the header information, but I'm really not sure. The image on this page is the best example I can give. It will sometimes display in a WebBrowser control, and other times it just plain refuses to. Does anyone have any idea why?
The code I'm using to try and grab the image is simply:
WebBrowser.Navigate("http://www.lse.co.uk/tools/user/imgChatUsagePie.asp?nick=mulledwine", null, null, "image/gif");
It's really hard to ascertain what causes the image to display sometimes and not others, as it works completely fine in Chrome. Is this a problem specific to the WebBrowser control?
Thank you in advance.
Apparently, the server-side script checks the ASP session ID cookie and displays the image depending on some session variable stored on the server.
Try navigating to the HTML page first, then requesting the image.
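Something along these lines (webBrowser1 is the WinForms WebBrowser control; the first URL is just a placeholder for any page on the site that establishes the ASP session cookie):
// Inside the form, using System.Windows.Forms.
private void LoadChart()
{
    webBrowser1.DocumentCompleted += OnPageLoaded;
    webBrowser1.Navigate("http://www.lse.co.uk/ShareChat.asp");   // placeholder page that sets the session cookie
}

private void OnPageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    webBrowser1.DocumentCompleted -= OnPageLoaded;   // detach so we only redirect once
    webBrowser1.Navigate("http://www.lse.co.uk/tools/user/imgChatUsagePie.asp?nick=mulledwine");
}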
Sometimes the data is a valid image, and sometimes it is not.
But seriously, you need to give more information. I assume you mean:
http://www.lse.co.uk/tools/user/imgChatUsagePie.asp?nick=mulledwine
Maybe post the code you use to generate the image. Is this browser-specific? Under what conditions does the pie chart fail to generate? Does the URL give an error message when it refuses to display the image, etc.? I would assume some variable you use to generate the image is null and throws an unhandled exception, causing the page to return a 500 and some kind of error rather than image data.
If it really is a case of sometimes it works and sometimes it doesn't, I'd recommend using some kind of test suite to keep requesting the image at random intervals until it fails, and then isolating the error message. Selenium or Fiddler may be of some use in isolating the problem.
I have a function which displays images from a bound URI (e.g. www.website.com/picture1.jpg).
I have found, and now understand, that the phone caches the images that are downloaded. I read that it's only for the lifetime of the running app, but even when I close the app and go back into it, the same images come up from the cache.
Is there a way to stop this caching happening at all for this particular page?
EDIT: The images update regularly, but still have the same name, hence the need not to cache. Think security camera for example.
Many thanks.
There's no way around it unless you add a random query string to the image URI on each GET, e.g.:
var imageUrl = "www.website.com/picture1.jpg";
var imageUri = new Uri(String.Format("{0}?{1}", imageUrl, Guid.NewGuid()));
The caching is a little too aggressive: if you do a GET to the same URI at any point in the application's life cycle, the phone will cache the response, even if the content changes every time. It kept me puzzled for hours when I was trying to talk to a JSON-RPC web service...
Of course, in general you will want images to be cached, but if you're sure the images you're after will be changing frequently, then the above will work.
Add a unique query string parameter to the URL (e.g. DateTime.Now).
There is a CreateOptions property on BitmapImage (if you are loading in code) which lets you specify BitmapCreateOptions, one of which is IgnoreImageCache: "Loads images without using an existing image cache. This option should only be selected when images in a cache need to be refreshed."
I've not tried it out, but it sounds like the kind of thing you are looking for ... if you do try it, I'd be interested in the result.
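In code it would look roughly like this (untested, as said; the Image element name and URL are just examples from the question):
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;

static class ImageCacheHelper
{
    // Points the given Image element at the URL while bypassing the existing image cache.
    public static void SetUncached(Image target, string url)
    {
        var bitmap = new BitmapImage();
        bitmap.CreateOptions = BitmapCreateOptions.IgnoreImageCache;   // set before UriSource
        bitmap.UriSource = new Uri(url, UriKind.Absolute);
        target.Source = bitmap;
    }
}

// Usage, e.g. from the page's code-behind:
// ImageCacheHelper.SetUncached(CameraImage, "http://www.website.com/picture1.jpg");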
Depending on whether you have control over the website and its content: shouldn't this be handled by setting the HTTP response headers?
I would assume the platform respects the headers (unverified).
Otherwise, the random-query-string trick posted above will work.
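For example, if the images happen to be served by an ASP.NET handler (purely an assumption about the server side; the handler name and path below are hypothetical), the no-cache headers could be set like this:
using System;
using System.Web;

public class LatestImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);  // Cache-Control: no-cache
        context.Response.Cache.SetNoStore();                               // Cache-Control: no-store
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); // already expired

        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(context.Server.MapPath("~/camera/latest.jpg"));  // hypothetical path
    }

    public bool IsReusable
    {
        get { return true; }
    }
}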
Caching is a good thing, because in your case it saves the cost of downloading the image. If the image hasn't changed, why would you need to download it again?
If your image has changed and you want to force it to be downloaded again, you can append a unique ID to the URL.
But think about it: why do you want that?
I want to download an image from a cartoon website, and my app is WinForms, not WebForms.
So let's say there is an image on the page a.html.
Normally, when I click through from the previous page and am redirected to this page, there is a placeholder image ("image is loading"), let's say A.jpg, in that block.
After 5 seconds, the real one, let's say B.jpg, is displayed.
So what I get is only the cached placeholder image rather than the one I want, B.jpg.
So... how should I do it?
Thanks in advance.
PS: I posted this question more than 48 hours ago and have only got a few answers, which don't solve my problem.
I am wondering why only two people have posted answers.
Is my question unclear?
If so, please let me know.
Thanks
EDIT: Original answer removed since I misunderstood the question entirely.
What you want to do is basically HTML scraping: using the actual HTML of the page to discover where the files are hosted, and downloading them. Because I'm not sure whether there are any legal reasons that would prevent you from downloading the image files in this manner, I'm just going to outline an approach and not provide any working samples. In other words, use this information at your own risk.
Using Fiddler2 while browsing in Firefox, you should be able to find the domain and full URL that one of the images is downloaded from. Basically, just start Fiddler2, navigate to the site in Firefox, and then look for the biggest file that is downloaded. That will tell you exactly where the image is coming from.
Next, take a look at the HTML source code of the page you are viewing. The way this particular site works, it looks like it hides the previous/next downloads in a SWF or something, but you can find the URLs in the page's JavaScript. Look for a JavaScript array called picArr.
To download these from a WinForms app, I would use the WebRequest class. Create a request for each image URL and save the response to disk.
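Purely as a generic illustration of that last step (deliberately not tied to any particular site, for the reasons above), downloading a single image URL with WebRequest and saving it to disk might look something like this:
using System.IO;
using System.Net;

static class ImageDownloader
{
    // Downloads one image URL and writes the response bytes to disk.
    public static void Download(string imageUrl, string savePath)
    {
        WebRequest request = WebRequest.Create(imageUrl);   // imageUrl would come from the scraped list

        using (WebResponse response = request.GetResponse())
        using (Stream source = response.GetResponseStream())
        using (FileStream target = File.Create(savePath))
        {
            source.CopyTo(target);   // stream the image straight to disk
        }
    }
}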