How to get Chrome to recognize bookmarks in a file download URL - C#

I have searched everywhere and cannot find how to get Chrome to recognize bookmarks in a .doc download. (It's hard to search for this when "bookmarks" and "anchors" both have two meanings.)
For instance, the URL http://my.website.com/myfile.doc#mybookmark downloads the file just fine, but I cannot get the browser to scroll to the bookmark. I've tried the built-in Google Docs viewer as well as Microsoft's, and neither works:
https://products.office.com/en-US/office-online/view-office-documents-online
https://docs.google.com/viewer?url=my.website.com/myfile#mybookmark
Does anyone know how this can be done?
If I can do it in code, or in some other way, please feel free to say so. I just need the client to download a file from a link and have it open at the bookmark I specify.
(The site is written in VB.NET and the clients use Chrome.) C# is fine as well.
Thanks!

Related

iframe resource interpreted as document but transferred with mime type application/vnd.ms-excel

I am developing an ASP.NET MVC 5 web application using C#. I am trying to display an Excel file in an iframe:
<iframe src="../../Data/ExcelSheets/ProjectExpenditureDetails/20170917184328709.xls" width="100%" height="500"></iframe>
When the page loads, it always downloads the Excel file instead of displaying it in the iframe.
The web developer tools say: resource interpreted as document, but transferred with MIME type application/vnd.ms-excel.
I don't know whether my approach is correct. If it is, how do I solve this problem? If it's wrong, what is the best way to display an Excel file in a web page?
I don't believe there is a way to do this natively in a browser. There are likely plugins that would allow it, but on a web site you can't guarantee someone will have one installed. I believe third-party services are able to provide some JavaScript that allows you to open a document; Google Docs does something like this.
Think about Flash applications (that used to be a thing): they contained proprietary code that wouldn't run in a browser without a plugin installed. XLS files are similar. There are some exceptions, but mostly a browser can only be expected to understand HTML, CSS, JavaScript, and a number of image formats. Even PDFs require a plugin to view; you just don't notice it very often because many browsers make that fairly seamless now.
Unfortunately, if you want to do this through a web site that needs to be secure, I believe the way to go is to replace the shared sheet with application-based functionality. Your clients may feel comfortable moving to Google Docs-based sheets, which can be shared, but mine wouldn't touch that with a ten-foot pole. I'm not sure the concern is warranted, but that is how they feel.
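For completeness: if the workbook is not sensitive and is reachable at a public URL, one common workaround (an assumption on my part, not part of the answer above) is to let Microsoft's Office Web Viewer render it and point the iframe at the viewer instead of the raw file. A minimal C# sketch; the file URL is hypothetical, and the viewer only works for publicly accessible files:
// MVC action: build an Office Web Viewer URL for the iframe's src attribute.
// The viewer cannot fetch files that sit behind a login.
public ActionResult ViewSheet()
{
    string fileUrl = "http://my.website.com/Data/ExcelSheets/20170917184328709.xls"; // hypothetical public URL
    ViewBag.ViewerUrl = "https://view.officeapps.live.com/op/view.aspx?src="
                        + Uri.EscapeDataString(fileUrl);
    return View(); // the view renders: <iframe src="@ViewBag.ViewerUrl" width="100%" height="500"></iframe>
}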

How to convert PDF files to swf or HTML for viewing in C# MVC 4.5

I have hundreds of PDF files that I need to present to users. When presenting these files through my MVC web app, I don't want users to have the ability to download the files; e.g., I don't want the Acrobat Reader controls for print/save showing up. Reading this Stack Overflow post, it seems it's not possible to disable those controls.
I know users can still take screen shots and print out the page, but that's not an issue in my case.
What is the best way to go about this? I've researched SWFTools, which looks like it may be a good solution, but I don't want to save the SWF files to my filesystem. The optimal solution would be PDF.js, but another problem is that users will be accessing the files through IE8, so PDF.js is out of the question, unless there is a similar library that will convert the files to HTML 4.
Basically, I just need to display the PDF files in a format other than PDF; converting on the fly would be best.
Any suggestions?
I had a similar project a while back, where sensitive PDFs needed to be displayed to specific users who weren't allowed to download, print, or save them.
Since it was a web app, I ended up using PDF.js, Mozilla's PDF renderer for Firefox. It renders the PDF onto a canvas and by default has all the bells and whistles. If you have Firefox, open a PDF file to see it in action.
It was tough to get running at first, but I ended up using a demo I found online as the base of the project. After removing each forbidden piece of functionality, the finished product did exactly what was required. You will need to add a print CSS file to block printing, or find a better solution; I ended up using the CSS approach, since print preview bypassed my JavaScript check for the print action. Also ensure you block Ctrl+S, which allows the user to save the PDF.
Another aspect to note is that it works better on later versions of IE and struggles on older versions as the file size increases. Firefox and Chrome are not a problem, and I believe it's the same for Opera, although I haven't tested that.
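On the server side, a companion pattern (my assumption, not something the answer above specifies) is to keep the PDFs outside the web root and stream them through a controller, so PDF.js fetches them through your authorization logic rather than from a direct, linkable file URL. A minimal sketch with hypothetical names:
// Streams a PDF from App_Data so there is no direct file URL to share.
// Add your own authorization checks before returning the file.
public ActionResult GetPdf(string name)
{
    // NOTE: sanitize 'name' in real code to prevent path traversal.
    var path = Server.MapPath("~/App_Data/Pdfs/" + name + ".pdf"); // hypothetical location
    if (!System.IO.File.Exists(path))
        return HttpNotFound();
    return File(path, "application/pdf");
}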
I would convert the PDFs to image files. You can find tools or write a script to do it; I personally would do it by displaying them in a browser first and then using a browser plug-in to take a screenshot of the entire web page (you can automate this). Then just display the converted images.
**This is probably not the best solution :(**
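If you do go the image route, serving the pre-rendered pages is straightforward. A minimal MVC sketch, assuming each PDF has already been rasterized to per-page PNGs by an external tool (e.g. Ghostscript); the folder layout, route, and action name here are hypothetical:
// Streams one pre-rendered page of a document as a PNG.
// Assumes pages were rasterized offline into ~/App_Data/PdfPages/{doc}/{page}.png.
public ActionResult PdfPage(string doc, int page)
{
    // NOTE: sanitize 'doc' in real code to prevent path traversal.
    var path = Server.MapPath(string.Format("~/App_Data/PdfPages/{0}/{1}.png", doc, page));
    if (!System.IO.File.Exists(path))
        return HttpNotFound();
    return File(path, "image/png");
}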

Using a WebBrowser, can you access all downloaded/downloading files?

Let's say that I navigate to Google. It will download several files: artwork, JS scripts, etc. Can I access them through a member of WebBrowser, and if not, is there a special methodology to follow in .NET? I already know HtmlAgilityPack, but it is for local files only. A website's behavior relies on a very strict structure of live, online documents and scripts, so I need something that works with online websites.
The C# WebBrowser control is an old browser, and because of this you can't do complex things like getting at the files it has downloaded. But there is a class in C# called WebClient with which you can download and upload whatever you want...
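A minimal WebClient sketch; note that it only fetches what you explicitly ask for, rather than exposing the files a WebBrowser control has already downloaded (the script URL below is hypothetical):
using System;
using System.Net;

class Downloader
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // fetch the page's HTML as a string
            string html = client.DownloadString("https://www.google.com/");
            Console.WriteLine(html.Length + " characters downloaded");

            // save an individual resource to disk (hypothetical URL)
            client.DownloadFile("https://www.google.com/some/script.js", "script.js");
        }
    }
}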

Issue in downloading video from YouTube?

I am trying to make a video download application for desktop in C#.
Now the problem is that the following code works fine:
WebClient webOne = new WebClient();
string temp1 = "http://www.c-sharpcorner.com/UploadFile/shivprasadk/visual-studio-and-net-tips-and-tricks-15/Media/Tip15.wmv";
webOne.DownloadFile(new Uri(temp1), "video.wmv");
But the following code doesn't:
temp1="http://www.youtube.com/watch?v=Y_..."
(in this case, a 200-400 kilobyte junk file gets downloaded)
The difference between the two URLs is obvious: the first one contains the exact name of the file, while the other seems to be encrypted in some way...
I was unable to find a proper, satisfactory solution to the problem, so I would highly appreciate a little help here. Thanks.
Note:
From one of the questions here I got a link, http://youtubefisher.codeplex.com/, so I visited it, got the source code, and read it. It's great work, but what I don't get is how in the world that person knew what structures and classes he had to build to download a YouTube video. Why did he have to go through all that trouble, and why isn't my method working?
Someone please guide me. Thanks again.
In order to download a video from YouTube, you have to find the actual video location, not the page that you use to watch the video. The http://www.youtube.com/watch?v=... URL is an HTML page (much like this one) that loads the video from its source location and displays it. Normally, you have to parse the HTML and extract the video location from it.
In your case, you found code that does this already - and lucky you, because downloading videos from YouTube is not simple at all. Looking at the link you provided in your question, the magic behind the madness is available in YoutubeService.cs / GetDownloadUrl():
http://youtubefisher.codeplex.com/SourceControl/changeset/view/68461#1113202
That method parses the HTML page returned by a YouTube watch URL and finds the actual video content. The added complexity is that YouTube videos can also come in a variety of formats.
If you need to convert the video format after downloading, I recommend FFmpeg.
EDIT: In response to your comment - you didn't look at the source code of YoutubeFisher at all, did you? I'd recommend analysing the file I mentioned (YoutubeService.cs). Although, after taking a quick look myself, you'll have to parse the yt.playerConfig variable within the HTML page.
Use that source to help you.
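For illustration, a rough sketch of that parsing step. This is hedged: the yt.playerConfig variable and its JSON shape change whenever YouTube updates the page, the brace matching below is naive, and VIDEO_ID is a placeholder:
using System;
using System.Net;
using System.Text.RegularExpressions;

class WatchPageProbe
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // this downloads the HTML of the watch page, not the video itself
            string html = client.DownloadString("http://www.youtube.com/watch?v=VIDEO_ID");

            // naive match for the embedded player configuration; a real
            // parser would balance braces instead of stopping at the first "};"
            var match = Regex.Match(html, @"yt\.playerConfig\s*=\s*(\{.+?\});",
                                    RegexOptions.Singleline);
            if (match.Success)
                Console.WriteLine(match.Groups[1].Value); // JSON that contains the stream URLs
            else
                Console.WriteLine("Player config not found - the page layout has changed.");
        }
    }
}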
EDIT: In response to your second comment: "Actually I am trying to develop an application that can download video from any video site." You say that like it's easy - FYI, it's not. Since every video website is different, you can't just write something that will work for everything out of the box. If I had to do it, though, here's how I would: I would write custom parsers for the major video-sharing websites (Metacafe, YouTube, whatever else) so that those are guaranteed to work. After that, I would write a "fallover", if you will. Basically, if you're requesting a video from an unknown website, it would scour the HTML looking for known video extensions (flv, wmv, mp4, etc.) and then extract the URL from that.
You could use a regex to extract the URL in the latter case, or a combination of something like IndexOf, Substring, and LastIndexOf.
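A hedged sketch of that fallover scan; the page URL is hypothetical and the extension list is illustrative:
using System;
using System.Net;
using System.Text.RegularExpressions;

class VideoLinkScanner
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            string html = client.DownloadString("http://example.com/some-video-page"); // hypothetical page

            // scan the raw HTML for absolute URLs ending in a known video extension
            var matches = Regex.Matches(html,
                @"https?://[^\s""'<>]+\.(?:flv|wmv|mp4)\b",
                RegexOptions.IgnoreCase);

            foreach (Match m in matches)
                Console.WriteLine(m.Value);
        }
    }
}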
I found this page at CodeProject; it shows you how to make a very efficient YouTube downloader using no third-party libraries. Remember that it is sometimes necessary to slightly modify the code, as YouTube occasionally changes its web structure, which may interfere with the way your app interacts with it.
Here is the link, where you can also download the C# project files and see them directly:
CodeProject - Youtube downloader using C# .NET

How Can I Get Files And Folders Of A Specific Website, Like IDM's Grabber, In C#

If you have worked with IDM (Internet Download Manager), it has a feature named Grabber that searches a given website, collects the files and folders of that website, and lets you download them using IDM.
I would like to do something similar in C#. I would like to download html web pages and extract links from those pages. I would also like to detect directories and attempt to search their contents - possibly parsing "Index Of" directory listing pages.
How would I go about doing this?
Use a regex, or use HtmlAgilityPack (http://htmlagilitypack.codeplex.com/), to parse the website and find links to files. You may need to check each file's extension, i.e., only keep links that end in .zip|.exe|.msi|.rar|.png|.pdf|.gif|.jpg|.jpeg.
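A minimal sketch of the HtmlAgilityPack route (the target URL is hypothetical; the extension list mirrors the one above):
using System;
using System.Linq;
using HtmlAgilityPack; // install via NuGet

class Grabber
{
    static void Main()
    {
        var web = new HtmlWeb();
        var doc = web.Load("http://example.com/downloads/"); // hypothetical page

        var extensions = new[] { ".zip", ".exe", ".msi", ".rar",
                                 ".png", ".pdf", ".gif", ".jpg", ".jpeg" };

        // SelectNodes returns null when nothing matches
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors == null) return;

        // keep only hrefs that end in a known file extension
        foreach (var href in anchors.Select(a => a.GetAttributeValue("href", ""))
                                    .Where(h => extensions.Any(e =>
                                        h.EndsWith(e, StringComparison.OrdinalIgnoreCase))))
        {
            Console.WriteLine(href);
        }
    }
}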
I once wrote a "Web Spider" to do this and published the source code over at Code Project.
If you want to do it as an end user, I found that the free HTTrack Website Copier works pretty well.
