I am creating a web browser using C#, and I need to get specific data from the web pages that are loaded in my browser.
The pages I am loading are download script pages, and the data I want to get is the number of times the file has been downloaded.
I want to save this value as text.
What code can I use for this, or where can I start? Any help is appreciated.
Most web browsers have their own storage. Mozilla uses SQLite for some things.
Whenever your app/browser needs to retrieve a remote resource (URL of any kind), simply log it to a database table.
Perhaps use SQLite yourself for this. A decent start would be to create a history table like this:
URL --varchar(max)
LastAccessed --datetime
TotalRequests --int
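A minimal sketch of that table and the logging call, assuming the System.Data.SQLite package (SQLite will map the suggested column types to its own affinities, so TEXT/INTEGER are used below):

using System;
using System.Data.SQLite; // assumes the System.Data.SQLite NuGet package

static class History
{
    const string ConnectionString = "Data Source=history.db";

    public static void EnsureTable()
    {
        using (var conn = new SQLiteConnection(ConnectionString))
        {
            conn.Open();
            const string sql = @"CREATE TABLE IF NOT EXISTS History (
                URL TEXT PRIMARY KEY,
                LastAccessed TEXT,
                TotalRequests INTEGER)";
            using (var cmd = new SQLiteCommand(sql, conn))
                cmd.ExecuteNonQuery();
        }
    }

    // Call this whenever the browser fetches a URL.
    public static void Log(string url)
    {
        using (var conn = new SQLiteConnection(ConnectionString))
        {
            conn.Open();
            // Insert the row if it is new, then bump the counters.
            const string sql = @"
                INSERT OR IGNORE INTO History (URL, LastAccessed, TotalRequests)
                VALUES (@url, @now, 0);
                UPDATE History
                SET LastAccessed = @now, TotalRequests = TotalRequests + 1
                WHERE URL = @url;";
            using (var cmd = new SQLiteCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@url", url);
                cmd.Parameters.AddWithValue("@now", DateTime.UtcNow.ToString("o"));
                cmd.ExecuteNonQuery();
            }
        }
    }
}

INSERT OR IGNORE followed by UPDATE is used instead of an upsert so it also works on older SQLite versions.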
If it is a file that the users will be downloading, you could add a global static int and increment it every time the file is downloaded.
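If you take that route, it's worth making the increment thread-safe, since two downloads can complete at the same time. A minimal sketch (the class name is illustrative):

using System.Threading;

static class DownloadCounter
{
    private static int count; // shared across all requests

    // Increment atomically so concurrent downloads don't lose updates.
    public static int Increment()
    {
        return Interlocked.Increment(ref count);
    }
}

Keep in mind the counter resets when the app restarts, which is why the database approach above is more durable.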
I have this specific scenario:
The user sends me a request which contains a URL to a file in my private repository.
The server catches this request and downloads the file.
The server performs some calculations on the downloaded file.
The server sends the results back to the client.
I implemented this in the "naive" way, meaning I download the file (step 2) for each request. In most cases, the user will send the same file, so I thought of a better approach: keep the downloaded file in a short-term "cache".
That means I download the item once and use it for every subsequent user request.
Now the question is, how to manage those files?
In "perfect world", I will use the downloaded file for up to 30 minutes. After this time, I won't use it any more. So, optional solutions are:
Making a file system mechanism to handling files for short terms. Negative: Complex solution.
Using temporary directory to do this job (e.g. Path.GetTempFileName()). Negative: What if the system will start to delete those files, in the middle of reading it?
So, it's seems that each solution has bad sides. What do you recommend?
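One middle ground worth considering: keep the bytes in System.Runtime.Caching.MemoryCache and let it handle the 30-minute expiry, which avoids both a hand-rolled file-system mechanism and the temp-file deletion risk. A rough sketch (the WebClient download and the key scheme are assumptions):

using System;
using System.Net;
using System.Runtime.Caching;

class FileCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the file bytes, downloading only if not cached in the last 30 minutes.
    public static byte[] GetFile(string url)
    {
        var cached = Cache.Get(url) as byte[];
        if (cached != null)
            return cached;

        byte[] data;
        using (var client = new WebClient())
            data = client.DownloadData(url); // placeholder for however you fetch from the repository

        Cache.Set(url, data, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(30)
        });
        return data;
    }
}

Note that two concurrent requests for an uncached URL can both trigger a download; if that matters, guard the miss path with a lock or use AddOrGetExisting.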
This is my first time developing this kind of system, so many of these concepts are very new to me. Any and all help would be appreciated. I'll try to sum up what I'm doing as efficiently as possible.
Background: I have a web application running AngularJS with Bootstrap. The app communicates with the server and DB through a web service programmed using C#. On the site, users can upload files and reference them later using direct links. There's no restriction to file type (yet), so just about anything is allowed.
My Goal: Having direct links creates a big security problem for me, since the documents/images are supposed to be private data. What I would prefer to do is validate a user's credentials when the link is clicked, then load the file in the browser using a more generic url path.
--Example--
"mysite.com/attachments/1" ---> (Image)
--instead of--
"mysite.com/data/files/importantImg.jpg"
Where I'm At: Not very far. My first thought was to add a page that sends the server request and receives a file byte stream along with mime type that I can reassemble and present to the user. However, I have no idea if this is possible using a web service that sends JSON requests, nor do I have a clue about how the reassembling process would work client-side.
Like I said, I'll take any and all advice. I'd love to learn more about this subject for future projects as well, but for now I just need to be pointed in the right direction.
Your first thought is correct. For it, you need to use the Response object, and more specifically the AddHeader and Write functions. Of course this will be a different page that only handles file downloads, so it will sit perfectly fine alongside your JSON web service.
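A rough sketch of such a download page's code-behind; the App_Data storage layout and the authentication check are placeholders for whatever your service already does:

using System;
using System.Web.UI;

// Attachment.aspx.cs -- a separate WebForms page dedicated to downloads.
public partial class Attachment : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Request.IsAuthenticated)   // swap in your own credential check
        {
            Response.StatusCode = 403;
            Response.End();
            return;
        }

        int id = int.Parse(Request.QueryString["id"]);

        // Hypothetical storage layout: files saved under App_Data by id.
        string path = Server.MapPath("~/App_Data/files/" + id);

        Response.Clear();
        Response.ContentType = "application/octet-stream"; // or the stored mime type
        Response.AddHeader("Content-Disposition", "inline; filename=\"file-" + id + "\"");
        Response.WriteFile(path);
        Response.End();
    }
}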
I don't think you want to do this with a web service. Just use a regular IHttpHandler to perform the validation and return the data. So you would have the URL "attachments/1" get rewritten to "attachments/download.ashx?id=1". When you've verified access, write the data to the response stream. You can use the Content-Disposition header to set the file name.
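A minimal sketch of that handler; the access check and file layout are assumptions, not a prescription:

using System.Web;

// download.ashx -- the target of the rewritten "attachments/{id}" URL.
public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];

        // Hypothetical access check against the current user.
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Hypothetical storage layout.
        string path = context.Server.MapPath("~/App_Data/files/" + id);

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=\"file-" + id + "\"");
        context.Response.TransmitFile(path);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}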
I want to grab a set of data from a site into my C# application. I've referred to some sites and articles using the WebClient class.
But the problem is that the data I want is in a news bar made with Flash. Is it possible to grab the data from it? The data in it also keeps updating.
Have you tried the Yahoo approach? The below project does just that.
It is easy to download stock data from Yahoo!. For example, copy and paste this URL into your browser address bar: http://download.finance.yahoo.com/d/quotes.csv?s=YHOO+GOOG+MSFT&f=sl1d1t1c1hgvbap2. Depending on your Internet browser settings, you may be asked to save the results into a file called "quotes.csv", or the results will appear directly in your browser.
http://www.codeproject.com/KB/aspnet/StockQuote.aspx?display=Normal
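If you want the same CSV from C# rather than the browser, WebClient can fetch it. (Note that Yahoo! has since retired this endpoint, so treat the URL as historical.)

using System;
using System.Net;

class QuoteFetcher
{
    static void Main()
    {
        // The same URL as above; each line of the CSV is one ticker.
        const string url =
            "http://download.finance.yahoo.com/d/quotes.csv?s=YHOO+GOOG+MSFT&f=sl1d1t1c1hgvbap2";

        using (var client = new WebClient())
        {
            string csv = client.DownloadString(url);
            foreach (string line in csv.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string[] fields = line.Split(',');
                // First field is the symbol, second is the last trade price.
                Console.WriteLine("{0}: {1}", fields[0].Trim('"'), fields[1]);
            }
        }
    }
}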
It is not possible to grab data out of Flash directly.
One possible solution: if you dig into the embed tag of the Flash object, or find some URL or RSS feed that the Flash appears to consume, you can read that with WebClient or (hopefully) XmlReader.
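For example, if you do find a feed URL behind the embed tag (the URL below is a placeholder), XmlReader can consume it directly:

using System;
using System.Xml;

class FeedReader
{
    static void Main()
    {
        // Placeholder: whatever feed URL you dug out of the embed tag.
        const string feedUrl = "http://example.com/newsbar/feed.xml";

        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            // Print the title of each RSS item as a rough demonstration.
            while (reader.ReadToFollowing("item"))
            {
                if (reader.ReadToDescendant("title"))
                    Console.WriteLine(reader.ReadElementContentAsString());
            }
        }
    }
}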
I have a page that downloads a large HTML file from another domain and then serves it to the user. The file is around 100 KB - 10 MB and usually takes about 5 minutes. I was thinking about doing something like this to make the user experience better:
download the file
if the file is not downloaded within 10 seconds, display a page that tells the user that the file is being downloaded
if the server completes the download in 1 second, serve the downloaded HTML directly
Can this be done? Do I need to use the async feature?
Updated question: the downloaded file is an HTML file.
In order to provide an 'asynchronous' file download, try a trick that Google uses: create a hidden iframe and set its source to the file you want to download. You can then still run JavaScript on your original page while the file is being downloaded through the iframe.
I think you should:
Return an HTML page to the user straight away, to tell them the transfer has started.
Start the download from the other domain in a separate process on your server.
Have the HTML from step 1 repeatedly reload, so you can check if the download has completed already, and possibly give an ETA or update to the user.
Return a link to the user when the initial transfer is complete.
It sounds like you need to use a waiting page that refreshes itself every so often and displays the status of your download. The download can be run on a separate thread using a System.Threading.Tasks.Task, for instance.
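A rough sketch combining the suggestions above: start the download on a background Task the first time the page is hit, and return a self-refreshing waiting page until it completes. The handler name, key scheme, and refresh interval are all assumptions:

using System.Collections.Concurrent;
using System.Net;
using System.Threading.Tasks;
using System.Web;

// status.ashx -- starts the download on first hit, then reports progress.
public class DownloadStatusHandler : IHttpHandler
{
    // One pending download per source URL.
    private static readonly ConcurrentDictionary<string, Task<string>> Downloads =
        new ConcurrentDictionary<string, Task<string>>();

    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request.QueryString["url"];

        // Start the download once; later hits reuse the same task.
        Task<string> download = Downloads.GetOrAdd(url, u =>
            Task.Run(() =>
            {
                using (var client = new WebClient())
                    return client.DownloadString(u);
            }));

        context.Response.ContentType = "text/html";
        if (download.IsCompleted)
        {
            // Serve the downloaded HTML directly.
            context.Response.Write(download.Result);
        }
        else
        {
            // Waiting page that reloads itself every few seconds.
            context.Response.Write(
                "<html><head><meta http-equiv='refresh' content='5'></head>" +
                "<body>Your file is being downloaded, please wait...</body></html>");
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

A production version would also evict completed entries and handle faulted tasks, but this shows the shape of the approach.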
We have a C# ASP.NET web application that, amongst other things, allows users to download previously uploaded files such as PDFs, Word docs, etc. The ASP.NET app is served up via an IIS6 server and the file resources live on a different server.
When the user requests a file (i.e. click a button on the web form), we stream the file back to their browser, changing the ContentType appropriately.
This seemed a good way to avoid going down the IIS virtual folder route to serve up the file resources, which we had concerns about due to the potential for users to hack the URL: with a URL like https://mydomain/myresource/clientid/myreport.docx, a savvy user could have a good stab at guessing alternative clientids and document names.
The trouble with streaming a Word document to the browser is that when the browser hands it to Word, Word treats it as a brand-new doc, which means the original document's properties and margin info are lost.
Our users store metadata information in the Word doc properties, so this solution is not acceptable to them.
Serving up via IIS virtual folders solves that problem, but introduces the URL security problem.
So my questions are ...
Does anyone know how we can use URL encryption/decryption (or obfuscation) with IIS Virtual folders?
Or does anyone know of any open-source projects that do a similar job?
Or does anyone have any suggestions on how to go about writing our own implementation of virtual folders, but with encrypted URLs?
Many thanks in advance.
ps. our web app is delivered over https
Sorry guys, in my question, I have made some incorrect assumptions.
What I am trying to do is persist the properties stored in a Word document when it is delivered from the server (using either Response.TransmitFile or a virtual folder) to a client browser.
I set up a test scenario with an IIS virtual folder and dropped a docx file (that I know contains info in the title & subject properties) in my virtual folder's physical path.
I pointed my browser at the virtual folder alias and the browser popped up its message to either open or save the doc.
If I choose to save it, the saved docx still has the properties intact.
If I choose to open it first and then save it from Word, the saved docx has lost the properties.
So I think I need to post a different question!
You may find that the ClaimsAuthorizationManager class in "Windows Identity Foundation" does what you want. You get to implement whatever logic you like to determine who can download what without using "directory security".
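A skeleton of that, using the .NET 4.5 System.Security.Claims namespace (the resource and action values here are made up for illustration; WIF 1.0 uses a different namespace):

using System.Linq;
using System.Security.Claims;

// Registered in config so resource requests flow through CheckAccess.
public class AttachmentAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        // Illustrative resource/action claims -- supply your own scheme.
        string resource = context.Resource.First().Value;  // e.g. the document URL
        string action = context.Action.First().Value;      // e.g. "Download"

        // Whatever logic you like: roles, claims, a database lookup, etc.
        return action == "Download" && context.Principal.Identity.IsAuthenticated;
    }
}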