I have created a web browser in .NET (C#). It is working fine, but I am a little confused about how to manage a few things. Please help me decide how to implement them:
Where/how should cookies be stored in my browser?
Bookmarks?
History?
Pop-up blocker / other browser settings?
I understand that the question is quite general, but even a small suggestion will help me a lot.
Look at how other browsers handle cookies: they store them in the browser's temp (cache) folder.
Bookmarks are just a key/value list (at the very simplest): the key is a URL, the value is the display name. Any place you can store a key/value list (like a database) would work.
History is just bookmarks that are automatically captured and ordered by date.
This depends on what you are using for the web browser control. If you are using the IE ActiveX control, then you are stuck sharing settings with it; on the other hand, that also means you get a fantastic settings UI for free. If you have written your own, then you need to store the settings somewhere, in a database for example.
I have mentioned a database a few times, but I am not thinking of a full MS SQL Server install; rather something like SQLite.
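For instance, here is a minimal sketch of a combined bookmark/history store on top of SQLite, using the Microsoft.Data.Sqlite package (the schema and class names are my own illustration, not a standard):

using Microsoft.Data.Sqlite;

public class UrlStore
{
    private const string ConnString = "Data Source=browser.db";

    // One table serves both purposes: a history entry is a row captured
    // automatically on navigation; a bookmark is a row the user flagged.
    public void Init()
    {
        using (var conn = new SqliteConnection(ConnString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText = @"CREATE TABLE IF NOT EXISTS entries (
                url TEXT NOT NULL,
                title TEXT,
                is_bookmark INTEGER NOT NULL DEFAULT 0,
                visited_at TEXT NOT NULL)";
            cmd.ExecuteNonQuery();
        }
    }

    public void Add(string url, string title, bool isBookmark)
    {
        using (var conn = new SqliteConnection(ConnString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText = "INSERT INTO entries (url, title, is_bookmark, visited_at) " +
                              "VALUES ($url, $title, $bookmark, datetime('now'))";
            cmd.Parameters.AddWithValue("$url", url);
            cmd.Parameters.AddWithValue("$title", title ?? "");
            cmd.Parameters.AddWithValue("$bookmark", isBookmark ? 1 : 0);
            cmd.ExecuteNonQuery();
        }
    }
}

The history view is then a single query: SELECT url, title FROM entries ORDER BY visited_at DESC.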
I'm trying to save time at work on a lot of tedious copy/paste tasks we have.
So, we have a proprietary CRM (with proper HTML IDs, etc. for accessing elements) and I'd like to copy those values from the CRM to textboxes on other web pages (outside of the CRM, so sites like Twitter, Facebook, Google, etc.).
I'm aware browsers limit this for security, and I'm open to anything: it could be a C#/C++ application, Adobe AIR, etc. We only use Firefox at work, so even an extension would work. (We do have GreaseMonkey installed, so if that's usable too, sweet.)
So, any ideas on how to copy values from one web page to another? Ideally, I'm looking to click a button and have it auto-populate the fields. If that button has to launch the web pages being copied to, that's fine.
Example: copy a customer's username from our CRM and paste it into Facebook's username field when creating a new account.
UPDATE: To answer a user below: the HTML elements on each domain have specific HTML IDs. The data won't need to be manipulated or cleaned up; it's just a simple copy from ourCRM.com to facebook.com / twitter.com.
Ruby Mechanize is a good bet for scraping the data. Then you can store it and post it however you please.
First, I'd suggest that you more clearly define exactly what it is you're looking to do. I read this as: you're trying to take some unstructured data from Point A and copy it to Point B. Do the names of these fields remain constant every time you do the operation? Do you need to simply pull all textbox elements from the page and copy them over? Do you need to do some sort of filtering of the data before writing it over?
Once you've got a clear idea of the requirements, if you go the C# route, I'd use something like SimpleBrowser. Judging by the example on their GitHub page, you could give it the URL of the page you're looking to copy from, name each of the fields whose value you want, store those values in an IDictionary, then open a new URL and copy the values back into the page (and submit the form).
Alternatively, if you don't know the names of the fields, perhaps there's a provided function in that or a similar project that will allow you to simply enumerate all the text fields on the page and retrieve the values for all of them. Then you'd simply apply some logic of your own to filter those options down to whatever is on the destination form.
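To make that concrete, here is a rough C# sketch of the flow. The method names follow SimpleBrowser's README as best I recall, and the element IDs, URLs, and button text are pure assumptions, so treat the whole thing as pseudocode to adapt:

using System.Collections.Generic;
using SimpleBrowser;

var values = new Dictionary<string, string>();
var browser = new Browser();

// Pull the fields out of the CRM page (element IDs assumed).
browser.Navigate("http://ourCRM.com/customer/123");
values["username"] = browser.Find("username").Value;
values["email"] = browser.Find("email").Value;

// Push them into the destination form and submit it.
browser.Navigate("http://www.facebook.com/r.php");
foreach (var pair in values)
    browser.Find(pair.Key).Value = pair.Value;
browser.Find(ElementType.Button, FindBy.Text, "Sign Up").Click(); // assumed button text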
So we thought of an easier way to do this (in case anyone else runs into this issue).
1) From our CRM, we added a "Sign up for Facebook" button
2) The button opens a new window with GET variables in the URL
3) Use a greasemonkey script to read those GET variables and fill in textbox values
4) SUCCESS!
Simple, took about 10 minutes to get working. Thanks for your suggestions.
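For step 2, the server-side code that builds the pop-up URL might look roughly like this (the customer object, field names, and target page are illustrative only):

using System.Web;

// Pass the customer's values as GET variables; the GreaseMonkey script
// on the destination page reads them back out of location.search.
string url = "http://www.facebook.com/r.php"
    + "?username=" + HttpUtility.UrlEncode(customer.Username)   // customer is hypothetical
    + "&email=" + HttpUtility.UrlEncode(customer.Email);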
I have looked for answers to this question, but I am not sure if I am asking it right.
I am looking for what developers do in this situation:
I am developing ASP.NET C# applications. I have CSS and script files, and I am using jQuery. I install my application on the web servers (or I have my customer install them). If I have made any changes to my script files, such as adding some new jQuery, my customers don't get that effect after I do an update. I assume the reason is that their browsers cache the files on the local computer and do not download the new files from the server.
In my development environment I clear the cache when I close the browser, and in IE I tell it in the options to always load from the server. That way I never have cached data when developing.
What is the best practice to make sure that, if I do make changes, those files get refreshed on the client computers after an update? Is there something I can do in code?
I really don't want to change the filename and update all my script references.
Thanks,
Cory
The traditional way is to append a query string argument to the end of the reference to the css/script file path. For example, if you append a build number as the query string, each version of the software will make its own request for the relevant resource.
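For example, a minimal sketch (the script path is illustrative) that uses the assembly version as the build number:

// In the page codebehind or a helper: version the script reference so
// each build of the application produces a distinct URL.
string version = System.Reflection.Assembly
    .GetExecutingAssembly().GetName().Version.ToString();
string scriptTag = "<script src=\"/Scripts/site.js?v=" + version + "\"></script>";

Browsers treat site.js?v=1.0.0.1 and site.js?v=1.0.0.2 as different resources, so the updated file is downloaded after each deployment without you renaming anything.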
We have a company intranet, and the powers that be think it would be nice to have a collection of icons/links representing the applications that most reps use (Outlook, Excel, and a few other apps).
The idea would be that if the application is installed, clicking the link/icon would launch the application on the client machine.
Has anyone ever had a requirement like that and been successful implementing it?
Wanted to reach out to everyone before I go back and say no. Thanks in advance for any replies.
Make each button a link to download a company template file for the given application. For example, the "Excel" button would download an .xls template file, and the user would be prompted to open it with Excel.
For instance, try clicking one of the links here:
http://www.google.com/#sclient=psy&hl=en&q=template+filetype:xls
Linking to static files on the web server should be sufficient, so long as your server sends the correct MIME-Type or Content-Type.
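If you need more control than a static file gives you, a minimal ASP.NET handler can set the Content-Type explicitly (the file name and path are assumptions for illustration):

// TemplateDownload.ashx codebehind -- serves the Excel template with an
// explicit MIME type so the browser hands it off to Excel.
public class TemplateDownload : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "application/vnd.ms-excel";
        context.Response.AppendHeader("Content-Disposition",
            "attachment; filename=company-template.xls");
        context.Response.WriteFile(context.Server.MapPath("~/templates/company-template.xls"));
    }

    public bool IsReusable { get { return true; } }
}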
You will probably have to adjust browsers' security settings to allow them to follow the links, but you should be able to use URLs of the form file:///C:\\Program Files\\Notepad.exe. (You might prefer to use normal slashes / instead, so you don't have to watch out for how many copies of \ are needed to get past quoted-string interpretation in whatever you're using to design the web page(s).)
Using IE on a local intranet, we have implemented this with an ActiveX control. Josh Pearce's solution works for the types of apps that have associated MIME types, but not all apps you may wish to open would work this way.
Using C# (or VB if needed), I am setting up a simple automated browser program. Right now I am doing this via WatiN on my Windows 7 desktop, and the browser I am automating is IE. Ideally I would like to keep it as is, in C# using WatiN, but I am flexible.
Each time I run the program I would like to delete the cookies, which is simple to do with WatiN. The problem I have is deleting the Flash cookies.
I know you can delete the cookies manually here, but I'd like to figure out a way to do it programmatically: FLASH COOKIES SETTINGS
Also, here is a great paper on Flash Cookies and Privacy
Please let me know if I left anything out, or what I can do to make this question as clear as possible.
Take a look at this blog post for a simple batch file that does this: http://www.ardamis.com/2010/07/07/how-to-delete-flash-cookies/ . This can easily be converted to C# (or you can shell out to the batch file) and executed from your test suite init.
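A minimal C# version of that batch file might look like this (the two directories are the standard per-user Flash Player LSO locations; verify them on your machine before trusting this):

using System;
using System.IO;

// Delete Flash "Local Shared Objects" (.sol files) from the standard
// per-user storage folders.
static void DeleteFlashCookies()
{
    string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    string[] lsoDirs =
    {
        Path.Combine(appData, @"Macromedia\Flash Player\#SharedObjects"),
        Path.Combine(appData, @"Macromedia\Flash Player\macromedia.com\support\flashplayer\sys"),
    };
    foreach (string dir in lsoDirs)
    {
        if (!Directory.Exists(dir)) continue;
        foreach (string sol in Directory.GetFiles(dir, "*.sol", SearchOption.AllDirectories))
            File.Delete(sol);
    }
}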
Use SOLReader to delete the relevant LSO data.
This may be some "best practices" thing I've overlooked or don't know about, so go easy on me please.
I have an ASP.NET website that populates a GridView with columns from my database table. One of those columns gets processed into a link to a Word document on another server. The issue is that if a user clicks the Word document to view it, and that document is then updated on the remote server, the user cannot access the changed document until their browser cache is cleared and the browser is forced to go out to the network to grab a fresh copy when the link is clicked.
Basically I want to somehow force the machine never to use the cached copy of the document, but always go out to the network to get the newest copy.
Bonus question: Would this be better handled somehow by storing the documents in SharePoint?
UPDATE: Using Response.Cache.SetCacheability(HttpCacheability.NoCache); in my codebehind, I have now resolved the issue in Firefox, but IE8 is weird. If I update the document and then left-click on it, it brings up the Word doc in the IE window without the changes. However, if I make changes, save them, and then middle-click on the document so it opens in a new tab, the document reflects the changes. I'm mostly there...
Try adding a little extra data to the link. Here's an example using JS; if you're building the URL server-side, it should be essentially the same:
var url = "http://www.mydomain.com/mywordfile.doc?ts=" + (new Date()).getTime();
That'll force the URL to have a different query string each time, which (in theory) should force the browser to re-request and re-download it.
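The server-side C# equivalent is essentially the same (path illustrative):

// Append the current ticks so every rendered link is unique.
string url = "http://www.mydomain.com/mywordfile.doc?ts=" + DateTime.UtcNow.Ticks;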
By chance are you seeing this with IE8 specifically? We've seen it show this behavior where caching was previously not an issue.
Typically it can be cleared up with a couple of steps: explicitly telling the browser not to cache via HTTP headers, and also expiring the page immediately. Google the "pragma no-cache" header; there are typically a couple of different lines you need to add to cover all browsers.
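In ASP.NET codebehind, the belt-and-braces version looks something like this (a sketch; adjust to taste):

// Tell browsers and proxies not to cache this response.
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); // already expired
Response.AppendHeader("Pragma", "no-cache");               // older HTTP/1.0 agents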