I am developing a web application in C#/.NET. For images on my site I am using relative paths. However, to improve the performance of my site, I am looking to introduce cookieless domains for the images.
In most situations I can just add the domain to the images.
But in certain scenarios I can't, and need to do it at run time. So I am looking at introducing some code to resolve the URLs. I have a couple of options for doing this.
A method in the base page that loops through all the controls and appends the domain to every control that inherits from System.Web.UI.WebControls.Image, if the domain is not already present.
Or do something similar in an HTTP module (is this possible?).
Will doing the above slow down my site's rendering? I don't want this to be counterproductive!
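For illustration, the base-page option might look roughly like the following minimal sketch; the static domain name and the OnPreRender hook are assumptions, not a definitive implementation:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class BasePage : Page
{
    // Hypothetical cookieless/static domain; replace with your own.
    private const string ImageDomain = "http://static.example.com";

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        PrefixImageUrls(this);
    }

    private static void PrefixImageUrls(Control parent)
    {
        foreach (Control child in parent.Controls)
        {
            var image = child as Image;
            if (image != null && !image.ImageUrl.StartsWith("http", StringComparison.OrdinalIgnoreCase))
            {
                // ResolveUrl turns "~/images/logo.png" into an app-rooted path first.
                image.ImageUrl = ImageDomain + image.ResolveUrl(image.ImageUrl);
            }

            // Recurse into panels, user controls, etc.
            if (child.HasControls())
            {
                PrefixImageUrls(child);
            }
        }
    }
}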
Either of the above approaches should work OK for .NET controls.
But any ideas how I might append the domain to plain HTML img tags and/or any images referenced within my stylesheets? I can probably just set the domain on the img tags in code, but I'm not sure about the stylesheets.
You can consider using Response.Filter too.
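A rough sketch of that approach, assuming UTF-8 HTML and a hypothetical static domain; a production filter would also need to buffer across chunk boundaries and skip non-HTML responses:

using System.IO;
using System.Text;
using System.Text.RegularExpressions;

// Rewrites root-relative src="/..." attributes to point at the static domain.
public class ImageDomainFilter : MemoryStream
{
    private readonly Stream _inner;
    private const string ImageDomain = "http://static.example.com"; // hypothetical

    public ImageDomainFilter(Stream inner)
    {
        _inner = inner;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Simplified: assumes an attribute never straddles two Write calls.
        string html = Encoding.UTF8.GetString(buffer, offset, count);
        html = Regex.Replace(html, "src=\"/(?!/)", "src=\"" + ImageDomain + "/");
        byte[] output = Encoding.UTF8.GetBytes(html);
        _inner.Write(output, 0, output.Length);
    }

    public override void Flush()
    {
        _inner.Flush();
    }
}

// Hooked up from an HttpModule or Global.asax:
// Response.Filter = new ImageDomainFilter(Response.Filter);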
A little while back, one of the junior developers at our company was tasked with creating a website for users to enter timesheets offsite. Mostly this is used for staff that reside offshore and have limited bandwidth (it's satellite internet, so we're already looking at a 500ms - 600ms response time, typically with only 10KB/s or less, including 10% - 20% intermittent packet loss).
So it's a challenging situation...
Recently I've been tasked with helping the junior improve the speed and functionality of the website, mostly for my own benefit, since I'm usually a desktop dev. One thing I've noticed is that the website is using MultiView, and I'm wondering if that's the best approach. I can see the reasoning: download the entire website once, then just make queries back and forth, showing/hiding the various views as necessary. Except it doesn't seem to work as smoothly as that.
95% of operations require a round trip to the server; e.g. adding a new timesheet means telling the server, which in turn creates a new entry in the database. When the server is done, it seems to cause the client to download the entire webpage again, which is obviously counterproductive.
So my question(s) are as follows:
Is this the expected behaviour, given the above situation? i.e. Should the entire webpage be getting re-downloaded once the server has completed its actions?
If so, is this the best approach for the situation? Would it be better to have smaller, individual pages for the various features (timesheets/leave/etc.)?
I know this is probably a bit opinion-based, but any ideas or assistance are greatly appreciated, for both our benefit.
Going from memory, MultiView only renders one of the views, not all of them, but since you mention MultiView, that tells me you are using the older WebForms technology, which often carries a large amount of overhead saving and restoring state. You can try to optimize that, especially if you are using some kind of grid control.
A better approach may be to ditch WebForms and switch to a newer technology like MVC. Rewrite the application to use AJAX with a web service that returns JSON whenever possible, to reduce the amount of data that needs to be sent to and from the server. Using MVC will also reduce the number of resources required for a page load (no resource.axd, etc.), which will help page load times, especially over high-latency links.
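For example, a minimal sketch of an MVC-style endpoint that saves a timesheet entry over AJAX and returns only JSON; the controller, action, and model names here are hypothetical:

using System.Web.Mvc;

// Hypothetical controller: the client posts a small payload and gets a small
// JSON payload back, instead of re-downloading a full WebForms page.
public class TimesheetController : Controller
{
    [HttpPost]
    public ActionResult SaveEntry(TimesheetEntry entry)
    {
        int newId = SaveToDatabase(entry); // assumed data-access call
        return Json(new { success = true, id = newId });
    }

    private int SaveToDatabase(TimesheetEntry entry)
    {
        // ...persist the entry and return its new ID...
        return 42;
    }
}

// Hypothetical view model bound from the posted data.
public class TimesheetEntry
{
    public string Project { get; set; }
    public decimal Hours { get; set; }
}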
Make sure the server is set to compress dynamic pages with GZIP.
Compress and minify your JavaScript and CSS.
Don't use inline styles (the style attribute) in your HTML (use classes or IDs plus child selectors) to reduce HTML size.
Bundle all your JavaScript and CSS (a bundling sketch follows this list).
Sprite your images in CSS where possible.
Run your images through a good image optimizer like http://kraken.io
Make sure you are caching whatever you can, and the cache duration is set properly.
Minify your HTML.
Stop using WebForms (or watch your view state and control state very closely).
Check into some of the SPA architectures out there -- you may be able to make the whole application "offline-able" with the exception of the calls to get/update/create data.
Ultimately, each page should only require 1 HTML file, 1 CSS file, 1 Javascript file, and 1 sprite sheet on the first page hit, and then every page after that should only require a single HTML file.
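If the site stays on ASP.NET, bundling and minification can be handled with System.Web.Optimization; a minimal sketch, where the bundle names and file paths are assumptions:

using System.Web.Optimization;

// Registered once from Application_Start: BundleConfig.RegisterBundles(BundleTable.Bundles);
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One minified script bundle per page load (paths are hypothetical).
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        // One minified stylesheet bundle.
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Force minification even when <compilation debug="true"> is set.
        BundleTable.EnableOptimizations = true;
    }
}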
You might also want to look into using a client-side library like Angular or Knockout to handle rendering views. This can reduce the amount of traffic that needs to be sent (although it will likely increase the number of requests by one).
I think the best bet is an SPA (Single Page App) with AngularJS. Done right, it greatly reduces the number of HTTP requests. Navigation never causes an entire page reload. JavaScript files, CSS files, etc. are loaded just once at app load time. Once the app is loaded in the browser, the traffic is mainly JSON being sent back and forth.
There are some tricks you should apply to reduce app load time:
Bundle your JavaScript files into just one minified JavaScript file.
Bundle your CSS files into just one CSS file.
Leverage the HTTP cache. You can use file versioning combined with the max-age HTTP header, so the browser does not even ask the server whether the file has changed.
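One programmatic way to do that on the ASP.NET side (it is often just IIS/web.config configuration instead) is a handler for versioned static files that sets a far-future max-age; the handler itself is hypothetical:

using System;
using System.Web;

// Serves versioned static files (e.g. /static/app.v123.js) with a one-year max-age;
// this is safe because the file name changes whenever the content changes.
public class VersionedStaticHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath(context.Request.Path);

        context.Response.ContentType = MimeMapping.GetMimeMapping(path);
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
        context.Response.TransmitFile(path);
    }
}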
Some tools to help:
Fiddler: look at what is being cached and what isn't.
Facebook's Augmented Traffic Control.
To my understanding, AJAX would be the best choice for you. If you access the server for 95% of operations and reload the whole page with the new elements each time, performance will suffer.
So instead of doing this, do partial reloads with AJAX via jQuery. There is plenty of functionality available in jQuery that uses AJAX to reload a specific portion of the webpage instead of the whole page. It will improve performance a lot.
One more thing I would like to add is that the response coming from the server might be a huge chunk of data. So instead of sending the server's response as-is, implement GZIP compression in the website (a sketch follows below). It compresses the response, so the page will load/reload much faster.
Other than that, place your CSS and JS code in .css and .js files instead of inline in the page itself (and reuse those files across as many pages as possible). The browser will cache those files and reuse them instead of downloading them every time it connects to the server.
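A minimal sketch of GZIP-compressing dynamic responses from Global.asax, assuming IIS-level dynamic compression is not available (when it is, configuration alone achieves the same thing):

using System;
using System.IO.Compression;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        HttpResponse response = HttpContext.Current.Response;

        // Only compress when the client advertises gzip support.
        string acceptEncoding = request.Headers["Accept-Encoding"] ?? string.Empty;
        if (acceptEncoding.Contains("gzip"))
        {
            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "gzip");
            response.Cache.VaryByHeaders["Accept-Encoding"] = true;
        }
    }
}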
I believe that you have already figured out what's wrong. No, MultiView is not good if it is implemented as-is without tweaks. If your website uses view state and on top of that you have the MultiView implemented, then it is going to be a costly affair.
Here are your options.
To get the most out of the existing code, I would recommend converting your methods into HTTP GET/POST endpoints that can then be called separately from the relevant actions in the HTML.
Don't re-render the entire page; only render the content that changes on a menu action.
Change the non-changing part of your page / site to static content and apply compression on the static contents.
Enable page caching.
Cache the data offline wherever possible. (Remember that it comes with the overhead of syncing data.)
If you are considering a revamp, give some thought to the HTML5 offline features.
I'm developing a C# replacement for a legacy VB app for my company. The front end is basically a Web Browser control inside of a Windows form, serving offline content which is sometimes altered to include the user's data. Because there are 100 or more web files in the legacy app, we are going to reuse the web UI from the old application with a new C# wrapper around it, modifying them as needed.
My questions are about how to store and deliver the web content.
Does it make sense to copy the web files to a temporary folder and point the Web Browser control to the file:// address of the temporary folder?
Is there some kind of pre-built offline-friendly server framework that makes more sense than copying the files to a temporary folder?
I have the web source files in my project as resources, but I'm not sure if that is appropriate for my uses. Is it?
The legacy VB implementation alters the web files to inject data using Substring methods; it searches for magic strings and replaces them with the appropriate data. That code smells pretty bad; is there a better, more native data-injection strategy I should look at?
Some background:
The data is presented using HTML/CSS/JS and also sometimes XSL.
The browser delivers content that is available at compile time.
I'm going to have to handle some events using C# code when users click on buttons on the page.
I'm free to choose whatever approach is necessary to implement the application.
Hosting
I would probably avoid using a temporary location for the web content; it just seems a little crude. If there is no internal linking between your HTML pages and all the CSS/JS is embedded in one file, it may be easier to just use the WebBrowser.DocumentText property.
Another option I have used successfully as a lightweight embedded web server is logv-http; it has a pretty easy-to-configure syntax. If you want to listen on anything other than localhost it does require administrator privileges, but it sounds like everything will be local.
var server = new Server("localhost", 13337);
server.Get("http://localhost:13337", (req, res) => res.Write("Hello World!"));
server.Start();
Templating
I think the string replaces aren't necessarily bad; it depends on how many there are and how complicated they are trying to be, but for a simple find-and-replace it shouldn't be too hard to manage. If there are lots of replaces, wrapping them into a Regex should help performance.
Storing the web content as embedded resources is probably how I would go; that way you can read the files out at run time, do your pre-processing, and then return them either via the web server method or directly into DocumentText.
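A rough sketch of that flow, reading an embedded HTML resource and replacing tokens with a regex; the resource name, the {{Token}} format, and the value dictionary are assumptions (the legacy app's existing magic strings would work just as well):

using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Text.RegularExpressions;

public static class TemplateLoader
{
    // Reads an embedded HTML page and replaces {{TokenName}} placeholders.
    public static string Render(string resourceName, IDictionary<string, string> values)
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        using (Stream stream = assembly.GetManifestResourceStream(resourceName))
        using (StreamReader reader = new StreamReader(stream))
        {
            string html = reader.ReadToEnd();
            return Regex.Replace(html, @"\{\{(\w+)\}\}", match =>
            {
                string value;
                return values.TryGetValue(match.Groups[1].Value, out value)
                    ? value
                    : match.Value; // leave unknown tokens untouched
            });
        }
    }
}

// Usage (names are hypothetical):
// webBrowser1.DocumentText = TemplateLoader.Render(
//     "MyApp.Web.Report.html",
//     new Dictionary<string, string> { { "UserName", "jsmith" } });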
I'm importing classic ASP pages into a new Sitefinity installation. Unfortunately, the existing site makes extensive use of URL rewriting via Helicon ISAPI Rewrite 3.
I'm generating the list of pages that need to be imported by crawling the navigation menus in the old site. These are, unfortunately, not dynamically generated from any sort of central repository, so the best way I've found to build the site hierarchy is to crawl the site.
When creating page nodes in the Sitefinity nav hierarchy to hold the content from the old pages, I need to be able to create the new pages at a location roughly equivalent to their location in the file system of the old site. However, the rewrite rules make this difficult to determine. For instance, I may get a link from parsing the old HTML like:
http://www.mysite.com/product_name
which is rewritten (not redirected) to
http://www.mysite.com/products/product_name/product_root.asp
I need a way to get the second URL from the first. The first thing that comes to mind is to somehow use the .htaccess file to resolve the URLs, get the result, and use that for the rest of the import process.
Is there a way to do this from a WinForms app without having to involve a web server? I realize that I could modify one of the ASP includes, such as the page footer, to emit a comment containing the rewritten URL of each page, but I'd rather not make unnecessary changes to the existing code if it can be avoided.
Update
For example,
http://www.keil.com/arm/
rewrites to
http://www.keil.com/products/arm/mdk.asp
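If the rules are plain RewriteRule lines, the .htaccess idea could be sketched roughly as below; it deliberately ignores RewriteCond blocks, flags, and ISAPI_Rewrite-specific directives, so treat it only as a starting point:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

public static class HtaccessMapper
{
    // Collects "RewriteRule <pattern> <substitution> [flags]" lines, nothing else.
    public static List<Tuple<Regex, string>> LoadRules(string htaccessPath)
    {
        var rules = new List<Tuple<Regex, string>>();
        foreach (string line in File.ReadAllLines(htaccessPath))
        {
            Match m = Regex.Match(line.Trim(), @"^RewriteRule\s+(\S+)\s+(\S+)");
            if (m.Success)
            {
                rules.Add(Tuple.Create(
                    new Regex(m.Groups[1].Value, RegexOptions.IgnoreCase),
                    m.Groups[2].Value));
            }
        }
        return rules;
    }

    // Returns the rewritten path for the first matching rule, or the original path.
    // $1-style backreferences in the substitution carry over, since .NET uses the same syntax.
    public static string Map(string path, List<Tuple<Regex, string>> rules)
    {
        foreach (var rule in rules)
        {
            if (rule.Item1.IsMatch(path))
            {
                return rule.Item1.Replace(path, rule.Item2);
            }
        }
        return path;
    }
}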
I'm working on an existing large site that uses query strings with an ID for the different sections (representing physical stores) of the website.
I'd like to be able to implement PathInfo requests for SEO purposes, so I'm looking at URLs like:
http://www.domain.com/cooking-classes.aspx?ID=5 (where 5 would be the ID of the local store)
Is there a way to make this type of URL work?
http://www.domain.com/cooking-classes.aspx?ID=5/chocolate ? I can get the content to work without the query string; however, the existing infrastructure needs the ID to run. I tried:
http://www.domain.com/cooking-classes.aspx/chocolate?ID=5, however the ID comes back incorrectly.
Using http://www.domain.com/cooking-classes.aspx/5/chocolate would mean a rewrite of the page-handling engine.
Am I clutching at straws here? Is there no real way to get PathInfo and QueryString to play nicely with each other?
I'd like to stay away from any IIS mods as we don't have access.
Your last URL is going to yield the best result for search engines; however, you may want to drop the .aspx. You will need to write an HttpHandler or HttpModule to accomplish this. It's actually not as much work as it may seem, and you don't have to change your page at all. Your HttpHandler can do a behind-the-scenes rewrite, preserving the URL. Check out this article on MSDN:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
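For reference, the core of such a module is usually just a pattern match plus Context.RewritePath; here is a stripped-down sketch, where the URL pattern and target page are assumptions based on your examples:

using System.Text.RegularExpressions;
using System.Web;

// Maps /cooking-classes/5/chocolate to cooking-classes.aspx?ID=5 behind the scenes,
// so the browser keeps the friendly URL.
public class FriendlyUrlModule : IHttpModule
{
    private static readonly Regex Pattern =
        new Regex(@"^/cooking-classes/(\d+)/([^/]+)/?$", RegexOptions.IgnoreCase);

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            Match match = Pattern.Match(app.Request.Path);
            if (match.Success)
            {
                app.Context.RewritePath(
                    "~/cooking-classes.aspx",
                    "/" + match.Groups[2].Value,       // becomes Request.PathInfo
                    "ID=" + match.Groups[1].Value);    // becomes the query string
            }
        };
    }

    public void Dispose() { }
}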
If you don't need anything super specific, you could use an existing HttpModule like the one mentioned in the post on ScottGu's blog:
http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx
He mentions UrlRewriter.net which is open source:
http://urlrewriter.net/
What is the best way to programmatically transform large batches of very similar web pages into a newer CSS-based layout?
I am changing all the contents of an old website over to a new CSS-based layout. Many of the pages are very similar, and I want to be able to automate the process.
What I am currently thinking of doing is to read the pages in using HtmlAgilityPack, and make a method for each group of similar pages that will create the output text.
What do you think is the best way to do this? The pages mostly differ in things like which .jpg file is used for the image, or how many heading-image-text groups there are on that particular page.
EDIT: I cannot use any other file type than .html, as that is all I am authorized to do. Any suggestions?
EDIT 2: Ideally, I would also like to make this generic enough that I could use it for many different groups of HTML files by just switching around a few moving parts.
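A minimal sketch of the HtmlAgilityPack side, pulling out the image and the heading/text groups so a per-group method can re-emit them in the new layout (the XPath expressions and class names are guesses that would need adjusting to the real markup):

using System;
using HtmlAgilityPack;

public static class PageExtractor
{
    public static void Extract(string path)
    {
        var doc = new HtmlDocument();
        doc.Load(path);

        // The page's main image (the XPath is a guess; adjust to the real markup).
        HtmlNode image = doc.DocumentNode.SelectSingleNode("//div[@class='content']//img");
        string imageSrc = image != null ? image.GetAttributeValue("src", "") : "";
        Console.WriteLine("Image: " + imageSrc);

        // Each heading of a heading-image-text group (again, a guess at the structure).
        HtmlNodeCollection headings = doc.DocumentNode.SelectNodes("//div[@class='content']//h2");
        if (headings != null)
        {
            foreach (HtmlNode heading in headings)
            {
                Console.WriteLine("Group heading: " + heading.InnerText.Trim());
            }
        }

        // ...feed these values into the method that writes out the new CSS-based page...
    }
}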
Sounds like you should be reusing code. If you are using strictly HTML, I would consider doing PHP- or ASP-based webpages instead. That way, you can create Header/Content/Footer/Nav sections and reuse the same code across all your webpages.
This would make it a lot more sustainable, as you would only need to edit one file in the future.
What about using Server Side Includes (SSI), i.e. <!--#include -->?
This way you can create different parts of your webpage in different files and just "include" them in any other page you want.
header.html
body.html
footer.html
<html>
<!--#INCLUDE file="header.html" -->
<!--#INCLUDE file="body.html" -->
<!--#INCLUDE file="footer.html" -->
</html>
More info here