I have an ASP.NET application that takes a long time to load initially; after the first load, the page loads much faster.
My page has an image gallery that is loaded based on the category selected, via AJAX. When I click a particular category, the gallery for it is loaded with an AJAX request. The problem is that the first AJAX request for a category takes a long time; the second time I access the same category, it loads much faster.
I have not enabled server-side or client-side caching, so what actually happens behind the scenes? My assumption is that when a file is read from disk for the first time it gets cached in memory, and the second time it is served from memory. Is that assumption correct? So my questions are:
1. Does the OS cache the file in its disk cache on the first read?
2. If not, what is actually slowing down the first request?
3. How can I resolve this? Is there an IIS setting or a page-level setting for it?
Please help.
Try deploying a precompiled solution to the server:
http://msdn.microsoft.com/en-us/library/ms228015(v=vs.85).aspx
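The first-request delay is mostly ASP.NET dynamically compiling each page and handler the first time it is hit (and yes, as you guessed, the OS does cache file reads, but compilation is usually the dominant cost); precompiling moves that cost to deploy time. A minimal sketch using the aspnet_compiler tool that ships with the .NET Framework, with illustrative paths:

    aspnet_compiler -v /MySite -p "C:\Source\MySite" "C:\Deploy\MySitePrecompiled"

Deploy the output folder instead of the source, and the first hit no longer pays the compilation cost.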
A little while back, one of the junior developers at our company was tasked with creating a website for users to enter timesheets offsite. Mostly this is used for staff that reside offshore and have limited bandwidth (it's satellite internet, so we're already looking at a 500ms - 600ms response time, typically with only 10KB/s or less, including 10% - 20% intermittent packet loss).
So it's a challenging situation...
Recently I've been tasked with helping the junior improve the speed and functionality of the website, mostly for my own benefit, since I'm usually a desktop dev. One thing I've noticed is that the website uses MultiView, and I'm wondering if that's the best approach. I can see the reasoning: download the entire website once, then just make queries back and forth, showing/hiding the various views as necessary. Except it doesn't seem to work as smoothly as that.
95% of operations require a round trip to the server; e.g. adding a new timesheet means telling the server, which in turn creates a new entry in the database. When the server is done, it seems to cause the client to download the entire webpage again, which is obviously counterproductive.
So my question(s) are as follows:
Is this the expected behaviour, given the above situation? i.e. Should the entire webpage be getting re-downloaded once the server has completed its actions?
If so, is this the best approach for the situation? Would it be better to have smaller, individual pages for the various features (timesheets/leave/etc.)?
I know this is probably a bit opinion based, but any ideas or assistance is greatly appreciated; for both our benefits.
Going from memory, MultiView only renders one of the views, not all of them; but since you mention MultiView, that tells me you are using the older WebForms technology, which often carries a large amount of overhead saving/restoring state. You can try to optimize that, especially if you are using some kind of grid control.
A better approach may be to ditch WebForms and switch to a newer technology like MVC. Rewrite the application to use AJAX with a web service that returns JSON whenever possible, to reduce the amount of data that needs to be sent to and from the server. Using MVC will also reduce the number of resources required for a page load (no resource.axd, etc.), which will help page load times, especially over high-latency links.
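As a rough idea of the shape this takes, here is a minimal MVC sketch; the controller, repository, and route are illustrative, not the poster's actual code:

    using System.Web.Mvc;

    public class TimesheetController : Controller
    {
        // GET /Timesheet/List?week=42
        // Returns only the rows as JSON -- a few hundred bytes over that
        // 10KB/s satellite link, instead of a full page plus ViewState.
        public JsonResult List(int week)
        {
            var rows = TimesheetRepository.GetWeek(week); // hypothetical data access
            return Json(rows, JsonRequestBehavior.AllowGet);
        }
    }

The client then updates just the affected part of the DOM from the JSON instead of re-downloading the page.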
Make sure the server is set to compress dynamic pages with GZIP.
Compress and minify your javascript and CSS.
Don't use inline styles (the style attribute) in your HTML (use classes or IDs + child selectors) to reduce HTML size.
Bundle all your javascript and CSS (see the bundling sketch after this list).
Sprite your images in CSS where possible.
Run your images through a good image optimizer like http://kraken.io
Make sure you are caching whatever you can, and the cache duration is set properly.
Minify your HTML.
Stop using WebForms (or watch your page state and control state very closely)
Check into some of the SPA architectures out there -- you may be able to make the whole application "offline-able" with the exception of the calls to get/update/create data.
Ultimately, each page should only require 1 HTML file, 1 CSS file, 1 Javascript file, and 1 sprite sheet on the first page hit, and then every page after that should only require a single HTML file.
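To make the bundling items above concrete, here is a minimal sketch using the System.Web.Optimization bundling that ships with ASP.NET MVC 4; the bundle names and file paths are illustrative:

    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // All scripts in one request, all styles in another; both are
            // minified automatically when optimizations are enabled.
            bundles.Add(new ScriptBundle("~/bundles/site").Include(
                "~/Scripts/jquery-1.10.2.js",
                "~/Scripts/app.js"));

            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));

            // Force bundling/minification even when compilation debug="true".
            BundleTable.EnableOptimizations = true;
        }
    }

Call RegisterBundles from Application_Start and emit the bundles with Scripts.Render/Styles.Render in your layout; that gets you to the one-JS-file, one-CSS-file target above.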
You might also want to look into using a client-side library like Angular or Knockout to handle rendering views. This can reduce the amount of traffic that needs to be sent (although it will likely increase the number of requests by one).
I think the best bet is a SPA (Single Page App) with AngularJS. Done right, it greatly reduces the number of HTTP requests. Navigation does not cause an entire page reload in any case. JavaScript files, CSS files, etc. are loaded just one time, at app load time. Once the app is loaded in the browser, the traffic is mainly JSON going back and forth.
There are some tricks you should apply to reduce app load time:
Bundle your JavaScript files into just one minified JavaScript file.
Bundle your CSS files into just one CSS file.
Leverage the HTTP cache. You can use file versioning combined with a far-future max-age header, so the browser does not even ask the server whether the file has changed (see the sketch after this list).
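If you are not using a bundling framework (which versions its URLs for you), one hedged way to do the versioning in ASP.NET is a small helper that stamps each static URL, so the file can be cached with a far-future max-age and still update on deploy; the helper name and version source here are illustrative:

    using System.Web;

    public static class StaticUrl
    {
        // Appends the app's assembly version as a query string: the URL
        // changes on every deploy, so clients may cache the old one forever.
        public static string Versioned(string virtualPath)
        {
            string version = typeof(StaticUrl).Assembly.GetName().Version.ToString();
            return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
        }
    }

Serve the files with a long Cache-Control: max-age and the browser will not re-request them until the version changes.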
Some tools to help:
Fiddler: look at what is being cached and what isn't.
Facebook's Augmented Traffic Control: simulate the slow, lossy link while you test.
To my understanding, AJAX would be the best choice for you. If you hit the server for 95% of operations and reload the whole page with the new elements each time, performance will suffer.
So instead of doing that, do partial reloading with AJAX via jQuery. jQuery has plenty of functionality that uses AJAX to reload a specific portion of the webpage instead of the whole page; it will increase performance a lot (a server-side sketch follows below).
One more thing I would like to add: the response coming from the server might be a huge chunk. So instead of sending the raw response from the server, implement GZIP compression in the website. It compresses the response, and the page will load/reload much faster.
Other than these, place your CSS and JS code in .css and .js files instead of inside the page itself (and reuse them from as many pages as possible). The browser will cache those files and reuse them instead of downloading them every time it connects to the server.
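For the partial-reloading point, a minimal sketch of the server side, assuming ASP.NET MVC (the controller, view, and repository names are hypothetical):

    using System.Web.Mvc;

    public class GridController : Controller
    {
        // Returns just the HTML fragment for the timesheet grid; the client
        // swaps a single <div> via $("#grid").load("/Grid/Timesheets?week=42")
        // instead of reloading the entire page.
        public ActionResult Timesheets(int week)
        {
            var rows = TimesheetRepository.GetWeek(week); // hypothetical data access
            return PartialView("_TimesheetGrid", rows);
        }
    }

jQuery's load() injects the returned fragment into the selected element, which is exactly the partial reload described above.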
I believe you have already figured out what's wrong. No, MultiView is not good if it is implemented as-is, without tweaks. If your website uses ViewState and on top of that you have MultiView, then it is going to be a costly affair.
Here are your options.
To get the most out of the code, I would recommend converting your methods to HTTP GET/POST methods, which can then be called separately from the needed actions in the HTML.
Don't re-render the entire page; render only the content that changes on a menu action.
Change the non-changing parts of your page/site to static content and apply compression to the static content.
Enable page caching (see the sketch after this list).
Cache the data offline wherever possible (remember it comes with the overhead of syncing data).
If you are considering a revamp, give a thought to the HTML5 offline features.
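For the page-caching item, a hedged sketch using MVC's OutputCache attribute (the duration and names are illustrative; WebForms has an equivalent @ OutputCache page directive):

    using System.Web.Mvc;

    public class ReportController : Controller
    {
        // The rendered output is cached on the server for five minutes and
        // varied per week, so repeat hits skip rendering entirely.
        [OutputCache(Duration = 300, VaryByParam = "week")]
        public ActionResult Summary(int week)
        {
            return View(TimesheetRepository.GetWeek(week)); // hypothetical
        }
    }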
I am working on an ASP.NET MVC 4 app that fetches data continuously, and my problem is related to caching.
The problem is that when I click on a particular link in my application it works fine, but sometimes it automatically redirects to the INDEX page, which is the default page.
I searched around about this problem and found that it's an issue with Firefox caching every link. But sometimes something weird happens: it automatically redirects a particular link to the INDEX page (301 Moved Permanently) and also stores that in the cache, such that now every time I click on that link it redirects me to the INDEX page from the cache.
So now I have to clear the cache in my browser every time I face this problem.
How can I make it not automatically redirect to the cached INDEX page?
You should really expand on what exactly is happening at that particular link you mention, because it should not 301 redirect unless you're telling it to.
Also, you say "I fetch data continuously." What does this mean to us? Why is this important to know? Does it change the link or the data? Are you 404ing the older data or something? That could possibly explain why you 301 back to your index.
Now, with the limited information we have been given by you: if you want to prevent Firefox from caching your URLs/redirects, simply make your URL have a query string that updates with each request, like a timestamp.
For example: http://example.com/return-data.asp?timestamp=1350668920
Then each time you continuously fetch data update the page's link accordingly
For example: http://example.com/return-data.asp?timestamp=1350669084
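If the links are generated server-side, the same trick in C# might look like this (the URL is the example above; ToUnixTimeSeconds needs .NET 4.6 or later, otherwise subtract the Unix epoch manually):

    using System;

    class CacheBuster
    {
        static void Main()
        {
            // A fresh timestamp per request, so the browser never reuses a
            // cached redirect for this URL.
            long ts = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
            Console.WriteLine("http://example.com/return-data.asp?timestamp=" + ts);
        }
    }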
I'm working on application development using ASP.NET in C#.
I have a GridView which I want to load at the very end of the page load.
The page should load the master page first, where all images and other functionality are loaded, and only then load the GridView.
Can this be achieved in C# back-end code, like a page life cycle event?
Please advise, thank you in advance.
You can't tell the browser to load something first or last using C#, because loading is about the HTTP request and response, which is basically TCP/IP. You can, however, use AJAX to load some of the contents first and load other contents last by handling JavaScript events like onload.
EDIT
I understood "loading" in this context as loading the page on the client side. If you are talking about loading the page on the server side, yes, you can do that; you need to load contents using page lifecycle events.
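To make the AJAX route concrete for a WebForms GridView, one hedged option is a page method the client calls after onload, so everything else renders first. The type and method names are hypothetical, and the page needs a ScriptManager with EnablePageMethods="true":

    using System.Collections.Generic;
    using System.Web.Services;

    public partial class Timesheets : System.Web.UI.Page
    {
        // Called from JavaScript (PageMethods.GetGridRows(...)) after the
        // window's onload event, so the master page and images arrive first
        // and the grid's data is fetched last.
        [WebMethod]
        public static List<GridRow> GetGridRows()
        {
            return GridRepository.GetRows(); // hypothetical data access
        }
    }

The client-side callback then builds the table, or you can bind a GridView inside an UpdatePanel that you trigger after load.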
Looking for information: I am creating a catalog website that includes a list of products. Each product has an image stored on the hard drive of the server. If the image does not exist, I want to show a default image. What's the best way of doing this? I am using C# and considered checking on the server side whether the image exists, but as some pages could have 50-60 images, this would slow down the page. I use jQuery on the client side. Any tips on this?
This is a great question, as the situation arises in many circumstances. I see several options:
1) check for image availability during rendering of the catalog and use a link to the default image for items that do not have an image,
2) check for image availability in the image controller and return the default image when not available
3) put images inline in the document using data URLs
A major factor here is the possibility of caching.
Option (1) facilitates caching of the default image, but precludes caching of the catalog page. It is better if there are many items without an image, since such items will not even generate a hit to the server. Furthermore, if there's a low chance that an image will appear for an item, you could cache the index too (for a reasonably short time).
Option (2) facilitates caching of the index page, but each image will have to send a request to the server (a sketch of this option follows below). Again, you could use aggressive caching to avoid the same requests the second time the page is rendered.
Option (3) is best if your images are small and if the catalog page is relatively static. Be sure to use caching on the server side though while generating the page to reduce the load on the filesystem/database.
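A minimal sketch of option (2), assuming ASP.NET MVC (the route and folder layout are illustrative):

    using System.Web.Mvc;

    public class ImageController : Controller
    {
        // GET /Image/Product/42 -- serves the product's image, or the
        // default image if no file exists for that product.
        public ActionResult Product(int id)
        {
            string path = Server.MapPath("~/Images/Products/" + id + ".jpg");
            if (!System.IO.File.Exists(path))
                path = Server.MapPath("~/Images/default.jpg");
            return File(path, "image/jpeg");
        }
    }

A single File.Exists check per image is typically cheap next to the request itself, and adding OutputCache to the action avoids even that on repeat hits.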
Sounds like this is a web application, so you should look into doing some caching. Even though image file lookups are expensive, once your page gets hit a few times the disk lookups will no longer be necessary.
Or you could store the information about whether a product image exists in your database. Then you prepopulate the database with the information and no disk checks are necessary.
Your best bet is to do this server-side as you suggest. You could do it client-side (attempt to load image, and load a default image if that fails), but this is not really what client-side scripting is designed for. You're making the user do extra HTTP requests, which is slower for the user.
An even better solution, as marcind suggests, is to pre-populate the database with default images. So in your CMS, when you create a new item, it assigns a default image URL to itself. You can then manually change it from there.
How does your jQuery code know the name of the image?
Seeing that your image files are physical files on the server and accessible from a browser, I'd probably leave that part as-is, since that implies you don't have to serve the images yourself and IIS can handle them for you as static files.
So your jQuery code obviously knows the name of the image for each product. I assume this name is given to it by some server-side process, so that process needs to give it either the name of the product's image or the name of the default image.
Some part of your code has to figure out whether an image exists for the product and react accordingly. If you're using a database for your products, you could have a field in the product table that indicates whether the product has an image.
This may be some "best practices" thing I've overlooked or don't know about, so go easy on me please.
I have an ASP.NET website that populates a GridView with columns from my database table. One of those columns gets processed into a link to a Word document on another server. The issue is that if a user clicks on the Word document to view it, and that document is then updated on the remote server, the user cannot access the changed document until their browser cache is cleared and the browser is forced to go out to the network to grab a fresh copy when the link is clicked.
Basically I want to somehow force the machine never to use the cached copy of the document, but always go out to the network to get the newest copy.
Bonus question: Would this be better handled somehow by storing the documents in SharePoint?
UPDATE: using Response.Cache.SetCacheability(HttpCacheability.NoCache); in my code-behind, I have now resolved the issue in Firefox, but IE8 is weird. If I update the document and then left-click the link, it brings up the Word doc in the IE window without the changes. However, if I make changes, save them, and then middle-click the link so it opens in a new tab, the document reflects the changes. I'm mostly there...
Try adding a little extra data to the link. Here's an example using js; if you're building the url server side, it should be essentially the same:
var url = "http://www.mydomain.com/mywordfile.doc?ts=" + (new Date()).getTime();
That'll force the URL to have a different query string each time, which (in theory) should force the browser to re-request and re-download it.
By chance are you seeing this with IE8 specifically? We've seen it show this behavior where caching was previously not an issue.
Typically it can be cleared up with a couple of steps: explicitly telling the browser not to cache via HTTP headers, and also expiring the page immediately. Google the "Pragma: no-cache" header; there are typically a couple of different lines you need to add to cover all browsers, along the lines of the sketch below.
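A hedged example of the usual belt-and-braces set in ASP.NET code-behind (it covers HTTP/1.1 caches plus older HTTP/1.0 proxies and browsers):

    // In the page or handler that serves (or links to) the document:
    Response.Cache.SetCacheability(HttpCacheability.NoCache);  // Cache-Control: no-cache
    Response.Cache.SetNoStore();                               // adds no-store
    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1)); // already expired
    Response.AppendHeader("Pragma", "no-cache");               // for HTTP/1.0 caches

Combined with the timestamp query string above, even IE8's aggressive caching should give in.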