The project contains .aspx.cs, .aspx, .htm, .cs, and other files. As far as I understand, it is a web application project. I am working on a base page named PageBase.cs, which includes features that all other pages will inherit. I want to test how this page works, and I am stuck.
There are no "Start Debugging" or "Run" options; the only one I get is "Attach to a Process". When I attached to a process, VS showed that debugging was ready, but no outcome appeared. I'm not even sure what outcome to expect, so all I could do was stop debugging. The following are links I found in my research; hopefully they are helpful in some way:
https://msdn.microsoft.com/en-us/library/3s68z0b3.aspx
https://msdn.microsoft.com/en-us/library/df5x06h3(v=vs.110).aspx
I know this question is trivial, but I am totally new to .NET. Please help.
Since you've only created a class, you need a way to reach that code. Have one of your pages inherit from the class, and make sure your custom class is wired into the page lifecycle events properly (Page_Load, Init, etc.), depending on when you want the code to execute.
Assuming you set up the inheritance properly and your debugger is attached to the process, your breakpoints in the class will be hit when you access that page and reach the appropriate stages in the page lifecycle.
This is what I did for PageBase.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Test.Lib.Base
{
    public class PageBase : System.Web.UI.Page
    {
        #region Method
        protected override void OnInit(EventArgs e)
        {
            AutocompleteOff();
            base.OnInit(e);
            if (User.Identity.IsAuthenticated)
                ViewStateUserKey = Session.SessionID;
        }

        // Not an override: System.Web.UI.Page has no AutocompleteOff method,
        // so marking this "override" would not compile.
        protected void AutocompleteOff()
        {
            Page.Form.Attributes.Add("autocomplete", "off");
        }
        #endregion
    }
}
Then, for the other pages under the test folder (Body.aspx.cs, for instance), I inherited from PageBase as follows:
public partial class PostLogin : Lib.Base.PageBase
{
    #region Method
    ...
    #endregion
}
The suggested way of using ServiceStack.NET with Silverlight is to use the Linked-Project add-on. This gives you two synchronized projects sharing the same sources: one for Silverlight, one for .NET 3.5+.
But when it comes to validation, this gets a bit annoying.
ServiceStack uses FluentValidation, which is cool, but it has changed the namespace.
So I end up with:
using MyNamespace.Model;
// HERE ----------------------------
#if SILVERLIGHT
using FluentValidation;
#else
using ServiceStack.FluentValidation;
#endif
//TO HERE------------------------
namespace HR.RoBP.Contracts.Validators.Model
{
    public class CustomerValidator : AbstractValidator<Customer>
    {
        public CustomerValidator()
        {
            RuleFor(r => r.Name).NotEmpty().NotNull();
        }
    }
}
This is not much, but it gets really annoying when writing a new validator each time. I often forget it, compile, get errors, and fix it.
I know something was changed in FluentValidation for ServiceStack.NET, but must it be in a separate namespace?
I think it's in ServiceStack's interest to keep code files clean, but using the same validation on client and server forces me to do this.
If there is an elegant way to fix this issue, I would love to hear about it.
You unfortunately can't set a project-wide namespace alias. You could, however, write a template for your validator class that has that boilerplate built in; then you can simply click Add -> New Item -> Your Validator Template.
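For instance, the class file inside such an item template might look like the sketch below. $rootnamespace$ and $safeitemname$ are standard Visual Studio template parameters; the entity type is a placeholder you would fill in after adding the item:

#if SILVERLIGHT
using FluentValidation;
#else
using ServiceStack.FluentValidation;
#endif

namespace $rootnamespace$
{
    // TODO: replace TEntity with the model type to validate
    public class $safeitemname$ : AbstractValidator<TEntity>
    {
        public $safeitemname$()
        {
            // RuleFor(...) rules go here
        }
    }
}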
I have a .cs file delivered from a vendor with a structure like the following:
public partial class Test : System.Web.UI.Page
{
    public void InsertSignature()
    {
        Response.Write("ASDFASDFAF#WRRASDFCAERASDCDSAF");
    }
}
I am attempting to use the InsertSignature function in an MVC 3 application with the following code:
Test sample = new Test();
sample.InsertSignature();
I'm getting the following HttpException: "Response is not available in this context." Is there any way to make this work without modifying the vendor-delivered product? I know there are ways to make it work by modifying the file, but if at all possible it would be great to avoid doing that.
Thanks in advance.
This seems to be a known issue ("Response is not available in this context").
Just replace
Response.Write("ASDFASDFAF#WRRASDFCAERASDCDSAF");
with
HttpContext.Current.Response.Write("ASDFASDFAF#WRRASDFCAERASDCDSAF");
and this will do the trick. It doesn't seem like a big change.
UPDATE: If you wish to prevent the code from being silently overwritten by future updates, just rename the method, e.g. from InsertSignature() to InsertSig(). If the file is later replaced with the vendor's version, the project will simply not compile, and the reason will be clear.
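Putting both suggestions together, the edited vendor method would look something like this sketch:

public void InsertSig() // renamed so a vendor update breaks the build loudly
{
    // HttpContext.Current.Response is available even outside the page
    // lifecycle, where the page's own Response property throws.
    HttpContext.Current.Response.Write("ASDFASDFAF#WRRASDFCAERASDCDSAF");
}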
We have a nested layout for our various pages. For example:
Master.cshtml
<!DOCTYPE html>
<html>
<head>...</head>
<body>@RenderBody()</body>
</html>
Question.cshtml
<div>
... lot of stuff ...
@Html.Partial("Voting", Model.Votes)
</div>
<script type="text/javascript">
... some javascript ..
</script>
Voting.cshtml
<div>
... lot of stuff ...
</div>
<script type="text/javascript">
... some javascript ..
</script>
This all works fine, but I would like to push all of the JavaScript blocks to be rendered in the footer of the page, after all the content.
Is there a way I can define a magic directive in nested partials that can cause the various script tags to render in-order at the bottom of the page?
For example, could I create a magic helper that captures all the js blocks and then get the top level layout to render it:
Voting.cshtml
<div>
... lot of stuff ...
</div>
@appendJSToFooter {
<script type="text/javascript">
... some javascript ..
</script>
}
I came up with a relatively simple solution to this problem about a year ago by creating a helper that registers scripts in ViewContext.TempData. My initial implementation (which I am in the process of rethinking) just outputs links to the various referenced scripts. Not perfect, but here's a walk-through of my current implementation.
On a partial I register the associated script file by name:
@Html.RegisterScript("BnjMatchyMatchy")
On the main page I then call a method to iterate the registered scripts:
@Html.RenderRegisteredScripts()
This is the current helper:
public static class JavaScriptHelper
{
    private const string JAVASCRIPTKEY = "js";

    public static void RegisterScript(this HtmlHelper helper, string script)
    {
        var jScripts = helper.ViewContext.TempData[JAVASCRIPTKEY]
            as IList<string>; // TODO should probably be an IOrderedEnumerable
        if (jScripts == null)
        {
            jScripts = new List<string>();
        }
        if (!jScripts.Contains(script))
        {
            jScripts.Add(script);
        }
        helper.ViewContext.TempData[JAVASCRIPTKEY] = jScripts;
    }

    public static MvcHtmlString RenderRegisteredScripts(this HtmlHelper helper)
    {
        var jScripts = helper.ViewContext.TempData[JAVASCRIPTKEY]
            as IEnumerable<string>;
        var result = String.Empty;
        if (jScripts != null)
        {
            var root = UrlHelper.GenerateContentUrl("~/scripts/partials/",
                helper.ViewContext.HttpContext);
            // Note the "acc +": without it, Aggregate discards everything
            // but the last script tag.
            result = jScripts.Aggregate("", (acc, fileName) => acc +
                String.Format("<script src=\"{0}{1}.js\" " +
                    "type=\"text/javascript\"></script>\r\n", root, fileName));
        }
        return MvcHtmlString.Create(result);
    }
}
As indicated by my TODO (I should get around to that), you could easily modify this to use an IOrderedEnumerable to guarantee order.
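If you wanted the ordering to be explicit, one possible shape (illustrative, not tested) is to store a registration index alongside each name and sort on render:

var jScripts = helper.ViewContext.TempData[JAVASCRIPTKEY]
    as IDictionary<string, int> ?? new Dictionary<string, int>();
if (!jScripts.ContainsKey(script))
{
    jScripts[script] = jScripts.Count; // remember registration order
}
helper.ViewContext.TempData[JAVASCRIPTKEY] = jScripts;

// ...and on the render side:
var ordered = jScripts.OrderBy(kv => kv.Value).Select(kv => kv.Key);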
As I said, it's not perfect, and outputting a bunch of script src tags certainly creates some issues. I've been lurking as your discussion about the jQuery Tax has played out with Steve Souders, Stack Overflow, Twitter, and your blog. At any rate, it's inspired me to rework this helper to read the contents of the script files and then dump them into the rendered page in their own script tags rather than as links. That change should help speed up page rendering.
You can define a section at the bottom of the body element in your Master.cshtml file as follows:
@RenderSection("Footer", required: false)
Then in the individual .cshtml files you can use the following:
@section Footer {
<script type="text/javascript">
...
</script>
}
Rather than building some server-side infrastructure to handle a client-side concern, look at my answer to another question: https://stackoverflow.com/a/9198526/144604.
With RequireJS (http://requirejs.org), your scripts won't necessarily be at the bottom of the page, but they will be loaded asynchronously, which helps a lot with performance: page rendering won't be halted while the scripts execute. It also promotes writing JavaScript as lots of small modules, and provides a deployment tool to combine and minify them when the site is published.
This is a bit hacky, but if your goal is to effect minimal changes on existing views (besides moving the rendered scripts), the way I've done it in the past (in Web Forms specifically, but it would also apply to MVC) was to override the TextWriter with one that pushes scripts to the bottom.
Basically, you write a TextWriter implementation and hook it up to your MVC base page. It looks for <script src=" and captures the file name in an internal Queue; then, when it starts to get Write calls for </body>, it renders everything built up in its Queue. It could be done via regex, but that's probably pretty slow. This article shows an example of a TextWriter for moving ViewState, but the same principle should apply.
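A rough sketch of that idea, buffering the whole response rather than streaming (the class name, regex, and Finish() hook are all illustrative, not a drop-in implementation):

using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

public class ScriptMovingWriter : TextWriter
{
    private readonly TextWriter _inner;
    private readonly StringBuilder _buffer = new StringBuilder();
    private static readonly Regex ScriptTag = new Regex(
        "<script[^>]*src=\"[^\"]+\"[^>]*>\\s*</script>",
        RegexOptions.IgnoreCase);

    public ScriptMovingWriter(TextWriter inner) { _inner = inner; }

    public override Encoding Encoding
    {
        get { return _inner.Encoding; }
    }

    // Buffer everything; the scripts can only be placed once the whole
    // document has been seen.
    public override void Write(char value) { _buffer.Append(value); }
    public override void Write(string value) { _buffer.Append(value); }

    // Call once the page has finished rendering.
    public void Finish()
    {
        var queue = new Queue<string>();
        string html = ScriptTag.Replace(_buffer.ToString(), m =>
        {
            queue.Enqueue(m.Value); // capture script tags in document order
            return string.Empty;
        });
        // Re-emit the captured tags just before the closing body tag.
        string scripts = string.Join("\r\n", queue.ToArray());
        _inner.Write(html.Replace("</body>", scripts + "\r\n</body>"));
    }
}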
To handle dependencies, I then defined script files and their dependent script files in my web.config, similar to the following, for situations where I needed to override the ordering:
<scriptWriter>
  <add file="~/scripts/jquery.ui.js">
    <add dependency="~/scripts/jquery.js" />
  </add>
</scriptWriter>
Disclaimer: Like I said, this is hacky. The better solution would be to use some CommonJS/AMD-like syntax in your partials (@Script.Require("~/scripts/jquery-ui.js")). Basically, you could write a function such that, if the master/layout page indicates it is capturing script registrations, it listens for all the child registrations; otherwise it just outputs inline. So it wouldn't hurt to use it everywhere. Of course, it may break IntelliSense.
So assuming some code in your master like:
@using (Script.Capture()) {
    @RenderBody()
    @Html.Partial("Partial")
}
And a partial of:
@Script.Require("~/scripts/jquery-ui.js")
Then you could just code something like this to handle it:
public class ScriptHelper : IDisposable
{
    bool _capturing = false;
    Queue<string> _list = new Queue<string>();
    readonly ViewContext _ctx;

    // The helper needs the ViewContext so Dispose can write to the output.
    public ScriptHelper(ViewContext ctx)
    {
        _ctx = ctx;
    }

    public ScriptHelper Capture()
    {
        _capturing = true;
        return this;
    }

    public IHtmlString Require(string scriptFile)
    {
        _list.Enqueue(scriptFile);
        if (!_capturing)
        {
            return Render();
        }
        return new HtmlString(String.Empty);
    }

    public IHtmlString Render()
    {
        //TODO: handle dependencies, order scripts, remove duplicates
        var sb = new StringBuilder();
        foreach (var file in _list)
        {
            sb.AppendFormat(
                "<script src=\"{0}\" type=\"text/javascript\"></script>\r\n",
                UrlHelper.GenerateContentUrl(file, _ctx.HttpContext));
        }
        _list.Clear();
        return new HtmlString(sb.ToString());
    }

    public void Dispose()
    {
        _capturing = false;
        _ctx.Writer.Write(Render().ToHtmlString());
    }
}
You may need to make the Queue [ThreadStatic], or stash the helper in the HttpContext, or something similar, but I think this gives the general idea.
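For example, one way to do the HttpContext variant (the factory method name here is illustrative):

public static ScriptHelper ScriptHelperFor(ViewContext ctx)
{
    // One helper per request, stored in HttpContext.Items, avoids any
    // shared static state between concurrent requests.
    var helper = ctx.HttpContext.Items["ScriptHelper"] as ScriptHelper;
    if (helper == null)
    {
        helper = new ScriptHelper(ctx);
        ctx.HttpContext.Items["ScriptHelper"] = helper;
    }
    return helper;
}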
If you have a situation where you cannot improve on the existing structure (i.e. you have both MVC and old Web Forms pages, JavaScript tags are included all over the place, etc.), you could have a look at some work I'm doing on an HTTP module that does post-processing on the output, much like mod_pagespeed does in Apache. Still early days, though.
https://github.com/Teun/ResourceCombinator
If you can still choose to include your scripts in a predefined way, that is probably better.
Edit: the project mentioned by Sam (see comments) was indeed largely overlapping and far more mature. I deleted the ResourceCombinator project from GitHub.
I realise this ship has pretty much sailed but since I have some kind of solution (albeit not a perfect one) I thought I'd share it.
I wrote a blog post about approaches to script rendering that I use. In that I mentioned a library written by Michael J. Ryan that I have tweaked for my own purposes.
Using this it's possible to render a script anywhere using something like this:
@Html.AddClientScriptBlock("A script", @"
    $(function() {
        alert('Here');
    });
")
And then trigger its output in the layout page using this call just before the closing body tag:
@Html.ClientScriptBlocks()
You get no IntelliSense in Visual Studio using this approach. If I'm honest, I don't really advise using this technique; I'd rather have separate JS files for debug points and use HTML data attributes to drive any dynamic behaviour. But in case it is useful, I thought I'd share it. Caveat emptor, etc.
Here's a list of relevant links:
My blog post
Michael J. Ryan's blog post
My helper on GitHub
I would suggest that you shouldn't put scripts directly onto your page.
Instead, pull the JavaScript into a separate .js file and reference it in the header.
I know this is not exactly what you are asking for, but I assume you are doing this for SEO purposes, and this should have the same effect as putting the script at the bottom of the page.
We use the ScriptRegistrar method from Telerik's (GPL open-sourced) ASP.NET MVC library.
Simply include your JavaScript in any views, partials, etc. as below. The Telerik library handles rendering all of it at the bottom of the final outputted page.
<%
Html.Telerik().ScriptRegistrar()
    .Scripts(scripts => scripts.AddSharedGroup("lightbox"))
    .OnDocumentReady(() => { %>
    <%-- LIGHTBOX --%>
    $('.images a').lightBox();
<% }); %>
It also looks after grouping multiple CSS and JS includes together into single requests.
http://www.telerik.com/products/aspnet-mvc.aspx
My humble opinion is to leave these kinds of things to the professionals :-) Take a look at ControlJS, a library for loading your external JavaScript files without interrupting page processing (async, defer, on demand, etc.). With ControlJS it doesn't matter where you put the loading code; the scripts will all load at the end or on demand.
This presentation by Steve Souders, the library's author, gives a good overview of the problem (and the solution).
Sam,
I wrote Portal for this very reason: http://nuget.org/packages/Portal You basically put a
@Html.PortalOut()
in your master/layout view, and stuff HTML code in from views or partial views like this:
@Html.PortalIn(@<text> $(function() { alert('Hi'); }); </text>)
You can use several "portals" by specifying an input and output key. Right now it adds code in the order it comes in (partials render first, from "top to bottom"). If you're in the same view, you have to deal with the top-to-bottom restriction, i.e. you can't have an "out" portal before an "in" one, but this is not an issue when sending from views to the layout. There are more examples in the Portal.cs file.
What about an HTML response filter that modifies your final HTML? Find all script tags before a certain point and paste them at the end of the body. The same trick also improves DataTables rendering: mark the tables hidden initially and have the script show them.
No intervention or changes to the actual code are needed.
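For instance, the wiring in Global.asax.cs might look like the sketch below; MoveScriptsFilter stands in for a hypothetical response-filter Stream that buffers the HTML and relocates the script tags (much like the TextWriter sketch earlier on this page):

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Response.Filter sees the final rendered HTML on its way out.
    Response.Filter = new MoveScriptsFilter(Response.Filter);
}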
What are the best practices for creating a site with the ability to develop plugins for it?
For example, say you create a blog module and you want users or co-developers to add plugins to extend that module's functionality.
Update:
Thanks for the ultra-fast answers, but I think this is overkill for me. Isn't there a simpler solution? For instance, I have seen that in BlogEngine's plugin-creation system you just have to decorate the plugin class with [Extension].
I am kind of a mid-core developer, so I was thinking of a base class, inheritance, interfaces. What do you think?
Edit
I completely rewrote my answer based on your question edit.
Let me show you just how easy it is to implement a plugin architecture in a few minimal steps.
Step 1: Define an interface that your plugins will implement.
namespace PluginInterface
{
    public interface IPlugin
    {
        string Name { get; }
        string Run(string input);
    }
}
Step 2: Create a plugin that implements IPlugin.
namespace PluginX
{
    using PluginInterface;

    public class Plugin : IPlugin
    {
        public string Name
        {
            get { return "Plugin X"; }
        }

        public string Run(string input)
        {
            return input;
        }
    }
}
Step 3: Run the plugin.
namespace PluginTest
{
    using System;
    using System.IO;
    using System.Runtime.Remoting;
    using PluginInterface;

    class Program
    {
        static void Main(string[] args)
        {
            string pluginFile = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "PluginX.dll");
            ObjectHandle handle = Activator.CreateInstanceFrom(pluginFile, "PluginX.Plugin");
            IPlugin plugin = handle.Unwrap() as IPlugin;
            string pluginName = plugin.Name;
            string pluginResult = plugin.Run("test string");
        }
    }
}
Keep in mind, this is just the most basic, straightforward example of a plugin architecture. You can also do things such as:
create a plugin host to run your plugin inside its own AppDomain (see the sketch after this list)
choose interfaces, abstract classes, or attributes to decorate your plugins with
use reflection, interfaces, IL-emitted thunks, or delegates to get the late-binding job done
if your design so dictates.
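As a rough sketch of the first idea, assuming PluginX.Plugin is changed to derive from MarshalByRefObject so the proxy can cross the AppDomain boundary:

namespace PluginTest
{
    using System;
    using PluginInterface;

    class PluginHost
    {
        static void Main(string[] args)
        {
            // Load the plugin into its own AppDomain so it can be unloaded later.
            AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox");
            try
            {
                // Requires the plugin type to be a MarshalByRefObject.
                IPlugin plugin = (IPlugin)sandbox.CreateInstanceFromAndUnwrap(
                    "PluginX.dll", "PluginX.Plugin");
                Console.WriteLine(plugin.Run("test string"));
            }
            finally
            {
                // Unloading the domain unloads the plugin assembly with it.
                AppDomain.Unload(sandbox);
            }
        }
    }
}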
It's valuable to separate the technical and architectural perspectives:
At the code level, MEF (Managed Extensibility Framework) is a good start; there is a simple sketch below this list.
Any other DI (dependency injection) framework can work well too (e.g. Unity).
And it's good to see this problem at the architectural level:
Web Client Software Factory from p&p. It offers not only technical but also architectural information about how to create composite web applications. See the examples; there is a Modularity Bundle package.
Spring Framework.
I think it's fast and efficient to read about and try some of those frameworks. And of course, read the source if you find something interesting.
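For a taste of MEF, here is a minimal sketch. The IBlogPlugin interface is hypothetical, and the catalog scans the current assembly to keep the example self-contained; a real site would typically point a DirectoryCatalog at a plugins folder:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IBlogPlugin
{
    string Name { get; }
}

// A plugin author only has to decorate the class with [Export].
[Export(typeof(IBlogPlugin))]
public class HelloPlugin : IBlogPlugin
{
    public string Name { get { return "Hello"; } }
}

class Program
{
    [ImportMany]
    public IEnumerable<IBlogPlugin> Plugins { get; set; }

    static void Main()
    {
        var program = new Program();
        using (var catalog = new AssemblyCatalog(typeof(Program).Assembly))
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(program); // fills [ImportMany] members
            foreach (var plugin in program.Plugins)
                Console.WriteLine(plugin.Name);
        }
    }
}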
Edit
If you are searching for an extensible blog engine, try BlogEngine first. It's from the ASP.NET community.
This sounds like a job for the Managed Extensibility Framework from Microsoft. It's in a preview release at the moment, but it seems a better bet than rolling your own framework for this. There are links to guides about how to use it on the site.
If you would like to see a real, open-source application that implements this architecture, take a look at DotNetNuke.