((LinkButton)e.Item.FindControl("my_label_name")).Attributes.Add(
    "onclick",
    "javascript:myJavaScriptFunction('" + data1_from_db + "','" + data2_from_db + "')");
I wrote this code (it is in my default.aspx.cs) and it works successfully on localhost, but on the server it doesn't work. It gives no error; it just does nothing. If anything is unclear, please ask.
My guess would be that the call to myJavaScriptFunction is failing. Your JavaScript file (.js) is probably either not included or not marked as content within the project, so it does not get copied as part of the server deployment.
EDIT: Based on your comments to my answer, it seems that your JavaScript (.js) file is being included and called on your server. If that is true, you can debug the JavaScript with the IE Developer Tools (F12) or something like Firebug in Firefox to see what is happening inside it.
Is that code an update you made to a page that was already on the server? If so, just try clearing your cache. That alone might fix it.
Double check that the data exists on the server side. For example, everything may be fine against localhost because the data exists in the database on your local machine, but it may not exist in the server's database. Make sure it does, that the table names and schema match, and that the connection string is correct.
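Also worth checking: if the values coming back from the server's database contain an apostrophe or quote, the onclick string you build becomes invalid JavaScript and fails silently. A minimal defensive sketch, assuming .NET 4 or later (for HttpUtility.JavaScriptStringEncode) and a using System.Web; directive:

// Hypothetical hardened version of the original snippet:
// encode the values so quotes in the data cannot break the handler.
string arg1 = HttpUtility.JavaScriptStringEncode(data1_from_db);
string arg2 = HttpUtility.JavaScriptStringEncode(data2_from_db);

((LinkButton)e.Item.FindControl("my_label_name")).Attributes.Add(
    "onclick",
    "myJavaScriptFunction('" + arg1 + "','" + arg2 + "');");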
I have a Blazor Server app (.NET 6) that worked fine locally. I needed to load a configuration file for the eBay API, so it is a YAML file kept in the structure of my project.
On the server I can hard code access to it with the following path and the site works perfectly:
$"D:/home/site/wwwroot/Config/ebay-config.yaml"
However, when I try to get the exact same result using the relative path method for blazor server I am screwing it up somehow.
$"wwwroot/Config/ebay-config.yaml"
That was the first thing I found and tried. It gives me an error that it can't find $"D:/home/site/wwwroot/wwwroot/Config/ebay-config.yaml". Clearly the relative file path works to an extent, but for some reason wwwroot is put in there twice and it fails.
Using $"{env.WebRootPath}/Config/ebay-config.yaml" gave a similar result, with two wwwroots.
Can someone please tell me what the correct method to use to get to the wwwroot folder from the web server? It is currently hosted on Azure if that matters.
Thanks in advance.
Just in case anyone else comes looking...
Per the comment from Lex I went back and looked at it again. Using the file location like this works on both localhost and the Azure server.
$"Config/ebay-config.yaml"
I have a DotNetNuke application I am trying to set up on localhost.
The application was working fine until I tried to change the database connection. After reverting the changes I made in connectionStrings, I get an error whenever I try to run it. The error is:
The localhost page isn’t working
localhost redirected you too many times.
Try clearing your cookies.
ERR_TOO_MANY_REDIRECTS
Well, obviously I tried clearing cookies and also tried multiple browsers, but I get the same result. The page is not working.
What could be the possible reason?
Without seeing any source code, the best answer I can give you is this: something in your code is forcing a redirect to itself, which then forces another redirect to itself, and so on.
It would basically be the same as doing this:
void DoSomething() {
    // Infinite loop, activate!
    DoSomething();
}
If you provide some source code or a link to it in the question I could give you a more detailed solution, but as the question stands this is the best I can do.
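In ASP.NET terms, a common way to end up in that state is a redirect that fires unconditionally on every request; a purely illustrative sketch (not taken from your code):

protected void Page_Load(object sender, EventArgs e)
{
    // Redirects to the current URL on every request, so the browser
    // eventually gives up with ERR_TOO_MANY_REDIRECTS.
    Response.Redirect(Request.RawUrl);
}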
I had this issue a while ago, setting the trust level to full trust (in web.config) solved it for me.
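For reference, that setting lives under system.web in web.config; roughly this fragment (merge it with what is already in your config rather than pasting it wholesale):

<system.web>
  <trust level="Full" />
</system.web>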
Happy DNNing!
Michael
I need a little help with the C# example program for Google Drive...
I used this so-called "tutorial"/"example":
https://developers.google.com/drive/examples/dotnet
And the code from here:
https://code.google.com/p/google-drive-sdk-samples/source/checkout
I uploaded my (only slightly modified) source code here in case anybody doesn't have Mercurial (I didn't have Mercurial and no admin rights to install it either, and Mercurial is the only way to get the source code...):
http://verzend.be/elt0k13enraw/DrEdit.rar.html
I always get
"Ressource cannot be found"
Requested URL: /oauth2callback
I don't find this astonishing, as no oauth2callback controller or handler is implemented...
I tried adding a Controller called oauth2callbackController and redirecting to another action in oauth2callbackController.Index, doing
return new RedirectResult("/about/about");
But that only produces a NullReferenceException.
So I figured maybe it was the wrong controller, and redirected to
return new RedirectResult("/drive/Index");
But that only creates an infinite loop of redirect -> allow -> redirect -> allow -> etc.
BTW, the config to change the API key + REDIRECT_URI is in
Models\ClientCredentials.cs
Note:
The problem isn't my modifications.
The sample also didn't work unmodified, with the exact same error.
All I did was remove the Entity Framework references and throw a NotImplementedException whenever a method using Entity was called.
Edit:
Additional information:
What I really wanted to do in the first place is write a console service that exports my database, LZMA-compresses the exported content, encrypts that with OpenPGP, and uploads it to Google Drive every day at midnight, without any user input.
I got the export working without a problem, I got the LZMA compression working without a problem, and I got the PGP encryption working without a problem.
After the end of the working day (grrrr), when I was at home, I was even able to download the example code with Mercurial installed on my Linux machine at home and bring it to the Windows machine using SMB...
But now I can't get the sample for the Google-drive SDK working...
And moreover, what I really need is an example for a console service/daemon, not a web-application.
When I created the API key, I saw that one could create a key for a service, but there is no example of how to write a Google Drive service (console application), and no useful documentation either (yes, there is a reference, but it's only a reference; IntelliSense provides about the same)...
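For what it's worth, with the current Google.Apis.Drive.v3 NuGet package a service-account console uploader can be sketched roughly like this (the file names, scope and MIME type here are my assumptions, not anything from the old DrEdit sample):

using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;

// Authenticate as a service account (no user interaction), then upload one file.
var credential = GoogleCredential.FromFile("service-account-key.json")
    .CreateScoped(DriveService.Scope.DriveFile);

var service = new DriveService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "NightlyDbBackup"
});

var metadata = new Google.Apis.Drive.v3.Data.File { Name = "backup.sql.xz.gpg" };
using (var stream = System.IO.File.OpenRead("backup.sql.xz.gpg"))
{
    var upload = service.Files.Create(metadata, stream, "application/octet-stream");
    upload.Upload();
}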
When configuring your app in the API Access tab of the APIs Console, you need to set the root (/) of your web server as the redirect URI, not /oauth2callback.
Assuming that your app is published at www.example.com, just go back to the APIs Console and set the redirect URI to www.example.com instead of www.example.com/oauth2callback.
I've got a really vexing problem with an ASHX handler that renders a captcha image. The thing that makes it really vexing is that it was working fine two months ago, and when I went back to it today it had stopped working.
What I've got is a page that throws in a captcha every so often. This is the markup from an example of a challenge:
<img class="challengedtl" src="Challenge.ashx?tkn=0057ea27-4d35-4850-9c6f-7a6fdc9818e2"/>
The GUID references a record in a SQL table that contains the actual content of the captcha as well as the status of the captcha challenge, i.e. has it been processed and if so did the user get it right etc.
On the page where this markup is found, the image displays as a broken JPEG. When I drop a breakpoint in the ASHX ProcessRequest() method I can see that the ASHX is never being called.
When I take the URL out of the src attribute and run it directly from the address bar in my browser, I hit my breakpoint in ProcessRequest and the captcha image renders just fine.
I don't believe that my ASHX code is the problem, since it works when I call it directly. The problem seems to be with why the ASHX isn't being called by the main page. Given that this was working in February I am at a loss to explain what is going on.
I know that something has happened to my machine since then. I suspect a Windows Update or a service pack for something. The reason for this is that my captcha processing includes tracking the IP address of the caller. Back when this was working my local host was being registered as 127.0.0.1 (IPv4) but now it is being registered as ::1 (IPv6). Probably a red herring.
Does anyone know what might be causing this or do you have any suggestions for how to troubleshoot this problem?
Is the handler in the same folder as the page containing the html you posted above?
Here are the two key parts:
When I drop a breakpoint in the ASHX ProcessRequest() method I can see that the ASHX is never being called.
and
src="Challenge.ashx?tkn=0057ea27-4d35-4850-9c6f-7a6fdc9818e2"
Put those together, and what we can surmise is that the path in your src attribute is wrong.
It's just an image tag. If the HTML loads, the browser will send a request for that resource. Since your breakpoint is not hit, it can only mean that either you aren't testing somewhere that allows breakpoints or the request is going to the wrong place.
It could be as simple as sending the request to the production version of the site, using the wrong scheme (i.e. https vs. http), or missing a folder or port number somewhere. The browser should be able to show you the full path of the resource -- make sure it matches what you expect.
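If it does turn out to be a path problem, one way to make the src independent of which folder serves the page is to build it from an application-relative path on the server side; a small sketch (the imgChallenge control and token variable are made up for illustration):

// Code-behind: imgChallenge is assumed to be an
// <img runat="server" class="challengedtl" /> control on the page.
imgChallenge.Src = ResolveUrl("~/Challenge.ashx?tkn=" + token);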
I am working on a site which is programmed in C# .NET. It uses a CMS called ADX Studio (a decision which predates my time there) which provides a shonky form of URL rewriting (as far as I can tell it works by assigning an aspx page as the default 404 handler in IIS).
I have a web form which lives at a rewritten URL. I edited it so that the HTML form's action points back to the rewritten URL:
var u = new Uri(Request.RawUrl.Split(new char[1] { ';' }).Last());
userAdminForm.Action = u.PathAndQuery;
(kind of ugly but works based on what Request.RawUrl is on these rewritten URLs).
The "pretty" URL is something like this:
http://www.site.com/admin/user/edit/
On my development box (Windows XP / IIS 5), when I initially tried POSTing back to URLs like this I got an HTTP 405 error. I worked around this by adding a script mapping so that aspnet_isapi.dll handles all (*) requests, and everything works fine on my development machine.
I just pushed my changes to the live server (Windows Server 2003 R2 and IIS 6) and the post fails silently. The page refreshes but all of my logic (from within an IsPostBack path in the code) doesn't get hit. No errors are displayed, it just doesn't work.
If I remove my code setting the .Action of the form then the postback works but it is posting to the ugly URL corresponding to the physical location of the aspx file rather than my page.
Am I missing a simple way to make this work? I don't want to be switching URL rewriting method or anything as this is a large legacy site and is unfortunately pretty dependent on ADX Studio so I don't want to do anything that will break that.
[edited because somehow the code above lost its code highlighting]
The issue is that the page's <form> tag is referencing the "ugly" URL as the action. You can resolve that by completely removing the action attribute from the form. Browsers will, by default, post back to the same page, i.e. the "pretty" URL.
This article explains how to accomplish an "actionless" form (about two thirds of the way down): http://msdn.microsoft.com/en-us/library/ms972974.aspx
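The technique in that article boils down to a small HtmlForm subclass that renders every attribute except action, roughly along these lines (a sketch from memory of the article's code, so treat the details as approximate):

public class ActionlessForm : System.Web.UI.HtmlControls.HtmlForm
{
    // Render the usual form attributes but skip "action",
    // so the browser posts back to the current (rewritten) URL.
    protected override void RenderAttributes(System.Web.UI.HtmlTextWriter writer)
    {
        writer.WriteAttribute("name", this.Name);
        base.Attributes.Remove("name");
        writer.WriteAttribute("method", this.Method);
        base.Attributes.Remove("method");
        base.Attributes.Remove("action");
        this.Attributes.Render(writer);
        if (this.ID != null)
            writer.WriteAttribute("id", this.ClientID);
    }
}

You then swap the standard <form runat="server"> on the page for this control, as the article describes.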
It seems like the problem is the same as it was on IIS 5. I can get it to work by doing the following in the IIS Manager:
Right click on the relevant website and select "Properties"
Choose the "Home Directory" tab
Click "Configuration" down in the "Application settings"
Click "Insert" next to the "Wildcard application maps"
Browse to the location of aspnet_isapi.dll (in my case: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll )
Untick "Check that file exists"
Click "OK" back through the Russian doll of dialogs.
This is basically the same approach that I linked to in the question for IIS 5. However, it's not optimal because IIS runs every request through ASP.NET (even static files), which can only slow things down. I'd like to be able to specify that ASP.NET only needs invoking for HTTP POST requests, at least.
The weird thing is that IIS 5 gave an HTTP 405 error when POSTing to an extension without a registered ISAPI extension, but IIS 6 just fails silently. And the page is being run through IIS (I can debug with a breakpoint in the Page_Load function) but IsPostBack (and IsCrossPagePostBack) don't get set correctly. Could it be related to the view state? Is there any alternative to the solution described above?
I've come to what I think is an optimal solution for this problem. It turns out that ADX Studio CMS does use the default 404 rule to do some form of URL rewriting. This has a problem with HTTP POST:
when IIS initially executes a custom URL on a 404 error, it changes POST to GET, even if the client does a POST request.
(thanks to elite brains' blog post about setting up IIS6 and ASP.NET MVC).
Rather than creating my own HttpModule I decided instead to use Ionic's Isapi Rewrite Filter (IIRF) to rewrite my URLs. I then set the 404 error handler in IIS back to the default, and I created this IIRF.ini file to rewrite all requests into the same format that the 404 handler produced:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /Default.aspx?404;http://%{HTTP_HOST}$1 [U,L]
And everything seems to work great. The advantage over my previous answer is that the rewrite code is low-level and fast, and the !-f and !-d conditions mean that requests for files or directories that actually exist aren't rewritten, so static files don't have the overhead of running through .NET.