As part of the build output for an ASP.NET website project we create a bin folder. To validate that the build output is actually a functional website, we load it into IIS and then browse to it.
Is there a way to automate this in C#?
I am not looking for a test framework to do this. Just a simple, lightweight C# application that can point IIS to this bin folder and verify that the web application loads, that's all.
Make your IIS virtual directory/application point to your bin folder, or to a drop folder into which you copy the bin folder's contents on each build.
Then do an IISReset:
System.Diagnostics.Process.Start(@"C:\Windows\System32\iisreset.exe");
Then hit the website using a WebClient:
using (WebClient client = new WebClient())
{
    client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
    using (Stream data = client.OpenRead(webUrl))
    using (StreamReader reader = new StreamReader(data))
    {
        string s = reader.ReadToEnd();
        Console.WriteLine(s);
    }
}
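If the goal is just to confirm the site loads, checking the HTTP status code with a short retry loop may be more robust than reading the body, since iisreset takes a few seconds to bring the site back. A minimal sketch (the method name, attempt count, and delay are assumptions, not part of the original answer):

```csharp
using System;
using System.Net;
using System.Threading;

class SiteCheck
{
    // Returns true once the URL answers with HTTP 200, retrying for up to
    // maxAttempts * delayMs milliseconds while IIS spins back up.
    static bool WaitForSite(string url, int maxAttempts = 10, int delayMs = 3000)
    {
        for (int i = 0; i < maxAttempts; i++)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                        return true;
                }
            }
            catch (WebException)
            {
                // Site not up yet (connection refused or 5xx) - retry.
            }
            Thread.Sleep(delayMs);
        }
        return false;
    }
}
```

You could call this right after the iisreset line and fail the build if it returns false.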
I have a website (.NET Core 3.1) running on IIS (v10) on Windows 10.
The website calls an external .exe that stops the IIS website (the calling website), copies new files into the IIS website folder, and starts the site again. The problem is that when I stop the website from the external exe, it doesn't shut the website down the way stopping it as admin does, so the external program can't copy the new files into the website folder.
When I stop the website manually, I can copy and overwrite the files, and it works as it should.
Here is a snippet of the website code that calls the external .exe:
using (var process = new Process())
{
    process.StartInfo.FileName = _configuration.GetValue<string>("AppSettings:UpdaterPath");
    process.StartInfo.Arguments = args;
    process.StartInfo.CreateNoWindow = false;
    process.StartInfo.UseShellExecute = true;
    process.Start();
}
Here is a snippet of my external exe code:
ServerManager server = new ServerManager();
var site = server.Sites.FirstOrDefault(s => s.Name == _settings.WebsiteName);
if (site != null)
{
    //Cloning old version to backup
    Log($"Copying old files to backup...");
    CloneDirectory(_settings.WebsitePath, Path.Combine(_settings.BackupPath, _settings.WebsiteName));

    //Copying new version to working dir
    //if(!Directory.Exists(temp_folder)) Directory.CreateDirectory(temp_folder);
    Log($"Extracting new files...");
    ZipFile.ExtractToDirectory(temp_zip, temp_folder, true);

    Log($"Stopping {_settings.WebsiteName}...");
    site.Stop();

    Log($"Copying new files...");
    CloneDirectory(Path.Combine(_settings.FilesPath, "temp"), _settings.WebsitePath);

    Log($"Starting {_settings.WebsiteName}...");
    site.Start();
    if (site.State == ObjectState.Started)
    {
        server.ApplicationPools[site.Applications["/"].ApplicationPoolName].Recycle();
    }
}
CloneDirectory() just copies/overwrites files recursively.
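The CloneDirectory helper itself isn't shown in the question; a plausible sketch of such a recursive copy (the class name and the exact overwrite behavior are assumptions based on the description above) would be:

```csharp
using System.IO;

static class DirectoryCloner
{
    // Hypothetical reconstruction of the CloneDirectory helper described in
    // the question: recursively copies everything under source into
    // destination, overwriting files that already exist.
    public static void CloneDirectory(string source, string destination)
    {
        Directory.CreateDirectory(destination);
        foreach (string file in Directory.GetFiles(source))
            File.Copy(file, Path.Combine(destination, Path.GetFileName(file)), overwrite: true);
        foreach (string dir in Directory.GetDirectories(source))
            CloneDirectory(dir, Path.Combine(destination, Path.GetFileName(dir)));
    }
}
```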
I've tried stopping the website's app pool, recycling it, and calling server.CommitChanges(), but nothing works: the main DLLs containing the actual changes can't be copied because **access is denied**.
All of this works fine when I call the exe as administrator.
I've tried granting folder/file permissions to IIS_IUSRS and to my website's default app pool.
I've tried setting the website's physical path credentials to an administrator account.
I've tried setting the app pool identity to an admin account, NetworkService, and LocalService.
Basically every combination I can think of.
What I noticed is that when I stop the app pool from my exe, it stops working after a few seconds.
I assume there is a privilege problem (can't stop IIS, copy files, etc.).
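One detail worth noting about the code above: site.Stop() only requests a stop; w3wp.exe can keep the website's DLLs locked for several seconds afterwards, which alone can produce "access is denied" on the copy. A sketch (the wrapper name and timeout are illustrative, not from the question) that waits for the site to actually report Stopped before files are touched:

```csharp
using System.Threading;
using Microsoft.Web.Administration;

static class SiteStopper
{
    // Stop() returns before the worker process has fully shut down, so poll
    // the runtime state (with a timeout) before copying over the site's DLLs.
    public static bool StopAndWait(Site site, int timeoutSeconds = 30)
    {
        site.Stop();
        for (int i = 0; i < timeoutSeconds; i++)
        {
            if (site.State == ObjectState.Stopped)
                return true;
            Thread.Sleep(1000);
        }
        return site.State == ObjectState.Stopped;
    }
}
```

This doesn't address the privilege question itself, but it rules out the timing race as a cause.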
This may be a stupid question, but honestly I'm really desperate right now.
In my .NET Core 3 app, I'm loading an HTML template into a variable using an absolute path:
var pathToFile = _env.ContentRootPath
    + Path.DirectorySeparatorChar
    + "Templates.html";
var builder = new BodyBuilder();
using (StreamReader SourceReader = System.IO.File.OpenText(pathToFile))
{
builder.HtmlBody = SourceReader.ReadToEnd();
}
In my local testing environment it works fine, because it's accessing the file on my disk.
But the problem is that I can't reach this file when I run my app in a Linux Docker environment.
How do I properly load HTML files into a variable in a Docker environment with .NET Core 3?
Thanks for your hints
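For what it's worth, a common cause of this symptom is that the template file is never copied into the publish output, so it simply isn't present inside the container even though ContentRootPath is correct. Assuming the file lives in the project root, a .csproj fragment like the following (hypothetical, not from the question) would make it part of the output that the Dockerfile's publish step copies into the image:

```xml
<!-- Hypothetical .csproj fragment: ensures Templates.html is included in
     the publish output that ends up inside the Docker image. -->
<ItemGroup>
  <Content Include="Templates.html">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```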
I tested WebClient in both project types (WPF and Windows Forms application) with the code below:
String[] txt = { "someURLstring", "otherURLstring" };
foreach (String str in txt)
{
    using (WebClient wc = new WebClient())
    {
        wc.DownloadProgressChanged += wc_DownloadProgressChanged;
        wc.DownloadFileCompleted += wc_DownloadFileCompleted;
        await wc.DownloadFileTaskAsync(new Uri(str), "somename.rar");
    }
}
Both projects are developed in Visual Studio 2015 with .NET Framework 4.5 on Windows 10.
The issue is that when these applications run on a PC other than the one they were developed on, the downloaded files are 95 KB each when they should be 2.5 MB each.
I tried building them against .NET Framework 4.6, but the issue persists.
Any ideas?
Both are tested on two other PCs running Windows 7/10 with .NET Framework 4.6.
EDIT:
I changed the links to download files from my own web server (the files are .zip) and everything worked fine. The previous links, though, work only on my PC.
Any ideas?
OK, well, my bad: I didn't mention that the first links came from MediaFire. MediaFire uses a captcha system, which is why other users can't download the files.
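Given that resolution, the 95 KB downloads were almost certainly MediaFire's captcha/interstitial HTML page rather than the archive itself. A sketch of a sanity check (the method name is illustrative) that inspects the response's Content-Type so an HTML page isn't silently saved as a .rar:

```csharp
using System;
using System.Net;

class DownloadCheck
{
    // Downloads url to destPath and returns false if the server answered
    // with an HTML page (e.g. a captcha form) instead of a binary file.
    static bool TryDownloadBinary(string url, string destPath)
    {
        using (var wc = new WebClient())
        {
            wc.DownloadFile(new Uri(url), destPath);
            string contentType = wc.ResponseHeaders?[HttpResponseHeader.ContentType] ?? "";
            return !contentType.StartsWith("text/html", StringComparison.OrdinalIgnoreCase);
        }
    }
}
```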
I am executing this code from an MVC website. If I run it from VS 2012 with IIS Express it works, but if I host the website on an IIS server it does not. I tried debugging, but some error code comes back. I also tried impersonating my own account and passing my credentials, all in vain.
Code:
System.Diagnostics.Process proc = new System.Diagnostics.Process();
Char[] chr = Password.ToCharArray();
System.Security.SecureString pwd = new System.Security.SecureString();
foreach (char c in chr)
{
    pwd.AppendChar(c);
}
proc.StartInfo.FileName = wzzipPath; //path to WZZip.exe (placeholder variable)
//proc.StartInfo.Arguments = "-ys20480 pathtosplit landingZone";
proc.StartInfo.UserName = UserName; //Comes from config file
proc.StartInfo.Password = pwd; //Comes from config file
proc.StartInfo.Domain = Domain; //Comes from config file
proc.StartInfo.RedirectStandardError = true;
proc.StartInfo.RedirectStandardOutput = true;
proc.StartInfo.UseShellExecute = false;
bool result = proc.Start();
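Since "some error code" comes back under IIS, capturing the process's output streams and exit code is the first step toward seeing what actually fails. A sketch continuing from the proc setup above (an assumption; it relies on the two Redirect flags already being set to true):

```csharp
// Continuing from the question's proc setup: read both output streams,
// then the exit code, so the failure under IIS shows up in your logs.
// Read stdout/stderr before WaitForExit to avoid deadlocking on full pipes.
string stdout = proc.StandardOutput.ReadToEnd();
string stderr = proc.StandardError.ReadToEnd();
proc.WaitForExit();
Console.WriteLine("Exit code: " + proc.ExitCode);
Console.WriteLine("Output: " + stdout);
Console.WriteLine("Error: " + stderr);
```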
When running an exe through IIS, you should start with some questions:
A) Should I bypass the built-in security that protects the IIS server and the Windows system?
B) If you allow executables to run via paths into the OS or the IIS server, who else can use this, i.e. a hacker?
C) There have been changes to Windows and IIS Server to protect the system(s) from attacks and to limit the surface area for exploits.
What security monitoring is being done for the system and the IIS server?
What security protection will you implement for your executable?
If you can justify the risks for A-C, you can work around the restrictions.
If you understand the risks, TechNet has tools to help with some of the issues: http://blogs.technet.com/b/elevationpowertoys/ .
For Windows Vista, Windows 7, and Windows 2008 you have the Standard User model and limits on permissions and rights.
There is a thread at http://forums.iis.net/p/1178151/1981859.aspx#1981859 about who can do what.
You can search (Google or Bing) for "Changes to Windows Desktop Security" and "Session 0 changes and impacts for Systems & Services".
You can also get a copy of the "Vista Developer Story" from the Microsoft Download Center.
You should check the MSDN library for application compatibility and how to design using User Account Control (UAC).
Many of these security changes have also been added to Windows 2003 and IIS 6.0.
Finally, you can add MIME types in IIS:
Open IIS Manager.
Browse to the IIS site or subfolder where the ThinApp EXE resides.
Right-click on that site or subfolder and select Properties.
Select the HTTP HEADERS tab.
Click on the MIME TYPES button.
In the extension box, type ".exe".
In the MIME TYPE box, type "application/octet-stream".
Click OK.
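On IIS 7 and later, the same MIME mapping can also be added from an elevated command prompt with appcmd instead of the GUI steps above (a sketch; adjust the extension to your case):

```
%windir%\system32\inetsrv\appcmd set config /section:staticContent /+"[fileExtension='.exe',mimeType='application/octet-stream']"
```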
I have an ASP.NET 2.0 web application. On my production server I "randomly" get the following script load error on a page that uses an UpdatePanel and the Telerik RadComboBox control:
Webpage error details
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; GTB7.2; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)
Timestamp: Mon, 14 Nov 2011 10:37:17 UTC
Message: Sys.ScriptLoadFailedException: The script 'http://www.mySite.com/ScriptResource.axd?d=K-bZ_cYro-TWH0gbbmdTlkin59eWVsDQYopNlGtfNYd9aZQqi22u0d_A5dwpqMXbaJR99E08UDAgSF7tPCaP0mpZH35-uv4YYRWnSX0mxLsZPGu-58i2Nrmb8UHNokeftpIW9wTPOvZOJJq4cLYfu3iV8EQ1&t=634475972033675436' could not be loaded.
Line: 5
Char: 36564
Code: 0
URI: http://www.mySite.com/ScriptResource.axd?d=qLp9xu4UQDU3wBn-LSS2bLlqFvY6K78U8bVN8Ado2bzP7ytCoarS92INypIVz4z3TbmYil4Bsu_vW_InD5PMZRw-1WJbZIeVuS8TpTL23g_GrfQ29YBzoTaZWO2T3kxiSZPDfk0zqFyT9qKbsPGSfNc4kjnqG509cXg82kYxOpPDJjpf0&t=634532699342719389
Do you know any cause for this error? Thanks in advance for your help.
I had a similar problem with a missing stylesheet. I added:
string sRequestUrl = Request.Url.ToString();
to my error message to get it to output the file in question. Is this a case of a ScriptManager consolidating (minifying) stylesheets and JavaScript files into one, so that you lose the request URL in the .axd? If so, turn off consolidation (minification) and you should get the exact failing file from the request URL.