I'm struggling to get my IIS site to autostart. I'm using Quartz.Net inside it for nightly tasks, but they aren't running because IIS disposes of the application before they can fire. I've attempted to set the site to autostart and stay running by doing the following (using these instructions):
ApplicationHost.Config:
<configuration>
  <configSections>
  ...
  <system.applicationHost>
    <applicationPools>
      <add name="DefaultAppPool" enable32BitAppOnWin64="true" managedRuntimeVersion="v4.0" />
      <add name="ASP.NET v4.0" enable32BitAppOnWin64="false" managedRuntimeVersion="v4.0" />
      <add name="ASP.NET v4.0 Classic" managedRuntimeVersion="v4.0" managedPipelineMode="Classic" />
      <add name="Classic .NET AppPool" managedRuntimeVersion="v4.0" managedPipelineMode="Classic" />
      ...
      <add name="AUTOSTARTSITE" autoStart="true" managedPipelineMode="Integrated" startMode="AlwaysRunning">
        <processModel identityType="NetworkService" />
      </add>
      <applicationPoolDefaults managedRuntimeVersion="v4.0">
        <processModel identityType="NetworkService" />
      </applicationPoolDefaults>
    </applicationPools>
    ...
    <sites>
      <site name="AUTOSTARTSITE" id="10" serverAutoStart="true" serviceAutoStartEnabled="true" serviceAutoStartProvider="StartUpCode">
        <application path="/" applicationPool="AUTOSTARTSITE">
          <virtualDirectory path="/" physicalPath="E:\websites\AUTOSTARTSITE" />
        </application>
        <bindings>
          <binding protocol="http" bindingInformation="*:80:AUTOSTARTSITE.com" />
        </bindings>
        <traceFailedRequestsLogging enabled="true" />
        <logFile directory="%SystemDrive%\inetpub\logs\LogFiles" />
      </site>
      <siteDefaults>
        <logFile logFormat="W3C" directory="%SystemDrive%\inetpub\logs\LogFiles" />
        <traceFailedRequestsLogging directory="%SystemDrive%\inetpub\logs\FailedReqLogFiles" />
      </siteDefaults>
      <applicationDefaults applicationPool="DefaultAppPool" />
      <virtualDirectoryDefaults allowSubDirConfig="true" />
    </sites>
    <serviceAutoStartProviders>
      <add name="StartUpCode" type="StartUpCode, AUTOSTARTSITE" />
    </serviceAutoStartProviders>
    <webLimits />
  </system.applicationHost>
  ...
And here is my startup code. I didn't put it in a namespace, and I have it log when it runs so I can confirm the process is working. Unfortunately, it does not run.
StartUpCode:
public class StartUpCode : System.Web.Hosting.IProcessHostPreloadClient
{
    readonly log4net.ILog logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public void Preload(string[] parameters)
    {
        // SetupJobs() registers the Quartz scheduler and jobs (omitted here).
        SetupJobs();
        logger.Info("Quartz Jobs Setup Successfully");
    }
}
Despite these changes, it runs the same as before. Am I missing something obvious?
I think for all this stuff to work properly you need to install the Application Initialization feature for IIS:
You can find more details on how it is supposed to work here.
We found that, no matter what, sometimes it just does not work. So there are really two solutions here:
Do not run the scheduler in IIS; use a Windows Service or a scheduled task instead.
Write a pinger that runs as either a Windows Service or a scheduled task. It's probably one line of PowerShell to issue a GET to your site.
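For example, a minimal pinger as a C# console app that could run from Task Scheduler (just a sketch; the URL is a placeholder for your site):

using System;
using System.Net;

class Pinger
{
    static void Main()
    {
        const string url = "http://yoursite.example.com/"; // placeholder
        using (var client = new WebClient())
        {
            client.DownloadString(url); // a simple GET wakes the app pool
        }
        Console.WriteLine("Pinged {0} at {1}", url, DateTime.Now);
    }
}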
I would definitely prefer option 1, to avoid depending on the IIS life-cycle for critical scheduling operations. Even though hosting the scheduler in IIS is still quite popular, and we use it ourselves, there are just too many problems with it in my experience.
We had the same issue using Quartz; the reason was that the IIS application pool was shutting down after some idle time. So we had to prevent it from shutting down, and it worked for us.
Find more details here
Related
I have a small program in which I copy stuff from A to B on the PC. The directory paths are written in the config, and when I change the directory in the textbox (in the application), the config file is updated. I checked it: the value is immediately rewritten at the appropriate key. When I close the app and reopen it, it shows the previously changed directory path, but I don't want to have to close and reopen the application. I have a combobox, and I want the path to update as soon as the combobox reselect event triggers. But during runtime (although it has already changed in the config), the directory path shown in the app will not update.
I read through and tried everything I found online, and sadly nothing helped, not even any variation of
ConfigurationManager.RefreshSection("appSettings");
This is my config:
<appSettings file="">
<clear />
<add key="SourcepathClient" value="D:\xxx" />
<add key="SourcepathWin32" value="D:\xxx" />
<add key="DestinationpathUpdatePackages" value="D:\xxx" />
<add key="DestinationpathClient" value="D:\xxx" />
<add key="5_9_0-DestinationpathClient" value="D:\xxxt" />
<add key="5_9_0-DestinationpathUpdatePackages" value="D:\xxx" />
<add key="5_9_1-DestinationpathClient" value="D:\xxx" />
<add key="5_9_1-DestinationpathUpdatePackages" value="D:xxx" />
<add key="5_9_2-DestinationpathClient" value="D:\xxx" />
<add key="5_9_2-DestinationpathUpdatePackages" value="D:\xxx" />
</appSettings>
and this is the code:
Configuration config = ConfigurationManager.OpenExeConfiguration(System.IO.Path.Combine(Directory.GetCurrentDirectory(), "UpdatePackager.exe"));
config.AppSettings.Settings[ComboBoxVersion.Text + "-DestinationpathClient"].Value = TextBoxDestinationpathClient.Text;
config.AppSettings.Settings[ComboBoxVersion.Text + "-DestinationpathUpdatePackages"].Value = TextBoxDestinationpathUpdatePackage.Text;
config.AppSettings.SectionInformation.ForceSave = true;
config.Save(ConfigurationSaveMode.Full);
ConfigurationManager.RefreshSection("appSettings");
I hope someone can help me.
Regards
I think there is no issue with the code; the issue is related to your access rights. It makes a difference whether you run your application under IIS or run your test sample from Visual Studio. The ASP.NET process identity is the IIS account, ASPNET or NETWORK SERVICE (depending on the IIS version).
You might need to grant ASPNET or NETWORK SERVICE Modify access on the folder where the config file resides.
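If the permissions turn out to be fine, also make sure the value is read back through the static accessor after the refresh, not from a stale Configuration object. A minimal sketch (the helper name is mine, not from your code):

using System.Configuration; // reference System.Configuration.dll

static string ReloadSetting(string key)
{
    // Drop the cached copy of appSettings so the next read hits the file.
    ConfigurationManager.RefreshSection("appSettings");
    return ConfigurationManager.AppSettings[key];
}

In the combobox reselect handler, something like TextBoxDestinationpathClient.Text = ReloadSetting(ComboBoxVersion.Text + "-DestinationpathClient"); would then pick up the freshly saved value.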
Memory usage on my website causes a recycle every 10 minutes without fail on my Windows server. Memory is normally 140 MB, but all of a sudden it jumps up to 600 MB or more and the website needs to be recycled. I have the following entries in my web.config file to stop bots from attacking my website. However, I get a syntax error for the filteringRules line, although my website runs OK with the updated web.config file. I noticed that some of the bots still appear in my HTTP log, and I am wondering if my filter is working properly. Does anyone see a problem with this code, or is it actually working in spite of the syntax error I get in Visual Studio? Also, can all these bots cause memory spikes?
<requestFiltering>
  <filteringRules> <!-- this line gives me a syntax error -->
    <filteringRule name="BlockSearchEngines" scanUrl="false" scanQueryString="false">
      <scanHeaders>
        <clear />
        <add requestHeader="User-Agent" />
      </scanHeaders>
      <appliesTo>
        <clear />
      </appliesTo>
      <denyStrings>
        <clear />
        <add string="AhrefsBot" />
        <add string="MJ12bot" />
        <add string="ExtLinksfBot" />
        <add string="Yeti" />
        <add string="YandexBot" />
        <add string="SemrushBot" />
        <add string="DotBot" />
        <add string="istellabot" />
        <add string="Qwantify" />
        <add string="GrapeshotCrawler" />
        <add string="archive.org_bot" />
        <add string="Applebot" />
        <add string="ias_crawler" />
        <add string="Uipbot" />
        <add string="Cliqzbot" />
        <add string="TinEye-bot" />
        <add string="YandexImages" />
      </denyStrings>
    </filteringRule>
  </filteringRules>
The syntax of your <filteringRules> entry looks correct (except for the closing tag typo).
The schema for web.config that Visual Studio uses doesn't seem to include filteringRules, for some reason. However, that schema is independent of what IIS knows and does, so it shouldn't matter.
Also, can all these bots cause memory spikes?
Just because a request is coming from a bot won't make it use a lot of memory. Unintentional memory spikes are usually application bugs -- memory leaks and the like. You can look at your app's memory use with a memory profiler. Allocating memory buffers and then storing references to them in a global variable/collection is a common cause. Failing to call Dispose() on all IDisposable objects is another potential source of leaks.
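For instance, the first pattern below (hypothetical names, purely to illustrate) grows memory until the pool recycles, while the second shows the using pattern that prevents the Dispose() kind of leak:

using System.Collections.Generic;
using System.IO;

static class LeakExamples
{
    // Cause 1: a static collection roots every buffer for the life of
    // the app domain, so the GC can never reclaim them.
    static readonly List<byte[]> Buffers = new List<byte[]>();
    public static void Record(byte[] requestBody) { Buffers.Add(requestBody); }

    // The fix for the Dispose case: scope IDisposable objects with
    // 'using' so Dispose() runs even when an exception is thrown.
    public static string ReadFirstLine(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}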
I have to start two socket.io processes in my Azure worker role. I followed the steps in this link.
Below is my ServiceDefinition.csdef
<WorkerRole name="WorkerRole1">
  <Startup>
    <Task commandLine="setup_worker.cmd > log.txt" executionContext="elevated">
      <Environment>
        <Variable name="EMULATED" value="false" />
        <Variable name="RUNTIMEID" value="node" />
        <Variable name="RUNTIMEURL" value="http://az413943.vo.msecnd.net/node/0.6.20.exe" />
      </Environment>
    </Task>
  </Startup>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="tcp" port="80" />
  </Endpoints>
  <Runtime>
    <Environment>
      <Variable name="PORT">
        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@port" />
      </Variable>
      <Variable name="EMULATED">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
    </Environment>
    <EntryPoint>
      <ProgramEntryPoint commandLine="node.cmd .\server.js" setReadyOnProcessStart="false" />
    </EntryPoint>
  </Runtime>
  <Imports>
    <Import moduleName="RemoteAccess" />
    <Import moduleName="RemoteForwarder" />
    <Import moduleName="Diagnostics" />
  </Imports>
  <LocalResources>
    <LocalStorage name="WorkerLocalStorage" cleanOnRoleRecycle="false" sizeInMB="1024" />
  </LocalResources>
In this I am starting server.js at runtime, but I would also like to start another socket.io script along with it. I don't want to use another worker role for such a small application, just to save cost. I tried to start it as a startup task, but the worker role was hanging/cycling when I started the cloud service in the emulator, with no error info in the output dialog box. So I am guessing the socket.io scripts can only be started in the runtime section. Is there any way I can start both my socket.io scripts in a single worker role?
You can do this multiple ways:
Start the process via a startup task
Start the process from the role entry point.
For #1, if you were seeing the role hanging/cycling then it is because of a bug in your startup task, not because Azure prevents you from running a socket.io script. See http://blogs.msdn.com/b/kwill/archive/2013/08/09/windows-azure-paas-compute-diagnostics-data.aspx for how to troubleshoot this issue, in particular the Troubleshooting Scenario 2 (http://blogs.msdn.com/b/kwill/archive/2013/08/26/troubleshooting-scenario-2-role-recycling-after-running-fine-for-2-weeks.aspx). Also, make sure you set the startup task as background so that the host bootstrapper is not waiting for the process to exit before continuing with the role startup process.
For #2, you would have to either modify node.cmd to spawn two processes, or switch to use a different role entry point and have that role entry point start the node.cmd along with your other script.
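A rough sketch of the role entry point approach in C# (the second script name and the node.exe path are placeholders; adjust to your deployment layout):

using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Spawn both socket.io scripts.
        Process server = Process.Start("node.exe", @".\server.js");
        Process other = Process.Start("node.exe", @".\otherscript.js"); // placeholder name

        // If either process dies, returning from Run() recycles the
        // role, so Azure restarts both.
        server.WaitForExit();
        other.WaitForExit();
    }
}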
I need scheduling functionality on my .NET MVC website, and I came across the Quartz.net library, which can do exactly what I need.
The problem is I'm running my site on a hosting provider (GoDaddy), and when I added Quartz.net 2.0.1 to my project I got a "that assembly does not allow partially trusted callers" exception. After some research I found out that many people have the same problem, and some solved it by removing the Common.Logging library from Quartz.net.
I followed some of the advice and removed all references to Common.Logging, but I still have problems. It looks like that's not enough, and now I'm getting an "Inheritance security rules violated while overriding member" exception; more details:
Inheritance security rules violated while overriding member:
Quartz.Util.DirtyFlagMap`2<TKey,TValue>.GetObjectData
(System.Runtime.Serialization.SerializationInfo,
System.Runtime.Serialization.StreamingContext)'.
Security accessibility of the overriding method must match the
security accessibility of the method being overriden.
It looks like I really need to change something in Quartz.net to make it work.
Has anyone run Quartz.net on medium trust? If so what needs to be done? May be someone can suggest some alternatives?
Steinar's answer sent me in the right direction. Sharing the steps here that got Quartz.Net to work in a medium trust hosting environment.
Quartz.Net initially ran into permission issues on medium trust; we needed to do the following to fix the issue.
(1) Downloaded the Quartz.Net code (2.1.0.400) from GitHub and built it after making the following changes to AssemblyInfo.cs.
Replaced
#if !NET_40
[assembly: System.Security.AllowPartiallyTrustedCallers]
#endif
with
[assembly: AllowPartiallyTrustedCallers]
#if NET_40
[assembly: SecurityRules(SecurityRuleSet.Level1)]
#endif
(2) Downloaded C5 code (v 2.1) and built it with
[assembly: AllowPartiallyTrustedCallersAttribute()]
Ensure C5 is compiled against the same .NET version as Quartz.Net.
(3) Added the quartz section to web.config within TGH; the section had requirePermission set to false. The Common.Logging section also had requirePermission set to false, and it was configured to use Common.Logging.Simple.NoOpLoggerFactoryAdapter.
<configSections>
  <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
  <sectionGroup name="common">
    <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" requirePermission="false" />
  </sectionGroup>
  <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</configSections>
<common>
  <logging>
    <factoryAdapter type="Common.Logging.Simple.NoOpLoggerFactoryAdapter, Common.Logging">
      <arg key="showLogName" value="true" />
      <arg key="showDataTime" value="true" />
      <arg key="level" value="OFF" />
      <arg key="dateTimeFormat" value="HH:mm:ss:fff" />
    </factoryAdapter>
  </logging>
</common>
<quartz>
  <add key="quartz.scheduler.instanceName" value="QuartzScheduler" />
  <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
  <add key="quartz.threadPool.threadCount" value="10" />
  <add key="quartz.threadPool.threadPriority" value="2" />
  <add key="quartz.jobStore.misfireThreshold" value="60000" />
  <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
</quartz>
(4) Initialised the scheduler using the constructor that takes a NameValueCollection as its parameter; the collection was the quartz section picked up from web.config.
In global.asax
QuartzScheduler.Start();
The class
public class QuartzScheduler
{
    public static void Start()
    {
        ISchedulerFactory schedulerFactory = new StdSchedulerFactory((NameValueCollection)ConfigurationManager.GetSection("quartz"));
        IScheduler scheduler = schedulerFactory.GetScheduler();
        scheduler.Start();

        IJobDetail inviteRequestProcessor = new JobDetailImpl("ProcessInviteRequest", null, typeof(InviteRequestJob));
        IDailyTimeIntervalTrigger trigger = new DailyTimeIntervalTriggerImpl(
            "Invite Request Trigger",
            Quartz.TimeOfDay.HourMinuteAndSecondOfDay(0, 0, 0),
            Quartz.TimeOfDay.HourMinuteAndSecondOfDay(23, 23, 59),
            Quartz.IntervalUnit.Second, 1);
        scheduler.ScheduleJob(inviteRequestProcessor, trigger);
    }
}

public class InviteRequestJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        RequestInvite.ProcessInviteRequests();
    }
}
I recommend building Common.Logging yourself rather than removing it from the project. You can get the latest source from http://netcommon.sourceforge.net/downloads.html.
I guess the second problem was that the C5.dll wasn't trusted either. I would also just build that myself. The source can be found here: http://www.itu.dk/research/c5/.
Although there are options other than building the dlls (http://stackoverflow.com/questions/3072359/unblocking-a-dll-on-a-company-machine-how), I personally prefer to build the dlls myself unless I absolutely trust the downloaded product.
My application uses AppFabric for our distributed caching model in a production web farm of five Windows web servers. The application is a .NET 4 C# web application. We are encountering some problems with AppFabric and have some questions regarding its setup. The main issue is that if one of the five web servers is restarted, the site on the other servers will also go down for a short period of time, with AppFabric exceptions like the following appearing in our event logs:
Message: ErrorCode:SubStatus:There is a temporary failure. Please retry later.
ErrorCode:SubStatus:Region referred to does not exist. Use CreateRegion API to fix the error.
We have a cache provider wrapper class that creates the DataCacheFactory object etc. and is used as the intermediary between the web application and AppFabric. This is a singleton class, so only one instance of the DataCacheFactory object is created in the Init of the class.
I believe I have found the reason for the second error above: in our code the region was being created in Init, i.e. at the very start, but if a node that holds the region in its memory drops out of the cluster, the above error is the result. To resolve this issue, region creation should be attempted on every request to AppFabric, but only creating it if it does not exist. Does this sound correct?
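In other words, something like this guarded create before each cache call (a sketch of what I am proposing; my understanding is that CreateRegion returns false rather than throwing if the region already exists):

using Microsoft.ApplicationServer.Caching;

static void EnsureRegion(DataCache cache, string regionName)
{
    // Safe to call on every request: recreates the region if the node
    // holding it has dropped out of the cluster; otherwise it simply
    // returns false and is a no-op.
    cache.CreateRegion(regionName);
}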
Regarding the other error, I believe it may be down to the configuration. This is the cluster config XML file:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="dataCache" type="Microsoft.ApplicationServer.Caching.DataCacheSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
  <dataCache size="Small">
    <caches>
      <cache consistency="StrongConsistency" name="App1Cache" secondaries="1">
        <policy>
          <eviction type="Lru" />
          <expiration defaultTTL="10" isExpirable="true" />
        </policy>
      </cache>
      <cache consistency="StrongConsistency" name="App2Cache" secondaries="1">
        <policy>
          <eviction type="Lru" />
          <expiration defaultTTL="10" isExpirable="true" />
        </policy>
      </cache>
      <cache consistency="StrongConsistency" name="App3Cache" secondaries="1">
        <policy>
          <eviction type="Lru" />
          <expiration defaultTTL="10" isExpirable="true" />
        </policy>
      </cache>
      <cache consistency="StrongConsistency" name="default">
        <policy>
          <eviction type="Lru" />
          <expiration defaultTTL="10" isExpirable="true" />
        </policy>
      </cache>
    </caches>
    <hosts>
      <host replicationPort="22236" arbitrationPort="22235" clusterPort="22234" hostId="724664608" size="1228" leadHost="true" account="SERVER1\user" cacheHostName="AppFabricCachingService" name="SERVER1" cachePort="22233" />
      <host replicationPort="22236" arbitrationPort="22235" clusterPort="22234" hostId="598646137" size="1228" leadHost="true" account="SERVER2\user" cacheHostName="AppFabricCachingService" name="SERVER2" cachePort="22233" />
      <host replicationPort="22236" arbitrationPort="22235" clusterPort="22234" hostId="358039700" size="1228" leadHost="true" account="SERVER3\user" cacheHostName="AppFabricCachingService" name="SERVER3" cachePort="22233" />
      <host replicationPort="22236" arbitrationPort="22235" clusterPort="22234" hostId="929915039" size="1228" leadHost="false" account="SERVER4\user" cacheHostName="AppFabricCachingService" name="SERVER4" cachePort="22233" />
      <host replicationPort="22236" arbitrationPort="22235" clusterPort="22234" hostId="1752630351" size="1228" leadHost="false" account="SERVER5\user" cacheHostName="AppFabricCachingService" name="SERVER5" cachePort="22233" />
    </hosts>
    <advancedProperties>
      <securityProperties>
        <authorization>
          <allow users="everyone" />
        </authorization>
      </securityProperties>
    </advancedProperties>
  </dataCache>
</configuration>
Note: we have multiple caches set up, as we have multiple applications using AppFabric, and we are seeing the same issues with them all.
And this is the web.config entry in the application on each of the servers:
<dataCacheClient requestTimeout="15000" channelOpenTimeout="3000" maxConnectionsToServer="1">
  <localCache isEnabled="true" sync="TimeoutBased" ttlValue="300" objectCount="10000" />
  <clientNotification pollInterval="300" maxQueueLength="10000" />
  <hosts>
    <host name="SERVER1" cachePort="22233" />
    <host name="SERVER2" cachePort="22233" />
    <host name="SERVER3" cachePort="22233" />
    <host name="SERVER4" cachePort="22233" />
    <host name="SERVER5" cachePort="22233" />
  </hosts>
  <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456" maxBufferSize="8388608" maxOutputDelay="2" channelInitializationTimeout="60000" receiveTimeout="600000" />
</dataCacheClient>
Anyone see a problem with the above? As you can see we have 3 lead hosts and 2 secondaries.
Some questions I have following on from this are:
I have read about having a local cache - what is the technical benefit of this? I.e., will this give a local copy of the data per node?
What is the best practice regarding ports? Are the above ports correct or could there be conflicts with the same ports being used?
The 3 lead hosts and 2 secondaries, is this a recommended split? Does it mean there are 3 copies of the data?
When we are restarting the servers, we attempt to never restart the lead hosts at the same time.
Thanks for any feedback on this!
We make extensive use of AppFabric caching. You are going to see the
Message: ErrorCode:SubStatus:There is a temporary failure. Please retry later.
fairly often. It's probably best to write yourself a wrapper around AppFabric that automates retries when this error is thrown. You really want to use exponential backoff but, failing that, randomizing the retry period may be enough.
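A minimal sketch of such a wrapper (assuming the transient failures surface as DataCacheException with DataCacheErrorCode.RetryLater, which you should verify against the exceptions you actually see):

using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

public static class CacheRetry
{
    public static T Execute<T>(Func<T> operation, int maxAttempts = 4)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (DataCacheException ex)
            {
                if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt >= maxAttempts)
                    throw;
                // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
                Thread.Sleep(TimeSpan.FromMilliseconds(100 << (attempt - 1)));
            }
        }
    }
}

Usage would look like: var value = CacheRetry.Execute(() => cache.Get("key", "regionName"));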
The cache configuration in the Web.config file is only used to create the cache factory. It will contact one of the hosts and obtain the cluster configuration from it. The only benefit to listing all hosts in your Web.config is that if one host is down, the factory can contact another. Even if you listed only a single host, provided that host was present, your caching would work fine.
Using a local cache is likely to improve performance if you read objects more frequently than you write them. You're going to have to tune the size of that by experimentation.