SharePoint cannot see newly deployed features and will not activate them - c#

I've been updating an SP2010 solution which integrates an external content source into search via BCS. This solution deploys a feature (FeatureA) to the farm scope. I split it into two features, one (FeatureA) deploying to the farm scope, and one (FeatureB) to the site scope.
My update script does this (sketched below):
Deactivate FeatureA on the farm
Update-SPSolution with the new wsp file (same name)
Activate FeatureA on the farm
Activate FeatureB on the two sites (on two different web apps)
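Concretely, the script does roughly the following (a minimal sketch; the solution, feature, and site names are placeholders, and -GACDeployment is an assumption about the package):
# 1. Deactivate the farm-scoped feature
Disable-SPFeature -Identity "FeatureA" -Confirm:$false
# 2. Push the new package over the deployed one (same file name)
Update-SPSolution -Identity "MySolution.wsp" -LiteralPath "C:\Drop\MySolution.wsp" -GACDeployment
# 3. Reactivate the farm-scoped feature
Enable-SPFeature -Identity "FeatureA"
# 4. Activate the site-scoped feature on both sites -- this is where it fails
Enable-SPFeature -Identity "FeatureB" -Url "http://webapp1/sites/siteA"
Enable-SPFeature -Identity "FeatureB" -Url "http://webapp2/sites/siteB"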
The script bombs on the last two steps, saying
Enable-SPFeature : The Feature is either not found or not a Farm Level Feature. Use Url parameter to specify the scope of the Feature.
for the first one (farm), and
Enable-SPFeature : The Feature is not a Farm Level Feature and is not found in a Site level defined by the Url http://url-site
on the second one (sites).
This was a test run on the CI server, which means it will also fail on the production server.
However, deploying the package on my machine and activating the features works fine.
I've checked, and the features are actually present in the SharePoint features folder on disk, so the deployment seems to have gone OK. I can't work out why SharePoint can't see them, though. If I run Get-SPFeature, they are not in the list.
I've tried iisreset, to no avail.
EDIT:
I've managed to get SharePoint to notice the two features by using Install-SPFeature.
However, it still won't enable FeatureB, but errors out with:
Enable-SPFeature : Attempted to perform an unauthorized operation.
I'm at a bit of a loss once again.

You cannot use Update-SPSolution when new files have been added to the solution package.
From Update-SPSolution:
The Update-SPSolution cmdlet upgrades a deployed SharePoint solution in the farm. Use this cmdlet only if a new solution contains the same set of files and features as the deployed solution. If files and features are different, the solution must be retracted and redeployed by using the Uninstall-SPSolution and Install-SPSolution cmdlets, respectively.
For more information, see Adding Features during Solution Update
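In script form, the retract-and-redeploy cycle the documentation describes looks roughly like this (a sketch; the solution name, path, and deployment switches are placeholders to adapt to your package):
# Retract and remove the old package
Uninstall-SPSolution -Identity "MySolution.wsp" -Confirm:$false
Remove-SPSolution -Identity "MySolution.wsp" -Confirm:$false
# Add and deploy the new package (with the new FeatureB included)
Add-SPSolution -LiteralPath "C:\Drop\MySolution.wsp"
Install-SPSolution -Identity "MySolution.wsp" -GACDeployment
After this, Enable-SPFeature should be able to find both features.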

Related

Deployed app function has empty host.json and no host keys

I'm having a weird issue with my Azure Function App and I can't find anything on this.
I republished my function without changing its code, but suddenly the function stopped working, and I'm getting this message as soon as I navigate to the function's page on Azure:
Error:
Error retrieving master key.
If I navigate to the function's settings, I can see that no keys have been generated and that the host.json file is empty. Browsing my function's files using Kudu, however, shows that the file contents are correct.
Two more things make this weirder:
The function correctly works locally
If I take the code of another function and deploy it on this one, my function works correctly, meaning that it's not an issue related to my function's configuration but rather to its code
Do you guys have any pointers on this?
EDIT:
Let me add more details on this.
Let's say I have 2 solutions, A.sln and B.sln.
I also have 2 Function Apps on Azure, let's say F_1 and F_2.
A.sln and B.sln have the very same structure; the only difference is in business logic.
The same applies to F_1 and F_2: their only differences are the related storage accounts, as each function has its own.
Currently A.sln is deployed on F_1 and B.sln on F_2, and the only one working is F_1.
If I deploy A.sln on F_2, F_2 starts working, so my idea is that there's something wrong in B.sln's code because A.sln works with the very same configuration.
The Function App has a reference to a Storage account in the application settings AzureWebJobsDashboard, AzureWebJobsStorage, and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING (if you are running on a consumption plan). Either clearing out this storage account or simply recreating it fixed the problem.
I would also recommend creating separate storage accounts for every Function App - at least as long as these hard-to-find bugs are present. It is a lot easier to fix these kinds of issues when they only affect a single Function App.
I don't know if this is the case here, but I found out that in my case (new deployment of a Function App v3) host.json ends up empty on Azure if there is a comment line in it. Removing the comments solved my problem, and the host.json file is now deployed properly.
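For comparison, a minimal host.json with no comment lines (roughly the default for a v3 function; real files may carry more settings) looks like:
{
  "version": "2.0"
}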
One of the reasons could be that the key inside the storage account has been rotated. If so, the connection strings referenced in the AzureWebJobsDashboard and AzureWebJobsStorage settings of the Azure Function will no longer be valid.
Solution: Go to the storage account referenced in AzureWebJobsDashboard and AzureWebJobsStorage -> Access Keys -> Copy the connection string under key1 and use this for the AzureWebJobsDashboard and AzureWebJobsStorage.
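If you prefer to script this rather than copy keys through the portal, something along these lines should work (a sketch using the Az PowerShell modules; the resource group, storage account, and app names are placeholders):
# Fetch the current key and rebuild the connection string (assumes Az.Storage)
$key = (Get-AzStorageAccountKey -ResourceGroupName "my-rg" -Name "mystorageacct")[0].Value
$conn = "DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=$key;EndpointSuffix=core.windows.net"

# Point the Function App settings at the refreshed connection string (assumes Az.Functions)
Update-AzFunctionAppSetting -ResourceGroupName "my-rg" -Name "myfunctionapp" -AppSetting @{
    AzureWebJobsStorage   = $conn
    AzureWebJobsDashboard = $conn
}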

VS2013 publish Web deployment task failed The file is in use

I am using VS2013 Premium to publish a site to Windows Server 2012.
All files publish ok except these:
SqlServerTypes\x64\msvcr100.dll
SqlServerTypes\x64\SqlServerSpatial110.dll
SqlServerTypes\x86\msvcr100.dll
SqlServerTypes\x86\SqlServerSpatial110.dll
I get this kind of error for each of the above files I try to publish:
Web deployment task failed. (The file 'msvcr100.dll' is in use. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.)
Interestingly, these files were published the first time (when they were not yet on the server); since then, they are no longer overwritten. I tried this with 2 different web servers.
I have followed the guide here:
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
...but it only managed to take the site offline (VS places the app_offline.htm); publishing still fails with the same error.
All other files publish perfectly.
Any ideas?
You can take your app offline during publishing, which hopefully should free up the lock on the file and allow you to update it.
I blogged about this a while back. The support outlined there shipped inside the Azure SDK and a Visual Studio update. I don't remember the exact releases, but I can find out if needed. Any update dating around/after that blog post should be fine.
Prerequisites:
VS 2012 + VS update / VS 2013 + VS Update / VS2015
MSDeploy v3
Note: if you are publishing from a CI server, the CI server will need the updates above as well.
Edit the publish profile
In VS, when you create a Web Publish profile, the settings from the dialog are stored in Properties\PublishProfiles\ as files that end with .pubxml. Note: there is also a .pubxml.user file; that file should not be modified.
To take your app offline, add the following property in the .pubxml file:
<EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
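For context, the property goes inside the profile's PropertyGroup; a trimmed-down .pubxml might look like this (all other generated settings omitted):
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
  </PropertyGroup>
</Project>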
Notes
ASP.NET Required
The way this has been implemented on the MSDeploy side is that an app_offline.htm file is dropped in the root of the website/app. From there, the ASP.NET runtime will detect it and take your app offline. Because of this, if your website/app doesn't have ASP.NET enabled, this feature will not work.
Cases where it may not work
The implementation makes it such that the app may not strictly be offline before the publish starts. First the app_offline.htm file is dropped, then MSDeploy starts publishing the files. It doesn't wait for ASP.NET to detect the file and actually take the app offline. Because of this, you may still run into the file lock. By default VS enables retries, so usually the app will go offline during one of the retries and all is good. In some cases it may take longer for ASP.NET to respond. That is a bit more tricky.
In the case that you add <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline> and your app is not getting taken offline soon enough, then I suggest that you take the app offline before the publish begins. There are several ways to do this remotely, but that depends on your setup. If you only have MSDeploy access, you can try the following sequence:
Use msdeploy.exe to take your site offline by dropping app_offline.htm
Wait some amount of time
Use msdeploy.exe to publish your app (make sure the sync doesn't delete the app_offline.htm file)
Use msdeploy.exe to bring the app online by deleting app_offline.htm
I have blogged how you can do this at http://sedodream.com/2012/01/08/howtotakeyourwebappofflineduringpublishing.aspx. The only thing missing from that blog post is the delay to wait for the site to actually be taken offline. You can also create a script that calls msdeploy.exe directly instead of integrating it into the project build/publish process; a rough sketch follows.
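Here is a rough sketch of that sequence as a PowerShell wrapper (the site name, file paths, and msdeploy location are placeholders; remote-computer arguments are omitted for brevity):
$msdeploy = "${env:ProgramFiles}\IIS\Microsoft Web Deploy V3\msdeploy.exe"

# 1. Take the site offline by syncing an app_offline.htm into the site root
& $msdeploy "-verb:sync" "-source:contentPath=C:\deploy\app_offline.htm" "-dest:contentPath=MySite/app_offline.htm"

# 2. Give ASP.NET time to notice the file and unload the app
Start-Sleep -Seconds 10

# 3. Publish the app here (configure the sync so it does not delete app_offline.htm)

# 4. Bring the app back online by deleting app_offline.htm
& $msdeploy "-verb:delete" "-dest:contentPath=MySite/app_offline.htm"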
I have found the reason why the solution at
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
did not work for the original poster, and I have a workaround.
The issue with the EnableMSDeployAppOffline approach is that it only recycles the app domain hosting the application. It does not recycle the app pool worker process (w3wp.exe) in which the app domain lives.
Tearing down and recreating the app domain will not affect the SQL Server Spatial dlls in question. Those dlls are unmanaged code, loaded manually via interop LoadLibrary calls; therefore they live outside the purview of the app domain.
In order to release the file locks, which the app pool process holds on them, you need to either recycle the app pool or unload the dlls from memory manually.
The Microsoft.SqlServer.Types nuget package ships a class, SqlServerTypes.Utilities, which is used to load the Spatial dlls. You can modify its LoadNativeAssemblies method to unload the unmanaged dlls when the app domain unloads. With this modification, when msdeploy copies the app_offline.htm, the app domain will unload and free the unmanaged dlls as well.
[DllImport("kernel32.dll", SetLastError = true)]
internal extern static bool FreeLibrary(IntPtr hModule);
private static IntPtr _msvcrPtr = IntPtr.Zero;
private static IntPtr _spatialPtr = IntPtr.Zero;
public static void LoadNativeAssemblies(string rootApplicationPath)
{
if (_msvcrPtr != IntPtr.Zero || _spatialPtr != IntPtr.Zero)
throw new Exception("LoadNativeAssemblies already called.");
var nativeBinaryPath = IntPtr.Size > 4
? Path.Combine(rootApplicationPath, #"SqlServerTypes\x64\")
: Path.Combine(rootApplicationPath, #"SqlServerTypes\x86\");
_msvcrPtr = LoadNativeAssembly(nativeBinaryPath, "msvcr100.dll");
_spatialPtr = LoadNativeAssembly(nativeBinaryPath, "SqlServerSpatial110.dll");
AppDomain.CurrentDomain.DomainUnload += (sender, e) =>
{
if (_msvcrPtr != IntPtr.Zero)
{
FreeLibrary(_msvcrPtr);
_msvcrPtr = IntPtr.Zero;
}
if (_spatialPtr != IntPtr.Zero)
{
FreeLibrary(_spatialPtr);
_spatialPtr = IntPtr.Zero;
}
};
}
There is one caveat with this approach: it assumes your application is the only one in the worker process that is using the Spatial dlls. Since app pools can host multiple applications, the file locks will not be released if another application has also loaded them, and your deploy will fail with the same file-locked error.
There are known issues with IIS and file locks (why they aren't solved yet, I don't know).
The question I want to ask, however, is whether you even need to re-deploy these files.
I recognize the file names and recall them to be system files which should either already be present on the server or simply not need to be re-deployed.
I am not very experienced when it comes to IIS, but I have run into this problem before, and several of my more experienced co-workers have told me that this is, as I said, a known IIS issue, and I believe the answer to your question is:
Avoid deploying unnecessary files.
Try again
Reset the website
Try again
iisreset
I think the easiest thing to do is to mark these DLLs with Copy Local = true. I am assuming these DLLs are pulled from the Program Files folder. Try marking them as Copy Local = true and doing a deployment. Also try stopping any local IIS process running on your machine.
Watch out that you don't have one of those new-fangled cloud backup services running that takes file locks, and that you don't have files open in Explorer or a DLL inspection tool.
I think it's kind of ridiculous that MS doesn't make better provisions for this problem. I find that 9 times out of 10 my deployment works just fine, but as our traffic increases that can become 1 in 10 times.
I am going to solve the problem with two applications, MySite.A and MySite.B, where only one is running at a time:
I always deploy to the dormant site.
If there's a problem during the deployment, it will never cause the whole site to go down.
If there's a major problem after deployment, you can revert back very easily.
Not quite sure how I'm implementing it yet, but I think this is what I need to do; one way to do the switch is sketched below.
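One way to implement the switch is to move the production binding between the two IIS sites after deploying to the dormant one; a sketch with the WebAdministration module (the site names and binding are placeholders, and both sites are assumed to already exist in IIS):
Import-Module WebAdministration

# Deploy to the dormant site (here MySite.B), then move the production binding onto it
Remove-WebBinding -Name "MySite.A" -Protocol http -Port 80 -HostHeader "www.example.com"
New-WebBinding -Name "MySite.B" -Protocol http -Port 80 -HostHeader "www.example.com"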

Feature deactivation code behind in Sandbox solution

Just a question to clarify my doubts here!
I created a Sandbox solution with Visual Studio 2010 for SharePoint 2010.
The solution contains just a list instance; when the feature is deployed, a list gets created on the site.
Now I also wish to delete the list when the feature is deactivated.
To do that, I wrote the code below in EventReceiver.cs.
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    using (SPSite site = new SPSite("http://sitecollection"))
    {
        SPWeb web = site.RootWeb;
        SPList list = web.Lists["listname"];
        list.Delete();
        // Note: calling list.Update() or web.Update() after Delete() is
        // unnecessary; the list is already gone at this point.
    }
}
While this does delete the list on feature deactivation, my question is:
How come this project is STILL a sandbox solution (no dll deployed to the GAC) when it contains server-side code in a code-behind file?
Thanks,
Tushar
Sandbox solutions can use server-side code. The difference is that the code runs in a separate Windows service on the server (the sandboxed code host, SPUCWorkerProcess.exe) and not in the w3wp or owstimer process. The cost is that you do not have access to all server-side functionality (you cannot deploy timer jobs using sandbox solutions, for example). You can read more about sandbox solutions here.

How to use ServerManager to read IIS sites, not IIS express, from class library OR how do elevated processes handle class libraries?

I have some utility methods that use Microsoft.Web.Administration.ServerManager that I've been having some issues with. Use the following dead simple code for illustration purposes.
using (var mgr = new ServerManager())
{
    foreach (var site in mgr.Sites)
    {
        Console.WriteLine(site.Name);
    }
}
If I put that code directly in a console application and run it, it will get and list the IIS Express websites. If I run that app from an elevated command prompt, it will list the IIS7 websites. A little inconvenient, but so far so good.
If instead I put that code in a class library that is referenced and called by the console app, it will ALWAYS list the IIS Express sites, even if the console app is elevated.
Google has led me to try the following, with no luck.
// This returns IIS Express
var mgr = new ServerManager();

// This also returns IIS Express
var mgr = ServerManager.OpenRemote(Environment.MachineName);

// This throws an exception (environment variables like %windir% are not expanded)
var mgr = new ServerManager(@"%windir%\system32\inetsrv\config\applicationhost.config");
Evidently I've misunderstood something in the way an "elevated" process runs. Shouldn't everything executing in an elevated process, even code from another dll, be run with elevated rights? Evidently not?
Thanks for the help!
Make sure you are adding the reference to the correct Microsoft.Web.Administration; it should be v7.0.0.0, located under c:\windows\system32\inetsrv\.
It looks like you are adding a reference to IIS Express's Microsoft.Web.Administration, which will give you that behavior.
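A quick way to check which assembly you are really pointing at is to inspect its version from PowerShell; the copy under inetsrv should report 7.0.0.0:
$dll = "$env:systemroot\system32\inetsrv\Microsoft.Web.Administration.dll"
[System.Reflection.AssemblyName]::GetAssemblyName($dll).Version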
Your question helped me find the answer for PowerShell, so if the Internet is searching for how to do that:
$assembly = [System.Reflection.Assembly]::LoadFrom("$env:systemroot\system32\inetsrv\Microsoft.Web.Administration.dll")
# load IIS express
$iis = new-object Microsoft.Web.Administration.ServerManager
$iis.Sites
# load IIS proper
$iis = new-object Microsoft.Web.Administration.ServerManager "$env:systemroot\system32\inetsrv\config\applicationhost.config"
$iis.Sites
CAUTION! Using this approach we have seen seemingly random issues such as "unsupported operation" exceptions, failure to add/remove HTTPS bindings, failure to start/stop application pools when running in IIS Express, and other problems. It is unknown whether this is due to IIS being generally buggy or due to the unorthodox approach described here. In general, my impression is that all tools for automating IIS (appcmd, Microsoft.Web.Administration, PowerShell, ...) are wonky and unstable, especially across different OS versions. Good testing is (as always) advisable!
EDIT: Please also see the comments on this answer as to why this approach may be unstable.
The regular Microsoft.Web.Administration package installed from NuGet works fine. No need to copy any system DLLs.
The obvious solution from the official documentation also works fine:
ServerManager iisManager = new ServerManager(Environment.SystemDirectory + @"\inetsrv\config\applicationHost.config");
This works even if you execute the above from within the application pool of IIS Express. You will still see the configuration of the "real" IIS. You will even be able to add new sites, as long as your application runs as a user with permission to do so.
Note, however that the constructor above is documented as "Microsoft internal use only":
https://msdn.microsoft.com/en-us/library/ms617371(v=vs.90).aspx
var iisManager = new ServerManager(Environment.SystemDirectory + "\\inetsrv\\config\\applicationhost.config");
This works perfectly. No need to change any references.

Web service call slow from dev solution

OK, I have an ASP.NET web service using WSE2. It runs on an XP machine.
And I have the front-end ASP.NET application on my Win7 machine.
Both are on Framework 3.5.
In the production environment everything is fine.
The problem is, when I run the "development" version of the front end, web service calls take forever. And by forever, I mean an eternity. Here we count eternity in minutes.
By "development" version, I mean that I run the instance that is bound to the Visual Studio (2008) solution. I use the local IIS web server.
My first thought was a network/firewall problem between my two machines. But if, from Visual Studio, I "publish" the site to another virtual folder, then everything works fine.
So I have http://localhost/MyDevApp and http://localhost/MyPublishedApp.
Both use the default app pool. Both have identical web.config files. As far as I know, both virtual directories have exactly the same parameters.
But http://localhost/MyDevApp is terribly slow when calling web services, while http://localhost/MyPublishedApp runs at light speed.
It has been like this for 3 days now.
Doing some debugging, I can say that:
MyWebServiceRequest request = new MyWebServiceRequest();
request.Url = "http://mywebserviceurl";
request.RequestSoapContext.Tokens.Add(MyUsernameToken);
// All the previous lines execute correctly and quickly.
// THIS is the slow one:
request.CallWebServiceMethod();
Does anyone have the slightest idea what the problem can be?
Edit
I also tried changing the virtual directory in my web site's properties to something different (say http://localhost/MyDevApp2), with the same result.
Edit 2
Maybe this is a cause: the site bound to the solution resides in c:\Projets\MySolution\MyDevApp while the "published" one is under c:\inetpub\wwwroot\MyPublishedApp. The c:\Projets folder is excluded from the antivirus scans, though, so normally it should be faster rather than slower.
Edit 3
I created another workspace (the solution is under Team Server source control) in c:\inetpub\wwwroot\Other, changed the URL of the web project, compiled and ran it: no problem. So it really seems that the physical path where the files reside is causing this, not the fact of being bound to the solution.
Edit 4 (August 19)
Well, it seems there is not much to do. It's been about 10 days since my last update, and now the site under the new workspace is beginning to slow down too. So I moved the workspace on disk, now to c:\inetpub\wwwroot\Other2, and you know what? It's running fine again. Perhaps I will have to move it again in about 10 days.
Edit 5
I flagged my question to be moved to Server Fault; in the end it has nothing to do with programming, contrary to what I first thought.
Hard to say without seeing your machine config, but oftentimes it means you have some kind of problem with your DNS server or hosts file that makes the process slow to resolve the service.
Also, if you are using a proxy server, make sure you are bypassing it for any URLs that call the service; a sketch of the relevant web.config section follows.
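If a proxy does turn out to be involved, the bypass can be configured in the client's web.config; a sketch (the address pattern is a placeholder regex for your service host):
<system.net>
  <defaultProxy>
    <proxy usesystemdefault="true" bypassonlocal="true" />
    <bypasslist>
      <add address="mywebservicehost" />
    </bypasslist>
  </defaultProxy>
</system.net>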
Finally, it turned out that WSE2 traces were on:
<microsoft.web.services2>
  <diagnostics>
    <trace enabled="true" input="InputTrace.log" output="OutputTrace.log" />
  </diagnostics>
  <policy>
    <cache name="policyCache.config" />
  </policy>
</microsoft.web.services2>
As the log files grew larger and larger, everything slowed down more and more... just disabling the trace solved the problem.
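In other words, the fix was simply:
<diagnostics>
  <trace enabled="false" input="InputTrace.log" output="OutputTrace.log" />
</diagnostics>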
