First of all, let me say that I know it is bad practice, probably not even technically allowed, etc., to force stop another service from my app.
However, there ARE some use cases that warrant this. In my case, for example, there is a 3rd party service that is installed by my app because of my reference to it ("it" is a barcode scanning SDK). The SDK states that I must call a method called
GetScannerService();
I have observed that this call will either start the service or grab the instance to it if it is already running.
Furthermore, there are some calls that have to be done during onStop and onDestroy of my app which effectively will stop this third party service.
All that said, I have seen cases where this service gets stuck in a weird state. I have no control over the code (and bugs) in this package. Yes, I have reached out to the vendor, but so far I have been unsuccessful in getting them to fix the root cause. When it is stuck in this state, I can see it in the list of running services (and sometimes it is listed in the cached ones), but when my app calls GetScannerService, it throws an exception that basically states the service cannot be started...but it already is.
So, when this happens, if I manually go to the running services list, find it (again, sometimes it is in the cached list) and click force stop, this fixes things and my app works as expected again...until it happens again, that is.
So, I want and need to have my app control this service. The thought is that on startup, when I make the first call to GetScannerService, if it throws the exception, I will force stop the service so that I can call again and have it start cleanly. In other words, I want to automate the force stop function.
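Roughly, this is the flow I'm after (a sketch; ForceStopScannerService is a hypothetical placeholder for the part I don't know how to do, which is what this question is about):
void EnsureScannerService()
{
    try
    {
        GetScannerService();         // starts the service, or attaches to the running instance
    }
    catch (Exception)
    {
        ForceStopScannerService();   // hypothetical: automate the manual "Force stop" step
        GetScannerService();         // retry now that the stuck instance is gone
    }
}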
I know technically this is not allowed but I have also read that there are ways to do it, even if you don't have root.
So far, I can get the list of all running services, and I can see the service in question in my list, which means I also have access to a lot of information about it. But what I have tried does not work: I called KillBackgroundProcesses, and the service is still in the list.
Here is what I have tried so far:
// serviceListLimit, emdkServiceName, emdkClass and the 'services' ListView are class-level fields (not shown).
private void Button_Click(object sender, EventArgs e)
{
    // List the currently running services.
    var am = (ActivityManager)this.Application.ApplicationContext.GetSystemService(Context.ActivityService);
    var taskList = am.GetRunningServices(serviceListLimit);
    List<string> serviceNames = new List<string>();
    foreach (var t in taskList)
    {
        serviceNames.Add(t.Service.PackageName);
    }
    // Show the package names on screen.
    var adapter = new ArrayAdapter<string>(this, Android.Resource.Layout.SimpleListItem1, serviceNames);
    services.Adapter = adapter;
    // If the scanner service's package is in the list, try to kill it.
    if (serviceNames.Contains(emdkServiceName))
        testKillService(am, emdkClass);
}
private void testKillService(ActivityManager am, Class emdkClass)
{
    // This call returns without error, but the service is still in the running list afterwards.
    am.KillBackgroundProcesses(emdkServiceName);
}
So, I can list the running services, see the one in question, and grab details about it. Does anybody know how I can force stop it?
You can use the Process class:
void killProcess (int pid)
You can get the "pid" from the ActivityManager.RunningAppProcessInfo.
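In Xamarin.Android that might look roughly like this (a sketch; note that killing a process you don't own generally only succeeds if it runs under the same user ID as your app, so this may silently do nothing for a third-party service):
// Find the running process that hosts the service's package and kill it.
// emdkServiceName is the package-name field from the question's code.
var am = (ActivityManager)Application.Context.GetSystemService(Context.ActivityService);
var processes = am.RunningAppProcesses;   // may be empty or limited to your own process on newer Android versions
if (processes != null)
{
    foreach (var proc in processes)
    {
        if (proc.ProcessName == emdkServiceName)
        {
            Android.OS.Process.KillProcess(proc.Pid);
        }
    }
}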
I'm diagnosing why a particular service omits a call to System.ComponentModel.Container(), and I would like to understand the purpose of this object and what it is used for.
Ideally, I'd also like to understand what is expected to happen if the class-level variable is set to null.
private void InitializeComponent()
{
    components = new System.ComponentModel.Container();
    this.ServiceName = "Service1";
}
One theoretical explanation is to hide the service from the net command and from services.msc. There are likely better approaches to accomplish that without risking the runtime operation of the service.
I am new to messaging architectures, so I might be going at this the wrong way, but I wanted to introduce NServiceBus slowly in my team by solving a tiny problem.
Appointments in agendas have states. Two users could be looking at the same appointment in the same agenda, in the same application. They start this application via a remote session on a central server. So if user 1 updates the state of the appointment, I'd like user 2 to see the new state in real time.
To simulate this, or make a proof of concept if you will, I made a new console application. Via NuGet I got both NServiceBus and NServiceBus.Host, because as I understood from the documentation I need both. I know that in production code it is not recommended to put everything in the same assembly, but the publisher and subscriber will most likely end up in the same assembly anyway...
In the Main method of the Program class I wrote the following code:
BusConfiguration configuration = new BusConfiguration();
configuration.UsePersistence<InMemoryPersistence>();
configuration.UseSerialization<XmlSerializer>();
configuration.UseTransport<MsmqTransport>();
configuration.TimeToWaitBeforeTriggeringCriticalErrorOnTimeoutOutages(new TimeSpan(1, 0, 0));
ConventionsBuilder conventions = configuration.Conventions();
conventions.DefiningEventsAs(t => t.Namespace != null
&& t.Namespace.Contains("Events"));
using (IStartableBus bus = Bus.Create(configuration))
{
    bus.Start();
    Console.WriteLine("Press key");
    Console.ReadKey();
    bus.Publish<Events.AppointmentStateChanged>(a =>
    {
        a.AppointmentID = 1;
        a.NewState = "New state";
    });
    Console.WriteLine("Event published.");
    Console.ReadKey();
}
In the Customize method of the EndPointConfig class I added:
configuration.UsePersistence<InMemoryPersistence>();
configuration.UseSerialization<XmlSerializer>();
configuration.UseTransport<MsmqTransport>();
ConventionsBuilder conventions = configuration.Conventions();
conventions.DefiningEventsAs(t => t.Namespace != null
&& t.Namespace.Contains("Events"));
AppointmentStateChanged is a simple class in the Events folder like so:
public class AppointmentStateChanged : IEvent
{
    public int AppointmentID { get; set; }
    public string NewState { get; set; }
}
AppointmentStateChangedHandler is the event handler:
public class AppointmentStateChangedHandler : IHandleMessages<Events.AppointmentStateChanged>
{
    public void Handle(Events.AppointmentStateChanged message)
    {
        Console.WriteLine("AppointmentID: {0}, changed to state: {1}",
            message.AppointmentID,
            message.NewState);
    }
}
If I start up one console app, everything works fine; I see the handler handle the event. But if I try to start up a second console app, it crashes with: System.Messaging.MessageQueueException (Timeout for the requested operation has expired). So I must be doing something wrong, which makes me second-guess whether I'm misunderstanding something at a higher level. Could anyone point me in the right direction, please?
Update
Everything is in the namespace AgendaUpdates, except for the event class, which is in the AgendaUpdates.Events namespace.
Update 2
Steps taken:
Copied AgendaUpdates solution (to AgendaUpdates2 folder)
In the copy I changed the Endpoint attribute of the MessageEndpointMappings in App.config to "AgendaUpdates2"
I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to EndPointConfig: configuration.EndpointName("AgendaUpdates2");
I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to the Main method in the Program class:
configuration.EndpointName("AgendaUpdates2");
Got the original exception again after pressing a key.
--> I tested this by starting two instances of Visual Studio, one with the original solution and one with the copy, and then started both console apps from the IDE.
I'm not exactly sure why you are getting that specific exception, but I can explain why what you are trying to do fails. The problem is not having publisher and subscriber in the same application (this is possible and can be useful); the problem is that you are running two instances of the same application on the same machine.
NServiceBus relies on queuing technology (MSMQ in your case), and for everything to work properly each application needs to have its own unique queue. When you fire up two identical instances, both are trying to share the same queue.
There are a few things you can tinker with to get your scenario to work and to better understand how the queuing works:
Change the EndPointName of your second instance
Run the second instance on a separate machine
Separate the publisher and subscriber into separate processes
Regardless of which way you go, you will need to adjust your MessageEndpointMappings (on the consumer/subscriber) to reflect where the host/publisher queue lives (the "owner" of the message type):
http://docs.particular.net/nservicebus/messaging/message-owner#configuring-endpoint-mapping
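For example, the subscriber's App.config mapping might look roughly like this (a sketch; it assumes your event types live in the AgendaUpdates assembly under the AgendaUpdates.Events namespace, and that the publisher's endpoint/queue is named AgendaUpdates):
<configSections>
  <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
</configSections>
<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- events in AgendaUpdates.Events are owned by (published from) the AgendaUpdates endpoint -->
    <add Assembly="AgendaUpdates" Namespace="AgendaUpdates.Events" Endpoint="AgendaUpdates" />
  </MessageEndpointMappings>
</UnicastBusConfig>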
Edit based on your updates
I know this is a test setup/proof of concept, but it's still useful to think of these two deployments (of the same code) as publisher/host and subscriber/client. So let's call the original the host and the copy the client. I assume you don't want to have each subscribe to the other (at least for this basic test).
Also, make sure you are running both IDEs as Administrator on your machine. I'm not sure if this is interfering or not.
In the copy I changed the Endpoint attribute of the MessageEndpointMappings in App.config to "AgendaUpdates2". I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
Since the copy is the client, you want to point its mapping to the host. So this should be "AgendaUpdates" (omit the "2").
In the copy I added this line of code to EndPointConfig: configuration.EndpointName("AgendaUpdates2"); I got MSMQ exception: "the queue does not exist or you do not have sufficient permissions to perform the operation"
In the copy I added this line of code to the Main method in the Program class: configuration.EndpointName("AgendaUpdates2"); Got the original exception again after pressing a key
I did not originally notice this, but you don't need to configure the endpoint twice. I believe your EndPointConfig is not getting called, as it is only used when hosting via the NSB host executable. You can likely just delete this class.
This otherwise sounds reasonable, but remember that your copy should not be publishing if it's the subscriber, so don't press any keys after it starts (only press keys in the original).
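Put differently, the copy's Main could be trimmed down to roughly this (a sketch based on the code you posted; it only starts the bus and handles events, and never publishes):
BusConfiguration configuration = new BusConfiguration();
configuration.EndpointName("AgendaUpdates2");   // unique queue for this instance
configuration.UsePersistence<InMemoryPersistence>();
configuration.UseSerialization<XmlSerializer>();
configuration.UseTransport<MsmqTransport>();
ConventionsBuilder conventions = configuration.Conventions();
conventions.DefiningEventsAs(t => t.Namespace != null && t.Namespace.Contains("Events"));
using (IStartableBus bus = Bus.Create(configuration))
{
    bus.Start();                                // subscribe and handle only; do not publish here
    Console.WriteLine("Subscriber running. Press a key to exit.");
    Console.ReadKey();
}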
If you want the publisher to also be the receiver of the message, you need to specify this in configuration.
This is clearly explained in this article; the solution to your problem is right at the end of it.
Generally with services, the task you want to complete is repeated, maybe in a loop, maybe on a trigger, or maybe something else.
I'm using Topshelf to complete a repeated task for me, specifically I'm using the Shelf'ing functionality.
The problem I'm having is how to handle the looping of the task.
When bootstrapping the service in Topshelf, you pass it a class (in this case ScheduledQueueService) and indicate which are its Start and Stop methods:
Example:
public class QueueBootstrapper : Bootstrapper<ScheduledQueueService>
{
    public void InitializeHostedService(IServiceConfigurator<ScheduledQueueService> cfg)
    {
        cfg.HowToBuildService(n => new ScheduledQueueService());
        cfg.SetServiceName("ScheduledQueueHandler");
        cfg.WhenStarted(s => s.StartService());
        cfg.WhenStopped(s => s.StopService());
    }
}
But in my StartService() method I am using a while loop to repeat the task I'm running. When I attempt to stop the service through Windows services it fails to stop, and I suspect it's because the StartService() method never returned when it was originally called.
Example:
public class ScheduledQueueService
{
    bool QueueRunning;
    public ScheduledQueueService()
    {
        QueueRunning = false;
    }
    public void StartService()
    {
        QueueRunning = true;
        while (QueueRunning)
        {
            //do some work
        }
    }
    public void StopService()
    {
        QueueRunning = false;
    }
}
What is a better way of doing this?
I've considered using .NET System.Threading.Tasks to run the work in, and then maybe stopping the task in StopService().
Maybe using Quartz to repeat the task and then remove it.
Thoughts?
Generally, how I would handle this is to have a Timer event that fires a few moments after StartService() is called. At the end of the event handler, I check for a stop flag (set in StopService()); if the flag (e.g. your QueueRunning) isn't set, I register a single event on the Timer to fire again in a few moments.
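A minimal sketch of that idea with System.Timers.Timer (illustrative only, not Topshelf-specific code) might be:
public class ScheduledQueueService
{
    readonly System.Timers.Timer timer = new System.Timers.Timer(5000) { AutoReset = false };
    volatile bool stopRequested;
    public void StartService()
    {
        timer.Elapsed += (s, e) =>
        {
            // do some work here
            if (!stopRequested)
                timer.Start();   // re-arm for another single shot
        };
        timer.Start();           // returns immediately, so Windows sees the service as started
    }
    public void StopService()
    {
        stopRequested = true;
        timer.Stop();
    }
}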
We do something pretty similar in Topshelf itself, when polling the file system: https://github.com/Topshelf/Topshelf/blob/v2_master/src/Topshelf/FileSystem/PollingFileSystemEventProducer.cs#L80
Now, that uses the internal scheduler type instead of a Timer object, but generally it's the same thing. The fiber basically determines which thread the event is processed on.
If you have future questions, you are also welcomed to join the Topshelf mailing list. We try to be pretty responsive on there. http://groups.google.com/group/topshelf-discuss
I was working on some similar code today when I stumbled on https://stackoverflow.com/a/2033431/981 by accident, and it's been working like a charm for me.
I don't know about Topshelf specifically, but when writing a standard Windows service you want the start and stop events to complete as quickly as possible. If the start method takes too long, for example, Windows assumes that the service has failed to start.
To get around this I generally use a System.Timers.Timer. This is set to call a startup method just once with a very short interval (so it runs almost immediately). This then does the bulk of the work.
In your case this could be your method that is looping. Then at the start of each loop, check a global shutdown variable; if it's true, you quit the loop and the program can stop.
You may need a bit more (or maybe even less) complexity than this depending on where exactly the error is, but the general principle should be fine, I hope.
Once again, though, I will disclaim that this knowledge is not based on Topshelf, just general service development.
This is a rather weird issue that I am facing with my WCF/Silverlight application. I am using a WCF service to get data from a database for my Silverlight application, and the completed event is not triggering for one WCF method on some systems. I have checked that the called method executes properly and returns the values, and Fiddler clearly shows that the response contains the returned values as well. However, the completed event is not getting triggered. Moreover, on a few of the systems everything is fine and I am able to process the returned value in the completed method.
Any thoughts or suggestions would be greatly appreciated. I have tried searching around the web but without any luck :(
Following is the code. Calling the method:
void RFCDeploy_Loaded(object sender, RoutedEventArgs e)
{
    btnSelectFile.IsEnabled = true;
    btnUploadFile.IsEnabled = false;
    btnSelectFile.Click += new RoutedEventHandler(btnSelectFile_Click);
    btnUploadFile.Click += new RoutedEventHandler(btnUploadFile_Click);
    RFCChangeDataGrid.KeyDown += new KeyEventHandler(RFCChangeDataGrid_KeyDown);
    btnAddRFCManually.Click += new RoutedEventHandler(btnAddRFCManually_Click);
    ServiceReference1.DataService1Client ws = new BEVDashBoard.ServiceReference1.DataService1Client();
    ws.GetRFCChangeCompleted += new EventHandler<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>(ws_GetRFCChangeCompleted);
    ws.GetRFCChangeAsync();
    this.BusyIndicator1.IsBusy = true;
}
Completed event:
void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
{
    PagedCollectionView view = new PagedCollectionView(e.Result);
    view.GroupDescriptions.Add(new PropertyGroupDescription("RFC"));
    RFCChangeDataGrid.ItemsSource = view;
    foreach (CollectionViewGroup group in view.Groups)
    {
        RFCChangeDataGrid.CollapseRowGroup(group, true);
    }
    this.BusyIndicator1.IsBusy = false;
}
Please note that this WCF service has lots of other methods as well, and all of them are working fine. I have a problem with only this method.
Thanks...
As others have noted, a look at some of your code would help. But some things to check:
(1) Turn off "Enable Just My Code" under Debug/Options/Debugging/General, and set some breakpoints in the Reference.cs file, to see whether any of the low-level callback methods there are getting hit.
(2) Confirm that you're setting the completed event handlers, and on the right instance of the proxy client. If you're setting the event handlers on one instance, and making the call on another, that could result in the behavior you're describing.
(3) Poke around with MS Service Trace Viewer, as described here, and see if there are any obvious errors (usually helpfully highlighted in red).
Likely there are other things you could check, but this will keep you busy for a day or so :-).
(Edits made after code posted)
(4) You might want to try defining your ws variable at the class level rather than inside the function (see the sketch after this list). In theory, having an event handler attached to it means that it won't get garbage collected, but it's still a little odd, in that once you're out of the function you don't have a handle to it anymore, and hence can't do important things like, say, closing it.
(5) If you haven't already, try rebuilding your proxy class through the Add Service Reference dialog box in Visual Studio. I've seen the occasional odd problem pop up when the web service has changed subtly and the client wasn't updated to reflect the changes: some methods will get called successfully, others won't.
(6) If you're likely to have multiple instances of a proxy client open at the same time, consider merging them into one instance (and use the optional "object userState" parameter of the method call to pass the callback, so you don't run into the nasty possibility of multiple event handlers getting assigned). I've run into nasty problems in the past when multiple instances were stepping on each other, and my current best practice is to structure my code in such a way that there's only ever one client instance open at a time. I know that's not necessarily what MS says, but it's been my experience.
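For example, points (2) and (4) together might look roughly like this (a sketch built on the code in the question):
// Keep one class-level client and attach the handler to the same instance that makes the call.
ServiceReference1.DataService1Client ws;
void RFCDeploy_Loaded(object sender, RoutedEventArgs e)
{
    ws = new BEVDashBoard.ServiceReference1.DataService1Client();
    ws.GetRFCChangeCompleted += ws_GetRFCChangeCompleted;   // same instance that makes the call below
    ws.GetRFCChangeAsync();
    this.BusyIndicator1.IsBusy = true;
}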
This issue was caused by special characters in one of the fields returned from the DB, which the browser was not able to render. After considerable debugging and searching the web, I was able to figure this out. I used regular expressions to remove these special characters in the WCF service, and the values returned from the method were then rendered successfully in various browsers on different systems. :)
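A sketch of that kind of clean-up, assuming the offending characters are ones outside the XML 1.0 character range (the exact pattern depends on what was actually breaking the rendering):
using System.Text.RegularExpressions;
// Strip characters outside the XML 1.0 range before returning strings from the WCF method.
static string RemoveInvalidXmlChars(string input)
{
    return Regex.Replace(input, @"[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD]", string.Empty);
}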
Make sure you have checked 'Generate asynchronous operations' in your service reference. Right-click on the service reference and check the box. This solved it for me.
Apologies for the indescriptive title, however it's the best I could think of for the moment.
Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods on it, even if the class that called it has shut down.
The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:
public sealed class BulkFileLoader
{
    static BulkFileLoader instance = null;
    int currentCount = 0;
    BulkFileLoader() { }
    public static BulkFileLoader Instance
    {
        get
        {
            // Instantiate the instance if necessary, and return it
            if (instance == null)
                instance = new BulkFileLoader();
            return instance;
        }
    }
    public void Go()
    {
        // kick off the 'ProcessFile' thread
        new System.Threading.Thread(ProcessFile).Start();
    }
    public int GetCurrentCount()
    {
        return currentCount;
    }
    private void ProcessFile()
    {
        while (true)  // while there are more rows in the import file
        {
            // insert the row into the database
            currentCount++;
        }
    }
}
The idea is that you can get an instance of BulkFileLoader and have it process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method.
This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.
I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it up as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.
In the end, I want to be able to kick off the BulkFileLoader and, at any time, find out how many rows it has processed, even if I close the application I used to start it.
Can anyone see a solution to my problem?
You could create a Windows Service which exposes, say, a WCF endpoint as its API. Through this API you'll be able to query the service's status and add more files for processing.
You should make your "Bulk Uploader" a service, and have your other processes speak to it via IPC.
You need a service because your upload takes hours, and it sounds like you'd like it to run unattended if necessary and be detached from the calling thread. That's what services do well.
You need some form of Inter-Process Communication because you'd like to send information between processes.
For communicating with your service see NetNamedPipeBinding
You can then send "Job Start" and "Job Status" commands and queries to your background service whenever you like.
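For instance, the IPC surface might look roughly like this (names are illustrative, not from the question):
using System.ServiceModel;
// Hypothetical contract the Windows Service could expose over a named pipe.
[ServiceContract]
public interface IBulkLoaderControl
{
    [OperationContract]
    void StartJob(string filePath);   // hand another file to the loader
    [OperationContract]
    int GetCurrentCount();            // rows processed so far
}
// Inside the service's OnStart, host the endpoint (BulkLoaderControl would be a class
// implementing IBulkLoaderControl that delegates to BulkFileLoader.Instance):
// var host = new ServiceHost(typeof(BulkLoaderControl));
// host.AddServiceEndpoint(typeof(IBulkLoaderControl), new NetNamedPipeBinding(),
//     "net.pipe://localhost/BulkFileLoader");
// host.Open();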