Prevent Silverlight Out-Of-Browser App from opening twice - C#

I'm trying to prevent a Silverlight OOB app from opening twice, but I have no idea how.
I tried creating a FileStream with FileShare.None directly after app launch, so that a second instance would throw an error when it tries to open the same file, but it's ugly and doesn't work because the FileStream seems to be released after about 30 seconds.
FileStream s2 = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
Any idea how I could approach this?
Thanks, phil

One way to achieve this is to use the local messaging channel between Silverlight applications. This scenario is mentioned on MSDN; I'll expand on it a little here.
The LocalMessageReceiver class allows you to register with a messaging service primarily intended for communication between different Silverlight applications.
The trick is that you can only register once with the same name in a particular scope. So as a consequence, if the first instance registers itself using your application name, any other instance doing the same afterwards will trigger an exception. Then you just have to catch that exception and deal with it, depending on what you want to do (close the instance, display a message, etc.)
Here's the code I use:
private bool CheckSingleInstance()
{
    try
    {
        // LocalMessageReceiver lives in System.Windows.Messaging.
        // Only one receiver can listen on a given name in the global scope, so the
        // first instance wins. Note: keep the receiver referenced (e.g. in a field)
        // so it isn't collected while the application runs.
        var receiver = new LocalMessageReceiver("application name",
                                                ReceiverNameScope.Global,
                                                LocalMessageReceiver.AnyDomain);
        receiver.Listen();
        return true;
    }
    catch (ListenFailedException)
    {
        // A listener with this name already exists: another instance is running.
        return false;
    }
}
An advantage of this solution is that it works whether your instances are in-browser or out-of-browser.
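For completeness, here is a minimal sketch of how the check might be wired into App.xaml.cs. CheckSingleInstance is the method above; the message box, the early return, and the MainPage name are my assumptions, not part of the original answer:
// App.xaml.cs - a minimal sketch, assuming the CheckSingleInstance method above
// and a default MainPage. What you do when a second instance is detected is up to you.
private void Application_Startup(object sender, StartupEventArgs e)
{
    if (!CheckSingleInstance())
    {
        // Another instance is already listening on the local messaging channel.
        MessageBox.Show("The application is already running.");
        return;   // in an out-of-browser app you could also close the main window here
    }

    this.RootVisual = new MainPage();
}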

Related

MS AI: Duplicate event with Persistence Channel

We're introducing Application Insights into our desktop app. Since the user might be offline when using the app, we're using a PersistenceChannel to make sure the events can be sent in a later session, and we call Flush when the app is shutting down (in the Dispose() of our tracker):
public ApplicationInsightsTracker()
{
    this.client = new TelemetryClient();
    this.client.InstrumentationKey = InstrumentationKey;
    TelemetryConfiguration.Active.TelemetryChannel = new PersistenceChannel();
    TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;
}

~ApplicationInsightsTracker()
{
    this.Dispose();
}

public override void Dispose()
{
    this.client.Flush();
    GC.SuppressFinalize(this);
}

public override void TrackEvent(ITrackerEvent trackerEvent)
{
    try
    {
        this.client.TrackEvent(trackerEvent.Name, trackerEvent.Properties);
    }
    catch (Exception e)
    {
        Debug.WriteLine(string.Format("Failed to track event {0}. Exception message {1}", trackerEvent.Name, e.Message));
    }
}
We're also using continuous export to send the event data from Application Insights to Azure Blob storage. We connect Power BI to that blob storage, and the other day the refresh functionality stopped working. We investigated, and it turns out we were loading 2 events with the same unique ID. Looking into the blobs we found 2 consecutive blobs containing the same event:
blob1.blob - Holds 1 event
{"event":...,"internal":{"data":{"id":"8709bb70-e6b1-11e5-9080-f77f0d66d988"..."data":{..."eventTime":"2016-03-10T11:15:53.9378827Z"}..."user":{..."anonId": "346033da-012d-4cc4-9841-836e5d8f8e32"..."session":{"id":"cb668d2f-9755-4afd-97c2-66cc3504349a"...
blob2.blob - Holds 3 events
{"event":...,"internal":{"data":{"id":"8709bb70-e6b1-11e5-9080-f77f0d66d988"..."data":{..."eventTime":"2016-03-10T11:15:53.9378827Z"}..."user":{..."anonId": "346033da-012d-4cc4-9841-836e5d8f8e32"..."session":{"id":"cb668d2f-9755-4afd-97c2-66cc3504349a"...
{"event":...
{"event":...
As you can see, the first event in both blobs is the same. We were running tests on the PersistenceChannel with the machine connected to and disconnected from the network, and somewhere along the line AI did this.
We're not entirely sure if this is a problem with how we're using it or a flaw in the library. As you can imagine, getting duplicate events through can be quite a pain (especially if you're building a model externally).
Are we doing something odd with AI, or is this a known issue?
I checked with the team that does export, and they said
the current export pipeline has opportunities for duplicate exports
And it is something they are looking into.
So it doesn't look like you are doing anything wrong; this is just a case you'll need to be aware of and work around for now.
Exported data from AppInsights may contain dupes.
If you are exporting all your data to Power BI, then you can use Power Query's built-in duplicate removal feature.
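If you process the exported blobs yourself instead, a simple de-duplication pass keyed on the event id (the internal.data.id field visible in the blobs above) also works. A minimal sketch; ExportedEvent and its Id property are hypothetical names for whatever type you deserialize the exported JSON lines into:
using System.Collections.Generic;

// De-duplicate exported events by their internal data id.
// ExportedEvent / Id are placeholder names, not part of the AI export SDK.
static IEnumerable<ExportedEvent> Deduplicate(IEnumerable<ExportedEvent> events)
{
    var seen = new HashSet<string>();
    foreach (var e in events)
    {
        if (seen.Add(e.Id))   // Add returns false if this id was already seen
            yield return e;
    }
}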

Can I redirect output from a C DLL into my C# log4net output

I have a C# application which in turn loads a C or C++ DLL (which in turn loads other C/C++ DLLs). Within the C# application I use a log4net logger to capture all the output into a series of log files. My application runs as a Windows service, so there is no console/output window for normal printfs or output written to stdout/stderr to go to.
Is there a way to set up the C# application to redirect stdout/stderr (from the DLLs) and turn each line into a log4net output? Or is there some way within the C/C++ DLL to connect the stdout/stderr streams to the log4net output?
I found a solution (here: http://bytes.com/topic/c-sharp/answers/822341-dllimport-stdout-gets-eaten) that indicated I needed to put a call into my C DLL like this: setvbuf(stdout, NULL, _IONBF, 0); Though I don't know exactly what that does, it doesn't do what I want. I assume I'd also need a similar line for stderr. In either case, Google seemed to think those lines simply take care of buffering, not redirection into log4net.
I assume I need some sort of function override which snags the console writes (from a loaded DLL in another language) and converts them into mLog.InfoFormat("{0}", consoleString); sorts of calls. I'm new to C# and not even sure what terms to google in order to find such an override (if it's even possible).
Not sure if this complicates the problem, but my C# application is multithreaded and some of the DLLs have multiple threads as well. I assume that just means I need a lock of some sort inside the method that handles the console output and writes it into the log4net framework (maybe), or maybe the normal serialization of log4net will handle it for me.
Turns out named pipes and SetStdHandle did the trick once I figured out how to use them. I set up two named pipes (or two ends of the same pipe?). One end I connected to stdout, and whatever came through the pipe I wrote out as a log4net message.
internal static void InfoLogWriter(Object threadContext)
{
    mLog.Info("InfoLogWriter thread started");
    int id = Process.GetCurrentProcess().Id; // make this instance unique
    var serverPipe = new NamedPipeServerStream("consoleRedirect" + id, PipeDirection.In, 1);
    NamedPipeClientStream clientPipe = new NamedPipeClientStream(".", "consoleRedirect" + id, PipeDirection.Out, PipeOptions.WriteThrough);
    mLog.Info("Connecting Client Pipe.");
    clientPipe.Connect();
    mLog.Info("Connected Client Pipe, redirecting stdout");
    HandleRef hr11 = new HandleRef(clientPipe, clientPipe.SafePipeHandle.DangerousGetHandle());
    SetStdHandle(-11, hr11.Handle); // redirect stdout to the write end of my pipe (Win32 P/Invoke, see below)
    mLog.Info("Redirection of stdout complete.");
    mLog.Info("Waiting for console connection");
    serverPipe.WaitForConnection(); // blocking
    mLog.Info("Console connection made.");
    using (var stm = new StreamReader(serverPipe))
    {
        while (serverPipe.IsConnected)
        {
            try
            {
                string txt = stm.ReadLine();
                if (!string.IsNullOrEmpty(txt))
                    mLog.InfoFormat("DLL MESSAGE : {0}", txt);
            }
            catch (IOException)
            {
                break; // normal disconnect
            }
        }
    }
    mLog.Info("Console connection broken. Thread Stopping.");
}
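For reference, the code above assumes a P/Invoke declaration for the Win32 SetStdHandle function roughly like the following; the constants are the standard Win32 values, and the class name is my own:
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Standard Win32 handle ids: -10 = stdin, -11 = stdout, -12 = stderr
    internal const int STD_INPUT_HANDLE  = -10;
    internal const int STD_OUTPUT_HANDLE = -11;
    internal const int STD_ERROR_HANDLE  = -12;

    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool SetStdHandle(int nStdHandle, IntPtr hHandle);
}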
I also have a function to push all that onto another thread so it doesn't block my main thread when it hits the various blocking calls.
internal static void RedirectConsole()
{
    mLog.Info("RedirectConsole called.");
    ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(InfoLogWriter));
    // TODO: enqueue an item for error messages too.
}
I'm having trouble with it disconnecting and having to reconnect the pipes, but I'll figure out a reconnect solution. I'm guessing that happens when DLLs get swapped back out of memory, or perhaps when I try to read but there isn't anything currently ready to be read. I've also got to set up another pair to snag stderr and redirect it as well, using Error-level logs for that one. I probably also want to get rid of the magic number (-11) and use the normal constants instead (STD_OUTPUT_HANDLE, STD_ERROR_HANDLE, etc.).

The process cannot access the file because it is being used by another process

I am trying to do the following:
var path = Server.MapPath("File.js");
// Create the file if it doesn't exist or if the application has been restarted
// and the file was created before the application restarted
if (!File.Exists(path) || ApplicationStartTime > File.GetLastWriteTimeUtc(path)) {
    var script = "...";
    using (var sw = File.CreateText(path)) {
        sw.Write(script);
    }
}
However, the following error is occasionally thrown:
The process cannot access the file '...\File.js' because it is being
used by another process
I have looked on here for similar questions, but mine seems slightly different from the others. Also, I cannot replicate it unless the server is under heavy load, so I want to make sure the fix is correct before I upload it.
I'd appreciate it if someone could show me how to fix this.
Thanks
It sounds like two requests are running on your server at the same time, and they're both trying to write to that file at the same time.
You'll want to add in some sort of locking behavior, or else write a more robust architecture. Without knowing more about what specifically you're actually trying to accomplish with this file-writing procedure, the best I can suggest is locking. I'm generally not a fan of locking like this on web servers, since it makes requests depend on each other, but this would solve the problem.
Edit: Dirk pointed out below that this may or may not actually work. Depending on your web server configuration, static instances may not be shared, and the same result could occur. I've offered this as a proof of concept, but you should most definitely address the underlying problem.
private static object lockObj = new object();

private void YourMethod()
{
    var path = Server.MapPath("File.js");

    lock (lockObj)
    {
        // Create the file if it doesn't exist or if the application has been restarted
        // and the file was created before the application restarted
        if (!File.Exists(path) || ApplicationStartTime > File.GetLastWriteTimeUtc(path))
        {
            var script = "...";
            using (var sw = File.CreateText(path))
            {
                sw.Write(script);
            }
        }
    }
}
But, again, I'd be tempted to reconsider what you're actually trying to accomplish. Perhaps you could build this file in the Application_Start method, or even in a static constructor. Doing it for every request is a messy approach that is likely to cause issues, particularly under heavy load, where every request will be forced to run synchronously.
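For illustration, a minimal sketch of the Application_Start approach, assuming the same File.js path and script content as above (both are placeholders here):
// Global.asax.cs - write File.js once when the application starts, so no
// per-request locking is needed. Uses HostingEnvironment.MapPath because
// Application_Start may run without a request context.
protected void Application_Start(object sender, EventArgs e)
{
    var path = System.Web.Hosting.HostingEnvironment.MapPath("~/File.js");
    var script = "...";
    System.IO.File.WriteAllText(path, script);
}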

Starting application from service running as SYSTEM that can interact with the user

I currently have a single application that needs to be started from a Windows service that I am coding in .NET 3.5. This application currently runs as the user who runs the service, in my case the SYSTEM user. When running as the SYSTEM user, the application is not shown on the user's desktop. Thoughts? Advice?
// constructor
private Process ETCHNotify = new Process();

// StartService()
ETCHNotify.StartInfo.FileName = baseDir + "\\EtchNotify.exe";
ETCHNotify.StartInfo.UseShellExecute = false;

// BackgroundWorkerThread_DoWork()
if (!systemData.GetUserName().Equals(""))
{
    // start ETCHNotify
    try
    {
        ETCHNotify.Start();
    }
    catch (Exception ex)
    {
        systemData.Run("ERR: Notify can't start: " + ex.Message);
    }
}
I only execute the try/catch if the function I have written, GetUserName() (which determines the username of the user running explorer.exe), returns a non-empty string.
Again, to reiterate: the desired functionality is that this starts ETCHNotify in a state that allows it to interact with the currently logged-in user as determined by GetUserName().
A collage of some posts found around (this and this):
Note that as of Windows Vista, services are strictly forbidden from interacting directly with a user:
Important: Services cannot directly interact with a user as of Windows
Vista. Therefore, the techniques mentioned in the section titled Using
an Interactive Service should not be used in new code.
This "feature" is broken, and conventional wisdom dictates that you shouldn't have been relying on it anyway. Services are not meant to provide a UI or allow any type of direct user interaction. Microsoft has been cautioning that this feature be avoided since the early days of Windows NT because of the possible security risks.
There are some possible workarounds, however, if you absolutely must have this functionality. But I strongly urge you to consider its necessity carefully and explore alternative designs for your service.
Using WTSEnumerateSessions to find the right desktop, then CreateProcessAsUser to start the application on that desktop (you pass the desktop name as part of the STARTUPINFO structure), is the correct approach.
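If you do go down that route, the code is roughly shaped like the following sketch. This is a hedged illustration, not code from the original poster: it targets the active console session via WTSGetActiveConsoleSessionId rather than enumerating sessions, and the P/Invoke declarations are the usual Win32 signatures:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

internal static class DesktopLauncher
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct STARTUPINFO
    {
        public int cb;
        public string lpReserved, lpDesktop, lpTitle;
        public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
        public short wShowWindow, cbReserved2;
        public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct PROCESS_INFORMATION
    {
        public IntPtr hProcess, hThread;
        public int dwProcessId, dwThreadId;
    }

    [DllImport("kernel32.dll")]
    private static extern uint WTSGetActiveConsoleSessionId();

    [DllImport("wtsapi32.dll", SetLastError = true)]
    private static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool CreateProcessAsUser(IntPtr hToken, string appName, string cmdLine,
        IntPtr procAttrs, IntPtr threadAttrs, bool inheritHandles, uint creationFlags,
        IntPtr environment, string currentDirectory, ref STARTUPINFO startupInfo,
        out PROCESS_INFORMATION processInfo);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);

    // Launch an exe on the interactive desktop of the active console session.
    // The service must run as SYSTEM for WTSQueryUserToken to succeed.
    public static void LaunchOnActiveDesktop(string exePath)
    {
        uint sessionId = WTSGetActiveConsoleSessionId();
        IntPtr userToken;
        if (!WTSQueryUserToken(sessionId, out userToken))
            throw new Win32Exception();
        try
        {
            STARTUPINFO si = new STARTUPINFO();
            si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
            si.lpDesktop = @"winsta0\default";   // the interactive desktop
            PROCESS_INFORMATION pi;
            if (!CreateProcessAsUser(userToken, exePath, null, IntPtr.Zero, IntPtr.Zero,
                                     false, 0, IntPtr.Zero, null, ref si, out pi))
                throw new Win32Exception();
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        finally
        {
            CloseHandle(userToken);
        }
    }
}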
However, I would strongly recommend against doing this. In some environments, such as Terminal Server hosts with many active users, determining which desktop is the 'active' one isn't easy, and may not even be possible.
A more conventional approach would be to put a shortcut to a small client app for your service in the global startup group. This app will then launch along with every user session, and can be used to start other apps (if so desired) without any juggling of user credentials, sessions and/or desktops.
Ultimately, in order to solve this, I took the advice of @marco and the posts he mentioned. I created the service to be entirely independent of the tray application that interacts with the user. I did, however, install the tray application via a registry 'start up' method along with the service; the service installer now installs the application which interacts with the user as well... This was the safest and most complete method.
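For anyone curious, the registry 'start up' part can be done from the installer with something like this sketch; the value name and the exe path are placeholders for whatever your installer actually uses:
using Microsoft.Win32;

// A minimal sketch: register the tray app so it starts with every user session.
// "EtchNotify" and the path below are placeholders.
static void RegisterTrayAppStartup()
{
    using (RegistryKey run = Registry.LocalMachine.OpenSubKey(
               @"SOFTWARE\Microsoft\Windows\CurrentVersion\Run", true))
    {
        run.SetValue("EtchNotify", @"C:\Program Files\Etch\EtchNotify.exe");
    }
}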
thanks for your help everyone.
I wasn't going to answer this since you already answered it, (and it's oh, what? going on 2.5 years OLD now!?) But there are ALWAYS those people who are searching for this same topic, and reading the answers...
In order to get my service to interact with the desktop: it doesn't matter WHAT desktop, or how MANY desktops, or even whether the service is running on the SAME COMPUTER as the desktop app!! None of that matters with what I've got here... I won't bore you with the details, I'll just give you the meat and potatoes, and you let me know if you want to see more...
Ok. First thing I did was create an Advertisement Service. This is a thread that the service runs, opening a UDP socket to listen for broadcasts on the network. Then, using the same piece of code, I shared it with the client app, but it calls up Advertise.CLIENT rather than Advertise.SERVER... The CLIENT opens the port I expect the service to be on and broadcasts a message: "Hello... Is there anybody out there?? Are there ANY servers listening, and if so, reply back to THIS IP address with your computer name, IP address and port # where I can find the .NET remoting services..." Then it waits a small time-out, gathers up the responses it gets, and if there is more than one, it presents the user with a dialog box and a list of services that responded... The client then selects one, or, if only ONE responded, it will call Connect((TServerResponse) res); on that, to get connected up. At this point, the server is using Remoting Services with the WellKnownClientType and WellKnownServerType to put itself out there...
I don't think you are too interested in my "Auto-Service locator", because a lot of people frown on UDP, even more so when your app starts broadcasting on large networks. So I'm assuming you'd be more interested in my RemotingHelper, which gets the client connected up to the server. It looks like this:
public static Object GetObject(Type type)
{
    try {
        if(_wellKnownTypes == null) {
            InitTypeCache();
        }
        WellKnownClientTypeEntry entr = (WellKnownClientTypeEntry)_wellKnownTypes[type];
        if(entr == null) {
            throw new RemotingException("Type not found!");
        }
        return System.Activator.GetObject(entr.ObjectType, entr.ObjectUrl);
    } catch(System.Net.Sockets.SocketException sex) {
        DebugHelper.Debug.OutputDebugString("SocketException occurred in RemotingHelper::GetObject(). Error: {0}.", sex.Message);
        Disconnect();
        if(Connect()) {
            return GetObject(type);
        }
    }
    return null;
}

private static void InitTypeCache()
{
    if(m_AdvertiseServer == null) {
        throw new RemotingException("AdvertisementServer cannot be null when connecting to a server.");
    }
    _wellKnownTypes = new Dictionary<Type, WellKnownClientTypeEntry>();

    Dictionary<string, object> channelProperties = new Dictionary<string, object>();
    channelProperties["port"] = 0;
    channelProperties["name"] = m_AdvertiseServer.ChannelName;

    Dictionary<string, object> binFormatterProperties = new Dictionary<string, object>();
    binFormatterProperties["typeFilterLevel"] = "Full";

    if(Environment.UserInteractive) {
        BinaryServerFormatterSinkProvider binFormatterProvider = new BinaryServerFormatterSinkProvider(binFormatterProperties, null);
        _serverChannel = new TcpServerChannel(channelProperties, binFormatterProvider);
        // LEF: Only if we are coming from OUTSIDE the SERVICE do we want to register the channel,
        // since the SERVICE already has this channel registered in this AppDomain.
        ChannelServices.RegisterChannel(_serverChannel, false);
    }

    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IPawnStatServiceStatus)));
    RegisterType(typeof(IPawnStatServiceStatus), m_AdvertiseServer.RunningStatusURL.ToString());
    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IPawnStatService)));
    RegisterType(typeof(IPawnStatService), m_AdvertiseServer.RunningServerURL.ToString());
    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IServiceConfiguration)));
    RegisterType(typeof(IServiceConfiguration), m_AdvertiseServer.RunningConfigURL.ToString());
}

[SecurityPermission(SecurityAction.Demand, Flags=SecurityPermissionFlag.RemotingConfiguration, RemotingConfiguration=true)]
public static void RegisterType(Type type, string serviceUrl)
{
    WellKnownClientTypeEntry clientType = new WellKnownClientTypeEntry(type, serviceUrl);
    if(clientType != RemotingConfiguration.IsWellKnownClientType(type)) {
        RemotingConfiguration.RegisterWellKnownClientType(clientType);
    }
    _wellKnownTypes[type] = clientType;
}

public static bool Connect()
{
    // Init the Advertisement Service, and locate any listening services out there...
    m_AdvertiseServer.InitClient();
    if(m_AdvertiseServer.LocateServices(iTimeout)) {
        if(!Connected) {
            bConnected = true;
        }
    } else {
        bConnected = false;
    }
    return Connected;
}

public static void Disconnect()
{
    if(_wellKnownTypes != null) {
        _wellKnownTypes.Clear();
    }
    _wellKnownTypes = null;
    if(_serverChannel != null) {
        // LEF: If we are coming from the SERVICE, we do *NOT* want to unregister the channel,
        // since it stays registered for the lifetime of the service; only unregister when
        // running interactively (outside the service).
        if(Environment.UserInteractive) {
            ChannelServices.UnregisterChannel(_serverChannel);
            _serverChannel = null;
        }
    }
    bConnected = false;
}
}
So, THAT is the meat of my remoting code, and it allowed me to write a client that didn't have to be aware of where the service was installed, or how many services were running on the network. This allowed me to communicate with it over the network, or on the local machine. And it wasn't a problem to have two or more people running the app; yours, however, might be. Now, I have some complicated callback code in mine, where I register events to go across the remoting channel, so I have to have code that checks whether the client is even still connected before I send the notification to the client that something happened. Plus, if you are running for more than one user, you might not want to use Singleton objects. It was fine for me, because the server OWNS the objects, and they are whatever the server SAYS they are. So my STATS object, for example, is a Singleton. No reason to create an instance of it for EVERY connection, when everyone is going to see the same data, right?
I can provide more chunks of code if necessary. This is, of course, one TINY bit of the overall picture of what makes this work... Not to mention the subscription providers, and all that.
For the sake of completeness, I'm including the code chunk to keep your service connected for the life of the process.
public override object InitializeLifetimeService()
{
    ILease lease = (ILease)base.InitializeLifetimeService();
    if(lease.CurrentState == LeaseState.Initial) {
        lease.InitialLeaseTime = TimeSpan.FromHours(24);
        lease.SponsorshipTimeout = TimeSpan.FromSeconds(30);
        lease.RenewOnCallTime = TimeSpan.FromHours(1);
    }
    return lease;
}

#region ISponsor Members

[SecurityPermissionAttribute(SecurityAction.LinkDemand, Flags=SecurityPermissionFlag.Infrastructure)]
public TimeSpan Renewal(ILease lease)
{
    return TimeSpan.FromHours(12);
}

#endregion
If you include the ISponsor interface as part of your server object, you can implement the above code.
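The sponsor then has to be registered with the remote object's lease. A minimal sketch from the consuming side, assuming the RemotingHelper and IPawnStatService from the code above, where mySponsor is whatever object implements ISponsor (in the code above the server object itself provides the Renewal implementation):
// A minimal sketch: keep the remote object's lease alive by registering a sponsor.
// RemotingHelper / IPawnStatService come from the code above; mySponsor is an assumption.
MarshalByRefObject proxy = (MarshalByRefObject)RemotingHelper.GetObject(typeof(IPawnStatService));
ILease lease = (ILease)RemotingServices.GetLifetimeService(proxy);
if (lease != null)
{
    lease.Register(mySponsor);   // the lease will call ISponsor.Renewal when it is about to expire
}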
Hope SOME of this is useful.
When you register your service, you can tell it to allow interactions with the desktop. You can read this oldie link http://www.codeproject.com/KB/install/cswindowsservicedesktop.aspx
Also, don't forget that you can have multiple users logged in at the same time.
Apparently on Windows Vista and newer interacting with the desktop has been made more difficult. Read this for a potential solution: http://www.codeproject.com/KB/cs/ServiceDesktopInteraction.aspx

Issue writing to single file in Web service in .NET

I have created a web service in .NET 2.0, C#. I need to log some information to a file whenever different methods are called by the web service clients.
The problem comes when one user process is writing to a file and another process tries to write to it. I get the following error:
The process cannot access the file because it is being used by another process.
The solutions that I have tried to implement in C#, without success, are below:
Implemented singleton class that contains code that writes to a file.
Used lock statement to wrap the code that writes to the file.
I have also tried the open-source logger log4net, but it is not a perfect solution either.
I know about logging to system event logger, but I do not have that choice.
I want to know if there exists a perfect and complete solution to such a problem?
The locking is probably failing because your webservice is being run by more than one worker process.
You could protect the access with a named mutex, which is shared across processes, unlike the locks you get by using lock(someobject) {...}:
// 'lock' is a C# keyword, so use another name; note the Mutex constructor
// takes (initiallyOwned, name).
Mutex mutex = new Mutex(false, "mymutex");
mutex.WaitOne();
// access file
mutex.ReleaseMutex();
You don't say how your web service is hosted, so I'll assume it's in IIS. I don't think the file should be accessed by multiple processes unless your service runs in multiple application pools. Nevertheless, I guess you could get this error when multiple threads in one process are trying to write.
I think I'd go for the solution you suggest yourself, Pradeep, build a single object that does all the writing to the log file. Inside that object I'd have a Queue into which all data to be logged gets written. I'd have a separate thread reading from this queue and writing to the log file. In a thread-pooled hosting environment like IIS, it doesn't seem too nice to create another thread, but it's only one... Bear in mind that the in-memory queue will not survive IIS resets; you might lose some entries that are "in-flight" when the IIS process goes down.
Other alternatives certainly include using a separate process (such as a Service) to write to the file, but that has extra deployment overhead and IPC costs. If that doesn't work for you, go with the singleton.
Maybe write a "queue line" of sorts for writing to the file: when you try to write, it keeps checking to see if the file is locked; if it is, it keeps waiting; if it isn't, it writes to it.
You could push the results onto an MSMQ Queue and have a windows service pick the items off of the queue and log them. It's a little heavy, but it should work.
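For illustration, a hedged sketch of that MSMQ approach using System.Messaging; the queue path and method names are placeholders, and error handling is omitted:
using System;
using System.IO;
using System.Messaging;

// A minimal sketch of the MSMQ suggestion. The queue path is a placeholder.
static class MsmqLogging
{
    const string QueuePath = @".\Private$\WebServiceLog";

    // Web service side: enqueue the log text.
    public static void Enqueue(string text)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
        using (MessageQueue queue = new MessageQueue(QueuePath))
            queue.Send(text, "log entry");
    }

    // Windows service side: block until a message arrives, then append it to the file.
    public static void DrainOne(string logFile)
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message message = queue.Receive();
            File.AppendAllText(logFile, (string)message.Body + Environment.NewLine);
        }
    }
}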
Joel and Charles. That was quick! :)
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Charles: I know about MSMQ and windows service combination, but like I said I have no choice other than writing to a file from within the web service :)
thanks
pradeep_tp
The trouble with all the approaches tried so far is that multiple threads can enter the code.
That is, multiple threads try to acquire and use the file handle - hence the errors. You need a single thread, outside of the worker threads, to do the work, with a single file handle held open.
Probably the easiest thing to do would be to create a thread during application start in Global.asax and have it listen on a synchronized in-memory queue (System.Collections.Generic.Queue). Have that thread open and own the lifetime of the file handle; only that thread can write to the file.
Client requests in ASP.NET will lock the queue momentarily, push the new logging message onto it, then unlock.
The logger thread will poll the queue periodically for new messages - when messages arrive on the queue, the thread will read and dispatch the data into the file.
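A hedged sketch of that pattern; the names are placeholders and production error handling is omitted:
using System.Collections.Generic;
using System.IO;
using System.Threading;

// One background thread, started from Application_Start, owns the log file;
// request threads only enqueue strings.
public static class QueuedFileLogger
{
    private static readonly Queue<string> queue = new Queue<string>();
    private static readonly object sync = new object();

    public static void Start(string path)   // call once from Application_Start
    {
        Thread writer = new Thread(delegate()
        {
            using (StreamWriter sw = new StreamWriter(path, true))   // single handle, held open
            {
                while (true)
                {
                    string line = null;
                    lock (sync)
                    {
                        if (queue.Count > 0)
                            line = queue.Dequeue();
                    }
                    if (line != null)
                    {
                        sw.WriteLine(line);
                        sw.Flush();
                    }
                    else
                    {
                        Thread.Sleep(100);   // poll periodically instead of spinning
                    }
                }
            }
        });
        writer.IsBackground = true;
        writer.Start();
    }

    public static void Write(string message)   // called from request threads
    {
        lock (sync)
        {
            queue.Enqueue(message);
        }
    }
}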
To show what I am trying to do in my code, the following is the singleton class I have implemented in C#:
public sealed class FileWriteTest
{
    private static volatile FileWriteTest instance;
    private static object syncRoot = new Object();
    private static Queue logMessages = new Queue();
    private static ErrorLogger oNetLogger = new ErrorLogger();

    private FileWriteTest() { }

    public static FileWriteTest Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new FileWriteTest();
                        Thread MyThread = new Thread(new ThreadStart(StartCollectingLogs));
                        MyThread.Start();
                    }
                }
            }
            return instance;
        }
    }

    private static void StartCollectingLogs()
    {
        // Infinite loop
        while (true)
        {
            cdoLogMessage objMessage = new cdoLogMessage();
            if (logMessages.Count != 0)
            {
                objMessage = (cdoLogMessage)logMessages.Dequeue();
                oNetLogger.WriteLog(objMessage.LogText, objMessage.SeverityLevel);
            }
        }
    }

    public void WriteLog(string logText, SeverityLevel errorSeverity)
    {
        cdoLogMessage objMessage = new cdoLogMessage();
        objMessage.LogText = logText;
        objMessage.SeverityLevel = errorSeverity;
        logMessages.Enqueue(objMessage);
    }
}
When I run this code in debug mode (which simulates just one user access), I get a "stack overflow" error at the line where the queue is dequeued.
Note: In the above code, ErrorLogger is a class that has code to write to the file. objMessage is an entity class that carries the log message.
Alternatively, you might want to do error logging into the database (if you're using one)
Koth,
I have implemented a Mutex lock, which has removed the "stack overflow" error. I have yet to do load testing before I can conclude whether it works fine in all cases.
I was reading about Mutex objects on one of the websites, which said that a Mutex affects performance. I want to know one thing about locking with a Mutex.
Suppose user Process1 is writing to a file and at the same time user Process2 tries to write to the same file. Since Process1 has put a lock on the code block, will Process2 keep trying, or just die after the first attempt itself?
thanks
pradeep_tp
It will wait until the mutex is released....
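If you would rather Process2 gave up after a while instead of blocking indefinitely, WaitOne also accepts a timeout. A minimal sketch; the mutex name is a placeholder:
// The second process waits up to five seconds for the mutex, then gives up.
using (Mutex mutex = new Mutex(false, "MyAppLogFileMutex"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(5), false))
    {
        try
        {
            // write to the file here
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
    else
    {
        // Could not acquire the lock in time; drop or re-queue the log entry.
    }
}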
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Yeah, that's basically what I was thinking. Have another thread that has a while loop until it can get access to the file and save, then end.
But you would have to do it in a way where the first thread to start looking gets access first. Which is why I say queue.
