I have a network application that uses Lua scripts. When the application starts, I create a global Lua state and load all the script files, which contain various functions. For every client that connects, I create a Lua thread for that connection.
// On start
var GL = luaL_newstate();
// register functions...
// load scripts...
// On connection
connection.State = lua_newthread(GL);
When a request that uses a script comes in, I get the global function and call it.
var NL = connection.State;
var result = lua_resume(NL, 0);
if (result != 0 && result != LUA_YIELD)
{
// error...
result = 0;
}
if (result == 0)
{
// function returned...
}
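For context, getting the global function isn't shown in the snippet above; it has to be pushed onto the thread's stack before the first resume, roughly like this (lua_getglobal is the standard API call, and "handle_request" is just a placeholder name):
var NL = connection.State;

// Push the script's global function onto the thread's own stack
// before the first lua_resume; "handle_request" is a made-up name.
lua_getglobal(NL, "handle_request");

var result = lua_resume(NL, 0);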
Now, some scripts require the client to respond to something, so I yield in those functions to wait for the response. When the response comes in, the script is resumed with lua_resume(NL, 1).
-- Lua
text("How are you?")
local response = select("Good", "Bad")
// Host
private int select(IntPtr L)
{
// send response request...
return lua_yield(L, 1);
}
// On response
lua_pushstring(NL, response);
var result = lua_resume(NL, 1);
// ...
My problem is that I need to be able to cancel that yield, and return from the Lua function, without executing any more code in the Lua function, and without adding additional code to the scripts. In other words, I basically want to make the Lua thread throw an exception, get back to the start, and forget it ever executed that function.
Is that possible?
One thing I thought might work, but didn't, was calling lua_error. The result was an SEHException on the lua_error call. I assume that's because the script isn't currently running but is yielded.
While I didn't find a way to wipe a thread's slate clean (I don't think it's possible), I did find a solution by figuring out how lua_newthread works.
When the thread is created, a reference to it is put on the "global" state's stack, and it doesn't get collected until it's removed from there. All you have to do to clean up the thread is remove it from the stack with lua_remove. This requires you to create new threads regularly, but that's not much of a problem for me.
I'm now keeping track of the created threads and their index on the stack, so I can remove them when I'm done with them for whatever reason (cancel, error, etc.). All other indices are updated, as the removal shifts the ones that came after it.
if (sessionOver)
{
lua_remove(GL, thread.StackIndex);
foreach (var t in threads)
{
if (t.StackIndex > thread.StackIndex)
t.StackIndex--;
}
}
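For reference, the creation side of that bookkeeping could look roughly like the sketch below; the ThreadEntry class and the helper name are made up, and lua_gettop is the standard call for reading the index of the top of the stack.
// Hypothetical bookkeeping entry for a thread created on the global state GL.
class ThreadEntry
{
    public IntPtr State;     // the thread's lua_State
    public int StackIndex;   // where lua_newthread left it on GL's stack
}

private readonly List<ThreadEntry> threads = new List<ThreadEntry>();

private ThreadEntry CreateThread()
{
    var entry = new ThreadEntry
    {
        State = lua_newthread(GL),
        StackIndex = lua_gettop(GL)   // the new thread sits on top of GL's stack
    };
    threads.Add(entry);
    return entry;
}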
For the past few days I have been looking for a way to make my code fully asynchronous, so that when it is called by a REST API, I'll get an immediate response while the process runs in the background.
To do that I simply used
tasks.Add(Task<bool>.Run( () => WholeProcessFunc(parameter) ))
where WholeProcessFunc is the function that makes all the calculations (it may be computationally intensive).
It works as expected; however, I read that it is not optimal to wrap the whole process in a Task.Run.
My code needs to run several Entity Framework queries whose results depend on the previous one, and it also contains a foreach loop.
For instance, I can't work out what the best practice is to make a function like this async:
public async Task<List<float>> func()
{
List<float> acsi = new List<float>();
using (var db = new EFContext())
{
long[] ids = await db.table1.Join(db.table2 /*,...*/)
.Where(/*...*/)
.Select(/*...*/).ToArrayAsync();
foreach (long id in ids)
{
var all = db.table1.Join(/*...*/)
.Where(/*...*/);
float acsi_temp = await all.OrderByDescending(/*...*/)
.Select(/*...*/).FirstAsync();
if (acsi_temp < 0) { break; }
acsi.Add(acsi_temp);
}
}
return acsi;
}
In particular I have difficulties with the foreach loop and with the fact that the result of one query is used in the next one.
Finally, there is the break statement, which I don't know how to translate. I read about cancellation tokens; could that be the way?
Is wrapping this whole function in a Task.Run a solid solution?
For the past few days I have been looking for a way to make my code fully asynchronous, so that when it is called by a REST API, I'll get an immediate response while the process runs in the background.
Well, that's one meaning of the word "asynchronous". Unfortunately, it's completely different than the kind of "asynchronous" that async/await does. async yields to the thread pool, not the client (browser).
It works as expected; however, I read that it is not optimal to wrap the whole process in a Task.Run.
It only seems to work as expected. It's likely that once your web site gets higher load, it will start to fail. It's definite that once your web site gets busier and you do things like rolling upgrades, it will start to fail.
Is wrapping this whole function in a Task.Run a solid solution?
Not at all. Fire-and-forget is inherently dangerous.
A proper solution should be a basic distributed architecture:
A durable queue, such as an Azure Queue or Rabbit (if properly configured to be durable).
An independent processor, such as an Azure Function or Win32 Service.
Then the ASP.NET app will encode the work to be done into a queue message, enqueue that to the durable queue, and then return. Some time later, the processor will retrieve the message from that queue and do the actual work.
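For illustration, the enqueue side of that architecture could look something like the sketch below, assuming Azure Storage Queues; the controller, route, queue name, connection string and message shape are all placeholders, not part of the original question.
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;               // Azure.Storage.Queues NuGet package
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ProcessController : ControllerBase
{
    // Placeholder connection string and queue name.
    private static readonly QueueClient Queue =
        new QueueClient("<storage-connection-string>", "whole-process-work");

    [HttpPost("start")]
    public async Task<IActionResult> Start(int parameter)
    {
        // Encode the work to be done into a message and enqueue it; the
        // independent processor picks it up and runs WholeProcessFunc later.
        string message = JsonSerializer.Serialize(new { Parameter = parameter });
        await Queue.SendMessageAsync(message);

        // Return immediately: 202 Accepted = "queued, not finished".
        return Accepted();
    }
}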
You can translate your code to return an IAsyncEnumerable<...>; that way the caller can process the results as they are obtained. In an ASP.NET 5 MVC endpoint, this includes writing the serialised JSON to the browser:
public async IAsyncEnumerable<float> func()
{
using (var db = new EFContext())
{
//...
foreach (long id in ids)
{
//...
if(acsi_temp<0) { yield break; }
yield return acsi_temp;
}
}
}
public async Task<IActionResult> ControllerAction(){
if (...)
return NotFound();
return Ok(func());
}
Note that if your endpoint is an async IAsyncEnumerable coroutine, then in ASP.NET 5 your headers would be flushed before your action even started, giving you no way to return any HTTP error codes.
Though for performance, you should try to rework your queries so you can fetch all the data up front.
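As a sketch of that idea, assuming a made-up Item entity (Id, Score) in place of the real joins: pull the ids first, then fetch every candidate row for all of those ids in one query, and pick the best value per id in memory.
// Sketch only: Item, Id and Score are placeholders for the real joined tables,
// and the real filters are elided. Uses the EF Core async extensions.
public async Task<List<float>> FuncAsync()
{
    var acsi = new List<float>();
    using (var db = new EFContext())
    {
        long[] ids = await db.Set<Item>()
            .Select(i => i.Id)          // real joins/filters elided
            .Distinct()
            .ToArrayAsync();

        // One round trip for all ids instead of one query per id inside the loop.
        var rows = await db.Set<Item>()
            .Where(i => ids.Contains(i.Id))
            .Select(i => new { i.Id, i.Score })
            .ToListAsync();

        var bestPerId = rows
            .GroupBy(r => r.Id)
            .ToDictionary(g => g.Key, g => g.Max(r => r.Score));

        foreach (long id in ids)
        {
            float acsiTemp = bestPerId[id];
            if (acsiTemp < 0) { break; }   // same early-exit behaviour as the original loop
            acsi.Add(acsiTemp);
        }
    }
    return acsi;
}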
In my application several co-routines are downloading files in parallel, and there is a chance that they can try to download the same file to the same path, resulting in a collision and an access exception. I've tried solving this by using a static collection private static List<string> filesBeingDownloadedNow = new List<string>(); which holds the filePath of files that are currently being downloaded.
It looks like this:
private static List<string> filesBeingDownloadedNow = new List<string>();
private IEnumerator DoDownloadCoroutine(string url, string filePath)
{
filesBeingDownloadedNow.Add(filePath);
var webRequest = new UnityWebRequest(url)
{
method = UnityWebRequest.kHttpVerbGET,
downloadHandler = new DownloadHandlerFile(filePath)
{
removeFileOnAbort = true
}
};
yield return webRequest.SendWebRequest();
if (webRequest.isNetworkError || webRequest.isHttpError)
{
// do something if the file failed to download
}
else
{
// do something with the downloaded file
}
filesBeingDownloadedNow.Remove(filePath);
}
and elsewhere:
if (filesBeingDownloadedNow.Contains(filePath))
{
// start another coroutine which simply waits until
// filesBeingDownloadedNow doesn't contain filePath anymore
// and then works with the downloaded file
// as if this coroutine downloaded it itself
}
else
{
StartCoroutine(DoDownloadCoroutine(url, filePath));
}
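For completeness, the waiting coroutine described in that comment could look roughly like this (WaitForExistingDownloadCoroutine is a made-up name):
private IEnumerator WaitForExistingDownloadCoroutine(string filePath)
{
    // Wait until whichever coroutine started the download removes the path
    // from filesBeingDownloadedNow (which is also where the problem below bites:
    // if that coroutine dies, this one waits forever).
    while (filesBeingDownloadedNow.Contains(filePath))
        yield return null;

    // Work with the downloaded file as if this coroutine downloaded it itself.
}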
Now, this works perfectly fine, but there is a rare case where the gameObject on which this coroutine runs can be destroyed before the file download coroutine finishes. So, yield return webRequest.SendWebRequest(); will be the last thing ever called here, and neither success nor failure can be checked, and filesBeingDownloadedNow.Remove(filePath); is never reached.
I tested this, and it seems that UnityWebRequest.SendWebRequest still downloads the file in full - however, it's unclear when it will end or if the file gets downloaded successfully at all. It simply "gets lost in the ether".
I could simply try doing exception handling around File.Open or whatever other things I'd like to do with the downloaded file (and actually should do that in any case). However, UnityWebRequest doesn't allow handling exceptions since it's called with yield return.
Even if I check whether the file already exists with if (File.Exists(filePath)) before calling webRequest.SendWebRequest();, that doesn't help if 2 different UnityWebRequests try downloading the same file at almost the same time. It can be that the 1st UnityWebRequest has only just connected to the server and hasn't created a file at filePath yet, so the 2nd coroutine will see that the file doesn't exist, assume that no one is downloading it right now, and still attempt a download with webRequest.SendWebRequest();, resulting in a collision and an exception which I can't handle.
So basically, there is a combination of factors:
Co-routines can be silently killed at any moment, with nothing like "finally" to do cleanup before they are killed
UnityWebRequest.SendWebRequest doesn't allow handling exceptions (?)
Delay before the downloaded file actually appears at the location makes it impossible to check if a file is already being downloaded
Frankly, I also really dislike the static list here, but it seems that since files are a global resource anyway, making it non-static solves nothing. UnityWebRequest.SendWebRequest will keep downloading the file on its own no matter what I do with that list.
You never waited for the download handler to finish. It can result in weird behaviour if you try to use it before it has finished; better to yield null until the handler's isDone flag is true, that is, until it finishes handling the downloaded data (what that involves depends on the handler type).
And yes, the request finishes downloading, but sometimes the handler doesn't finish its 'handling' the same instant the download is done.
UnityWebRequest webRequest = UnityWebRequest.Get(url);
webRequest.downloadHandler = new DownloadHandlerFile(filePath) { removeFileOnAbort = true };
// 1) Wait until the download is done.
yield return webRequest.SendWebRequest();
if (webRequest.isNetworkError || webRequest.isHttpError)
{
// do something if the file failed to download
}
else
{
// 2) Wait until the handler itself finishes.
while (!webRequest.downloadHandler.isDone)
yield return null;
// Now you can safely do something with the downloaded file
}
I've got a routine called GetEmployeeList that loads when my Windows Application starts.
This routine pulls in basic employee information from our Active Directory server and retains this in a list called m_adEmpList.
We have a few Windows accounts set up as Public Profiles that most of our employees on our manufacturing floor use. This m_adEmpList gives our employees the ability to log in to select features using those Public Profiles.
Once all of the Active Directory data is loaded, I attempt to "auto logon" that employee based on the System.Environment.UserName if that person is logged in under their private profile. (employees love this, by the way)
If I do not thread GetEmployeeList, the Windows Form will appear unresponsive until the routine is complete.
The problem with GetEmployeeList is that we have had times when the Active Directory server was down, the network was down, or a particular computer was not able to connect over our network.
To get around these issues, I have included a ManualResetEvent m_mre with the THREADSEARCH_TIMELIMIT timeout so that the process does not go off forever. I cannot login someone using their Private Profile with System.Environment.UserName until I have the list of employees.
I realize I am not showing ALL of the code, but hopefully it is not necessary.
public static ADUserList GetEmployeeList()
{
if ((m_adEmpList == null) ||
(((m_adEmpList.Count < 10) || !m_gotData) &&
((m_thread == null) || !m_thread.IsAlive))
)
{
m_adEmpList = new ADUserList();
m_thread = new Thread(new ThreadStart(fillThread));
m_mre = new ManualResetEvent(false);
m_thread.IsBackground = true;
m_thread.Name = FILLTHREADNAME;
try {
m_thread.Start();
m_gotData = m_mre.WaitOne(THREADSEARCH_TIMELIMIT * 1000);
} catch (Exception err) {
Global.LogError(_CODEFILE + "GetEmployeeList", err);
} finally {
if ((m_thread != null) && (m_thread.IsAlive)) {
// m_thread.Abort();
m_thread = null;
}
}
}
return m_adEmpList;
}
I would like to just put a basic lock using something like m_adEmpList, but I'm not sure if it is a good idea to lock something that I need to populate, and the actual data population is going to happen in another thread using the routine fillThread.
If the ManualResetEvent's WaitOne timer fails to collect the data I need in the time allotted, there is probably a network issue, and m_adEmpList does not have many records (if any). So, I would need to try to pull this information again the next time.
If anyone understands what I'm trying to explain, I'd like to see a better way of doing this.
It just seems too forced, right now. I keep thinking there is a better way to do it.
I think you're going about the multithreading part the wrong way. Threads should cooperate rather than compete for resources, and competing is exactly what's bothering you here. Another problem is that your timeout is at the same time too long (so that it annoys users) and too short (if the AD server is a bit slow, but still there and serving). Your goal should be to let the thread run in the background and, when it is finished, have it update the list. In the meantime, you present some fallbacks to the user and a notification that the user list is still being populated.
A few more notes on your code above:
You have a variable m_thread that is only used locally. Further, your code contains a redundant check whether that variable is null.
If you create a user list with defaults/fallbacks first and then update it through a function (make sure you are checking the InvokeRequired flag of the displaying control!), you won't need a lock. This means that the thread does not access the list stored as a member but a separate list it has exclusive access to. The update function then replaces (!) that list, so from then on it is for the exclusive use of the UI.
Lastly, if the AD server is really not there, try to forward the error from the background thread to the UI in some way, so that the user knows what's broken.
If you want, you can add an event to signal the thread to stop, but in most cases that won't even be necessary.
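As a rough sketch of that shape (the helper names FillFromActiveDirectory, ReportAdError, TryAutoLogon and the someControl reference are placeholders for whatever your fillThread and UI code actually do):
private void StartEmployeeListLoad()
{
    var worker = new Thread(() =>
    {
        // Build a separate list the worker has exclusive access to.
        var freshList = new ADUserList();
        try
        {
            FillFromActiveDirectory(freshList);   // whatever fillThread does today
        }
        catch (Exception err)
        {
            ReportAdError(err);                   // forward the failure to the UI somehow
            return;
        }
        OnEmployeeListLoaded(freshList);
    });
    worker.IsBackground = true;
    worker.Start();
}

private void OnEmployeeListLoaded(ADUserList freshList)
{
    // Marshal back to the UI thread before touching anything the UI uses.
    if (someControl.InvokeRequired)
    {
        someControl.BeginInvoke(new Action(() => OnEmployeeListLoaded(freshList)));
        return;
    }
    // Replace (!) the list; no lock is needed because the worker never touches it again.
    m_adEmpList = freshList;
    TryAutoLogon(System.Environment.UserName);    // the auto-logon step from the question
}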
Something tells me this might be a stupid question and I have in fact approached my problem from the wrong direction, but here goes.
I have some code that loops through all the documents in a folder. The alphabetical order of these documents in each folder is important, and that importance is also reflected in the order the documents are printed. Here is a simplified version:
var wordApp = new Microsoft.Office.Interop.Word.Application();
foreach (var file in Directory.EnumerateFiles(folder))
{
fileCounter++;
// Print file, referencing a previously instantiated word application object
wordApp.Documents.Open(...);
wordApp.PrintOut(...);
wordApp.ActiveDocument.Close(...);
}
It seems (and I could be wrong) that the PrintOut code is asynchronous, and the application sometimes gets into a situation where the documents get printed out of order. This is confirmed because if I step through, or place a long enough Sleep() call, the order of all the files is correct.
How should I prevent the next print task from starting before the previous one has finished?
I initially thought that I could use a lock(someObject){} until I remembered that they are only useful for preventing multiple threads accessing the same code block. This is all on the same thread.
There are some events I can wire into on the Microsoft.Office.Interop.Word.Application object: DocumentOpen, DocumentBeforeClose and DocumentBeforePrint
I have just thought that this might actually be a problem with the print queue not being able to accurately distinguish lots of documents that are added within the same second. This can't be the problem, can it?
As a side note, this loop is within the code called from the DoWork event of a BackgroundWorker object. I'm using this to prevent UI blocking and to feedback the progress of the process.
Your event-handling approach seems like a good one. Instead of using a loop, you could add a handler to the DocumentBeforeClose event, in which you would get the next file to print, send it to Word, and continue. Something like this:
List<string> m_files = Directory.EnumerateFiles(folder).ToList();
wordApp.DocumentBeforeClose += ProcessNextDocument;
...
void ProcessNextDocument(...)
{
string file = null;
lock(m_files)
{
if (m_files.Count > 0)
{
// Take the next file from the front of the list so the alphabetical order is preserved.
file = m_files[0];
m_files.RemoveAt(0);
}
else
{
// Done!
}
}
if (file != null)
{
PrintDocument(file);
}
}
void PrintDocument(string file)
{
wordApp.Documents.Open(...);
wordApp.PrintOut(...);
wordApp.ActiveDocument.Close(...);
}
The first parameter of Application.PrintOut specifies whether the printing should take place in the background or not. By setting it to false it will work synchronously.
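In the loop above that would look roughly like this; the named arguments rely on the optional-parameter support for the Word interop in C# 4 and later, and the remaining arguments stay whatever you pass today:
var doc = wordApp.Documents.Open(file);
// Background: false makes PrintOut wait until the document has been handed
// to the spooler before returning, so the next document cannot jump ahead of it.
doc.PrintOut(Background: false);
doc.Close(SaveChanges: false);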
In my C# application multiple clients access the same server. To process one client at a time, the code below is written. In the code I used the Monitor class and also the Queue class. Will this code affect performance? If I use the Monitor class, should I remove the Queue class from the code?
Sometimes the remote server machine where my application runs as a service is completely down. Is the code below the reason behind this, since all the clients go into a queue? When I check with the netstat -an command at the command prompt, for 8 clients it shows 50 connections held in TIME_WAIT...
Below is my code where the client accesses the server...
if (Id == "")
{
System.Threading.Monitor.Enter(this);
try
{
if (Request.AcceptTypes == null)
{
queue.Enqueue(Request.QueryString["sessionid"].Value);
string que = "";
que = queue.Dequeue();
TypeController.session_id = que;
langStr = SessionDatabase.Language;
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
TypeController.session_id = "";
filter.Execute();
Request.Clear();
return filter.XML;
}
else
{
TypeController.session_id = "";
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
filter.Execute();
}
}
finally
{
System.Threading.Monitor.Exit(this);
}
}
Locking this is pretty wrong; it won't work at all if every thread uses a different instance of whatever class this code lives in. It isn't clear from the snippet whether that's the case, but fix that first. Create a separate object just to store the lock and make it static, or give it the same scope as the shared object you are trying to protect (also not clear).
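For example, a dedicated lock object could look like this (the field name SyncRoot is arbitrary):
// One lock object shared by every instance, guarding the shared queue/session state.
private static readonly object SyncRoot = new object();

if (Id == "")
{
    lock (SyncRoot)   // replaces Monitor.Enter(this) / Monitor.Exit(this)
    {
        // ... same body as above: enqueue the session id, dequeue it,
        // build and execute the filter, return filter.XML ...
    }
}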
You might still have trouble since this sounds like a deadlock rather than a race. Deadlocks are pretty easy to troubleshoot with the debugger since the code got stuck and is not executing at all. Debug + Break All, then Debug + Windows + Threads. Locate the worker threads in the thread list. Double click one to select it and use Debug + Call Stack to see where it got stuck. Repeat for other threads. Look back through the stack trace to see where one of them acquired a lock and compare to other threads to see what lock they are blocking on.
That could still be tricky if the deadlock is intricate and involves multiple interleaved locks. In which case logging might help. Really hard to diagnose mandelbugs might require a rewrite that cuts back on the amount of threading.