I'm currently researching the ability for a T-SQL trigger to fire off the printing of an SSRS report when records are inserted into a table. The closest thing I've found to accomplishing this is in ScottLenart's comments here. I have a few parameters I need to pass to the report, and I want to send the print job to a specific network printer. I'm wondering whether this is something I could build into a SQL CLR assembly (though I know that seems like the wrong way to use SQL CLR), or whether using xp_cmdshell to kick off a custom C# app that prints it is my best approach.
I figure I may have to look into putting the print requests into some kind of queue when the trigger fires, so that printing doesn't block a bunch of other queries, but I'm trying to get the document printed as close as I can to when the record is created or updated in the database.
I'm looking to deploy this on SQL Server 2012.
It is probably doable, but this is not something you should do. Imagine you're in the middle of a transaction, holding locks and blocking access to resources, and someone has to go feed paper to the printer?
If you really have a strong business case for doing it this way, use Service Broker to make the call asynchronous, so the transaction can commit and release its resources.
Use a trigger to insert records into a "print queue" table of some sort, and have a scheduled SQL Server job emulate ScottLenart's process from there onward. dean is right: don't have the trigger execute the report/print operation itself; use the trigger only to hand that workload off to another process.
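The job side could be as simple as a little console app like this. All the names here (dbo.PrintQueue, its columns, PrintReport.exe) are placeholders for whatever you actually build:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class PrintQueueWorker
{
    static void Main()
    {
        // Hypothetical connection string and schema
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        {
            conn.Open();

            // Claim pending jobs in one statement so concurrent runs don't double-print
            var claim = new SqlCommand(
                @"UPDATE dbo.PrintQueue
                  SET Status = 'Printing'
                  OUTPUT inserted.ReportPath, inserted.PrinterName
                  WHERE Status = 'Pending'", conn);

            using (var reader = claim.ExecuteReader())
            {
                while (reader.Read())
                {
                    string report = reader.GetString(0);
                    string printer = reader.GetString(1);

                    // Hand off the actual render/print step (an rs.exe script,
                    // a custom printing app, etc.); the trigger never waits on this.
                    Process.Start("PrintReport.exe",
                        "\"" + report + "\" \"" + printer + "\"");
                }
            }
        }
    }
}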
I am developing a program in WPF.Net, and I need to know when somebody makes a change to any table in the database.
The idea is to receive an event from the database when it changes. I have read a lot of articles but I can't find a method that solves my problem.
Kind Regards
The best solution is to use a message queue. After your app commits a change to the database, the app also publishes a message on the message queue. Other clients then just wait for notifications on that message queue.
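For example, with RabbitMQ's .NET client (the queue name and message format here are made up; any broker works the same way):

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class ChangeNotifier
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "table-changes", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            // Publisher side: call this right after the DB transaction commits
            var body = Encoding.UTF8.GetBytes("orders:12345:updated");
            channel.BasicPublish(exchange: "", routingKey: "table-changes",
                                 basicProperties: null, body: body);

            // Subscriber side: other clients just wait for messages
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
                Console.WriteLine("Change: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicConsume(queue: "table-changes", autoAck: true, consumer: consumer);

            Console.ReadLine(); // keep the consumer alive
        }
    }
}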
There are a few other common solutions, but all of them have disadvantages.
Polling. If a client is interested in recent changes, they run a query searching for new data every N seconds.
The downside is you have to keep polling even during times when there are no changes. You might have to poll very frequently, depending on how promptly you need to notice the changes. This adds to database load just to support the polling queries.
It also costs more if you have many clients all polling. In one system I supported, the database struggled to process 30,000 queries per second just to serve the clients' polling.
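A bare-bones polling loop in C# looks like this (table and column names are hypothetical; it remembers the highest id it has seen):

using System;
using System.Threading;
using MySql.Data.MySqlClient;

class Poller
{
    static void Main()
    {
        long lastId = 0;
        var cs = "Server=localhost;Database=mydb;Uid=app;Pwd=secret";

        while (true)
        {
            using (var conn = new MySqlConnection(cs))
            {
                conn.Open();
                var cmd = new MySqlCommand(
                    "SELECT id, payload FROM events WHERE id > @lastId ORDER BY id", conn);
                cmd.Parameters.AddWithValue("@lastId", lastId);

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        lastId = reader.GetInt64(0);
                        Console.WriteLine("New row: " + reader.GetString(1));
                    }
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5)); // the N-second wait is the whole tradeoff
        }
    }
}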
Change Data Capture. Using the binary log as a de facto message queue, because it records all the changes. Use a client tool such as Debezium, or write your own binlog tail client (this is a lot of work).
The downside is the binlog records all changes, not just those you want to be notified about. You have to filter it somehow. Also you have to learn how to use Debezium or equivalent tool.
Triggers. Write a trigger on the table that invokes a UDF to post notification outside the database. This is a bad idea, because the trigger executes when your insert/update/delete executes, not when the transaction commits. Clients could be notified of changes before the changes are committed, so if they go query the database right after they get the notification, the change is not visible to them yet.
Another disadvantage is that it requires you to install a UDF extension in MySQL Server; MySQL doesn't normally have any way of posting an external notification.
I'm not a C# developer so I can't suggest specific code. But the general methods above are similar regardless of which language the app is written in.
I don't think this is possible with MySQL; DBs like MongoDB have this sort of feature.
You may like to use the method described in this answer.
Essentially, have date/time fields on rows so that you can pull data changed since a certain date/time. Or you could use a CQRS/event-sourcing strategy and maybe use a message queue.
I have a MySQL database and a program in C# with WinForms. It will run on a few PCs over the LAN. I want to know if there's a way to check for changes in the database without sending a request each second.
Example:
An admin deletes a record from the database; on the workers' PCs, I want the list to refresh automatically.
Don't worry about the performance of polling. With a suitable INDEX, MySQL can handle 100 polls/second -- regardless of dataset size.
Let's see SHOW CREATE TABLE and the tentative SELECT to perform the "poll"; I may have further tips.
Also, let's see the admin's query that needs to 'trigger' the workers into action. Surely the admin can do something simple to cause that. Doing a checksum of a table is terribly heavyweight.
Consider encoding the admin action in a stored procedure, thereby separating the polling mechanism from the admin action.
JavaScript can use AJAX to do an INSERT (or call a Stored proc).
This mantra may apply: "Don't queue it, just do it." That is, the admin/javascript action can call an application program (PHP/Java/VB/whatever) to actually do the action, no need for queuing. (Please provide more details so we can discuss whether this might be the real solution.)
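To make the stored-procedure idea concrete, here is a sketch with invented names: the admin's proc bumps a version counter in a one-row change_flag table, and each worker polls that single row:

using System;
using System.Threading;
using MySql.Data.MySqlClient;

class WorkerRefresh
{
    static void Main()
    {
        var cs = "Server=localhost;Database=mydb;Uid=app;Pwd=secret";
        long lastVersion = -1;

        while (true)
        {
            using (var conn = new MySqlConnection(cs))
            {
                conn.Open();
                // The admin-side stored procedure runs something like
                //   UPDATE change_flag SET version = version + 1 WHERE id = 1;
                // after deleting the record, so workers only ever poll one tiny row.
                var cmd = new MySqlCommand("SELECT version FROM change_flag WHERE id = 1", conn);
                long version = Convert.ToInt64(cmd.ExecuteScalar());

                if (version != lastVersion)
                {
                    lastVersion = version;
                    Console.WriteLine("Change detected, refresh the list");
                }
            }
            Thread.Sleep(1000); // 1 poll/second/worker is trivial load on a one-row read
        }
    }
}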
I have a C# console application using LINQ to send queries to the database and update records. There are more than 40,000 (forty thousand) records to be updated. Now I would like to develop something to keep track of the process: what queries/requests are sent at a specific time, and what the response from the database is, so I can see whether a query sent by the application is running slowly.
I have tried to use System.Diagnostics, but all I can get is how long the process has been running or at what time it started. I have thought of writing something to grab the query at the beginning and end of the process, but I would really like to know how many records are processed, how many are still pending, and how many have failed or succeeded in real time, not at the end of the whole process.
At the moment we log into the database and repeatedly execute a specific SQL query in SSMS to track how many records succeeded, failed, or are still pending, but I would like to automate the whole process so that management with no technical knowledge can see what's happening. Any guidance or advice as to how I can achieve this?
There are many tools you can use. First, as said in the comments, you can use SQL Server Profiler. Another tool I like to use is LINQPad; with it you can see the results and the SQL that is sent, and you can then modify the LINQ query and run it again, all inside the editor.
A low-fi but quick-to-build solution: if you need to capture the performance of database queries called from your C# code, you can wrap the query-execution code block with the Start() and Stop() methods of a Stopwatch and log the results to a DB table.
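A sketch of that wrapper (the dbo.QueryLog table and its columns are invented for illustration):

using System;
using System.Data.SqlClient;
using System.Diagnostics;

static class QueryTimer
{
    // Runs the given action, times it, and logs the outcome to a hypothetical
    // dbo.QueryLog table that a management dashboard could read.
    public static void TimeAndLog(string queryName, Action runQuery, string logCs)
    {
        var sw = Stopwatch.StartNew();
        bool succeeded = true;
        try { runQuery(); }
        catch { succeeded = false; throw; }
        finally
        {
            sw.Stop();
            using (var conn = new SqlConnection(logCs))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    @"INSERT INTO dbo.QueryLog (QueryName, ElapsedMs, Succeeded, LoggedAt)
                      VALUES (@name, @ms, @ok, GETDATE())", conn);
                cmd.Parameters.AddWithValue("@name", queryName);
                cmd.Parameters.AddWithValue("@ms", sw.ElapsedMilliseconds);
                cmd.Parameters.AddWithValue("@ok", succeeded);
                cmd.ExecuteNonQuery();
            }
        }
    }
}

Usage would be something like: QueryTimer.TimeAndLog("UpdateBatch42", () => context.SubmitChanges(), logConnectionString);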
Basically, this is what I want to do. Imagine a textbox with:
Computer1
Computer2
Computer3
I want to put these into a multi-line textbox on an ASP.NET page, and have it iterate through these entries and run a LINQ-to-SQL query on each one. Basically, the LINQ-to-SQL query would return each PC along with what software is installed (this might end up in something like an Excel spreadsheet, but I can work with that once I have the data back).
My challenge is this: I want to run the query asynchronously, and update the user after each computer has had the LINQ-to-SQL query run against it (something like "Checking Computer2... Checking Computer3..."). The query is pretty complex, and at the end it has to "add up" all the machines' data and put it into a DataTable for exporting.
The main thing that has got me stumped is the async for the LINQ SQL, and how to update the user with what computer we are "currently" on.
Any help would be awesome :)
As each of the SQL query lines executes, you are expecting the results to stream back as soon as it finishes. Classic HTTP communication is not well suited for that type of process (although possible, of course).
I would go about it this way:
- Create a SignalR server
- Post an SQL line to the server and get back a "job ID"
- On the client listen to SignalR callbacks from the server which would push a "job ID" and the results associated with that job.
- As more jobs results are pushed from the server update the UI about the progress, outstanding jobs etc.
Note that you can also post the query lines using SignalR and thus unifying the programming model.
On the server, once a "job" has been submitted, you would want to spin up a Task to handle the query execution. If you want the jobs to be executed in order and serially, you could use a queue of some sort.
Also note that the order in which the responses are streamed to the client is not guaranteed (when executing in parallel), since the first job may run longer than the second job, for example.
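Here is a rough sketch of such a hub using ASP.NET SignalR 2. jobCompleted is a client-side callback you would define, and RunInventoryQuery stands in for the real LINQ-to-SQL work:

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class JobHub : Hub
{
    // The client calls this once per textbox line; it gets a job ID back immediately.
    public string SubmitJob(string computerName)
    {
        string jobId = Guid.NewGuid().ToString("N");
        string connectionId = Context.ConnectionId;

        Task.Run(() =>
        {
            var result = RunInventoryQuery(computerName);

            // Push the finished job back to the originating client.
            GlobalHost.ConnectionManager.GetHubContext<JobHub>()
                .Clients.Client(connectionId)
                .jobCompleted(jobId, computerName, result);
        });

        return jobId;
    }

    private static string RunInventoryQuery(string computerName)
    {
        // ... your LINQ-to-SQL query against the software-inventory tables ...
        return "42 packages installed";
    }
}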
I am working on an exe to export SQL Server data to Access. We do not want to use DTS, as we have multiple clients each exporting different views, and the overhead to set up and maintain the DTS packages is too much.
*Edit: This process is automated for many clients every night, so the whole process has to be kicked off and controlled within a cursor in a stored procedure. This is because the data has to be filtered per project for the export.
I have tried many ways to get data out of SQL into Access and the most promising has been using Access interop and running a
doCmd.TransferDatabase(Access.AcDataTransferType.acImport...
I have hit a problem where I am importing from views: running the import manually, it seems the view does not start returning data fast enough, so Access pops up a message box to say it has timed out.
I think this is happening in interop as well, but because it is hidden the method never returns!
Is there any way for me to prevent this message from popping up, or increasing the timeout of the import command?
My current plan of attack is to flatten the view into a table, then import from that table, then drop the flattened table.
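Roughly what I have in mind (view and table names made up):

using System.Data.SqlClient;

static void FlattenView(string cs, string viewName, string tableName)
{
    using (var conn = new SqlConnection(cs))
    {
        conn.Open();

        // Materialize the slow view into a real table; the names come from
        // our own parameter table, not user input, so string building is OK here.
        new SqlCommand(string.Format(
            "SELECT * INTO {0} FROM {1}", tableName, viewName), conn)
            .ExecuteNonQuery();
    }
}

// ...then point doCmd.TransferDatabase at tableName instead of the view,
// and DROP TABLE it afterwards.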
Happy for any suggestions how to tackle this problem.
Edit:
Further info on what I am doing:
We have multiple clients which each have a standard data model. One of the 'modules' is an Access exporter (sproc). It reads the views to export from a parameter table, then exports. The views are filtered by project, and an Access file is created for each project (every view has a project field).
We are running SQL 2000 and are not moving to SQL 2005 quickly; we will probably jump to 2008 in quite a few months.
We then have a module-execution job which executes the configured module on each database. There are many imports/exports/other jobs that run in this module execution, and the Access exporter must fit into this framework. So I need a generic SQL-to-Access exporter which can be configured through our parameter framework.
Currently the sproc calls an exe I have written, and the exe opens Access via interop. I know this is bad on a server, BUT the module execution is written so that only a single module executes at a time, so the procedure will never be running more than one instance at a time.
Have you tried using VBA? You have more options configuring connections, and I'm sure I've used a timeout adjustment in that context in the past.
Also, I've generally found it simplest just to query a view directly (as long as you can either connect with a nolock, or tolerate however long it takes to transfer); this might be a good reason to create the intermediate temp table.
There might also be benefit to opening Access explicitly in single-user mode for this stuff.
We've done this using ADO to connect to both source and destination data. You can set connection and command timeout values as required and read/append to each recordset.
Not particularly quick, but we were able to leave it running overnight.
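In .NET terms (ADO.NET rather than classic ADO), that looks roughly like this; the connection strings, table, and columns are illustrative, and the destination table must already exist in the .mdb:

using System.Data.OleDb;
using System.Data.SqlClient;

static void ExportViewToAccess(string sqlCs, string mdbPath, string viewName)
{
    using (var src = new SqlConnection(sqlCs))
    using (var dst = new OleDbConnection(
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath))
    {
        src.Open();
        dst.Open();

        var read = new SqlCommand("SELECT Id, Name FROM " + viewName, src);
        read.CommandTimeout = 600; // the knob the Access import dialog doesn't give you

        using (var reader = read.ExecuteReader())
        {
            while (reader.Read())
            {
                // Row-by-row append; slow but simple, fine for an overnight job
                var insert = new OleDbCommand(
                    "INSERT INTO Exported (Id, Name) VALUES (?, ?)", dst);
                insert.Parameters.AddWithValue("?", reader.GetInt32(0));
                insert.Parameters.AddWithValue("?", reader.GetString(1));
                insert.ExecuteNonQuery();
            }
        }
    }
}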
I have settled on a way to do this.
http://support.microsoft.com/kb/317114 describes the basic steps to start the access process.
I have made the Process a class variable instead of a local variable of the ShellGetApp method. This way, when I call the Quit function for Access, if it doesn't close for whatever reason I can kill the process explicitly.
// Ask Access to shut down cleanly first
app.Quit(Access.AcQuitOption.acQuitSaveAll);

// If it is still running (e.g. stuck on a hidden dialog), kill it explicitly
if (!accessProcess.HasExited)
{
    Console.WriteLine("Access did not exit after being asked nicely, killing process manually");
    accessProcess.Kill();
}
I then used a method-timeout function to give the Access call a timeout. If it times out I can kill the Access process as well (the timeout could be due to a dialog window popping up, and I do not want the process to hang forever). I got the timeout method from here: Implement C# Generic Timeout.
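The core of such a timeout wrapper can be as small as this (my sketch, not the linked code verbatim):

using System;
using System.Threading;

static class MethodTimeout
{
    // Returns true if the action finished in time, false if it timed out.
    // The abandoned thread may still be stuck (e.g. on a hidden Access
    // dialog), which is why the caller kills the Access process afterwards.
    public static bool TryRun(Action action, TimeSpan timeout)
    {
        var worker = new Thread(() => action()) { IsBackground = true };
        worker.Start();
        return worker.Join(timeout);
    }
}

// Usage:
// if (!MethodTimeout.TryRun(() => DoAccessImport(app), TimeSpan.FromMinutes(10)))
//     accessProcess.Kill();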
I'm glad you have a solution that works for you. For the benefit of others reading this, I'll mention that SSIS would have been a possible solution to this problem. Note that the difference between SSIS and DTS is pretty much night and day.
It is not difficult to parameterize the export process so that for each client you could export a different set of views. You could loop over the lines of a text file containing the view names, or use a query against a configuration database to get the list of views. Other parameters could come from the same configuration database, on a per-client and/or per-view basis.
If necessary, there would also be the option of performing per-client pre- and post-processing by executing a child process or package, if such is configured.