I am working on an exe to export SQL data to Access. We do not want to use DTS, as we have multiple clients each exporting different views, and the overhead of setting up and maintaining the DTS packages is too much.
Edit: This process is automated for many clients every night, so the whole process has to be kicked off and controlled within a cursor in a stored procedure. This is because the data has to be filtered per project for the export.
I have tried many ways to get data out of SQL Server into Access, and the most promising has been using Access interop and running a
doCmd.TransferDatabase(Access.AcDataTransferType.acImport...
I have hit a problem when importing from views: running the import manually, it seems the view does not start returning data fast enough, so Access pops up a message box to say the import has timed out.
I think this is happening via interop as well, but because Access is hidden the method never returns!
Is there any way for me to prevent this message from popping up, or to increase the timeout of the import command?
My current plan of attack is to flatten the view into a table, then import from that table, then drop the flattened table.
Happy for any suggestions on how to tackle this problem.
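For reference, roughly what the flattening step would look like (a sketch only; the view/table names and the project filter come from our parameter setup, so the names below are placeholders):

// Sketch: materialise the filtered view into a plain table so Access imports
// from a table instead of waiting on the view to start returning rows.
// Assumes conn is an already-open connection; names are placeholders.
using System.Data.SqlClient;

static class ViewFlattener
{
    public static void Flatten(SqlConnection conn, int projectId)
    {
        var flatten = new SqlCommand(
            "SELECT * INTO dbo.ExportFlat FROM dbo.vwExport WHERE ProjectId = @projectId",
            conn);
        flatten.Parameters.AddWithValue("@projectId", projectId);
        flatten.CommandTimeout = 0;   // the view can be slow to start returning data
        flatten.ExecuteNonQuery();

        // ... run the Access import against dbo.ExportFlat here, then drop it:
        new SqlCommand("DROP TABLE dbo.ExportFlat", conn).ExecuteNonQuery();
    }
}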
Edit:
Further info on what I am doing:
We have multiple clients which each have a standard data model. One of the 'modules' is an Access exporter (sproc). It reads the views to export from a parameter table and then exports them. The views are filtered by project, and an Access file is created for each project (every view has a project field).
We are running SQL 2000 and are not moving to SQL 2005 any time soon; we will probably jump to 2008 in quite a few months.
We then have a module execution job which executes the configured module on each database. There are many imports/exports/other jobs that run in this module execution, and the Access exporter must be able to fit into this framework. So I need a generic SQL -> Access exporter which can be configured through our parameter framework.
Currently the sproc calls an exe I have written, and my exe opens Access via interop. I know this is bad for a server, BUT the module execution is written so that only a single module executes at a time, so the procedure will never be running more than one instance at a time.
Have you tried using VBA? You have more options for configuring connections, and I'm sure I've used a timeout adjustment in that context in the past.
Also, I've generally found it simplest just to query a view directly (as long as you can either connect with a nolock, or tolerate however long it takes to transfer); this might be a good reason to create the intermediate temp table.
There might also be a benefit to opening Access explicitly in single-user mode for this stuff.
We've done this using ADO to connect to both source and destination data. You can set connection and command timeout values as required and read/append to each recordset.
Not particularly quick, but we were able to leave it running overnight.
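For what it's worth, a rough ADO.NET version of that pattern (the view name, columns, project filter and .mdb path are placeholders, and the Jet provider string assumes an existing Access file):

using System.Data.OleDb;
using System.Data.SqlClient;

static class AdoExporter
{
    public static void ExportView(string sqlConnStr, string mdbPath, int projectId)
    {
        using (var src = new SqlConnection(sqlConnStr))
        using (var dst = new OleDbConnection(
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath))
        {
            src.Open();
            dst.Open();

            var read = new SqlCommand(
                "SELECT Col1, Col2 FROM dbo.vwExport WHERE ProjectId = @p", src);
            read.Parameters.AddWithValue("@p", projectId);
            read.CommandTimeout = 600;   // generous timeout for a slow view

            var write = new OleDbCommand(
                "INSERT INTO ExportTable (Col1, Col2) VALUES (?, ?)", dst);
            write.Parameters.Add(new OleDbParameter());
            write.Parameters.Add(new OleDbParameter());

            using (var reader = read.ExecuteReader())
            {
                while (reader.Read())
                {
                    write.Parameters[0].Value = reader[0];
                    write.Parameters[1].Value = reader[1];
                    write.ExecuteNonQuery();   // append one row at a time
                }
            }
        }
    }
}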
I have settled on a way to do this.
http://support.microsoft.com/kb/317114 describes the basic steps to start the Access process.
I have made the Process a class variable instead of a local variable of the ShellGetApp method. This way, when I call the Quit function for Access, if it doesn't close for whatever reason I can kill the process explicitly.
app.Quit(Access.AcQuitOption.acQuitSaveAll);

if (!accessProcess.HasExited)
{
    Console.WriteLine("Access did not exit after being asked nicely, killing process manually");
    accessProcess.Kill();
}
I have then used a method timeout function to give the Access call a timeout. If it times out, I can kill the Access process as well (the timeout could be due to a dialog window popping up, and I do not want the process to hang forever). I got the timeout method here:
Implement C# Generic Timeout
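In essence, the wrapped call ends up looking roughly like this (simplified; the linked answer has a more general implementation, and the DSN and object names below are placeholders):

// Requires a reference to the Access interop assembly.
using System;
using System.Diagnostics;
using System.Threading;
using Access = Microsoft.Office.Interop.Access;

static class TimedImport
{
    public static void ImportWithTimeout(Access.Application app, Process accessProcess, TimeSpan timeout)
    {
        Exception error = null;
        var worker = new Thread(() =>
        {
            try
            {
                // Placeholder DSN and object names.
                app.DoCmd.TransferDatabase(
                    Access.AcDataTransferType.acImport,
                    "ODBC Database",
                    "ODBC;DSN=MyExportDsn",
                    Access.AcObjectType.acTable,
                    "dbo.ExportFlat",
                    "ExportTable");
            }
            catch (Exception ex) { error = ex; }
        });

        worker.Start();
        if (!worker.Join(timeout))
        {
            // A dialog has probably popped up and the call will never return.
            Console.WriteLine("Import timed out, killing the hidden Access process");
            accessProcess.Kill();
        }
        else if (error != null)
        {
            throw error;
        }
    }
}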
I'm glad you have a solution that works for you. For the benefit of others reading this, I'll mention that SSIS would have been a possible solution to this problem. Note that the difference between SSIS and DTS is pretty much night and day.
It is not difficult to parameterize the export process, such that for each client you could export a different set of views. You could loop over the lines of a text file containing the view names, or use a query against a configuration database to get the list of views. Other parameters could come from the same configuration database, on a per-client and/or per-view basis.
If necessary, there would also be the option of performing per-client pre- and post-processing, by executing a child process or package, if such is configured.
Ok, I guess I need to make myself a bit clearer, sorry.
I DO NOT have a problem with the app itself. It runs single or multiple instances just fine; we have mechanisms built in to prevent cross-instance interference, file/record locking, etc.
The issue is that sometimes, when running several instances (someone goes and clicks on myapp.exe several times manually), one or more instances can crash.
Be it from bad data in the file, a lost DB connection, whatever. I am still trying to figure out the sources of some of the unexplained crashes and hung instances.
What I am looking for is a way to set up a monitoring process that:
A. can identify each running instance of the app as a separate entity;
B. can check whether that one instance is running and not hung/crashed. This may be mediated by me changing the code to force the app to quit if a fatal error is detected (an "I crashed and cannot recover, so I quit" kind of setup);
C. can start additional instances up to the total desired count. That is, I want a minimum number of instances running at the same time; if the current count is lower, start new ones until the desired count is reached.
What would be the best approach to set this up?
I have a custom in-house application written with a mixed C++ and C# API.
It is a file-processing parser for an EDI process. It takes data from flat files, or a flat-file representation stored in a DB table (we have other modules that store the contents of a flat file in a DB varchar field), and processes and stores the data in the appropriate DB tables.
It works fine for the most part, but several clients need to run multiple instances of it to speed up the process.
The problem is that when the app crashes, there is no automatic way I know of to identify the instance that crashed, shut it down, and restart it.
I need help identifying my options here.
How can I run the same EXE multiple times, yet monitor and manage each instance in the event that one misbehaves?
A code change is possible, but C++ is not my main language, so any extensive changes will be a nightmare for me.
PS: the app is a mix of C++ and C# modules with an MSSQL DB backend.
Thanks.
The process is monitored with MSSQL SPs; an issue can be identified by a lack of new records processed, among other factors.
My needs are to be able to:
1. Start several instances of the app at the same time.
2. Monitor each instance of the running EXE and, if a crash is detected, kill and restart that one instance. I have a kill switch in the app that can shut it down gracefully, but I need a way to monitor and restart it, and only that one.
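Putting A, B and C together, the rough shape of the watchdog I am imagining (a sketch only; the exe name/path and instance count are placeholders, and detecting a genuinely hung instance would still rely on the SP-based progress checks mentioned above):

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

static class Watchdog
{
    const string ExeName = "MyParser";               // process name without .exe (placeholder)
    const string ExePath = @"C:\apps\MyParser.exe";  // placeholder path
    const int DesiredInstances = 3;                  // placeholder count

    static void Main()
    {
        while (true)
        {
            // A: each running instance shows up as its own Process object (unique Id).
            var instances = Process.GetProcessesByName(ExeName);

            // B: kill anything that looks dead. Responding only catches a hung UI
            // message pump; a console/service app needs its own heartbeat check
            // (e.g. the "no new records processed" SP check).
            foreach (var p in instances.Where(p => !p.Responding))
            {
                Console.WriteLine("Killing unresponsive instance, PID {0}", p.Id);
                p.Kill();
            }

            // C: top back up to the desired instance count.
            int alive = Process.GetProcessesByName(ExeName).Length;
            for (int i = alive; i < DesiredInstances; i++)
                Process.Start(ExePath);

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}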
I'm currently researching the ability for a T-SQL trigger to fire off the printing of an SSRS report when records are inserted into a table. The closest thing I've found to accomplish this is in ScottLenart's comments here. I have a few parameters I need to pass to the report, and I want to send the print job to a specific network printer. I'm wondering if this is something I could build into a SQL CLR assembly (though I know that seems like the wrong way to use SQL CLR), or if using xp_cmdshell to kick off some custom C# app that prints it is my best approach.
I figure I may have to look into using some kind of queue to put the print requests into when the trigger fires so that it doesn't block a bunch of other queries while things are printing, or something, but I'm trying to figure out how to get the document printed as close as I can to when the record is created or updated in the database.
I'm looking to deploy this with SQL Server 2012.
It is probably doable, but this is not something you should do. Imagine you're in the middle of a transaction, holding locks and blocking access to resources, and someone has to feed paper to the printer?
If you really have a strong business case for doing it this way, use Service Broker to make the call async (so the transaction can commit and release the resources).
Use a trigger to insert records into a "print queue" table of some sort. Have a scheduled SQL Server job that emulates ScottLenart's process from there onward. dean is right - don't have the trigger doing the actual execution of the report/print operation; merely use the trigger to pass/prep that workload to another process.
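To illustrate the queue-table idea, a minimal sketch of the worker that a SQL Agent job step could run (the table/column names, report server URL, report path and parameter are all placeholders, and actually pushing the rendered PDF to a specific network printer is left out):

using System.Data.SqlClient;
using System.IO;
using System.Net;

static class PrintQueueWorker
{
    public static void ProcessQueue(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var pending = new SqlCommand(
                "SELECT Id, InvoiceId FROM dbo.PrintQueue WHERE Printed = 0", conn);

            using (var reader = pending.ExecuteReader())
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    int invoiceId = reader.GetInt32(1);

                    // SSRS URL access: render the report to PDF with the parameter applied.
                    string url = "http://reportserver/ReportServer?/Reports/Invoice"
                               + "&rs:Command=Render&rs:Format=PDF&InvoiceId=" + invoiceId;

                    using (var client = new WebClient { UseDefaultCredentials = true })
                    {
                        client.DownloadFile(url, Path.Combine(@"C:\PrintSpool", id + ".pdf"));
                    }

                    // Mark the row as printed (separate command, omitted) and hand the
                    // PDF to whatever prints it on the target network printer.
                }
            }
        }
    }
}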
http://rockingtechnology.blogspot.co.uk/2011/06/oracle-backup-and-restore-code-in-cnet.html
As per the proposed code in the above article, more specifically:
ProcessStartInfo psi = new ProcessStartInfo();
psi.FileName = "C:/oracle/product/10.2.0/db_1/BIN/exp.exe";
// exp.exe arguments (credentials, FILE=, etc.) are omitted in this excerpt
Process process = Process.Start(psi);
process.WaitForExit();
process.Close();
How can I expect the database to be affected, with regard to interruption of CRUD operations from elsewhere, once Process.Start(psi) is called and, hence, exp.exe executes?
Using Oracle's exp.exe process - will the sessions of all users currently writing to the db in question be killed, for example? I'd imagine (or at least hope) not, but I haven't been able to find documentation to confirm this.
EXP and IMP are not proper backup and recover tools. They are intended for exchanging data and data structures between Oracle databases. This is also true for their replacement, Data Pump (EXPDP and IMPDP).
Export unloads to a file, so it won't affect any users on the system. However, if you want a consistent set of data, you need to use the CONSISTENT=Y parameter if any other users are connecting to the system.
Interestingly, Data Pump does not have a CONSISTENT parameter. It unloads tables (or table partitions) as single transactions, but the only way to guarantee consistency across all database objects is to use the FLASHBACK_SCN parameter (or kick all your users off the system).
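Tying this back to the code in the question, a hedged example of passing CONSISTENT=Y to exp via the same ProcessStartInfo pattern (the path, credentials, TNS alias and file names are placeholders):

using System.Diagnostics;

static class ConsistentExport
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\oracle\product\10.2.0\db_1\BIN\exp.exe",
            // CONSISTENT=Y makes the export a single read-consistent snapshot,
            // so concurrent writers are not blocked and the dump stays coherent.
            Arguments = "system/password@ORCL FULL=Y CONSISTENT=Y FILE=full.dmp LOG=full.log",
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}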
"It is all in aid of DR."
As a DR solution this will work, with the following provisos:
The users will lose all data since the last export (obvious)
You will need to ensure the export is consistent across all objects
Imports take time. A lot of time if you have many tables or a lot of data, plus indexes, etc.
Also remember to export the statistics as well as the data.
You're really asking what effects the (old) Oracle export tool (exp) has on the database. It's a logical backup so you can think of the effects generally the same way you would think of running multiple SELECT queries against your database. That is, other sessions don't get killed but normal locking mechanisms may prevent them from accessing data until exp is done with it and this could, potentially, lead to timeouts.
EXP is the original export utility. It is discontinued and not supported in the most recent version (11g).
You can use EXPDP instead, although the export files are written on the server instead of the client machine.
Both utilities issue standard SELECT commands to the database, and since readers don't interfere with concurrency in Oracle (writers don't block readers, readers don't block readers), this will not block your other DB operations.
Since it issues statements, however, it may increase resource usage, especially IO, which could impact performance for concurrent activity.
Whatever tool you use, you should spend some time learning about the options (also since you may want to use it as a logical copy, make sure you test the respective import tools IMP and IMPDP). Also a word of warning: these tools are not backup tools. You should not rely on them for backup.
I have a project ongoing at the moment which is to create a Windows Service that essentially moves files around between multiple paths. A job may be to, every 60 seconds, get all files matching a regular expression from an FTP server and transfer them to a network path, and so on. These jobs are stored in a SQL database.
Currently, the service takes the form of a console application, for ease of development. Jobs are added using an ASP.NET page, and can be edited using another ASP.NET page.
I have some issues though, some relating to Quartz.NET and some general issues.
Quartz.NET:
1: This is the biggest issue I have. Seeing as I'm developing the application as a console application for the time being, I'm having to create a new Quartz.NET scheduler in all my files/pages. This is causing multiple confusing errors, but I just don't know how to instantiate the scheduler in one global file and access it in my ASP.NET pages (so I can get details into a grid view to edit, for example). See the sketch after this list for the kind of thing I mean.
2: My manager suggested I could look into having multiple 'configurations' inside Quartz.NET. By this, I mean that at any given time an administrator can change the application's configuration so that only specifically chosen jobs run. What would be the easiest way of doing this in Quartz.NET?
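To make question 1 concrete, this is roughly the kind of single shared scheduler I am after (a sketch assuming the synchronous Quartz.NET 1.x/2.x-era API; the names are illustrative only):

// A sketch only: one scheduler instance shared by the service and the ASP.NET pages,
// instead of newing up a scheduler per page. StdSchedulerFactory reads quartz.config /
// app.config, so both sides can be pointed at the same job store.
using Quartz;
using Quartz.Impl;

public static class SchedulerProvider
{
    private static readonly object SyncRoot = new object();
    private static IScheduler scheduler;

    public static IScheduler Scheduler
    {
        get
        {
            lock (SyncRoot)
            {
                if (scheduler == null)
                {
                    scheduler = new StdSchedulerFactory().GetScheduler();
                    scheduler.Start();
                }
                return scheduler;
            }
        }
    }
}

// In an ASP.NET page, for example:
//   var jobGroups = SchedulerProvider.Scheduler.GetJobGroupNames();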
General:
1: One thing that's crucial in this application is assurance that the file has been moved and is actually on the target path (after the move the original file is deleted, so it would be disastrous if the file were deleted when it hadn't actually been copied!). I also need to make sure that the file contents match on the initial path and the target path, to give peace of mind that what has been copied is right. I'm currently doing this by MD5-hashing the initial file, copying the file, and, before deleting it, making sure the file exists on the server. Then I hash the file on the server and make sure the hashes match up (see the sketch after this list). Is there a simpler way of doing this? I'm concerned that the hashing may put a strain on the system.
2: This relates to the above question, but isn't as important, as not even my manager has any idea how I'd do this, but I'd love to implement it. An issue would arise if a job is executed while a file is being written to, which may mean that a half-written file is transferred, making it totally useless; it would also be bad, as the initial file would be destroyed while it's being written to! Is there a way of checking for this?
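For question 1, the verify-before-delete step currently looks roughly like this (a sketch; paths are placeholders and error handling is minimal):

// Hash the source, copy, confirm the copy exists and the hashes match,
// and only then delete the original.
using System;
using System.IO;
using System.Security.Cryptography;

static class SafeMove
{
    static string Md5Of(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(md5.ComputeHash(stream));
    }

    public static void MoveVerified(string source, string target)
    {
        string sourceHash = Md5Of(source);
        File.Copy(source, target, true);

        if (File.Exists(target) && Md5Of(target) == sourceHash)
            File.Delete(source);   // only now is it safe to remove the original
        else
            throw new IOException("Copy verification failed for " + source);
    }
}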
As you've discovered, running the Quartz scheduler inside an ASP.NET application presents many problems. Check out Marko Lahma's response to your question about running the scheduler inside of an ASP.NET web app:
Quartz.Net scheduler works locally but not on remote host
As far as preventing race conditions between your jobs (e.g. trying to delete a file that hasn't actually been copied to the file system yet), what you need to implement is some sort of job chaining:
http://quartznet.sourceforge.net/faq.html#howtochainjobs
In the past I've used the TriggerListeners and JobListeners to do something similar to what you need. Basically, you register event listeners that wait to execute certain jobs until after another job is completed. It's important that you test out those listeners, and understand what's happening when those events are fired. You can easily find yourself implementing a solution that seems to work fine in development (false positive) and then fails to work in production, without understanding how and when the scheduler does certain things with regards to asynchronous job execution.
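As a rough illustration of the listener approach (this uses Quartz.NET 2.x-era API names, which differ slightly between versions, and the job keys are hypothetical), a listener that only triggers the "delete source file" job once the copy job has completed without an exception:

using Quartz;
using Quartz.Impl.Matchers;

public class CopyThenDeleteListener : IJobListener
{
    public string Name { get { return "CopyThenDeleteListener"; } }

    public void JobToBeExecuted(IJobExecutionContext context) { }

    public void JobExecutionVetoed(IJobExecutionContext context) { }

    public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
    {
        if (jobException == null)
        {
            // Copy succeeded: now it is safe to fire the delete job.
            context.Scheduler.TriggerJob(new JobKey("deleteSourceFileJob"));
        }
    }
}

// Registration, in the scheduler setup code:
//   scheduler.ListenerManager.AddJobListener(
//       new CopyThenDeleteListener(),
//       KeyMatcher<JobKey>.KeyEquals(new JobKey("copyFileJob")));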
Good luck! Schedulers are fun!
I have an importer process which is running as a Windows service (in debug mode as an application); it processes various XML documents and CSVs and imports them into a SQL database. All has been well until I have had to process a large amount of data (120k rows) from another table (as I do for the XML documents).
I am now finding that the SQL server's memory usage is hitting a point where it just hangs. My application never receives a timeout from the server and everything just goes STOP.
I am still able to make calls to the database server separately, but that application thread is just stuck, with no obvious thread in SQL Activity Monitor and no activity in Profiler.
Any ideas on where to begin solving this problem would be greatly appreciated as we have been struggling with it for over a week now.
The basic architecture is C# 2.0 using NHibernate as an ORM; data is being pulled into the actual C# logic, processed, and then spat back into the same database, along with logs into other tables.
The only other problem which sometimes happens instead is that, for some reason, a cursor is being opened on this massive table, which I can only assume is being generated by ADO.NET; a statement like exec sp_cursorfetch 180153005,16,113602,100 is being called thousands of times, according to Profiler.
When are you COMMITting the data? Are there any locks or deadlocks (sp_who)? If 120,000 rows is considered large, how much RAM is SQL Server using? When the application hangs, is there anything about the point where it hangs (is it an INSERT, a lookup SELECT, or what?)?
It seems to me that that commit size is way too small. Usually in SSIS ETL tasks, I will use a batch size of 100,000 for narrow rows with sources over 1,000,000 in cardinality, but I never go below 10,000 even for very wide rows.
I would not use an ORM for large ETL, unless the transformations are extremely complex with a lot of business rules. Even still, with a large number of relatively simple business transforms, I would consider loading the data into simple staging tables and using T-SQL to do all the inserts, lookups etc.
Are you running this into SQL using BCP? If not, the transaction logs may not be able to keep up with your input. On a test machine, try turning the recovery mode to Simple (non-logged), or use the BCP methods to get data in (they bypass T logging).
Adding on to StingyJack's answer ...
If you're unable to use straight BCP due to processing requirements, have you considered performing the import against a separate SQL Server (separate box), using your tool, then running BCP?
The key to making this work would be keeping the staging machine clean -- that is, no data except the current working set. This should keep the RAM usage down enough to make the imports work, as you're not hitting tables with -- I presume -- millions of records. The end result would be a single view or table in this second database that could be easily BCP'ed over to the real one when all the processing is complete.
The downside is, of course, having another box ... And a much more complicated architecture. And it's all dependent on your schema, and whether or not that sort of thing could be supported easily ...
I've had to do this with some extremely large and complex imports of my own, and it's worked well in the past. Expensive, but effective.
I found out that it was NHibernate creating the cursor on the large table. I have yet to understand why, but in the meantime I have replaced the large-table data access model with straightforward ADO.NET calls.
Since you are rewriting it anyway, you may not be aware that you can do a BCP-style bulk copy directly from .NET via the System.Data.SqlClient.SqlBulkCopy class. See this article for some interesting performance info.
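For anyone landing here later, a minimal SqlBulkCopy sketch (assuming the rows have already been shaped into a DataTable; the destination table and settings are placeholders):

using System.Data;
using System.Data.SqlClient;

static class BulkLoader
{
    public static void BulkLoad(DataTable rows, string connectionString)
    {
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.ImportStaging"; // placeholder
            bulk.BatchSize = 10000;       // commit in chunks rather than one huge batch
            bulk.BulkCopyTimeout = 0;     // no timeout for a long-running load
            bulk.WriteToServer(rows);
        }
    }
}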