Determining what stored procedures are run from an application - C#

We have a third-party application where data is entered manually, one record at a time, with the user reading the values from an Excel spreadsheet.
I have been asked to enable a way to upload data to the SQL Server database directly from the Excel spreadsheets. This would save a ton of time and prevent mistakes during manual data entry. I've done this type of work before, but with in-house programs. I need to find out what processes are run when the Save button is clicked. Is there a way to determine this or something similar (like exactly what tables/triggers are involved besides the ones I already know of)?

The best way to discover the database activity is to set up a test copy of the application and a test database where only one person is working. Then start a SQL Server Profiler trace and record what happens in the database when the Save button is clicked.
That said, do not be at all surprised if you do not get a complete picture of what you need. This will not reveal anything that happens to the data within the client application before it is sent to the database. Reverse engineering an application can be just as error prone as the manual effort involved in normal data entry. On top of that, there is no guarantee that the process will stay the same in future versions of the application.

If you don't have access to the original source code, you can run something like this in SSMS to see which queries have been run recently by a specific machine (substitute your host name in the SET statement, then walk through the process in the application to get a pretty good idea of what is going on). This was adapted from an existing answer on SO to work for a single host, but I have no idea how to track it down to give proper credit...
-- Show the most recent statement executed by each session coming from the given host
DECLARE @host varchar(50);
SET @host = 'MyComputerName';

SELECT text
FROM sys.dm_exec_connections
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle)
WHERE session_id IN
    (SELECT des.session_id
     FROM sys.dm_exec_sessions des
     WHERE des.is_user_process = 1 AND des.host_name = @host);

Related

Best way to read and write time-critical data?

I have .txt files that are overwritten with data from software every 5-10 seconds. I then have a WPF application that reads and displays this data every second. Here are my issues:
Currently the text files are stored on a server and there are a bunch of users running this application to view this "live" data.
HOWEVER, due to an I/O bug in Windows, the files "lock" up periodically and cause all of the applications to lock up (they can't even be closed in Task Manager).
Therefore I decided to have the data copied from the text files to SQL Server; however, from my understanding there's no way to overwrite the data in the SQL table. One must drop the table and create a new one. This causes a 10+ second delay in updating the data, which cannot happen.
My question is, there HAS to be a way to rapidly read and write data from somewhere, be it a database, etc. I am not sure where else to turn.
My constraints:
I'm stuck with Server 2008, I have to use these text files, and I have to display the data in my WPF application. Does anyone have any suggestions for a method that can handle this type of I/O?
All help is greatly appreciated, I'm at a complete loss..
It seems like you may not have extensive experience with database technology, so let me propose something different:
string text = System.IO.File.ReadAllText(path);
Then perhaps you can take the text and do what you want with it, such as dumping it in a queue for action in another part of the application.
ReadAllText can throw several exceptions, documented here:
https://msdn.microsoft.com/en-us/library/ms143368(v=vs.110).aspx
I'd be on the lookout for UnauthorizedAccessException since, as you said, the file seems to lock up when multiple users are accessing it.
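Under the assumption that the writer keeps the file open while overwriting it (a guess, since the question doesn't say), one workable sketch is to open the file with a shared read handle and retry briefly on IOException rather than letting the reader hang; the method name, retry count and delay below are hypothetical:

using System.IO;
using System.Threading;

static string ReadSharedText(string path, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            // FileShare.ReadWrite lets this reader open the file even while the writer holds it open.
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            // The writer may be mid-overwrite; back off briefly and try again instead of hanging the UI.
            Thread.Sleep(200);
        }
    }
}

The WPF app could call this on its one-second timer and hand the resulting text to the view model or a queue, as suggested above.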

Loading lots of Azure blob data in a WPF app

I've been given a task to build a prototype for an app. I don't have any code yet, as the solution concepts that I've come up with seem stinky at best...
The problem:
the solution consists of various Azure projects which do stuff to lots of data stored in Azure SQL databases. Almost every action that happens creates a gzipped log file in blob storage. So that's one .gz file per log entry.
We should also have a small desktop (WPF) app that should be able to read, filter and sort these log files.
I have absolutely 0 influence on how the logging is done, so this is something that cannot be changed to solve this problem.
Possible solutions that I've come up with (conceptually):
1:
connect to the blob storage
open the container
read/download blobs (with applied filter)
decompress the .gz files
read and display
The problem with this is that, depending on the filter, this could mean a whole lot of data to download (which is slow), and process (which will also not be very snappy). I really can't see this as a usable application.
2:
create a web role which will run a WCF or REST service
the service will take the filter params and other stuff and return a single xml/json file with the data; the processing will be done in the cloud
With this approach, will I run into problems with decompressing these files if there are a lot of them (and will it take up extra space on the storage/compute instance where the service is running)?
EDIT: what I mean by filter is to limit the results by date and severity (info, warning, error). The .gz files are saved in a structure that makes this quite easy, and I will not be filtering by looking into the files themselves.
3:
some other elegant and simple solution that I don't know of
I'd also need some way of making the app update the displayed logs in real time, which I suppose would need to be done with repeated requests to the blob storage/service.
This is not one of those "give me code" questions. I am looking for advice on best practices, or similar solutions that worked for similar problems. I also know this could be one of those "no one right answer" questions, as people have different approaches to problems, but I have some time to build a prototype, so I will be trying out different things, and I will select the right answer, which will be the one that showed a solution that worked, or the one that steered me in the right direction, even if it does take some time before I actually build something and test it out.
As I understand it, you have a set of log files in Azure Blob storage that are formatted in a particular way (gzip) and you want to display them.
How big are these files? Are you displaying every single piece of information in the log file?
Assuming this is a log file, it is static and historical... meaning that once the log/gzip file is created it cannot be changed (you are not updating the gzip file once it is out on Blob storage). Only new files can be created...
One Solution
Why not create a worker role/job process that periodically goes out, scans the blob storage and builds a persisted "database" that you can display from? The nice thing about this is that you are not putting the unzipping/business logic for extracting the log files in a WPF app or UI.
1) I would have the worker role scan the log files in Azure Blob storage
2) Have some kind of mechanism to track which ones were processed and a current "state", maybe the UTC date of the last gzip file
3) Do all the unzipping/extracting of the log file in the worker role
4) Have the worker role place the content in a SQL database, Azure Table Storage or Distributed Cache for access
5) Access can be done by a REST service (ASP.NET Web API/Node.js etc)
You can add more things if you need to scale this out, for example run this as a job to re-process all of the log files from a given time (refresh all). I don't know the size of your data so I am not sure if that is feasible.
The nice thing about this is that if you need to scale your job (overnight), you can spin up 2, 3 or 6 worker roles, extract the content, and pass the result to a Service Bus or Storage Queue that would insert it into SQL, the cache, etc. for access.
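As a rough sketch of steps 1-4 (this uses the current Azure.Storage.Blobs SDK, which postdates the original answer; the container name, date prefix and connection string are hypothetical placeholders):

using System;
using System.IO;
using System.IO.Compression;
using Azure.Storage.Blobs;

// Hypothetical worker-role pass: list new .gz logs under a date prefix and extract their text.
var container = new BlobContainerClient("<storage-connection-string>", "logs");

foreach (var blobItem in container.GetBlobs(prefix: "2012/11/07/"))
{
    BlobClient blob = container.GetBlobClient(blobItem.Name);

    using Stream compressed = blob.OpenRead();                      // stream the blob, no temp file needed
    using var gzip = new GZipStream(compressed, CompressionMode.Decompress);
    using var reader = new StreamReader(gzip);

    string logEntry = reader.ReadToEnd();
    // ...parse logEntry and write it to SQL / Table Storage / cache here...
    Console.WriteLine($"{blobItem.Name}: {logEntry.Length} characters");
}

The tracking "state" from step 2 could be as simple as remembering the last prefix or blob name processed.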
Simply storing the blobs isn't sufficient. The metadata you want to filter on should be stored somewhere else where it's easy to filter and retrieve all the metadata. So I think you should split this into 2 problems:
A. How do I efficiently list all "gzips" with their metadata, and how can I apply a filter on these gzips in order to show them in my client application?
Solutions
Blobs: Listing blobs is slow and filtering is not possible (you could group in a container per month or week or user or ... but that's not filtering).
Table Storage: Very fast, but searching is slow (only PK and RK are indexed)
SQL Azure: You could create a table with a list of "gzips" together with some other metadata (like user that created the gzip, when, total size, ...). Using a stored procedure with a few good indexes you can make search very fast, but SQL Azure isn't the most scalable solution
Lucene.NET: There's an AzureDirectory for Windows Azure which makes it possible to use Lucene.NET in your application. This is a super fast search engine that allows you to index your 'documents' (metadata) and this would be perfect to filter and return a list of "gzips"
Update: Since you only filter on date and severity you should review the Blob and Table options:
Blobs: You can create a container per date+severity (20121107-low, 20121107-medium, 20121107-high, ...). Assuming you don't have too many blobs per date+severity, you can simply list the blobs directly from the container. The only issue you might have here is that a user will want to see all items with a high severity from the last week (7 days). This means you'll need to list the blobs in 7 containers.
Tables: Even though you say table storage or a database isn't an option, do consider table storage. Using partition and row keys you can easily filter in a very scalable way (you can also use CompareTo to get a range of items, for example all records between 1 and 7 November). Duplicating data is perfectly acceptable in Table Storage. You could include some data from the gzip in the Table Storage entity in order to show it in your WPF application (the most essential information you want to show after filtering). This means you'll only need to process the blob when the user opens/double-clicks the record in the WPF application.
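A small sketch of that filter with the current Azure.Data.Tables SDK (the table name, the date+severity partition-key convention and the "Message" column are hypothetical, and the original answer predates this library):

using System;
using Azure.Data.Tables;

// Hypothetical query: all "high" severity entries for 7 November 2012,
// assuming entities are partitioned by date+severity (e.g. "20121107-high").
var table = new TableClient("<storage-connection-string>", "LogEntries");

foreach (TableEntity entity in table.Query<TableEntity>(filter: "PartitionKey eq '20121107-high'"))
{
    // RowKey might be a timestamp or sequence number; extra columns hold the summary to display.
    Console.WriteLine($"{entity.RowKey}: {entity.GetString("Message")}");
}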
B. How do I display a "gzip" in my application (after double-clicking on a search result, for example)?
Solutions
Connect to the storage account from the WPF application, download the file, unzip it and display it. This means that you'll need to store the storage account credentials in the WPF application (or use SAS or a container policy), and if you decide to change something in the backend about how files are stored, you'll also need to change the WPF application.
Connect to a Web Role. This Web Role gets the blob from blob storage, unzips it and sends it over the wire (or sends it compressed in order to speed up the transfer). In case something changes in how you store files, you only need to update the Web Role.
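For the second option, a hypothetical client-side call might look like the sketch below: the web role URL, route and query string are placeholders, and the payload is assumed to stay gzipped on the wire so the WPF app decompresses it itself:

using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> FetchLogAsync(string logId)
{
    using var http = new HttpClient();
    using var response = await http.GetAsync($"https://<your-web-role>/api/logs/{logId}?compressed=true");
    response.EnsureSuccessStatusCode();

    using var body = await response.Content.ReadAsStreamAsync();
    using var gzip = new GZipStream(body, CompressionMode.Decompress);   // undo the gzip applied in blob storage
    using var reader = new StreamReader(gzip);
    return await reader.ReadToEndAsync();                                // plain log text, ready to bind in the WPF view
}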

How do I restrict the usage of a C# application

I want to restrict the use of any exe file to a specific number of runs, let's say 10. After that limit is reached the user shall not be able to run the exe file; on running the exe file for the 11th time, he/she shall be greeted with the message "Exceeded Trial Run".
This is very much possible in C, like this - http://www.gidforums.com/t-22362.html
An example of accessing the PE header is here - http://code.cheesydesign.com/?p=572 , but it checks the timestamp, whereas I want the number of times the application has been launched.
I don't want to change the registry.
All suggestions are welcome.
Barring the existing comment about whether you should do this or not, the only option other than the registry (which you've ruled out) is to save something to a file in an encrypted fashion. Installing the app or exe would create the file, and each launch of the application would decrypt, update and re-encrypt the file. But even then, that is subject to a user changing things without you wanting it. Security through obscurity is always a pain.
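A minimal sketch of that idea using DPAPI (System.Security.Cryptography.ProtectedData); the helper name and file path are hypothetical, and, as noted, a user who copies the file aside and restores it later defeats it:

using System;
using System.IO;
using System.Security.Cryptography;

// Hypothetical helper: keep a DPAPI-encrypted run counter in a file under the user's profile.
static int IncrementRunCount(string path)
{
    int count = 0;
    if (File.Exists(path))
    {
        byte[] decrypted = ProtectedData.Unprotect(File.ReadAllBytes(path), null, DataProtectionScope.CurrentUser);
        count = BitConverter.ToInt32(decrypted, 0);
    }

    count++;
    byte[] encrypted = ProtectedData.Protect(BitConverter.GetBytes(count), null, DataProtectionScope.CurrentUser);
    File.WriteAllBytes(path, encrypted);
    return count;
}

At startup the exe would call this and refuse to continue (showing "Exceeded Trial Run") once the returned count goes past 10.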
The surest way to prevent a user from exceeding some number of trial runs is to issue them a registration code (a GUID would work well) and then keep track of the remaining trial runs on your own database server. It would be exceedingly difficult to guess another user's GUID and impossible for them to hack the trials remaining (short of hacking into your server).
When the application runs, it could simply hit a small web service that would return the status of the software. If the web service cannot be reached, then the application would ask the user to connect to the internet and try again.
Short of that, there are not many options that could not be easily hacked. Even if you encrypted the number of trials left, all the user would need to do is copy the file somewhere else, then when they've reached their limit delete the original file and replace it with the copy... repeat ad infinitum.
The nice thing about this model is that, when the user purchases the full version, all you need to do is update your database and grant them full access.
If you wanted to let fully-paid users continue using the software without needing to connect to the internet, then on the first connection to the web server after paying, the software could store a key file somewhere confirming the user's paid subscription. You could even create a hash based on the user's registration number to ensure that one user cannot use another user's key file.
If the subscription is annual, then a paid user's application could requery the server whenever an internet connection is available and recheck to make sure their registration is still valid. Or your key file could contain some encrypted date at which it would no longer be valid.
EDIT: A trial run based on a date would be much easier to implement. You could provide a key file with an encrypted date. Since the date would not change, the user would have a much harder time hacking the key file. Even if they borrowed or stole someone else's, they'd only get an extra week or two (depending on your trial period) before that, too, would become invalid. The difference is that a date-based key file is static, making it much harder to spoof.
Now, another alternative is to combine the two approaches. You could have a countdown with an encrypted date in the same key file. That would ensure that, even if the user attempts to copy/replace the key file, the trial would still eventually end (maybe 10 uses/1 month, whichever is reached first).
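As a sketch of the date-based key file (the "expiry|signature" file format and the embedded product secret are assumptions; anyone who extracts the secret from the exe can still forge keys):

using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical check of a key file containing "<expiry-date>|<signature>".
static bool IsTrialValid(string keyFileContent, byte[] productSecret)
{
    string[] parts = keyFileContent.Split('|');
    if (parts.Length != 2) return false;

    using var hmac = new HMACSHA256(productSecret);
    string expected = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[0])));
    if (expected != parts[1]) return false;                       // tampered or forged key file

    return DateTime.UtcNow <= DateTime.Parse(parts[0]);           // still inside the trial window?
}

Combining this with the run counter sketched earlier gives the "10 uses or 1 month, whichever is reached first" behaviour.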

ASP.NET, log file and database - need tips

I'm planning to develop an application that will read a log file and display statistics.
The first question, I guess, is whether I need a database or not.
Will it be quicker to run queries against the database, or to read the file each time a user wants to see the statistics?
If I choose the database method, I will have to read the log file and update the database on a regular basis (between 1 and 10 minutes).
Is this article still good do you think (as it's from 2005): http://www.codeproject.com/KB/aspnet/ASPNETService.aspx
Or is it better to develop a Windows service? In that case, can I add the Windows service to my ASP.NET project in Visual Studio, or does it need to be a separate project?
You mentioned ASP.NET, so I believe it is a web application. In that case I would suggest using a database; this is a more robust, flexible and distributed solution.
Anyway, consider using log4net; then you can easily switch between file and DB output at any time by simply adding another appender section to the configuration file.
If I choose the database method, I will have to read the log file and update the database on a regular basis (between 1 and 10 minutes)
Exactly, you're going to have to do it anyway. The database basically just becomes another bottleneck at that point. For this type of app, there's no need to do anything other than read the file when the user requests to see it, and display the results on the fly.
No need to have a Windows service either. I mean, I don't know all your details, but I'm assuming the log file is in a directory on your machine, so just access it, open it, parse it, and display it to the user when they choose to see it on the front end.
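As a minimal sketch of that read-on-request approach (the log path, the "ERROR" marker and the single statistic are hypothetical; real parsing would be more involved):

using System;
using System.IO;
using System.Linq;

static class LogStats
{
    // Cache the last result keyed by the file's write time, so repeated page hits don't re-parse.
    private static DateTime _stamp;
    private static int _errorCount;

    public static int GetErrorCount(string logPath)
    {
        DateTime stamp = File.GetLastWriteTimeUtc(logPath);
        if (stamp != _stamp)
        {
            _errorCount = File.ReadLines(logPath).Count(line => line.Contains("ERROR"));
            _stamp = stamp;
        }
        return _errorCount;
    }
}

The ASP.NET page simply calls LogStats.GetErrorCount(...) when the user asks for the statistics; no service or database is involved.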
If the only data you are going to work with is LOG files, you don't need any database.
But I assume that your application will parse the log files, create some statistics and STORE them somewhere, to make it possible for users to go back and see statistics for some period of time. It is not great if you are "re-calculating" those statistics every time (furthermore, you might lose the original log files by then).
Even though you could also store it in files, I do not recommend that at all. Don't be afraid of using a database, and don't be concerned about application performance at such an early stage. Do whatever helps you most to solve the problem, and for me, using a database would solve it.

Updating local SQL Server databases with ClickOnce Deployment

I'm building an application which will use some settings and a local SQL Server database. My question is, when it comes time to update the application, will the settings or data be overwritten?
What happens if I want to change some tables around in the future?
Frankly, I've always thought that ClickOnce's way of handling data is dangerous. If you deploy a database with ClickOnce, it puts it in the DataDirectory. Then when you deploy an update to the application, it copies the database forward to the folder where the next version of the app is installed. But if the database has changed, it copies the old one forward to the folder + \pre and puts a new one in the DataDirectory. If you don't realize you changed it, it replaces it anyway. If you so much as open a SQL CE database and check out the data structures, wham, it gets deployed. Surprise!
I think storing the data in another folder under the user's profile makes more sense and is safer. Then YOU can choose when to update your database. This article shows how to move your data so it's safe from ClickOnce updates.
Additionally, when you DO want to make changes to your database, you can use SQL statements to do so, such as ALTER TABLE and so on. I've created a script, deployed it as one long string resource (with carriage returns in it), and had the application split the resource apart by carriage return and execute the statements one by one. You get the general idea.
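A rough sketch of that split-and-execute step, assuming a SQL Server Compact database and a script held as a string resource (the method name, connection string and sample statement are hypothetical):

using System;
using System.Data.SqlServerCe;

// Hypothetical upgrade routine: the script holds one statement per line,
// e.g. "ALTER TABLE Customers ADD Region NVARCHAR(50)".
static void RunUpgradeScript(string connectionString, string script)
{
    string[] statements = script.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries);

    using (var connection = new SqlCeConnection(connectionString))
    {
        connection.Open();
        foreach (string statement in statements)
        {
            using (var command = new SqlCeCommand(statement, connection))
            {
                command.ExecuteNonQuery();   // run each ALTER TABLE / UPDATE statement in turn
            }
        }
    }
}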
One comment about user settings -- You can change these programmatically via the UI (i.e. give the user the capability). But note that if you change the certificate of your application and are running a high enough version of .NET (3.5, 4), it won't cause you a problem per se, but the application DOES take on a different identity as a ClickOnce application, and the user settings are not carried forward when the next update is published. For this reason, I also rolled my own XML file for config data, and I store it in LocalApplicationData as well.
User-level settings will not be overwritten during an update via ClickOnce, but you can push new application-level settings, because the [YourExeName].exe.config file will be overwritten during an update.
If you need to overwrite user-level settings, you will have to do this programmatically.
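A minimal sketch of doing that programmatically (Properties.Settings is the designer-generated settings class for the project; the setting names and the version flag below are invented purely for illustration):

// Hypothetical one-time overwrite of a user-scoped setting after an update,
// guarded by an invented "LastAppliedSettingsVersion" setting so it only runs once.
if (Properties.Settings.Default.LastAppliedSettingsVersion < 2)
{
    Properties.Settings.Default.RefreshIntervalSeconds = 30;   // new value existing users should pick up
    Properties.Settings.Default.LastAppliedSettingsVersion = 2;
    Properties.Settings.Default.Save();                        // persist to the user-level config
}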
