Up-to-date examples for using the Google Drive APIs That Really Work - C#

The company I work for has many gigabytes of data on the Google Drive cloud. A lot of this data is contained in individual files of several gigabytes each.
We would like to operate on this data:
a file at a time
remotely (without downloading a whole file to local disk)
with COTS applications that see the cloud as a local hard disk and may access the file non-linearly i.e. doing seeks and partial fetches from within a file
on Windows
I am trying to write a virtual disk driver that would appear to a COTS application as just another disk drive with read-only permissions. I am working in C#/.NET in Visual Studio. Nothing requires this development environment; it's just one I am comfortable using, and it should have the capabilities to do the job. I could be convinced to switch environments if this is a blocking impediment.
My biggest problem is that Google is changing their APIs faster than they are documenting the changes. I have found examples for downloading a (partial) file using HTTP and for reading directories to get file IDs, but when I try to run them they just don't work. If they worked, I'd have the building blocks for putting together a disk interface.
For example:
Installing the Google Drive API and Client Library was a challenge. The directions page https://developers.google.com/drive/quickstart-cs and its subsequent links to the library were clearly written before the latest V2 authentication library was released this month (October 2013). The C# code in the quick-start example produces all sorts of warnings about deprecated and obsolete functions that are going to go away with the next release, but there is no guidance (yet) on the new equivalents.
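For what it's worth, a minimal sketch of the newer OAuth flow that the updated client libraries appear to expect (Google.Apis.Auth.OAuth2); the client ID/secret, the "Drive.Auth.Store" folder name and the application name are placeholders:

    using System.Threading;
    using Google.Apis.Auth.OAuth2;
    using Google.Apis.Drive.v2;
    using Google.Apis.Services;
    using Google.Apis.Util.Store;

    // Authorize with the new (non-deprecated) OAuth 2.0 helper and build a Drive service.
    UserCredential credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        new ClientSecrets { ClientId = "YOUR_CLIENT_ID", ClientSecret = "YOUR_CLIENT_SECRET" },
        new[] { DriveService.Scope.DriveReadonly },   // read-only access is enough for this project
        "user",
        CancellationToken.None,
        new FileDataStore("Drive.Auth.Store")).Result;

    var service = new DriveService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,
        ApplicationName = "DriveVirtualDisk"          // placeholder application name
    });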
https://developers.google.com/drive/v2/reference/files/list shows how to list files on a drive. When I execute the Try It! at the bottom of the page without parameters, I get a list of all my files. When I try to limit it with a search query string (the q parameter on the page), I get a 500 Internal Server error. This is before I even try the program that should do this.
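In code (rather than the Try It! page), the v2 client library's list call with a q filter looks roughly like this, using the service object from the sketch above; the query string itself is just an example:

    using System;
    using Google.Apis.Drive.v2;
    using Google.Apis.Drive.v2.Data;

    // List files, optionally filtered with a Drive search query (the q parameter).
    FilesResource.ListRequest listRequest = service.Files.List();
    listRequest.Q = "title contains 'report' and trashed = false";  // example query
    listRequest.MaxResults = 100;

    FileList fileList = listRequest.Execute();
    foreach (var f in fileList.Items)
    {
        Console.WriteLine("{0}  {1}", f.Id, f.Title);
    }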
Once I have a file ID (or I take several from running the above page without queries), the page developers.google.com/drive/v2/reference/files/get has a Try It! for retrieving a file. I always get a 401 error (invalid ID).
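For the partial (ranged) fetch that a seek-capable virtual disk would need, one approach, assuming the v2 metadata still exposes DownloadUrl, is to issue an authorized HTTP request with a Range header through the service's HttpClient; a rough sketch, with the file ID and byte range as placeholders:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    // Fetch an arbitrary byte range of a Drive file without downloading the whole thing.
    string fileId = "FILE_ID_FROM_LIST";      // placeholder - an ID returned by files.list
    long offset = 0, length = 65536;          // example: the first 64 KB

    var file = service.Files.Get(fileId).Execute();
    var rangeRequest = new HttpRequestMessage(HttpMethod.Get, file.DownloadUrl);
    rangeRequest.Headers.Range = new RangeHeaderValue(offset, offset + length - 1);

    // service.HttpClient already carries the OAuth credential from the sketch above.
    HttpResponseMessage response = service.HttpClient.SendAsync(rangeRequest).Result;
    byte[] chunk = response.Content.ReadAsByteArrayAsync().Result;
    Console.WriteLine("Got {0} bytes (status {1})", chunk.Length, response.StatusCode);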
I am very tempted to put this project aside for a few weeks and see what Google comes up with in new documentation. Of course that won't please my boss. Alternatively, is there anyone else trying to work with the Google Drive API in a similar manner that is willing to share/collaborate?

Related

How to configure BDE to connect to an existing Paradox db with .net application?

Hoping I can find someone who is familiar with this scenario. I've not touched Paradox in over 12 years, and most posts I find on the subject are 5+ years old.
I have created a C# application to read a Paradox .db file using OLEDB and the Borland Database Engine (BDE) to retrieve a transaction code. I have no control over the Paradox system; my only access to it is through a UNC path to the .db files. The original developers of the system are no longer in business, so there is no support, hence the reason I'm having to "hack" a solution to my customer's problem.
For development I copied the relevant .db files locally and had no problem accessing them. However when I try to access the live files, the error I get is that they are under the control of a different .LCK file. Deleting the .LCK file solves the access issue, but obviously that is not the solution.
The existing system has the "server", a Win7 machine, and 3 clients running what I assume is the Borland/Paradox application.
Is there a way to configure BDE (or some other workaround) to allow access to the live files using the existing .LCK files? I've tried with different values in the BDE "NET DIR" setting.
My only other option that I can see is to load the entire .db file into memory (I have code that does that without BDE) and then find my transaction value. That is not ideal, as it takes around 15 seconds, and since this is a Point of Sale application it needs to be much faster. SELECT statements via OLEDB work perfectly in this regard.
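For reference, a minimal sketch of the OLEDB approach that works on the local copies; the connection string is the usual Jet/Paradox one, and the UNC path, table and column names here are placeholders. Whether it honours the live .LCK files is exactly the open question:

    using System.Data.OleDb;

    // Read a transaction code from a Paradox table via OLEDB (Jet provider).
    string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                     @"Data Source=\\posserver\share\paradoxdata;" +   // folder containing the .db files (placeholder)
                     "Extended Properties=Paradox 5.x;";

    using (var conn = new OleDbConnection(connStr))
    using (var cmd = new OleDbCommand("SELECT TransCode FROM Transactions WHERE TransId = ?", conn))
    {
        cmd.Parameters.AddWithValue("@p1", 12345);   // example transaction id
        conn.Open();
        object result = cmd.ExecuteScalar();
    }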

Getting code from Windows Azure VM

OK, so my hard disk just crashed. Big deal. All my web dev code that was on it went along with it, and now I'm running ddrescue on Ubuntu trying to recover whatever data I can recover. The hard disk keeps disconnecting and sometimes it can quit responding for a long time so it's really a pain in the ass.
Anyway, back to the main topic--I have my web dev code which was packaged and uploaded to Azure; now what I'm wondering is whether it's possible to obtain all my .cs files from the VM. I noticed approot and siteroot folders, but all I saw were the views, the .asax file, and some other misc. stuff, nothing with the .cs extension.
Is there any way I can get a copy of the code I packaged? or (as a last resort) any way to get the .cspkg file and work from there?
The site you are seeing on the web role and inside the cspkg file is the output of the compile, so you can't get the original .cs files out of them. That said, you can use a tool like Reflector, JustDecompile, or a variety of other decompilers out there to reverse engineer your compiled bits into something that will be very close to the original C# code (note: I'm assuming this is your own code, or code that doesn't have a provision against reverse engineering). This at least will let you use the bits on the web role to get the majority of your code back, then review it to see how good a job it did.
Note, you can open the cspkg file. It's just a zip file: rename it with a .zip extension and open it up, but you won't find the .cs files in there. The only time you will find source in the package is if you have multiple websites within a single web role, because the default packager for Windows Azure doesn't compile the additional sites, it only packages up all the files in their root directory. Not at all helpful for actual deployments really, and it won't likely help you here.
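If it helps, a quick way to peek inside the package without even renaming it, assuming .NET 4.5's System.IO.Compression is available; the package path is a placeholder:

    using System;
    using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem

    // List everything inside the .cspkg (it is zip-formatted) to confirm what actually got packaged.
    using (ZipArchive package = ZipFile.OpenRead(@"C:\packages\MyWebRole.cspkg"))   // placeholder path
    {
        foreach (ZipArchiveEntry entry in package.Entries)
        {
            Console.WriteLine("{0}  ({1} bytes)", entry.FullName, entry.Length);
        }
    }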
You are likely well ahead of me on this, but I'd recommend using a personal source control system of some sort to avoid this issue in the future.

Standalone data container/database

I work in a call center and we need to use a lot of web based tools and work with a lot of information. The way we need to work is not efficient, so I made myself a couple of C# Windows applications to make my work a bit easier.
The problem is that those computers are locked down and secured at a very high level. Almost all websites are blocked, we can't use USB drives to get data onto the PC, and the only way to get data to my account at work is to mail it compressed in a 7z file. We can't install software, drivers, etc. Luckily I have write access to the program data folder to save some data, but the only way I can store data is to put it all in .txt files. I've tried a lot of standalone databases, but I'm also limited in space because we've only got 30 MB; a standalone version of XAMPP (or similar software) is almost 40 MB, so I can't use it.
Does anybody know a type of database to store my data in (mostly text and integers)? I'd prefer a single file which I can drop in the program data folder, and I'd prefer to get the data out the same way I would from a database, a DataSet or something similar.
You may want to look into Infobright Community Edition, which can give you incredible compression ratios, on average around 40:1. Infobright works just like MySQL and is very compact.
Disclaimer: the author is affiliated with Infobright.
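Since Infobright speaks the MySQL protocol, you should be able to query it with the standard MySQL Connector/Net; a minimal sketch, where the server, port, credentials, table and column names are all placeholders:

    using System;
    using MySql.Data.MySqlClient;   // MySQL Connector/Net

    // Infobright is queried exactly like a MySQL server.
    string connStr = "Server=localhost;Port=5029;Database=worklog;Uid=root;Pwd=;";   // port/credentials are placeholders
    using (var conn = new MySqlConnection(connStr))
    using (var cmd = new MySqlCommand("SELECT id, note FROM calls WHERE agent = @agent", conn))
    {
        cmd.Parameters.AddWithValue("@agent", "me");
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
        }
    }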

Sync services like Dropbox, theory behind file indexing?

I have realised that by using the Amazon S3 service directly, I can save myself a lot of money. Instead of buying a client like GoodSync or Jungle Disk I thought it would be interesting to create my own Windows syncing application, which would sync my files to S3.
I have discovered that I can use FileSystemWatcher to monitor for changes to files and directories, but I am looking for the theory behind how other services like Dropbox index their files. Things like comparing the file size of a file with the size recorded in an index somewhere on the client PC, then using this information to determine whether to sync or not.
I am using C# and references to different libraries or code samples I could use would be helpful, but I am mainly looking for the best way to index files and for someone to point me in the right direction.
Thanks
I've gone down this path myself. In fact, now that Mozy has dropped their unlimited plan and Carbonite chooses NOT to back up certain files (like 3GP files and *.dat files) unless you routinely go in and manually add them, I am very disgruntled with online backups.
But your question was about syncing. Dropbox does it best, but it's expensive, and I'm not sure S3 would be any cheaper.
Anyway, you will have a lot of hurdles. In my experiences, the problems I ran into are:
1) Propagating deletes
2) FileSystemWatcher simply missing events such as rapidly adding files to a folder then deleting them
3) etc..
Now some ideas on how I would tackle this again:
1) Keep a small SQLite db of file names/paths locally
2) Copy files to a tmp directory before sending to S3.
3) On file changes/updates/deletions/etc., store that meta information in SQLite
Anyway, just some ideas; a rough sketch of the indexing part follows.
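A rough sketch of the indexing idea, assuming size and last-write-time checks with a content hash as the tie-breaker. Here the index is just a dictionary; you could persist it in the SQLite db mentioned above, and the type and method names are only illustrative:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;

    class FileRecord
    {
        public long Size;
        public DateTime LastWriteUtc;
        public string Sha256;
    }

    static class SyncIndex
    {
        // Decide whether a file needs to be (re)uploaded by comparing it to the stored index.
        public static bool NeedsSync(string path, IDictionary<string, FileRecord> index)
        {
            var info = new FileInfo(path);
            FileRecord known;
            if (!index.TryGetValue(path, out known))
                return true;                                      // never seen before

            // Cheap checks first: size and timestamp.
            if (info.Length != known.Size || info.LastWriteTimeUtc != known.LastWriteUtc)
                return true;

            // Optional paranoia: hash the content to catch same-size, same-time edits.
            return Hash(path) != known.Sha256;
        }

        public static string Hash(string path)
        {
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(path))
                return BitConverter.ToString(sha.ComputeHash(stream));
        }
    }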

Searching directories for tons of files?

I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out.
What's the best way to programmatically list, and iterate through, this many files?
Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked?
I've worked on a SAN system in the past with telephony audio recordings which had issues with the number of files in a single folder - that system became unusable somewhere near 5,000 files (on Windows 2000 Advanced Server with an application in C#.Net 1.1). The only sensible solution we came up with was to change the folder structure so that there were a more reasonable number of files per folder. Interestingly, Explorer would also time out!
The convention we came up with was a structure that broke things up by year, month and day - but that will depend upon your system and whether you can control the directory structure...
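If you can control the layout, the date-based convention is trivial to generate; the root path and file name below are placeholders:

    using System;
    using System.IO;

    // Build a year/month/day folder path so no single directory grows too large.
    DateTime stamp = DateTime.UtcNow;                        // or the tile's capture date
    string tilePath = Path.Combine(@"\\tileserver\tiles",    // placeholder root
                                   stamp.ToString("yyyy"),
                                   stamp.ToString("MM"),
                                   stamp.ToString("dd"),
                                   "tile_001.png");          // placeholder file name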
Definitely split them up. That said, stay as far away from the Indexing Service as you can.
None. .NET relies on underlying Windows API calls that really, really hate that amount of files themselves.
As Ronnie says: split them up.
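If you can move to .NET 4 or later, Directory.EnumerateFiles streams results lazily instead of building the whole array up front the way Directory.GetFiles does, which helps with directories this size; a sketch with a placeholder path and pattern:

    using System;
    using System.IO;

    // Stream file names lazily instead of materialising 30,000+ entries at once.
    foreach (string path in Directory.EnumerateFiles(@"\\tileserver\tiles", "*.png", SearchOption.AllDirectories))
    {
        Console.WriteLine(path);   // or hand each tile to the map layer as it arrives
    }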
You could use DOS?
DIR /s/b > Files.txt
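If you go the DOS route from managed code, you can shell out and read the listing back as it is produced; roughly, with a placeholder path:

    using System.Diagnostics;

    // Run DIR /s /b and capture the listing instead of calling Directory.GetFiles.
    var psi = new ProcessStartInfo("cmd.exe", @"/c dir /s /b ""\\tileserver\tiles""")
    {
        RedirectStandardOutput = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using (Process proc = Process.Start(psi))
    {
        string line;
        while ((line = proc.StandardOutput.ReadLine()) != null)
        {
            // each line is one full file path
        }
        proc.WaitForExit();
    }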
You could also look at either indexing the files yourself, or getting a third-party app like Google Desktop or Copernic to do it and then interfacing with their index. I know Copernic has an API that you can use to search for any file in their index, and it also supports mapping network drives.
