I have a client application that currently accesses static HTML files from a server via HTTPS.
Since these files never change, I would like my application to access them from the local file system. However, I do not want the user to be able to modify the HTML, so I would like to somehow embed the files in my assembly, so nobody can tinker with them. Is this possible?
Assuming the local file system records a "date last modified" for each file, just store the files in the file system. Also calculate a hash of each file's contents and keep a private (hidden) record of what the hashes should be. Then when your app fetches a file, have it also check the last-modified date and recalculate the content hash. If either the last-modified date or the hash differs from what you know it should be, have your app fetch a fresh copy via HTTPS and use that instead.
(If a user [hacker?] is really determined to thwart your scheme and has physical access to the computer, they'll find a way no matter what you do. If physical access is possible, you can't be perfect [although you can prevent "most" modifications by making it so time-consuming the user decides it's not worth it].)
Related
A database file and an application that reads the db. The application has a registration component added. If users don't want to register, they can simply download the open-source application, copy the database to the new folder, run a batch file, and the database opens in the application, completely bypassing the registration and whatever extra features were added.
I want to keep the database file in-house, even if it means adding the db file to the resources of the main application. The file does require data to be written to it.
I've gone as far as converting the batch file to an exe and loading the database file that way, or even renaming the database file to something obscure like abc.exe (even though it's a db file, it can be renamed to anything).
The database file is renamed to an exe file for the time being. I would prefer to either have it encrypted somehow or placed into the resources of my main application and accessed that way; I am just trying to limit the ways the software can be pirated.
Encryption:
You can encrypt SQLite databases using extensions such as the SQLite Encryption Extension. The usefulness of such encryption depends on what you are trying to do: if your application can read the keys to decrypt the database, so can a hacker who can run your application. You can use the Windows Data Protection API to manage the keys so that if someone copied the database from one Windows machine to another, the database would be unreadable; but again, if the hacker can access the source machine, they can obtain the keys just like your application does (it would, however, stop a naive user from just copying the files over).
Putting it in your "main application resources": If you mean embedding the database within the EXE, you are out of luck if you have a requirement to write the data. Generally speaking, an EXE cannot modify itself (depending on OS/version/user permissions/absence of anti-malware agents, etc., you might theoretically accomplish a self-modifying EXE; but if you want your app to work most of the time in the wild, this strategy won't succeed). Even if you did succeed in building an EXE that read itself, loaded the embedded blob as a database, modified that database in memory, then rewrote the entire EXE with the database exported as a new blob (of a different size than the original, wreaking havoc on the assembly), it wouldn't help: the attacker can do what your app does and access the data. Do yourself a favor and follow the operating system's guidelines for writing user data. For Windows, this generally means reading and writing files in your Local App Data folder.
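For illustration, here is a sketch (in Python rather than C#) of resolving a per-user writable data directory by platform convention; the environment variables and fallbacks are assumptions about typical setups, not a guaranteed API:

```python
import os
import sys

def user_data_dir(app_name):
    """Per-user writable data directory, following common platform
    conventions: %LOCALAPPDATA% on Windows, ~/Library/Application
    Support on macOS, XDG_DATA_HOME (or ~/.local/share) elsewhere."""
    if sys.platform == "win32":
        base = os.environ.get("LOCALAPPDATA", os.path.expanduser("~"))
    elif sys.platform == "darwin":
        base = os.path.expanduser("~/Library/Application Support")
    else:
        base = os.environ.get("XDG_DATA_HOME",
                              os.path.expanduser("~/.local/share"))
    return os.path.join(base, app_name)
```

In .NET the equivalent lookup is `Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData)`.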
Renaming a SQLite database to have an EXE extension: What are you trying to accomplish? Obscurity? Renaming it to EXE might fool some users (certainly not the motivated user described above), but it also might fool the anti-malware/anti-virus software running on your legitimate users' systems into thinking your application is writing malformed executables (which would look suspicious) and shut your application down, or at least prevent it from working correctly. That will cost you users, or create a mountain of support work for you. And what does it gain? It stops a "dumb" user from trying to open it in a SQLite query tool?
All that said, if you want to limit your users' ability to read the data stored on their own storage devices, you really can't stop a determined user; you can only stop the less savvy ones. The majority of users cannot run a reflector on a C# assembly and figure out what it is doing, but many can. If you want to stop the less savvy users, encrypting the data will stop most of them, and of the approaches you've discussed it is the least likely to prevent your application from working "in the wild".
I'm writing a short C# program that will iterate over each user on a given Windows system, check their local app data folder for a specific directory, and operate on it if it exists.
I'm looking for a good way to resolve the absolute file path to each users' local app data folder. I can use Environment.SpecialFolder.LocalApplicationData but that will only work for the user that is currently running the program.
I know I can cobble together a few different utilities and make a couple of assumptions about where a user's local data is usually stored to make it work 99% of the time (determine a list of all users, then go through each one and find/guess where their app data is, etc.), but I was hoping there was a more elegant solution?
I am working on a small application that lets me modify files and version each file before each change. What I would like the app to do is uniquely mark each file so that whenever the same file is opened up, the history for that particular file can be pulled back up. I am not using any of the big version control tools for this. How do I do this programmatically, please?
Simple solution: use a version control system which already exists (e.g. Git). But if you really want to do this yourself, then try this.
Each time you create a new version, copy the previous version of the file into a separate hidden directory, and keep a config file in that directory which holds the checksum of that file. The checksum will "more than likely" be unique, since it's a hash of the file's contents (each time the file changes, the checksum will be different; you need to calculate the checksum yourself).
When you open a file, just check whether that config file is present in the directory and compare its checksum with the checksum of what's already open. If they are the same, then you are looking at the same file. That's how it works.
You could also use checksums to optimise storage. If a user goes into a file, changes things, changes them back to the way they were, and saves, the checksum should come out the same (unless you include modified date and time, etc.).
Each folder should have a name which follows a pattern (filename.vn.n, e.g. someTextFile.txt.v1.0); then you will be able to figure out what each directory in the history corresponds to.
Another approach would be to simply copy the file and append some tag onto the end of its name (a checksum, maybe? a version number?) so you wouldn't need extra folders.
Yet another approach would be to name each stored file after its recorded checksum, keep the history of versions (along with the corresponding checksums) in a separate config file, and refer to that when you want to figure out what the file you want to access is called. Each version is then referred to by its own checksum (like in Git).
To sum up: each file version is stored somewhere, you can validate whether two versions are the same (so you can optimise by not storing multiple copies with no changes in them and wasting space), and you can dynamically determine where each version is and get access to it.
Hope it gives you a bit more understanding of how to get started.
I am using asp.net mvc and have a section where a user can upload images. I am wondering where I should store them.
I was following this tutorial, and he seems to store them in app_data. However, I read someone else say that folder should only hold your database.
So I'm not sure what the advantages are of using app_data. I am on shared hosting, so I don't know if that makes a difference.
Edit
I am planning to store the path to the images in the database. I will then use them in an image tag and render them to the user when they come to my site. I have a file uploader that will only accept images (checked on both the client and the server).
The tutorial is a simple example - and if you read the comments, the original code just saved to an uploads directory, no app_data in sight.
It was changed to app_data because that's a special folder - one that will not allow execution of code.
And you have understood correctly - app_data is really there for holding file-based databases. That's the meaning of the folder. As such, saving images into it doesn't feel like the right thing to do.
If you are certain only images will get uploaded (and you control that), I would suggest an /uploads directory - a reserved location for images that also will not allow code execution (something that you should be able to control via IIS).
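Whichever directory you choose, the server-side half of the "only images" check can be as cheap as inspecting the upload's leading bytes (magic numbers) instead of trusting the extension or the client. This Python sketch covers only a few common formats and is no substitute for fully decoding the image with a library:

```python
def looks_like_image(data):
    """Identify a handful of image formats by their magic numbers.
    Returns the format name, or None if none match."""
    signatures = {
        b"\xff\xd8\xff": "jpeg",
        b"\x89PNG\r\n\x1a\n": "png",
        b"GIF87a": "gif",
        b"GIF89a": "gif",
    }
    for magic, fmt in signatures.items():
        if data.startswith(magic):
            return fmt
    return None
```

A file that fails this check should be rejected before it ever reaches the uploads directory.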
I would say that depends on what you will do with those images later. If you use them in an img tag, you could save them somewhere in the Content/ folder structure.
If you do not need them reachable from the outside, or need to stream them changed back, you might store them out of the web root if the hoster allows for that.
I wouldn't store them in app_data, as I - personally - think it's more a convention to store a database there. Most developers not familiar with the project wouldn't look there for images.
But: you could store the binaries in the db itself (even though that is probably not the best thing to do), so a reference in the db pointing to a file in the same directory makes sense again.
It's more an opinion thing than a technical question though, I think.
I prefer to store them in the database. When storing images on the file system I've found it can be a bit harder to manage them. With a database you can quickly rename files, delete them, copy them, etc. You can do the same when they're on the file system, but it takes some scripting knowledge.
Plus I prefer not to manage paths and file locations, which is another vote for the database. Those path values always make their way into the web.config and it can become more difficult to deploy and manage.
I'm writing a program that deals with the logs generated by the client's server. How can I detect where the user is storing them? It feels invasive to search all files, but what if they're being stored outside of the web root? Is this acceptable, and what if I make the user click "detect" first? Regardless, what if the logs have been renamed or reformatted? Is it possible to read the server's own settings from my external program? I want this to work on Linux and Windows servers. I need the W3C Extended log format with several fields enabled that are not on by default. I also don't want it to return null if logging is enabled but no log has been created yet. I don't want to force the user (assumed non-technical) to play with settings.
Any ideas?
Hardcode where you expect them to be in the common case, and if they're not there, ask the user about it. Doing more "magic" than that seems like a recipe for over-complexity and mistakes.
If the user is specifying the location of the log file, then either you should have the user locate the file(s) themselves or keep track of these locations somewhere else when they are saved. You don't need to be doing a full (or large partial) drive search.