I am quite new to game development with Unity and was wondering how Unity's "export game" function works. I have not yet used it, but I've read that it will generate an .exe file from your complete game, and that it will also create a "data" folder or something like that.
My question is: what exactly is stored in this "data" folder? And how can I write logic to save my own files (e.g. files containing save states, settings, configurations, etc.) inside this directory, whether shipped with the complete game or created in the local game directory after the user saves his game for the first time? Can I, for example, save those files to a relative path (e.g. ./MyGame/data/savegames)?
And which types of files can I create? Text? Binary? Or could I even use a relational database (a small one like HSQLDB)?
And how are things like models, sounds, animations and other assets treated? Are they all packaged within the .exe file that is my complete game, or does the shipped game come with separate folders for them?
Thank you!
The data folder (named the same as the .exe file, but ending in _Data instead of .exe; it can be safely renamed to just Data) contains all of the DLLs that actually run the game (even a blank Unity project will have them, since the Unity engine itself compiles to several DLLs), as well as any Resources you might have (tip: stop using Resources and use Asset Bundles instead).
Omitting this folder would be very bad indeed!
As for reading/writing other data from the hard disk (which is not possible on all platforms; looking at you, web deployment), I would recommend using your own folder, e.g. RuntimeData, which could contain external audio, image, or video files as well as mutable data such as save games or screenshots: pretty much anything you'd be OK with your users modifying without seriously breaking stuff (modding is "in" these days).
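A minimal sketch of that idea in plain C#. The RuntimeData folder name and file names are just illustrative; in Unity specifically, Application.persistentDataPath is the usual cross-platform location for mutable data, since the folder next to the executable may not be writable.

```csharp
using System;
using System.IO;

public class SaveSystem
{
    // Illustrative: a RuntimeData folder next to the executable.
    // On locked-down installs (e.g. under Program Files) prefer a per-user
    // location such as Environment.SpecialFolder.ApplicationData instead.
    public static string RuntimeDataDir =>
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "RuntimeData");

    public static void SaveText(string fileName, string contents)
    {
        Directory.CreateDirectory(RuntimeDataDir); // no-op if it already exists
        File.WriteAllText(Path.Combine(RuntimeDataDir, fileName), contents);
    }

    public static string LoadText(string fileName) =>
        File.ReadAllText(Path.Combine(RuntimeDataDir, fileName));
}
```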
As for the types of file: well, that's up to you, really! Creating text files (of any extension: xml, html, dat, qqq...) is very easy. Images tend to be handled through a third-party script (do you really want to write your own JPG converter? Same goes for video). You can also create binary files following a format of your own choosing. The only difficulty is writing the serializer and deserializer for the data, which scales in difficulty as the complexity of the data scales.
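For the custom binary route, here is a minimal serializer/deserializer sketch using BinaryWriter/BinaryReader. The SaveGame class and its fields are invented for illustration:

```csharp
using System.IO;

// Hypothetical save-game record; the fields are purely illustrative.
public class SaveGame
{
    public int Level;
    public float Health;
    public string PlayerName;

    public void Serialize(Stream output)
    {
        using (var writer = new BinaryWriter(output))
        {
            writer.Write(Level);
            writer.Write(Health);
            writer.Write(PlayerName); // BinaryWriter length-prefixes strings
        }
    }

    public static SaveGame Deserialize(Stream input)
    {
        using (var reader = new BinaryReader(input))
        {
            return new SaveGame
            {
                Level = reader.ReadInt32(),     // fields must be read back
                Health = reader.ReadSingle(),   // in the exact order they
                PlayerName = reader.ReadString() // were written
            };
        }
    }
}
```

As the answer notes, the maintenance cost grows with the data: every new field means touching both methods, which is why many projects version their save format with a header number.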
You have full file system access* so you can realistically read or write anywhere. This is C# we're talking about. But with great power comes great responsibility.
*Note: Mobile devices heavily frown on that sort of thing and will deny access to folders outside the one explicitly given to that application.
I have several security cameras that upload pictures to my ftp server. Some of these cams conveniently create subfolders at the start of a new day in the format "yyyymmdd". This is great and makes it easy to maintain/delete older pictures by a particular day. Other cams aren't so nice and just dump pictures in a giant folder, making deletion difficult.
So I am writing a C# Windows Forms program that selects a source folder using FolderBrowserDialog, and a target folder the same way. I was using the standard process of iterating through a file list via a string array filled by the Directory.GetFiles() method. I use each file's creation date to create a subfolder if it doesn't exist, and in either case I move the file to that date-based subfolder. It works great while testing with small numbers of files.
Now I'm ready to test against real data, and I'm concerned that with some folders having thousands of files, I'm going to have memory and other problems. How well can a string array handle such huge volumes of data? Note that one folder has over 28,000 pictures. Can a string array hold that many file names?
My question, then, is: how can I iterate through the files in a given folder without having to use a string array and the Directory.GetFiles() method? I'm open to any thoughts, though I do want to use C# in a Windows Forms environment. I also have an added feature that lets me delete pictures older than a given date instead of moving them.
Many thanks!
You'll be just fine with thousands of file names. You might have a problem with millions, but thousands isn't a big deal for C#. You may have a performance issue just because of how NTFS works, but if so there's nothing you can do about that in C#; it's a problem inherent in the file system.
However, if you really want to pick at this, you can do a little better by using DirectoryInfo.EnumerateFileSystemInfos(). This method has two benefits over GetFiles():
It loads the file name and creation date in one disk access, instead of two.
It allows you to work with an IEnumerable instead of an array, such that you only need memory for one file record at a time.
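A sketch of the move loop built on that idea, using DirectoryInfo.EnumerateFiles() (which streams FileInfo objects one at a time, so memory stays flat regardless of folder size). The yyyyMMdd folder scheme is taken from the question; the class name is made up:

```csharp
using System;
using System.IO;

public class PhotoSorter
{
    // Moves each file in sourceDir into a yyyyMMdd subfolder of targetDir,
    // named after the file's creation date. Files are enumerated lazily,
    // so only one FileInfo is held in memory at a time.
    public static void SortByCreationDate(string sourceDir, string targetDir)
    {
        var source = new DirectoryInfo(sourceDir);
        foreach (FileInfo file in source.EnumerateFiles())
        {
            string subfolder = Path.Combine(
                targetDir, file.CreationTime.ToString("yyyyMMdd"));
            Directory.CreateDirectory(subfolder); // no-op if it already exists
            file.MoveTo(Path.Combine(subfolder, file.Name));
        }
    }
}
```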
I'm making an XNA game, which uses a lot (currently ~2800) of small resource files. It has become a problem to move them around from place to place unarchived, so I thought maybe I could just zip them and make the game unzip them automatically, into memory, preferably. I don't need the writing capability yet, right now only reading.
Is there an easy way to unzip a folder into memory and access those files just like, or as simple as the regular files on disk?
I've been reading some similar questions, and I see many people say that the OS (Windows in my case) can handle file caching better than a RAM drive. I'm just going for unzipping and reading files for now, but in the future I might need to modify or create new files, and I'd like it to be quick and seamless for the user. Maybe I should take a different approach to my current problem, taking my future goal into account?
I haven't personally tried this, but if you want to be able to zip/unzip in memory, you can use a MemoryStream and pass it into a library (e.g. https://github.com/icsharpcode/SharpZipLib). One thing to keep in mind: are you just moving one bottleneck to another?
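A sketch of loading every archive entry into memory up front. SharpZipLib works the same way with streams; to keep this self-contained it uses the built-in System.IO.Compression instead (available since .NET 4.5; on .NET Framework you need references to System.IO.Compression and System.IO.Compression.FileSystem):

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;

public class ZipCache
{
    // Reads all entries of a zip archive into memory, keyed by entry name.
    // Afterwards, resources are served from the dictionary with no disk I/O.
    public static Dictionary<string, byte[]> LoadAll(string zipPath)
    {
        var files = new Dictionary<string, byte[]>();
        using (ZipArchive archive = ZipFile.OpenRead(zipPath))
        {
            foreach (ZipArchiveEntry entry in archive.Entries)
            {
                using (Stream entryStream = entry.Open())
                using (var buffer = new MemoryStream())
                {
                    entryStream.CopyTo(buffer); // decompress into memory
                    files[entry.FullName] = buffer.ToArray();
                }
            }
        }
        return files;
    }
}
```

With ~2800 small files this trades startup time and RAM for fast, seamless reads later, which matches the stated goal.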
You could also try an approach like sprites in HTML/CSS: combine all your files into one, with an index recording where each one sits inside the file. Then you move your FileStream.Position to the offset of the resource you want, read the amount you need, and do what you need with it. You'd need to make sure that if you rebuild any file, something rebuilds all your indexes as well. Then you're only copying one file around; it just happens to contain ~2800 smaller segments of interest.
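A sketch of the read side of that pack-file idea, assuming you already have an index mapping each resource name to an (offset, length) pair; how that index is built and stored is up to you:

```csharp
using System.IO;

public class PackReader
{
    // Reads one segment out of a combined pack file, given its offset and
    // length from a (hypothetical) index built when the pack was created.
    public static byte[] ReadSegment(string packPath, long offset, int length)
    {
        var data = new byte[length];
        using (FileStream fs = File.OpenRead(packPath))
        {
            fs.Position = offset; // seek straight to the resource
            int read = 0;
            while (read < length) // Read() may return fewer bytes than asked
            {
                int n = fs.Read(data, read, length - read);
                if (n == 0) throw new EndOfStreamException("pack file truncated");
                read += n;
            }
        }
        return data;
    }
}
```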
I am coding a rhythm game and one of the things I really want to keep in mind is the integrity of an artist's license and copyright. I appreciate artists giving me copyright license to use in my game but I don't want people to use my game as a way of getting their music for free.
Part of the idea is to distribute the game with 1-2 songs and let players download as many more as they want from my website (to keep the installer small, so people with bandwidth limits can download the game without worrying about size).
What I would like to do is have a file, for instance a .dbf, which when double-clicked will move itself to my game directory (e.g. C:/Program Files/Dashie's Sky Games/Rhythms/). Each .dbf will essentially contain two files: the .mp3 and the .drf. The .drf will contain things such as where the notes are, the difficulty level, where the UberDash is, and so on. It would be unencrypted, but only editable via the in-game editor (which doesn't exist yet).
I don't want people to be able to just rename the .dbf to .zip and access the .mp3.
What I'd like is for the game to open the .dbf, decrypt it or whatever, and keep the .mp3 and .drf in memory (or in some very obscure temporary directory). I am using bass.dll for the music library. Any ideas at all?
Very much appreciated.
At the end of the day, if the song is stored as an MP3 or another popular format, then no matter how you package it, someone will in theory be able to get at it.
In my opinion, your best option is to zip the songs and change the extension (maybe not plain zip; 7zip, rar, or some other not-so-default compression mechanism) and then let your program unpackage them. I think you have to accept that if your program can unpackage the file, a human being will be able to as well; but assume that if someone wants a song, trying to decrypt your storage mechanism won't be their preferred way of getting it (BitTorrent will be, let's be honest). You could also include a "Get this song!" button in your application which takes the user to iTunes or what have you, so they can get the song legally. If Warner, EMI and the RIAA can't stop people from pirating music, you most certainly won't be able to. Just try to make it easier for the user to get the song legally.
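If you still want a speed bump on top of the renamed archive, a trivial XOR pass over the bytes will defeat casual rename-to-.zip snooping. To be clear, this is obfuscation, not encryption, which is exactly the trade-off described above; the class name and key are made up for illustration:

```csharp
public class SongPacker
{
    // XOR obfuscation: NOT security, just a deterrent against casual
    // extraction. Applying the same pass twice restores the original bytes,
    // so the one method serves for both pack and unpack.
    public static byte[] Obfuscate(byte[] data, byte key)
    {
        var result = new byte[data.Length];
        for (int i = 0; i < data.Length; i++)
            result[i] = (byte)(data[i] ^ key);
        return result;
    }
}
```

Run the song bytes through this before writing them into the .dbf, and again after reading them back into a MemoryStream for playback.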
There is a virus that my brother got on his computer, and what that virus did was rename almost all the files on his computer, changing the file extensions as well. So a file that might have been named picture.jpg was renamed to kjfks.doc, for example.
So what I have done in order to solve this problem is:
remove all file extensions from the files (I use a recursive method to search for all files in a directory, and as I go through the files I remove the extension).
Now the files do not have an extension.
I think these file names are stored in a local database created by the virus, and that if I purchase the "antivirus" they will be renamed back to their original names.
Since my brother created a backup, I selected the files with a creation date later than when he performed the backup, and I have placed those files in a directory.
I am not interested in recovering the exact extension, as long as I can see the content of the file. For example, if I scan a file and it contains text, I can give it a .txt extension; maybe it was really .html or .css, but I won't be able to know that, and that's fine.
I believe that all PDF files should have something in common, and DOC files should also have something in common. How can I figure out what the most common file types (pdf, doc, docx, png, jpg, etc.) have in common?
Edit:
I know it would probably take less time to go through all these 200 files and test each one instead of writing this program; I am just curious to see whether it is possible to recover the file extension.
In Unix, you can use the file utility to determine the type of a file. There is also a Windows port, and you can obviously write a script (batch, PowerShell, etc.) or a C# program to automate this.
First, congratulate your brother on doing a backup. Many people don't, and are absolutely wiped out by these problems.
You're going to have to do a lot of research, I'm afraid, but you're on the right track.
Open each file with a TextReader or a BinaryReader and examine the headers. Most of them are detectable.
For instance: every PDF starts with "%PDF-" followed by its version number. Just look at those first 5 characters; if they're "%PDF-", then put a .pdf extension on the filename and move on.
Similarly: "ÿØÿà..JFIF" for JPEG's, "[InternetShortcut]" for URL shortcuts, "L...........À......Fƒ" for regular shortcuts (the "." is a zero/null, BTW)
ZIPs / compressed directories start with {0x50}{0x4B}{0x03}{0x04} (the "PK" signature), and you should be aware that Office 2007/2010 documents are really ZIPs with XML files inside them.
You'll have to do some digging as you find each type, but you should be able to write something to establish most of the file types.
You'll have to write some recursion to work through the directories, but you can restrict the search to files with no extension.
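The header checks described above can be sketched like this. The signature table lists a few well-known magic-byte sequences and is deliberately not exhaustive; extend it as you identify more formats:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ExtensionGuesser
{
    // Known signatures (magic bytes) at offset 0. Illustrative, not
    // exhaustive; note docx/xlsx/jar also match the zip signature.
    static readonly Dictionary<string, byte[]> Signatures =
        new Dictionary<string, byte[]>
    {
        ["pdf"] = new byte[] { 0x25, 0x50, 0x44, 0x46, 0x2D }, // "%PDF-"
        ["jpg"] = new byte[] { 0xFF, 0xD8, 0xFF },
        ["png"] = new byte[] { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A },
        ["zip"] = new byte[] { 0x50, 0x4B, 0x03, 0x04 },       // "PK.."
        ["gif"] = new byte[] { 0x47, 0x49, 0x46, 0x38 }        // "GIF8"
    };

    // Returns a guessed extension for the file's first bytes, or null.
    public static string Guess(byte[] header)
    {
        foreach (var sig in Signatures)
            if (header.Length >= sig.Value.Length &&
                sig.Value.SequenceEqual(header.Take(sig.Value.Length)))
                return sig.Key;
        return null;
    }
}
```

Feed it the first few bytes of each extensionless file (e.g. read 16 bytes with a FileStream) and rename accordingly.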
BTW, a great tool to help with this is HxD (http://www.mh-nexus.de/). It's what I used to pull this answer together!
Good luck!
The "most common types" each have their own format, and most of them have magic bytes at a fixed position near the beginning of the file. You can detect most formats quite easily; even HTML, XML, .css and similar text files can be detected by analyzing their beginning. It will take some time to write an application that guesses the format, though. For some types (such as the ODF and JAR formats, which are built on top of regular ZIPs) you will also be able to detect the container format.
But... could it be that such an application already exists on the market? I'd guess you can find something if you search, because the task is not as tricky as it initially seems.
I am writing a client Windows app which will allow files and their respective metadata to be uploaded to a server; for example, gear.stl (the original file) and gear.stl.xml (its metadata). I am trying to figure out the correct protocol to use to transfer the files.
I was thinking about using FTP, since it is widely used and a proven way to transfer files, except that I would have to transfer two files for every actual file (.stl and .stl.xml). However, another thought crossed my mind: what if I create an object that wraps the file, the metadata, and the directory I need to transfer it to, serialize the object, and then submit a request to a web service to transfer it?
Original file sizes would range from 100 KB to 10 MB; the metadata would probably be less than 200 KB.
The web service call seems like the easier process to me: deserialize the object and distribute the file and its metadata accordingly. However, I'm not sure whether this is a sound idea or if there is a better way to transfer this data than the two methods I have mentioned.
If someone can point me in the right direction it would be much appreciated.
You could wrap it all in a zip file, like the "new" Office document format does. You might even be able to use its classes to package everything up.
Edit:
Take a look at the System.IO.Packaging.Package class. It seems to be what you need. This class resides in the WindowsBase.dll assembly and became available in .NET 3.0.
PS: Remember that even though it is a zip file, it doesn't need to be compressed. If you have very large files, it may be better to keep them uncompressed. It all depends on how they will be used and whether transport size is an issue.
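A sketch of packing a file plus its metadata with System.IO.Packaging, so a single upload carries both. The file names and content types are illustrative; on modern .NET the same class ships in the System.IO.Packaging NuGet package rather than WindowsBase.dll:

```csharp
using System;
using System.IO;
using System.IO.Packaging; // WindowsBase.dll on .NET Framework 3.0+

public class Uploader
{
    // Bundles a data file and its metadata file into one package,
    // which can then be sent to the server as a single payload.
    public static void CreatePackage(string packagePath, string filePath,
                                     string metadataPath)
    {
        using (Package package = Package.Open(packagePath, FileMode.Create))
        {
            // Large binaries: skip compression, per the PS above.
            AddPart(package, filePath, "application/octet-stream",
                    CompressionOption.NotCompressed);
            // Small XML metadata: compress normally.
            AddPart(package, metadataPath, "text/xml",
                    CompressionOption.Normal);
        }
    }

    static void AddPart(Package package, string path, string contentType,
                        CompressionOption compression)
    {
        Uri partUri = PackUriHelper.CreatePartUri(
            new Uri("/" + Path.GetFileName(path), UriKind.Relative));
        PackagePart part = package.CreatePart(partUri, contentType, compression);
        using (Stream source = File.OpenRead(path))
        using (Stream target = part.GetStream())
        {
            source.CopyTo(target);
        }
    }
}
```

On the server, Package.Open in read mode gives you the parts back by URI, so the file and metadata stay paired through the whole transfer.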