Game client updater - C#

I'm an MMORPG private server dev and I want to create an all-in-one updater for users' game clients, because it is very annoying and clumsy to use patches that must be downloaded manually.
I'm new to C#, but I have already succeeded at making my own launcher with my own interface, basic game start/options buttons, and a notice that is read from my web server.
Now I want to add an integrated update function to it, and I'm pretty lost; I have no idea where to start. This is what it would look like (it's just a concept):
It will have one main button which is used both to start the game AND to update it. Basically, when you open the program the button reads "UPDATE" and is disabled (while it searches for new updates); if any are found it becomes clickable, and after the updates are downloaded it changes itself into "START GAME".
There will be a progress bar for the overall update and another one showing progress for just the file currently downloading, along with basic info like the percentage and how many files still need to be downloaded.
I need a way for the launcher to check the files on the web server over HTTP and compare them against the client's, so it doesn't redownload files that are already the same version, and also a method for the updater to download the update as a compressed archive and automatically extract it, overwriting existing files, once the download is done.
NOTE: The files being updated are not .exe files; they are mostly textures, config files, maps, images, etc.

I'll sketch a possible architecture for this system. It's incomplete; consider it a form of detailed pseudo-C# code for the first half and a set of hints and suggestions for the second.
I believe you may need two applications for this:
A C# WinForms client.
A C# server-side application, maybe a web service.
I'll not focus on security issues in this answer, but they are obviously very important. I expect that security can be implemented at a higher level, perhaps using SSL. The web service would run within IIS, and implementing some form of security should mainly be a matter of configuration.
The server-side part is not strictly required, especially if you don't want compression; there is probably a way to configure your server so that it returns an easily parsable list of files when an HTTP request is made to website.com/updater. However, a web service is more flexible, and probably even easier to implement. You can start by looking at this MSDN article. If you do want compression, you can probably configure the server to transparently compress individual files. I'll try to sketch all the possible variants.
In the case of a single update ZIP file, the updater web service should be able to answer two different requests. First, it can return a list of all game files under the server directory website.com/updater, together with their last write timestamps (the GetUpdateInfo method in the web service below). The client compares this list with its local files: some files may no longer exist on the server (so the client should probably delete its local copy), some may not exist on the client (they are entirely new content), and some files exist on both, in which case the client checks the last write time to determine whether it needs the updated version. The client builds a list of the paths of the files it needs, relative to the game content directory. The game content directory should mirror the server's website.com/updater directory.
Second, the client sends this list to the server (the GetUpdateURL method in the web service). The server creates a ZIP containing the update and replies with its URL.
// Requires references to System.ServiceModel, System.Runtime.Serialization
// and System.Web; usings: System, System.Collections.Generic, System.IO,
// System.Runtime.Serialization, System.ServiceModel, System.Web.Hosting.
[ServiceContract]
public interface IUpdater
{
    [OperationContract]
    FileModified[] GetUpdateInfo();

    [OperationContract]
    string GetUpdateURL(string[] files);
}

[DataContract]
public class FileModified
{
    [DataMember]
    public string Path;

    [DataMember]
    public DateTime Modified;
}

public class Updater : IUpdater
{
    public FileModified[] GetUpdateInfo()
    {
        // Get the physical directory behind the virtual path that the
        // clients see as website.com/updater.
        string updateDir = HostingEnvironment.MapPath("~/updater");
        List<FileModified> updateInfo = new List<FileModified>();
        foreach (string path in Directory.GetFiles(updateDir, "*", SearchOption.AllDirectories))
        {
            FileModified fm = new FileModified();
            // Make the path relative to updateDir, so the client can map
            // it onto its own content directory.
            fm.Path = path.Substring(updateDir.Length).TrimStart(Path.DirectorySeparatorChar);
            fm.Modified = new FileInfo(path).LastWriteTime;
            updateInfo.Add(fm);
        }
        return updateInfo.ToArray();
    }

    public string GetUpdateURL(string[] files)
    {
        // You could use System.IO.Compression.ZipArchive and its method
        // CreateEntryFromFile; you create a ZipArchive by calling
        // ZipFile.Open. The name of the file should probably be unique
        // per update session, so that two concurrent updates from
        // different clients don't conflict. You could also cache the ZIP
        // packages you create, so that a future update that requires the
        // same exact set of files gets the same ZIP back.
        // You have to return the URL of the ZIP, not its local path on
        // the server. There are several ways to do this, and they tend
        // to depend on the server configuration.
        return urlOfTheUpdate;
    }
}
The client would download the ZIP file by using HttpWebRequest and HttpWebResponse objects. To update the progress bar (you would have only one progress bar in this setup; check my comment to your question) you need to create a BackgroundWorker. This article and this other article cover the relevant aspects (unfortunately the example is written in VB.NET, but it looks very similar to what it would be in C#). To advance the progress bar you need to keep track of how many bytes you have received:
int nTotalRead = 0;
HttpWebRequest theRequest;
HttpWebResponse theResponse;
...
// The total size, taken from the response headers.
long length = theResponse.ContentLength;
byte[] readBytes = new byte[4096];
int bytesRead;
using (Stream responseStream = theResponse.GetResponseStream())
{
    while ((bytesRead = responseStream.Read(readBytes, 0, readBytes.Length)) > 0)
    {
        nTotalRead += bytesRead;
        int percent = (int)((nTotalRead * 100.0) / length);
        // ... write readBytes to the local file and report percent ...
    }
}
Once you have received the file you can use ExtractToDirectory (an extension method on System.IO.Compression.ZipArchive; there is also the static ZipFile.ExtractToDirectory) to update your game. Note that ExtractToDirectory throws if a destination file already exists, so to overwrite you may have to extract entry by entry.
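Something like this minimal sketch, assuming the update has already been downloaded to update.zip and that the game content lives in gameDir (both are placeholders for your own paths):
using System.IO;
using System.IO.Compression;
...
string zipPath = "update.zip";
string gameDir = @"C:\MyGame\Content";
using (ZipArchive archive = ZipFile.OpenRead(zipPath))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        string destination = Path.Combine(gameDir, entry.FullName);
        // Entries with an empty Name are directories.
        if (string.IsNullOrEmpty(entry.Name))
        {
            Directory.CreateDirectory(destination);
            continue;
        }
        Directory.CreateDirectory(Path.GetDirectoryName(destination));
        // Overwrite the existing game file with the updated one.
        entry.ExtractToFile(destination, overwrite: true);
    }
}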
If you don't want to explicitly compress the files with .NET, you can still use the first method of the web service to obtain the list of updated files, and copy the ones you need to the client using an HttpWebRequest/HttpWebResponse pair for each. This way you can actually have two progress bars. The one that counts files is simply set to a percentage like:
int filesPercent = (int)((nCurrentFile * 100.0) / nTotalFiles);
If you have another way to obtain the list, you don't even need the web service.
If you want to individually compress your files, but you can't have this feature implemented automatically by the server, you should define a web service with this interface:
[ServiceContract]
public interface IUpdater
{
    [OperationContract]
    FileModified[] GetUpdateInfo();

    [OperationContract]
    string CompressFileAndGetURL(string path);
}
Here the client asks the server to compress a specific file, and the server returns the URL of the compressed single-file archive.
Edit - Important
Especially if your updates are very frequent, you need to pay special attention to time zones: compare timestamps in UTC, never in local time.
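As a minimal sketch of a time-zone-safe comparison (localPath and serverFile here are placeholders for your own variables; requires System and System.IO):
// Compare UTC timestamps, allowing a small tolerance because some file
// systems store write times at a coarse resolution (FAT rounds to 2 s).
DateTime localUtc = File.GetLastWriteTimeUtc(localPath);
DateTime serverUtc = serverFile.Modified.ToUniversalTime();
bool needsUpdate = (serverUtc - localUtc) > TimeSpan.FromSeconds(2);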
Edit - An Alternative
I should restate that one of the main issues here is obtaining the list of files in the current release from the server; this list should include the last write time of each file. A server like Apache can provide such a list for free, and although it is usually intended for human consumption, it is nevertheless easily parsable by a program. I'm sure there must be some script/extension to have that list formatted in an even more machine-friendly way.
There is another way to obtain that list: you could have a text file on the server that, for every game content file, stores its last write time or, maybe even better, a progressive release number. You would compare release numbers instead of dates to check which files you need; this would protect you from time zone issues. In this case, however, you need to maintain a local copy of this list, because files have no such thing as a release number, only a name and a set of dates.
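A minimal sketch of that idea, under the assumption of a made-up manifest format (one "relativePath|revision" pair per line in a manifest.txt on the server; the URL is a placeholder):
using System;
using System.Collections.Generic;
using System.Net;
...
var serverRevisions = new Dictionary<string, int>();
using (var client = new WebClient())
{
    string manifest = client.DownloadString("http://website.com/updater/manifest.txt");
    foreach (string line in manifest.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
    {
        // Each line is "relativePath|revision".
        string[] parts = line.Trim().Split('|');
        serverRevisions[parts[0]] = int.Parse(parts[1]);
    }
}
// Compare against a locally cached copy of the same list; any path whose
// server revision is higher goes into the download queue.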

This is a wide and varied question, with several answers that could be called 'right' depending on various implementation requirements. Here are a few ideas...
My approach would be to use System.Security.Cryptography.SHA1 to generate a list of hash codes for each game asset. The updater can then download the list, compare it to the local file system (caching the locally-generated hashes for efficiency) and build a list of new/changed files to be downloaded.
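For example, here is a minimal sketch of the manifest-building side; contentDir is a placeholder, and the path|hash line format is just an illustration. Run the same hashing on the server to build the list and on the client to compare against it:
using System;
using System.IO;
using System.Security.Cryptography;
...
using (SHA1 sha = SHA1.Create())
{
    foreach (string path in Directory.GetFiles(contentDir, "*", SearchOption.AllDirectories))
    {
        using (FileStream stream = File.OpenRead(path))
        {
            // Hex digest of the file contents; identical files hash identically.
            string hash = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
            Console.WriteLine("{0}|{1}", path.Substring(contentDir.Length + 1), hash);
        }
    }
}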
If the game data uses archives, the process gets a bit more involved, since you don't want to download a huge archive when only a single small file inside may have been changed. In this case you'd want to hash each file within the archive and provide a method for downloading those contained files, then update the archive using the files you download from the server.
Finally, give some thought to using a Binary Diff/Patch algorithm to reduce the bandwidth requirements by downloading smaller patch files when possible. In this case the client would request a patch that updates from the current version to the latest version of a file, sending the hash of the local file so the server knows which patch to send. This requires you to maintain a stack of patches on the server for each previous version you want to be able to patch from, which might be more than you're interested in.
Here are some links that might be relevant:
SHA1 Class - Microsoft documentation for SHA1 hashing class
SevenZipSharp - using 7Zip in C#
bsdiff.net - a .NET library implementing bsdiff
Oh, and consider using a multi-part downloader to better saturate the available bandwidth at the client end. This results in higher load on the server(s), but can greatly improve the client-side experience.

Related

C# find when file was uploaded on FTP

I have a job that periodically "looks" at an FTP server to see whether any new files have been uploaded. Once it finds any, it downloads them.
The question is how, using C#, to extract the time when a file was actually uploaded to the FTP server.
Thank you. I still can't figure out how to extract exactly the time when the file was uploaded to FTP, not modified, as the following shows the time of file modification:
fileInfo = session.GetFileInfo(FileFullPath);
dateUploaded = fileInfo.LastWriteTime;
Please advise some sample code that may be integrated into my current solution:
using (Session session = new Session())
{
    string FileFullPath =
        Dts.Variables["User::FTP_FileFullPath"].Value.ToString();
    session.Open(sessionOptions);
    DateTime dateTime = DateTime.Now;
    session.MoveFile(FileFullPath, newFTPFullPath);
    TransferOperationResult transferResult;
    transferResult = session.GetFiles(newFTPFullPath,
        Dts.Variables["User::Local_DownloadFolder"].Value.ToString(), false);
    Dts.Variables["User::FTP_FileProcessDate"].Value = dateTime;
}
You might not be able to, unless you know the FTP server reliably sets the file created/modified date to the date it was uploaded. Do some test uploads and see. If it works out for you on this particular server, great; keep a note of when you last visited and retrieve files with a greater date. By way of an example, a test upload to an Azure FTP server just now (probably derived from Microsoft IIS) did indeed set the time of the file to the datetime it was uploaded. Beware that the listed file time sent by the server might not be in the same time zone as you are, nor will it have any time zone info represented; it could simply be some number of hours out relative to your current time.
To get the date itself you'll need to parse the response the server gives you when you list the remote directory. If you're using an FTP library for C# (edit: you're using WinSCP), that might already be handled for you (edit: it is; see https://winscp.net/eng/docs/library_session_listdirectory and https://winscp.net/eng/docs/library_remotefileinfo). Unless things have improved recently, the default FTP provision in .NET isn't great; it's more intended for basic file retrieval than complex syncing, so I'd definitely look at using a capable library (and we don't do software recommendations here, sorry, so I can't recommend one) if you're scrutinizing the date info offered.
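A minimal sketch with the WinSCP .NET assembly you are already using; "/incoming" and lastVisit are placeholders for your own path and cut-off time:
using System;
using WinSCP;
...
using (Session session = new Session())
{
    session.Open(sessionOptions);
    RemoteDirectoryInfo directory = session.ListDirectory("/incoming");
    foreach (RemoteFileInfo fileInfo in directory.Files)
    {
        // LastWriteTime is the listed file time; on servers that set it at
        // upload, anything newer than your last visit is a new upload.
        if (!fileInfo.IsDirectory && fileInfo.LastWriteTime > lastVisit)
            Console.WriteLine("New since last visit: {0}", fileInfo.Name);
    }
}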
That said, there's another way to carry out this sync process overall, more as a side effect of what you want to do (and it doesn't necessarily rely on parsing a non-standard list output):
Keep a memory of every file you saw last time and reference it when looking at every file that is there now. This is actually quite easy to do:
Download all the files.
Disconnect.
Go back some time later and download any files that you don't already have.
Keep track of which files you downloaded and do something with them.
You say you want to download them anyway, so just treat any file you don't already have (or one that has a newer date, a different file size, etc.) as one that is new/changed since you last looked.
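A minimal sketch of that bookkeeping on top of WinSCP; downloaded.txt, localFolder, session and directory are assumptions carried over from the previous snippet:
using System.Collections.Generic;
using System.IO;
using WinSCP;
...
// Names we fetched on previous runs.
var alreadySeen = new HashSet<string>(
    File.Exists("downloaded.txt") ? File.ReadAllLines("downloaded.txt") : new string[0]);

foreach (RemoteFileInfo fileInfo in directory.Files)
{
    if (fileInfo.IsDirectory || alreadySeen.Contains(fileInfo.Name))
        continue;
    // Anything we have never seen is treated as new: download and remember it.
    session.GetFiles(fileInfo.FullName, localFolder).Check();
    alreadySeen.Add(fileInfo.Name);
}
File.WriteAllLines("downloaded.txt", alreadySeen);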
It's potentially a big job, depending on how many different servers you want to support.

Web Api 2 RESTFUL Image Upload

I'm new to Web API and I'm working on my first project: a mobile CRM system for our company.
I want to store company logos, customers' face photos, etc.
I found some tutorials on this topic, but unfortunately some of them were old (they don't use async) and the others don't work.
At the end I found this one:
http://www.intstrings.com/ramivemula/articles/file-upload-using-multipartformdatastreamprovider-in-asp-net-webapi/
It works correctly, but I don't understand a few things.
1) Should I use App_Data (or any other folder like /Uploads) for storing these images, or rather store the images in a database?
2) Can I allow only supported image types like .jpg and .png and reject any other files?
3) How can I process the image in the upload method, e.g. resize it, reduce the file size, adjust quality, etc.?
Thank you
1) We are storing files in a different location than app_data. We have a few customer groups and we gave them all a unique folder that we get from the database. Storing in database is also an option but if you go down this road, make sure that the files you are saving don't belong directly to a table that you need to retrieve often. There is no right or wrong, but have a read at this question and answer for some pros and cons.
2) If you followed that guide, you can put a check inside the loop to test the file extension:
List<string> allowedExtensions = new List<string>();
allowedExtensions.Add(".jpg");
allowedExtensions.Add(".png");
foreach (MultipartFileData file in provider.FileData)
{
    string fileName = Path.GetFileName(file.LocalFileName);
    // Reject anything whose extension is not on the allowed list.
    if (!allowedExtensions.Contains(Path.GetExtension(fileName).ToLowerInvariant()))
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    files.Add(fileName);
}
3) Resizing images is something that I have never personally done myself, but I think you should have a look at the System.Drawing namespace (the Graphics class in particular).
Found a link with an accepted answer for downscaling pictures: ASP.Net MVC Image Upload Resizing by downscaling or padding
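To give an idea, a minimal resize sketch with System.Drawing, assuming the uploaded file is already on disk; uploadedPath, resizedPath and the 200x200 target are placeholders, and aspect ratio handling is left out (see the linked answer for that):
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
...
using (Image original = Image.FromFile(uploadedPath))
using (Bitmap resized = new Bitmap(200, 200))
using (Graphics g = Graphics.FromImage(resized))
{
    // Higher-quality interpolation for downscaling.
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
    g.DrawImage(original, 0, 0, resized.Width, resized.Height);
    resized.Save(resizedPath, ImageFormat.Jpeg);
}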
None of the questions are actually related to Web API or REST.
1) If you are using SQL Server 2008 or newer, the answer is: use FILESTREAM columns. This looks like a column in the database with all its advantages (backup, replication, transactions), but the data is actually stored in the file system. So you get the best of both worlds: it cannot happen that someone deletes the file accidentally, leaving the database referencing a nonexistent file, or, vice versa, that records are deleted from the database but the files are not, leaving you with a bunch of orphan files. Using a database has other advantages too: metadata can be associated with files, and permissions are easier to set up.
2) This depends on how files are uploaded. For example, if you are using multipart form data, examine the content type of each part before the part is saved. You can even create your own MultipartStreamProvider class. Being an API, maybe the upload method has a stream or byte-array parameter and a content type parameter; in that case just test the value of the content type parameter before the content is saved. For other upload methods do something similar, depending on what the input is.
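A minimal sketch of that check, assuming the MultipartFormDataStreamProvider-style upload from the question's link (provider.FileData as there):
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;
...
string[] allowedTypes = { "image/jpeg", "image/png" };
foreach (MultipartFileData file in provider.FileData)
{
    // Each part carries its own Content-Type header; reject non-images.
    string mediaType = file.Headers.ContentType == null
        ? null : file.Headers.ContentType.MediaType;
    if (Array.IndexOf(allowedTypes, mediaType) < 0)
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}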
3) You can use .NET's built-in classes (e.g. Bitmap: SetResolution, RotateFlip; to resize, use a constructor that accepts a size), or, if you are not familiar with image processing, choose an image processing library instead.
All of the above works in ASP.NET, MVC, Web API 1 and 2, custom HTTP handlers, basically in any .NET code.
#Binke
Never use string operations on paths. E.g. fileName.Split('.')[1] will not return the extension if the file name is something like some.file.txt, and it will fail with an index-out-of-range error if the file has no extension.
Always use the file APIs, e.g. Path.GetExtension.
Also, using the extension to get the content type is not safe, especially where pictures and videos are involved; just think of the avi extension, a container used by many different video formats.
Finally, files.Add(Path.GetFileName(file.LocalFileName)) should be files.Add(fileName).

Download 3000+ Images Using C#?

I have a list of around 3000 image URLs, where I need to download them to my desktop.
I'm a web dev, so naturally I wrote a little ASP.NET C# download method to do this, but the obvious problem happened: the page timed out before I got hardly any of them.
Was wondering if anyone knows a good, quick and robust way of looping through all the image URLs and downloading them to a folder? Open to any suggestions: WinForms, a batch file, although I'm a novice at both.
Any help greatly appreciated.
What about wget? It can download a list of URLs specified in a file.
wget -i c:\list-of-urls.txt
Write a C# command-line application (or Winforms, if that's your inclination), and use the WebClient class to retrieve the files.
Here are some tutorials:
C# WebClient Tutorial
Using WebClient to Download a File
or, just Google C# WebClient.
You'll either need to provide a list of files to download and loop through it, issuing a request for each file and saving the result; or issue a request for the index page, parse it using something like the HTML Agility Pack to find all of the image tags, and then issue a request for each image, saving the result somewhere on your local drive.
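A minimal sketch of the parsing half with the HTML Agility Pack; the page URL is a placeholder:
using System;
using HtmlAgilityPack;
...
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load("http://example.com/gallery.html");
// SelectNodes returns null when nothing matches, hence the guard.
HtmlNodeCollection imgs = doc.DocumentNode.SelectNodes("//img[@src]");
if (imgs != null)
{
    foreach (HtmlNode img in imgs)
        Console.WriteLine(img.GetAttributeValue("src", null));
}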
Edit
If you just want to do this once (as in, not as part of an application), mbeckish's answer makes the most sense.
You might want to use an existing download manager like Orbit, rather than writing your own program for the purpose. (blasphemy, I know)
I've been pretty happy with Orbit. It lets you import a list of downloads from a text file. It'll manage the connections, downloading portions of each file in parallel with multiple connections, to increase the speed of each download. It'll take care of retrying if connections time out, etc. It seems like you'd have to go to a lot of effort to build these kind of features from scratch.
If this is just a one-time job, then one easy solution would be to write a HTML page with img tags pointing to the URLs.
Then browse it with Firefox and use an extension to save all of the images to a folder.
Working on the assumption that this is a one-off, run-once project, and as you are a novice with the other technologies, I would suggest the following:
Rather than trying to download all 3000 images in one web request, do one image per request. When the image download is complete, redirect to the same page, passing the URL of the next image to get as a query string parameter. Download that one and repeat until all images are downloaded.
Not what I would call a "production" solution, but if my assumption is correct it will have you up and running in no time.
Another fairly simple solution would be to create a simple C# console application that uses WebClient to download each of the images. The following pseudocode should give you enough to get going:
List<string> imageUrls = new List<string>();
imageUrls.Add(..... your urls from wherever .....);
int i = 0;
foreach (string imageUrl in imageUrls)
{
    using (WebClient client = new WebClient())
    {
        byte[] raw = client.DownloadData(imageUrl);
        // Write the raw bytes to a local file (targetFolder is up to you).
        File.WriteAllBytes(Path.Combine(targetFolder, "image" + (i++) + ".jpg"), raw);
    }
}
I've written a similar app in WinForms that loops through URLs in an Excel spreadsheet and downloads the image files. I think the problem you're having with implementing this as a web application is that the server will only allow the process to run for a short amount of time before the request from your browser times out. You could either increase this time in the web.config file (change the executionTimeout attribute of the httpRuntime element), or implement this functionality as a WinForms application where the long execution time won't be a problem. If this is more than a throw-away application and you decide to go the WinForms route, you may want to add a progress bar to indicate progress.
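For reference, a sketch of that web.config change; the value is in seconds, and note that executionTimeout only takes effect when debug is off:
<system.web>
  <!-- executionTimeout is honoured only when debug="false". -->
  <compilation debug="false" />
  <httpRuntime executionTimeout="3600" />
</system.web>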

C#: Programmatically apply merge/patch to file?

I have a program that requires a few large (~4 or 5 MB) files. Once a week, every week, there are new versions of these files with minor changes, mostly just a few lines added or removed.
When the program starts, if there's an Internet connection, I'd like the program to update these files automatically. Instead of downloading the entire new versions of the files, I'd like to download just a patch, based on the client's version of the files, that updates them.
How might I do this?
I have total control over the server.
That is a tough problem to solve if you don't have any foreknowledge of what is in the file, or if the server doesn't have a facility that lets you request differences. Any program that has no way to determine the differences without looking at both the old and the new file will have to download the new file anyway.
C# doesn't have any built-in facility to do this, but it sounds like your requirements aren't complicated. Look at how diff and ed on Unix can be used to patch a text file based on an easy-to-grok delta. Of course you should check the resulting file against a hash and fall back to a full download if it isn't correct.
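A minimal sketch of that verify-and-fall-back step; ApplyPatch stands in for whatever diff/ed-style patcher you choose, and expectedHash and the URL are placeholders for values published by your server:
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography;
...
// Hypothetical: apply your delta to the local file.
ApplyPatch(localPath, patchPath);

string actual;
using (SHA1 sha = SHA1.Create())
using (FileStream stream = File.OpenRead(localPath))
{
    actual = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
}

// If the patched file doesn't match the published hash, get the full file.
if (!string.Equals(actual, expectedHash, StringComparison.OrdinalIgnoreCase))
{
    using (var client = new WebClient())
        client.DownloadFile("http://example.com/files/latest.bin", localPath);
}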

How to know when download is finished

Hi, I'm creating an online shop. In this shop people buy files with a .zip extension. They pay with their credit cards or other methods, get a key, and download the product. How can I know when they have finished downloading the product?
Thanks
Unfortunately there is no really good way to do this, as some clients might not download the file in one go (e.g. download managers split the download into several parallel partial downloads).
Options are:
If it is very important to you that the file can only be downloaded once, you could simply not support resuming. Then you can log whether the file has been downloaded entirely (as soon as the last byte has been sent). This might work well if the download is small.
Otherwise you could offer some grace data (we usually allow clients to download 5 times the size of the real download) and log every download attempt.
You should NOT just count the bytes downloaded (because the download might be disrupted), and you should NOT just determine whether all sections have been downloaded once (again, because the download might be disrupted).
Just to clarify: all this means that you have to write your own download handler (file server).
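A minimal sketch of such a handler in ASP.NET (an IHttpHandler) with no resume support; the file path is a placeholder and LogCompleted stands in for your own bookkeeping:
using System.IO;
using System.Web;

public class DownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/files/product.zip");
        context.Response.ContentType = "application/zip";
        using (FileStream stream = File.OpenRead(path))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Flush after each chunk; this throws if the client is gone,
                // so the logging line is never reached on a broken download.
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
        // Only reached when the last byte has been sent successfully.
        LogCompleted(context.User.Identity.Name);
    }

    private static void LogCompleted(string user)
    {
        // Placeholder: record the completed download (database, log file, ...).
    }
}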
You can use a custom file server that works over either HTTP or FTP and have it send a notification once the client has received the last file fragment.
All other options are problematic; the client might download the file using a download manager, so you cannot even register for any browser event, if there were any.
A custom server application indeed seems to be a solution for this, or possibly some kind of scripting.
A normal HTTP server does not notify you of the end of a connection, but if you generate the output in a CGI/PHP/ASP script, you can read the file in the scripting language and send it to the output; when you reach the end of the file, you do the notification, and then end the script.
Done that way, it will only detect fully downloaded files; if the connection gets interrupted halfway, it will not mark the file as downloaded.
A 'CGI script' can be a compiled C program (or any other language, for that matter); compiled code anyway. A compiled program would give better performance than an interpreted script solution.
