I am in the design phase of a file upload service that allows users to upload very large zip files to our server and updates our database with the uploaded data. Since the files are large (about 300 MB), we want to allow users to limit the amount of bandwidth used for uploading. They should also be able to pause and resume the transfer, and it should recover from a system reboot. The user also needs to be authenticated against our MSSQL database to ensure they have permission to upload the file and make changes to our database.
My question is: what is the best technology for this? We would like to minimize the amount of development required, but the only approach I can think of now is to build a client and server app from scratch in something like Python, Java, or C#. Is there an existing technology that will allow us to do this?
There are quite a few upload controls for this that you should be able to find by Googling; there are a few on this download page.
Another workaround is to have your clients install a Firefox FTP plugin, or to write a Firefox plugin yourself; either way, FTP is by far the easiest option.
What's wrong with FTP? The protocol supports resuming transfers, and there are lots and lots of clients.
On the client side, Flash; on the server side, whatever you like (it wouldn't make any difference).
There are no existing technologies for this (except FTP and the like).
I found two more possibilities.
Microsoft Background Intelligent Transfer Service (BITS):
Has: uploads and downloads, large files, encryption (via HTTPS), resuming (even auto-resume as long as the user is logged in), manual pause and resume, authentication (again via HTTPS), a wrapper for .NET, foreground or background priority, ...
Not: bandwidth throttling, file verification (only file size), compression
rsync:
Has: unidirectional transfer, large files, resuming of partial uploads (and pause via stop), verification, encryption (via SSH or stunnel), compression, a usable C library (librsync by Martin Pool [1], [2])
Not: good Windows compatibility (only via Cygwin or cwRsync), commercial usability (GPL)
Has anybody found something else in C#?
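If nothing off the shelf fits, the throttle-and-resume core isn't much code in C#. Below is a minimal client-side sketch, assuming a hypothetical endpoint that reports the byte offset it already holds in an Upload-Offset header (the endpoint URL, header name, and sizes are all made up for illustration):

    using System;
    using System.IO;
    using System.Net;
    using System.Threading;

    class ThrottledUploader
    {
        // Hypothetical endpoint: GET returns the byte offset already stored,
        // PUT appends a chunk at the offset given in the Upload-Offset header.
        const string Endpoint = "https://example.com/upload/myfile.zip";
        const int ChunkSize = 64 * 1024;          // 64 KB per request
        const int MaxBytesPerSecond = 100 * 1024; // crude bandwidth cap

        static void Main()
        {
            long offset = QueryServerOffset(); // resume where the server left off
            using (var file = File.OpenRead("myfile.zip"))
            {
                file.Seek(offset, SeekOrigin.Begin);
                var buffer = new byte[ChunkSize];
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    DateTime started = DateTime.UtcNow;
                    SendChunk(buffer, read, offset);
                    offset += read;

                    // Throttle: if the chunk went out faster than the cap allows, sleep.
                    TimeSpan minDuration = TimeSpan.FromSeconds((double)read / MaxBytesPerSecond);
                    TimeSpan elapsed = DateTime.UtcNow - started;
                    if (elapsed < minDuration)
                        Thread.Sleep(minDuration - elapsed);
                }
            }
        }

        // Ask the server how much of the file it already has.
        static long QueryServerOffset()
        {
            var request = (HttpWebRequest)WebRequest.Create(Endpoint);
            using (var response = (HttpWebResponse)request.GetResponse())
                return long.Parse(response.Headers["Upload-Offset"] ?? "0");
        }

        // Upload one chunk, telling the server where it belongs.
        static void SendChunk(byte[] buffer, int count, long offset)
        {
            var request = (HttpWebRequest)WebRequest.Create(Endpoint);
            request.Method = "PUT";
            request.Headers["Upload-Offset"] = offset.ToString();
            request.ContentLength = count;
            using (var stream = request.GetRequestStream())
                stream.Write(buffer, 0, count);
            request.GetResponse().Close();
        }
    }

Pause is just stopping the loop; resume re-queries the offset, which also covers recovery after a reboot.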
There's an example of using HTML5 to create a resumable large-file upload that might be helpful:
http://net.tutsplus.com/tutorials/javascript-ajax/how-to-create-a-resumable-video-uploade-in-node-js/
I'm surprised no one has mentioned torrents. They can also be packaged into a script that then triggers something to execute.
Suppose I have an API endpoint such as the Facebook Graph API, and I design an application running on my PC that periodically connects to the API and retrieves my posts, comments, etc. On each Timer_Tick, the program reconnects to the API, fetches the top 10 data items, and persists them into a database.
Now, suppose that this application was built by a third party, and I just downloaded it from the internet as a binary file, not open source.
How can I know whether the application is leaking my Facebook data to a third party without my knowledge?
Is there a mechanism to monitor such leaking, if it occurs (from a programmatic perspective)?
This is a matter of security. To be reasonably sure, you would have to think through every vulnerability and confirm that none of the known ones can reveal your data; you can never be sure about the unknown ones.
If this is a matter of trust and you are dealing with sensitive data, I strongly recommend avoiding 3rd-party tools unless they are provided or certified by the API provider. Here are some techniques which will help you understand what is going on under the hood, but they will definitely not guarantee safety:
1- First of all, make sure the application really is binary code (I know you mentioned it as binary), because some executable files are just scripts or semi-scripts that merely look like binaries. For instance, if the executable was written in C#, Python, or Java, there are tools out there that will help you decompile it and find out what's going on inside. This can of course be considerably tougher if, for example, the code is obfuscated or built on complex OO models.
2- Use a network-monitoring tool like Wireshark to capture all HTTP/HTTPS traffic while the 3rd-party application is in use. Because API calls are just the HTTP requests applications use to exchange data, these tools show you exactly what is happening on your computer. Normally the application should only connect to the Facebook servers and the URLs needed for the web API; if any request goes to, or comes from, a server other than Facebook, there is a chance of a data leak. If those requests are not encrypted with SSL/TLS, you will be able to see the data being exchanged; if they are encrypted with SSL/TLS, there are man-in-the-middle tools that let you inspect the traffic; but if the data is additionally encrypted at the application layer, you won't be able to see what is transmitted, which should raise your suspicion of a leak even higher. Don't forget that this monitoring must cover the entire usage cycle of the application.
Also, limiting the application with the OS firewall so it can talk only to the server on which you are calling the API will be a step forward in decreasing the chance of a data leak.
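As a rough programmatic complement to Wireshark, you could poll the machine's active TCP connections from C# and flag anything outside an allow-list. A sketch follows; the address prefixes are placeholders you would fill in after resolving Facebook's API hosts, and note that this sees the whole machine's connections, not just the one process:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;
    using System.Threading;

    class ConnectionWatcher
    {
        // Remote address prefixes we consider legitimate (placeholders:
        // resolve the API provider's hostnames and list the results here).
        static readonly string[] AllowedPrefixes = { "157.240.", "31.13." };

        static void Main()
        {
            while (true)
            {
                var connections = IPGlobalProperties.GetIPGlobalProperties()
                                                    .GetActiveTcpConnections();
                foreach (var c in connections)
                {
                    string remote = c.RemoteEndPoint.Address.ToString();
                    bool allowed = AllowedPrefixes.Any(p => remote.StartsWith(p));
                    if (!allowed && c.State == TcpState.Established)
                        Console.WriteLine("Unexpected connection to {0}:{1}",
                                          remote, c.RemoteEndPoint.Port);
                }
                Thread.Sleep(2000); // poll every 2 seconds
            }
        }
    }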
I am about to start development of an application in C# and .NET. The application is going to be big in terms of how users can configure what it displays on the screen.
I need some way to store configuration data, and I have only explored two options so far: XML files and INI files. Which of these is better? Is there any other approach that works well with the .NET Framework?
This is a really broad question, but basically, the way I see it, there are a few prime places to store application data. Exactly what and how you store it will depend on the type of data. The following is my personal short list:
Registry - The Windows registry can be used to store single key/value pairs or small amounts of read-only data. You can really only write these settings when your application is installed, unless you are running in administrator mode (which isn't a good idea).
App Config - Similar to the registry, this allows storage of application data that is generally best written during installation or configuration, but not much after that. The nice thing is that a system administrator can usually find these files, and they are XML, which makes them easier to edit (and read) than other formats.
Isolated Storage - If you are storing application-, user-, or machine-specific information and you don't mind writing your own file readers and writers (or you are interested in delving into XML storage), this is an excellent option for you; see the sketch after this list. It allows user-specific settings and doesn't require the user to have special privileges on the computer.
Local Database - If you want values that you can look up easily, read and write often, and store simply, a local database can be excellent. You might consider SQLite or a similar tool for this.
Network Database - This is pretty much as advanced as it gets. If you want user information to be processed automatically regardless of where the user opens your application, and you want to share settings between computers, this is probably your best option. You can use MySQL for free, or SQL Server Express if you aren't storing gigabytes of settings. It requires a significant amount of setup, but it might be the best option anyway if you need this level of capability.
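For the Isolated Storage option, here is a minimal sketch of saving and loading a settings file (the file name and contents are arbitrary choices):

    using System.IO;
    using System.IO.IsolatedStorage;

    static class Settings
    {
        // Writes a small settings file into the current user's isolated
        // store; no special privileges are required.
        public static void Save(string contents)
        {
            using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
            using (var stream = new IsolatedStorageFileStream(
                       "settings.xml", FileMode.Create, store))
            using (var writer = new StreamWriter(stream))
                writer.Write(contents);
        }

        public static string Load()
        {
            using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
            {
                if (!store.FileExists("settings.xml"))
                    return null; // first run: nothing saved yet
                using (var stream = new IsolatedStorageFileStream(
                           "settings.xml", FileMode.Open, store))
                using (var reader = new StreamReader(stream))
                    return reader.ReadToEnd();
            }
        }
    }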
Hopefully this gets you started. Best of luck!
Recently, a project came my way with requirements to ...
1. Build a C# console app that continuously checks website availability.
2. Save website status somewhere so that different platforms can access the status.
The console app is complete, but I'm wrestling with where I should save the status. I'm thinking a SQL record.
How would you handle where you save the status so that it's extensible, flexible and available for x number of frameworks or platforms?
UPDATE: Looks like I'll go with DB storage behind a RESTful service. I'll also save the status to an XML file as a fallback in case the service is down.
The availability of the websites could be POSTed to a second web service which returns a JSON/XML result for the availability of said website(s). This pretty much means any platform/language capable of making a web-service call can check the availability of the site(s).
Admittedly, this does give a single point of failure (the status web service), but inevitably you'll end up with that kind of thing anyway unless you want to start having fail-over web services, etc.
You could save it as XML, which is platform independent, and then share it by publishing it on a web server. It seems ironic to share website availability on another website, but just like websites, other types of servers/services can have downtime too.
You could create a web service; you will probably need to open fewer unusual ports on the firewall to connect to an HTTP server than to connect to a SQL Server database. You can also extend that service layer to add business rules more easily than at the database level.
I guess a web service is the best option. Just expose a RESTful API that returns a simple JSON response with the server status. Fast and cheap on resources.
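To illustrate how small such a status service can be, here is a hedged sketch using .NET's HttpListener. The port, path, and hard-coded JSON payload are assumptions; a real service would read the status from wherever the console app stores it:

    using System;
    using System.Net;
    using System.Text;

    class StatusService
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/status/"); // assumed port/path
            listener.Start();
            Console.WriteLine("Status service listening...");

            while (true)
            {
                HttpListenerContext context = listener.GetContext(); // blocks
                // In a real service the status would come from the monitoring
                // results (database, in-memory cache, etc.); hard-coded here.
                string json = "{\"site\":\"example.com\",\"up\":true," +
                              "\"checkedAt\":\"" + DateTime.UtcNow.ToString("o") + "\"}";
                byte[] body = Encoding.UTF8.GetBytes(json);
                context.Response.ContentType = "application/json";
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }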
Don't re-invent the wheel. Sign up for Pingdom, Montastic, AlertBot, or one of the plethora of other pre-existing services that will do this for you.
But, if you really must, a database table would be fine.
I'm writing a video-on-demand solution. I want to take care of the end-to-end process, from ingestion of content to playback.
I have decided to utilise IIS Smooth Streaming as the delivery method, which means all the video content must be encoded as H.264 adaptive-streaming video.
I originally started using Azure for this project, but as I dove deeper I realised it really was too big a platform for what I need; it added unnecessary work and complexity for what I'm trying to achieve.
The 'issue' I'm having is choosing an appropriate method to encode the uploaded content. There are many encoding packages available, but I can't find one which meets my criteria.
I'm happy to use an off-the-shelf package or write something with an appropriate SDK. My criteria:
Must operate on Windows Server 2012
Must operate while the interactive user is logged off (i.e. as a Windows service)
Ideally notify when the job is complete (can be an indirect method)
Ideally create a thumbnail
Invoking the encode process can be via a simple command line, a watch folder, or an API/SDK
Must run on my server, not a cloud service
Must encode H.264 adaptive streaming for IIS
I've tried:
Expression Encoder - Doesn't work on Server 2012, and is no longer developed by MS
Sorenson Squeeze - Almost works, but it leaves itself open each time it is launched from the command prompt to encode, so I'd end up with hundreds of instances
Azure - too big and expensive
I know Sorenson has a server product which can be self-hosted, but it is cost-prohibitive.
MainConcept has several SDKs and I've emailed them, but they don't list prices, which to me means expensive. (You may have noticed cost is a big factor; I'm a one-man company.)
Can anyone recommend a .NET (C#) SDK or encoder package which will meet my criteria?
Many thanks
Take a look at http://www.ffmpeg.org/
While not a .NET solution, it's free and meets most of your criteria.
I've not used it for adaptive-streaming video, but apparently it supports it.
It's all done from the command line; depending on what you want to do, you may need to write a wrapper for it (we had to, for monitoring folders/databases and for notifications on completion), but I've successfully used it in the past to encode tens of gigs of video on a daily basis.
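For what it's worth, a wrapper along those lines doesn't need to be large. Here is a sketch in C# that shells out to ffmpeg for a single-bitrate H.264 encode plus a thumbnail; a Smooth Streaming package needs multiple bitrates and the right muxer flags, which vary by ffmpeg build, so treat the arguments as a starting point:

    using System;
    using System.Diagnostics;

    class EncodeJob
    {
        static void Main(string[] args)
        {
            string input = args[0], output = args[1];

            // Single-bitrate H.264 encode; adjust bitrates/flags per target.
            var psi = new ProcessStartInfo
            {
                FileName = "ffmpeg",
                Arguments = string.Format(
                    "-i \"{0}\" -c:v libx264 -b:v 1500k -c:a aac \"{1}\"",
                    input, output),
                UseShellExecute = false,
                RedirectStandardError = true // ffmpeg writes progress to stderr
            };

            using (var process = Process.Start(psi))
            {
                string log = process.StandardError.ReadToEnd();
                process.WaitForExit();
                if (process.ExitCode == 0)
                    Console.WriteLine("Encode complete: " + output); // notify here
                else
                    Console.WriteLine("Encode failed:\n" + log);
            }

            // Grab a thumbnail from 5 seconds in.
            Process.Start("ffmpeg",
                string.Format("-ss 5 -i \"{0}\" -vframes 1 \"thumb.jpg\"", input))
                   .WaitForExit();
        }
    }

Hosting this logic inside a Windows service and feeding it from a watch folder covers the logged-off and notification requirements.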
You can write your own service in C#, or run it directly inside an IIS application with the needed permissions.
Using ffmpeg:
http://vbffmpegwrapper.codeplex.com/
https://code.google.com/p/ffmpeg-sharp/
Using vlc:
https://code.google.com/p/libvlc-sharp/
http://libvlcnet.codeplex.com/
Rhozet ProMedia Carbon is the tool you're looking for. It supports all your needs, you can request a free demo, and they handle all the licensing for the formats.
http://www.harmonicinc.com/product/promedia-carbon
Any tool you might want for this is going to be cost-prohibitive due to licensing.
As others have mentioned, your other option is FFmpeg.
I've been looking for a simple key/license system for our users. It's partly to stop piracy (to keep users from sharing the application around), and the other half is to track the number of 'licensed users' we have. I have already read a few good suggestions on SO, but I'm curious how people have implemented the 30-day evaluation criterion.
Do you generate a key that stores the date somewhere and do a comparison each time, or is it a little more complicated? Deleting the file or removing the registry entry shouldn't defeat the mechanism.
Are there any example implementations out there that could give me a head start? The irony is that our PM doesn't want to license a third-party system to do it for us.
This is for a Windows Forms application.
Have you checked out the Rhino-Licensing project by Ayende Rahien? You can also see his blog post about licensing a commercial product, which led him to develop this solution.
There are two separate challenges: (i) how do you prevent a copied app from running, and (ii) how do you prevent users from ripping out or bypassing your prevention scheme? The first is usually handled by taking a hard-to-copy signature of the user's system (e.g. hard drive ID + processor ID + RAM, etc.), using it as the seed/key, AND activating it online by calling home.
The second issue is harder in .NET, since the source code can in some way be extracted and recompiled to exclude your protection system. The key here is to make it cheaper to buy the license than to remove the protection at the user's end. You may find that for most products, a customized engine that encrypts your product libraries (including your copy protection) and decrypts them at initial run time is enough.
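A hedged sketch of the hardware-signature idea: the WMI fields queried here (processor ID and BIOS serial) are a common but arbitrary choice, and either can be missing or duplicated on real machines, so treat this as a starting point rather than a robust fingerprint:

    using System;
    using System.Management;            // add a reference to System.Management.dll
    using System.Security.Cryptography;
    using System.Text;

    static class MachineFingerprint
    {
        // Combines a couple of hardware identifiers and hashes them; the
        // hash can seed a license key or be sent to an activation server.
        public static string Compute()
        {
            var sb = new StringBuilder();
            foreach (ManagementObject mo in new ManagementObjectSearcher(
                         "SELECT ProcessorId FROM Win32_Processor").Get())
                sb.Append(mo["ProcessorId"]);
            foreach (ManagementObject mo in new ManagementObjectSearcher(
                         "SELECT SerialNumber FROM Win32_BIOS").Get())
                sb.Append(mo["SerialNumber"]);

            using (var sha = SHA256.Create())
                return Convert.ToBase64String(
                    sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
        }
    }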
I am not sure you can actually protect a .NET application, although there may be commercial solutions that do the trick. The reason is that .NET code can be decompiled with Red Gate's Reflector (formerly by Lutz Roeder; thanks Jasonh for the heads-up). The best way to deal with this is to look into code obfuscation, which makes reflection much trickier. I can point you to one place I know of that does this for free: Phoenix - NtCore.Com.
A more esoteric solution would be to create a .NET hosting environment in C++, load the (possibly encrypted) binary image, and have the hosting environment decrypt it in memory. I have heard of that in theory but am not sure how it would be done in practice. Please do not roll your own protection scheme, as it could have a weakness.
Someone once said: "Security through obscurity"...
Hope this helps,
Best regards,
Tom.
I worked on a project that handled this by putting some critical functionality (for example data storage, reporting, or payments) on an external server we ran, and requiring the user to log in to that server to use it.
Customers can make backups, share the application, or run it locally, but to access the critical function they have to type a password into our application and connect to our server. Customers knew the password allowed changing their data, so they would not want to share it with other people.
This was handy because we do not care how many copies of the application are out in the wild; we only track server connections. We included machine-identifying data like the MAC address in the connection data, so we can track which machines are connecting.
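For reference, grabbing a machine identifier like the MAC address is a few lines against .NET's NetworkInterface API. A sketch (picking the first operational, non-loopback adapter is a heuristic, since machines can have several):

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    static class MachineId
    {
        // Returns the MAC address of the first operational, non-loopback
        // adapter; sent along with each server connection for tracking.
        public static string FirstMacAddress()
        {
            var nic = NetworkInterface.GetAllNetworkInterfaces()
                .FirstOrDefault(n => n.OperationalStatus == OperationalStatus.Up
                    && n.NetworkInterfaceType != NetworkInterfaceType.Loopback);
            return nic == null ? null : nic.GetPhysicalAddress().ToString();
        }
    }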
I'm not just saying this because my company sells the OffByZero Cobalt software licensing solution for .NET: your PM should know that software licensing is very hard to get right, and if you roll your own, you'll be supporting it for the foreseeable future.
Take a look at the article Developing for Software Protection and Licensing; it explains how to choose a solution, why you should obfuscate your application and gives a number of tips for structuring your code to be harder to crack.
In particular it makes the point that the vast majority of companies should outsource their software licensing, as it makes no sense to spend developer time on building and maintaining a complex system that isn't your core business.
What is more important to your company: adding an important new feature to your product, or tracking down a peculiar permission behaviour on an ancient version of Windows that's clobbering your licensing system?