On my company's intranet, the business has asked that users be able to drop a file into a shared folder and have it auto-posted to the intranet site. It is essentially a file upload, but with direct access to the storage device on a different server (and with filetype limitations).
Since I don't have any control over the users or their knowledge of Section 508 compliance, is there a method to validate a document before it is added to the page? Right now, I have a C# class that builds a list from approved folders within that directory. I just want to make sure that files that are not accessible do not get added to the list.
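For context, the list-building class is along these lines (simplified; the folder handling and allowed extensions are just placeholders):

```csharp
// Simplified sketch of the list-building class; folder and extension
// choices here are illustrative.
using System;
using System.Collections.Generic;
using System.IO;

public class ApprovedFileList
{
    static readonly string[] AllowedExtensions = { ".pdf", ".doc", ".docx" };

    public List<string> Build(string approvedFolder)
    {
        var files = new List<string>();
        foreach (string file in Directory.GetFiles(approvedFolder))
        {
            string ext = Path.GetExtension(file).ToLowerInvariant();
            if (Array.IndexOf(AllowedExtensions, ext) < 0)
                continue;   // enforce the filetype limitations
            files.Add(file);
        }
        return files;
    }
}
```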
I am sorry to say that Chris is incorrect in his answer. Section 508 applies to all file types, not just applications. There is a scanner by HiSoftware called Compliance Sheriff that might work, but it may struggle given how big your site is. My recommendation is to train people to make files compliant.
If you want the files to be truly accessible, then no machine algorithm will do. A story from my previous workplace:
Content editors for a site were asked to enter a picture's caption (placed below the image, for all to see) and its alt text (for screen readers). One editor ended up entering the same text for the alt text as for the caption. As a result, a 508 scan passed, but an actual 508 evaluation by a human did not.
We had a similar problem to yours, with files placed on our CMS. Our customer and her cadre of lawyers ultimately decided that it was not up to the IT staff to determine 508 compliance: that was the responsibility of the file's publisher. If someone wanted to upload a PDF, PPT, or Word doc, they had to ensure it was 508 compliant FIRST. We, the code monkeys, were not to take on the risk.
Whenever someone was going to upload a file, we'd warn them about compliance, and offer them a contact for making their files 508 compliant (we had an in-house group for that). We also offered a way for users to flag non-508 files so we could pull them down quickly and get them updated.
Related
If I were to follow this example, file uploads would be stored in the wwwroot folder. It is my understanding that this folder is where you should store static files that will be served to the user. Sure, I want my users to be able to download files, but is there a file-system location, specific to ASP.NET Core/IIS/Windows Server 2012, that would be best? I'm expecting around 10,000 files max after several years.
I'm planning on creating a folder for the uploaded documents, I'm just unsure of where to place it.
Note: The answer provided here was not sufficient
To the best of my understanding, unless you take special precautions, files under wwwroot can be downloaded freely by users, bots, etc. with no authentication. If the files are not sensitive in nature, then there is nothing wrong with using wwwroot.
If you want to provide security at the controller level (e.g., a user can only view their own files), then it's probably better to put them elsewhere in the file system. The path is kind of arbitrary, but the security settings on the folder must be set in such a way that the dotnet process can access it. You can give Everyone full access, or be more restrictive if you see fit. This is done directly on the OS of the server, assuming that you have access to it.
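For instance, a minimal sketch of such a controller action (ASP.NET Core); `_uploadRoot` and `CurrentUserOwns` are placeholders for your own configured folder and ownership rule:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using System.IO;

public class FilesController : Controller
{
    // _uploadRoot would come from configuration; hard-coded here for brevity.
    private readonly string _uploadRoot = @"C:\AppData\Uploads";

    [Authorize]
    public IActionResult Download(string fileName)
    {
        var safeName = Path.GetFileName(fileName);        // blocks path traversal
        var fullPath = Path.Combine(_uploadRoot, safeName);

        if (!System.IO.File.Exists(fullPath) || !CurrentUserOwns(safeName))
            return NotFound();

        return PhysicalFile(fullPath, "application/octet-stream", safeName);
    }

    // Placeholder for your own "user can only view their own files" rule.
    private bool CurrentUserOwns(string fileName) => true;
}
```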
I would not store the uploads under wwwroot, for two reasons:
1. Keep control and overview. I would create a dedicated subfolder and give it the rights needed to create and write uploaded files there (a sketch follows below).
2. You are then free to change authorization should your requirements ever change.
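A minimal sketch of point 1 (ASP.NET Core); the folder name "App_Uploads" is illustrative, and validation is omitted:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using System.IO;
using System.Threading.Tasks;

public class UploadController : Controller
{
    private readonly IWebHostEnvironment _env;
    public UploadController(IWebHostEnvironment env) { _env = env; }

    [HttpPost]
    public async Task<IActionResult> Upload(IFormFile file)
    {
        // Dedicated upload folder outside wwwroot, under the content root.
        var folder = Path.Combine(_env.ContentRootPath, "App_Uploads");
        Directory.CreateDirectory(folder);   // no-op if it already exists

        var path = Path.Combine(folder, Path.GetFileName(file.FileName));
        using (var stream = System.IO.File.Create(path))
            await file.CopyToAsync(stream);

        return Ok();
    }
}
```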
In my ASP.NET application, videos of any size can be uploaded, and I need to decrease (reduce) the size of the video. Does anyone have code for this? Please help me.
Thanks
You cannot.
Any reduction in video size would be either compression or re-encoding to a lower resolution, etc.
This is way beyond the scope of a web browser upload - unless you want to implement one or both of those in JavaScript (!).
Any size reduction would have to be done as a separate step - outside of the website - before uploading.
The whole question suggests you may not have understood how web pages work, in principle. There is a very strong separation of responsibilities between the web browser and the server. In particular, the following answer to a comment is, frankly, funny:
"Okay no need to instantly decrease size, just before save path in database and store file in folder decrease the size, save decreasing file in folder"
OK, let's upload the path. HOW DOES THIS HELP?
The path will be local to the uploader's machine. C:\Videos\LargeVideo.mpg is neither the video file, nor a location your ASP.NET server can access.
This does not solve the problem at all. Unless the user transcodes the file, it is still on the user's machine and too large. This is like saying "OK, the package weighs too much - let's write the recipient's address in another font." It does not even try to solve the problem.
The only realistic solutions are:
Provide the bandwidth.
Provide a client-side upload application (NOT a webpage) that the user installs, which can then not only do the upload but also do any transcoding necessary before uploading.
You are stuck between two constraints:
A very strong client/server separation and
A very limited runtime environment on the client (JavaScript in the web browser).
No amount of whining or refusal to accept this will ever change it. There is no magical way to "nothing to convert any format, all type of videos accepted just simple decrease file size only". This is called transcoding (changing from one encoding to another - you can, for example, change the resolution while doing so), and it is a VERY intensive process, not something doable in a browser.
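To give a sense of what transcoding actually involves, here is a rough server-side sketch that shells out to ffmpeg (assuming it is installed on the server; paths, codec, and quality settings are illustrative). Note that this does nothing for upload bandwidth - it can only run after the full-size file has already arrived:

```csharp
// Transcode an already-uploaded video down to 720p with ffmpeg.
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",   // assumes ffmpeg is on the PATH
    Arguments = "-i \"C:\\Uploads\\input.mp4\" -vf scale=-2:720 -c:v libx264 -crf 28 \"C:\\Uploads\\output.mp4\"",
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var process = Process.Start(psi))
    process.WaitForExit();   // transcoding a large video can take minutes
```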
I'm working on a project that requires us to send PDFs to a printing press. We've previously done this with a desktop app and Acrobat, but I'd like to switch to an ASP.Net app to give us more flexibility on what device the end user is using (there will likely only be one user at any given time).
Following something similar to this MS KB article is working well for sending the PDFs - the printer prints the documents, decent quality, etc.
The only issue I've found though, is that our files may require different printer configurations - e.g. one may need to be booklet folded, while the next may not.
Previously we had set these up in preconfigured drivers (i.e. "MyPrinter1" is set to booklet folded, "MyPrinter2" is not - both point to the same physical printer).
Sending the raw data, however, seems to ignore these. I'm assuming it's due to some header data not being included, or something similar; but I haven't found any info on how to include it.
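For reference, the KB approach boils down to the following P/Invoke sketch (condensed; error handling omitted). The bytes go to the spooler with datatype "RAW", which bypasses the driver's rendering path - presumably why the preconfigured preferences are ignored:

```csharp
using System;
using System.Runtime.InteropServices;

static class RawPrinter
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    private class DOCINFOA
    {
        [MarshalAs(UnmanagedType.LPStr)] public string pDocName;
        [MarshalAs(UnmanagedType.LPStr)] public string pOutputFile;
        [MarshalAs(UnmanagedType.LPStr)] public string pDataType;
    }

    [DllImport("winspool.Drv", EntryPoint = "OpenPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    private static extern bool OpenPrinter(string szPrinter, out IntPtr hPrinter, IntPtr pd);
    [DllImport("winspool.Drv", SetLastError = true)]
    private static extern bool ClosePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", EntryPoint = "StartDocPrinterA", SetLastError = true, CharSet = CharSet.Ansi)]
    private static extern bool StartDocPrinter(IntPtr hPrinter, int level,
        [In, MarshalAs(UnmanagedType.LPStruct)] DOCINFOA di);
    [DllImport("winspool.Drv", SetLastError = true)]
    private static extern bool EndDocPrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", SetLastError = true)]
    private static extern bool StartPagePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", SetLastError = true)]
    private static extern bool EndPagePrinter(IntPtr hPrinter);
    [DllImport("winspool.Drv", SetLastError = true)]
    private static extern bool WritePrinter(IntPtr hPrinter, IntPtr pBytes, int dwCount, out int dwWritten);

    public static void SendBytes(string printerName, byte[] bytes)
    {
        // "RAW" tells the spooler to pass the bytes straight through.
        var di = new DOCINFOA { pDocName = "PDF print job", pDataType = "RAW" };

        OpenPrinter(printerName, out IntPtr hPrinter, IntPtr.Zero);
        StartDocPrinter(hPrinter, 1, di);
        StartPagePrinter(hPrinter);

        IntPtr unmanaged = Marshal.AllocCoTaskMem(bytes.Length);
        Marshal.Copy(bytes, 0, unmanaged, bytes.Length);
        WritePrinter(hPrinter, unmanaged, bytes.Length, out int written);
        Marshal.FreeCoTaskMem(unmanaged);

        EndPagePrinter(hPrinter);
        EndDocPrinter(hPrinter);
        ClosePrinter(hPrinter);
    }
}
```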
I'm open to other methods. I've tried Ghostscript, but it threw errors about the files. PDFSharp seems to work fine locally or while logged in to the web server, but doesn't do anything when logged out (not even an error message; assuming this is Adobe more than PDFSharp).
I'm potentially open to a paid option, but would (obviously) prefer free.
It's been a while and I forgot I had asked this question, but what we wound up doing was:
PdfView4Net (http://www.o2sol.com/pdfview4net/overview.htm) for opening the PDF and managing the print job.
Setting up default printing preferences for each configuration on the print server.
Making sure all printer configurations were installed for the same user as the service.
I wrote a custom control for output file name selection, with the typical elements: a text box for the filename, a "browse" button, and some other functionality specific to my application.
The text box changes color depending on the filename. If the file location cannot be written to, it turns red. If the file already exists, it turns yellow. Otherwise, it remains the system-assigned color.
To see if a file exists, I use IO.File.Exists; simple enough.
I implemented the "can the file be written to" check as a simple try-catch block where a file is actually opened, something is written to it, it is closed, and then deleted. If at any point an exception is thrown, I know the user can't use that filename, and I turn the text box red.
This is a catch-all; since I'm doing the actual operation I intend to do, it is foolproof. However, it seems irresponsible to have software creating and deleting files like crazy just to see whether it can.
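For reference, the probe looks roughly like this (simplified; it assumes the File.Exists check has already run):

```csharp
using System;
using System.IO;

static bool CanWriteProbe(string path)
{
    try
    {
        // CreateNew rather than Create, so an existing file is never clobbered.
        using (var fs = new FileStream(path, FileMode.CreateNew, FileAccess.Write))
            fs.WriteByte(0);                  // actually write something
        File.Delete(path);                    // clean up the probe file
        return true;
    }
    catch (Exception)                         // any IO/security failure: turn the box red
    {
        return false;
    }
}
```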
So my question is, how do I replicate this functionality without creating files? I can see I have to:
Check the path for legality (e.g., 'z:' is not a valid filename). This entails parsing the path and making sure all directories exist.
If the location exists, I have to check for write permissions. (Several answered questions exist to this end.)
Is there anything else?
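For the first check, this is roughly what I have in mind (permissions, as noted, are covered by the linked questions):

```csharp
// Sketch of the path-legality check without touching the disk.
using System;
using System.IO;

static bool PathLooksLegal(string path)
{
    string full;
    try
    {
        full = Path.GetFullPath(path);   // throws on illegal characters and malformed paths
    }
    catch (ArgumentException) { return false; }
    catch (NotSupportedException) { return false; }
    catch (PathTooLongException) { return false; }

    string name = Path.GetFileName(full);
    if (name.Length == 0 || name.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
        return false;

    // All directories on the path must already exist.
    string dir = Path.GetDirectoryName(full);
    return !string.IsNullOrEmpty(dir) && Directory.Exists(dir);
}
```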
EDIT
Within minutes, I see people already voting up an answer that criticizes the very idea of checking that the file is accessible before the actual write occurs. While I appreciate experts "standing back" from my question to see whether there is a completely different way to achieve the goal, telling me I shouldn't be doing it is not an answer to my question.
So let me elaborate on my application (I am not expecting hundreds of users at the same time).
I use this file chooser control in data acquisition applications. In many situations the test that you are about to run is "expensive" in one way or another. Therefore it is critical to set things up very carefully. Overwriting data can be very expensive (and for the fearful user I have a checkbox that will append the date and time down to the millisecond to the filename).
So the purpose of my indicator colors is not to provide a surefire way for the software to know the file can be written to (that check is still done at the instant it actually has to happen). It serves as an indicator to the user that at least he has set up the file name correctly, so that if he goes forward he is guaranteed not to overwrite old data, and he can be almost sure a last-minute IO error (a filename typo, say) won't let the experiment run unrecorded.
I suggest this - don't check anything before the user commits the action. With your current approach, even if you verified the file is okay, it may be locked 5 seconds later when the user actually commits to writing it. Preliminary checks may only give the user a false impression of estimated success. Especially consider this point on a terminal server with 100+ simultaneous users.
There is nothing wrong with showing a prompt with Retry/Cancel/etc. if no access, and let user decide.
EDIT:
No offense, but there are standards for how such collisions are handled. The Windows standard is to show a prompt to the user. Also consider this - if you are suddenly denied write access to a folder you are expected to have access to, you probably need to hire another system/network administrator.
If the operation is costly, make sure this guy is paid well. C'mon, what if your network goes down during writing? Hard drive? Router? There are many reasons why writing to a file can be interrupted, and you should be prepared for that. If you cannot afford it, make sure you have invested in good infrastructure and good people to support it.
Back down to earth, you can increase the chances of acquiring a successful lock on the file:
Pick a unique file name, using a datetime-based hash as a suffix/prefix (see the sketch below).
Write to the user's home directory (%UserProfile%); it is likely that you will succeed.
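A minimal sketch combining both suggestions:

```csharp
using System;
using System.IO;

string home = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
string name = string.Format("result_{0:yyyyMMdd_HHmmss_fff}.dat", DateTime.Now);
string path = Path.Combine(home, name);   // e.g. C:\Users\alice\result_20240101_120000_000.dat
```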
I can understand your problem with not wanting to risk losing "expensive" data because the file couldn't be written, and a responsible program will do its best to avoid the situation.
I would do this by caching the results. Before the test is run, write a mock result to a file somewhere in the user data space, then leave the file open and write the real result to it as the test proceeds. After this is done, write it to the user-specified file. Provide a recovery option that will read the cache file and write it out to the user's file.
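A sketch of the idea, where userChosenPath stands in for the path from your file chooser and the cache location is illustrative:

```csharp
using System;
using System.IO;

string userChosenPath = @"C:\Data\run1.dat";   // would come from the control

string cacheDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "MyAcquisitionApp");
Directory.CreateDirectory(cacheDir);
string cachePath = Path.Combine(
    cacheDir, string.Format("run_{0:yyyyMMdd_HHmmssfff}.cache", DateTime.Now));

using (var cache = new StreamWriter(cachePath))
{
    cache.WriteLine("mock result");   // proves up front that the cache is writable
    cache.Flush();
    // ... run the test, streaming real results into `cache` as they arrive ...
}

// Only now touch the user-specified file; if this step fails, the cache file
// survives and the recovery option can copy it out later.
File.Copy(cachePath, userChosenPath);
```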
Your approach could fail because the file being writable at the start doesn't mean it's still writable later. The network could have gone down. Someone could have removed the flash drive. Someone else could be doing a large data transfer through a buggy router. (Real-world case: it took me a long time to prove it was a network problem and not my program. They finally accepted it was their fault when I showed that running dir *.* /s on multiple machines at once would almost certainly cause one or more to fail.)
OK, I thought this was a fairly simple task, but apparently it isn't ...
I have a folder with 1000+ photos in it. These are all photos taken with a camera, each about 3 MB. Users need to be able to view these pictures (as a list) and rename or delete them. That's it.
A possible solution would be this control: ImageListView - CodeProject
but because it has an Apache license, we can't use it.
So how to do it? Any ideas or suggestions? I'm using .NET 2.0
EDIT:
OK, apparently we CAN use the Apache license. (Also see: https://stackoverflow.com/questions/1007338/can-i-use-a-library-under-the-apache-software-license-2-0-in-a-commercial-applic) However, using the license is very confusing for me. I read the following guide but still don't know exactly how to apply it to our project: http://blog.maestropublishing.com/how-to-apply-the-apache-20-license-to-your-pr
it says:
you need two files in the root or top directory of your distribution.
What exactly is meant by "distribution"? Is that our installed application, with the top directory meaning Program Files/OurApp/?
It also says:
Replace all [bracketed] items in the above notice statement. There are only two of these items so should not be hard for you to do.
But that would give me a notice file, reading:
Copyright 2012 OUR_COMPANY
Licensed under the Apache License, etc...
But our app isn't licensed under the Apache license?
I'm sorry but I'm very confused and don't want to make any mistakes with this legal stuff...
What would I need to do exactly to be able to use this control?
Perhaps you need your own control for this task.
What follows is just a sketch of what I'd do in your place.
You need your own control with paging (to show only a limited number of photos to the user) or scroll-event-driven loading (to load photos on demand).
You probably also need a thumbnail generator.
The point is that you are facing a huge pile of photos, so you cannot load them all at once.
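For the thumbnails, something like this works on .NET 2.0 with System.Drawing (the sizes and paths are illustrative):

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

static class ThumbnailHelper
{
    public static void SaveThumbnail(string photoPath, string thumbPath)
    {
        // Load the full 3 MB photo once, save a small JPEG for the list view.
        using (Image photo = Image.FromFile(photoPath))
        using (Image thumb = photo.GetThumbnailImage(160, 120, null, IntPtr.Zero))
        {
            thumb.Save(thumbPath, ImageFormat.Jpeg);
        }
    }
}
```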
"Thats it" is not that simple.
For 1000+ that is over 3 GB.
You would need thumbnails for faster preview.
If users are going to access these files directly, then they will need NTFS permissions. Maybe that is what you want.
What you are going to run into is locking problems.
If one user has a file open then you cannot rename or delete it.
I know you are not going to like this, but to do it right you need a server app that manages the folder, with users accessing it via a WCF service, so there is a single control point.
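To illustrate, the contract could look something like this (operation names are illustrative):

```csharp
// A sketch of the kind of WCF contract meant here. The service would be the
// only process touching the folder, so renames and deletes are serialized
// and lock conflicts can be handled in one place.
using System.ServiceModel;

[ServiceContract]
public interface IPhotoFolderService
{
    [OperationContract]
    string[] ListPhotos();

    [OperationContract]
    bool Rename(string oldName, string newName);

    [OperationContract]
    bool Delete(string fileName);
}
```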