How to get free space on a disk volume without a letter assigned? - c#

I'm trying to find free-space information for volumes. Those with drive letters assigned are fine (GetDiskFreeSpaceEx works). I've also connected to VDS (Virtual Disk Service) and retrieved AvailableAllocationUnits (A) and AllocationUnitSize (B), where A*B equals the free size Windows shows. But B is 4096, so the result is only precise to the allocation-unit size, not to the byte.
How is it possible to determine this without VDS?
Is there a more precise way (in bytes)?
regards,
Kate

On Windows, you could execute the following commands and parse the output:
vssadmin list volumes
This gives:
C:\Windows\system32>vssadmin list volumes
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2013 Microsoft Corp.
Volume path: \\?\Volume{66c6160d-60cc-11e3-824b-806e6f6e6963}\
Volume name: \\?\Volume{66c6160d-60cc-11e3-824b-806e6f6e6963}\
Volume path: D:\
Volume name: \\?\Volume{66c6160f-60cc-11e3-824b-806e6f6e6963}\
Volume path: C:\
Volume name: \\?\Volume{66c6160e-60cc-11e3-824b-806e6f6e6963}\
Then execute:
fsutil volume diskfree <volume path>
Which gives:
C:\Users\MC>fsutil volume diskfree \\?\Volume{66c6160e-60cc-11e3-824b-806e6f6e6963}\
Total # of free bytes : 47826694144
Total # of bytes : 255691059200
Total # of avail free bytes : 47826694144
To read the output of a shell process, you can read its standard output:
string output = proc.StandardOutput.ReadToEnd();
DISCLAIMER: Yes, I know it's not exactly the cleanest way, but it is a way, as I'm not aware of an API for accessing such low-level info.
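For completeness, a rough sketch of driving fsutil from C# and parsing its output. The volume GUID is just the example from above, the parsing assumes the three "Total # of ... : <number>" lines shown, and note that fsutil generally requires an elevated prompt:
using System;
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "fsutil",
    Arguments = @"volume diskfree \\?\Volume{66c6160e-60cc-11e3-824b-806e6f6e6963}\",
    RedirectStandardOutput = true,
    UseShellExecute = false,   // required to redirect output
    CreateNoWindow = true
};
using (Process proc = Process.Start(psi))
{
    string output = proc.StandardOutput.ReadToEnd();
    proc.WaitForExit();
    foreach (string line in output.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
    {
        // lines look like "Total # of free bytes : 47826694144"
        int colon = line.LastIndexOf(':');
        if (colon > 0 && long.TryParse(line.Substring(colon + 1).Trim(), out long bytes))
            Console.WriteLine("{0} = {1}", line.Substring(0, colon).Trim(), bytes);
    }
}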

Related

Increase fps when recording video with ffmpeg

I need to record 2 videos from 2 cameras in full HD at 30 fps.
I use ffmpeg and the AForge wrapper for C#.
init device:
_videoCaptureDevice = new VideoCaptureDevice(deviceName);
_videoCaptureDevice.VideoResolution = _videoCaptureDevice.VideoCapabilities[0];
_videoCaptureDevice.DesiredFrameRate = _fps;
_videoSourcePlayer.VideoSource = _videoCaptureDevice;
_videoCaptureDevice.NewFrame += _videoCaptureDevice_NewFrame;
_videoSourcePlayer.Start();
Saving frames:
if (_videoRecordStatus == VideoRecordStatus.Recording)
{
    _videoFileWriter.WriteVideoFrame(eventArgs.Frame);
}
And initializing the file writer:
_videoFileWriter = new VideoFileWriter();
_videoFileWriter.Open(_fileName, _videoCaptureDevice.VideoResolution.FrameSize.Width,
    _videoCaptureDevice.VideoResolution.FrameSize.Height, 30, VideoCodec.MPEG4, 10 * 1000 * 1000);
Now _videoCaptureDevice.VideoResolution.FrameSize equals 1280x720 (and 640x480 for the second device), but I already have problems with recording. The maximum fps is 24 for 480p and 13-14 for 720p when I try to record video from both cameras at the same time.
How can I increase it? Or is it not possible? Maybe a more powerful computer would solve this problem (I have a Pentium(R) Dual-Core CPU at 2.50 GHz, an ordinary video card (GeForce 8500 GT) for driving two displays, an ordinary HDD, and USB 2.0)?
I would be glad of any help (maybe another library, but not another language (C#)).
PS: I already used Emgu.CV and ran into similar problems.
The frame rate is limited by the hardware.
Use AMCap or GraphEdit to check what your camera really supports. It will depend on the chosen resolution and output format (higher resolution -> lower frame rate).
Be aware that AForge always uses the highest frame rate for all resolutions, which can lead to oversampling (e.g. AForge produces frames at 60 Hz, but the camera only supports 15 Hz at the given resolution, so the images will mostly be duplicates).
Also use Process Explorer and a profiler to see how busy your CPU really is and what it is doing.
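As a concrete starting point, a hedged sketch of choosing a capture mode by inspecting VideoCapabilities rather than always taking index 0 (deviceName is from the question; AverageFrameRate is the property name in recent AForge builds, older builds expose FrameRate instead):
using AForge.Video.DirectShow;

// Pick the largest frame size whose reported rate meets the 30 fps target.
var device = new VideoCaptureDevice(deviceName);
VideoCapabilities best = null;
foreach (VideoCapabilities cap in device.VideoCapabilities)
{
    if (cap.AverageFrameRate >= 30 &&
        (best == null || cap.FrameSize.Width > best.FrameSize.Width))
        best = cap;
}
if (best != null)
    device.VideoResolution = best; // set before Start(); replaces VideoCapabilities[0]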

Replace the image in a JPG image file but keep the metadata?

Is it possible to replace the image portion of a JPG but keep the embedded metadata? Alternatively, is there a way to reliably copy all metadata from one JPG image to another?
Background: I have an ASP.NET web application that stores JPG images. Users use the tinyMCE image editor to edit the image in the browser (resize, crop, etc), then the browser sends this modified image up to the server. I need to replace the original image file with the edited one while preserving the original metadata.
My first approach was to copy the System.Drawing.Imaging.PropertyItems and also copy some items using InPlaceBitmapMetadataWriter, but many properties are getting missed with this approach. I need something more reliable, and it occurred to me that if I could just replace the image portion, that might be the ticket. But googling hasn't revealed anything.
This is an open source product (Gallery Server) so any libraries have to be compatible with this license.
You can do this with ImageMagick - here is one way.
Load the original image with the metadata you want to preserve, then load the "fake" image, composite it over the top of the original, and save; ImageMagick will preserve the original image's metadata. Let's do an example.
Here is an original image; let's check the metadata with jhead:
jhead original.jpg
File name : original.jpg
File size : 2080473 bytes
File date : 2015:12:18 09:05:53
Camera make : Apple
Camera model : iPhone 4S
Date/Time : 2014:12:25 15:02:27
Resolution : 3264 x 2448
Flash used : No
Focal length : 4.3mm (35mm equivalent: 35mm)
Exposure time: 0.0083 s (1/120)
Aperture : f/2.4
ISO equiv. : 50
Whitebalance : Auto
Metering Mode: pattern
Exposure : program (auto)
GPS Latitude : N 54d 26m 14.84s
GPS Longitude: W 3d 5m 49.91s
GPS Altitude : 226.00m
JPEG Quality : 95
Now, we run the process I suggested, compositing a grey (fake) image of the same size over the top:
convert original.jpg fake.jpg -composite new.jpg
and we get new.jpg, which is now just a grey image.
And if we check the metadata of the new image:
jhead new.jpg
File name : new.jpg
File size : 43408 bytes
File date : 2015:12:18 09:08:30
Camera make : Apple
Camera model : iPhone 4S
Date/Time : 2014:12:25 15:02:27
Resolution : 3264 x 2448
Color/bw : Black and white
Flash used : No
Focal length : 4.3mm (35mm equivalent: 35mm)
Exposure time: 0.0083 s (1/120)
Aperture : f/2.4
ISO equiv. : 50
Whitebalance : Auto
Metering Mode: pattern
Exposure : program (auto)
GPS Latitude : N 54d 26m 14.84s
GPS Longitude: W 3d 5m 49.91s
GPS Altitude : 226.00m
JPEG Quality : 96
... and suddenly the grey rectangle I just made two minutes ago on my Mac was taken on Christmas Day last year at exactly the same spot as the original in the Lake District :-)
If your fake image and the original differ in size, you can force them to a common size before compositing (so that the fake completely covers the original) like this:
convert original.jpg fake.jpg -resize 3264x2448! -composite new.jpg
For copying metadata between images, you can use exiftool, which is designed for metadata manipulation.
exiftool -TagsFromFile sourceImg.jpg target.jpg
-TagsFromFile means you are copying metadata from the given source file.
Add -overwrite_original before -TagsFromFile if you don't want exiftool to generate backup files.
If you want to copy selected tags or perform recursion, see this answer.
I think that ImageMagick can help you. There is an ImageMagick example showing how to extract all the metadata from a JPG file. Once you have the metadata of the original file, you can create a new file with that metadata; I also found an example of how to add metadata with ImageMagick.
I hope this helps.
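Since the question is C#, here is a hedged sketch of the same metadata copy using Magick.NET, the .NET binding for ImageMagick (file names are placeholders; check Magick.NET's license against Gallery Server's requirements):
using ImageMagick;

// Copy the EXIF profile from the original into the edited image.
using (var original = new MagickImage("original.jpg"))
using (var edited = new MagickImage("edited.jpg"))
{
    var exif = original.GetExifProfile(); // null if the source carries no EXIF data
    if (exif != null)
        edited.SetProfile(exif);
    edited.Write("edited-with-metadata.jpg");
}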

How many USB cameras can be accessed by one PC

I am just wondering how many USB cameras can be accessed by one desktop PC? Is there any limit? I am planning to create my own Windows application (using .NET) to capture around 10 USB cameras that are connected to my desktop PC. Is this possible?
The problem is not how many you can discover: on a single USB bus, ~127 devices are possible.
But a USB bus can only transfer a limited number of bytes per second, so if you want to use more than one camera, you have to calculate how much bandwidth you have for the video streams.
Example:
A USB 2.0 bus can realistically deliver ~35 MB/s. At 640*480 with 2 bytes per pixel, that is 614,400 bytes per frame; at 30 FPS this is ~17 MB/s, so you can use 2 cameras simultaneously with this setup.
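As a quick sanity check of that arithmetic (the 2 bytes per pixel assumes an uncompressed format such as YUY2):
using System;

int width = 640, height = 480, bytesPerPixel = 2, fps = 30;
long bytesPerFrame = (long)width * height * bytesPerPixel;    // 614,400 bytes
double mbPerSecond = (double)bytesPerFrame * fps / (1024 * 1024);
Console.WriteLine($"{mbPerSecond:F1} MB/s per camera");       // ~17.6 MB/s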
Actually, see the code below for connecting 5 cameras to one computer (Core i3 processor, 8 GB RAM). You need to connect all the cameras directly to USB ports on your computer.
GitHub link
A bit late, sorry :)
What I found out is that a single USB card is limited by the USB bandwidth, but if you add USB cards on the PCI bus you can get more cameras. However, most vendors do not bother to change the USB card address the computer sees, so you need to buy PCI USB cards from different vendors and try your luck.
I had the same problem with FireWire.
Here is my code in Python (thanks to other programmers on Stack Overflow):
# show multiple usb cameras
import os
import cv2
import threading
import time
import datetime

# font for writing text on the image
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (255, 180, 180)
lineType = 2

SaveImage = True                     # if true, save images
duration = [100, 100, 100, 10, 10]   # time between image saves in sec, per camera
IMAGESAVEPATH = "C:/tmp/pix"         # path for the cameras to store images to
ShowText = True                      # show text on image - text will be saved with the image

# camera thread. here we make a thread and its functions
class camThread(threading.Thread):
    def __init__(self, previewName, camID):
        threading.Thread.__init__(self)
        self.previewName = previewName
        self.camID = camID

    def run(self):
        print("Starting " + self.previewName)
        camPreview(self.previewName, self.camID)

# camera main loop - init the specific camera and start it, then show the
# image in a window and store the image to the right directory
def camPreview(previewName, camID):
    cv2.namedWindow(previewName)
    cam = cv2.VideoCapture(camID)  # start the camera (cameras are numbered by the order they are connected)
    if cam.isOpened():  # try to get the first frame
        cam.set(3, 4000)   # width: asking for a huge value selects the largest frame size
        cam.set(4, 4000)   # height
        cam.set(5, 1)      # fps
        time.sleep(2)
        cam.set(15, -1.0)  # exposure
        rval, frame = cam.read()  # read the image
    else:
        rval = False
    TStart = time.time()  # time for next image
    mpath = os.path.join(IMAGESAVEPATH, str(camID))  # make sure the directory we save in exists, otherwise make it
    print("try to make dir ", mpath, " T ", time.time())
    if not os.path.exists(mpath):
        os.makedirs(mpath)
    cv2.namedWindow(previewName, cv2.WINDOW_NORMAL)
    while rval:  # if we get an image
        height, width, channels = frame.shape
        if ShowText:  # write text on the image
            caption = str(camID) + " - " + str(height) + " " + str(width) + " "
            cv2.putText(frame, str(caption), (20, 20), font, fontScale, fontColor, lineType)
        cv2.imshow(previewName, frame)  # show image in its window
        # cv2.resizeWindow(previewName, 1280, 960)  # resize all windows - removed
        rval, frame = cam.read()  # read next image
        key = cv2.waitKey(20)
        if key == 27:  # exit on ESC
            print("key pressed ", camID)
            break
        TDiff = int(time.time() - TStart)  # time since the last saved image
        if SaveImage and TDiff > duration[camID]:  # save if enough time has passed
            file_name = os.path.join(mpath, "T{:%Y.%m.%d %H-%M-%S}.jpg".format(datetime.datetime.now()))  # build file name
            cv2.imwrite(file_name, frame)
            print("\rsaved to : ", file_name)
            TStart = time.time()  # reset timer for the next image
    cv2.destroyWindow(previewName)

# Create 5 threads, one per camera, as follows
thread1 = camThread("Camera 1", 0)
thread2 = camThread("Camera 2", 1)
thread3 = camThread("Camera 3", 2)
thread4 = camThread("Camera 4", 3)
thread5 = camThread("Camera 5", 4)
thread1.start()
thread2.start()
thread3.start()
thread4.start()
thread5.start()
[Edited]
Actually, see this article which explains:
Get List of connected USB Devices
I'm not sure there is a maximum. I will check and post back if I find out.
[Further Edit]
Can't find a documented maximum. Theoretically the ManagementObjectCollection should be able to hold millions of objects in it. If you ran into problems (which I doubt with 10 devices), you could just preallocate the collection size upon instantiation.
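For reference, a minimal sketch of the kind of WMI query that article describes (this assumes the Win32_USBHub class and needs a reference to System.Management):
using System;
using System.Management;

// Enumerate connected USB hubs/devices via WMI.
using (var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_USBHub"))
using (ManagementObjectCollection devices = searcher.Get())
{
    foreach (ManagementObject device in devices)
        Console.WriteLine("{0} - {1}", device["DeviceID"], device["Description"]);
}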
I've just run a test and I can pick up over 10 USB devices through a hub. You should be fine.
The maximum number of USB devices connected to one host is 127, so you can connect 100+ devices and they will work fine (100+ rather than 127, because each hub is also an active device with its own address).
Possibly you are trying to access the first (already active) camera and the program fails because the camera is already locked?

Parsing large XML files to add new lines between elements is really slow

I have a scenario where I need to pull data out of a DB and write it out as XML. The problem is that the users want every element (DB column) to be separated by a new line. The DB table I am extracting has about 20,000 rows and a lot of ntext columns (the table is about 3 GB in size).
I am breaking the output into files of 250 rows each, so each file comes out to around 14 MB. The problem is that the parsing is really slow. In order to add a new line between each element/column, I am inserting a unique string between each column coming out of the DB so that I can use Regex.Split and append a new line to each item in the resulting array.
I am sure the slowness is user error / ignorance on my part, as I live mostly in DBs, but I am not really sure what to do to speed up the parsing. Extracting the data as XML from the DB is fast, and it writes quickly. But introducing the parsing to add a new line between each element has pushed each file to about 3 minutes to write.
Any suggestions on what I should be using in C# to parse and add the newline would be greatly appreciated.
As always I appreciate the input / comments I get on Stack.
Code I am using to parse the xml data:
//parsing the xml anywhere I see the string AddNewLine
public static void WriteFile(string xml, int fileNum)
{
    string[] xmlArray = Regex.Split(xml, "AddNewLine");
    string newXml = "";

    //Getting the file path to write the file out to
    Connection filePath = new Connection();
    string fileName = filePath.FilePath;

    //for each item in the array, append a carriage return and new line
    foreach (string xmlRow in xmlArray)
    {
        newXml = newXml + xmlRow + "\n\r\n";
    }

    //use StreamWriter to write the file
    using (StreamWriter sw = new StreamWriter(fileName + fileNum + ".xml"))
    {
        sw.Write(newXml);
    }

    //XmlDocument doc = new XmlDocument();
    //doc.LoadXml(newXml);
    //doc.Save(@"C:\TestFileWrite\PatentSchemaNew_" + fileNum + ".xml");
}
Example XML output where I would want a new line between each element:
<products>
<product>
<ProductID>1</ProductID>
<!--New Line-->
<Product>TestProduct1</Product>
<!--New Line-->
<ProductDescription>With the introduction of the LE820 Series, Sharp once again establishes its leadership in LCD and LED technology. In a monumental engineering breakthrough, Sharp’s proprietary QuadPixel Technology, a 4-color filter that adds yellow to the traditional RGB, enables more than a trillion colors to be displayed for the first time. A stunning new contemporary edge-light design with full-front glass proudly announces a new AQUOS direction for 2010. The proprietary AQUOS LED system comprised of the X-Gen LCD panel and UltraBrilliant LEDs enables an incredible dynamic contrast ratio of 5,000,000:1 and picture quality that is second to none. The LE820 series is very fully featured, including the addition of Netflix™ streaming video capability through the AQUOS Net™ service, along with the industry’s leading online support system, AQUOS Advantage Live. A built in media player allows for playback of music and photos via USB port.
QuadPixel Technology 4-Color Filter adds yellow to the traditional RGB sub-pixel components, enabling the display of more than a trillion colors.
Full HD 1080p (1920 x 1080) Resolution for the sharpest picture possible.
UltraBrilliant LED System includes a “double-dome” light amplifier lens and multi-fluorescents, enabling high brightness and color purity.
Full HD 1080p X-Gen LCD Panel with 10-bit processing is designed with advanced pixel control to minimize light leakage and wider aperture to let more light through.
120Hz Fine Motion Advanced for fast-motion picture quality.
Wide Viewing Angles (176°H x 176°W) Sharp's AQUOS® LCD TVs’ viewing angles are so wide, you can view the TV clearly from practically anywhere in the room.
High Brightness (450 cd/m2) AQUOS LCD TVs are very bright. You can put them virtually anywhere – even near windows, doors or other light sources – and the picture is still vivid.
AQUOS Net delivers streaming video with Netflix™, customized Internet content and live customer support via Ethernet, viewable in widget, full-screen or split-screen mode.
USB Media Player adds the convenience of viewing high-resolution photos and music on the TV.</ProductDescription>
<!--New Line-->
<ProductAccessories> What You'll Need
Add
Monster Cable MC BNDLF OL150F Bundle HDTV Performance Kit with Flat Panel Wall Bracket
Monster Cable HT700 8 Outlet Surge Protector
Monster's SurgeGuard™ protects components from harmful surges and...
$208.95
Get More Performance
Add
AudioQuest AQ Kit4 1-4ft. and 1-8ft. Black HDTV Performance Pack with HDMI Cables, Screen Cleaner & Mitt
Uncompressed digital signal for the highest quality picture and sound. One cable for video, audio and control. Two-way communication for expanded system control. Automatic display and source matching for resolution, format and aspect ratio. Computer and gaming compatibility. $79.75
Recommended Accessories
General Accessory
Add
Monster Cable ScreenClean 6oz. Ultimate Performance TV Screen Cleaner
Safe for use on your iPad, iPhone, iPod Touch, laptops, monitors, and TV screens Includes a high-tech reusable MicroFiber cloth that cleans screens without scratching Powerful cleaning solution removes dust, dirt, and oily fingerprints for ultimate clarity Advanced formula cleans without dripping, streaking, or staining like ordinary cleaners $13.94
Add
AudioQuest CleanScreen TV Screen Cleaning Kit
$19.75
Protection Plans
Add
TechShield TTL200S5 5-Year Service Warranty for LCD TVs $1,000-$2,000 (In-Home Service)
Parts and labor coverage with no deductibles No-lemon guarantee 50% value guarantee if you never use the warranty service $314.95
Add
TechShield TTL200S3 3-Year Service Warranty for LCD TVs $1,000-$2,000 (In-Home Service)
$157.95
Add
TechShield TTL200S4 4-Year Service Warranty for LCD TVs $1,000-$2,000 (In-Home Service)
$262.95
Add
TechShield TTL200S2 2-Year Service Warranty for LCD TVs $1,000-$2,000 (In-Home Service)
$104.95
Flat Panel Wall Mount - Fixed
Add
OmniMount OL150F Flat Panel Wall Bracket
Eco-friendly design and packaging Low mounting profile Includes universal rails and spacers for greater panel compatibility Small footprint provides ample room for power and A/V cutouts behind panel Lift n’ Lock™ allows you to easily attach your flat panel to the mount Sliding lateral on-wall adjustment Locking system secures panel to mount Installation template for simple and accurate mounting Includes end caps for a clean side view Includes complete hardware kit $99.95
Add
OmniMount NC200F Black Fixed Wall Mount for 37-63 inch Flat Panels
$129.95
Flat Panel Wall Mount - Tilt
Add
OmniMount NC200T Black Tilt Mount for 37-63 inch Flat Panels
Universal rails for greater panel compatibility Sliding lateral on-wall adjustment Locking bar works with padlock or screw End caps cover locking hardware and present a clean side view Installation template for simple and accurate mounting $179.95
Flat Panel Wall Mount - Cantilever/Articulating
Add
OmniMount UCL-X Platinum Wishbone Cantilever Mount Heavy Duty Dual Arm Double Stud
Tilt, pan and swivel for maximum viewing flexibility Weight capacity: 200 lbs Double-arm i-beam design for added strength Integrated cable management hides wires Lift and lock mounting system $279.88
Add
OmniMount NC125C Black Cantilever Mount for 37-52 inch Flat Panels
$299.95
Line Conditioner/Surge Protector
Add
Panamax PM8-GAV Surge Protector with Current Sense Control
8 Outlets (4 switched, 4 always on) Exclusive Protect or Disconnect circuitry Telephone line protection Cable and Satellite protection $59.89
Add
Monster Cable DL MDP 900 Monster Digital PowerCenter MDP 900 w/ Green Power and USB Charging
$74.77
HDMI Cable
Add
AudioQuest HDMI-X 2m (6.56 ft) HDMI Digital Audio Video Cable with Braided Jacket
Large 1.25% silver conductors Critical Twist Geometry Solid High-Density Polyethylene is used to minimize loss caused by insulation Uncompressed digital signal for the highest quality picture and sound $40.00
Add
Icarus ECB-HDM2 2m (6.56 ft) HDMI Digital Audio Video Cable
$16.95
Add
Monster Cable MC HDMIB 2m (6.56 ft.) HDMI Cable
$39.00
Component Video Cable
Add
Monster Cable MC 400CV-2m (6.56 ft.) Advanced Performance Component Video Cable
Get All the High Resolution Picture You Paid For
Your new DVD player, cable/satellite receiver, and TV might be more advanced... $49.00
Add
Monster Cable MC 400CV-1m (3.28 ft.) Advanced Performance Component Video Cable
$39.00
Add
AudioQuest YIQ-A 2m (6.6 ft) Component Video Cable
$44.75
General Accessory
Add
Monster Cable ScreenClean 6oz. Ultimate Performance TV Screen Cleaner
Safe for use on your iPad, iPhone, iPod Touch, laptops, monitors, and TV screens Includes a high-tech reusable MicroFiber cloth that cleans screens without scratching Powerful cleaning solution removes dust, dirt, and oily fingerprints for ultimate clarity Advanced formula cleans without dripping, streaking, or staining like ordinary cleaners $13.94
Add
AudioQuest CleanScreen TV Screen Cleaning Kit
$19.75</ProductAccessories>
<ProductFeatures>Detailed Specifications:
Basic Specifications
10-bit LCD Panel Yes
120HzFrameRate Yes
Aspect Ratio 16:09
Audio System 10W + 10W +15W (Subwoofer)
Backlight System Edge LED
Panel Type X-Gen LCD Panel
Pixel Resolution 1920 x 1080 (x4 sub-pixels) 8 million dots
Response Time 4ms
Tuning System ATSC / QAM / NTSC
Viewing Angles 176° H / 176° V Features
AQUOS Net Yes
AQUOS AdvantageSM Support Yes
AQUOS® Series Yes
Digital Still Picture Display Yes
Quattron quad pixel technology Yes
Included Accessories
Remote Control Yes
Table Stand Yes Power
Power Consumption AC (watts) 160W
Power Source 120 V, 60 Hz
Terminals
Audio Inputs (L/R) RCA X 2
Composite Video 1
Ethernet Input 1
HD Component 1
HDMI® 4
PC 1 (15-pin D-sub)
RS-232C 1
Weight & Dimensions Dimensions
Dimensions (wxhxd) (inches) 49-39/64" x 31-59/64" x 1-37/64
Dimensions with Stand(wxhxd) (inches) 49-39/64" x 33-57/64" x 13-25/64" Weight
Product Weight (lbs.) 66.1
Weight with Stand & Speakers (lbs.) 79.4</ProductFeatures>
<!--New Line-->
<CreatedDate>2011-03-13T12:59:54.627</CreatedDate>
<!--New Line-->
<LastModifiedDate>2011-03-13T12:59:54.627</LastModifiedDate>
<!--New Line-->
</product>
</products>
Thanks,
S
If I understand the question correctly and you already have the AddNewLine separator in your 14 MB input XML files, you may not need to load the whole file and split it into parts at all. Just read the input file line by line, replace the AddNewLine text with a new line wherever the separator exists, and write the modified line to a new output file.
The following code will replace your AddNewLine text with \n\r\n several orders of magnitude faster than your function - less than 1 sec.
using (var streamOut = new StreamWriter(outputFileName))
{
    using (var streamIn = new StreamReader(inputFileName))
    {
        while (!streamIn.EndOfStream)
        {
            string line = streamIn.ReadLine();
            line = line.Replace("AddNewLine", "\n\r\n");
            streamOut.WriteLine(line);
        }
    }
}
I think that you should investigate vtd-xml, for at least three reasons:
1) Parsing performance and memory usage.
2) Incremental update: DOM's problem is that it constructs a tree by taking apart the input document, then writes the whole thing back out by concatenation. VTD-XML doesn't take apart the input document; the modification directly inserts the whitespace characters (in your situation) into the document's byte representation. SAX and Pull have a similar issue.
3) Support for XPath and random access.
Based on the info given above, I fully expect the performance to be below 1 sec for each file. What does your file look like? I would be glad to provide some sample code.
OK, here is the code that does the whitespace insertion:
using System;
using System.Text;
using System.Net;
using com.ximpleware;

public static void insertWS()
{
    VTDGen vg = new VTDGen();
    if (vg.parseFile("input.xml", false))
    {
        VTDNav vn = vg.getNav();
        AutoPilot ap = new AutoPilot(vn);
        XMLModifier xm = new XMLModifier(vn);
        ap.selectXPath("/products/product/*");
        while (ap.evalXPath() != -1)
        {
            xm.insertAfterElement("\n");
        }
        xm.output("output.xml");
    }
}
If I were you, I would abandon the string-replace method and approach this from a different angle: add the new lines as part of the XML when creating it, not after the fact. Something along the lines of:
void WriteXml(string xmlFileName, DataRowCollection rows)
{
    var settings = new XmlWriterSettings { Indent = true };
    using (StreamWriter stream = new StreamWriter(xmlFileName))
    using (XmlWriter writer = XmlWriter.Create(stream, settings))
    {
        writer.WriteStartElement("products");
        foreach (DataRow row in rows)
        {
            writer.WriteStartElement("product");
            writer.WriteElementString("ProductID", row["ProductID"].ToString());
            writer.Flush();
            stream.WriteLine(); //insert new line
            writer.WriteElementString("Product", row["Product"].ToString());
            writer.Flush();
            stream.WriteLine(); //insert new line
            //repeat for the rest of the columns/elements
            //...
            writer.WriteEndElement(); //end product
        }
        writer.WriteEndElement(); //end products
    }
}

C# High speed MD5/SHA hash over network

In a C# project that I am currently working on, we're attempting to calculate the MD5 of a large quantity of files over a network (the current pot is 2.7 million; a client's pot may be in excess of 10 million). With the number of files that we are processing, speed is an issue.
The reason we do this is to verify the file was copied to a different location without modification.
We currently use the following code to calculate the MD5 of a file
MD5 md5 = new MD5CryptoServiceProvider();
StringBuilder sb = new StringBuilder();
byte[] hashMD5 = null;
try
{
    // Open stream to file to get MD5 hash for, create hash
    using (FileStream fsMD5 = new FileStream(sFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        hashMD5 = md5.ComputeHash(fsMD5);
}
catch (Exception ex)
{
    clsLogging.logError(clsLogging.ErrorLevel.ERROR, ex);
}
string md5sum = "";
if (hashMD5 != null)
{
    // Change hash into readable text
    foreach (byte hex in hashMD5)
        sb.Append(hex.ToString("x2"));
    md5sum = sb.ToString();
}
However, the speed of this isn't what my manager has been hoping for. We've gone through a number of changes to the way and the number of files that we calculate the MD5 for (i.e. we didn't do it for files that we don't copy... until today, when my manager changed his mind, so ALL files must have an MD5 calculated for them, in case at some future time a client wishes to tamper with our program so that all files are copied, I guess).
I realize that the speed of the network is probably a major contributing factor (100 Mbit/s). Is there an efficient way to calculate the MD5 of the contents of a file over a network?
Thanks in advance.
Trevor Watson
Edit: put all code in block instead of just a part of it.
The bottleneck is that the whole file must be streamed/copied over the network, and your code looks fine...
The different hash functions (MD5/SHA-256/SHA-512) have almost the same computation time.
Two possible solutions for this problem:
1) Run a hasher on the remote system and store the hashes in separate files, if that is possible in your environment.
2) Create a part-wise hash of the file, so that you only copy parts of the file.
I mean something like this:
part1Hash = md5(file.getXXXBytesFromFileAtPosition1)
part2Hash = md5(file.getXXXBytesFromFileAtPosition2)
part3Hash = md5(file.getXXXBytesFromFileAtPosition3)
finalHash = part1Hash ^ part2Hash ^ part3Hash;
You have to test which parts of the file are optimal to read, so that the hashes stay unique.
Hope that helps...
edit: changed to bitwise XOR
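A minimal C# sketch of that idea, under stated assumptions: the block size, the choice of start/middle/end offsets, and the XOR combination are all illustrative, not a standard scheme.
using System;
using System.IO;
using System.Security.Cryptography;

// Hash a fixed-size block at a few offsets and XOR-combine the digests,
// so only a fraction of each file crosses the network.
static byte[] PartWiseHash(string path, int blockSize = 64 * 1024)
{
    using (var md5 = MD5.Create())
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        // sample the start, middle and end of the file (illustrative choice)
        long[] offsets =
        {
            0,
            Math.Max(0, fs.Length / 2 - blockSize / 2),
            Math.Max(0, fs.Length - blockSize)
        };
        var combined = new byte[16]; // MD5 digest length
        var buffer = new byte[blockSize];
        foreach (long offset in offsets)
        {
            fs.Seek(offset, SeekOrigin.Begin);
            int read = fs.Read(buffer, 0, buffer.Length);
            byte[] part = md5.ComputeHash(buffer, 0, read);
            for (int i = 0; i < combined.Length; i++)
                combined[i] ^= part[i]; // bitwise XOR, as the edit above suggests
        }
        return combined;
    }
}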
One possible approach would be to make use of the Task Parallel Library in .NET 4.0. 100 Mbps will still be a bottleneck, but you should see a modest improvement.
I wrote a small application last year that walks the top levels of a folder tree checking folder and file security settings. Running over a 10Mbps WAN it took about 7 minutes to complete one of our large file shares. When I parallelised the operation the execution time came down to a bit over 1 minute.
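If it helps, a minimal sketch of that approach with Parallel.ForEach (filePaths and ComputeMd5 are stand-ins for the question's own file list and hash routine):
using System.Collections.Generic;
using System.Threading.Tasks;

// Hash many files concurrently; capping parallelism keeps too many
// simultaneous reads from thrashing the 100 Mbit/s link.
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 }; // tune to taste
Parallel.ForEach(filePaths, options, path =>
{
    string md5sum = ComputeMd5(path); // e.g. the routine from the question
    // record or compare md5sum here
});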
Why don't you try installing a 'client' on each remote machine which listens on a port and, when signaled, calculates the MD5 hash for the requested files?
The main server then only needs to ask each client to calculate the MD5. Using this distributed approach you gain the combined speed of all the clients and reduce network congestion.
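A rough sketch of what such a client might look like (the line-based protocol, port number, and one-request-at-a-time handling are all invented for illustration):
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Security.Cryptography;

// Hypothetical hashing agent: the server sends a local file path as one line,
// and the agent replies with that file's MD5 as a hex string.
var listener = new TcpListener(IPAddress.Any, 9000); // arbitrary port
listener.Start();
while (true)
{
    using (TcpClient client = listener.AcceptTcpClient())
    using (var reader = new StreamReader(client.GetStream()))
    using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
    using (var md5 = MD5.Create())
    {
        string path = reader.ReadLine();
        using (FileStream fs = File.OpenRead(path))
        {
            byte[] hash = md5.ComputeHash(fs); // hashing happens on the machine that holds the file
            writer.WriteLine(BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant());
        }
    }
}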
