A generic error occurred in GDI+ while trying to print streams - c#

So I have a function that renders a report into a Queue<Stream>. Based on a conditional, I would like to add an additional byte array to this queue. I thought I could do this by converting the byte array to a stream and then enqueueing that additional stream.
I do notice that the count of the streams goes up after enqueueing, and I pass the queue into a function that prints the streams. This is where I receive the exception "A generic error occurred in GDI+."
EDIT: After going through each page, I realized that when adding the TermsAndConditions byte array to the stream queue, it shows up as just one byte array, whereas the T&C is actually two PDFs.
So my next question is: how do I convert a PDF to an image?
Here's the function where I'm combining the queues:
internal static void DoPrintInvoice(int orderID, SalesOrderBLL.DocumentType doctype, string printer, int copies, List<string> lines)
{
using (var context = rempscoDataContext.CreateReadOnlyContext())
using (MiniProfiler.Current.Step("DoPrintInvoice()"))
{
// Generate Report
using (var report = GetSalesOrderReport(orderID, _DocumentTypeDescriptions[doctype], doctype != DocumentType.InvoiceLetterhead, lines))
{
// render queue streams for printing
var streams = PrintingBLL.RenderStreams(report, landscape: false);
//add additional byte array to stream.
var TermsAndConditions = GetTermsAndConditions().ToArray();
Stream TCStream = new MemoryStream(TermsAndConditions);
if (doctype == DocumentType.OrderAcknowledgement)
{
streams.Enqueue(TCStream);
}
// render and save pdf in background, print report
BackgroundTask.ParallelInvoke(
() => SaveSalesOrderPDF(orderID, doctype, report),
() => PrintingBLL.PrintStreams(streams, string.Format("Sales Order ({0})", report.DisplayName), printer, copies, false)
);
}
}
}
Then here's the PrintStreams function:
internal static void PrintStreams(Queue<Stream> streams, string documentName, string printer, int copies, bool landscape)
{
if (copies > 0)
{
// get printer details
using (var pd = new PrintDocument())
{
lock (_PrintLock)
{
pd.PrinterSettings.PrinterName = printer.Trim();
if (!pd.PrinterSettings.IsValid)
throw new ArgumentOutOfRangeException(string.Format("Invalid printer \"{0}\". Please try again with a different printer.", printer));
}
pd.DocumentName = documentName;
pd.PrintController = new StandardPrintController();
pd.PrinterSettings.Copies = (short)copies;
pd.PrinterSettings.Collate = true;
pd.PrinterSettings.DefaultPageSettings.PaperSize = pd.PrinterSettings.PaperSizes.Cast<PaperSize>().First(ps => ps.Kind == PaperKind.Letter);
pd.DefaultPageSettings.Landscape = landscape;
pd.DefaultPageSettings.Margins = new Margins()
{
Top = 0,
Bottom = 0,
Left = 0,
Right = 0,
};
var numPages = streams.Count;
var currentPage = 0;
pd.PrintPage += (s, ev) =>
{
BackgroundTask.SetCurrentStatus(
currentPage / numPages,
string.Format("Printing page {0} of {1}. [{2}]", currentPage, numPages, documentName));
// get next page
var ms = streams.Dequeue();
// if we have any streams left, then we have another page
ev.HasMorePages = streams.Any();
// reset stream
ms.Position = 0;
// read page image
var image = new Metafile(ms);
var r = new Rectangle()
{
X = 0,
Y = 0,
Width = 825,
Height = 1075,
};
if (landscape)
{
r.Height = 825;
r.Width = 1075;
}
// draw image directly on page
ev.Graphics.DrawImage(image, r);
// destroy stream
ms.Close();
currentPage++;
};
try
{
lock (_PrintLock)
pd.Print();
BackgroundTask.SetCurrentStatus(100, string.Format("Finished printing {0}", documentName));
}
catch (Exception e)
{
BackgroundTask.SetCurrentStatus(0, string.Format("Printing Error: {0}", e.Message));
throw new InvalidOperationException(
string.Format("The document failed to print. Arguments were: documentName = {0}; printer = {1}; copies = {2}; landscape = {3}",
documentName,
printer,
copies,
landscape),
e);
}
}
}
}
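The GDI+ error is consistent with a format mismatch: new Metafile(ms) in PrintStreams expects EMF/WMF data (which the report renderer's page streams evidently are, since the existing pages print fine), while the enqueued terms-and-conditions stream holds raw PDF bytes. As a minimal sketch of one possible workaround, the PDF could be rasterized into one image stream per page before being enqueued. This assumes the Ghostscript.NET package (GhostscriptRasterizer, also used in a related answer further down); the helper name EnqueuePdfAsImages is made up for illustration, and the PrintPage handler would then need Image.FromStream(ms) instead of new Metafile(ms) so it can load both EMF and PNG pages.
// Sketch only. Requires the Ghostscript.NET and Ghostscript.NET.Rasterizer namespaces.
private static void EnqueuePdfAsImages(Queue<Stream> streams, byte[] pdfBytes)
{
    var version = GhostscriptVersionInfo.GetLastInstalledVersion(
        GhostscriptLicense.GPL | GhostscriptLicense.AFPL, GhostscriptLicense.GPL);
    using (var pdfStream = new MemoryStream(pdfBytes))
    using (var rasterizer = new GhostscriptRasterizer())
    {
        rasterizer.Open(pdfStream, version, false);
        for (int page = 1; page <= rasterizer.PageCount; page++)
        {
            var ms = new MemoryStream();
            using (var pageImage = rasterizer.GetPage(96, 96, page))
            {
                // One PNG stream per PDF page, ready to be enqueued for printing.
                pageImage.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
            }
            ms.Position = 0;
            streams.Enqueue(ms);
        }
    }
}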

Related

Extract image from pdf in specific position

I have inserted the image into the PDF using iTextSharp as shown below, and I have tried all the possible solutions to extract it using the coordinates.
String pathin = pdf.src;
String pathout = "C:\\....";
string signedFile = System.IO.Path.GetTempFileName();
PdfReader reader = new PdfReader(pathin);
FileStream fs = new FileStream(pathout, FileMode.Create);
PdfStamper stamper = new PdfStamper(reader, fs);
PdfContentByte cb = stamper.GetOverContent(1);
iTextSharp.text.Image image1 = iTextSharp.text.Image.GetInstance(imageFileName);
image1.RotationDegrees = 270f;
image1.Alignment = Element.ALIGN_TOP;
image1.SetAbsolutePosition(0,0);
image1.ScalePercent(50f, 50f);
cb.AddImage(image1);
stamper.Close();
fs.Close();
Console.Read();
pdf.src = pathout;
Is there any way to use iTextSharp to extract the image at position (0,0)?
pseudo-code:
implement IEventListener
parse the page you are interested in; PdfCanvasProcessor takes an IEventListener implementation in its constructor, and every time it finishes rendering text, an image, or a path, it notifies the IEventListener
IEventListener has a method called eventOccurred(IEventData data, EventType type); one of the event types is responsible for images
cast IEventData to ImageRenderInfo
derive the coordinates from the ImageRenderInfo object
if the coordinates happen to contain Point(0, 0) then (temporarily) store the image in a variable of your IEventListener
code sample (java, iText7)
disclaimer: the following code does not handle rotation
class MyImageSeek implements IEventListener{
private int pageNr = 0;
private Map<Integer, Map<Rectangle, BufferedImage>> images = new HashMap<>();
public MyImageSeek(PdfDocument pdfDocument){
PdfCanvasProcessor canvasProcessor = new PdfCanvasProcessor(this);
for(int i=1;i<=pdfDocument.getNumberOfPages();i++) {
images.put(i, new HashMap<Rectangle, BufferedImage>());
pageNr = i;
canvasProcessor.processPageContent(pdfDocument.getPage(i));
}
}
@Override
public void eventOccurred(IEventData data, EventType type) {
if(type != EventType.RENDER_IMAGE)
return;
ImageRenderInfo imageRenderInfo = (ImageRenderInfo) data;
int x = (int) imageRenderInfo.getStartPoint().get(0);
int y = (int) imageRenderInfo.getStartPoint().get(1);
int w = (int) imageRenderInfo.getImageCtm().get(Matrix.I11);
int h = (int) imageRenderInfo.getImageCtm().get(Matrix.I22);
try {
images.get(pageNr).put(new Rectangle(x,y,w,h), imageRenderInfo.getImage().getBufferedImage());
} catch (IOException e) {}
}
@Override
public Set<EventType> getSupportedEvents() {
return null;
}
public Map<Rectangle, BufferedImage> getImages(int pageNr){
return images.get(pageNr);
}
}
and this is the main method to call this class
PdfDocument pdfDocument = new PdfDocument(new PdfReader(new File("C:\\Users\\me\\lookAtMe.pdf")));
MyImageSeek meeseek = new MyImageSeek(pdfDocument);
for(Map.Entry<Rectangle, BufferedImage> en : meeseek.getImages(1).entrySet())
System.out.println(en.getKey() + "\t" + en.getValue());

Printing PDF using different printer trays c#

Is there a way to print a PDF and select the paper tray to use programmatically?
I'm open to suggestions such as converting the PDF to a different format and printing from there.
I can print to the correct tray using PaperSource and PrintDocument; is it possible to convert PDFs into a format that these classes can understand?
Thanks.
Based on getting the paper tray's PaperSource from something like the example on MSDN, and if you don't mind using Ghostscript.NET, this should work for you:
public void PrintPdf(string filePath, string printQueueName, PaperSource paperTray)
{
using (ManualResetEvent done = new ManualResetEvent(false))
using (PrintDocument document = new PrintDocument())
{
document.DocumentName = "My PDF";
document.PrinterSettings.PrinterName = printQueueName;
document.DefaultPageSettings.PaperSize = new PaperSize("Letter", 850, 1100);
document.DefaultPageSettings.PaperSource = paperTray;
document.OriginAtMargins = false;
using (var rasterizer = new GhostscriptRasterizer())
{
var lastInstalledVersion =
GhostscriptVersionInfo.GetLastInstalledVersion(
GhostscriptLicense.GPL | GhostscriptLicense.AFPL,
GhostscriptLicense.GPL);
rasterizer.Open(filePath, lastInstalledVersion, false);
int xDpi = 96, yDpi = 96, pageNumber = 0;
document.PrintPage += (o, p) =>
{
pageNumber++;
p.Graphics.DrawImageUnscaledAndClipped(
rasterizer.GetPage(xDpi, yDpi, pageNumber),
new Rectangle(0, 0, 850, 1100));
p.HasMorePages = pageNumber < rasterizer.PageCount;
};
document.EndPrint += (o, p) =>
{
done.Set();
};
document.Print();
done.WaitOne();
}
}
}
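For completeness, the paperTray argument above is a System.Drawing.Printing.PaperSource. A minimal sketch of looking one up by the name the driver reports (the helper name FindTray and the matching rule are made up for illustration):
// Enumerate the trays the driver exposes and match by display name.
public static PaperSource FindTray(string printQueueName, string trayName)
{
    var settings = new PrinterSettings { PrinterName = printQueueName };
    foreach (PaperSource source in settings.PaperSources)
    {
        // SourceName is driver-defined text such as "Tray 2" or "Manual Feed".
        if (string.Equals(source.SourceName, trayName, StringComparison.OrdinalIgnoreCase))
            return source;
    }
    return null; // let the caller decide how to handle an unknown tray
}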

Passing an HttpPostedFileWrapper variable into a function breaks another function. Image validation and conversion in C#

So basically this part of the program is editing/uploading a new profile picture to a user's account. Previously, it worked fine. Then I decided to add some picture validations (the picture has to have certain dimensions, etc.). So I made a separate helper class for that, which takes in the HttpPostedFileWrapper variable initialized in the controller.
So, in this controller function, I initialize a new instance of the ValidateImage class, which holds two functions (DoValidation and Resize).
The Resize function was working fine until I added the DoValidation function, and I feel like it has something to do with the memory stream.
I now get an "Invalid Parameter" error in the ResizeImage function (see below), even though I never changed that code and it was working fine previously. Does it have something to do with the file stream not being closed properly?
Here is the code:
//Controller.cs
public virtual ActionResult EditMyProfilePicture(bool? ignore)
{
var loggedInEmployee = this.EmployeeRepos.GetEmployeeByUserName(User.Identity.Name);
int tgtWidth = 250, tgtHeight = 250;
try
{
// get a reference to the posted file
var file = Request.Files["FileContent"] as HttpPostedFileWrapper;
ValidateImage img = new ValidateImage();
if (file != null && file.ContentLength > 0)
{
// isolate the filename - IE returns full local path, other browsers: just the file name.
int index = file.FileName.LastIndexOf("\\");
// if not IE, index will be -1, but -1 + 1 = 0 so we are okay.
string fileName = file.FileName.Substring(index + 1);
// Validate the image
img.DoValidation(file, tgtWidth, tgtHeight);
if (!img.IsValidated)
{
throw new ArgumentException(img.Message);
}
else
{
byte[] resizedImg = img.Resize(file, tgtWidth, tgtHeight);
this.EmployeeRepos.SaveProfileImage(loggedInEmployee.EmployeeCode, resizedImg);
}
return RedirectToAction(MVC.Employees.EditMyProfile());
}
else
{
throw new ArgumentException("Please select a file to upload.");
}
}
catch (Exception ex)
{
ModelState.AddModelError(string.Empty, ex.Message);
}
return View(Views.EditMyProfilePicture, loggedInEmployee);
}
// ValidateImage.cs
public class ValidateImage
{
public string Message { get; private set; }
public bool IsValidated { get; private set; }
public void DoValidation(HttpPostedFileWrapper file, int tgtWidth, int tgtHeight)
{
try
{
Image img = Image.FromStream(file.InputStream);
int curHeight = img.Height, curWidth = img.Width;
// check for image too small
if (curHeight < tgtHeight || curWidth < tgtWidth)
{
Message = "image is too small. please upload a picture at least 250x250.";
IsValidated = false;
return;
}
// check for image is square
else if (curHeight != curWidth)
{
Message = "image is not a square.";
IsValidated = false;
return;
}
else
{
IsValidated = true;
}
}
catch
{
}
}
public byte[] Resize(HttpPostedFileWrapper file, int tgtWidth, int tgtHeight)
{
byte[] bytes = new byte[file.ContentLength];
file.InputStream.Read(bytes, 0, file.ContentLength);
file.InputStream.Close(); // close the file stream.
// Down-sample if needed from current byte array to max 250x250 Jpeg
byte[] resized = Helpers.ImageResizer.ResizeImage(bytes, tgtWidth, tgtHeight, ResizeOptions.MaxWidthAndHeight, ImageFormat.Jpeg);
return resized;
}
}
// Resize Image function
public static byte[] ResizeImage(byte[] bytes, int width, int height, ResizeOptions resizeOptions, ImageFormat imageFormat)
{
using (MemoryStream ms = new MemoryStream(bytes))
{
Image img = Image.FromStream(ms);
Bitmap bmp = new Bitmap(img);
bmp = ResizeImage(bmp, width, height, resizeOptions);
bmp.SetResolution(72, 72);
bmp.Save(ms, imageFormat);
return ms.ToArray();
}
}
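One likely cause (an assumption, not confirmed in the question) is that Image.FromStream in DoValidation reads file.InputStream to the end, so the later Read in Resize gets few or no bytes and GDI+ then reports "Parameter is not valid". A minimal sketch that buffers the upload once and validates/resizes from that single byte array, reusing the question's Helpers.ImageResizer (the method name ValidateAndResize is made up):
public byte[] ValidateAndResize(HttpPostedFileWrapper file, int tgtWidth, int tgtHeight)
{
    // Buffer the upload once so the HTTP input stream is never re-read after
    // something else has consumed it.
    byte[] bytes;
    using (var buffer = new MemoryStream())
    {
        file.InputStream.Position = 0; // rewind in case the stream was already read
        file.InputStream.CopyTo(buffer);
        bytes = buffer.ToArray();
    }
    // Validate against a private copy of the bytes.
    using (var probe = new MemoryStream(bytes))
    using (var img = Image.FromStream(probe))
    {
        if (img.Width < tgtWidth || img.Height < tgtHeight)
            throw new ArgumentException("Image is too small. Please upload a picture at least 250x250.");
        if (img.Width != img.Height)
            throw new ArgumentException("Image is not a square.");
    }
    // Resize from the same byte array.
    return Helpers.ImageResizer.ResizeImage(bytes, tgtWidth, tgtHeight,
        ResizeOptions.MaxWidthAndHeight, ImageFormat.Jpeg);
}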

NAudio streaming to mp3 file through yeti lame wrapper

I have the following code to record audio in and out
using System;
using System.Diagnostics;
using System.IO;
using NAudio.Wave;
using Yeti.MMedia.Mp3;
namespace SoundRecording
{
public class SoundManager
{
private WaveInEvent _waveIn;
private WaveFileWriter _waveInFile;
private WasapiLoopbackCapture _waveOut;
private WaveFileWriter _waveOutFile;
private Process _lameProcess;
public void StartRecording()
{
InitLame();
DateTime dtNow = DateTime.Now;
try
{
InitAudioOut(dtNow);
}
catch
{
}
try
{
InitAudioIn(dtNow);
}
catch
{
}
}
private void InitLame()
{
string outputFileName = @"c:\Rec\test.mp3";
_lameProcess = new Process();
_lameProcess.StartInfo.FileName = @"lame.exe";
_lameProcess.StartInfo.UseShellExecute = false;
_lameProcess.StartInfo.RedirectStandardInput = true;
_lameProcess.StartInfo.Arguments = "-r -s 44.1 -h -b 256 --bitwidth 32 - \"" + outputFileName + "\"";
_lameProcess.StartInfo.CreateNoWindow = true;
_lameProcess.Start();
}
private void InitAudioIn(DateTime dtNow)
{
string pathIn = @"C:\Rec\(" + dtNow.ToString("HH-mm-ss") + " " + dtNow.ToString("dd-MM-yyyy") + " IN).wav";
_waveIn = new WaveInEvent();
_waveIn.WaveFormat = new WaveFormat(8000, 1);
_waveIn.DataAvailable += WaveInDataAvailable;
_waveIn.RecordingStopped += WaveInRecordStopped;
_waveInFile = new WaveFileWriter(pathIn, _waveIn.WaveFormat);
_waveIn.StartRecording();
}
private void InitAudioOut(DateTime recordMarker)
{
string pathOut = @"C:\Rec\(" + recordMarker.ToString("HH-mm-ss") + " " + recordMarker.ToString("dd-MM-yyyy") + " OUT).mp3";
_waveOut = new WasapiLoopbackCapture();
//_waveOut.WaveFormat = new WaveFormat(44100, 1);
_waveOut.DataAvailable += WaveOutDataAvailable;
_waveOut.RecordingStopped += WaveOutRecordStopped;
_waveOutFile = new WaveFileWriter(pathOut, new Mp3WaveFormat(_waveOut.WaveFormat.SampleRate, _waveOut.WaveFormat.Channels, 0, 128));
_waveOut.StartRecording();
}
private void WaveInDataAvailable(object sender, WaveInEventArgs e)
{
if (_waveInFile != null)
{
_waveInFile.Write(e.Buffer, 0, e.BytesRecorded);
_waveInFile.Flush();
}
}
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
if (_waveOutFile != null)
{
using (var memStream = new MemoryStream(e.Buffer))
{
using (WaveStream wStream = new RawSourceWaveStream(memStream, _waveOut.WaveFormat))
{
var format = new WaveFormat(_waveOut.WaveFormat.SampleRate, _waveOut.WaveFormat.Channels);
var transcodedStream = new ResamplerDmoStream(wStream, format);
var read = (int)transcodedStream.Length;
var bytes = new byte[read];
transcodedStream.Read(bytes, 0, read);
var fmt = new WaveLib.WaveFormat(transcodedStream.WaveFormat.SampleRate, transcodedStream.WaveFormat.BitsPerSample, transcodedStream.WaveFormat.Channels);
var beconf = new Yeti.Lame.BE_CONFIG(fmt, 128);
// Encode WAV to MP3
byte[] mp3Data;
using (var mp3Stream = new MemoryStream())
{
using (var mp3Writer = new Mp3Writer(mp3Stream, fmt, beconf))
{
int blen = transcodedStream.WaveFormat.AverageBytesPerSecond;
mp3Writer.Write(bytes, 0, read);
mp3Data = mp3Stream.ToArray();
}
}
_waveOutFile.Write(mp3Data, 0, mp3Data.Length);
_waveOutFile.Flush();
}
}
}
}
private byte[] WavBytesToMp3Bytes(IWaveProvider waveStream, uint bitrate = 128)
{
// Setup encoder configuration
var fmt = new WaveLib.WaveFormat(waveStream.WaveFormat.SampleRate, waveStream.WaveFormat.BitsPerSample, waveStream.WaveFormat.Channels);
var beconf = new Yeti.Lame.BE_CONFIG(fmt, bitrate);
// Encode WAV to MP3
int blen = waveStream.WaveFormat.AverageBytesPerSecond;
var buffer = new byte[blen];
byte[] mp3Data = null;
using (var mp3Stream = new MemoryStream())
{
using (var mp3Writer = new Mp3Writer(mp3Stream, fmt, beconf))
{
int readCount;
while ((readCount = waveStream.Read(buffer, 0, blen)) > 0)
{
mp3Writer.Write(buffer, 0, readCount);
}
mp3Data = mp3Stream.ToArray();
}
}
return mp3Data;
}
private void WaveInRecordStopped(object sender, StoppedEventArgs e)
{
if (_waveIn != null)
{
_waveIn.Dispose();
_waveIn = null;
}
if (_waveInFile != null)
{
_waveInFile.Dispose();
_waveInFile = null;
}
_lameProcess.StandardInput.BaseStream.Close();
_lameProcess.StandardInput.BaseStream.Dispose();
_lameProcess.Close();
_lameProcess.Dispose();
}
private void WaveOutRecordStopped(object sender, StoppedEventArgs e)
{
if (_waveOutFile != null)
{
_waveOutFile.Close();
_waveOutFile = null;
}
_waveOut = null;
}
public void StopRecording()
{
try
{
_waveIn.StopRecording();
}
catch
{
}
try
{
_waveOut.StopRecording();
}
catch
{
}
}
}
}
I'm using NAudio to capture audio in/out and Yeti's LAME wrapper to convert it to an MP3 file on the fly. The problem is that the resulting audio-out file is corrupted and unreadable, probably missing MP3 headers or something else that I've missed...
The problem is that you're getting batches of data from the loopback capture interface in the default format (ie: PCM), then writing that to a wave file with a format block that claims that the data is in ALAW format. At no point do you actually do a conversion from the PCM data to ALAW data, resulting in a garbage file.
The WaveFileWriter class doesn't do any form of recoding or resampling for you. It uses the format specifier to build a format block for the WAV file, and assumes that you are providing it with data in that format.
Your two options are:
Convert the incoming data from PCM-44100-Stereo (or whatever the default is) to ALAW-8000-Mono before writing to the WaveFileWriter instance.
Initialize _waveOutFile with _waveOut.WaveFormat to match the data format, as sketched below.
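A minimal sketch of that second option, assuming the rest of InitAudioOut stays as in the question (note this produces a large uncompressed WAV in the capture format, not an MP3):
// In InitAudioOut: describe the data exactly as WasapiLoopbackCapture delivers it
// (typically IEEE float, 44.1 kHz, stereo), so the WAV header matches the bytes.
_waveOutFile = new WaveFileWriter(pathOut, _waveOut.WaveFormat);
// The DataAvailable handler then writes the captured buffer through unchanged.
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
    if (_waveOutFile != null)
    {
        _waveOutFile.Write(e.Buffer, 0, e.BytesRecorded);
        _waveOutFile.Flush();
    }
}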
Updated 26-Sep...
So after much messing around, I finally have a working solution to the original problem of correctly converting the wave format from the loopback capture into something that can be compressed.
Here's the code for the first stage of the conversion:
[StructLayout(LayoutKind.Explicit)]
internal struct UnionStruct
{
[FieldOffset(0)]
public byte[] bytes;
[FieldOffset(0)]
public float[] floats;
}
public static byte[] Float32toInt16(byte[] data, int offset, int length)
{
UnionStruct u = new UnionStruct();
int nSamples = length / 4;
if (offset == 0)
u.bytes = data;
else
{
u.bytes = new byte[nSamples * 4];
Buffer.BlockCopy(data, offset, u.bytes, 0, nSamples * 4);
}
byte[] res = new byte[nSamples * 2];
for (int i = 0, o = 0; i < nSamples; i++, o += 2)
{
short val = (short)(u.floats[i] * short.MaxValue);
res[o] = (byte)(val & 0xFF);
res[o + 1] = (byte)((val >> 8) & 0xFF);
}
u.bytes = null;
return res;
}
That will convert the 32-bit floating point samples to 16-bit signed integer samples that can be handled by most audio code. Fortunately, this includes the Yeti MP3 code.
To encode on-the-fly and ensure that the MP3 output is valid, create the Mp3Writer and its output Stream (a FileStream to write directly to disk, for instance) at the same time and just keep feeding it data (run through the converter above) as it comes in from the loopback interface. Close the Mp3Writer and the Stream in the RecordingStopped event handler.
Stream _mp3Output;
Mp3Writer _mp3Writer;
private void InitAudioOut(DateTime recordMarker)
{
string pathOut = string.Format(@"C:\Rec\({0:HH-mm-ss dd-MM-yyyy} OUT).mp3", recordMarker);
_waveOut = new WasapiLoopbackCapture();
_waveOut.DataAvailable += WaveOutDataAvailable;
_waveOut.RecordingStopped += WaveOutRecordStopped;
_mp3Output = File.Create(pathOut);
var fmt = new WaveLib.WaveFormat(_waveOut.WaveFormat.SampleRate, 16, _waveOut.WaveFormat.Channels);
var beconf = new Yeti.Lame.BE_CONFIG(fmt, 128);
_mp3Writer = new Mp3Writer(_mp3Output, fmt, beconf);
_waveOut.StartRecording();
}
private void WaveOutDataAvailable(object sender, WaveInEventArgs e)
{
if (_mp3Writer != null)
{
byte[] data = Float32toInt16(e.Buffer, 0, e.BytesRecorded);
_mp3Writer.Write(data, 0, data.Length);
}
}
private void WaveOutRecordStopped(object sender, StoppedEventArgs e)
{
if (InvokeRequired)
BeginInvoke(new MethodInvoker(WaveOutStop));
else
WaveOutStop();
}
private void WaveOutStop()
{
if (_mp3Writer != null)
{
_mp3Writer.Close();
_mp3Writer.Dispose();
_mp3Writer = null;
}
if (_mp3Output != null)
{
_mp3Output.Dispose();
_mp3Output = null;
}
_waveOut.Dispose();
_waveOut = null;
}
Incidentally, the Mp3Writer class is all you need for this. Throw out the other Lame code you've got there. It will just get in your way.
WasapiLoopbackCapture will likely be capturing audio at 32 bit floating point, 44.1kHz, stereo. WaveFormatConversionStream will not convert that into a-law 8kHz mono in one step. You need to do this conversion in multiple steps.
First get to 16 bit PCM (I tend to do this manually)
Then get to mono (mix or discard one channel - it's up to you) (Again I'd do this manually)
Then resample down to 8kHz (WaveFormatConversionStream can do this)
Then encode to a-law (use a second instance of WaveFormatConversionStream); a rough sketch of this chain follows below.
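A rough sketch of that chain, assuming the capture has already been converted to 16-bit PCM and mixed down to mono (for example with a helper like Float32toInt16 above plus channel averaging), and that the relevant ACM codecs are installed; pcm16MonoBytes and the output path are placeholders:
// 16-bit / 44.1 kHz / mono PCM  ->  8 kHz PCM  ->  A-law, written out as a WAV file.
using (var source = new RawSourceWaveStream(new MemoryStream(pcm16MonoBytes), new WaveFormat(44100, 16, 1)))
using (var resampled = new WaveFormatConversionStream(new WaveFormat(8000, 16, 1), source))
using (var alaw = new WaveFormatConversionStream(WaveFormat.CreateALawFormat(8000, 1), resampled))
{
    WaveFileWriter.CreateWaveFile(@"C:\Rec\out-alaw.wav", alaw);
}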

EmguCV - Face Recognition - 'Object reference not set' exception when using training set from Microsoft Access Database

I've been developing a face recognition application using EmguCV (C#). I got the whole thing working okay if I store the face images (training set) in a simple Windows folder. But after I tried to migrate the face images to a Microsoft Access database, an 'object reference not set to an instance of an object' exception often occurs (not always, but most of the time) when the application tries to recognize a face from the video feed.
Funny thing is, the recognition actually still works okay whenever the exception happens not to occur.
Here are the relevant snippets of my program, using the Windows folder and the database:
Reading the stored images from a Windows Folder
private void FaceRecognition_Load(object sender, EventArgs e)
{
//if capture is not created, create it now
if (capture == null)
{
try
{
capture = new Capture();
}
catch (NullReferenceException excpt)
{
MessageBox.Show(excpt.Message);
}
}
if (capture != null)
{
if (captureInProgress)
{
Application.Idle -= ProcessFrame;
}
else
{
Application.Idle += ProcessFrame;
}
captureInProgress = !captureInProgress;
}
#endregion
{
// adjust path to find your xml at loading
haar = new HaarCascade("haarcascade_frontalface_default.xml");
try
{
//Load of previus trainned faces and labels for each image
string Labelsinfo = File.ReadAllText(Application.StartupPath + "\\TrainedFaces\\TrainedLabels.txt");
string[] Labels = Labelsinfo.Split('%');
NumLabels = Convert.ToInt16(Labels[0]);
ContTrain = NumLabels;
string LoadFaces;
for (int tf = 1; tf < NumLabels + 1; tf++)
{
LoadFaces = "face" + tf + ".bmp";
trainingImages.Add(new Image<Gray, byte>(Application.StartupPath + "\\TrainedFaces\\" + LoadFaces));
labels.Add(Labels[tf]);
}
}
catch (Exception error)
{
//MessageBox.Show(e.ToString());
MessageBox.Show("Nothing in binary database, please add at least a face(Simply train the prototype with the Add Face Button).", "Triained faces load", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
}
}
Reading the stored images from a Microsoft Access Database
private void connectToDatabase()
{
DBConnection.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=FacesDatabase.mdb";
DBConnection.Open();
dataAdapter = new OleDbDataAdapter("Select * from TrainingSet1", DBConnection);
dataAdapter.Fill(localDataTable);
if (localDataTable.Rows.Count != 0)
{
numOfRows = localDataTable.Rows.Count;
}
}
private void FaceRecognition_Load(object sender, EventArgs e)
{
//if capture is not created, create it now
if (capture == null)
{
try
{
capture = new Capture();
}
catch (NullReferenceException excpt)
{
MessageBox.Show(excpt.Message);
}
}
if (capture != null)
{
if (captureInProgress)
{
Application.Idle -= ProcessFrame;
}
else
{
Application.Idle += ProcessFrame;
}
captureInProgress = !captureInProgress;
}
#endregion
{
// adjust path to find your xml at loading
haar = new HaarCascade("haarcascade_frontalface_default.xml");
connectToDatabase();
Bitmap bmpImage;
for (int i = 0; i < numOfRows; i++)
{
byte[] fetchedBytes = (byte[])localDataTable.Rows[i]["FaceImage"];
MemoryStream stream = new MemoryStream(fetchedBytes);
bmpImage = new Bitmap(stream);
trainingImages.Add(new Emgu.CV.Image<Gray, Byte>(bmpImage));
String faceName = (String)localDataTable.Rows[i]["Name"];
labels.Add(faceName);
}
}
}
The face recognition function that causes the exception (exactly the same whether I use the Windows folder or the Access database):
private void ProcessFrame(object sender, EventArgs arg)
{
Image<Bgr, Byte> ImageFrame = capture.QueryFrame();
Image<Gray, byte> grayframe = ImageFrame.Convert<Gray, byte>();
MinNeighbors = int.Parse(comboBoxMinNeighbors.Text);
WindowsSize = int.Parse(textBoxWinSiz.Text);
ScaleIncreaseRate = Double.Parse(comboBoxMinNeighbors.Text);
var faces = grayframe.DetectHaarCascade(haar, ScaleIncreaseRate, MinNeighbors,
HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
new Size(WindowsSize, WindowsSize))[0];
if (faces.Length > 0)
{
Bitmap BmpInput = grayframe.ToBitmap();
Graphics FaceCanvas;
foreach (var face in faces)
{
t = t + 1;
result = ImageFrame.Copy(face.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
ImageFrame.Draw(face.rect, new Bgr(Color.Red), 2);
ExtractedFace = new Bitmap(face.rect.Width, face.rect.Height);
FaceCanvas = Graphics.FromImage(ExtractedFace);
FaceCanvas.DrawImage(BmpInput, 0, 0, face.rect, GraphicsUnit.Pixel);
ImageFrame.Draw(face.rect, new Bgr(Color.Red), 2);
if (trainingImages.ToArray().Length != 0)
{
MCvTermCriteria termCrit = new MCvTermCriteria(ContTrain, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
trainingImages.ToArray(),
labels.ToArray(),
3000,
ref termCrit);
try
{
name = recognizer.Recognize(result).Label;
}
catch (Exception error)
{
MessageBox.Show(error.ToString());
}
ImageFrame.Draw(name, ref font, new Point(face.rect.X - 2, face.rect.Y - 2), new Bgr(Color.LightGreen));
}
}
}
CamImageBox.Image = ImageFrame;
}
Here is the screenshot of the exception message:
http://i.imgur.com/DvAhABK.jpg
Line 146 where the exception occurs is this line of the ProcessFrame function:
name = recognizer.Recognize(result).Label;
I tried searching for similar problems on the internet and found these:
'Object reference not set to instance of an object' error when trying to upload image to database
Object reference not set to an instance of an object #5
C# Error 'Object Reference Not Set To An Instance Of An Object'
C#, "Object reference not set to an instance of an object." error
Most of them suggest checking whether any of the involved variables is null. I've checked the involved variables, and indeed the exception occurs when the recognizer.Recognize(result) statement returns null.
So my question is: why does that statement often return null when I use training images from the database, while it never returns null when I use training images from a Windows folder?
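As an aside, whatever the root cause turns out to be, the call itself can be guarded so the video loop survives a non-match. A small defensive sketch using the question's variable names (the question itself observes that Recognize returns null when nothing matches):
// Read .Label only when Recognize() actually returned a match.
var match = recognizer.Recognize(result);
name = (match != null) ? match.Label : "Unknown";
ImageFrame.Draw(name, ref font, new Point(face.rect.X - 2, face.rect.Y - 2), new Bgr(Color.LightGreen));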
Check your fetchedBytes array to see if you are consistently getting just a stream of bytes representing a BMP image (starting with 0x42 0x4D), or if there may be "other stuff" in there, too.
Depending on how the BMP data was inserted into the Access database it may contain an OLE "wrapper". For example, an 8x8 24-bit BMP image of pure red saved by MSPAINT.EXE is just a small file beginning with the "BM" signature bytes (0x42 0x4D).
If I copy that file and paste it into a Bound Object Frame in an Access form then Access wraps the BMP data in some "OLE stuff" before writing it to the table. Later, if I try to retrieve the BMP image via code, using something like this...
Sub oleDumpTest()
Dim rst As ADODB.Recordset, ads As ADODB.Stream
Set rst = New ADODB.Recordset
rst.Open "SELECT * FROM TrainingSet1 WHERE ID = 1", Application.CurrentProject.Connection
Set ads = New ADODB.Stream
ads.Type = adTypeBinary
ads.Open
ads.Write rst("FaceImage").Value
rst.Close
Set rst = Nothing
ads.SaveToFile "C:\Users\Gord\Pictures\oleDump_red."
ads.Close
Set ads = Nothing
End Sub
...then the resulting file also contains the OLE "wrapper"...
...and obviously is not a valid stand-alone BMP file. If I rename that file to give it a .bmp extension and try to open it in Paint, I get an error saying it is not a valid bitmap file.
So maybe (some of) the [FaceImage] objects in your database are not raw BMP data, and perhaps the other software is rejecting them (or simply not able to understand them).
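If that is what is happening, one naive workaround (a sketch only; it assumes the OLE wrapper simply precedes an otherwise intact bitmap, and the helper name StripOleWrapper is made up) is to scan fetchedBytes for the BMP signature and drop everything before it before building the MemoryStream:
// Strip a possible Access/OLE wrapper by locating the "BM" signature (0x42 0x4D)
// and keeping everything from that point onwards.
static byte[] StripOleWrapper(byte[] fetchedBytes)
{
    for (int i = 0; i < fetchedBytes.Length - 1; i++)
    {
        if (fetchedBytes[i] == 0x42 && fetchedBytes[i + 1] == 0x4D)
        {
            var bmp = new byte[fetchedBytes.Length - i];
            Buffer.BlockCopy(fetchedBytes, i, bmp, 0, bmp.Length);
            return bmp;
        }
    }
    return fetchedBytes; // no signature found, return the bytes unchanged
}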
Edit
Another possible issue is that when you get the images from files in a folder you hand the Image object a string containing the file path...
trainingImages.Add(new Image<Gray, byte>(Application.StartupPath + "\\TrainedFaces\\" + LoadFaces));
...but when you try to retrieve the images from the database you hand the same object a Bitmap object
MemoryStream stream = new MemoryStream(fetchedBytes);
bmpImage = new Bitmap(stream);
trainingImages.Add(new Emgu.CV.Image<Gray, Byte>(bmpImage));
I have no way of knowing whether the Emgu.CV.Image object might behave differently depending on the type of object it is given, but a quick+dirty workaround might be to write bmpImage to a temporary file, hand trainingImages.Add the path to that file, and then delete the file.
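A quick sketch of that temp-file workaround, dropped into the database loop from the question (Emgu's Image<,> constructor loads the pixels from the path, so deleting the temporary file afterwards should be safe; tempPath is a made-up local):
// Persist the bitmap, load it by path exactly as the folder-based code does,
// then remove the temporary file.
string tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".bmp");
bmpImage.Save(tempPath, System.Drawing.Imaging.ImageFormat.Bmp);
trainingImages.Add(new Emgu.CV.Image<Gray, Byte>(tempPath));
File.Delete(tempPath);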
Finally made it! Just one more day of coding helped me get the problem solved:
public void ProcessRequest(HttpContext context)
{
_httpContext = context;
var imageid = context.Request.QueryString["Image"];
if (imageid == null || imageid == "")
{
imageid = "1";
}
using (WebClient wc = new WebClient())
{
// Handler retrieves the image from database and load it on the stream
using (Stream s = wc.OpenRead("http://mypageurl/Image.ashx?Image=" + imageid))
{
using (Bitmap bmp = new Bitmap(s))
{
AddFace(bmp);
}
}
}
}
public void AddFace(Bitmap image)
{
var faceImage = DetectFace(image);
if (faceImage != null)
{
var stream = new MemoryStream();
faceImage.Save(stream, ImageFormat.Bmp);
stream.Position = 0;
byte[] data = new byte[stream.Length];
stream.Read(data, 0, (int)stream.Length);
_httpContext.Response.Clear();
_httpContext.Response.ContentType = "image/jpeg";
_httpContext.Response.BinaryWrite(data);
}
}
private Bitmap DetectFace(Bitmap faceImage)
{
var image = new Image<Bgr, byte>(faceImage);
var gray = image.Convert<Gray, Byte>();
string filePath = HttpContext.Current.Server.MapPath("haarcascade_frontalface_default.xml");
var face = new HaarCascade(filePath);
MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(face, 1.1, 10, HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));
Image<Gray, byte> result = null;
foreach (MCvAvgComp f in facesDetected[0])
{
//draw the face detected in the 0th (gray) channel with blue color
image.Draw(f.rect, new Bgr(Color.Blue), 2);
result = image.Copy(f.rect).Convert<Gray, byte>();
break;
}
if (result != null)
{
result = result.Resize(200, 200, INTER.CV_INTER_CUBIC);
return result.Bitmap;
}
return null;
}
public bool IsReusable
{
get { return false; }
}
I couldn't make it work by reading a direct Stream from the database where the images are located, but your workaround of saving the images to a local folder worked for me, thanks a lot for sharing. Here's my demo page where you load files from the DB: http://www.edatasoluciones.com/FaceDetection/FaceDataBase
