Reference to type 'ISerializable' claims it is defined in 'mscorlib' - C#

I am using EmguCV in Unity 5.6.2f1 Personal to detect faces in a camera stream.
It works just fine when I run it in the built-in Editor/Game view, and there are no errors in the Console.
However, when I try to build it for the Windows Store, specifically HoloLens, I get 4 errors in the Console:
Assets\OpenCVTest.cs(57,11): error CS7069: Reference to type 'ISerializable' claims it is defined in 'mscorlib', but it could not be found
Assets\OpenCVTest.cs(57,11): error CS7069: Reference to type 'ICloneable' claims it is defined in 'mscorlib', but it could not be found
Assets\OpenCVTest.cs(62,26): error CS7069: Reference to type 'ISerializable' claims it is defined in 'mscorlib', but it could not be found
Assets\OpenCVTest.cs(62,26): error CS7069: Reference to type 'ICloneable' claims it is defined in 'mscorlib', but it could not be found
It seems weird since it works just fine in the built-in preview, but not when I try to export it.
The lines in question are
using (Image<Gray, byte> nextFrame = cap.QueryFrame ().ToImage<Gray, byte>()) {
Rectangle[] faces = cascade.DetectMultiScale(nextFrame, 1.4, 1);
and according to the line numbers it refers to Image in the one line and cascade (which is a CascadeClassifier) in the other.
A few Google searches for different keywords, including the error numbers, didn't yield any helpful results. Similar questions on here weren't able to help me either, since the circumstances are completely different.
Here is my complete code, without all the unnecessary stuff:
using UnityEngine;
using System.Collections;
using Emgu.CV;
using Emgu.CV.Util;
using Emgu.CV.UI;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.Util;
using System.Runtime.InteropServices;
using System;
using System.Drawing;
using System.Windows.Forms;
public class OpenCVTest : MonoBehaviour {

    private VideoCapture cap;
    private CascadeClassifier cascade;
    private int counter;
    private int intervall;
    private GameObject border;

    // Use this for initialization
    void Start () {
        counter = 0;
        intervall = 6;
        cap = new VideoCapture (0);
        cascade = new CascadeClassifier ("haarcascade_frontalface_alt.xml");
        border = Resources.Load ("Border") as GameObject;
    }

    // Update is called once per frame
    void Update () {
        counter++;
        if (counter >= intervall) {
            counter = 0;
            using (Image<Gray, byte> nextFrame = cap.QueryFrame ().ToImage<Gray, byte>()) { //ERROR 1
                if (nextFrame != null) {
                    Rectangle[] faces = cascade.DetectMultiScale(nextFrame, 1.4, 1); //ERROR 2
                    // remove previous borders
                    var previous = GameObject.FindGameObjectsWithTag("Face");
                    foreach (var box in previous) {
                        Destroy (box.gameObject);
                    }
                    // instantiate new ones
                    foreach (var face in faces) {
                        GameObject newBorder = Instantiate (border);
                        newBorder.transform.position = new Vector3 (face.X / 100f, face.Y / -100f, 10);
                    }
                }
            }
        }
    }
}
All my EmguCV DLLs are in Assets\Plugins.

Related

Face tracker with Unity

I would like to program a face tracker and recognizer with OpenCV plus Unity. This is the code I am using for face tracking:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using OpenCvSharp;
using OpenCvSharp.Tracking;
using OpenCvSharp.Aruco;
using OpenCvSharp.Detail;
using OpenCvSharp.Face;
using OpenCvSharp.Flann;
using OpenCvSharp.ML;
using OpenCvSharp.Util;
using OpenCvSharp.XFeatures2D;
public class FaceDector : MonoBehaviour
{
    WebCamTexture _webCamTexture;
    CascadeClassifier cascade;

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        _webCamTexture = new WebCamTexture(devices[0].name);
        _webCamTexture.Play();
        cascade = new CascadeClassifier(Application.dataPath + @"haarcascade_frontalface_default.xml");
    }

    void Update()
    {
        GetComponent<Renderer>().material.mainTexture = _webCamTexture;
        Mat frame = OpenCvSharp.Unity.TextureToMat(_webCamTexture);
        findNewFace(frame);
    }

    void findNewFace(Mat frame)
    {
        var faces = cascade.DetectMultiScale(frame, 1.1, 2, HaarDetectionType.ScaleImage);
        if (faces.Length >= 1)
        {
            Debug.Log(faces[0].Location);
        }
    }
}
but there is always the error message
FileNotFoundException: "C:/Users/bomba/Documents/Eigene Spiele/I.B.I.S/Assetshaarcascade_frontalface_default.xml" not found
OpenCvSharp.CascadeClassifier..ctor (System.String fileName) (at Assets/OpenCV+Unity/Assets/Scripts/OpenCvSharp/modules/objdetect/CascadeClassifier.cs:40)
FaceDector.Start () (at Assets/FaceDector.cs:24)
and then, every frame:
NullReferenceException: Object reference not set to an instance of an object
FaceDector.findNewFace (OpenCvSharp.Mat frame) (at Assets/FaceDector.cs:36)
FaceDector.Update () (at Assets/FaceDector.cs:31)
Does anyone have a solution for the problem?
I presume that you are talking about this tutorial:
Advanced Tutorial-OpenCV in Unity
Change:
cascade = new CascadeClassifier(Application.dataPath + @"haarcascade_frontalface_default.xml");
to
cascade = new CascadeClassifier(System.IO.Path.Combine(Application.dataPath, "haarcascade_frontalface_default.xml"));
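The reason this works: plain + concatenation just glues the two strings together with no separator, which is exactly why the exception shows Assetshaarcascade_frontalface_default.xml, while Path.Combine inserts the platform's directory separator. A minimal standalone sketch (the dataPath value is made up for illustration):

```csharp
using System;
using System.IO;

class PathDemo
{
    static void Main()
    {
        // Hypothetical stand-in for Application.dataPath (no trailing slash).
        string dataPath = "C:/Project/Assets";

        // String concatenation: no separator is inserted.
        string broken = dataPath + "haarcascade_frontalface_default.xml";

        // Path.Combine inserts the platform's directory separator.
        string combined = Path.Combine(dataPath, "haarcascade_frontalface_default.xml");

        Console.WriteLine(broken);   // ...Assetshaarcascade_frontalface_default.xml
        Console.WriteLine(combined); // ...Assets<separator>haarcascade_frontalface_default.xml
    }
}
```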

NameResolutionFailure with Azure Storage Unity UWP app (works in Player)

I have a HoloLens app that should load data from Azure Storage. When using the WindowsAzure.Storage package converted to a unitypackage, I can load data in the Unity Player, and when testing with a normal 2D XAML UWP app I can also load data using that API on the HoloLens; however, when debugging the IL2CPP project, I get a "WebException: Error: NameResolutionFailure" (full log).
Here are the steps to build a simplified testing project:
Open Unity 2018.2 (I use 2018.2.14f; 2018.2 is necessary for https, which is apparently necessary for connecting to Azure)
Set the .NET version of the Unity project to 4.x because the Azure Storage API uses await/async. I used IL2CPP as the backend; the .NET backend gives errors about some Newtonsoft.Json functions not being found, which might be what is causing my problems? Assets/Plugins/Newtonsoft.Json.dll exists and references .NET v4.0.30319.
Error: method `System.Threading.Tasks.Task`1<Newtonsoft.Json.Linq.JObject> Newtonsoft.Json.Linq.JObject::LoadAsync(Newtonsoft.Json.JsonReader, System.Threading.CancellationToken)` doesn't exist in target framework. It is referenced from Microsoft.WindowsAzure.Storage.dll at System.Void Microsoft.WindowsAzure.Storage.ODataErrorHelper/<...>d__2::MoveNext().
Create an empty game object called ImageGrid at (0, 0, 2)
Import the script PopulateImageGrid.cs below to the project and attach it to the ImageGrid
Create a prefab from a 1*1*0.1 cube and set the public field Image Grid Tile of the Image Grid game object to that prefab
Delete Assets/Plugins/Microsoft.CSharp.dll as Unity complains about it existing twice
Build as UWP, load the built project in Visual Studio, and start with Release and x86 selected (or deploy to the HoloLens)
Here's the PopulateImageGrid.cs. Feel free to connect with the account details given in the code, as it's a free account without sensitive data.
using System.Collections;
using System.Collections.Generic;
// using UnityEditor.PackageManager;
using UnityEngine;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;
using System;
public class PopulateImageGrid : MonoBehaviour {

    public Transform ImageGridTile;

    async void Start()
    {
        Debug.Log("In PopulateImageGrid.Start()");
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(@"DefaultEndpointsProtocol=https;FileEndpoint=https://meshiconstorage.file.core.windows.net;AccountName=meshiconstorage;AccountKey=2Myeg/WUowehkrAY8Lgl361xxylfkMdITrVapKKVPyo9bVFqN6/uD1S66CB4oTPnnWncLubiVjioBUIT+4utaw==");
        CloudFileClient cloudFileClient = storageAccount.CreateCloudFileClient();
        string shareName = "meshicon-share1";
        var cloudFileShare = cloudFileClient.GetShareReference(shareName);
        CloudFileDirectory rootDirectory = cloudFileShare.GetRootDirectoryReference();
        var fileDirectory = rootDirectory.GetDirectoryReference("images");
        FileContinuationToken token = null;
        int row = 0;
        int col = 0;
        const int width = 32;
        const int height = 32;
        do
        {
            FileResultSegment frs = await fileDirectory.ListFilesAndDirectoriesSegmentedAsync(token);
            foreach (var result in frs.Results)
            {
                Debug.Log("In loop with " + result.ToString());
                Vector3 position = new Vector3(col++ * 0.13f - 0.39f, row * 0.13f - 0.26f, 0f);
                Quaternion rotation = Quaternion.identity;
                Transform gridTile = Instantiate(ImageGridTile);
                gridTile.transform.parent = gameObject.transform;
                gridTile.localPosition = position;
                gridTile.localRotation = rotation;
                if (col > 6)
                {
                    row++;
                    col = 0;
                }
                byte[] imgData = new byte[10000];
                int size = await ((CloudFile)result).DownloadToByteArrayAsync(imgData, 0);
                Debug.Log("Downloaded to byte[]");
                Texture2D texture = new Texture2D(width, height);
                byte[] realImgData = new byte[size];
                Array.Copy(imgData, realImgData, size);
                texture.LoadImage(realImgData); // use the copy trimmed to the actual downloaded size
                gridTile.GetComponent<MeshRenderer>().material.mainTexture = texture;
            }
        } while (token != null);
    }
}
The trick was to wait until the NameResolutionFailure error appeared after a 30-second timeout, which made it apparent to me that I had to activate internet permissions in the appxmanifest.
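For anyone hitting the same thing: the capability lives in Package.appxmanifest (Unity also exposes it as "InternetClient" under Publishing Settings > Capabilities). The relevant fragment, with the rest of the manifest omitted, looks roughly like this:

```xml
<Capabilities>
  <!-- Allows outbound network access, e.g. to Azure Storage endpoints -->
  <Capability Name="internetClient" />
</Capabilities>
```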

OpenCVSharp: Need help tracking white blobs in moving video file using Bounding Box + Centroids

I'm new to Stack Overflow, so I apologize if this question is too broad or off topic, or if I'm about to supply insufficient or unnecessary information, but I really need some assistance! I'm new to computer vision and my task is as follows:
I need to take in a video file (in my particular case we are recording simple human motion patterns such as walking in a straight line, circle, zig-zag, etc., which we will later use for deep learning with convolutional neural networks). The ultimate goal is for the code to perform background subtraction / foreground detection on the video file, detect and track the white blobs in the foreground mask with a red bounding box (ignoring smaller blobs that should be treated as noise), and, most importantly, calculate and update the centroids of the bounding boxes in each frame as the person moves, writing the centroid information for each frame to a text file rather than the console window. The centroid information will then be used to build a small deep-learning database with TensorFlow, so that when we give the program an unrecognized pattern of x,y centroid coordinates, it can compare those coordinates with the patterns of coordinates in the database and say "this is Pattern 1, or Pattern 2", etc.
This is the code that does the background subtraction on a supplied VIDEO file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenCvSharp;
using OpenCvSharp.CPlusPlus;
using OpenCvSharp.Blob;
namespace OpenCvSharp.Test
{
    /// <summary>
    ///
    /// </summary>
    class BgSubtractorMOG
    {
        public BgSubtractorMOG()
        {
            //using (CvCapture capture = CvCapture.FromCamera(0)) // for webcam
            //using (IplImage img = cap.QueryFrame())
            using (CvCapture capture = new CvCapture("C:\\Users\\tev\\Documents\\Scenario 1.avi")) // specify your movie file
            using (BackgroundSubtractorMOG mog = new BackgroundSubtractorMOG())
            using (CvWindow windowSrc = new CvWindow("src"))
            using (CvWindow windowDst = new CvWindow("dst"))
            {
                IplImage imgFrame;
                using (Mat imgFg = new Mat(capture.FrameWidth, capture.FrameHeight, MatrixType.U8C1))
                    while ((imgFrame = capture.QueryFrame()) != null)
                    {
                        mog.Run(new Mat(imgFrame, false), imgFg, 0.01);
                        windowSrc.Image = imgFrame;
                        windowDst.Image = imgFg.ToIplImage();
                        Cv.WaitKey(50);
                    }
            }
        }
    }
}
And this is the code that finds the bbox + centroids on IMAGES:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenCvSharp;
using OpenCvSharp.Blob;
namespace WhiteblobDetection
{
    class Program
    {
        static void Main(string[] args)
        {
            //using (IplImage imgSrc = new IplImage("shapes.png", LoadMode.Color))
            using (IplImage imgSrc = new IplImage("C:\\OpenCvSharp\\TestImage\\mydrawing.png", LoadMode.Color))
            using (IplImage imgBinary = new IplImage(imgSrc.Size, BitDepth.U8, 1))
            using (IplImage imgLabel = new IplImage(imgSrc.Size, CvBlobLib.DepthLabel, 1))
            using (IplImage imgDst = new IplImage(imgSrc.Size, BitDepth.U8, 3))
            {
                Cv.CvtColor(imgSrc, imgBinary, ColorConversion.BgrToGray);
                Cv.Threshold(imgBinary, imgBinary, 200, 255, ThresholdType.Binary);
                using (CvBlobs blobs = new CvBlobs())
                {
                    blobs.Label(imgBinary, imgLabel);
                    int count = 0;
                    foreach (var item in blobs)
                    {
                        Console.WriteLine("{0} | Centroid:{1} Area:{2}", item.Key, item.Value.Centroid, item.Value.Area);
                        // draw rectangle
                        imgSrc.DrawRect(item.Value.Rect, CvColor.Red);
                        // draw cross at the centroid
                        CvPoint pt1, pt2;
                        int d = 3;
                        pt1.X = (int)item.Value.Centroid.X - d;
                        pt1.Y = (int)item.Value.Centroid.Y;
                        pt2.X = (int)item.Value.Centroid.X + d;
                        pt2.Y = (int)item.Value.Centroid.Y;
                        //imgSrc.DrawLine(item.Value.Centroid, item.Value.Rect.TopLeft, CvColor.Red, 1);
                        imgSrc.DrawLine(pt1, pt2, CvColor.Red, 1);
                        pt1.X = (int)item.Value.Centroid.X;
                        pt1.Y = (int)item.Value.Centroid.Y - d;
                        pt2.X = (int)item.Value.Centroid.X;
                        pt2.Y = (int)item.Value.Centroid.Y + d;
                        imgSrc.DrawLine(pt1, pt2, CvColor.Red, 1);
                    }
                    blobs.RenderBlobs(imgLabel, imgSrc, imgDst);
                }
            }
        }
    }
}
I am not looking for someone to write the solution for me, because I wouldn't learn anything that way (though if you did, I'd love you forever! lol). I'm simply looking for ideas or more in-depth documentation I could use to help me learn.
These were two pieces of sample code supplied to me by an advisor. I've modified only what I understand and can find documentation for on the internet, but there isn't much for OpenCvSharp, and being a newbie to computer vision makes it difficult for me to port C++ OpenCV code over to OpenCvSharp.
So to sum things up, my task is basically to combine the two samples so that the bbox + centroid code, which works on images, runs on a video file like the background-subtraction code does. Given what I already have, what I'm missing is probably very simple, and I just need another set of eyes to help me spot it.
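Not an answer to the OpenCvSharp wiring itself, but since the end goal is a text file of per-frame centroids, the filtering/logging step can be isolated into plain C# and tested on its own. A sketch under my own assumptions (the Blob type, the area threshold, and the "frame,x,y" line format are illustrative, not taken from the samples above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for what CvBlobs yields per blob: a centroid and an area.
public record Blob(double Cx, double Cy, int Area);

public static class CentroidLog
{
    // Drop blobs below the noise threshold and emit one "frame,x,y" line each,
    // ready to be appended to the output text file.
    public static List<string> FormatFrame(int frame, IEnumerable<Blob> blobs, int minArea)
    {
        return blobs
            .Where(b => b.Area >= minArea)
            .Select(b => FormattableString.Invariant($"{frame},{b.Cx:F1},{b.Cy:F1}"))
            .ToList();
    }
}

public class Demo
{
    public static void Main()
    {
        var blobs = new[] { new Blob(10.0, 20.0, 120), new Blob(5.0, 5.0, 8) };
        // The 8-pixel blob is treated as noise and skipped.
        foreach (var line in CentroidLog.FormatFrame(3, blobs, minArea: 50))
            Console.WriteLine(line); // prints: 3,10.0,20.0
        // In the combined program these lines would go to a file each frame,
        // e.g. File.AppendAllLines("centroids.txt", lines).
    }
}
```

In the combined program, this would be called once per frame inside the QueryFrame loop, with the blobs coming from blobs.Label as in the image sample.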

QR Code Scanner in Unity?

I am trying to get a QR code reader in Unity that works on iOS and Android.
Unity Zxing QR code scanner integration
Using the above answer, I added Vuforia (which works perfectly on its own), then added zxing.unity.dll to the Plugins folder and attached this script to the ARCamera in a scene.
using UnityEngine;
using System;
using System.Collections;
using Vuforia;
using System.Threading;
using ZXing;
using ZXing.QrCode;
using ZXing.Common;
[AddComponentMenu("System/VuforiaScanner")]
public class VuforiaScanner : MonoBehaviour
{
    private bool cameraInitialized;
    private BarcodeReader barCodeReader;

    void Start()
    {
        barCodeReader = new BarcodeReader();
        StartCoroutine(InitializeCamera());
    }

    private IEnumerator InitializeCamera()
    {
        // Waiting a little seems to avoid Vuforia's crashes.
        yield return new WaitForSeconds(1.25f);
        var isFrameFormatSet = CameraDevice.Instance.SetFrameFormat(Image.PIXEL_FORMAT.RGB888, true);
        Debug.Log(String.Format("FormatSet : {0}", isFrameFormatSet));
        // Force autofocus.
        var isAutoFocus = CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
        if (!isAutoFocus)
        {
            CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_NORMAL);
        }
        Debug.Log(String.Format("AutoFocus : {0}", isAutoFocus));
        cameraInitialized = true;
    }

    private void Update()
    {
        if (cameraInitialized)
        {
            try
            {
                var cameraFeed = CameraDevice.Instance.GetCameraImage(Image.PIXEL_FORMAT.RGB888);
                if (cameraFeed == null)
                {
                    return;
                }
                var data = barCodeReader.Decode(cameraFeed.Pixels, cameraFeed.BufferWidth, cameraFeed.BufferHeight, RGBLuminanceSource.BitmapFormat.RGB24);
                if (data != null)
                {
                    // QR code detected.
                    Debug.Log(data.Text);
                }
                else
                {
                    Debug.Log("No QR code detected!");
                }
            }
            catch (Exception e)
            {
                Debug.LogError(e.Message);
            }
        }
    }
}
But it is still not detecting any QR code. Is there any other way to do QR code reading and writing besides ZXing? Or do you have any working sample project?
I also tried to implement a QR code reader with Vuforia and ZXing using almost the same code you used. For me it worked, but it took very, very long to detect the QR code.
When I used a Color32 array instead of cameraFeed.Pixels it was much quicker:
GUI.DrawTexture(screenRect, webCamTexture, ScaleMode.ScaleToFit);
try
{
    IBarcodeReader barcodeReader = new BarcodeReader();
    var result = barcodeReader.Decode(webCamTexture.GetPixels32(),
        webCamTexture.width, webCamTexture.height);
    if (result != null)
    {
        Debug.Log("DECODED TEXT FROM QR: " + result.Text);
        loadNewPoi(Convert.ToInt32(result.Text));
        PlayerPrefs.SetInt("camera_enabled", Convert.ToInt32(false));
        webCamTexture.Stop();
    }
}
catch (Exception ex) // the try block needs a handler to compile
{
    Debug.LogWarning(ex.Message);
}
But in this example I was using a WebCamTexture instead of Vuforia.
Unfortunately it isn't possible to get a Color32 array via GetPixels32() from the Vuforia camera.
Another option is to use the QR codes as image targets, but I got a lot of wrong detections that way.
For me there is no fitting solution for ZXing together with Vuforia at the moment.

Failed To Create Assembly in MS SQL SERVER

I am trying to implement routing functionality in MS SQL Server 2012 using the Pro Spatial tutorial. I created a C# class and successfully built the DLL file.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SqlServer.Server;
using Microsoft.SqlServer.Types;
namespace ProSQLSpatial.Ch14
{
    public partial class UserDefinedFunctions
    {
        [Microsoft.SqlServer.Server.SqlFunction]
        public static SqlGeometry GeometryTSP(SqlGeometry PlacesToVisit)
        {
            // Convert the supplied MultiPoint instance into a List<> of SqlGeometry points
            List<SqlGeometry> RemainingCities = new List<SqlGeometry>();
            // Loop and add each point to the list
            for (int i = 1; i <= PlacesToVisit.STNumGeometries(); i++)
            {
                RemainingCities.Add(PlacesToVisit.STGeometryN(i));
            }
            // Start the tour from the first city
            SqlGeometry CurrentCity = RemainingCities[0];
            // Begin the geometry
            SqlGeometryBuilder Builder = new SqlGeometryBuilder();
            Builder.SetSrid((int)PlacesToVisit.STSrid);
            Builder.BeginGeometry(OpenGisGeometryType.LineString);
            // Begin the LineString with the first point
            Builder.BeginFigure((double)CurrentCity.STX, (double)CurrentCity.STY);
            // We don't need to visit this city again
            RemainingCities.Remove(CurrentCity);
            // While there are still unvisited cities
            while (RemainingCities.Count > 0)
            {
                // Sort the remaining cities by distance from the current city
                RemainingCities.Sort(delegate(SqlGeometry p1, SqlGeometry p2)
                    { return p1.STDistance(CurrentCity).CompareTo(p2.STDistance(CurrentCity)); });
                // Move to the closest destination
                CurrentCity = RemainingCities[0];
                // Add this city to the tour route
                Builder.AddLine((double)CurrentCity.STX, (double)CurrentCity.STY);
                // Update the list of remaining cities
                RemainingCities.Remove(CurrentCity);
            }
            // End the geometry
            Builder.EndFigure();
            Builder.EndGeometry();
            // Return the constructed geometry
            return Builder.ConstructedGeometry;
        }
    }
}
I also enabled CLR, and when I try to create an assembly using the DLL built above:
CREATE ASSEMBLY GeometryTSP
FROM 'D:\Routing\my example\GeometryTSP\GeometryTSP\bin\Debug\GeometryTSP.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO
I'm getting a "Failed to create AppDomain" error like this:
Msg 6517, Level 16, State 1, Line 2
Failed to create AppDomain "master.dbo[ddl].12".
Exception has been thrown by the target of an invocation.
What should be the reason?
Try removing the namespace section:
namespace ProSQLSpatial.Ch14
{
}
SQL Server uses the default namespace.
After some research I found the solution: the problem was that the system had not been restarted after a .NET Framework installation.
