I did a lot of looking around and couldn't find any solution.
Goal: to blink the camera flash LED on my Windows 8.1 tablet. I am developing on Windows 8.1 with VS2013.
The InitializeAsync method allows the application to initialize the camera and microphone with the default settings.
I built the app as a Windows Store application and it worked flawlessly.
I need the file to be an executable, so I need to convert it to a console application.
I get the following error on mc.InitializeAsync: "Error 1 'await' requires that the type 'Windows.Foundation.IAsyncAction' have a suitable GetAwaiter method. Are you missing a using directive for 'System'? c:\users\levi\documents\visual studio 2013\projects\ledblinkerconsole\ledblinkerconsole\torch.cs 16 14 LEDBlinkerConsole"
I have no idea how to initialize the camera via a console application.
Any other ways to blink the LED flash are greatly appreciated. I do not have access to the memory locations to do it in C++, though.
Thanks guys!
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Windows.Media.Devices;
using Windows.Media.Capture;

namespace LEDBlinkerConsole
{
    class Torch
    {
        public async static void BlinkLED()
        {
            MediaCapture mc = new MediaCapture();
            await mc.InitializeAsync();

            Console.WriteLine("Please type \"flash\" to flash the LED\n");
            string consInput = Console.ReadLine();

            if (consInput.ToUpper() == "FLASH")
            {
                if (mc.VideoDeviceController.TorchControl.Supported == true)
                {
                    mc.VideoDeviceController.TorchControl.Enabled = true;
                    mc.VideoDeviceController.TorchControl.PowerPercent = 100;
                }
            }
        }
    }
}
I figured it out. I had to reference "System.Runtime.WindowsRuntime" and then delete the reference to "System.Runtime" in order for it to work. More info here about making async calls from a non-Metro app:
http://www.wintellect.com/blogs/jeffreyr/using-the-windows-runtime-from-a-non-metro-application
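For reference, a minimal sketch of what the console version can look like once "System.Runtime.WindowsRuntime" is referenced (plus a reference to the Windows metadata for the WinRT types, as described in the linked post). The structure below is illustrative rather than the original project's exact code; returning Task instead of void lets Main block until the camera work finishes.

using System;
using System.Threading.Tasks;
using Windows.Media.Capture;

namespace LEDBlinkerConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            // A console Main cannot be async here, so block on the async work.
            BlinkLEDAsync().Wait();
        }

        static async Task BlinkLEDAsync()
        {
            var mc = new MediaCapture();
            // Awaiting IAsyncAction compiles because System.Runtime.WindowsRuntime
            // supplies the GetAwaiter extension method.
            await mc.InitializeAsync();

            var torch = mc.VideoDeviceController.TorchControl;
            if (torch.Supported)
            {
                torch.PowerPercent = 100;
                torch.Enabled = true;   // LED on
                await Task.Delay(500);
                torch.Enabled = false;  // LED off
            }
        }
    }
}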
I have been using the Microsoft.Speech synthesizer for years on different PCs and different versions of Windows. I am trying to set up the development and support environment on a new Windows 10 PC, and I can't get the SpeechSynthesizer to work correctly. I have downloaded the Microsoft SDK 5.1 and the voices twice now to make sure nothing was corrupted. I have stepped through each line of code with the debugger, and everything works as expected until the call to spsynthesizer.Speak(); the call never speaks and never returns to the program. I have tried executing as x86, x64 and AnyCPU, and I have a reference entry to Microsoft.Speech.
Any suggestions? Here is the simple C# test application.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Speech.Synthesis;

namespace test_text_to_speech
{
    class Program
    {
        static void Main(string[] args)
        {
            Microsoft.Speech.Synthesis.SpeechSynthesizer spsynthesizer = new Microsoft.Speech.Synthesis.SpeechSynthesizer();
            System.Collections.ObjectModel.ReadOnlyCollection<Microsoft.Speech.Synthesis.InstalledVoice> myvoices;
            myvoices = spsynthesizer.GetInstalledVoices();

            spsynthesizer.SelectVoice(myvoices[0].VoiceInfo.Name); // Invoke the one which you want
            spsynthesizer.Volume = 100; // Can be 1 - 100
            spsynthesizer.Rate = 1;     // Can be 1 - 10
            spsynthesizer.SetOutputToDefaultAudioDevice();

            PromptBuilder Thanks = new PromptBuilder();
            Thanks.AppendSsmlMarkup("<voice xml:lang=\"en-US\">");
            Thanks.StartStyle(new PromptStyle(PromptEmphasis.Strong));
            Thanks.AppendText("Hello World");
            Thanks.EndStyle();
            Thanks.AppendSsmlMarkup("</voice>");

            spsynthesizer.Speak(Thanks);
            spsynthesizer.Speak("try with no markup");
        }
    }
}
I'm currently doing a project where I have to use the Affectiva SDK to analyse some videos that I have recorded. I have downloaded the files they provided and started writing the code for the SDK, but when I call the callback functions in my code, Visual Studio doesn't accept the arguments that are passed in. So I figured that the interfaces for the callback functions must still need to be implemented. I'm not really clear on how to do this, though, since I thought this was all done in their assembly code. My code so far looks like this:
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Threading.Tasks;
using System.Windows;
using Affdex;

namespace ConsoleApplication2
{
    class Program
    {
        public interface FaceListener { }
        public interface ImageListener { }
        public interface ProcessStatusListener { }

        static void Main(string[] args)
        {
            VideoDetector detector = new VideoDetector(15);

            String licensePath = "C:/Users/hamud/Desktop/sdk_ahmedmudi1992#gmail.com.license";
            detector.setLicensePath(licensePath);

            String classifierPath = "C:/Programmer/Affectiva/Affdex SDK/data";
            detector.setClassifierPath(classifierPath);

            detector.setFaceListener(this);
            detector.setImageListener(this);
            detector.setProcessStatusListener(this);

            detector.setDetectSmile(true);
            detector.setDetectSurprise(false);
            detector.setDetectBrowRaise(false);
            detector.setDetectAnger(false);
            detector.setDetectDisgust(false);
            detector.setDetectAllExpressions(false);

            detector.start();
            detector.stop();
        }
    }
}
As far as I know, I have to write code for the interfaces if I'm not mistaken... Or do I? Please help.
Here is a tutorial on getting started with analyzing video files.
As far as I know, I have to write code for the interfaces if I'm not mistaken... Or do I?
No, you don't. You just have to implement the methods of the interfaces you want to use.
Here is the link to the sample app that uses CameraDetector, which you can relate to, since both the CameraDetector and the VideoDetector use the FaceListener, ProcessStatusListener and ImageListener interfaces.
EDIT: You have to implement the listeners. For example, in the code sample you are using the FaceListener, so you need to write implementations for its callbacks, namely onFaceFound() and onFaceLost().
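For illustration, here is a rough sketch (not from the original post) of implementing the FaceListener callbacks named above; the parameter types are assumptions and should be checked against the interface definitions in the Affdex assembly.

using System;
using Affdex;

namespace ConsoleApplication2
{
    // Implements the SDK's FaceListener; ImageListener and ProcessStatusListener
    // would be implemented in the same way.
    class MyFaceListener : Affdex.FaceListener
    {
        // Parameter types below are assumed, not taken from the original post.
        public void onFaceFound(float timestamp, int faceId)
        {
            Console.WriteLine("Face {0} found at {1}", faceId, timestamp);
        }

        public void onFaceLost(float timestamp, int faceId)
        {
            Console.WriteLine("Face {0} lost at {1}", faceId, timestamp);
        }
    }
}

An instance of a class like this is then passed to detector.setFaceListener(...) (and likewise for the other listeners) instead of this, which is not available inside the static Main shown in the question.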
You may also want to create an object of ProcessStatusListener and wait for the process to end for a video file, something like this:
AffdexProcessStatusListener processStatusListener = new AffdexProcessStatusListener();
processStatusListener.wait();
Here is a link to our C# app AffdexMe which uses CameraDetector. You may find examples of CameraDetector, PhotoDetector, VideoDetector and FrameDetector in our getting started guide.
I'm working with some audio files in my app (mp3, wav, etc.).
I was using the Audio class from the Microsoft.DirectX.AudioVideoPlayback dll,
so first I had to download the dll. After doing so, I went to Add Reference,
then I browsed to the dll location and added it.
I also installed the DirectX 9.0 Web Setup.
Now, I don't get any problem with just declaring: Audio aud;
but if I do something like this:
Audio aud = new Audio(path);
or
Video vid = new Video(path);
If I press Ctrl+F5 the app crashes immediately. If I try to debug, I just can't see the debugging cursor, and if I keep pressing F10 nothing ever happens.
I put it in a try/catch block; it didn't throw an exception.
So what's going on?
How can I fix this?
I even tried to make a whole new app. Here's the whole code; there's nothing really in it:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.DirectX.DirectSound;
using Microsoft.DirectX.AudioVideoPlayback;
using Microsoft.DirectX;

namespace WindowsFormsApplication2
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            Audio aud = new Audio("C:\\Users\\vexe\\Desktop\\Songs\\Kimosabe.mp3");
        }
    }
}
Any help would be appreciated.
Thanks in advance.
Add these using directives to your project:
using Microsoft.DirectX.DirectSound;
using Microsoft.DirectX;
I have found a number of threads on this error, but I haven't found a solution. I am using a number of class libraries from XNAExpert.com that are designed to animate a skinned mesh. I'm using XNA 4.0 on Windows XP and programming games for Windows. Here is the complete error:
Cannot find ContentTypeReader SkinnedModel.SkeletonReader, SkinnedModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.
The tutorial can be found here. Here is the code from the reader class within the SkinnedModel project:
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;

namespace SkinnedModel
{
    public class SkeletonReader : ContentTypeReader<Skeleton>
    {
        protected override Skeleton Read(ContentReader input, Skeleton existingInstance)
        {
            List<Bone> boneList = input.ReadObject<List<Bone>>();
            return new Skeleton(boneList);
        }
    }
}
Here is the code from the writer class within the SkinnedModelProcessor project:
[ContentTypeWriter]
public class SkeletonWriter : ContentTypeWriter<Skeleton>
{
    protected override void Write(ContentWriter output, Skeleton value)
    {
        output.WriteObject(value.BoneList);
    }

    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        return typeof(SkeletonReader).AssemblyQualifiedName;
    }
}
As you can see, the type returned is the assembly-qualified name for each reader... Is anyone aware of another reason why I may be having trouble?
The solution for me was to just delete the ContentTypeReader and create a new one.
My problem seemed to be caused by having a mirrored project (I had a Windows game library and a Windows Phone game library). On the Windows client the ContentTypeReader was successfully found, but not on the Windows Phone client.
As I read it, the SkeletonReader is known to the SkeletonWriter. I cannot think of a valid way to set up the projects so that this is true.
Project Main (links to Content)
    SkeletonReader
    Skeleton
Project Content (links to ContentExtension)
    SkeletonFile (has Processor set to SkeletonProcessor)
Project ContentExtension (cannot link circularly)
    SkeletonContent (is input for the Writer)
    SkeletonWriter
    SkeletonProcessor
Look at your project setup; I think your assemblies are not linked correctly.
And return a fixed string in GetRuntimeReader: if you set up the projects correctly, you will lose the connection to the SkeletonReader.
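To illustrate what "a fixed string" means here, GetRuntimeReader can spell out the reader's assembly-qualified name by hand instead of using typeof(), so the content extension project needs no reference to the reader project. This is only a sketch; the string must match your actual SkinnedModel assembly (the error message above shows the form it takes).

public override string GetRuntimeReader(TargetPlatform targetPlatform)
{
    // Assembly-qualified name of the runtime reader, written out directly so this
    // project does not need a compile-time reference to the SkinnedModel project.
    return "SkinnedModel.SkeletonReader, SkinnedModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null";
}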
There is a quite complete tutorial on the content pipeline on the interwebs.
I want to write a simple hello world add-in for Media Center on Windows 7, but I am having problems finding up-to-date, functional documentation. I found this page: http://blogs.msdn.com/b/mcreasy/archive/2004/10/12/241449.aspx which looks to be exactly what I need. I implemented it, but some of the interfaces it references are marked as obsolete, and even so, when I try to launch it in Media Center it just pops up a dialog saying "unable to launch addin".
I updated the namespace from using Microsoft.MediaCenter.AddIn to using Microsoft.MediaCenter.Hosting, which looks to be the up-to-date namespace according to the SDK docs, but I still have the same problem.
Registering the assembly with the GAC and with RegisterMCEApp both succeed, and I have unregistered and re-registered from both places in between builds.
I strong-named the assembly with a .snk file and got the public key token to update registration.xml.
Can anyone either tell me what I am doing wrong or direct me to an up-to-date tutorial or docs?
Here is the little bit of code I have:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.MediaCenter.Hosting;

namespace MCPluginTakeTwo
{
    public class HelloWorldAddIn : MarshalByRefObject, IAddInModule, IAddInEntryPoint
    {
        public void Initialize(Dictionary<string, object> appInfo, Dictionary<string, object> entryPointInfo)
        {
        }

        public void Uninitialize()
        {
        }

        public void Launch(AddInHost host)
        {
        }
    }
}
Maybe looking at some open-source Media Center plugins would help.
Here's another getting-started tutorial, with some tips for setting things up in Visual Studio 2010 (because the SDK only comes with VS 2008 templates):
http://david.gardiner.net.au/2010/10/writing-media-center-application-in.html