We're working on a SIP softphone, and we get audio feedback when we call from one phone running the app to another. However, when we call from a normal SIP phone (software or hardware) to our app, everything works fine - the problem only occurs when calling from one phone using the app to another one. Here is the code we use to initialize RIL audio:
public static void InitRILAudio()
{
    IntPtr res;
    RILRESULTCALLBACK result = new RILRESULTCALLBACK(f_result);
    RILNOTIFYCALLBACK notify = new RILNOTIFYCALLBACK(f_notify);

    // Initialize RIL with the notification classes we care about
    res = RIL_Initialize(1, result, notify, (0x00010000 | 0x00020000 | 0x00080000), 0, out hRil);
    if (res != IntPtr.Zero)
        return; // initialization failed

    // Route both playback (Rx) and capture (Tx) to the handset
    RILAUDIODEVICEINFO audioDeviceInfo = new RILAUDIODEVICEINFO();
    audioDeviceInfo.cbSize = 16; // size of RILAUDIODEVICEINFO in bytes
    audioDeviceInfo.dwParams = 0x00000003;   // RIL_PARAM_ADI_ALL
    audioDeviceInfo.dwRxDevice = 0x00000001; // RIL_AUDIO_HANDSET
    audioDeviceInfo.dwTxDevice = 0x00000001; // RIL_AUDIO_HANDSET
    res = RIL_SetAudioDevices(hRil, audioDeviceInfo);
}
We are using SipEk (http://voipengine.googlepages.com/sipeksdk) for the SIP stack; basically we just use a callback delegate from the SDK for the audio. Has anyone else experienced audio feedback loops like this, either with RIL audio or SipEk? Any suggestions?
Thanks in advance!
Feedback means that you're not using echo cancellation (line and/or acoustic, depending on whether it's working as a speakerphone or not), or if you are, the delay in your system (jitter buffers, network, encode/decode, etc) is greater than the echo canceller can handle. Excessive gain/clipping in the wrong places can also defeat any echo canceller (they don't like non-linear effects).
Sounds like you're just dumping the audio off to some other layers. SipEk is just a wrapper for pjsip, but I assume you're doing audio via the Microsoft RIL/etc stuff, not via pjmedia. You need to have a good understanding of your audio paths - where stuff gets sampled, if/how it's acoustic/line echo-cancelled, what the echo tail is, how it gets encoded and packetized, how it's received, jitter-buffered, loss-concealed, and decoded and played back.
I have a dynamic pipeline that switches between sources (File/HTTP/RTMP/SRT) at set intervals and outputs to an RTMP sink. This is working fine, until I attempt to perform a seek.
The seek itself works, but the next time I try to switch sources, the pipeline hangs.
I've been able to stop it from hanging by recreating the muxer after performing a seek, but I still end up with audio/video synchronization issues - the audio plays, but the video falls behind.
For what it's worth, I'm testing the outgoing RTMP stream in VLC, and if I stop/play the stream is back in sync and continues to play without issue.
Here is my pipeline graph:
The application is written in C#.
I'm performing the seek like this:
var bus = pipeline.Bus;
do
{
    var msg = bus.TimedPopFiltered(Constants.SECOND, MessageType.Error | MessageType.Eos | MessageType.StateChanged);
    if (msg != null)
    {
        // Parse message
        ...
    }
    else
    {
        if (pipeline.CurrentState == State.Playing && !seekDone)
        {
            seekDone = true;
            pipeline.SetState(State.Paused);
            var seekTo = 50 * Gst.Constants.SECOND;
            var seeked = mux.SeekSimple(Format.Time, SeekFlags.Flush | SeekFlags.KeyUnit, seekTo);
            if (!seeked)
            {
                Console.WriteLine("Seek failed");
            }
            else
            {
                Console.WriteLine("Performing seek");
            }
            pipeline.SetState(State.Playing);
        }
    }
} while (!terminate);
The source switching is handled by a Timer on a separate thread while the pipeline is in the Paused state.
Any help would be greatly appreciated.
EDIT: I placed pad probes on the video and audio queues and can see that the video timestamps are falling behind the audio by a few seconds.
Considering that this only happens after a seek, I am wondering if there is something in the video processing that is taking too long for the player to handle.
The VLC logs show that the player is dropping a lot of late video frames.
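One common way to re-align streams after a flushing seek is to measure the skew at the queues (exactly what the pad probes above are doing) and feed it back as a pad offset (`gst_pad_set_offset` in the C API; I believe gstreamer-sharp exposes an equivalent on `Pad`). The sketch below only shows the bookkeeping side of that idea, with hypothetical names; whether applying an offset is the right fix for this pipeline is an open question.

```cpp
#include <algorithm>
#include <cstdint>

// Tracks the latest PTS seen by the audio and video pad probes and
// reports the skew. A positive skew means video is behind audio by
// that many nanoseconds, matching the symptom described above.
struct SyncMonitor {
    int64_t audioPts = -1;  // latest audio PTS (ns); -1 = none seen yet
    int64_t videoPts = -1;  // latest video PTS (ns)

    void onAudioBuffer(int64_t ptsNs) { audioPts = std::max(audioPts, ptsNs); }
    void onVideoBuffer(int64_t ptsNs) { videoPts = std::max(videoPts, ptsNs); }

    // Skew in ns (candidate value for a video pad offset);
    // 0 until both probes have fired.
    int64_t skewNs() const {
        if (audioPts < 0 || videoPts < 0) return 0;
        return audioPts - videoPts;
    }
};
```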
I have an AHK script (below) which sends various commands to the Spotify desktop app globally, while in the background, to perform various actions. However, I play an MMO with an always-running anti-cheat that shuts the game down when it detects AHK, because people can use it for macros, etc.
; "CTRL + Alt + UP" Increase the volume.
^!Up::
DetectHiddenWindows, On
WinGet, winInfo, List, ahk_exe Spotify.exe
Loop, %winInfo%
{
thisID := winInfo%A_Index%
ControlFocus , , ahk_id %thisID%
ControlSend, , ^{up}, ahk_id %thisID%
}
return
The anti-cheat does not detect/care about C# programs, so I'm porting the code over to C# and have found a way to send commands to the Spotify application via SendMessage to do most of what I wanted:
private const int WM_KEYDOWN = 0x0100;
private const int WM_KEYUP = 0x0101; // Tried using these to no avail
private const int WM_SYSKEYDOWN = 0x0104;
private const int WM_ACTIVATEAPP = 0x001C;
private const int WM_APPCOMMAND = 0x0319;

var proc = Process.GetProcessesByName("Spotify").FirstOrDefault(p => !string.IsNullOrWhiteSpace(p.MainWindowTitle));
IntPtr hwnd = proc.MainWindowHandle;
SendMessage(hwnd, WM_APPCOMMAND, IntPtr.Zero, (IntPtr)917504);
This command play/pauses the current song, as that is what the final number (917504) corresponds to. Spotify also responds to the command codes for volume up/down; however, they affect the volume of my entire PC, not just Spotify, which is obviously not what I want.
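For reference, that magic number decodes as a WM_APPCOMMAND lParam: the command ID lives in the high-order word, so `lParam = cmd << 16`. A small sketch of the encoding (the constants are the documented APPCOMMAND_* values from winuser.h; the helper name is my own):

```cpp
#include <cstdint>

// WM_APPCOMMAND packs the command ID into the high-order word of lParam.
// Documented command IDs (winuser.h):
const uint32_t APPCOMMAND_VOLUME_DOWN      = 9;
const uint32_t APPCOMMAND_VOLUME_UP        = 10;
const uint32_t APPCOMMAND_MEDIA_PLAY_PAUSE = 14;

// Build the lParam for a given app command; the device and key-state
// flags in the low-order word are left at zero here.
uint32_t appCommandLParam(uint32_t cmd) {
    return cmd << 16;
}
```

`appCommandLParam(APPCOMMAND_MEDIA_PLAY_PAUSE)` yields 917504, the value used above. And since the volume commands are app commands rather than keystrokes, Windows routes them to the system mixer, which is why they change the global volume instead of Spotify's own.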
I've tried literally hundreds of combinations of PostMessage and SendMessage, and I simply cannot figure out how to send the keys 'Ctrl' + 'Up' to make the volume increase (and vice versa for decrease) while in the background. I don't want the window to be brought to the foreground, etc.
Any help is greatly appreciated.
I also mentioned the AHK script because, from my digging, I think the ControlSend function is run from this point in the source code: https://github.com/Lexikos/AutoHotkey_L/blob/90b5c2ac3208c38b9488a72fc3e6e4f9cf20b276/source/keyboard_mouse.cpp#L135 . However, I don't understand C/C++ well enough to figure out how to override the functions that require window focus, etc.
Maybe think about compiling your AHK script to an EXE: it might not be blocked that way, and you would spend no effort porting your AHK solution.
I am not an expert on C#, but here is an attempt at a quick solution.
Gist of the solution:
I have a sensor that sends a signal via serial port, and based on the signal the video being played must be changed. It switches between two videos.
It works as it stands now, using axWindowsMediaPlayer, but unfortunately the app crashes after, let's say, 2 hours or so.
Here is the code that I use to change the video URL when the signal arrives:
if (axWindowsMediaPlayer1.playState == WMPPlayState.wmppsPlaying)
    axWindowsMediaPlayer1.Ctlcontrols.stop();
axWindowsMediaPlayer1.uiMode = "none";
axWindowsMediaPlayer1.URL = mainFile;
axWindowsMediaPlayer1.Ctlcontrols.play();
Initializing the player with this:
axWindowsMediaPlayer1.uiMode = "none";
axWindowsMediaPlayer1.Dock = System.Windows.Forms.DockStyle.Fill;
axWindowsMediaPlayer1.settings.setMode("loop", true);
axWindowsMediaPlayer1.settings.volume = 0;
I assume this has some adverse effect on memory or something; I'm just not expert enough to figure it out. Any suggestions - is this the right way to change the video URL?
Thanks
I have frames of a video at 30 FPS in my C# code, and I want to broadcast them on localhost so all other applications can use them. I thought that since it is video - no concern if a packet is lost, and no need to connect to/accept clients - UDP would be a good choice.
But there are number of problems here.
- If I use UDP unicast, the speed is sufficient: about 25 FPS (CPU usage is 25%, which means 100% of one thread on my 4-core CPU; not ideal, but at least it sends enough data). But unicast can't deliver the data to all clients.
- If I use broadcast, the speed is very low: about 10 FPS with the same CPU usage.

What can I do? The data stays on the same computer, so there is no need for remote access from the LAN or anywhere else. I just want a way to transfer about 30 MBytes of data per second between different applications on the same machine. (640x480 is the fixed image size, x 30 FPS x 3 bytes per pixel, which is about 27,000 KBytes per second.)

- Does UDP multicast have better performance?
- Can TCP give me better performance, even if I accept each client and send to them independently?
- Is there a better way than sockets - memory sharing or something?
- Why is UDP broadcast so slow - only about 10 MBytes per second?
- Is there a fast way to compress frames with high performance (encoding 30 FPS in a second and decoding on the other side)? The client apps are in C++, so this must be a cross-platform approach.
I just want to hear other developers' experiences and ideas here, so please write what you think.
Edit:
More info about the data: frames are in Bitmap RGB24 format, and they stream from a device to my application at 30 FPS. I want to broadcast this data to other applications, which need the images in RGB24 format again. There is no header or anything - only bitmap data of fixed size. All operations must be performed on the fly; using a lossy compression algorithm or anything similar is fine.
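The bandwidth figure above checks out; at 640x480, RGB24 (3 bytes per pixel) and 30 FPS, the raw rate is:

```cpp
#include <cstdint>

// Raw bandwidth of an uncompressed RGB24 video stream in bytes per second.
uint64_t bytesPerSecond(uint64_t width, uint64_t height,
                        uint64_t bytesPerPixel, uint64_t fps) {
    return width * height * bytesPerPixel * fps;
}
```

`bytesPerSecond(640, 480, 3, 30)` is 27,648,000 bytes/s, i.e. roughly 26 MiB/s - well within what shared memory on one machine can move, but a lot to push through the loopback socket path 30 times a second.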
I have experimented with multicast in an industrial environment; it's a good choice over a reliable, non-saturated network.
On localhost, shared memory may be a good choice, because you can build a circular queue of frames and flip from one to the next with only a single mutex protecting a pointer assignment (on the writer side). With one writer and several readers, no problems arise.
On Windows, with C++ and C#, shared memory is called a File Mapping, and you may back it with the system paging file (RAM and/or disk).
See these links for more information:
http://msdn.microsoft.com/en-us/library/system.io.memorymappedfiles.aspx
http://msdn.microsoft.com/en-us/library/dd997372.aspx
Mixing C++ and C# : How to implement shared memory in .NET?
Fully managed shared memory .NET implementations?
The shared memory space isn't protected or private, but it is named. Usually the writer process creates it, and the readers open it by name. Antivirus software looks at this kind of I/O the same way it looks at all others, but doesn't block the communication.
Here is a sample to begin with File Mapping:
// Build a well-known name so readers can open the same mapping
char shmName[MAX_PATH + 1];
sprintf( shmName, "shmVideo_%s", name );
shmName[MAX_PATH] = '\0';

// Create the mapping backed by the system paging file
// (or open it, if another process created it first)
_hMap = CreateFileMapping(
    INVALID_HANDLE_VALUE, 0, PAGE_READWRITE, 0, size, shmName );
if( _hMap == 0 ) {
    throw OSException( __FILE__, __LINE__ );
}
// If the mapping already existed, this process is a reader, not the owner
_owner = ( GetLastError() != ERROR_ALREADY_EXISTS );

_mutex = Mutex::getMutex( name );
Synchronize sync( *_mutex );

// Map the whole section into this process's address space
_data = (char *)MapViewOfFile( _hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0 );
if( _data == 0 ) {
    throw OSException( __FILE__, __LINE__ );
}
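The circular queue of frames mentioned above can be sketched like this (in-process, with std::mutex standing in for the named mutex; in the real shared-memory version, the head index and the frame slots would live inside the mapped region, and the names here are my own):

```cpp
#include <cstddef>
#include <cstring>
#include <mutex>
#include <vector>

// Single-writer / multi-reader frame ring. Only the head index is
// protected by the mutex: the writer fills a free slot off-line, then
// publishes it with a single index assignment, so readers never see a
// half-written frame.
class FrameRing {
public:
    FrameRing(std::size_t slots, std::size_t frameSize)
        : buf_(slots, std::vector<unsigned char>(frameSize)) {}

    void write(const unsigned char* frame, std::size_t len) {
        std::size_t next = (head_ + 1) % buf_.size();  // fill off-line
        std::memcpy(buf_[next].data(), frame, len);
        std::lock_guard<std::mutex> lock(m_);
        head_ = next;                                  // publish
    }

    // Copy out the most recently published frame.
    std::vector<unsigned char> readLatest() {
        std::size_t h;
        {
            std::lock_guard<std::mutex> lock(m_);
            h = head_;
        }
        return buf_[h];
    }

private:
    std::vector<std::vector<unsigned char>> buf_;  // frame slots
    std::size_t head_ = 0;                         // last published slot
    std::mutex m_;
};
```

With enough slots, the writer never wraps around onto a frame a reader is still copying; a more robust version would add a per-slot sequence number to detect that case.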
Use live555 (http://www.live555.com/) for streaming, in combination with your favorite compressor, e.g. ffmpeg.
I connected a Canon EOS camera to a PC.
I have an application that can take a picture remotely and download the image to the PC,
but when I remove the SD card from the camera, I can't download the image from the buffer to the PC.
// Register the object event callback
err = EDSDK.EdsSetObjectEventHandler(obj.camdevice, EDSDK.ObjectEvent_All, objectEventHandler, new IntPtr(0));
if (err != EDSDK.EDS_ERR_OK)
    Debug.WriteLine("Error registering object event handler");
public uint objectEventHandler(uint inEvent, IntPtr inRef, IntPtr inContext)
{
    switch (inEvent)
    {
        case EDSDK.ObjectEvent_DirItemCreated:
            this.getCapturedItem(inRef);
            Debug.WriteLine("dir item created");
            break;
        case EDSDK.ObjectEvent_DirItemRequestTransfer:
            this.getCapturedItem(inRef);
            Debug.WriteLine("file transfer request event");
            break;
        default:
            Debug.WriteLine(String.Format("ObjectEventHandler: event {0}", inEvent));
            break;
    }
    return 0;
}
Can anyone help me figure out why this event is not called, or how to download the image from the buffer to the PC without having an SD card in the camera?
thanks
You probably ran into the same problem as I did yesterday: the camera tries to store the image for a later download, finds no memory card to store it to, and instantly discards the image.
To get your callback to fire, you need to set the camera to save images to the PC (kEdsSaveTo_Host) at some point during your camera initialization routine. In C++, it worked like this:
EdsInt32 saveTarget = kEdsSaveTo_Host;
err = EdsSetPropertyData( _camera, kEdsPropID_SaveTo, 0, 4, &saveTarget );
You probably need to build an IntPtr for this. At least, that's what Dmitriy Prozorovskiy did (prompted by a certain akadunno) in this thread.
The SDK (as far as I know) only exposes the picture-taking event in the form of the object being created on the file system of the camera (i.e. the SD card). There is no way that I'm familiar with to capture from the buffer. In a way this makes sense: in an environment with only a small amount of onboard memory, it is important to keep the volatile memory clear so the camera can continue to take photographs. Once the buffer has been flushed to nonvolatile memory, you are then free to interact with those bytes. Limiting, I know, but it is what it is.
The question asks for C#, but in Java one has to set the property like this:
NativeLongByReference number = new NativeLongByReference( new NativeLong( EdSdkLibrary.EdsSaveTo.kEdsSaveTo_Host ) );
EdsVoid data = new EdsVoid( number.getPointer() );
NativeLong l = EDSDK.EdsSetPropertyData(edsCamera, new NativeLong(EdSdkLibrary.kEdsPropID_SaveTo), new NativeLong(0), new NativeLong(NativeLong.SIZE), data);
And the usual download routine will do.