Audio Session on iOS


    #1106
    mansiva
    Participant

    Hello there,

    I was wondering what settings you used for the AVAudioSession on iOS.

    I’m asking because I’ve been running into this issue after I changed the category in order to allow sound to play regardless of the silent switch: https://github.com/pbakondy/cordova-plugin-speechrecognition/issues/23

    I’ve tried a few different settings, but according to the Apple docs this is what I should be doing for the type of app I’m developing:

    NSError *err = nil;
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                            mode:AVAudioSessionModeSpokenAudio
                                         options:AVAudioSessionCategoryOptionDefaultToSpeaker
                                           error:&err];

    However, if I set this on app start, then after speech recognition the sound reroutes from the speakers to the earpiece, so I’m assuming that either your plugin or the OS changes the audio session settings.
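    One way to confirm when that happens is to log route changes. Here is a minimal sketch (the function name is just a placeholder) that prints the output port, category and mode every time the system changes the audio route:

    #import <AVFoundation/AVFoundation.h>

    // Logs the output port, category and mode whenever the system changes the
    // audio route, so you can see exactly when the session gets reconfigured.
    void iOSAudio_logRouteChanges(void)
    {
        [[NSNotificationCenter defaultCenter]
            addObserverForName:AVAudioSessionRouteChangeNotification
                        object:nil
                         queue:[NSOperationQueue mainQueue]
                    usingBlock:^(NSNotification *note) {
                        AVAudioSession *session = [AVAudioSession sharedInstance];
                        for (AVAudioSessionPortDescription *output in session.currentRoute.outputs) {
                            NSLog(@"Route changed: output=%@ category=%@ mode=%@",
                                  output.portType, session.category, session.mode);
                        }
                    }];
    }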

    Any info would be appreciated.

    Cheers

    #1115
    PiotrPiotr
    Keymaster

    Hi,
    Sorry for the late response.

    The plugin overrides those values.
    It uses AVAudioSessionCategoryRecord and AVAudioSessionModeMeasurement.
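    For reference, that combination is roughly equivalent to the following (an approximation for illustration, not the plugin’s actual source). AVAudioSessionCategoryRecord is input-only, so other audio output is silenced while it is active, and the category stays set until something changes it back:

    NSError *err = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryRecord error:&err];   // input only, output is silenced
    [session setMode:AVAudioSessionModeMeasurement error:&err];      // minimal system signal processing
    [session setActive:YES error:&err];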

    #1182
    mansiva
    Participant

    Hello Piotr,
    I had been overriding those values with the ones specified in my original post, but I recently found an issue that caused audio to be rerouted to the earpiece when calling StopRecording while nothing was spoken. I assume this is because an error was being thrown with a slight delay, which probably reset the audio session after I had manually set it.
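    A possible stop-gap (a rough sketch, untested against the plugin; the function name is arbitrary) would be to check the current route after stopping and push the output back to the speaker if it has landed on the earpiece. Note that the speaker override is only honored while the category is PlayAndRecord:

    #import <AVFoundation/AVFoundation.h>

    // If output has fallen back to the built-in receiver (earpiece),
    // override it back to the speaker. Only honored with PlayAndRecord.
    void iOSAudio_forceSpeakerIfNeeded(void)
    {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        for (AVAudioSessionPortDescription *output in session.currentRoute.outputs) {
            if ([output.portType isEqualToString:AVAudioSessionPortBuiltInReceiver]) {
                [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
            }
        }
    }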

    Any chance I could grab the source for iOS in order to modify it with the ones I need to use?

    #1373
    davey
    Participant

    I am having the same issue. Upon successful speech recognition, audio will no longer play in Unity on iOS. I’m not using any Obj-C overrides for AVAudioSession yet. Hoping for a little support on this one, it’s a killer!

    #1375
    PiotrPiotr
    Keymaster

    Could you give me some more details on that:
    – How are you using the plugin API?
    – How are you playing audio? Are you using the iOS or Unity API for that?

    An example project would be best here 🙂

    #1479
    RyanK
    Participant

    Hi,

    I am also having the same issue here. I am unable to play audio both while running speech recognition and after speech recognition. I am using the standard Unity APIs for playing audio. Audio does work before attempting speech recognition. I believe the fix for me is what mansiva suggested: changing the AVAudioSession category to AVAudioSessionCategoryPlayAndRecord.

    Any help would be appreciated, Thanks.

    #1481
    RyanK
    Participant

    Related to my question above: here is a good grid of what the different AVAudioSession categories allow, about halfway down this page: https://yahooeng.tumblr.com/post/133423436921/controlling-audio-output-on-ios-with

    #1482
    mansiva
    Participant

    What I ended up doing was forcing the AVAudioSession after stopping speech recognition. Here is the code, although you might need different options:

    #import <AVFoundation/AVFoundation.h>

    void iOSAudio_setupAudioSession()
    {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        // Playback ignores the silent switch and routes to the speaker by default;
        // DefaultToSpeaker and AllowBluetooth are only honored with PlayAndRecord/Record.
        [session setCategory:AVAudioSessionCategoryPlayback
                        mode:AVAudioSessionModeSpokenAudio
                     options:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionAllowBluetooth
                       error:nil];
        [session setActive:YES error:nil];
    }

    Create a .m file (the name doesn’t matter) and put it in Plugins/iOS. To call it, add

    [DllImport("__Internal")]
    internal static extern void iOSAudio_setupAudioSession();

    inside a class (I put it inside iOSSpeechRecognizer) and call it right after StopIfRecording. I ended up putting the call inside SpeechRecognizer.cs, so it looks like this:

    public static void StopIfRecording() {
    	Debug.Log("StopRecording...");
    	#if UNITY_IOS && !UNITY_EDITOR
    	iOSSpeechRecognizer._StopIfRecording();
    	iOSSpeechRecognizer.iOSAudio_setupAudioSession();
    	#elif UNITY_ANDROID && !UNITY_EDITOR
    	AndroidSpeechRecognizer.StopIfRecording();
    	#endif
    }

    #1484
    RyanK
    Participant

    Mansiva, thanks, this is the route we tried to go; however, we have some sound effects that need to play while the speech recording is happening. When the speech recording starts, all the audio inside Unity stops. The only real option for us is to change the audio session inside the plugin itself or to rewrite the plugin ourselves.

    Any help from the developer would be greatly appreciated, Thanks.

    #1485
    mansiva
    Participant

    As far as I know, there’s no way to play sound while speech recognition is happening. The plugin triggers the system’s speech recognition engine, which automatically cuts off all audio, so even if you were to rewrite the plugin yourself you still wouldn’t be able to play any sounds.

    The only way to do so would be to use third-party speech recognition software like Google Cloud Speech; however, it can be expensive depending on how much use your app will make of it.

    #1486
    RyanK
    Participant

    Actually, that’s not true. You can do speech recognition and play audio at the same time; that is what the AVAudioSessionCategoryPlayAndRecord category is for. I just tested this with my own plugin and it works fine. Thanks for your help and suggestions, though.
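    For anyone else trying this, the setup is along these lines (a rough sketch; the function name is arbitrary and the exact mode/options depend on your app). Whether recognition itself keeps running depends on the recognizer implementation; as noted above, this worked with a custom plugin:

    #import <AVFoundation/AVFoundation.h>

    // PlayAndRecord allows simultaneous input and output; DefaultToSpeaker keeps
    // output on the speaker instead of the earpiece.
    void iOSAudio_setupPlayAndRecord(void)
    {
        NSError *err = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeDefault
                     options:AVAudioSessionCategoryOptionDefaultToSpeaker |
                             AVAudioSessionCategoryOptionAllowBluetooth
                       error:&err];
        [session setActive:YES error:&err];
    }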

    #1487
    mansiva
    Participant

    Interesting, so you were able to get the speech recognition to work while playing audio? I always assumed that launching it forced silence no matter what AVAudioSession was used. At least that’s what happens on Android.

    #1488
    RyanK
    Participant

    Yeah, it’s working on iOS; I still need to look into the Android side of things.

    #1489
    davey
    Participant

    Great info guys, maybe some of it will help me get around the issues I’ve been having.

    #1490
    larryapple
    Participant

    Hey guys, thanks for all the info. I am having the same issues converting our language-learning app from native iOS to Unity on iOS, Android, and Magic Leap.

    In my iOS project I used AVAudioSessionCategoryPlayAndRecord and had no issues. I tried mansiva’s workaround with the KK plugin, but it did not work for me. I am using my own iOS voice synthesis plugin, and I will probably need to create my own recognition plugin from my iOS code.

    But I am never one to prefer reinventing the wheel. If I could have a copy of the source, I would be happy to try fixing this problem with minimal changes and share the result.
