
core audio primer

  • 13-11-2010 4:56pm
    #1
    Registered Users, Registered Users 2 Posts: 143 ✭✭


    Hi,

    I've been doing some reading up on Core Audio for iOS 4 with the aim of building a little test app.

    I'm pretty confused at this point in my research with all the APIs. Ideally, what I want to know is how to extract a number of samples from two MP3s into arrays.

    Then in a callback loop I want to mix these samples together and send them to the speaker.
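    Roughly what I have in mind for the mixing step, assuming the samples have already been decoded to 16-bit PCM (the function and variable names here are just mine for illustration, not from any Apple sample):

    ```c
    #include <stdint.h>
    #include <stddef.h>

    /* Mix two buffers of 16-bit PCM samples into an output buffer.
     * Sums are widened to 32 bits and clamped so loud passages
     * clip instead of wrapping around. */
    static void mix_pcm16(const int16_t *a, const int16_t *b,
                          int16_t *out, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            int32_t sum = (int32_t)a[i] + (int32_t)b[i]; /* widen to avoid overflow */
            if (sum > INT16_MAX) sum = INT16_MAX;        /* clamp positive clip */
            if (sum < INT16_MIN) sum = INT16_MIN;        /* clamp negative clip */
            out[i] = (int16_t)sum;
        }
    }
    ```

    The idea would be to call something like this once per buffer inside the render callback, then hand the mixed buffer to the output.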

    There are examples on the Apple dev site, but I'm finding them difficult to dissect and digest. Is anybody aware of a nice stripped-down example somewhere?

    Also, I can't determine which APIs I should be using.

    There's ExtendedAudioFile and AudioFile. These seem to be the ones for extracting audio. Which one should I use?

    Is it absolutely necessary to use the mixer unit, or would I be as well off writing my own mixing code? (I want as much sample control as possible.)

    Do I need to use Audio Queue Services? I've heard that they have poor latency; is this true?


    Finally, do I have to use Audio Session Services? Would an audio app work without them? How does the audio session fit into the whole extraction-and-callback setup? Is it purely there to handle interruptions?

