
Darius Petermann February 2016

iOS: Threading issue when calling setNeedsDisplay inside a callback function

Here is the context: I've been developing an audio-related app for a while now, and I've sort of hit a wall; I'm not sure what to do next.

I've recently implemented in the app a custom class that plots an FFT display of the audio output. This class is a subclass of UIView, meaning that every time I need to plot a new FFT update I need to call setNeedsDisplay on my instance of the class with new sample values.

As I need to plot a new FFT for every frame (frame ≈ 1024 samples), the display function of my FFT view gets called very often (1024 / sampleRate ≈ 0.0232 seconds between frames). The sample calculation itself runs at 44,100 samples per second. I am not really experienced with managing threading in iOS, so I read a little bit about it, and here is how I have done it.

How it has been done: I have a subclass of NSObject, "AudioEngine.h", that takes care of all the DSP processing in my app, and this is where I drive my FFT display. All the sample values are calculated and assigned to my FFT subclass inside a dispatch_get_global_queue block, as the values need to be constantly updated in the background. The setNeedsDisplay method is called once the sample index has reached the maximum frame number, and this is done inside a dispatch_async(dispatch_get_main_queue()) block.

In "AudioEngine.m"

for (k = 0; k < nchnls; k++) {
    buffer = (SInt32 *) ioData->mBuffers[k].mData;

    if (cdata->shouldMute == false) {
        buffer[frame] = (SInt32) lrintf(spout[nsmps++] * coef);

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            @autoreleasepool {
                // FFT display init here as a singleton
                SpectralView *specView = [SpectralView sharedInstance];

                //Here is created a pointer t


bbum February 2016

As written, your SpectralView* needs to be fully thread safe.

Your for() loop first shoves frame/sample processing off to the high-priority concurrent queue. Since that dispatch is asynchronous, it returns immediately, at which point your code enqueues a request on the main thread to update the spectral view's display.

This pretty much guarantees that the spectral view will be updating the display at the same time as the background processing code is updating the spectral view's state.
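One way to close that race is to hand the main queue an immutable snapshot instead of shared mutable state. This is only a sketch: the `samples` property, `fftBuffer`, and `frameCount` are illustrative names, not taken from the question's code.

    // Copy the frame *before* hopping queues, so the background writer
    // and the main-thread reader never touch the same mutable buffer.
    NSData *snapshot = [NSData dataWithBytes:fftBuffer
                                      length:frameCount * sizeof(float)];
    dispatch_async(dispatch_get_main_queue(), ^{
        SpectralView *specView = [SpectralView sharedInstance];
        specView.samples = snapshot;      // the view now owns its own copy
        [specView setNeedsDisplay];
    });

Because the view only ever reads the snapshot, the background code is free to keep writing into the live buffer while the previous frame is being drawn.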

There is a second issue: your code ends up parallelizing the processing of all channels. In general, unmitigated concurrency is a recipe for slow performance. You are also going to trigger an update on the main thread for each channel, regardless of whether or not that channel's processing has completed.
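A sketch of what mitigated concurrency could look like here, assuming a hypothetical processChannel() DSP routine and a copyCurrentFrame() snapshot helper: all channels are processed in order on one serial queue, and exactly one display update is enqueued per frame, after every channel is done.

    // One serial queue serializes the DSP work instead of fanning it out
    // to the concurrent global queue once per sample.
    static dispatch_queue_t fftQueue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        fftQueue = dispatch_queue_create("fft.processing", DISPATCH_QUEUE_SERIAL);
    });

    dispatch_async(fftQueue, ^{
        for (int k = 0; k < nchnls; k++) {
            processChannel(k, ioData);          // hypothetical per-channel DSP
        }
        NSData *frame = copyCurrentFrame();     // hypothetical snapshot helper
        dispatch_async(dispatch_get_main_queue(), ^{
            SpectralView *specView = [SpectralView sharedInstance];
            specView.samples = frame;           // assumed property on the view
            [specView setNeedsDisplay];
        });
    });

This reduces the main-thread traffic from one update per channel to one update per frame.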

The code needs to be restructured. You really should split the model layer from the view layer. The model layer could either be written to be thread safe, or, during processing, you could grab a snapshot of the data to be displayed and toss that at the SpectralView. Alternatively, your model layer could expose an isProcessing flag that the SpectralView keys off of to know that it shouldn't be reading data.
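The isProcessing variant might look like this. It is a sketch only: AudioModel, its properties, and the way the view reaches the model are invented for illustration.

    // Model layer: atomic properties so both queues can read them safely.
    @interface AudioModel : NSObject
    @property (atomic) BOOL isProcessing;
    @property (atomic, strong) NSData *latestFrame;
    @end

    // View layer: skip any draw that would race with an in-flight write.
    - (void)drawRect:(CGRect)rect {
        AudioModel *model = self.model;     // however the view gets its model
        if (model.isProcessing) {
            return;                         // mid-write; draw on the next update
        }
        NSData *frame = model.latestFrame;  // atomic read of the last snapshot
        // ... plot the FFT magnitudes in `frame` ...
    }

The snapshot approach is simpler to get right than the flag, since an immutable copy removes the shared state entirely rather than coordinating access to it.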

Post Status

Asked in February 2016
Viewed 3,990 times
Voted 8
Answered 1 time

