Trying to understand code in the demo app

In this iOS speech recognition demo app, can someone help me understand why the inference is done inside a closure instead of passing the buffer's base address to the model all at once? Thank you!

DispatchQueue.global().async {
    // Run recognition on a background queue so the UI stays responsive.
    floatArray.withUnsafeMutableBytes {
        let result = self.module.recognize($0.baseAddress!, bufLength: Int32(self.AUDIO_LEN_IN_SECOND * self.SAMPLE_RATE))
        // Hop back to the main queue to update the UI with the result.
        DispatchQueue.main.async {
            self.tvResult.text = result
            self.btnStart.setTitle("Start", for: .normal)
        }
    }
}
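
For reference, here is a minimal sketch of the two patterns I'm comparing (the array name and size are hypothetical, not from the demo). My understanding is that Swift only guarantees the buffer's base address for the duration of the withUnsafeMutableBytes closure, which is why I'm asking whether the second pattern would even be safe:

import Foundation

var floatArray: [Float] = Array(repeating: 0, count: 16_000)

// Pattern 1: the pointer is used only while the closure runs (what the demo does).
floatArray.withUnsafeMutableBytes { rawBuffer in
    let ptr = rawBuffer.baseAddress!
    // ... hand ptr to the model here, before the closure returns ...
    _ = ptr
}

// Pattern 2: trying to take the base address "all at once" for later use.
var escaped: UnsafeMutableRawPointer?
floatArray.withUnsafeMutableBytes { rawBuffer in
    escaped = rawBuffer.baseAddress
}
// Using `escaped` past this point is undefined behavior: once the closure
// returns, the array's storage may be moved or reallocated, so the saved
// pointer can dangle.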