I’m currently trying to integrate the following code to retrieve camera intrinsics from the CMSampleBuffer to compute the field of view (FOV):
```swift
if let captureConnection = videoDataOutput.connection(with: .video) {
    captureConnection.isEnabled = true
    // Enabling delivery on an unsupported connection raises an exception, so check first.
    if captureConnection.isCameraIntrinsicMatrixDeliverySupported {
        captureConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
```
```swift
import AVFoundation
import simd

nonisolated func computeFOV(_ sampleBuffer: CMSampleBuffer) -> Double? {
    // The intrinsic matrix is attached to the sample buffer as CFData.
    guard let camData = CMGetAttachment(
        sampleBuffer,
        key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
        attachmentModeOut: nil
    ) as? Data else { return nil }

    let intrinsics: matrix_float3x3? = camData.withUnsafeBytes { pointer in
        if let baseAddress = pointer.baseAddress,
           pointer.count >= MemoryLayout<matrix_float3x3>.size {
            return baseAddress.assumingMemoryBound(to: matrix_float3x3.self).pointee
        }
        return nil
    }
    guard let intrinsics = intrinsics else { return nil }

    let fx = intrinsics[0][0]     // focal length in pixels; columns are [fx 0 0], [0 fy 0], [cx cy 1]
    let w = 2 * intrinsics[2][0]  // principal point x is roughly half the image width
    // Horizontal FOV = 2 * atan(width / (2 * fx)), in radians.
    return Double(2 * atan2(w, 2 * fx))
}
```
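As a sanity check on the formula, here is the same computation evaluated with made-up numbers (the focal length and principal point below are illustrative, not values from a real device):

```swift
import Foundation

// Illustrative values only: fx = 1600 px, cx = 960 px (i.e. a 1920 px wide frame).
let fx: Float = 1600
let cx: Float = 960
let fovRadians = 2 * atan2(2 * cx, 2 * fx)       // 2 * atan(1920 / 3200) ≈ 1.081 rad
let fovDegrees = Double(fovRadians) * 180 / .pi  // ≈ 61.9°, plausible for a wide camera
```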
However, I’m not very familiar with WebRTC on iOS, and I’m wondering where I can find the typical captureOutput(_:didOutput:from:) delegate callback that receives the CMSampleBuffer in the Sources/StreamVideo package. I would appreciate any guidance or suggestions on where to integrate this functionality into the existing codebase.
Thanks for your help!
Best regards
If possible, how can you achieve this currently?
Maybe it's possible, but I'm not sure.
What would be the better way?
I don't know right now.
That's a really interesting question. First things first: there is no way to capture frames yourself and pass them to the VideoCapturer. That being said, you can get access to the captured frames in order to perform any processing/analysis you need by providing your own AVCaptureVideoDataOutput. You can do so by calling try await call.addVideoOutput(...). A rough sketch of that wiring is below.
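In case it helps, here is a minimal sketch of that wiring, assuming addVideoOutput accepts an AVCaptureVideoDataOutput directly (please check the exact signature in the SDK version you're on); computeFOV is the function from the question above, and the class and queue names are placeholders:

```swift
import AVFoundation
import StreamVideo

// Hypothetical delegate that receives sample buffers from the output
// we hand to the SDK and runs the FOV computation on each frame.
final class IntrinsicsObserver: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(
        _ output: AVCaptureOutput,
        didOutput sampleBuffer: CMSampleBuffer,
        from connection: AVCaptureConnection
    ) {
        if let fov = computeFOV(sampleBuffer) {
            print("Horizontal FOV: \(fov * 180 / .pi)°")
        }
    }
}

let observer = IntrinsicsObserver()
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.setSampleBufferDelegate(
    observer,
    queue: DispatchQueue(label: "video.intrinsics.queue")
)

// Assumption: addVideoOutput takes the AVCaptureVideoDataOutput as-is,
// and `call` is your active Call object. Once the output is attached to
// the capture session, enable intrinsic matrix delivery on its
// connection as in the snippet from the question.
try await call.addVideoOutput(videoDataOutput)
```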
What are you trying to achieve?