Multi encoder #100
base: main
Conversation
Jitpack is back after the fork-sync
some tweaks to project files to allow integration
.DS_Store
.vscode
Whoops, will fix this.
fun clone(): Frame {
    val newRawBuffer = rawBuffer.duplicate() // Creates a new buffer that shares the content of 'rawBuffer'.
    // Assuming the MediaFormat object can be shared. If not, you need to deep copy it as well.
Comments courtesy of ChatGPT; will remove.
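As an aside on that clone: ByteBuffer.duplicate() shares the underlying bytes with the source, so a clone that must be consumed independently of the original would need a real copy. A minimal sketch of such a helper (hypothetical, not code from this PR):

```kotlin
import java.nio.ByteBuffer

// Hypothetical helper: allocates fresh storage and copies the remaining bytes,
// unlike duplicate(), which only creates a new view onto shared memory.
fun ByteBuffer.deepCopy(): ByteBuffer {
    val copy = if (isDirect) ByteBuffer.allocateDirect(remaining()) else ByteBuffer.allocate(remaining())
    copy.put(duplicate()) // read through a duplicate so this buffer's position is untouched
    copy.flip() // make the copy ready for reading from position 0
    return copy
}
```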
@@ -150,6 +151,11 @@ abstract class MediaCodecEncoder<T : Config>(
        Handler(handlerThread.looper)
    }
}

private fun releaseHandler() {
Currently I believe the MediaCodecEncoder leaks a thread every time the user hits start/stop. This code is probably not optimal; it might be better to reuse the same handler on subsequent starts.
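A minimal sketch of the release, assuming nullable handlerThread/handler fields (the actual fields in MediaCodecEncoder may be shaped differently):

```kotlin
// quitSafely() lets already-queued messages drain, after which the
// HandlerThread exits instead of being leaked on the next start/stop cycle.
private fun releaseHandler() {
    handlerThread?.quitSafely()
    handlerThread = null
    handler = null
}
```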
if (isStopped) {
    return
}
isStopped = true
- mediaCodec?.setCallback(null)
- mediaCodec?.signalEndOfInputStream()
+ if (hasInputSurface) {
+     mediaCodec?.signalEndOfInputStream()
+ }
+ mediaCodec?.flush()
+ mediaCodec?.stop()
+ mediaCodec?.setCallback(null)
+ releaseHandler() // prevent thread leak
Per offline discussion, this code remedies a bug where stopStream always resulted in mediaCodec throwing an exception, either in setCallback (the codec must be stopped before the call) or in signalEndOfInputStream (not allowed for a non-surface encoder).
import io.github.thibaultbee.streampack.streamers.interfaces.settings.IVideoSettings
import java.util.concurrent.ExecutorService

data class MultiVideoEncoderTargetInfo(
Obviously this code is pretty similar to a lot of VideoMediaCodecEncoder.CodecSurface. Perhaps someday the two could be unified.
@@ -0,0 +1,2 @@
jdk:
Yes, I will remove this again.
private val rtmpMuxer: IMuxer = FlvMuxer(writeToFile = false)
private val fileMuxer: IMuxer = MP4Muxer()

private var rtmpAudioStreamId: Int? = null
private var rtmpVideoStreamId: Int? = null
private var fileAudioStreamId: Int? = null
private var fileVideoStreamId: Int? = null

// Keep video configuration separate for rtmp and file
private var rtmpVideoConfig: VideoConfig? = null
private var fileVideoConfig: VideoConfig? = null
All these things, instead of being two of everything, should be grouped into a per-destination object that there are two of; see the sketch below.
If done right, this would eventually lead to a CameraMultiStreamer that could use any N streaming destinations.
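A sketch of that grouping, with hypothetical names that are not taken from this PR:

```kotlin
// One bundle per output destination, replacing the parallel rtmp*/file* fields.
data class StreamDestination(
    val muxer: IMuxer,
    var audioStreamId: Int? = null,
    var videoStreamId: Int? = null,
    var videoConfig: VideoConfig? = null,
)

// A CameraMultiStreamer could then hold any number of destinations:
val destinations = listOf(
    StreamDestination(muxer = FlvMuxer(writeToFile = false)),
    StreamDestination(muxer = MP4Muxer()),
)
```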
private val audioEncoderListener = object : IEncoderListener {
    override fun onInputFrame(buffer: ByteBuffer): Frame {
        return audioSource.getFrame(buffer)
    }

    override fun onOutputFrame(frame: Frame) {
        val fileFrame = frame.clone()
        rtmpAudioStreamId?.let {
            try {
                this@CameraDualLiveStreamer.rtmpMuxer.encode(frame, it)
            } catch (e: Exception) {
                throw StreamPackError(e)
            }
        }
        fileAudioStreamId?.let {
            try {
                this@CameraDualLiveStreamer.fileMuxer.encode(fileFrame, it)
            } catch (e: Exception) {
                throw StreamPackError(e)
            }
        }
    }
}
Per offline discussion: this construction implies that the audio encoding (and thus the audio settings) is shared between the different output destinations. That's something I can live with for my needs, since it allows a much simpler implementation.
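With the hypothetical StreamDestination grouping sketched earlier, this fan-out would collapse into a loop over destinations. A sketch (clones are taken before any encode call, so one muxer consuming the buffer cannot disturb what the others see):

```kotlin
override fun onOutputFrame(frame: Frame) {
    // First destination reuses the original frame; every other one gets a clone.
    val frames = destinations.mapIndexed { i, _ -> if (i == 0) frame else frame.clone() }
    destinations.forEachIndexed { i, dest ->
        dest.audioStreamId?.let { streamId ->
            try {
                dest.muxer.encode(frames[i], streamId)
            } catch (e: Exception) {
                throw StreamPackError(e)
            }
        }
    }
}
```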
@@ -91,8 +91,10 @@ class FullFrameRect(var program: Texture2DProgram) {
 *
 * The appropriate EGL context must be current.
 */
- fun changeProgram(program: Texture2DProgram) {
-     this.program.release()
+ fun changeProgram(program: Texture2DProgram, releasePrevious: Boolean = true) {
In my implementation, each encoding path has its own FullFrameRect (since the resolutions might differ), but they all share the same program, so I don't want them to nuke the program out from under each other.
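A usage sketch with hypothetical variable names: each path keeps its own FullFrameRect, and neither releases the shared program when switching:

```kotlin
// releasePrevious = false because the "previous" program is the same shared instance.
rtmpFullFrameRect.changeProgram(sharedProgram, releasePrevious = false)
fileFullFrameRect.changeProgram(sharedProgram, releasePrevious = false)
```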
- class EglWindowSurface(private val surface: Surface, useHighBitDepth: Boolean = false) {
  private var eglDisplay: EGLDisplay = EGL14.EGL_NO_DISPLAY
  private var eglContext: EGLContext = EGL14.EGL_NO_CONTEXT
+ class EglWindowSurface(
I broke EglWindowSurface and EglDisplayContext into two different classes because I need to create the texture outside the scope of any particular surface context.
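A sketch of what the split might look like; this is inferred from the comment, not copied from the PR. The display/context pair gets its own class so GL objects such as textures can be created before any window surface exists:

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLContext
import android.opengl.EGLDisplay

// Hypothetical holder for the EGL display + context, independent of any Surface.
class EglDisplayContext {
    val eglDisplay: EGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
    val eglConfig: EGLConfig
    val eglContext: EGLContext

    init {
        val version = IntArray(2)
        check(EGL14.eglInitialize(eglDisplay, version, 0, version, 1)) { "eglInitialize failed" }
        val attribs = intArrayOf(
            EGL14.EGL_RED_SIZE, 8,
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL14.EGL_NONE
        )
        val configs = arrayOfNulls<EGLConfig>(1)
        val numConfigs = IntArray(1)
        check(EGL14.eglChooseConfig(eglDisplay, attribs, 0, configs, 0, 1, numConfigs, 0)) { "eglChooseConfig failed" }
        eglConfig = configs[0]!!
        eglContext = EGL14.eglCreateContext(
            eglDisplay, eglConfig, EGL14.EGL_NO_CONTEXT,
            intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE), 0
        )
    }
}

// EglWindowSurface would then take an EglDisplayContext plus a Surface, owning
// only the eglCreateWindowSurface / eglMakeCurrent / eglSwapBuffers lifecycle.
```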
@@ -25,6 +25,8 @@ import io.github.thibaultbee.streampack.data.AudioConfig
import io.github.thibaultbee.streampack.data.BitrateRegulatorConfig
import io.github.thibaultbee.streampack.data.VideoConfig
import io.github.thibaultbee.streampack.ext.rtmp.streamers.AudioOnlyRtmpLiveStreamer
//import io.github.thibaultbee.streampack.ext.rtmp.streamers.CameraRtmpLiveStreamer
import io.github.thibaultbee.streampack.ext.rtmp.streamers.CameraDualLiveStreamer
Hmm, weird; this would probably break the demo app. Should remove.
Force-pushed from c86b764 to 994b995
Force-pushed from 6a5e4ad to 7e712ac
As a lot of code has changed for v3, this won't be applicable anymore.
Multiple output is a feature of the version after v3 (probably v3.1). Work in progress there: https://github.com/ThibaultBee/StreamPack/tree/dev_v3_1 with the
Here is my first cut at the multi-encoder, along with a dual streamer that uses it to create both RTMP and file streams. Much of this code is bespoke for my application and will need subsequent modification.
Some of the code is generally useful and/or addresses bugs I found along the way.