Transmit support #124
Here's what I'm thinking:
Then, to support client-side audio, …
|
There's some overdue core refactoring (#111) that I want to get in before adding any major features like transmit support, which might lock in the current architecture further. (Basically, on the receive side, the hardcoded graph design …) That said, some responses:
The picture that I had in mind was that there would be an object that is less "transceiver" and more "frequency of interest" with optionally attached receiver and transmitter. Unfortunately, I don't currently remember the rationale. It might have to do with the future of frequency database interaction (it needs to be more server-side integrated than it is).
Because GR flow graph reconfiguration is disruptive, and transmit sample timing need not have anything to do with receive sample timing, I believe it will be best to have a flow graph for each transmitter, separate from the receivers'. (Syncing them would be relevant if one wants to, say, implement a repeater, but I'm going to call that out of scope.)
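To make the "one flow graph per transmitter" idea concrete, here is a minimal sketch (not ShinySDR code; `FlowGraph` is a stand-in for `gr.top_block`, and all names are hypothetical) showing how tearing down a transmitter's graph never touches the shared receive graph:

```python
# Sketch: independent flow graphs so TX reconfiguration never disrupts RX.
# FlowGraph is a hypothetical stand-in for gr.top_block.

class FlowGraph:
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class Station:
    """Owns one receive graph and one separate graph per transmitter."""

    def __init__(self):
        self.rx_graph = FlowGraph('rx')
        self.tx_graphs = {}

    def add_transmitter(self, key):
        # Each transmitter gets its own graph; its sample timing is
        # unrelated to the receive graph's, and reconfiguring it only
        # stops/starts this one graph.
        g = FlowGraph('tx-%s' % key)
        self.tx_graphs[key] = g
        g.start()
        return g

    def remove_transmitter(self, key):
        # Tearing down a transmitter leaves the receive graph running.
        self.tx_graphs.pop(key).stop()
```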
AudioManager has the very specific job of mixing and resampling audio for many destinations. Transmitting involves neither mixing nor multiple destinations, at least in straightforward cases. Furthermore, AudioManager is going to go away with the #111 refactoring, because the dependency graph will make its job implicit.
|
I think the changes I proposed here do not actually lock in the current architecture any further; it would just add a bit of wiring to …
That's sort of what I was thinking - the Transceiver object would track the frequency information and the selected mode, with optionally attached receiver and transmitter. It sounds like in your mind tuning would involve making a new Frequency object and reattaching the receiver and transmitter to it?
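As a rough sketch of the "frequency of interest" model being discussed (all names hypothetical, not ShinySDR's actual classes): the object tracks tuning state and mode, with receiver and transmitter optionally attached, and tuning mutates it in place rather than creating a new object and reattaching:

```python
# Sketch of a "frequency of interest" object with optionally attached
# receiver and transmitter. Names are illustrative, not ShinySDR's API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FrequencyOfInterest:
    freq_hz: float
    mode: str = 'USB'
    receiver: Optional[object] = None   # attached receiver, if any
    transmitter: Optional[object] = None  # attached transmitter, if any

    def tune(self, freq_hz):
        # Tuning mutates this object in place; creating a fresh object per
        # tune would model possibly-continuous change poorly.
        self.freq_hz = freq_hz
```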
Oh, good point; I hadn't realized that. I did think about the repeater use case, and it's certainly interesting, but I think it's fine to assume that connection would be made outside GR (e.g. with a PulseAudio loopback device).
Why does AudioManager go away with #111? You still need the moral equivalent to do mixing and resampling.
|
No, that would be a bad modeling of possibly continuous change. Sorry, as I said I don't remember exactly what the rationale was. In practice I'll do whatever fits in well when the refactoring is in progress.
This is one of those under-specified things. Audio devices that might be attached to a transceiver don't care. gr-osmosdr, when used with the HackRF, will fail to switch over unless you ensure the source block is destroyed before you open the sink (and vice versa), which is why the osmosdr plugin has support for doing that. But this is independent of whether the blocks are in separate flow graphs.
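The ordering constraint above can be sketched as follows (a toy model, not the gr-osmosdr API; `HalfDuplexDevice` and its methods are hypothetical): the device refuses to open one direction until the other is fully closed, so switching must tear down first:

```python
# Sketch of the half-duplex switch-over rule: destroy the source block
# before opening the sink, and vice versa. Hypothetical stand-in classes.

class HalfDuplexDevice:
    def __init__(self):
        self.active = None  # 'rx', 'tx', or None

    def open_source(self):
        assert self.active is None, 'destroy the sink before opening the source'
        self.active = 'rx'

    def open_sink(self):
        assert self.active is None, 'destroy the source before opening the sink'
        self.active = 'tx'

    def close(self):
        self.active = None


def switch_to_tx(dev):
    # Tear down RX completely, then bring up TX; never overlap the two.
    dev.close()
    dev.open_sink()
```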
Because instead of having a thing dedicated to making audio resampling connections, each audio sink('s managing wrapper) will be able to specify "I want a sum of these receivers' audio outputs at this sample rate" and the dependency engine will construct the necessary intermediate blocks based on that specification. It's not that AudioManager's job will be replaced, but it will be distributed among generic algorithms and independent units of task-specific (audio) rules.
|
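The specification-driven construction described in the comment above might be sketched like this (purely illustrative names; the real dependency engine would emit actual GR blocks rather than tuples): each sink declares which receivers it wants summed and at what rate, and a generic function derives the intermediate resampler and adder steps:

```python
# Sketch: derive mixing/resampling blocks from a sink's declarative spec
# instead of wiring them by hand. Block descriptions here are just tuples.

def build_audio_chain(spec, receiver_rates):
    """spec: {'sources': [names], 'rate': hz} -> list of block descriptions."""
    blocks = []
    for src in spec['sources']:
        in_rate = receiver_rates[src]
        if in_rate != spec['rate']:
            # Only insert a resampler where rates actually differ.
            blocks.append(('resample', src, in_rate, spec['rate']))
    if len(spec['sources']) > 1:
        # Mixing (summing) is only needed for multiple sources.
        blocks.append(('sum', list(spec['sources'])))
    return blocks
```

For example, a 48 kHz sink summing a 48 kHz and a 96 kHz receiver would get one resampler (for the 96 kHz source) plus one adder, and a single-source sink at a matching rate would get no intermediate blocks at all.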
I don't actually see an open issue about this. Feel free to close as a dupe if I missed it.
ShinySDR has a variety of bits of transmit support (Osmo TX driver, modulators, etc.) but none of it appears to be wired up into a functional UI yet.
I'd like to take a stab at this. Do you have any notes or thoughts about how it was intended to be implemented before I start working on it?