
Releases: gnuradio/newsched

Release v0.5.0

18 Jul 20:29

This release brings a significant change to the block API and some structural changes regarding shared_ptr usage throughout.

Also, this should be the last release of newsched, as development is moving to the dev-4.0 branch of the gnuradio repository.

Project-wide

  • work() signature changed to pass a reference to a work_io type that wraps the input and output structs (see the sketch below)
  • port becomes a unique pointer owned by the block
  • buffer becomes a unique pointer owned by the port object (output port)
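
For a rough sense of the shape of this change on the Python side, here is a minimal sketch of a sync block work() receiving a single wrapped I/O argument instead of separate input and output lists. The wio name and its inputs()/outputs() accessors are assumptions for illustration, not confirmed API; compare with the (inputs, outputs) form shown in the v0.2.0 notes below.

def work(self, wio):
    # NOTE: rough sketch only - `wio` and its `inputs()`/`outputs()` accessors are
    # assumed names for the wrapped work_io object, not confirmed API
    inbuf = self.get_input_array(wio.inputs(), 0)      # numpy view of input port 0
    outbuf = self.get_output_array(wio.outputs(), 0)   # numpy view of output port 0
    n = wio.outputs()[0].n_items
    outbuf[:n] = inbuf[:n]                             # copy input to output
    wio.outputs()[0].produce(n)
    return gr.work_return_t.WORK_OK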

Blocks

  • Tag strobe block
  • Delay block now propagates tags

Modtool

  • Only process block directory if yml file exists

CI

  • Ubuntu 22 worker
  • Enforce clang formatting

Release v0.4.0

23 Jun 19:56
Compare
Choose a tag to compare
  • Fixes a critical bug in the grc bindings that was squishing the file_format tag onto the previous line, leaving us with no usable blocks
  • Removes some transitive boost includes brought in with logger.h
  • Add custom bindings to templated blocks
  • Restructures some of the utility scripts
  • Hier Blocks - some major structural changes
  • Python blocks - use correct format descriptors
  • Soapy - rtlsdr and hackrf as hier blocks
  • Grc: integration with pyqtgraph as flowgraph plotting option
  • Blockbuilder: validation with jsonschema
  • Updated templates for pyshell to handle callback methods
  • Grc: Auto-populate the enums
  • Scheduler/NBT: Don't process null input/output bufs
  • Support for optional ports
  • Hier blocks with Message Ports
  • Updated benchmarking flowgraphs and load block

Release v0.3.0

29 Apr 18:04
Compare
Choose a tag to compare

It is time yet again to tag the newsched code and highlight the recent work. We are marching toward migrating newsched into the dev-4.0 branch of gnuradio, and it seems closer (maybe close enough??). Along the way, the documentation has been updated significantly, so please take a look:

Summary of proposed features for GR 4.0: https://wiki.gnuradio.org/index.php?title=GNU_Radio_4.0_Summary_of_Proposed_Features
User Tutorial: https://gnuradio.github.io/newsched/user_tutorial/01_Intro
Developer Tutorial: https://gnuradio.github.io/newsched/dev_tutorial/01_Intro

Much of the work going on in the project has centered around

  • Improving stability and usability
  • Separating out the Kernel Library from the block modules
  • RPC Support and mechanisms for distributed operation
  • New (ported) blocks

Distributed Node Support

At its simplest, a distributed flowgraph is just a collection of individual flowgraphs connected by some serialized interface. We already support this in GNU Radio with ZMQ blocks, but coordinating the whole thing requires an extra level of configuration and management.

Wouldn't it be nice to create one flowgraph, then tell GNU Radio that I want this part of the flowgraph to run here and that part to run over on those resources, and it seamlessly configures all the pieces.

The main components in this release that enable distributed node support are:

  • Separable runtime component that can operate differently from the "default" runtime
  • Serialization of stream and message data (built in by default for every block)
  • Runtime proxy to support signaling between flowgraph partitions
  • A generalized RPC interface - by using block parameters, many things come for "free"

The first step toward enabling a distributed runtime was separating the "runtime" from the "flowgraph". The flowgraph object is now really just a connection of blocks and doesn't control the flow of execution. In keeping with the theme of not trying to solve everyone's problem, allowing different runtime components for different purposes lets distributed operation avoid impacting the common use case of GNU Radio running entirely on one compute node.

Serialization is handled very easily by ZMQ blocks, and runtime proxy objects are added to tell the other flowgraph partitions to start, stop, and so on.

Here is an example flowgraph that would run under the "distributed" runtime:

# imports for this example (module paths assumed from the gnuradio python namespace)
import os
from gnuradio import gr, blocks, math, distributed

nsamples = 1000
# input_data = [x%256 for x in list(range(nsamples))]
input_data = list(range(nsamples))

# Blocks are created locally, but will be replicated on the remote host
src = blocks.vector_source_f(input_data, False)
cp1 = blocks.copy()
mc = math.multiply_const_ff(1.0)
cp2 = blocks.copy()
snk = blocks.vector_sink_f()

fg1 = gr.flowgraph()
fg1.connect([src, cp1, mc, cp2, snk])

with distributed.runtime(os.path.join(os.path.dirname(__file__), 'test_config.yml')) as rt1:
    # There are 2 remote hosts defined in the config yml
    #  We assign groups of blocks where we want them to go
    rt1.assign_blocks("newsched1", [src, cp1, mc])
    rt1.assign_blocks("newsched2", [cp2, snk])
    rt1.initialize(fg1)

    # These calls on the local block are serialized to the remote block
    # This in effect means the local blocks are acting as proxy blocks
    mc.set_k(2.0)
    print(mc.k())

    # Run the flowgraph
    rt1.start()
    rt1.wait()

Kernel Library

Another major restructuring in this release is the "kernel library". Scattered throughout the GNU Radio codebase, though very inconsistently, are "kernel" namespaces that hold the non-block-work type of code. This includes things like an fft filter object that isn't necessarily tied to gnuradio flowgraph operation but covers more general DSP and math operations.

Separating this out into its own library accomplishes a few things:

  1. Reduces inter-module dependencies
  2. Keeps block work functions clean
  3. The library itself can be useful outside of GNU Radio

Usability

The yaml file structure has evolved to be simpler and to get the block developer to the work() function more quickly.

Templated blocks now have simplified type options that mimic the SigMF data type naming. Also, multi-way templating is made easier with the type_inst field, which limits and labels which combinations of types get instantiated. Here is an example from the iir_filter block:

typekeys:
    - id: T_IN
      type: class
      options:
          - cf32
          - rf32
    - id: T_OUT
      type: class
      options:
          - cf32
          - rf32
    - id: TAP_T
      type: class
      options:
          - cf64
          - cf32
          - rf64
          - rf32
type_inst:          
  - value: [rf32, rf32, rf64]
    label: Float->Float (Double Taps)
  - value: [cf32, cf32, rf32]
    label: Complex->Complex (Float Taps)
  - value: [cf32, cf32, rf64]
    label: Complex->Complex (Double Taps)
  - value: [cf32, cf32, cf32]
    label: Complex->Complex (Complex Taps)
  - value: [cf32, cf32, cf64]
    label: Complex->Complex (Complex Double Taps)

We can then rely on the magic of code generation to get all this nicely displayed in GRC

Known issues

  • Hier blocks not entirely working - possibly related to how the pybind11 bindings are set up

What's Next

The process up to this point has been

  • Try and port a block from GNU Radio
  • Find and fix limitations in the design and various APIs that arise as a result

A couple of main areas I'd like to target next would be

  • PDU based flowgraphs
    • the PMTF library and newsched message port implementation offer some huge speedups with PDUs
  • Broader distributed flowgraph runtimes and examples
  • Porting of more blocks
  • More heterogeneous processing examples
  • More benchmark performance examples
  • Generalized mechanism for callbacks similar to parameter_query and parameter_change

Release v0.2.0

09 Dec 20:31

It is time to make another newsched release, this time with some more exciting features:

GRC Integration

Since all the information for the block interfaces is in the block .yml file, it only makes sense to start reaping the benefits and getting things like the GRC bindings for free. One of the benefits is that we can include a selectable domain in the generation - see the following example, where blocks can be switched from the CUDA to the CPU domain, given that these domain definitions are installed in-tree.

(screenshot: a GRC block with its domain switched between CUDA and CPU)

Python Blocks

There are now two ways to make python blocks within newsched. Inside the work function, things are slightly different because we are not passing in numpy arrays but block_work_io objects. There are convenience functions that can be used to get numpy arrays from these, and, thanks to the custom buffer interface, other convenience functions can provide other representations such as cupy or pytorch arrays.

Arbitrary python blocks

This is the method that most resembles current GNU Radio - a python block inherits from e.g. gr.sync_block and implements the work function, which is called from the scheduler through pybind11 embedding. Notice that instead of io_signature, we instantiate ports with the port API. See qa_python_block.py for an example.

class add_2_f32_1_f32(gr.sync_block):
    def __init__(self, dims=[1]):
        gr.sync_block.__init__(
            self,
            name="add 2 f32")

        self.add_port(gr.port_f("in1", gr.INPUT, dims))
        self.add_port(gr.port_f("in2", gr.INPUT, dims))
        self.add_port(gr.port_f("out", gr.OUTPUT, dims))

    def work(self, inputs, outputs):
        noutput_items = outputs[0].n_items
        
        outputs[0].produce(noutput_items)

        inbuf1 = self.get_input_array(inputs, 0)
        inbuf2 = self.get_input_array(inputs, 1)
        outbuf1 = self.get_output_array(outputs, 0)

        outbuf1[:] = inbuf1 + inbuf2

        return gr.work_return_t.WORK_OK

In the case of a CUDA processing block, we could also use some convenience methods found in gateway_cupy.py to get the array zero-copied from the CUDA device memory

In-tree python blocks

Using the block yml for code generation, we get all the benefits of the underlying block structure, such as block parameters accessible from python. Note that not all of the block parameter functionality is accessible through python yet, but this is the intent of making python blocks this way. This also allows a yml file to define multiple python implementations for a single block. For example, you might have a numpy and a cupy implementation. GRC integration for this is not yet complete. See blocklib/math/add/ or blocklib/math/multiply_const for examples.

The usage here would be

from gnuradio import math
blk = math.numpy.add_ff

and the structure in-tree would have the py block under a directory named after the implementation

In the yml, the implementation would be added with a lang specified:

implementations:
-   id: cpu
-   id: cuda
-   id: numpy
    lang: python

Still a few more kinks to figure out to make this seamless

Soapy Blocks

Since the gr-soapy blocks have multiple GRC definitions for a single base source block, this required a somewhat different approach to how grc files are generated without duplicating the entire implementation. Right now, there is a monster yml file that holds the different information for the grc outputs.

QT GUI Blocks

QT blocks have been cleaned up to work with the latest GRC generation


Here is the full changelog:

meson build

  • Add .gitignore explicitly where autogeneration is expected, rather than using a global filter

runtime

  • Allow a default itemsize of 0 to be connected to anything
    • This allows, e.g., a copy block to be connected with no templating or setting of the item size (see the sketch after this list)
    • Such a block will take on the itemsize of the first connected block it finds
  • More generic factory interface using yaml string for scheduler
  • Additional logic to flush the scheduler in the sequence of events when a block starts the DONE process
  • Blocks now have a default param_update message port that can change any exposed parameter via a pmtf::map
  • Further removal of Boost::format
  • Python Block interface
    • Two methods to create python blocks
      • Through the "gateway" wrapping, arbitrarily create a python block
      • Through the block yml with python lang implementation
        • This gives access to more inherited block features and tighter coupling
  • Add default cmd message port to all blocks
  • Generic factory interface using yaml string configuration
  • Prefix path loading relative path to lib to give correct gr::prefix()
  • Moved implementation out of block.hh
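
To illustrate the itemsize-0 behavior noted in the first item above, here is a minimal sketch that reuses only the flowgraph API already shown in the v0.3.0 example earlier in these notes:

# A copy block created with no templating or item size; it takes on the itemsize
# of the first connected block it finds (here, the float vector_source_f).
from gnuradio import gr, blocks

src = blocks.vector_source_f(list(range(100)), False)
cp = blocks.copy()           # no item size specified
snk = blocks.vector_sink_f()

fg = gr.flowgraph()
fg.connect([src, cp, snk])   # cp takes on the itemsize of src
# (running the flowgraph is omitted here; see the distributed runtime example above)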

qtgui

  • Update fft and filter blocks to allow for functional qtgui

grc

  • Update the domain property of a port to be an evaluated parameter
  • Selectable domains automatically generated from the block yml
    • grc file generation automatically includes an enum of the specified implementations as domains
  • Supports optional tags on ports
  • Port ID from the yml is rendered into the grc file
  • Port in recent changes from gnuradio

blocks

  • Fixed bug in throttle that was causing flowgraph to hang

soapy

  • Restructure the code generation to allow multiple GRC files from a single block yaml
  • Generate a .grc block for RTL-SDR and HackRF

Scheduler-NBT

  • Update the logic to "kick" the scheduler when input has been blocked

Release v0.1.1

12 Nov 00:00

Didn't take long to require a patch from the first release

runtime

  • Propagate dependencies through meson
    • not having this was causing build issues on Fedora

blocklib

  • Adds some missing include files

Release v0.1.0

11 Nov 18:18

Development on newsched has been ongoing for over a year, so the codebase
has evolved rapidly in that time - thus there are no details for this first
changelist. Just consider this the first drop.

Core Features

  • Modular Scheduler Framework
    • interfaces based on a single input queue
    • default scheduler with N blocks/thread
  • Custom Buffers
  • YAML-driven Block Workflow
  • Consolidated Parameter Access Mechanisms
  • Simplified Block APIs

Detailed documentation can be found here

With this release of newsched, you can easily create your own blocks, custom
buffers, and even your own scheduler if you are so inclined

Special thanks to Bastian Bloessl and Marcus Müller for leading the effort
to architect the runtime and provide guidance as to the design decisions

Also want to acknowledge the Scheduler Working Group, who have consulted and provided
feedback and ideas on a regular basis about design decisions. I apologize
if I have left anyone out here, but another special thanks to: Seth Hitefield,
Jeff Long, David Sorber, Mike Piscopo, Jacob Gilbert, Marc Lichtman, Philip Balister,
Jim Kulp, Wylie Standage, Garrett Vanhoy, John Sallay, and all the people associated
with the DARPA DSSoC program who shared their research and gave valuable insight.

There is much work left to do, so please reach out in the #scheduler room on
chat.gnuradio.org if you would like to get involved.