
Add audb.stream() and audb.DatabaseIterator #448

Merged
merged 39 commits into main on Aug 21, 2024

Conversation

hagenw (Member) commented on Aug 15, 2024

Relates to audeering/audformat#440

Provides pseudo-streaming functionality to audb with the new function audb.stream(), which requires the user to select a table to stream. It returns the new audb.DatabaseIterator object.

The streaming functionality

  • allows reading tables that do not fit into memory
  • allows downloading media files of databases that do not fit onto the hard disk
  • provides an easy way to preview the content of a database

Example:

>>> import audb
>>> db = audb.stream("emodb", "emotion", version="1.4.1", batch_size=4, full_path=False, verbose=False)
>>> next(db)
                   emotion  emotion.confidence
file                                          
wav/03a01Fa.wav  happiness                0.90
wav/03a01Nc.wav    neutral                1.00
wav/03a01Wa.wav      anger                0.95
wav/03a02Fc.wav  happiness                0.85

It achieves this by

  1. downloading the complete table file to cache
  2. reading only a few lines (given by the batch_size argument) from the table file
  3. downloading corresponding media files on the fly

Note regarding 1.: downloading the table to cache first makes it easy to support both parquet and csv tables. If we want to stream only part of the parquet table, as proposed in audeering/audbcards#59 (comment), we can add this feature later on. Also note that for loading the csv file I'm using only pd.read_csv(), as it provides an easy solution, pyarrow.csv can fail for some csv files, and large tables are supposed to be stored as parquet files anyway.
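For illustration, reading the cached table file in batches could look roughly like the following for a parquet table. This is a minimal sketch using pyarrow.parquet directly; the file name is hypothetical and the actual code does additional post-processing:

import pyarrow.parquet as pq

# Sketch: read the cached table file in fixed-size batches
# instead of loading the whole table into memory.
parquet_file = pq.ParquetFile("db.emotion.parquet")  # hypothetical cache path
for batch in parquet_file.iter_batches(batch_size=4):
    df = batch.to_pandas()
    # ... download the media files referenced in df, then yield it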

Random table rows and buffering

audb.stream() provides the shuffle argument, which allows streaming the data in random order. As we cannot read random lines from the table files, this requires that we first read a larger part of the table into a buffer (as given by the buffer_size argument) and shuffle the buffer.
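>>> db = audb.stream("emodb", "emotion", version="1.4.1", batch_size=4, shuffle=True)

Conceptually, the buffered shuffling works like in the following sketch (a hypothetical helper, not the actual implementation; rows stands for a sequential reader over the table file):

import random

def shuffled_batches(rows, batch_size, buffer_size):
    # Sketch: collect consecutive rows into a buffer,
    # shuffle the buffer, then emit fixed-size batches from it.
    buffer = []
    for row in rows:
        buffer.append(row)
        if len(buffer) == buffer_size:
            random.shuffle(buffer)
            while buffer:
                yield buffer[:batch_size]
                buffer = buffer[batch_size:]
    random.shuffle(buffer)  # flush the remaining rows
    while buffer:
        yield buffer[:batch_size]
        buffer = buffer[batch_size:]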

Differences between audformat.Database and audb.DatabaseIterator

audb.DatabaseIterator inherits from audformat.Database and adds a __next__() method
that uses audb-internal functions like audb.load_media() to allow streaming the database.
In addition, it limits the database object to the selected table.
The biggest difference for the user is that list(db) now results in something different than before, as we also need to override __iter__() to make the iteration work:

>>> db = audb.load("emodb", version="1.4.1")
>>> list(db)[0]
'emotion'
>>> db = audb.stream("emodb", "emotion", version="1.4.1", batch_size=4, full_path=False)
>>> list(db)[0]
                   emotion  emotion.confidence
file                                          
wav/03a01Fa.wav  happiness                0.90
wav/03a01Nc.wav    neutral                1.00
wav/03a01Wa.wav      anger                0.95
wav/03a02Fc.wav  happiness                0.85
>>> # In order to list tables we need to address db.tables or db.misc_tables
>>> list(db.tables)[0]
'emotion'

But I think this should be fine for the user, as the focus is on streaming, where it is important that the following works:

[df for df in db]
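For illustration, the iteration protocol boils down to a skeleton like this (a rough sketch, not the actual code; _read_next_batch() is a hypothetical placeholder for the reading logic described above):

import audformat

class DatabaseIterator(audformat.Database):
    # Sketch: iterating yields dataframes instead of table IDs.

    def __iter__(self):
        return self

    def __next__(self):
        df = self._read_next_batch()  # hypothetical helper: next batch_size rows
        if df is None:
            raise StopIteration
        # download the media files referenced by this batch on the fly,
        # e.g. via audb.load_media()
        return df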

Loading whole dependency table

In audeering/audformat#440, I proposed to also load only the part of the dependency table that is needed for the table and the media files to download. As this requires some effort, and we also don't have an easy way of publishing a database for which the dependency table does not fit into memory, I would skip that for now.

Possible improvement of code

I decided to implement streaming support directly in audb and not first in audformat (e.g. as part of audformat.Table.get(..., start=..., samples=...)). The main reason is that for parquet files we cannot easily specify a beginning and end line, but need to iterate through the file.
As a consequence, I needed to copy some code from audformat: code for post-processing after reading from csv and parquet files, and code for preparing the reading of the csv file. This can be improved at a later stage by providing this code inside audformat as hidden methods, which we can then reuse here in audb. But for now, I would propose to go with my code and improve audformat afterwards.
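As an example of the kind of preparation and post-processing meant here (a hypothetical sketch; the column names and dtypes depend on the table's schemes):

import pandas as pd

# Sketch: read the next 4 rows of the cached csv file,
# skipping the header and the rows already consumed ...
df = pd.read_csv(
    "db.emotion.csv",  # hypothetical cached file
    header=None,
    names=["file", "emotion", "emotion.confidence"],
    skiprows=1,        # header + previously read rows
    nrows=4,           # batch_size
)
# ... and restore the audformat index and scheme dtypes
df = df.set_index("file")
df["emotion"] = df["emotion"].astype("category")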

Docstrings

Complete docstring of the new audb.stream() function:

[screenshot of the audb.stream() docstring]

Beginning of the docstring of the new audb.DatabaseIterator class:

[screenshot of the audb.DatabaseIterator docstring]

New sub-section in usage documentation under "Load a database" section:

[screenshot of the documentation sub-section]

hagenw marked this pull request as draft on August 15, 2024
hagenw marked this pull request as ready for review on August 19, 2024
ChristianGeng (Member) left a comment
All points that were raised have been addressed in separate conversations.
As the entire stream module and its functionality is new, there needn't be additional checks with respect to integration with other packages.

Therefore I will approve the MR.

hagenw merged commit 470091f into main on Aug 21, 2024
8 checks passed
hagenw deleted the stream branch on August 21, 2024