ExampleCode
There are essentially two types of programs interacting with LSL: programs that provide data, such as a data source that represents a particular acquisition device, and programs that consume data (and occasionally mixtures of the two), such as a viewer, a recorder, or a program that takes some action based on real-time data.
The main difference between LSL and other data transport interfaces is that it is not connection-oriented (although the underlying implementation uses connection-oriented transport). It is closer to a "publish-subscribe" model. A data producer creates a named "outlet" object (perhaps with meta-data) and pushes samples into it without ever caring about who is listening. So functionally it is offering a stream of data plus some meta-data about it.
A data consumer who is interested in a particular type of stream queries the network for a matching one ("resolves it"), for example based on the name, type or some other content-based query and then creates an "inlet" object to receive samples from the stream. It can also separately obtain the meta-data of the stream. The consumer then pulls out samples from the stream. The sequence of samples that the consumer sees is in order, without omissions (unless a buffer's memory is exhausted), and type-safe. The data is buffered at both endpoints if there are transmission delays anywhere (e.g., interrupted network connection) but otherwise delivered as fast as the system allows.
The objects and functions to perform these tasks are provided by a single cross-platform library (liblsl). The library comes with a C header file, a C++ header file and wrapper classes for other languages.
Note: if you have trouble establishing communication between these programs across different computers (especially on Windows), take a look at the NetworkConnectivity page and read the Network Troubleshooting section.
It is recommended that you clone the repository to get the respective code (or check the SDK mirror at SCCN). The documentation is at the following locations:
- C: C header file
- C++: C++ header file
- Python: pylsl module
- Java: JavaDocs
- C#: LSL module
- MATLAB: class files.
- Other languages (R, Octave, Ruby, Lua, Perl, Go): SWIG interfaces (the C or C++ header file is the API reference).
The API documentation covers all classes, functions and types and should hopefully leave no questions unanswered. Note that a typical application will only need a small subset of the API (as used in the example programs).
These two example programs illustrate the bread-and-butter use of LSL, as found in almost any device module that comes with the distribution:
These two example programs illustrate a more special-purpose use case, namely sending arbitrary string-formatted data at irregular sampling rate. Such streams are used by programs that produce event markers, for example:
- Sending a stream of strings with irregular timing.
- Receiving a stream of strings with irregular timing.
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These two example programs show the minimal amount of code needed to get a C++ program linked against LSL:
These two example programs demonstrate how to write more complete LSL clients in C++ (they are 1:1 equivalents of the corresponding C programs):
These two programs transmit their data at the granularity of chunks instead of samples. This is mostly a convenience matter, since inlets and outlets can be configured to automatically batch samples into chunks for transmission. Note that for LSL the data is always a linear sequence of samples and data that is pushed as samples can be pulled out as chunks or vice versa. They also show how structs can be used to represent the sample data, instead of numeric arrays (which is mostly a syntactic difference):
- Sending a multi-channel time series at chunk granularity.
- Receiving a multi-channel time series at chunk granularity.
These two example programs illustrate a more special-purpose use case, namely sending arbitrary string-formatted data at irregular sampling rate. Such streams are used by programs that produce event markers, for example. These are 1:1 equivalents of the corresponding C programs:
- Sending a stream of strings with irregular timing.
- Receiving a stream of strings with irregular timing.
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These programs illustrate some special use cases of LSL that are also relevant for C programmers. See the lsl_c.h header for the corresponding C APIs (they are very similar to the C++ code shown here).
This example illustrates in more detail how streams can be resolved on the network:
This example shows how to query the full XML meta-data of a stream (which may be several megabytes large):
This example shows how to obtain time-correction values for a given stream. These time-correction values are offsets (in seconds) that are used to remap any stream's timestamps into one's own local clock domain (just by adding the offset to the timestamp):
These examples show how to transmit a numeric multi-channel time series through LSL:
The following examples show how to send and receive data in chunks, which can be more convenient. The data sender also demonstrates how to attach meta-data to the stream.
These examples show a special-purpose use case that is mostly relevant for stimulus-presentation programs or other applications that want to emit 'event' markers or other application state. The stream here is single-channel and has irregular sampling rate, but the value per channel is a string:
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These examples show how to transmit a numeric multi-channel time series through LSL:
These examples do the same as before, but now transmit the data at the granularity of chunks. For the purposes of network transmission the same effect can be achieved by creating the inlet or outlet with an extra argument to indicate that multiple samples should be batched into a chunk for transmission. However, since MATLAB's interpreter is relatively slow, the library calls should be made in a vectorized manner, i.e. at chunk granularity, whenever possible (at least for high-rate streams). Note that for LSL the data is always a linear sequence of samples and data that is pushed as samples can be pulled out as chunks or vice versa:
These examples show a special-purpose use case that is mostly relevant for stimulus-presentation programs or other applications that want to emit 'event' markers or other application state. The stream here is single-channel and has irregular sampling rate, but the value per channel is a string:
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These examples show how to transmit a numeric multi-channel time series through LSL:
The following examples show how to transmit data in the form of chunks instead of samples, which can be more convenient.
These examples show a special-purpose use case that is mostly relevant for stimulus-presentation programs or other applications that want to emit 'event' markers or other application state. The stream here is single-channel and has irregular sampling rate, but the value per channel is a string:
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These examples show how to transmit a numeric multi-channel time series through LSL:
The following examples show how to transmit data in the form of chunks instead of samples, which can be more convenient.
These examples show a special-purpose use case that is mostly relevant for stimulus-presentation programs or other applications that want to emit 'event' markers or other application state. The stream here is single-channel and has irregular sampling rate, but the value per channel is a string:
The last example shows how to attach properly formatted meta-data to a stream, and how to read it out again at the receiving end. While meta-data is strictly optional, it is very useful to make streams self-describing. LSL has adopted the convention to name meta-data fields according to the XDF file format specification whenever the content type matches (for example EEG, Gaze, MoCap, VideoRaw, etc); the spec is here. Note that some older example programs (SendData/ReceiveData) predate this convention and name the channels inconsistently.
These code samples are taken from actual 'production' software that is used for data transmission:
- Kinect: multi-channel signal with body joint positions and meta-data.
- Keyboard: irregular marker stream based on keyboard inputs.
- B-Alert: reading from an EEG device in a separate thread.
- EyeLink: reading from an eye tracker in Python.
Also, all applications in the Apps directory are open-source and can serve as examples, and most of them are very similar in how they pass on data to LSL.
I want to check the most recent sample of a stream every few seconds. How do I do that? Because inlet.pull_sample() returns the next sample in the order provided by the sender, you first need to pull out all samples that have been buffered up in the inlet. You can do this by calling pull_sample() with a timeout of 0.0 -- once it indicates that no new sample was available, the buffer is drained. To speed this up, you can set a short buffer size when creating the inlet (e.g., one second).
What clock does LSL use? / How do I relate LSL's local_clock() to my wall clock? LSL's local_clock() function measures the time since the local machine was started, in seconds (other system clocks usually do not have sufficient resolution for use with LSL). The correct way to map its output to the time measured by your preferred system clock is to first determine the constant offset between the two clocks, by reading them out at the same time, and then to add that offset to the result of local_clock() whenever it is needed. Also keep in mind that the time stamps returned by inlet.pull_sample() will generally be local to the sender's machine -- only after you add the time offset returned by inlet.time_correction() to them do you have them in your local clock domain.
What is the latency of LSL? Does the chosen buffer size have anything to do with it? The latency of LSL is typically under 0.1 milliseconds on a local machine, unless a very large amount of data is transmitted (megabytes per sample). The buffer size does not affect the latency unless you allow data to queue up by not querying it for an extended period of time (or when the network connection is temporarily interrupted). In such a case, the queued-up data will be quickly transmitted in a burst once network connectivity is restored. If you only need a limited backlog of data, you can set a shorter buffer size when creating the inlet.
I want to transmit data with multiple data types (e.g., floats, ints) per sample. What is the best way to do that? The simplest way is to use a channel format that can hold all numbers that you want to represent and concatenate your data into a vector of that type -- the double64 format can store integers up to 53 bit, so it will hold virtually anything (floats, doubles, ints, etc.) that you want to store. It is also possible to transmit raw structs, but note that this is generally unsafe and non-portable (e.g., different compilers insert varying amounts of padding between struct elements; also, on platforms with different endianness your data will be garbled). In principle you can also send multiple streams and use the same time stamp when sending the samples, but that will require some extra complexity at the receiving end that is rarely worth the effort.
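The double64 packing can be sketched in plain Python (the channel meanings below are invented for illustration; the list would be pushed on a stream declared with channel_format='double64'):

```python
# Sketch: concatenating mixed integer and float values into one double64
# sample. double64 represents integers exactly up to 2**53.
trigger_code = 1234          # integer channel (placeholder value)
temperature = 36.6           # float channel (placeholder value)
counter = 2**53 - 1          # largest odd integer double64 still holds exactly

sample = [float(trigger_code), temperature, float(counter)]

# receiving end: convert the integer channels back
recovered_code = int(sample[0])
recovered_counter = int(sample[2])
```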
I want to make an LSL driver for a piece of hardware. What is the fastest way to do that? If a quick-and-dirty solution is fine, then the best way is to take one of the example programs for your favorite language and extend it as needed. If you want a graphical user interface and you know C++ and are on Windows, you can copy one of the applications in the LSL distribution and adapt it as needed.
I am making a driver for a piece of hardware and want to make sure that my time stamps are accurate. How do I do that? If your data comes from a separate device, your samples will generally be a few milliseconds old. If you know or have measured this delay, you can correct for it by submitting the time stamp as local_clock()-my_sample_age when pushing a sample. However, it is strongly recommended to log any constant offset (here: my_sample_age) in the meta-data of the stream, otherwise it can be hard to later reconstruct what value was used, especially if it is occasionally revised. Aside from a delay, your time stamps will also have a jitter (due to OS multi-tasking). It is difficult to smooth the jitter in real time correctly without introducing inadvertent clock drift, and it is therefore recommended to submit non-smoothed time stamps and leave it to the receiver to smooth them if needed. In particular, when you analyze the data offline (e.g., in MATLAB), you or the XDF importer can do a much better job at linearizing the jitter post-hoc.
I am transferring data at high sampling rate or bandwidth. What is the most efficient way to do this? When sending the data it usually does not matter how you perform the sending (via push_sample, push_chunk, etc.), since the bottleneck at high bandwidth will typically be the operating system's network stack. You can call push_sample or pull_sample at least a million times per second without a significant performance hit. For small sample sizes (few channels), consider sending the data in chunks to avoid frequent OS calls and network transmissions. You can do this by either setting a chunk size when creating the outlet, or by using push_chunk() instead of push_sample(), or by setting the pushthrough flag in push_sample() to false for every sample except the last one in a batch. Also, if you have a large number of channels (e.g., when transferring image data), make sure that the data type that you pass to the push function corresponds to the data type of the stream, otherwise you pay extra for type casting. When receiving data at very high rate (100 KHz+) or bandwidth (20 MB/s+), it is faster to avoid the basic pull_chunk functions and instead use pull_chunk_multiplexed with a pre-allocated buffer. Make sure that you use a recent version of liblsl (1.10 or later offers a faster network protocol) at both the sender and the receiver.
My hardware can produce time stamps of its own. Should I pass them into LSL? Usually the answer is no -- the preferred way is to either leave it to LSL's push_sample() or push_chunk() functions to time-stamp the data (easiest), or to call the local_clock() function to read out the LSL clock, and then pass that in, either unmodified or with a constant delay subtracted (if you know the delay of your hardware). The only exception is if you have multiple pieces of hardware, all of which have access to the same high-precision clock, and you want to use that clock instead of the LSL clock (if the millisecond precision provided by LSL is not enough for your needs, e.g., demanding physics experiments), and you know exactly what you are doing. If you have any doubt about how you would use your own clock to synchronize multiple pieces of hardware after you've recorded the data, don't use it.
My hardware supports different block/chunk sizes. Which one is best for use with LSL? The chunk size trades off latency vs. network overhead, so we suggest allowing the user to override the value if desired. A good range for the default value is between 5 and 30 milliseconds of data (resulting in an average latency between 2.5 and 15 ms and an update rate between roughly 30 and 200 Hz). Shorter chunks make sense in very low-latency control settings, though note that chunks that comprise only a few bytes of data waste some network bandwidth due to the fixed Ethernet packet overhead. Longer chunks can also be used (any duration is permitted, e.g. for sporadic data-logging activities), although the longer the chunks are, the harder it becomes to perform sample-accurate real-time time-synchronization (specifically, removing the jitter in the chunk time stamps): the longest chunks that can be synchronized in real time are around 100 ms in typical settings.
I am getting more than one matching stream in my resolve query. What is the best way to handle this? You either have to rename one of your streams (if the software that provides them allows you to do that), or you can make the query more specific; for instance, instead of "type='EEG'" you could use e.g., "name='Cognionics Quick-20'" (if that's the name of the stream), or specify the hostname of the computer from which you want to read, as in "name='Cognionics Quick-20' and hostname='My-PC001'" (assuming that this is your hostname). You can find out the names of the streams and of the computers that they run on using the Lab Recorder (it will list them in the format "streamname (hostname)" -- keep in mind that this is just how the recorder prints it; the " (hostname)" part is of course not baked into the stream name). As the developer of the software, a good way is to warn the user that their query was ambiguous (so they can address it), and inform them that you are using the last-created stream that matches the query. This would be the stream with the highest value for the created_at() property (they come back unordered from the resolve function call). You could also point them to this FAQ entry on how they can make their query more specific.