Saturday, May 2, 2026

Why AbstractX Exists: Streaming First, Bus Second

AbstractX exists to solve a practical embedded systems problem: how to move real-time stream data and control traffic through one coherent FPGA-to-Linux model without creating a giant, fragile stack.

The core idea

Basics

  • Use SPI/QSPI to make the FPGA look like a streaming device to the host, and to minimize data jitter compared with low-speed USB.
  • Still keep a bus interface like Wishbone so configuration stays simple.
  • Use that control path to configure registers, DMA behavior, timers, and related control-plane features.
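To make the control path concrete, here is a minimal sketch of what a Wishbone register write carried over the SPI control path might look like from the host side. The opcode values and frame layout here are assumptions for illustration only; the real AbstractX command encoding is defined by the gateware and documented in the README.

```python
import struct

# Hypothetical opcodes for the SPI control path -- placeholders, not the
# actual AbstractX encoding.
OP_WB_WRITE = 0x01
OP_WB_READ = 0x02

def wb_write_frame(addr: int, data: int) -> bytes:
    """Build a control-plane frame: 1-byte opcode, 32-bit Wishbone
    address, 32-bit data word, all big-endian."""
    return struct.pack(">BII", OP_WB_WRITE, addr, data)

# Example: enable a (hypothetical) DMA control register at 0x00000010.
frame = wb_write_frame(0x00000010, 0x00000001)
```

The point is only that control traffic reduces to small, fixed-shape frames, which is why Wishbone keeps configuration simple next to the streaming path.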

AbstractX is a switched streaming architecture built around AXIS-style metadata. Streams are marked with attributes such as route and timestamp information, then switched through the fabric, with DMA-style streaming as the main data path and Wishbone used for simple control and configuration.

How TUN works with this

TUN is the Linux-side virtual network interface that receives and injects packet data for AbstractX.

TUN provides packetization and an easy interface to a stream of sensors instead of reading them one at a time. With multiple sensors, even of different types, the FPGA can synchronize and correlate readings because the timestamp is carried in the AXIS attributes, giving coherent timing across channels.

  • On RX: streamed data from AbstractX (via AXIS and transport framing) is delivered into Linux through the TUN interface, so userspace can read it like normal network traffic.
  • On TX: userspace writes packets to TUN, and the bridge forwards them back through AbstractX toward the FPGA data path.
  • This keeps Linux integration simple while the FPGA side handles fast streaming, DMA movement, and deterministic timing behavior.
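The RX/TX bridge described above starts with attaching a TUN interface on the Linux side. Below is a standard-Linux sketch of that attachment; the interface name `abstractx0` is an assumption, and actually opening the device requires CAP_NET_ADMIN, so only the pure ifreq packing is exercised here.

```python
import fcntl
import struct

# Real Linux constants from <linux/if_tun.h>.
TUNSETIFF = 0x400454CA
IFF_TUN = 0x0001      # layer-3 TUN device (no Ethernet header)
IFF_NO_PI = 0x1000    # no packet-information prefix on reads/writes

def make_ifreq(name: str, flags: int) -> bytes:
    """Pack a struct ifreq: 16-byte interface name + 16-bit flags."""
    return struct.pack("16sH", name.encode(), flags)

def open_tun(name: str = "abstractx0"):
    """Open /dev/net/tun and attach it to a named TUN interface.
    Needs CAP_NET_ADMIN; each read() then yields one whole packet
    and each write() injects one."""
    fd = open("/dev/net/tun", "r+b", buffering=0)
    fcntl.ioctl(fd, TUNSETIFF, make_ifreq(name, IFF_TUN | IFF_NO_PI))
    return fd
```

Once attached, userspace treats the AbstractX stream like any other network interface, which is exactly the simplicity the bullets above are after.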

Streaming platform first, bus second

The most important principle is:

AbstractX is a streaming platform first, and a bus interface second.

  • Streaming is treated as the primary workload.
  • Control/register access is integrated, but it is not the center of gravity.
  • This keeps the architecture aligned with real-time data movement instead of register choreography.

Where Wishbone fits

Wishbone is used as the practical control-plane attachment inside the design for simple configuration, setup, and status, while the switching/streaming path remains the main data plane.

In other words: Wishbone is important, but it is there to support the stream architecture—not replace it.

Why this matters

  • Less architectural drift: one consistent model for growth instead of many one-off interfaces.
  • Better real-time behavior: data movement is designed as a first-class path.
  • Cleaner integration: control and stream traffic coexist without fighting each other.
  • Maintainability: the system stays understandable as complexity increases.

The purpose of AbstractX

AbstractX exists because embedded FPGA systems need a practical middle path: not toy-simple, not framework-heavy, but structured enough to scale from bring-up to serious streaming workloads.

That is the “why” in one sentence: make streaming-centric systems easier to build, reason about, and evolve.

Best of both worlds

AbstractX gives us the best of both worlds: high-speed DMA data movement with explicit TX/RX paths plus auto-triggering behavior.

We can also do auto playback so accel/gyro data streams back continuously via DMA without processor intervention, with timestamps included for timing correlation and analysis.

This means control and stream traffic can coexist while data capture and movement stay deterministic.
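As a small illustration of why the carried timestamps matter, here is a sketch of pairing samples from two channels (say accel and gyro) by nearest timestamp on the host. The channel data and skew tolerance are made up for the example; in AbstractX the timestamps themselves come from the AXIS attributes.

```python
def correlate(chan_a, chan_b, max_skew_us=500):
    """Pair each (timestamp_us, value) sample in chan_a with the
    nearest-in-time sample from chan_b, if within max_skew_us.
    Both inputs are assumed sorted by timestamp."""
    pairs = []
    j = 0
    for ts_a, val_a in chan_a:
        # advance j while the next chan_b sample is at least as close to ts_a
        while j + 1 < len(chan_b) and \
                abs(chan_b[j + 1][0] - ts_a) <= abs(chan_b[j][0] - ts_a):
            j += 1
        ts_b, val_b = chan_b[j]
        if abs(ts_b - ts_a) <= max_skew_us:
            pairs.append((ts_a, val_a, val_b))
    return pairs
```

Because the FPGA stamps samples at capture time rather than at delivery time, this correlation stays valid even when DMA batches arrive at the host irregularly.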

Architecture at a glance

Here is the current AbstractX block diagram:

+----------------------+    SPI    +---------------------------+    +---------------------------+
| Linux Host           | <-------> | SPI to/from AbstractX     | <->| AbstractX Switch Fabric   |
| Control + Apps       |           | Transport Translator      |    | route + switch + tags     |
+----------------------+           +---------------------------+    +-------------+-------------+
        ^                                                                         ^
        | IRQ / INT_REQ                                                           |
        +-------------------------------------------------------------------------+

                                                                              /   \
                                                                             v     v

   control path <--> +----------------------+                      +----------------------+ <--> stream path
                     | Wishbone Gateway     |                      | DMA / Stream         |
                     | Register Access      |                      | Endpoints            |
                     +----------+-----------+                      +----------+-----------+
                                |                                             |
                                v                                             v
                     +----------------------+                      +----------------------+
                     | Control Peripherals  |                      | Streaming Sources    |
                     | GPIO / status / ADC  |                      | gyro / accel / IO   |
                     +----------------------+                      +----------------------+

                                 metadata / AXIS tags: route + ts_in + ts_out
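The route + ts_in + ts_out tags in the diagram can be pictured as a small fixed-width header riding alongside each stream beat. The field widths below (16-bit route, 32-bit microsecond timestamps) are an illustrative assumption, not the actual AbstractX tag layout.

```python
import struct

# Hypothetical tag layout: 16-bit route, 32-bit ts_in, 32-bit ts_out,
# big-endian. The real widths are set by the switch fabric design.
TAG_FMT = ">HII"
TAG_LEN = struct.calcsize(TAG_FMT)  # 10 bytes

def pack_tags(route: int, ts_in: int, ts_out: int) -> bytes:
    return struct.pack(TAG_FMT, route, ts_in, ts_out)

def unpack_tags(raw: bytes):
    """Return (route, ts_in, ts_out) from the head of a tagged frame."""
    return struct.unpack(TAG_FMT, raw[:TAG_LEN])
```

Carrying the route in-band is what lets the fabric switch streams without consulting the control plane, and carrying both timestamps lets the host measure fabric transit time per frame.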

First proof of concept: accelerometer / gyroscope offload

The first proof of concept is an accelerometer/gyroscope pipeline focused on full data-path offload and microsecond-level timing behavior.

  • Set up the sensor via SPI commands.
  • Set up DMA auto-playback of X/Y/Z samples for gyro + accel channels.
  • Use an ISR trigger path to launch reads and forward results over AbstractX AXIS into a Linux TUN driver.
  • Keep the processor out of the hot path as much as possible for deterministic, low-latency timing.
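On the Linux end of the steps above, each packet read from TUN would carry one timestamped sample set. Here is a sketch of parsing one, under an assumed on-the-wire layout (32-bit microsecond timestamp, then gyro X/Y/Z and accel X/Y/Z as signed 16-bit values); the real AbstractX framing may differ.

```python
import struct

# Hypothetical IMU sample frame: timestamp + six signed 16-bit axes.
SAMPLE_FMT = ">I6h"
SAMPLE_LEN = struct.calcsize(SAMPLE_FMT)  # 16 bytes

def parse_sample(pkt: bytes):
    """Decode one IMU sample packet read from the TUN interface."""
    ts_us, gx, gy, gz, ax, ay, az = struct.unpack(SAMPLE_FMT, pkt[:SAMPLE_LEN])
    return {"ts_us": ts_us, "gyro": (gx, gy, gz), "accel": (ax, ay, az)}
```

Because the FPGA handles triggering and DMA, the host's hot path is nothing more than read-and-unpack, which is what keeps the processor out of the timing-critical loop.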

Next proof of concept: timer-driven A/D streaming

The next step is an A/D path, likely timer-driven, that automatically samples and streams conversion results through the same AbstractX data path.
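For a timer-driven A/D path, the only host-side arithmetic is choosing the timer divisor for the target sample rate. A minimal sketch, assuming the timer counts fabric-clock cycles (the actual clock frequency and timer register semantics depend on the design):

```python
def timer_divisor(f_clk_hz: int, f_sample_hz: int) -> int:
    """Clock divider for a timer-driven ADC: the timer fires every
    `divisor` fabric-clock cycles to hit the target sample rate."""
    if f_sample_hz <= 0 or f_sample_hz > f_clk_hz:
        raise ValueError("sample rate must be in (0, f_clk_hz]")
    return round(f_clk_hz / f_sample_hz)

# e.g. a 100 MHz fabric clock sampling at 50 kHz divides by 2000
```

The divisor would be written once over the Wishbone control path; from then on conversions stream through the data plane without processor involvement.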

Do you have something that needs real-time streaming but carries high processor overhead? Let’s see if we can do it in AbstractX.

All of these core architecture details are documented in the AbstractX README.
