Components
Graph organization through modular components
The core of I/O is structured around a set of interdependent components that define the flow, control, and real-time rendering of audio, establishing the library's foundational architecture.
At the center of the system is AudioGraph, the context that coordinates scheduling, processing, rendering, and synchronization of nodes, while also establishing the authoritative sample rate, managing execution threads, and controlling the processing mode.
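The coordinating role described above can be sketched as follows. This is a hypothetical illustration, not the library's actual API: the `Graph` and `Gain` names, and the serial `render_block` loop, are stand-ins for how a context might own the authoritative sample rate and drive its nodes in order.

```python
# Hypothetical sketch of an AudioGraph-style context: it owns the sample
# rate and block size, keeps nodes in processing order, and renders them
# block by block. Names here are illustrative, not the library's API.

class Graph:
    def __init__(self, sample_rate=48_000, block_size=128):
        self.sample_rate = sample_rate   # authoritative sample rate
        self.block_size = block_size     # frames rendered per callback
        self._nodes = []                 # nodes in dependency order

    def add_node(self, node):
        self._nodes.append(node)
        return node

    def render_block(self):
        # Pull one block through every node in order; each node receives
        # the previous node's output, mimicking a serial signal chain.
        block = [0.0] * self.block_size
        for node in self._nodes:
            block = node.process(block, self.sample_rate)
        return block

class Gain:
    """A trivial node: scales the incoming block."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, block, sample_rate):
        return [s * self.gain for s in block]

graph = Graph()
graph.add_node(Gain(0.5))
out = graph.render_block()   # one block, rendered under the graph's clock
```

The point of the sketch is ownership: nodes never decide the sample rate or block size themselves; they receive both from the context on every call.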
The graph connects to a physical output endpoint managed by AudioHardware, which is responsible for interacting with the audio device. This component configures the channel layout and canonical parameters, and executes the render loop in sync with the context.
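A minimal sketch of that hardware boundary, assuming a pull model in which the device drives rendering: on each callback the endpoint asks the graph for one block and fans it out across its channels. `StubGraph` and `Hardware` are illustrative names only.

```python
# Hypothetical sketch of the AudioHardware role: the output device drives
# a pull-model render loop, requesting one block of frames per callback
# from the graph context it is attached to.

class StubGraph:
    """Stands in for the real graph: renders blocks of silence."""
    sample_rate = 48_000
    block_size = 128
    def render_block(self):
        return [0.0] * self.block_size

class Hardware:
    def __init__(self, graph, channels=2):
        self.graph = graph          # the context this endpoint renders for
        self.channels = channels    # channel configuration

    def run(self, n_callbacks):
        # Each iteration models one device callback: pull a block from the
        # graph, then copy it to every output channel (naive channel fill).
        frames = []
        for _ in range(n_callbacks):
            block = self.graph.render_block()
            frames.append([list(block) for _ in range(self.channels)])
        return frames

hw = Hardware(StubGraph())
out = hw.run(4)   # four callbacks' worth of stereo blocks
```

Keeping the loop on the hardware side is what "in sync with the context" means here: the device clock, not the graph, decides when each block is produced.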
Nodes
Nodes (AudioNode) represent the fundamental processing units: each can generate, transform, or route signals, and all operate under the supervision of the audio context (AudioGraph), which ensures an ordered flow while maintaining route consistency.
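The generate/transform distinction can be sketched with a shared processing contract. The class names below (`Node`, `Constant`, `Invert`) are hypothetical; the sketch only illustrates that every node kind exposes the same per-block interface, which is what lets the context order them freely.

```python
# Hypothetical sketch of the node abstraction: every node exposes the
# same process() contract, whether it generates or transforms signal.
# Class names are illustrative, not the library's actual types.

class Node:
    def process(self, block, sample_rate):
        raise NotImplementedError

class Constant(Node):
    """Generates: ignores its input and emits a fixed value."""
    def __init__(self, value):
        self.value = value
    def process(self, block, sample_rate):
        return [self.value] * len(block)

class Invert(Node):
    """Transforms: flips the polarity of the incoming block."""
    def process(self, block, sample_rate):
        return [-s for s in block]

# A serial chain stands in for the graph's ordered flow.
chain = [Constant(1.0), Invert()]
block = [0.0] * 8
for node in chain:
    block = node.process(block, 48_000)
# block is now eight samples of -1.0
```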
For fine-grained parameter control in real time, AudioParameter provides a sample-accurate automation and modulation mechanism, enabling interpolations, timed ramps, and direct control over the values that influence processing.
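"Sample-accurate" means the parameter is resolved once per sample inside the render block, not once per block. A minimal sketch of a timed linear ramp, with `Parameter` and `ramp_to` as assumed, illustrative names:

```python
# Hypothetical sketch of sample-accurate parameter automation: a linear
# ramp scheduled in seconds is resolved to one value per sample, so a
# node can read a smoothly changing gain inside its render loop.

class Parameter:
    def __init__(self, value):
        self.value = value
        self._target = value
        self._samples_left = 0
        self._step = 0.0

    def ramp_to(self, target, duration_s, sample_rate):
        # Schedule a linear ramp lasting duration_s seconds.
        self._target = target
        self._samples_left = max(1, int(duration_s * sample_rate))
        self._step = (target - self.value) / self._samples_left

    def next_value(self):
        # Called once per sample by the owning node.
        if self._samples_left > 0:
            self.value += self._step
            self._samples_left -= 1
            if self._samples_left == 0:
                self.value = self._target   # land exactly on target
        return self.value

gain = Parameter(0.0)
gain.ramp_to(1.0, duration_s=0.001, sample_rate=8_000)  # 8-sample ramp
values = [gain.next_value() for _ in range(8)]
# values rise in equal steps and end exactly at the target
```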
Complementarily, AudioSetting provides dynamic configuration values—such as flags or discrete adjustments—that can be modified during graph execution.
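A sketch of such a dynamic setting, here a bypass flag flipped by control code while rendering continues. In a real-time engine this hand-off would typically be lock-free; a `threading.Lock` stands in for that here, and `Setting` and `BypassableGain` are illustrative names only.

```python
# Hypothetical sketch of a dynamic setting: a discrete value (a bypass
# flag) that the control thread can change while the node keeps
# processing blocks. The lock models a safe cross-thread hand-off.

import threading

class Setting:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()
    def get(self):
        with self._lock:
            return self._value
    def set(self, value):
        with self._lock:
            self._value = value

class BypassableGain:
    def __init__(self, gain):
        self.gain = gain
        self.bypass = Setting(False)   # flipped from the control thread
    def process(self, block, sample_rate):
        if self.bypass.get():
            return list(block)         # pass audio through untouched
        return [s * self.gain for s in block]

node = BypassableGain(0.5)
scaled = node.process([1.0] * 4, 48_000)   # gain applied
node.bypass.set(True)                      # changed during execution
raw = node.process([1.0] * 4, 48_000)      # now bypassed
```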
Together, these components form a unified ecosystem: the AudioGraph establishes the timing and communication framework, AudioNode performs the processing, and AudioParameter and AudioSetting govern control. This interaction maintains a deterministic, low-latency architecture ideal for real-time audio processing.
See the examples section for additional details.
The components work together to maintain a coherent, low-latency audio flow. Nodes are organized into three categories (Rendering, Processing, and Analysis) that collectively enable generating, transforming, and observing audio.
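The three categories can be sketched in one chain. This is an assumed illustration (the `Oscillator`, `Gain`, and `PeakMeter` names are hypothetical): a rendering node generates signal, a processing node transforms it, and an analysis node observes it while passing the audio through unchanged.

```python
# Hypothetical sketch of the three node categories working in one chain.

class Oscillator:                 # Rendering: generates audio
    def process(self, block, sr):
        return [1.0] * len(block)        # placeholder "tone"

class Gain:                       # Processing: transforms audio
    def __init__(self, g):
        self.g = g
    def process(self, block, sr):
        return [s * self.g for s in block]

class PeakMeter:                  # Analysis: observes audio
    def __init__(self):
        self.peak = 0.0
    def process(self, block, sr):
        self.peak = max(self.peak, *(abs(s) for s in block))
        return block                     # audio passes through unchanged

meter = PeakMeter()
chain = [Oscillator(), Gain(0.25), meter]
block = [0.0] * 16
for node in chain:
    block = node.process(block, 48_000)
# meter.peak holds the observed level; the audio path is untouched
```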