Introduction

Engineering audio experiences for creators

This document is designed to guide you step-by-step in understanding and using our multiplatform audio development environment with confidence.

By design, I/O provides a comprehensive suite of tools to integrate diverse audio sources, apply sophisticated effects, generate detailed visualizations, and more. All these operations execute within a context specifically conceived to enable highly dynamic routing.

Operations are executed through nodes, which are linked together to form a processing graph. These nodes can be arranged as simple chains or complex networks, connecting their inputs and outputs to establish custom signal paths.

Processing begins with a source that delivers samples at very short intervals, often tens of thousands per second. The output of each node may be sent to other nodes that process the signal further, enabling sophisticated routing.
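
The node-and-graph idea can be sketched in a few lines of Swift. The names here (AudioNode, ConstantSource, GainNode, render(frames:)) are illustrative only and do not reflect the actual I/O API:

```swift
// Hypothetical sketch of a node chain; AudioNode, ConstantSource,
// GainNode, and render(frames:) are illustrative names, not the I/O API.
protocol AudioNode {
    // A node produces a block of samples on request.
    func render(frames: Int) -> [Double]
}

// A source node that generates a constant test signal.
struct ConstantSource: AudioNode {
    let value: Double
    func render(frames: Int) -> [Double] {
        Array(repeating: value, count: frames)
    }
}

// A processing node that scales whatever its upstream node produces.
struct GainNode: AudioNode {
    let input: any AudioNode
    let gain: Double
    func render(frames: Int) -> [Double] {
        input.render(frames: frames).map { $0 * gain }
    }
}

// The simplest possible signal path: source -> gain.
let chain = GainNode(input: ConstantSource(value: 1.0), gain: 0.5)
let block = chain.render(frames: 4)
print(block) // [0.5, 0.5, 0.5, 0.5]
```

Connecting a node's input to another node's output is what turns isolated operations into a custom signal path.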

Getting Started with I/O

Quick access to the essential content for taking your first steps.


Introduction to the architecture and processing flow.

Comparison

The I/O platform is available in two editions:

The community edition focuses on exploration, learning, and prototyping, offering a reduced feature set for experimenting with dynamic binaural audio.

The professional edition unlocks the platform's full potential: advanced routing, unlimited and specialized nodes, analysis tools, technical support, and more.

Plan                         | Community     | Professional
Source playback              | ✔️            | ✔️
Node-based routing           | ✔️            | ✔️
Node limit                   | Max. 10 nodes | Unlimited
Modular graph engine         | ✔️            | ✔️
Low-latency processing       | ✔️            | ✔️
Real-time rendering          | ✔️            | ✔️
Binaural audio               | ✔️            | ✔️
Multiplatform support        | ✔️            | ✔️
FX nodes (EQ, Reverb, Delay) | –             | ✔️
Signal visualization         | –             | ✔️
Complex chains & subgraphs   | –             | ✔️
Technical support            | Community     | ✔️

Guide Overview

This guide outlines the purpose, capabilities, and fundamental components.

It also examines the architecture and the internal signal-processing flow, explaining how each subsystem (from sources and nodes to the execution context) collaborates in a synchronized manner to ensure deterministic and stable behavior in real-time audio environments.

I/O is built entirely in Swift, a multiplatform language that combines performance with modern syntax. It offers an execution model optimized by the LLVM compiler, capable of producing code as fast as C, but with memory safety and automatic resource management.

Visit the official website for more information: https://www.swift.org

How I/O Fits Architecturally

Understanding the role of I/O requires placing it within the broader technological layers that together form a modern audio processing architecture.

I/O sits between low-level APIs —such as CoreAudio, WASAPI, or ALSA— and higher-level layers dedicated to audio composition or rendering. This positioning allows it to operate as an engine that orchestrates the signal flow and executes DSP blocks with highly precise timing, while remaining broadly independent from the underlying operating system.

Unlike pure DSP libraries, I/O is not limited to processing functions: it implements a complete structured system for routing, synchronization and deterministic execution, autonomously handling nodes, buffers, and graph dependencies with consistency.

Its modular design supports extending or replacing components, and its hardware integration ensures direct communication with devices. In this way, I/O acts as the operational core: it translates signals, organizes internal processing, and delivers audio with minimal latency.

Within the ecosystem hierarchy, I/O provides the proper infrastructure upon which production, analysis, and professional rendering systems can be built.

I/O combines thoughtful design with high performance, ensuring precision and low latency.


A DAG (Directed Acyclic Graph) organizes the processing flow as a network of connected nodes where the signal always moves forward without feedback loops. Each node represents an operation, and the connections define how the signal travels.
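
Because a DAG has no feedback loops, a valid processing order always exists and can be found with a topological sort. The following sketch uses Kahn's algorithm on an illustrative five-node graph; the node names (source, eq, reverb, meter, output) are hypothetical:

```swift
// Sketch of finding a DAG's processing order via Kahn's algorithm.
// The graph and node names are illustrative, not an I/O data structure.
let edges = [("source", "eq"), ("eq", "reverb"), ("source", "meter"),
             ("reverb", "output"), ("meter", "output")]

var inDegree: [String: Int] = [:]
var successors: [String: [String]] = [:]
for (from, to) in edges {
    inDegree[from, default: 0] += 0       // make sure the node is known
    inDegree[to, default: 0] += 1         // count incoming connections
    successors[from, default: []].append(to)
}

// Start with nodes that have no inputs (the sources).
var ready = inDegree.filter { $0.value == 0 }.map { $0.key }
var order: [String] = []
while let node = ready.popLast() {
    order.append(node)
    for next in successors[node, default: []] {
        inDegree[next]! -= 1
        if inDegree[next] == 0 { ready.append(next) }  // all inputs satisfied
    }
}

// The source comes first and the output last; every node appears once.
print(order.count) // 5
```

A cycle would leave some nodes with a nonzero in-degree forever, which is exactly why the "no feedback loops" rule guarantees the signal always moves forward.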

Infrastructure

As noted above, at the base of the hierarchy lie the low-level APIs and operating system drivers. CoreAudio, WASAPI, ASIO, or ALSA handle access to devices and hardware-level synchronization, ensuring stable communication with the underlying audio subsystems.

On top of them sits I/O, which abstracts these differences to enable a multiplatform implementation. Alongside it operate DSP libraries, designed to provide functional blocks such as filters, reverberation, renderers, compressors, oscillators, processors, and generators.

I/O represents the first layer in the processing flow.

Implementation

I/O adopts a modular, declarative, and extensible design philosophy.

Audio can be described as a set of interconnected nodes, each responsible for a specific DSP task. These connections form a dynamic graph that defines the signal flow, allowing real-time reconfiguration without interruptions.

Each system component is self-contained, enabling both simple chains and complex mixing, analysis, or rendering systems. Communication between nodes follows a deterministic pull-based model, ensuring each block processes only the required data.

Nodes process, and the context coordinates the audio flow. This structure promotes an architecture where operations are reproducible at sample level, ensuring that each computation follows an identical sequence and yields fully predictable outcomes across every render pass.
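
Sample-level reproducibility can be illustrated with a minimal sketch: the same node state rendered twice yields bit-identical output. The SineNode type and its render signature are hypothetical, not part of the I/O API:

```swift
import Foundation  // for sin()

// Sketch of sample-level reproducibility; SineNode is an illustrative
// stand-in for a deterministic source node.
struct SineNode {
    let frequency: Double
    let sampleRate: Double
    // Output depends only on explicit state: frequency, rate, position.
    func render(frames: Int, startSample: Int) -> [Double] {
        (0..<frames).map { i in
            sin(2.0 * .pi * frequency * Double(startSample + i) / sampleRate)
        }
    }
}

let node = SineNode(frequency: 440.0, sampleRate: 48_000.0)
let passA = node.render(frames: 64, startSample: 0)
let passB = node.render(frames: 64, startSample: 0)

// Every computation follows an identical sequence, so the two render
// passes are bit-for-bit equal.
print(passA == passB) // true
```

Keeping all state explicit in the node, rather than in hidden globals, is what makes each render pass predictable.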

I/O prioritizes transparency and extensibility.

Design Philosophy (Manifesto)

I/O was conceived on four principles: precision, performance, functionality, and stability.

These form the foundation guiding every architectural decision.

Precision

Processing is performed in Float64 to preserve numerical integrity and avoid error accumulation. Parameters are continuously interpolated to eliminate artifacts and ensure smooth transitions. DSP modules integrate high-fidelity algorithms with aliasing management and oversampling.
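
Continuous parameter interpolation can be sketched as a simple per-sample linear ramp; the function name and ramp length below are illustrative, not the I/O smoothing implementation:

```swift
// Sketch of per-sample linear parameter interpolation, the kind of
// smoothing that avoids zipper artifacts on abrupt parameter jumps.
// The function name and ramp length are illustrative only.
func interpolated(from start: Double, to target: Double, frames: Int) -> [Double] {
    (0..<frames).map { i in
        let t = Double(i) / Double(frames - 1)   // 0.0 ... 1.0 across the block
        return start + (target - start) * t
    }
}

// A gain jump from 0.0 to 1.0 is spread across the block instead of
// being applied instantaneously.
let ramp = interpolated(from: 0.0, to: 1.0, frames: 5)
print(ramp) // [0.0, 0.25, 0.5, 0.75, 1.0]
```

Applying the ramped value sample by sample is what turns a discontinuous parameter change into a smooth, artifact-free transition.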

Performance

Real-time execution demands efficiency and determinism. I/O adopts a block-based model that reduces overhead and optimizes CPU usage. On the audio thread, the engine follows a strict zero-dynamic-allocation policy, ensuring minimal latency and consistently robust stability even under varying or highly demanding loads.
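
A zero-dynamic-allocation policy in block-based processing can be sketched as follows: every buffer is reserved before rendering starts, and the render path only writes into existing memory. The buffer sizes and process function here are illustrative assumptions:

```swift
// Sketch of block-based processing with preallocated buffers: all memory
// is reserved before the render loop, so the audio-thread code path
// never allocates. Sizes and the process function are illustrative.
let blockSize = 256
var input  = [Double](repeating: 0.0, count: blockSize)  // preallocated
var output = [Double](repeating: 0.0, count: blockSize)  // preallocated

func process(_ input: [Double], into output: inout [Double], gain: Double) {
    // Writes in place; no arrays or objects are created per block.
    for i in 0..<input.count {
        output[i] = input[i] * gain
    }
}

input[0] = 1.0
process(input, into: &output, gain: 0.5)
print(output[0]) // 0.5
```

Reusing the same buffers block after block keeps per-block cost constant, which is what makes latency predictable under demanding loads.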

Functionality

Each graph node acts as an autonomous unit that can be integrated, replaced, or combined without affecting the system structure. This composition enables complex flows while preserving clarity and predictability. The API provides a flexible parameter system for external control and automation.

Stability

A dedicated offline execution mode enables validating graph behavior and running automated tests without depending on real-time constraints. Visual tools —such as FFTs, spectrograms, and meters— are included, along with a continuous logging and diagnostics system that ensures traceability and early anomaly detection.
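
The value of an offline mode is that graph behavior becomes testable with plain assertions, with no audio device or real-time deadline involved. A minimal sketch, where renderOffline and the trivial gain "graph" are hypothetical stand-ins:

```swift
// Sketch of an offline validation pass: audio is rendered faster than
// real time and checked by an automated test. renderOffline and the
// processing closure are illustrative stand-ins for a real node graph.
func renderOffline(frames: Int, process: (Int) -> Double) -> [Double] {
    (0..<frames).map(process)  // no audio device, no real-time constraint
}

// Validate a trivial "graph": a unit impulse through a 0.5 gain stage.
let rendered = renderOffline(frames: 4) { i in (i == 0 ? 1.0 : 0.0) * 0.5 }
assert(rendered == [0.5, 0.0, 0.0, 0.0], "offline render mismatch")
print("offline validation passed")
```

Because offline output is deterministic, the same assertions can run in continuous integration and catch regressions before they reach a real-time session.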

Graph Architecture

The architecture allows modeling the flow as a network of operations where nodes can be connected or disconnected dynamically without interrupting execution.

This flexibility is paired with sample-accurate event scheduling, ideal for automation or synchronized playback. The pull-based design ensures efficiency, and support for subgraphs enables building reusable and composable processing blocks.

Core Concepts

  • Directed graph: operations and connections defining the audio flow.

  • Dynamically connectable nodes: add or remove nodes without audio interruption.

  • Sample-accurate scheduling: precise execution of events on the audio timeline.

  • Pull-based processing: data flows from outputs to inputs.

  • Subgraph/modular support: encapsulated reusable sections.
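
Sample-accurate scheduling, the third concept above, amounts to mapping an event's absolute sample timestamp to an offset inside the current render block. The type and field names in this sketch are illustrative assumptions:

```swift
// Sketch of sample-accurate event scheduling: an event timestamped on
// the sample timeline is mapped to an offset inside the current render
// block. ScheduledEvent and its fields are illustrative names.
struct ScheduledEvent {
    let sampleTime: Int   // absolute position on the audio timeline
}

func offset(of event: ScheduledEvent, blockStart: Int, blockSize: Int) -> Int? {
    let delta = event.sampleTime - blockStart
    // The event fires in this block only if it lands inside
    // [blockStart, blockStart + blockSize).
    return (0..<blockSize).contains(delta) ? delta : nil
}

let event = ScheduledEvent(sampleTime: 48_100)
print(offset(of: event, blockStart: 48_000, blockSize: 256) as Any) // Optional(100)
print(offset(of: event, blockStart: 0, blockSize: 256) as Any)      // nil
```

Applying the event exactly at its in-block offset, rather than at the block boundary, is what makes automation land on the precise sample.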

Together, these elements enable a flexible and stable processing flow. The result is a modular system capable of adapting to any audio scenario.
