# Introduction

This document guides you step by step through understanding and using our multiplatform audio development environment with confidence.

By design, **I/O** provides a comprehensive suite of tools to integrate diverse audio sources, apply sophisticated effects, generate detailed visualizations, and more. All of these operations execute within a context designed specifically for highly dynamic routing.

Operations are executed through **nodes**, which are linked together to form a **processing graph**. These nodes can be arranged as simple chains or complex networks, connecting their inputs and outputs to establish custom signal paths.

Processing begins with a source that delivers samples at extremely short time intervals, *often tens of thousands per second*. The output of each node can be sent to others that process the signal further, enabling sophisticated routing.
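As a rough illustration of the idea, a minimal two-node chain can be sketched like this. The types here (`AudioNode`, `SineSource`, `GainNode`) are hypothetical names for the sketch, not I/O's actual API:

```swift
import Foundation

// Hypothetical sketch of a two-node chain; these types are illustrative
// names, not part of the I/O API.
protocol AudioNode {
    // Pull-based: a node asks its upstream input for the next block.
    func render(frameCount: Int) -> [Double]
}

struct SineSource: AudioNode {
    let frequency: Double
    let sampleRate: Double
    func render(frameCount: Int) -> [Double] {
        (0..<frameCount).map { sin(2 * Double.pi * frequency * Double($0) / sampleRate) }
    }
}

struct GainNode: AudioNode {
    let input: any AudioNode
    let gain: Double
    func render(frameCount: Int) -> [Double] {
        input.render(frameCount: frameCount).map { $0 * gain }
    }
}

// Source → gain: 64 samples of a 440 Hz tone at half amplitude.
let chain = GainNode(input: SineSource(frequency: 440, sampleRate: 44_100), gain: 0.5)
let block = chain.render(frameCount: 64)
```

Connecting an output to another node's input is all it takes to extend the signal path; a real graph simply generalizes this chaining to arbitrary networks.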

#### Getting Started with I/O

Quickly access the essential content to take your first steps.

{% hint style="info" %}
Introduction to the architecture and processing flow.
{% endhint %}

<table data-card-size="large" data-view="cards"><thead><tr><th align="center"></th><th data-hidden data-card-cover data-type="image">Cover image</th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td align="center"><strong>Fundamentals</strong></td><td><a href="https://2318878013-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FjBqiSAguDgHM58hpUdnB%2Fuploads%2FaCwbecatXOHJ6tV6Q8uS%2Fhorizontal.png?alt=media&#x26;token=d4c99180-9c72-45dd-bf54-5bcffe1dcbb7">horizontal.png</a></td><td><a href="foundation/fundamental-concepts">fundamental-concepts</a></td></tr><tr><td align="center"><strong>Implementation</strong></td><td><a href="https://2318878013-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FjBqiSAguDgHM58hpUdnB%2Fuploads%2FaCwbecatXOHJ6tV6Q8uS%2Fhorizontal.png?alt=media&#x26;token=d4c99180-9c72-45dd-bf54-5bcffe1dcbb7">horizontal.png</a></td><td><a href="i-o/introduction">introduction</a></td></tr></tbody></table>

#### Comparison

The **I/O** platform is available in two editions:

The **community** edition focuses on exploration, learning, and prototyping, offering a reduced set of features for experimenting with dynamic binaural audio.

The **professional** edition provides the full potential: advanced routing, unlimited and specialized nodes, analysis tools, technical support and more.

{% hint style="success" %}
Each **edition** shares the same core, but differs in *scope, capabilities, and support level*. The following table offers a clear view of what each includes.
{% endhint %}

<table data-full-width="false"><thead><tr><th width="341">Feature</th><th>Community</th><th>Professional</th></tr></thead><tbody><tr><td>Source playback</td><td>✔️</td><td>✔️</td></tr><tr><td>Node-based routing</td><td>✔️</td><td>✔️</td></tr><tr><td>Node limit</td><td>✔️ Max. 10 <strong>nodes</strong></td><td>✔️ <strong>Unlimited</strong> nodes</td></tr><tr><td>Modular graph engine</td><td>✔️</td><td>✔️</td></tr><tr><td>Low-latency processing</td><td>✔️</td><td>✔️</td></tr><tr><td>Real-time rendering</td><td>✔️</td><td>✔️</td></tr><tr><td>Binaural audio</td><td>✔️</td><td>✔️</td></tr><tr><td>Multiplatform support</td><td>✔️</td><td>✔️</td></tr><tr><td>FX nodes (EQ, Reverb, Delay)</td><td>—</td><td>✔️</td></tr><tr><td>Signal visualization</td><td>—</td><td>✔️</td></tr><tr><td>Complex chains &#x26; subgraphs</td><td>—</td><td>✔️</td></tr><tr><td><strong>Technical support</strong></td><td>— <em>Community</em></td><td>✔️</td></tr></tbody></table>

#### Guide Overview

This guide outlines the platform's purpose, capabilities, and fundamental components.

It also examines the architecture and the internal signal-processing flow, explaining how each subsystem —from **sources** and **nodes** to the execution context— collaborates in a synchronized manner to ensure *deterministic* and stable behavior in real-time audio environments.

> **I/O** is built entirely in **Swift**, a multiplatform language that combines performance with modern syntax. It offers an execution model optimized by the **LLVM** compiler, capable of producing code as fast as **C**, but with memory safety and automatic resource management.
>
> — *Visit the official website for more information*: [*https://www.swift.org*](https://www.swift.org/)

#### How I/O Fits Architecturally

Understanding the role of **I/O** requires placing it within the broader technological layers that together form a modern audio-processing architecture.

**I/O** sits between low-level APIs —such as *CoreAudio*, *WASAPI*, or *ALSA*— and higher-level layers dedicated to audio composition or rendering. This positioning allows it to operate as an engine that orchestrates the signal flow and executes DSP blocks with highly precise timing, while remaining broadly independent from the underlying operating system.

Unlike pure DSP libraries, **I/O** is not limited to processing functions: it implements a complete structured system for routing, synchronization and deterministic execution, autonomously handling nodes, buffers, and graph dependencies with consistency.

Its **modular** design supports extending or replacing components, and its hardware integration ensures direct communication with devices. In this way, **I/O** acts as the operational core: it translates signals, organizes internal processing, and delivers audio with minimal latency.

Within the ecosystem hierarchy, **I/O** provides the proper infrastructure upon which production, analysis, and professional rendering systems can be built.

**I/O** combines thoughtful design with high performance, ensuring precision and low latency.

{% hint style="info" %}
A **DAG** *(Directed Acyclic Graph)* organizes the processing flow as a network of connected nodes where the signal always moves forward without *feedback loops*. Each node represents an operation, and the connections define how the signal travels.
{% endhint %}

#### Infrastructure

As noted above, at the base of the hierarchy lie the low-level APIs and operating system **drivers**. *CoreAudio, WASAPI, ASIO, or ALSA* handle access to devices and hardware-level synchronization, ensuring stable communication with the underlying audio subsystems.

On top of them sits **I/O**, which abstracts these differences to enable a multiplatform implementation. Alongside it operate DSP libraries, designed to provide functional blocks such as filters, reverberation, renderers, compressors, oscillators, processors, and generators.

**I/O** represents the first layer in the processing flow.

{% embed url="https://youtu.be/l_F2vykYnQ8" %}

#### Implementation

**I/O** adopts a *modular, declarative, and extensible* design philosophy.

Audio processing can be described as a set of interconnected nodes, each responsible for a specific DSP task. These *connections* form a dynamic graph that defines the signal flow, allowing real-time reconfiguration without interruptions.

Each system component is *self-contained*, enabling both simple chains and complex mixing, analysis, or rendering systems. Communication between nodes follows a *deterministic pull-based* model, ensuring each block processes only the required data.

**Nodes** process, and the **context** coordinates the audio flow. This structure promotes an architecture where operations are reproducible at sample level, ensuring that each computation follows an identical sequence and yields fully predictable outcomes across every render pass.
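This pull-based, deterministic behavior can be sketched with a toy example. The types below (`ConstantSource`, `SumNode`) are made-up names for illustration, not the actual I/O API: the terminal node pulls from its inputs in a fixed order, so identical render passes produce identical buffers.

```swift
// Illustrative only: `ConstantSource` and `SumNode` are hypothetical types.
struct ConstantSource {
    let value: Double
    func pull(_ frameCount: Int) -> [Double] {
        [Double](repeating: value, count: frameCount)
    }
}

struct SumNode {
    let a: ConstantSource
    let b: ConstantSource
    // Pulling from this node triggers pulls on both inputs, in a fixed order.
    func pull(_ frameCount: Int) -> [Double] {
        zip(a.pull(frameCount), b.pull(frameCount)).map { $0 + $1 }
    }
}

let graph = SumNode(a: ConstantSource(value: 0.25), b: ConstantSource(value: 0.5))
let pass1 = graph.pull(4)
let pass2 = graph.pull(4)
// Deterministic: every pass over the same graph yields the same samples.
```

Because demand propagates from the output toward the sources, each node computes only the blocks that downstream consumers actually request.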

**I/O** prioritizes transparency and extensibility.

#### Design Philosophy (Manifesto)

**I/O** is built on four principles: precision, performance, functionality, and stability.

These form the foundation guiding *every* architectural decision.

<details>

<summary>Precision</summary>

Processing is performed in `Float64` to preserve numerical integrity and avoid error accumulation. Parameters are continuously interpolated to eliminate artifacts and ensure smooth transitions. **DSP** modules integrate high-fidelity algorithms with aliasing management and oversampling.
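For instance, a linear parameter ramp in `Float64` spreads a gain change across a block so no single sample jumps. This is a simplified sketch of the concept, not I/O's actual interpolation code; `ramp` is an illustrative helper:

```swift
// Simplified linear smoothing; real engines may also use exponential or
// equal-power ramps. `ramp` is a hypothetical helper, not an I/O API.
func ramp(from start: Double, to target: Double, frameCount: Int) -> [Double] {
    guard frameCount > 1 else { return Array(repeating: target, count: frameCount) }
    let step = (target - start) / Double(frameCount - 1)
    return (0..<frameCount).map { start + Double($0) * step }
}

let gainRamp = ramp(from: 0.0, to: 1.0, frameCount: 5)
// gainRamp == [0.0, 0.25, 0.5, 0.75, 1.0]
```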

</details>

<details>

<summary>Performance</summary>

Real-time execution demands efficiency and determinism. **I/O** adopts a block-based model that reduces overhead and optimizes CPU usage. On the audio thread, the engine follows a strict zero-dynamic-allocation policy, ensuring minimal latency and consistently robust stability even under varying or highly demanding loads.
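The zero-allocation idea can be sketched as follows (hypothetical code, not the engine itself): the buffer is allocated once outside the audio callback and refilled in place on every block.

```swift
// Illustrative `RenderBuffer`: allocation happens once, in `init`;
// the render path only writes into existing storage.
final class RenderBuffer {
    private var storage: [Double]

    init(capacity: Int) {
        storage = [Double](repeating: 0, count: capacity)
    }

    // Fill the first `frameCount` samples in place; no allocation here.
    func fill(frameCount: Int, _ sample: (Int) -> Double) {
        for i in 0..<frameCount {
            storage[i] = sample(i)
        }
    }

    subscript(i: Int) -> Double { storage[i] }
}

let buffer = RenderBuffer(capacity: 512)     // allocated once, off the audio thread
buffer.fill(frameCount: 512) { Double($0) }  // reused on every render callback
```

Keeping allocation, locking, and other unbounded operations off the audio thread is what makes worst-case latency predictable.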

</details>

<details>

<summary>Functionality</summary>

Each **graph node** acts as an autonomous unit that can be integrated, replaced, or combined without affecting the system structure. This composition enables complex flows while preserving clarity and predictability. The API provides a flexible parameter system for external control and automation.

</details>

<details>

<summary>Stability</summary>

A dedicated **offline** execution mode enables validating graph behavior and running automated tests without depending on real-time constraints. Visual tools —such as **FFTs**, spectrograms, and meters— are included, along with a continuous logging and diagnostics system that ensures traceability and early anomaly detection.
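A trivial sketch of the idea (illustrative names only, not the I/O API): because offline processing has no device or clock dependency, output buffers can be asserted directly in automated tests.

```swift
// Hypothetical offline pass: apply a gain and measure the peak, then
// check the result without any real-time audio infrastructure.
func processOffline(input: [Double], gain: Double) -> [Double] {
    input.map { $0 * gain }
}

func peak(of buffer: [Double]) -> Double {
    buffer.map(abs).max() ?? 0
}

let rendered = processOffline(input: [1.0, -1.0, 0.5], gain: 0.5)
let peakValue = peak(of: rendered)
// rendered == [0.5, -0.5, 0.25], peakValue == 0.5
```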

</details>

#### Graph Architecture

The architecture allows modeling the flow as a network of operations where **nodes** can be connected or disconnected dynamically without interrupting execution.

This flexibility is paired with *sample-accurate* event scheduling, ideal for automation or synchronized playback. The pull-based design ensures efficiency, and support for **subgraphs** enables building reusable and composable processing blocks.

**Core Concepts**

* Directed graph: operations and connections defining the audio flow.
* Dynamically connectable nodes: add or remove nodes without audio interruption.
* Sample-accurate scheduling: precise execution of events on the audio timeline.
* Pull-based processing: downstream nodes request data from their upstream inputs on demand.
* Subgraph/modular support: encapsulated reusable sections.

Together, these elements enable a flexible and stable processing flow. The result is a modular system capable of adapting to any audio scenario.
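To make sample-accurate scheduling concrete, here is a minimal sketch (hypothetical types; the real API will differ): events carry an absolute sample time, and the engine computes the exact frame offset inside each block where they must fire.

```swift
// Illustrative only: `ScheduledEvent` and `eventOffsets` are made-up names.
struct ScheduledEvent {
    let sampleTime: Int  // absolute position on the audio timeline, in samples
}

// Frame offsets within [blockStart, blockStart + frameCount) where events fire.
func eventOffsets(blockStart: Int, frameCount: Int, events: [ScheduledEvent]) -> [Int] {
    events
        .filter { $0.sampleTime >= blockStart && $0.sampleTime < blockStart + frameCount }
        .map { $0.sampleTime - blockStart }
}

let events = [ScheduledEvent(sampleTime: 100), ScheduledEvent(sampleTime: 700)]
let offsets = eventOffsets(blockStart: 0, frameCount: 512, events: events)
// Only the event at sample 100 lands in this 512-frame block.
```

Splitting a block at these offsets is what lets automation and synchronized playback land on the exact frame rather than the nearest block boundary.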

{% hint style="success" %}
There are *alternative architectures*, such as *linear pipelines*, *bus-based models*, or *event-driven systems*. However, the *graph-based* (**DAG**) model has become the industry standard due to its powerful combination of flexibility, scalability, and high performance.
{% endhint %}
