Charity builds synced internet radio with PHP and maths - no streaming infrastructure

A charity organisation needed synchronized radio playback across all listeners without budget for Icecast or WebRTC. The solution: schedule-driven MP3 serving with client-side offset calculations using PHP, JavaScript, and MySQL. It's clever, but it won't scale.

The Problem

A charity wanted proper radio synchronization on their website - all listeners hearing the same content at the same position, mid-track if necessary. The constraint: no streaming infrastructure budget. No Icecast, Shoutcast, or WebRTC.

The solution uses PHP, JavaScript, MySQL, and time-based mathematics. It's worth understanding because it shows what's possible with minimal infrastructure, and where the approach breaks down.

How It Works

The architecture has four components:

  1. Admin upload - Media files go into a directory; ffprobe extracts each file's duration
  2. Schedule management - MySQL stores when each track plays
  3. Now-playing API - PHP calculates what should be playing and at what offset
  4. Client player - JavaScript fetches the API, loads the MP3, seeks to the calculated position, plays

No WebSockets. Just an HTTP API and HTML5 audio elements.
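
A minimal sketch of what that API contract might look like, assuming a PHP endpoint returning JSON; the file path and offset below are hardcoded placeholders, since the real values come from the schedule lookup described next:

    <?php
    // now-playing.php - sketch of the API contract only, not the charity's code.
    // The real endpoint would query the MySQL schedule; values here are hardcoded
    // placeholders so the example stands alone.
    header('Content-Type: application/json');

    $nowPlaying = [
        'file'   => '/media/morning-show.mp3',  // MP3 the client should load
        'offset' => 750,                        // seconds to seek before playing
    ];

    echo json_encode($nowPlaying);

The client's only job is to fetch this JSON, load the file into an audio element, set currentTime to the offset, and call play().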

The clever bit is the sync algorithm. For scheduled content, it's simple math: offset = current_time - schedule_start_time. A 30-minute programme scheduled for 14:00 will be 12.5 minutes in at 14:12:30.
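
In PHP that calculation is a one-liner; this sketch uses Unix timestamps and the article's 14:00 / 14:12:30 example (the function name and date are illustrative):

    <?php
    // Scheduled-programme offset, per the article's formula:
    // offset = current_time - schedule_start_time (both Unix timestamps).
    function scheduled_offset(int $scheduleStart, ?int $now = null): int
    {
        $now = $now ?? time();
        return $now - $scheduleStart;   // seconds into the programme
    }

    // A programme scheduled for 14:00, checked at 14:12:30,
    // is 750 seconds (12.5 minutes) in.
    $start = strtotime('2024-06-01 14:00:00');
    $now   = strtotime('2024-06-01 14:12:30');
    echo scheduled_offset($start, $now);   // 750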

For loop/filler content between scheduled programmes, it uses a fixed reference timestamp ("loop epoch") stored in the database. Total loop playlist duration is known. The calculation: time_since_epoch % total_duration gives a position in the loop, then the system walks through tracks until it finds which one contains that position.
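
A sketch of that modulo-and-walk step in PHP; the function name, field names, and track list are illustrative, with durations being the per-file lengths (in seconds) that ffprobe reported at upload:

    <?php
    // Filler-loop position: time since the loop epoch, wrapped into one pass of
    // the playlist, then walked track by track to find the current file and offset.
    function loop_position(int $loopEpoch, array $tracks, ?int $now = null): array
    {
        $now = $now ?? time();
        $totalDuration = array_sum(array_column($tracks, 'duration'));

        // Position within a single pass of the loop playlist.
        $remaining = fmod($now - $loopEpoch, $totalDuration);

        // Walk the playlist until we reach the track containing that position.
        foreach ($tracks as $track) {
            if ($remaining < $track['duration']) {
                return ['file' => $track['file'], 'offset' => $remaining];
            }
            $remaining -= $track['duration'];
        }
    }

    // Example: a three-track filler loop running since a fixed epoch.
    $tracks = [
        ['file' => '/media/filler-1.mp3', 'duration' => 180.0],
        ['file' => '/media/filler-2.mp3', 'duration' => 240.5],
        ['file' => '/media/filler-3.mp3', 'duration' => 300.0],
    ];
    print_r(loop_position(strtotime('2024-01-01 00:00:00'), $tracks));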

The Trade-offs

This works for low-traffic charity sites. It fails for anything approaching scale:

  • No CDN support - Every client hits origin for full MP3 files
  • Client clock drift - Browser time variance breaks sync (Web Audio API outputLatency can help but adds complexity)
  • Network jitter - Variable fetch times mean listeners aren't actually synchronized
  • Browser decode latency - Different browsers decode MP3s at different speeds
  • No adaptive bitrate - Mobile listeners with poor connections stall

The implementation skips the audio synchronization libraries (like synaudio for correlation-based alignment) that would tighten multi-client sync.

What This Means

For CTOs evaluating audio sync requirements: this approach proves you don't always need managed services like Twilio Sync or streaming infrastructure. The math works. But the implementation assumes stable clients, low concurrency, and tolerance for drift.

The real question is whether "synchronized" actually needs to be sample-accurate (multiplayer games, collaborative audio editing) or just "close enough" (community radio with 50 concurrent listeners).

History suggests DIY audio sync gets replaced by proper infrastructure as usage grows. The charity will likely migrate to a streaming service when listener numbers justify the cost. Until then, this holds.

Three things to watch: client-side Web Audio API adoption for lower latency, browser support for network clock synchronization primitives, and whether WebTransport changes the calculus on DIY streaming.