I design and build humane interfaces – mostly in Swift (iOS & macOS), sometimes in Rust and Python.
My work bridges HCI research, developer tools, and new ways of working with AI.
📍 San Francisco
- Stitch – founding engineer of this open-source tool for designers
- Chroma-Swift – Swift package for Chroma's on-device database engine
- TiktokenSwift – Swift bindings for OpenAI's tiktoken via UniFFI
- Roboflow Swift SDK – first SDK for running Roboflow-trained models on iOS
- AudioKit – helped launch this open-source audio synthesis/analysis framework
- Visual iMessage – what if Siri could describe images in a thread?
- Diffusion Demo – SwiftUI interfaces for Inception Labs' model
- ASL Classifier – detecting ASL signs on-device with CoreML
- Emulating Touché – open-source capacitive sensing with plants & water
- O Soli Mio – radar-powered gestural interfaces for music
- Whistlr – contact sharing over audio on iOS
- Push-to-Talk Chat – lightweight audio chat app
- Plus experiments with BLE sensors, CoreML sound recognition, LED control, and more (archive)
- O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction (NIME 2017)
- Investigation of the Use of Multi-Touch Gestures in Music Interaction (MSc Thesis, University of York)
Send me a note at nicholasarner (at) gmail (dot) com, or find me on Twitter.






