A crowd dances in red-violet light to deep electronic bass fused with ethereal melodies and Thai rhythmic patterns. In the background, visuals show puppeteers performing Thai shadow theatre behind a backlit screen. This is the show of ญาบอยฮานอย yaboihanoi, who has been invited to Barcelona’s Sónar Festival to present AI-assisted compositions that would have been technically impossible just five years ago.
A few weeks later, back in his Bangkok home, he opens an AI music program and plays a metallic sound. “I love these tones because it's really hard for traditional synthesizers to create a metallic sound, but AI models are surprisingly good at it,” explains yaboihanoi, whose real name is ลำธาร หาญตระกูล Lamtharn “Hanoi” Hantrakul. “Many people ask me, ‘Which Thai instrument did you sample?’ But I didn't sample anything; it’s all completely imagined by AI, but still sounds authentically Southeast Asian.”
He tunes all his sounds, from melody to bass, to match the notes of Thai instruments. This isn't just an aesthetic choice; it’s a technical necessity to overcome cultural bias in AI models, a challenge that took years to solve.
“Many AI models only understand Western composers like Bach or Western pop music, because they’re trained on the Western tuning system called equal temperament, based on the notes of a piano,” explains Hanoi. “Thai music, however, uses a different one: the octave is divided into roughly 7 principal notes. Most AI models, when trained on symbolic note data, assume Western notation. So there’s simply no way to input music played on, for example, a Thai gong instrument like the ฆ้องวง Kong Wong, because the notes just don’t align with the 12 notes on a piano.”
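To make the mismatch concrete, here is a minimal sketch (not from the article) comparing 12-tone equal temperament, which most symbolic music AI assumes, with an idealized seven-way equal division of the octave sometimes used to approximate Thai tuning. The reference pitch and the strict equal division are illustrative assumptions; as Hanoi notes, real ensembles vary by instrument and maker.

```python
import math

A4 = 440.0  # Western concert pitch in Hz, used here only as a shared starting point

def equal_division(base_hz, steps_per_octave, step):
    """Frequency `step` steps above `base_hz` when the octave is split into equal parts."""
    return base_hz * 2 ** (step / steps_per_octave)

# The 13 piano notes from A4 up to A5 (12-tone equal temperament).
piano_octave = [equal_division(A4, 12, s) for s in range(13)]

# An idealized 7-note Thai-style octave starting from the same pitch.
thai_like_octave = [equal_division(A4, 7, s) for s in range(8)]

for step, freq in enumerate(thai_like_octave):
    nearest = min(piano_octave, key=lambda p: abs(math.log2(freq / p)))
    offset_cents = 1200 * math.log2(freq / nearest)  # 100 cents = one piano semitone
    print(f"7-note step {step}: {freq:7.2f} Hz ({offset_cents:+5.1f} cents from nearest piano note)")
```

Apart from the shared starting pitch and the octave, every step lands roughly 14 to 43 cents away from the nearest piano key, which is why note-based models have nowhere to put these pitches.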
ญาบอยฮานอย yaboihanoi's live set at Sónar by Day in June 2025.
Tone Transfer: Transforming Sounds into Instruments
Hanoi realized the solution wasn't to teach AI models new notation systems; it was to bypass notation entirely through Audio AI, which models and analyzes sound as raw audio, with frequencies and pitch measured in hertz.
The breakthrough came in 2018 on Google's Magenta team, where he worked as an AI Resident and co-authored the open-source Differentiable Digital Signal Processing (DDSP) library. Hanoi and his colleagues built a tool called Tone Transfer, which uses machine learning to re-synthesize the melody, pitch, and expressive elements of any sound into an instrument. For example, it can take someone's singing and transform it into the sound of a cello. Hanoi also fed in birdsong and converted it into the sound of a flute. The result was a flute performance that captured all the subtle tonal variations of the original bird song.
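As a rough illustration of this audio-first approach (this is not the Tone Transfer or DDSP code itself, and DDSP pairs its synthesizer with a learned pitch estimator), the sketch below tracks the pitch of a recording as a raw contour of Hz values, so no note names or Western notation are ever involved. The filename is a placeholder for any monophonic recording, such as singing, birdsong, or a Thai instrument.

```python
import librosa
import numpy as np

# Placeholder path: any monophonic recording (voice, birdsong, a Thai gong melody).
y, sr = librosa.load("recording.wav", sr=16000, mono=True)

# pYIN estimates a fundamental frequency (f0) for every short analysis frame.
f0, voiced, _ = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz lower bound of the search range
    fmax=librosa.note_to_hz("C7"),  # ~2093 Hz upper bound
    sr=sr,
)

# The melody is now just a curve of frequencies over time, with no assumption
# that those frequencies line up with the 12 notes of a piano.
voiced_f0 = f0[voiced]
print(f"{len(voiced_f0)} voiced frames, median pitch {np.median(voiced_f0):.1f} Hz")
```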
“That was when I had an aha moment. If it can follow the pitch of the input so accurately, that means I can give it melodies that have been played on Thai instruments, and it will also track the pitch of that really well, too.”
In music, pitch is measured in hertz (Hz), a measure of how fast the sound wave vibrates. For example, the “A” note in Western music vibrates at 440 Hz, while a note from a Thai instrument may vibrate at, say, 487 Hz, a frequency that falls between the notes of a piano and varies with the maker of the instrument.
Tone Transfer can take that 487 Hz note and change the way it sounds without altering its pitch. That was a game-changer for Hanoi: he could now take melodies from Thai music and experiment with them in ways that weren't constrained by Western notation systems, giving him new tools to reimagine and challenge Thai classical music.
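Below is a deliberately simplified sketch of that pitch-preserving idea, far simpler than the neural synthesizers behind Tone Transfer: given an f0 contour in Hz (a steady 487 Hz note here, though a frame-by-frame contour like the one extracted above would work once upsampled to the sample rate), additive synthesis renders a different timbre that still follows the same pitch. The harmonic recipe is a hard-coded assumption; DDSP-style models learn it from recordings of the target instrument.

```python
import numpy as np

def resynthesize(f0_hz, sr=16000, harmonic_gains=(1.0, 0.5, 0.33, 0.25)):
    """Additive resynthesis: stack a few harmonics on top of a per-sample f0 contour."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    phase = 2 * np.pi * np.cumsum(f0_hz) / sr  # integrate frequency to get phase
    audio = sum(gain * np.sin(k * phase) for k, gain in enumerate(harmonic_gains, start=1))
    return audio / np.max(np.abs(audio))

# One second of a steady 487 Hz tone: editing `harmonic_gains` changes the timbre,
# while the pitch of the result stays exactly at 487 Hz.
tone = resynthesize(np.full(16000, 487.0))
```

Changing the gains (or adding filtered noise, as DDSP does) reshapes the sound without ever touching the 487 Hz fundamental.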
"Being able to write music that sounds super electronic but is also very Thai at the same time, that wouldn't have been possible with the tools available just five years ago," says Hanoi.

Enter Demons and Gods: AI in Music and Dance
One of the songs he created using Tone Transfer, อสุระเทวะชุมนุม “Enter Demons and Gods,” won Hanoi first place at the AI Song Contest 2022. The competition invites artists from around the world to submit AI-assisted compositions.
Since then, the AI Song Contest has brought him not only attention but also new collaboration opportunities, including with Thai dancer and choreographer พิเชษฐ กลั่นชื่น Pichet Klunchun, known for his contemporary approach to the classical Thai mask dance, โขน Khon. Together with MIT researcher พัทน์ ภัทรนุธาพร Pat Pataranutaporn, Klunchun has worked on a project called Cyber Subin, which explores the concept of “Human-AI co-dancing,” pairing human dancers with virtual dance partners powered by AI models trained on dance principles.
“Pat showed Pichet my Enter Demons and Gods track, and he was very interested in the music,” says Hanoi. “We were talking about what it means to use technology to reimagine dance and music, and we had very similar ideas. And so Pichet asked if I was interested in writing songs for this particular performance and directing all of the music and sound related to the project.”
For the collaboration, Hanoi delved deep into the rhythmic cycles of classical Thai Khon dance. Demons and monkeys form the core characters of the performance, and each role has its own movement style, costumes, masks, and melodies. The demons ยักษ์ Yak dance in an assertive, powerful style, while the monkeys ลิง Ling move acrobatically and playfully.
“This project gave me the opportunity to look at rhythm and how it interacts with the dancers' bodies. I really enjoy writing music and then working with choreographers and I also love working with sculptors or visual artists. So I think going forward, I'm excited to go in cross-disciplinary directions to create performances,” he says.
What excites Hanoi most is watching how his work enables other artists to rethink their own practices and blur disciplinary boundaries in performances that combine live music, dancers, projections, and AI-generated compositions. "It's closer to a มหรสพ Mahorasop, which is an opera-like spectacle,” he says, “than just a musical performance.”

In addition to the project with Pichet Klunchun, Hanoi works with other dance troupes, including คิดบวกสิปป์ Kid Buak Sipp, which specializes in the Thai shadow theatre called หนังใหญ่ Nang Yai, originally performed in rice fields or on temple grounds with a bonfire and a large cloth screen. Hanoi’s multi-modal approach reimagines the art form through electronic music, live projections, and dance.
"The songs I wrote for คิดบวกสิปป์ Kid Buak Sipp also formed the basis for my performance at Sónar Festival in Barcelona. I was nervous because it was my first time performing in Europe, and I wasn’t sure if my take on Thai electronic music would resonate with a Western audience."
The Ethics of AI in Music: Culture at Risk?
Looking back, the fact that it did resonate was a full-circle moment for Hanoi: he had found a way to bring Western technology and Southeast Asian music into dialogue with each other.
However, when asked if AI has the potential to make the music industry more inclusive, he pauses for a moment and says: “The synthesizer, the drum machine, the use of Ableton on laptops instead of needing a big studio, these were all tools that eventually democratized music and allowed more people to express themselves, which is a wonderful thing. Before ChatGPT, I thought it was very important for a machine learning model to also be able to ingest music from Southeast Asia and understand, as well as create, music that is representative of the region. But in a post-ChatGPT world, there's a part of me that is also very worried about that.”
He is referring to the controversy around Studio Ghibli, when users prompted ChatGPT to copy the studio’s iconic style and generate arbitrary images with it. To Hanoi, who considers Studio Ghibli an intangible cultural heritage of Japan, this was a moment that made him question what it meant for culture to be ingested into a machine learning model. “How would I feel if Thai classical art forms were ingested into the model like Studio Ghibli, but then people start using it to generate all these other images in that style, completely out of context and in a very inappropriate manner? If that's what is possible, do I really want my materials and the traditions of my home culture to be used in that way?”
Co-Creating to Break New Musical Ground
At the same time, Hanoi also views AI as the next logical step in the evolution of electronic music, and he believes that technological breakthroughs have enabled artists to achieve musical breakthroughs as well.
“Without the electric guitar, Rock and Roll would never have existed. Without the synthesizer, there would be no Thriller by Michael Jackson. Without the laptop and Ableton, there would be no Skrillex or Dubstep. Without the drum machine, there would be no J Dilla and Hip Hop. So I think AI tools will naturally become part of an artist's workflow.”
The technology has enabled Hanoi to generate sounds that are impossible to achieve with traditional synthesizers or software like Ableton. When professional musicians start using AI, they can bend the tool in unexpected ways, producing initial outputs that spark new musical ideas and set off a virtuous creative cycle: the AI-generated material inspires original compositions, which are then fed back into the model, producing new variations that the musician layers with further original elements.
"This is where exciting new genres of music can be created, genres a machine learning model couldn’t produce on its own, and that a composer alone might not have achieved. It’s this co-creation between the two systems that allows us to break new musical ground."