Co-created Concert Visuals

Created by Sean Bradford
ORIGIN STØRIES is a creative tech studio specializing in custom generation tools and experiences.

We've been testing whether it's interesting and/or impactful for audiences to co-create visuals based on aesthetics pre-designed with a musical artist.

On 7 June in Stockholm we held a livestream concert with four musical acts and a total of 27 musicians. The audience got to help make visuals for the show using generative AI.

The audience was given access to our in-house AI tool, VPM, prior to Friday evening. People watching the livestream were also able to create images that were then turned into video and displayed during the concert.

Over the past six months we have tried to contextualize the craft behind AI prompting and also solve for relevant and personal creator pain points.

Music visuals are expensive and audiences everywhere demand content. Younger generations are actively seeking out experiences, and fans of artists and brands are looking for more immersive and customized interactions.

Our current thesis around this is that it can be omnichannel almost out of the box, with opportunities for co-created and co-owned merch or collections.

Here we will discuss our key objectives, what we did and how it was received:

Contributing team:

Sean Bradford - Founder
Adam Mocarski - CTO, Founding Engineer
Bror Karlsson and Eric Andersson - Videography
 

Musical Acts:
Anton Kröll
Angie
Sean Bradford
Clarendon

Plus 23 more musicians and an A/V livestream and stage crew

Location: The Node under Sergels Torg

Our key objectives were to:

  • Test co-creating concert visuals with a live audience of 100+ people in Stockholm.
  • Broadcast a livestream of the concert and build a website allowing anyone to watch the concert and co-create visuals in near real time.
  • Onboard at least 20 artists to VPM, our prompt Swiss Army knife, and gather data to help refine quality, user needs, and user experience.

Social impressions from Swedish artist, Angie

On the technical side, we used a number of different solutions in our stack.

For transcoding and retrieval of video we use Livepeer Studio. The Stockholm event was made possible in part by a grant from Livepeer Network, which is currently building an AI Subnet. In this case the stream was delivered by a live crew on-site, who also helped with the visual display.

We used a fine-tuned SDXL model, Realistic Vision, to provide the lens for this proof of concept. Because we fine-tune again at the prompt level, we use different models depending on the general style we wish to convey. Some models are better for animation and caricature, while others are better at abstract objects, environmental settings, or human-like quality.
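As a rough illustration of this model-per-style routing (not our production code; the checkpoint names below are hypothetical placeholders), the selection can be as simple as a lookup table with a safe fallback:

```python
# Sketch: route a desired visual style to a fine-tuned base model.
# Checkpoint names are illustrative placeholders, not actual models.
STYLE_TO_MODEL = {
    "realistic": "sdxl-realistic-vision",
    "animation": "sdxl-animation-ft",   # hypothetical
    "abstract": "sdxl-abstract-ft",     # hypothetical
}

def pick_model(style: str) -> str:
    """Fall back to the realistic checkpoint when a style is unknown."""
    return STYLE_TO_MODEL.get(style, STYLE_TO_MODEL["realistic"])
```

The fallback matters in practice: audience-facing inputs are unpredictable, and an unknown style should degrade to a sensible default rather than fail.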

HOW IT WORKS

To arrive at the custom generators for each artist, we first discuss with them factors about their song, style, and storytelling vision (if there is one).

We currently gather this information via a chatbot. Once it is received we do two things:

1.  Create a series of potential questions that can be posed to the user. These questions correlate to a part of the prompt that can be adjusted for unique input.

2. Run an initial comp sheet. This can be done in any model. We prefer to use either one of the models we've integrated into VPM or Midjourney for this ideation and curation phase.

Once we are happy with both of these, we adjust the base code we have for our custom generation logic. The questions and the image-generation flow then appear together for the user in the livestream view.
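The question-to-prompt mapping described above could be sketched as slot-filling against an artist's base template. This is a minimal illustration under assumptions; the slot names, questions, and template are invented for the example:

```python
# Sketch: each audience-facing question maps to a slot in the artist's
# base prompt. Template and slot names are illustrative only.
BASE_PROMPT = (
    "concert visual for {artist}, {mood} mood, {setting} setting, "
    "cinematic lighting, 16:9"
)

QUESTIONS = {
    "mood": "What mood fits this song for you?",
    "setting": "Where does this song take place in your mind?",
}

def build_prompt(artist: str, answers: dict) -> str:
    """Fill the template with the user's answers, defaulting missing slots."""
    slots = {key: answers.get(key, "dreamlike") for key in QUESTIONS}
    return BASE_PROMPT.format(artist=artist, **slots)
```

Keeping the adjustable parts of the prompt behind named slots is what lets each artist's aesthetic stay fixed while the audience supplies only the curated variables.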

Users have a unique ID, made via Privy, which allows us to access their submissions in the backend. These submissions are handled via AWS and are segmented according to artist and song for easy access.
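One way to realize that artist/song segmentation is a hierarchical object-key scheme, which works naturally with S3-style prefix listing. The layout below is an assumption for illustration, not our actual bucket structure:

```python
# Sketch: build a storage key that segments submissions by artist,
# song, and Privy user ID. Key layout is an illustrative assumption.
def submission_key(artist: str, song: str, user_id: str, seq: int) -> str:
    """e.g. submissions/<artist>/<song>/<user-id>/0001.png"""
    slug = lambda s: s.lower().replace(" ", "-")
    return f"submissions/{slug(artist)}/{slug(song)}/{user_id}/{seq:04d}.png"
```

With keys shaped like this, pulling every submission for one act before it goes onstage is a single prefix query (`submissions/<artist>/<song>/`).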

For now, generators close 30 minutes before an act is scheduled onstage. This allows time for the next steps, which include curation, upscaling, and customization of aspect ratio.
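The cutoff logic is simple enough to sketch directly; this is an illustrative version, assuming stage times are known per act:

```python
from datetime import datetime, timedelta

# Sketch: close an act's generator 30 minutes before its stage time,
# leaving a window for curation, upscaling, and aspect-ratio work.
CUTOFF = timedelta(minutes=30)

def generator_open(now: datetime, stage_time: datetime) -> bool:
    """True while submissions are still accepted for this act."""
    return now < stage_time - CUTOFF
```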

The Stockholm event was held at The Node, a forward-thinking music venue under Sergels Torg. Because of the pre-existing screen installation, assets had to be resized to display properly behind the artist and their band.

As a result, we are still employing manual techniques to ensure precision and quality. However, we are already ideating how to implement editing windows and features that can assist in this "post-production" phase of the work.
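The core of that resizing step, fitting a generated image inside a fixed screen without distortion, reduces to a small calculation. A minimal sketch, with illustrative screen dimensions (The Node's actual installation differs):

```python
# Sketch: largest size that fits a source image inside a target screen
# while preserving aspect ratio (letterbox-style fit, no cropping).
def fit_within(src_w: int, src_h: int, screen_w: int, screen_h: int):
    """Return (w, h) scaled to fit entirely within the screen."""
    scale = min(screen_w / src_w, screen_h / src_h)
    return round(src_w * scale), round(src_h * scale)
```

For example, a square 1024×1024 generation shown on a 1920×1080 screen would scale to 1080×1080, with the leftover width filled or extended in post.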

[WATCH THE HIGHLIGHT REEL](https://vpm.studio/live/stockholm-2024)

  • 27 musicians playing
  • 3 local music acts and 1 international act
  • 100k+ Social Impressions
  • 100+ guests
  • 500+ generations made for the livestream
  • 200+ livestream views
  • 20% of guests made at least one visual

We iterated on a Midsommar theme for a customizable Telegram bot, which we debuted during Cannes Lions.

Broader interest from audiences and Gen Z suggests this is a fun way to interact with live events.

We also saw some brand interest in piloting, signaling appetite for custom-generated, co-created, and co-owned experiences.

Attributes

Media
Culture & Leisure, Education, Innovation, Marketing & Communications, Production
New Business
Experimenter, Practitioner, Shaper
Creation, Optimization
Clustering, Generative AI, ML Ops, Machine Learning, Multimodal, NLP