Personalization Is More Than What You Think It Can Be

There has been a lot of talk over the years about making the streaming video experience more “personalized.” But what does that really mean? For the most part, personalization has been about content recommendations. When the viewer can come to the streaming platform and see content that might be more relevant to their likes or behavior, it can feel like the platform is more personalized for them and offers a different experience than another viewer might have.

Personalization has been applied to elements within the video experience as well. For example, ads are targeted based on demographic or geographic data, either within the video asset itself or in the area surrounding it. But true video personalization means modifying the video itself to suit the viewer: swapping out colors, switching out imagery (for example, a specific brand), and even altering the video flow based on a user's specific predilections.

But that vision has stumbled over the years. Throughout the past 15 years, I have seen plenty of attempts at dynamic image recognition (through sequential frames) that simply failed to materialize in a meaningful way. Oftentimes, they required player plugins or other proprietary technology.

As the executive director of the SVTA, I get to see a lot of up-and-coming technology providers, especially as part of our grant program, which focuses mainly on startups. One of those companies, Infuse Video, has demonstrated the next phase of personalizing the video experience: dynamically replacing elements of the video itself.

Infuse Video uses a combination of stitching and just-in-time rendering to deliver custom-tailored experiences to the viewer. The platform utilizes segmented streaming and deduplicates segments that are common for multiple users, thereby enabling audience segmentation and personalization at a sustainable scale.
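To make the deduplication idea concrete, here is a minimal sketch (my own illustration, not Infuse Video's actual architecture): segments are stored once, keyed by content hash, and each viewer gets a playlist that references shared segments alongside variant segments chosen from their attributes.

```python
import hashlib

# Hypothetical segment store: identical segment content is kept only once,
# keyed by its content hash. Per-viewer playlists just reference the keys.
segment_store: dict[str, bytes] = {}

def store_segment(data: bytes) -> str:
    """Store a segment, deduplicating identical content across viewers."""
    key = hashlib.sha256(data).hexdigest()
    segment_store.setdefault(key, data)
    return key

def build_playlist(template: list, viewer: dict) -> list[str]:
    """Resolve a template into a per-viewer list of segment keys.

    Each slot is either a shared segment (bytes) or a dict mapping one
    viewer attribute to its variants, e.g. {"region": {"us": ..., "eu": ...}}.
    """
    playlist = []
    for slot in template:
        if isinstance(slot, dict):
            attr, variants = next(iter(slot.items()))
            data = variants[viewer[attr]]  # pick the variant for this viewer
        else:
            data = slot  # shared segment, identical for everyone
        playlist.append(store_segment(data))
    return playlist

# Toy example: intro and outro are shared; the middle segment varies by region.
intro, outro = b"intro-segment", b"outro-segment"
template = [intro, {"region": {"us": b"us-ad", "eu": b"eu-ad"}}, outro]

us_playlist = build_playlist(template, {"region": "us"})
eu_playlist = build_playlist(template, {"region": "eu"})
# The store holds 4 segments, not 6: intro and outro exist once each.
```

In a real deployment, the keys would map to CDN-cacheable segment URLs, so the shared segments are also cached once rather than per variant.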

Let me distill that down to a use case: enterprise marketing videos in the pharmaceutical industry. To improve the engagement of viewers, videos can't take a one-size-fits-all approach. And a higher level of engagement, such as watching the entire time and even clicking on a call to action, can equate to medication uptake: the viewer talking to their doctor about a certain medication and even asking for it to be prescribed.

But this kind of engagement requires tailoring the messaging to the viewer. So imagine that these videos are animated explainer videos. Rather than having a white man deliver the message to a Black woman, a Black woman doctor is the one delivering the information.

Of course, this can equate to hundreds or even thousands of different variations, which would be cost-prohibitive to produce manually. Infuse Video's platform allows this to happen dynamically, leveraging video "templates" (which really just represent the words and the flow) to create the different variations and stitch them together in real time based on data about the viewer. This can possibly leverage generative AI (when using digital avatars). The different variants can be created around B-roll or other filmed footage produced by an agency. Think about a car model being featured with a different color depending on the viewer.

Studios could make use of this as well by allowing their viewers, through a set of clickable filters in a streaming player, to edit out elements of a movie. Don’t want nudity? The viewer could click that filter, and a platform like Infuse Video would stitch in alternate frames (which may include objects blocking the questionable elements).

With advances in generative AI, just-in-time transcoding, SSAI stitching, and other streaming video tech stack components, companies like Infuse Video are demonstrating that the true vision of video personalization, changing the video content itself, is finally at hand.

Go ahead, enterprise video marketers and movie studios—take my data. If it means I get content dynamically built for me, it’s well worth the spam.