IBM and Wimbledon are leveraging the former’s Watson AI platform to develop highlight packages for the tennis tournament. Watson is able to compile video snapshots of key action across all of the tournament’s courts in a fraction of the time it would take a human editor to do so.
Watson identifies moments of excitement based on changes in crowd volume, player reactions, and other characteristics of the source video. Additionally, Wimbledon is using Watson to automatically tag videos and images to organize its database of content from the tournament. These processes were once the sole domain of entry-level employees. Now, brands and events like Wimbledon can efficiently mine meaningful content through intelligent systems, creating a broad portfolio of short-form content for digital media channels.
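The core idea of ranking video segments by an "excitement" signal can be sketched in a few lines. This is an illustrative toy, not IBM's actual pipeline: the `Segment` structure, the normalized signal values, and the 0.6/0.4 weights are all assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A short slice of match video with simple per-segment signals."""
    start_sec: int
    crowd_volume: float     # normalized 0..1, e.g. from audio analysis
    player_reaction: float  # normalized 0..1, e.g. from gesture detection

def excitement_score(seg: Segment, baseline_volume: float) -> float:
    """Combine a crowd-volume spike with player reaction into one score.

    The 0.6/0.4 weights are arbitrary placeholders for tuned model weights.
    """
    volume_spike = max(0.0, seg.crowd_volume - baseline_volume)
    return 0.6 * volume_spike + 0.4 * seg.player_reaction

def pick_highlights(segments: list[Segment], top_n: int = 3) -> list[Segment]:
    """Rank segments by excitement and keep the top few for a highlight reel."""
    baseline = sum(s.crowd_volume for s in segments) / len(segments)
    ranked = sorted(segments,
                    key=lambda s: excitement_score(s, baseline),
                    reverse=True)
    return ranked[:top_n]

segments = [
    Segment(start_sec=0,  crowd_volume=0.3, crowd_volume_=0.0) if False else
    Segment(start_sec=0,  crowd_volume=0.3, player_reaction=0.1),
    Segment(start_sec=30, crowd_volume=0.9, player_reaction=0.8),  # the big point
    Segment(start_sec=60, crowd_volume=0.4, player_reaction=0.2),
]
best = pick_highlights(segments, top_n=1)
print(best[0].start_sec)  # the segment where the crowd erupted
```

The point of the sketch is the shape of the problem: once per-segment signals exist, highlight selection reduces to scoring and sorting, which is exactly the kind of tireless, repetitive work a machine does faster than an editor scrubbing through footage.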
Imagine you’re a brand with an enormous library of archived content. You have a digital signage system, but you don’t know what to do with it, and you’re spending endless hours feeding the content beast. My recommendation would be to look at the content partnership between IBM and Wimbledon. It is a powerful illustration of how AI can be leveraged to bring incredible value to rights holders of long-form content. It highlights ways that machine learning can be applied to videos and news stories to fulfill the needs of media platforms that rely on shorter content formats.
Companies like Screenfeed in the digital signage space have been doing this successfully for years through the utilization of designers and human intelligence. Imagine a hybrid system that employs a blend of computer intelligence and human decision making. Large video libraries could be transformed into collections of short-form content in a fraction of the time. Systems like Watson can perform the mundane tasks of tagging and organizing content, so that designers can leverage the full extent of their creative talents. I believe that this hybrid machine learning plus “human touch” model will be a fundamental component of content creation and editing for the digital signage industry. Applying machine intelligence to these types of content problems will free creatives to do what they do best.
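One way to picture that hybrid workflow: the machine tags whatever it can with confidence, and everything else lands in a human review queue. The helper names here are hypothetical, and a deliberately naive keyword matcher stands in for what would really be a vision or audio model.

```python
def auto_tag(asset_name: str, known_tags: set[str]) -> set[str]:
    """Naive machine step: propose tags by keyword match on the filename.

    A real system would run a trained model over the media itself;
    this placeholder just illustrates where that step sits in the flow.
    """
    name = asset_name.lower()
    return {tag for tag in known_tags if tag in name}

def triage(assets: list[str], known_tags: set[str]):
    """Split assets into machine-tagged items and a human review queue."""
    machine_tagged, needs_human = [], []
    for asset in assets:
        tags = auto_tag(asset, known_tags)
        if tags:
            machine_tagged.append((asset, tags))  # designer can trust/refine these
        else:
            needs_human.append(asset)  # only ambiguous items reach a person
    return machine_tagged, needs_human

assets = ["federer_ace_match_point.mp4", "crowd_b_roll_017.mp4"]
tagged, queue = triage(assets, known_tags={"ace", "match"})
print(tagged)  # the ace clip, pre-tagged
print(queue)   # the b-roll, routed to a human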
I hate seeing digital signage networks dominated by content that was created for another purpose. How many fashion show videos have you seen in high-end apparel stores? The worst culprits are brands that merely repurpose their TV spots for distribution in-store.
In theory, you could create an AI system that renders key elements from these content stacks to create videos, images, and overall packages that are better suited to digital signage screens. Cherry-picking the best elements from a mountain of existing content lessens the overall cost while taking advantage of existing assets. Systems like Watson ensure that such tasks don’t take days, weeks, or months, but are completed in a matter of hours. Such is the promise of machine learning when applied to digital signage’s content problem.