Transforming media asset management and quality control
The DPP and Digital Catapult, with the support of Microsoft and Cognizant, have launched an initiative to identify technology startups and scaleups working in artificial intelligence and machine learning that can help transform media content creation and distribution, including media asset management and quality control. “Companies with bespoke solutions pushing the limits of how AI can be applied in media will be highly favoured”.
On Tuesday 28 July from 15:00 to 17:00 and Wednesday 29 July from 16:00 to 17:50, companies that meet the criteria will have the chance to pitch their solutions to organisations including Sky, IMG, Vice Media and the BBC. Applications for startups are now open and close on Wednesday 1 July.
The challenge
Content producers and broadcasters must efficiently manage large volumes of content (video, audio, timed text, graphics, etc). They need to identify, organise, manage, monetise and store this content in an effective way.
Due to the scale and quantity of media content created, producing and managing this content is complex, involving many processes that still require manual, repetitive tasks. AI and ML can help automate these repetitive processes and enable more capable solutions.
There are two key challenge areas:
- Enriching video content with relevant metadata/tags
- Quality control of audio visual data
Challenge one: Enriching video content with relevant metadata/tags
The use of artificial intelligence to analyse content, offer insights and recommend actions, using technologies such as computer vision and speech-to-text for accurate content tagging and validation.
Examples:
- Content identification and reporting: Assist in production reporting by identifying music, graphics, and contributors (for example, specific actors) from the content. This can accelerate production activities like video editing and rights reporting
- Automated content clipping for monetisation: Assist in accurately analysing large volumes of video data to then clip out footage which could have sales value. For example, sophisticated search of an archive based on metadata/tags which are generated from the video content itself
- User generated content tagging: Assist in tagging user generated content to surface the most relevant material for a given editorial use, without producing overwhelmingly large volumes of data
- User generated content provenance: Assist with validating the content’s provenance – source, location, authority. For example: is a video clip claiming to be from a reputable broadcaster genuinely from that broadcaster? Is a piece of user generated content actually showing what it claims to?
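As a toy illustration of the tagging idea above, the sketch below derives metadata tags from a speech-to-text transcript by simple keyword matching. The keyword lists and function names are entirely hypothetical; a real system would use trained language or entity-recognition models rather than hand-written word sets.

```python
# Toy sketch (hypothetical keyword lists): deriving metadata tags
# from a speech-to-text transcript via simple keyword matching.
# A production tagger would use trained NLP/entity-recognition models.

TAG_KEYWORDS = {
    "sport": {"match", "goal", "league", "championship"},
    "news": {"breaking", "report", "correspondent"},
    "music": {"song", "album", "concert"},
}

def tag_transcript(transcript: str) -> list[str]:
    """Return sorted tags whose keywords appear in the transcript."""
    words = set(transcript.lower().split())
    return sorted(tag for tag, keywords in TAG_KEYWORDS.items()
                  if words & keywords)

print(tag_transcript("Breaking report from the championship match"))
# → ['news', 'sport']
```

Even this crude approach hints at the value: tags generated from the content itself make an archive searchable without manual logging.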
Challenge two: Ensuring quality control of audio visual data
Improve and enhance QC in a way that helps media organisations to more easily identify and correct technical or editorial problems before a piece of content is distributed or broadcast. This would replace existing manual and automated processes that are sometimes inefficient, unreliable, time consuming, and expensive.
Examples:
- Audio classification and identification: Assist in identifying the audio content from video clips or tracks. This may include identifying the audio content from clips in an edit timeline (for example, dialogue, music, effects, etc) and labelling them for easier editing, or identifying the audio tracks included with a received programme (for example, stereo, surround, mix+effects, etc)
- Content source resolution detection: Assist in detecting the resolution at which content was shot. For example, received content may be upscaled from a lower resolution, and it’s not always possible to automatically determine what the true resolution of the source content is
- Video script validation: Assist in comparing, validating, or otherwise manipulating a programme’s scripts, subtitles and audio – for example, comparing the content and highlighting differences or errors, comparing timing, or matching dialogue to the speaker (via speech transcription)
- Dynamic loudness correction: Assist in identifying and extracting dialogue from a programme audio mix, in order to maintain audibility of dialogue when automatically conforming audio to comply with loudness standards such as EBU R128
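To make the loudness-correction example concrete, here is a deliberately simplified sketch of computing the gain needed to bring a clip to a target level. EBU R128 proper specifies K-weighted, gated loudness measured in LUFS (per ITU-R BS.1770); plain RMS in dBFS is used here only as a stand-in, and the -23.0 default echoes R128's -23 LUFS programme target.

```python
import math

def rms_dbfs(samples: list[float]) -> float:
    """RMS level of float samples (range -1..1) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def gain_to_target(samples: list[float], target_dbfs: float = -23.0) -> float:
    """Linear gain that moves the clip's RMS level to the target.
    (R128 uses K-weighted, gated loudness; plain RMS is a simplification.)"""
    return 10 ** ((target_dbfs - rms_dbfs(samples)) / 20)

tone = [0.5, -0.5] * 100        # square-wave-like test signal, about -6 dBFS
gain = gain_to_target(tone)     # gain that would pull it down to -23 dBFS
```

The harder part the challenge points at – isolating dialogue from a full mix so that dialogue, rather than overall level, stays audible after normalisation – would sit in front of a measurement stage like this one.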
What do applicants stand to gain?
- An opportunity to receive valuable feedback from relevant companies
- An opportunity to receive support from Microsoft engineers, including an introduction to Azure Cognitive Services platform elements that can be used to build solutions, and assistance via one-to-one sessions with AI and cloud specialists on the day of the DPP AI in Media event
- Following the event, DPP will facilitate connections between successful startups, media companies and solution providers, encouraging and supporting collaborative relationships to solve real business problems faced by the media industry. This will provide the opportunity to develop solutions with real use cases and real data sets, creating strong commercial opportunities.
- The most successful collaborations will be invited to showcase their work at the DPP Tech Leaders’ Briefing (London, November 2020, or online depending on the COVID-19 situation)
- The DPP Tech Leaders’ Briefing is the pre-eminent business intelligence event for the media industry. It is a two-day conference and networking event bringing together the supplier and content provider community, including the BBC, Google, FOX Networks, Al Jazeera, Channel 4, Sky UK, BT Sport, CBC/Radio-Canada, Viacom and Vice Media
- This is an exclusive, DPP-members-only event
- The opportunity to develop business relationships with DPP stakeholders that could lead to potential future collaborations
- DPP will post blogs on its website sharing information about the events and linking to successful solution providers
Who should apply?
Established companies, startups, scaleups, and experts that are developing and deploying AI/ML technology solutions with the potential to solve these media asset management and quality control challenges.
Companies must have a demonstrable product/service and focus on showing their product rather than taking the audience through a slide deck. Areas of advanced digital technology applications and solutions include (but are not limited to):
- Artificial intelligence/machine learning
- Human-machine teaming/virtual assistants
- Data visualisation
- Data integration, processing and analytics
- Machine vision
- Computer vision
The solution does not need to be currently applied in the media industry: we are interested in its capabilities and potential applications.
Applicants should ensure at least one person per company is available to attend both event sessions: Tuesday 28 July from 15:00 to 17:00 and Wednesday 29 July 2020 from 16:00 to 17:50. Only team members who understand the company’s technology and product capability in depth should attend, to ensure impactful conversations with the DPP members, who are likely to delve further into how applicable the solutions could be for their organisations.
Before submitting an application, the DPP recommends reading this page about metadata and AI on its website, which provides more information about the DPP’s work and recent developments on metadata for media organisations.
Event agenda – Tuesday 28 and Wednesday 29 July
Tuesday 28 July (15:00 – 17:00)
15:00: Intro from Digital Catapult and DPP
15:10: Main event rehearsal with pitch test
16:00: Introductions to Microsoft and Cognizant, their tools and services
16:15: One-to-one sessions with Microsoft and Cognizant engineers
17:00: Close
Wednesday 29 July (16:00 – 17:50)
16:00: Welcome and introductions
16:15: Panel and presentation on the shape of AI and creative industries
16:45: Startup pitches
17:15: Discussions with media organisations
17:50: Close