London is almost always bustling with technology and media industry events, especially during the springtime run-up to London Tech Week in June. It can be an overwhelming task to choose which ones to attend and, unfortunately, sometimes the juice just isn't worth the squeeze. Especially as someone who works from home - yes, I typically do my lawyering in athleisure! - I sometimes just resign myself to watching a webinar online instead.
But not today. I’m so glad I made the effort to attend the Media Production & Technology Show 2024 (MPTS), which proved to be a brilliant trade show for anyone interested in the intersection of media and technology. MPTS is where you get a firsthand look at the innovations set to transform the industry — including AI solutions.
Exploring AI in Political Coverage
The highlight of the day for me was a keynote session featuring a conversation between British broadcaster Felicity Barr and Anushka Asthana, deputy political editor at ITV News. The two journalists discussed the potential of AI to transform political reporting, as well as the need for those in the news industry to be trusted advisers when communicating with the public. Anushka cited a report that found journalists to be among the "least trusted" of professions, and both women agreed that content verification and reliance upon reputable news sources are more important than ever, especially during this General Election year. There were also some interesting comments made about the power of AI in so-called crowdsourced investigative journalism and open-source intelligence (see also: Bellingcat).
Unsurprisingly, the conversation also touched upon my specialism of generative AI and deepfakes - not just in the context of non-consensual image-based sexual abuse ("deepfake porn") and other types of misinformation, but the positive and beneficial use cases, too. Might news programmes one day use AI to recreate important news footage where genuine content is unusable or otherwise unavailable? This remains an open question but I’d point to the incoming transparency obligations under the EU AI Act… noting, of course, that the UK regulatory approach remains to be seen!
Innovations on the Exhibition Floor
After Felicity and Anushka's presentation, I navigated through the bustling aisles of the exhibition: it was genuinely inspiring to see live demonstrations of the technologies that are shaping and improving the way in which content is conceived, produced, and distributed. It was also really humbling to consider just how many talented people are needed behind the scenes in order to put a production on screen.
The exhibition was a playground of innovation, and it was great to chat to representatives from a handful of interesting companies.
Speechmatics caught my attention with their sophisticated speech recognition technology. Their AI-driven platform offers highly accurate transcriptions and translations in real time, which could be an absolute game-changer for global media. It supports 50 languages, which is impressive by any standard, but especially so when you consider that they're really keen on pinpoint accuracy. The Cambridge-headquartered company claims the tech is incredibly accurate "regardless of the accent or dialect, even in challenging, noisy environments" and, from what I saw on a very noisy exhibition floor, they're right.
One of the coolest things they're working on is emotional sentiment analysis, which lets users capture and visualise the sentiment of the audio or text. As explained on their website, "you can quickly transcribe and extract the sentiment, either positive, negative or neutral to quickly detect areas of conversation to analyse further." This means that you can quickly - and at scale - obtain qualitative insights, which could be very beneficial when it comes to, say, analysing hundreds (or thousands) of film reviews posted online.
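To make that "insights at scale" point a little more concrete, here's a minimal sketch of the general idea - not Speechmatics' actual API, whose details I won't attempt to reproduce - using an off-the-shelf open-source sentiment classifier over a folder of already-transcribed reviews. The folder name and file layout are my own assumptions for illustration.

```python
# A minimal sketch of sentiment analysis at scale over transcribed reviews.
# This is NOT Speechmatics' API: the classifier is an off-the-shelf Hugging
# Face pipeline, and the "reviews/" folder of plain-text transcripts is a
# made-up example layout.
from collections import Counter
from pathlib import Path

from transformers import pipeline  # pip install transformers

# The default model labels text as POSITIVE or NEGATIVE (no NEUTRAL class).
classify = pipeline("sentiment-analysis")

def sentiment_tally(review_dir: str = "reviews/") -> Counter:
    """Count sentiment labels across every .txt transcript in a folder."""
    tally = Counter()
    for path in sorted(Path(review_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        # Truncate very long reviews to stay within the model's input limit.
        result = classify(text[:2000])[0]
        tally[result["label"]] += 1
    return tally

if __name__ == "__main__":
    print(sentiment_tally())  # e.g. Counter({'POSITIVE': 412, 'NEGATIVE': 88})
```

Swap the folder of text files for the output of a speech-to-text step and you get the same kind of tally for spoken audio.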
Next, I visited Moments Lab, whose technology identifies key moments within footage, making the editing process remarkably efficient. The Moments Lab team demonstrated the tech's capability to analyse 500 hours of video... per minute. That's more than 20 days of footage, analysed in 60 seconds! This powerful tool finds the exact moments you need super quickly, which could slash production times by up to 70%, according to their sales pitch.
Now, I fully appreciate the fear that many people have about the rise of AI tools potentially displacing creative jobs, or making them redundant altogether. But I also believe that tools like those developed by Moments Lab exemplify how AI can act as an enabler. These systems take over the tedious tasks of media logging and indexing, freeing up creative teams to focus on what they do best: bringing innovative and engaging content to life. Besides, we'll likely always need an experienced professional to make the final call as to whether or not a particular shot or clip is the right one to use. And if production times (and costs) are reduced, perhaps that means more content, which in turn requires more human involvement on the creative side?
A third highlight was meeting the impressive couple behind Choppity, who showcased their AI-powered content editing platform. Designed to transform long videos into short, engaging social media clips, Choppity looks a dream for podcasters, influencers, and big brands alike. The technology simplifies the editing process with machine learning algorithms that suggest edits and enhancements, then create relevant and engaging clips perfect for sharing on social media.
The demo was particularly impressive, and I thought the platform's ability to do things like detect sensitive words - not just cursing, but things that could be controversial or otherwise offensive - was an important feature. It's obvious to me that Choppity's founders really understand the market: even those with minimal editing experience will want to produce professional-quality videos quickly and efficiently. Again noting the human-vs-AI labour debate, I also thought it very appropriate that Choppity readily admitted that their technology is simply a tool that makes suggestions, and cannot - and perhaps should not - be fully relied upon to make important editorial decisions. Their website notes that "AI is powerful, but it cannot fully replace human creativity. That's why Choppity provides a professional timeline editor [...] so that you have full control to edit whatever Choppity's AI creates." With that said, in the coming days, I might actually test the tech for myself with one of my YouTube videos...
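Just to illustrate the kind of thing I mean by "detecting sensitive words" - and to be clear, this is my own toy sketch, not how Choppity actually does it - here's what flagging transcript segments against a watchlist might look like, with a human editor getting the final say before any clip goes out. The segments, timestamps, and watchlist terms are all invented for the example.

```python
# A toy illustration of the "sensitive word" idea (not Choppity's method):
# given timestamped transcript segments, flag any containing terms from a
# watchlist so a human editor can review them before a clip is published.
import re

WATCHLIST = {"lawsuit", "confidential", "allegation"}  # illustrative terms only

segments = [
    {"start": 12.4, "end": 18.0, "text": "So the allegation was never proven in court."},
    {"start": 18.0, "end": 24.5, "text": "Anyway, let's talk about the new product line."},
]

def flag_sensitive(segments, watchlist):
    """Yield (start, end, matched_terms) for segments needing human review."""
    for seg in segments:
        words = set(re.findall(r"[a-z']+", seg["text"].lower()))
        hits = words & watchlist
        if hits:
            yield seg["start"], seg["end"], sorted(hits)

for start, end, terms in flag_sensitive(segments, WATCHLIST):
    print(f"Review {start:.1f}s-{end:.1f}s: contains {', '.join(terms)}")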
When I spoke to a few people about my job, I could tell they were somewhat surprised that a lawyer would bother to attend a trade show - after all, I wasn't there to buy (or indeed sell) any of the products on offer. But understanding content production workflow is crucial for legal professionals like myself who navigate IP and contractual landscapes. If a client comes to me and asks about how ABC will impact their ability to do XYZ, I need to know what that actually means in practice!
Without a doubt, being at the Media Production & Technology Show 2024 was a great reminder of how good it feels to step out from behind the desk and into a buzzing venue filled with innovative, creative professionals. It's not just about keeping up with technology; it's about meeting the people who are driving these changes, and gaining insights that can inform how we approach our work in media.