AI in Media: What Does Good Infrastructure Look Like?

While AI-generated content and hyper-realistic deepfake videos capture headlines, they represent just the tip of the iceberg. The real transformation reshaping media lies in the infrastructure revolution happening behind the scenes. As artificial intelligence becomes central to media workflows, the underlying infrastructure requirements differ fundamentally from those of traditional broadcasting systems.

According to McKinsey analysis, AI's potential economic impact ranges from $2.6 trillion to $4.4 trillion annually across industries, but realizing this value depends on implementing infrastructure that can support AI workloads' unique demands. For media companies, this means balancing the computational requirements of AI with the real-time demands of content production and distribution.

Three essential infrastructure foundations are enabling media companies to successfully integrate AI: cloud virtualization, edge computing, and high-performance networking.

Cloud Virtualization

Traditional media workflows operated on relatively predictable schedules. Content moved through linear pipelines: production, post-production, and distribution. This predictability previously allowed companies to build infrastructure around known capacity requirements. However, cloud-based media operations had already disrupted this model before AI emerged, creating demand for dynamic scaling to handle transcoding workloads, real-time translations, and fluctuating consumer usage patterns.

AI workloads amplify these existing scaling challenges while introducing distinct computational requirements. Unlike traditional media processing that primarily demands CPU power and storage bandwidth, AI introduces a lifecycle with fundamentally different infrastructure needs. Data preparation requires massive parallel storage systems and high-bandwidth networking to process petabyte-scale datasets. Model training demands GPU clusters with specialized memory architectures and high-speed interconnects for distributed computing. Inference prioritizes low-latency response times and the ability to dynamically scale compute resources based on unpredictable request patterns.

Cloud virtualization addresses these AI-specific demands by enabling media organizations to provision the right computational resources for each stage of the AI lifecycle. Rather than maintaining expensive GPU servers that remain idle during off-peak periods, virtualized infrastructure allows seamless transitions between lightweight content analysis on standard instances and intensive model training requiring clusters of specialized accelerators.
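
To make this concrete, consider a minimal sketch of that provisioning decision. The stage names, instance families, and resource figures below are illustrative assumptions rather than any specific cloud provider's offerings; the point is that each stage of the AI lifecycle maps to a different resource profile, which is only scaled up while that stage actually runs.

```python
from dataclasses import dataclass

@dataclass
class ResourceProfile:
    instance_type: str   # hypothetical instance family name
    gpus_per_node: int
    nodes: int

# Hypothetical mapping of AI lifecycle stages to virtualized resource profiles.
STAGE_PROFILES = {
    "data_preparation": ResourceProfile("storage-optimized", gpus_per_node=0, nodes=8),
    "model_training":   ResourceProfile("gpu-cluster",       gpus_per_node=8, nodes=4),
    "inference":        ResourceProfile("low-latency",       gpus_per_node=1, nodes=2),
}

def provision(stage: str, demand_factor: float = 1.0) -> ResourceProfile:
    """Return a resource profile for a lifecycle stage, scaled by current demand.

    demand_factor stands in for autoscaling signals such as queue depth or
    concurrent viewer counts during a live event.
    """
    base = STAGE_PROFILES[stage]
    return ResourceProfile(
        instance_type=base.instance_type,
        gpus_per_node=base.gpus_per_node,
        nodes=max(1, round(base.nodes * demand_factor)),
    )

if __name__ == "__main__":
    # Triple inference capacity for a live event, then scale back down afterwards.
    print(provision("inference", demand_factor=3.0))
    print(provision("inference", demand_factor=0.5))
```

In practice this logic lives inside an orchestration or autoscaling layer rather than application code, but the principle is the same: pay for GPU clusters only during training windows, and for lightweight, low-latency capacity the rest of the time.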

The economic transformation is remarkable: AI inference costs have dropped dramatically, with the cost per million tokens falling from $20 to $0.07 in less than three years. This cost reduction, combined with AI's capability advancements and cloud virtualization's flexibility, enables media operations that were previously uneconomical at scale, such as AI-enhanced real-time personalized content generation and automated live event analysis.
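
A rough back-of-the-envelope calculation illustrates the scale of that shift. The workload below is purely hypothetical (one million viewers, about 2,000 tokens each); only the per-million-token prices come from the cost trend cited in the references.

```python
# Illustrative only: the same workload priced at the old and new per-million-token rates.
OLD_PRICE_PER_MILLION_TOKENS = 20.00  # USD, roughly three years ago
NEW_PRICE_PER_MILLION_TOKENS = 0.07   # USD today

# Hypothetical workload: personalized highlight summaries for 1 million viewers,
# at about 2,000 tokens of input and output per viewer.
total_tokens = 1_000_000 * 2_000

old_cost = total_tokens / 1_000_000 * OLD_PRICE_PER_MILLION_TOKENS
new_cost = total_tokens / 1_000_000 * NEW_PRICE_PER_MILLION_TOKENS

print(f"At $20 per million tokens:   ${old_cost:>10,.2f}")
print(f"At $0.07 per million tokens: ${new_cost:>10,.2f}")
```

At the old price this hypothetical run costs about $40,000; at the new price it costs roughly $140, which is the kind of difference that moves per-viewer personalization from experiment to production budget.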

Edge Computing

Forrester's 2025 predictions highlight edge AI as a force transforming how media companies deploy AI-powered applications. As AI shifts from centralized clouds to distributed edge environments, the difficulty is no longer just model training; it is scaling inference efficiently across diverse media production environments.

For media and broadcast applications, proximity between data sources and processing locations is critical. Without it, latency introduces delays that render real-time insights obsolete and undermine the accuracy and effectiveness of AI models. This proximity requirement drives the need for edge infrastructure that can support AI workloads where content is created and consumed.

Edge AI delivers specific benefits for media applications by running inference locally, either on devices or at edge locations, providing immediate insights to end users while improving system-level economics. This approach is particularly crucial for live broadcasting applications where real-time processing cannot tolerate cloud round-trip delays.
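
A simple latency-budget sketch shows why. All figures below are assumptions for illustration, not measurements: a 50 fps broadcast leaves roughly 20 ms per frame, and a typical cloud round trip alone can consume more than that.

```python
# Illustrative latency budget for per-frame AI inference on a live feed.
# All figures are assumptions for the sake of the example, not measurements.

FRAME_BUDGET_MS = 1000 / 50  # a 50 fps broadcast leaves ~20 ms per frame

def total_latency_ms(network_rtt_ms: float, inference_ms: float) -> float:
    """Round-trip network time plus model inference time."""
    return network_rtt_ms + inference_ms

scenarios = {
    "cloud (regional data center)": total_latency_ms(network_rtt_ms=60.0, inference_ms=8.0),
    "edge (on-premises node)":      total_latency_ms(network_rtt_ms=2.0, inference_ms=8.0),
}

for name, latency in scenarios.items():
    verdict = "fits" if latency <= FRAME_BUDGET_MS else "misses"
    print(f"{name}: {latency:.0f} ms -> {verdict} the {FRAME_BUDGET_MS:.0f} ms frame budget")
```

With these assumed numbers, the cloud path misses the per-frame budget before the model even starts useful work, while the edge path leaves headroom, which is exactly the gap edge inference is meant to close.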

According to McKinsey research, by 2030, 60% to 70% of AI workloads will require real-time inference, creating urgent demand for low-latency connectivity, compute, and security at the edge. This shift represents a fundamental change in how media infrastructure must be architected to support AI-enabled workflows.

Network Intelligence

Network optimization determines whether AI-powered media workflows succeed or fail. When AI processing creates unexpected traffic patterns during live broadcasts or intensive rendering tasks, intelligent network management ensures critical workflows maintain priority while automatically adjusting less essential data flows.
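
As an illustration of what that prioritization can look like, the toy allocator below serves the most critical flows first and throttles whatever remains. The flow names, priorities, and bandwidth figures are hypothetical; real deployments would express this through QoS policies or software-defined networking controllers rather than application code.

```python
# Toy bandwidth allocator: critical media flows are served first and
# lower-priority flows share whatever capacity remains.
# Flow names, priorities, and figures are hypothetical.

LINK_CAPACITY_MBPS = 10_000

# (flow name, priority, requested Mbps); a lower priority number means more critical.
flows = [
    ("live-broadcast-contribution", 0, 6_000),
    ("ai-inference-traffic",        1, 3_000),
    ("render-farm-sync",            2, 4_000),
    ("archive-backup",              3, 2_000),
]

def allocate(flows, capacity_mbps):
    """Grant bandwidth in priority order until the link is saturated."""
    allocations = {}
    remaining = capacity_mbps
    for name, _priority, requested in sorted(flows, key=lambda f: f[1]):
        granted = min(requested, remaining)
        allocations[name] = granted
        remaining -= granted
    return allocations

for name, granted in allocate(flows, LINK_CAPACITY_MBPS).items():
    print(f"{name}: {granted} Mbps")
```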

The integration of AI into network management itself creates powerful improvements. Machine learning algorithms continuously monitor performance, predict bottlenecks before they occur, and automatically optimize routing decisions. However, integrating AI capabilities with existing production workflows presents significant challenges, a topic highlighted at the 2025 NAB Show. As noted in industry coverage, media professionals consistently express demand for comprehensive solutions that merge AI capabilities with existing workflows, rather than dealing with isolated systems that create operational silos.

Successful implementations start with focused applications that address specific workflow bottlenecks, then gradually expand to more comprehensive AI integration as organizations build confidence and expertise with the technology.

Moving Forward

The transformation of media through AI depends not just on algorithmic advances or creative applications, but on the infrastructure foundation that enables scalable, reliable AI operations. Media companies should fundamentally rethink their infrastructure approach to support AI-enabled workflows.

Success in this transformation requires understanding that AI infrastructure isn't simply an upgrade to existing systems; it's a complete reimagining of how media technology platforms are designed, deployed, and operated. Organizations that recognize this reality and invest accordingly will be positioned to lead in the AI-powered future of media.

Find out more about how FPT can reimagine your legacy systems with AI.

 

References:

  1. McKinsey & Company, "The Economic Potential of Generative AI: The Next Productivity Frontier" - https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
  2. Gartner, "Artificial Intelligence" - https://www.gartner.com/en/topics/artificial-intelligence
  3. Andreessen Horowitz, "Welcome to LLMflation - LLM Inference Cost is Going Down Fast" - https://a16z.com/llmflation-llm-inference-cost/
  4. Forrester, "Predictions 2025" - https://www.forrester.com/predictions/
  5. McKinsey & Company, "QuantumBlack AI Insights" - https://www.mckinsey.com/capabilities/quantumblack/our-insights
  6. Cheqroom - https://www.cheqroom.com/blog/2025-nab-show-highlights/
  7. NewscastStudio - https://www.newscaststudio.com/2025/04/14/nab-show-ai-media-production-practical-applications/
Author: FPT Software