Measurement

Broadcasters Race to Fix CTV Measurement as Programmatic's Old Problems Resurface

By SOS News Desk | Nov 21, 2025

CTV works when it's easy to buy and easy to measure. If the industry doesn't anchor AI and creative testing in real transparency, we risk repeating the worst habits of early programmatic.

As brands siphon billions in ad spend from low-attention performance channels into the high-attention world of CTV, they're running straight into an ecosystem that, while powerful, is painfully messy. The technology promises a return to big-screen brand building, but CTV still carries the same transparency and quality headaches that once plagued programmatic. The opportunity is enormous, but none of it lands until buying and measuring CTV becomes a whole lot simpler.

Daniel Best, SVP at AI-powered creative effectiveness platform DAIVID, has spent his career building and scaling companies in the adtech world. His perspective on CTV comes from years of watching the industry repeat old patterns and chase new promises. In his view, the real breakthrough depends on bringing clarity back into how the channel is bought, tested, and evaluated.

"CTV works when it's easy to buy and easy to measure. If the industry doesn't anchor AI and creative testing in real transparency, we risk repeating the worst habits of early programmatic," says Best. The "easy-to-buy" mandate is particularly important for broadcasters. For years, some legacy media companies made the buying process intentionally difficult in order to protect their linear businesses, Best explains. In his view, that era is now ending as a new strategy—informed by past industry challenges—forces a different approach.

  • Panel pains: The new reality of CTV shatters the outdated measurement model built for linear television. Industry leaders see the slow, expensive, panel-based process as fundamentally unfit for an era of high-volume, iterative CTV campaigns. "For a long time, creative testing has been based on panel data. That is a model built for linear TV and its handful of hero assets," Best explains. "But in modern content manufacturing, brands now have a high volume of assets, and trying to use one or two to predict how the rest should work is a model that is simply not fit for purpose."

  • Proof in the performance: The situation is driving a move toward real-world, outcomes-based measurement. That pivot is made harder by a persistent signal gap that complicates measurement at scale, but a new focus on accountability appears to be taking hold. "When you test creative at scale, you need the right framework in place. The next step is to connect it to real-world performance, so you can see if there’s a strong correlation between what you predicted would happen and what actually happened with those two datasets," Best says. A minimal sketch of that validation step follows below.
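
Best doesn't name any tooling for that prediction-versus-outcome check, but in practice it reduces to correlating two paired series. The Python sketch below shows the minimal version of the idea; the per-asset scores and outcome figures are entirely hypothetical, invented for illustration.

```python
# Illustrative sketch only: checking whether pre-flight creative predictions
# track real-world campaign outcomes. All numbers here are hypothetical.
from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical paired datasets: a predicted effectiveness score per asset,
# and the outcome actually measured once the campaign ran.
predicted = [0.62, 0.71, 0.48, 0.80, 0.55, 0.67]        # model score per asset
observed = [0.011, 0.014, 0.007, 0.016, 0.009, 0.013]   # e.g. conversion rate

r = correlation(predicted, observed)
print(f"Pearson r between prediction and outcome: {r:.2f}")
# A strong positive r suggests the predictions track real-world performance;
# a weak r means the testing framework needs recalibrating.
```
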

The creative process itself is becoming more scientific. The technology now exists to decode emotional resonance at massive scale, but Best injects a dose of realism about its current limitations.

  • Maybe next year, Coca-Cola: "The technology now exists to match the emotions in the creative to the emotions in the content, because the outcome is significantly higher when that alignment is achieved. That capability was science fiction not long ago," Best says. But he also warns against assuming the machines are flawless. "But tests of AI-produced creative show it's still not perfect. We've moved past the early six-fingered uncanny valley, yet if you look at the latest Coke ad, it’s still a bit weird."
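
DAIVID's actual emotion-matching method isn't described in the piece, but the underlying idea of scoring how closely an ad's emotional profile aligns with the content around it can be illustrated with a toy example. In the sketch below, the emotion labels, the score vectors, and the cosine-similarity measure are all assumptions made for illustration, not the company's model.

```python
# Toy illustration of "matching the emotions in the creative to the emotions
# in the content" via cosine similarity. All labels and scores are invented.
import math

EMOTIONS = ["joy", "warmth", "excitement", "sadness", "fear"]

def cosine(a, b):
    """Cosine similarity between two equal-length emotion score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

ad_profile = [0.7, 0.5, 0.6, 0.1, 0.0]       # hypothetical scores per emotion
content_profile = [0.6, 0.6, 0.4, 0.2, 0.1]  # for the surrounding program

alignment = cosine(ad_profile, content_profile)
print(f"Emotion alignment score: {alignment:.2f}")  # closer to 1.0 = better fit
```
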

Generative AI is accelerating this change by dramatically lowering creative costs. Even so, Best cautions against the allure of infinite iteration without a strong creative idea at its core. He notes that the power of AI to create limitless variations also highlights the need for rigorous quality control and better benchmarks.

  • The tip of the iceberg: "Think of it as the 'iceberg of content.' The highest-quality creative is still about human craft. The bottom of the iceberg, the simple performance variations, has now been automated," Best explains. "The real action is in the middle ground of average advertising. It raises the question: why spend half a million dollars on an ad when AI can now deliver something of the same standard for $10,000?"

  • Cannes and cannots: "AI is raising the creative floor, but the risk is that everything starts to look slick and forgettable. It hasn’t raised the ceiling. Are we really seeing Cannes-winning work coming from AI? On average it’s more polished, but not award-winning yet," he continues.

The emerging playbook, defined by simplicity and transparency, could point toward a more effective future. But it faces a significant challenge: ad fraud, already a multibillion-dollar problem in the CTV market. Without a commitment to quality control, Best believes the industry could repeat the very mistakes that broke trust in the early days of programmatic.

"We cannot make the same mistakes in CTV that we made in early programmatic. That was an era defined by the promise of turning copper into gold, where special ad networks claimed they could take cheap inventory and magically turn it into something amazing. Of course, that wasn't true," Best concludes. "The same goes in CTV. If it looks too good to be true, it probably is." The opportunity in CTV is real, but its success is not guaranteed.

Credit: Outlever

Key Takeaways

  • Brands shifting spend into CTV run into a complex ecosystem with weak transparency and outdated measurement that slows real growth.

  • Daniel Best, SVP at DAIVID, argues that broadcasters and buyers need clearer buying paths, better testing, and more accountable measurement to avoid repeating programmatic’s old problems.

  • He points to AI-driven creative and large-scale testing as a path forward, as long as the industry pairs automation with strong quality control and real-world performance proof.