COW Forums : Apple Final Cut Pro X Debates


Joe Marler
Re: Apple to ditch Intel?
on Apr 17, 2018 at 11:45:30 am

[Michael Gissing] "In that time frame I've moved from SD to 4k and have real time playback whilst grading. Standing still in terms of processor and going backwards in terms of GPU is unacceptable to me over the past ten years."

My experience is that even the highest-end desktops don't always have adequate performance -- because of the now-ubiquitous use of 4k acquisition, the higher shooting ratios now common, and the incredible computational demands of decoding 4k H264.

When I used Premiere CS5 in 2010 on standard-def DV, it was fast on a desktop or a top Windows laptop of that era. It was great to just drop in camera files without transcoding and edit with high performance. Shooting ratios were lower then, so that helped.

Today both Premiere CC and FCPX can struggle on 4k H264 -- on any platform. Adobe's "Mercury" playback engine is no longer like quicksilver when editing that format, especially for multicam.

The worst combination is 4k H264 on Premiere on a Mac because Adobe doesn't even use Quick Sync on Mac. But with *either* FCPX or Premiere CC on the latest hardware, we are often knocked back a generation to the previous workflow of "transcode before edit" -- not to a mezzanine codec but to proxy.
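
As an aside, you can see what the OS itself claims about hardware decode from the software side. VideoToolbox is the macOS layer any Mac NLE ultimately goes through, and it will report whether a hardware decode session is even possible for a given codec. A rough sketch in Swift -- this reports availability only, and says nothing about Quick Sync vs. UVD speed:

```swift
import CoreMedia
import VideoToolbox

let codecs: [(name: String, type: CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("HEVC",  kCMVideoCodecType_HEVC),
]

for codec in codecs {
    // VTIsHardwareDecodeSupported reports whether the OS can create a
    // hardware decode session -- Quick Sync on Intel iGPU Macs, AMD's
    // UVD on the iMac Pro. It says nothing about relative throughput.
    let hw = VTIsHardwareDecodeSupported(codec.type)
    print("\(codec.name): hardware decode \(hw ? "available" : "unavailable")")
}
```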

Traditional GPUs cannot help this because decoding long GOP formats is inherently sequential: most frames are stored only as differences from earlier frames, so they must be reconstructed in order. GPUs can muster thousands of lightweight threads which can attack certain parallelizable tasks, but many tasks cannot be (or have not been) parallelized. Highly compute-intensive plugins such as Neat Video, Digital Anarchy Flicker Free, and Imagenomic Portraiture only partially leverage the GPU or not at all. E.g., Neat Video is slower if configured to use the iMac Pro Vega 64 GPU than if using all CPU cores and no GPU. The problem isn't the Vega GPU and it can't be fixed by a faster GPU or an eGPU.
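
To make the "inherently sequential" point concrete, here's a toy model (my own illustration, not FCPX internals) of what random access into a long GOP stream costs. Landing mid-GOP forces a serial decode chain back to the last keyframe:

```swift
// Toy model of long GOP random access. Real H264 adds B-frame
// reordering and multiple reference frames; this sketch ignores that.
enum FrameType { case keyframe, predicted }

// A typical long GOP stream: one self-contained I-frame, then a run of
// frames stored only as differences from what came before.
func gopStream(gopLength: Int, totalFrames: Int) -> [FrameType] {
    (0..<totalFrames).map { $0 % gopLength == 0 ? .keyframe : .predicted }
}

// To display `target`, the decoder must walk back to the nearest
// keyframe and decode every frame from there forward -- serially.
func decodeChain(to target: Int, in stream: [FrameType]) -> ClosedRange<Int> {
    var start = target
    while stream[start] != .keyframe { start -= 1 }
    return start...target
}

let stream = gopStream(gopLength: 30, totalFrames: 120)
let chain = decodeChain(to: 59, in: stream)
// With 30-frame GOPs, showing frame 59 means decoding frames 30-59:
// thirty full 4k decodes for one picture, and the chain can't be split
// across GPU threads because each step needs the previous one's output.
print("frames decoded to show frame 59: \(chain.count)")
```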

My documentary team can produce 1 terabyte of 4k H264 per day. I'd like to screen dailies without building proxies, but it's just too slow, especially on a laptop. I've only tested one machine that can scrub through single-cam 4k H264 with moderate smoothness using FCPX, and that's the top-spec 2017 iMac. It is way faster than the 12-core D700 Mac Pro and faster at decoding 4k H264 than the 10-core Vega 64 iMac Pro. So we'd have to take a 2017 iMac 27 on site to get adequate editing performance to screen dailies without proxies.
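
When we do give in and build proxies, the transcode itself can also be scripted outside the NLE. FCPX generates ProRes 422 Proxy internally; the sketch below just uses AVFoundation with a stock downscale preset as a stand-in, and the file paths are placeholders -- a real dailies tool would walk the card, check errors, and queue jobs:

```swift
import AVFoundation
import Foundation

// Minimal proxy transcode outside the NLE. Paths are placeholders.
let input  = URL(fileURLWithPath: "/Volumes/SHOOT/A001C004.mp4")   // 4k H264 camera file
let output = URL(fileURLWithPath: "/Volumes/EDIT/Proxies/A001C004.mov")

let asset = AVAsset(url: input)
guard let export = AVAssetExportSession(asset: asset,
                                        presetName: AVAssetExportPreset1280x720) else {
    fatalError("no export session available for this asset/preset")
}
export.outputURL = output
export.outputFileType = .mov

// Block the command-line tool until the async export finishes.
let done = DispatchSemaphore(value: 0)
export.exportAsynchronously {
    switch export.status {
    case .completed: print("proxy written: \(output.path)")
    default:         print("export failed: \(String(describing: export.error))")
    }
    done.signal()
}
done.wait()
```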

For those doing scripted narratives or other productions with lower shooting ratios which can use ProRes or similar acquisition, even a laptop is pretty fast -- at least with FCPX. Lower-compression intra-frame codecs are more an I/O problem than a CPU problem. A top-spec MacBook Pro using SSD or Thunderbolt RAID storage can handle those codecs pretty well.
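
It's easy to put rough numbers on that. Taking ballpark bitrates (my assumptions, not official specs) of ~550 Mbps for UHD ProRes 422 and ~100 Mbps for 4k camera H264:

```swift
// Back-of-envelope I/O math. The bitrates below are rough assumptions
// (ballpark for UHD material), not official codec specs.
let streams: [(name: String, mbps: Double)] = [
    ("4k camera H264", 100),   // cheap to read, expensive to decode
    ("UHD ProRes 422", 550),   // expensive to read, cheap to decode
]

let storageMBps = 900.0        // assumed sustained read of an SSD or TB RAID

for s in streams {
    let mbPerSec = s.mbps / 8.0              // megabits/s -> megabytes/s
    let count = Int(storageMBps / mbPerSec)
    print("\(s.name): ~\(mbPerSec) MB/s per stream, ~\(count) streams from storage")
}
// ProRes at ~69 MB/s is a storage problem a fast SSD solves outright;
// H264 at ~12.5 MB/s barely touches the bus -- its cost is all in decode.
```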

What I'd like is a desktop machine that regains the same timeline performance on today's 4k H264 that we had in 2010 on Premiere using standard def DV. That machine does not yet exist, at least from Apple -- even using FCPX.

So far Intel has remained absolutely intransigent about adding Quick Sync to any Xeon except the 4-core version. On the iMac Pro this forced Apple to write to AMD's UVD/VCE transcoding hardware, which is better than nothing but thus far slower than Quick Sync at handling 4k H264. There are lots of factors at play here, but if Apple controlled their own CPU design for desktops they wouldn't be restricted by Intel's decisions.

CPU design has now reached a point where major performance gains are difficult -- as measured by traditional metrics such as clock speed and Instructions Per Clock. It's unclear whether an A-series architecture would greatly improve this, as the problems seem fundamental. However, there are still major gains possible using "heterogeneous" processing -- IOW specialized subsystems like Quick Sync. There are probably other software functions amenable to silicon-based acceleration, provided the chip vendor were cooperative and the software harnessed it. Using an A-series CPU in a Mac would allow Apple to control both hardware and software.

The initial rumors of A-series CPUs in Macs focus on lower-end laptops, and those are a natural fit for some future iOS/macOS integration which a common instruction set might facilitate. The improved power consumption would help battery life. However, this might also be a testing ground for eventually using higher-end A-series CPUs in desktop machines.

