
Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas

COW Forums : VEGAS Pro

Kell Hymer
Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 10:30:49 am
Last Edited By Kell Hymer on Dec 6, 2014 at 10:38:38 am

Hello All,

These forums and Sony's own are saturated with talk of GPU support, OpenCL, and CUDA. My unyielding OCD to squeeze as much performance as possible out of my PC has had me reading up on the topic for weeks. I am no expert on any of this, but I get the general idea... I think. In posting this, I am hoping to validate that I understand it correctly so I can make the best choice for my hardware. I also hope this will make a decent centralized resource for other Vegas users. Sony Vegas' GPU utilization capabilities are determined by many factors. Here is what I have gathered:

1) The CUDA language is proprietary to NVIDIA GPUs. CUDA works across NVIDIA's hardware generations, but software must be coded specifically to utilize it. I assume NVIDIA controls or licenses the use of CUDA. Either way, NVIDIA has pushed, and still pushes, CUDA adoption among software and game developers, and by doing so has secured demand for its GPUs. CUDA is arguably a better and more efficient language than OpenCL.

2) AMD's GPUs utilize OpenCL, which is also hardware agnostic but not proprietary; it is managed by the non-profit Khronos Group. OpenCL seems to be more widely used across many industries and applications, but it is not updated as frequently as CUDA, so it usually lags current-generation CUDA revisions by some years in certain features. Nonetheless, it is still highly effective.

3) Vegas uses OpenCL for real time previewing.

4) Sony Vegas does not really utilize CUDA at all, but some of the rendering codecs used by Vegas do.

5) MainConcept limited their AVC codec to work with only certain GPU chips. The codec has not been updated for some time, so newer cards may not be supported when rendering to MainConcept AVC (MainConcept MPEG-2 as well?). What I do not know is whether this limitation applies to both CUDA and OpenCL or just one of them. Anyone know? I have the option to render with either one "if available".

6) In an effort to beat the competition and corner the market, NVIDIA largely stopped supporting OpenCL after their Fermi cards. Technically, pre- and post-Fermi NVIDIA GPUs should still boost rendering performance for CUDA-enabled codecs. However, this is not the case with MainConcept, because their codec intentionally supports only certain (undocumented) GPU chips.

Conclusion:

So which GPU is best for Sony Vegas? There are multiple answers depending on your needs. If improving rendering speed is your primary concern and you often work with various codecs, my answer would be "good luck; buy an 8-core Intel processor". If you primarily stick to one codec, find the best GPU and driver for that codec within Vegas. If rendering speed is of no concern and you need a smooth real-time preview to make your editing more efficient, a modern AMD card seems to be the clear choice.

We could all cross our fingers and wait for the day when Sony enables CUDA support within Vegas to improve frame rates during real-time unrendered playback. GPU support has been in Vegas for a number of years now, and no subsequent version has added it, so I do not think it is coming anytime soon. In all reality, it seems like Sony did us all a favor by sticking with OpenCL, the more widely compatible of the two languages. After all, it is not Sony's fault that NVIDIA and MainConcept decided to play consumer-unfriendly market games. Software developers often write their code to support both OpenCL and CUDA, but maintaining scalability in both languages is very challenging, and it becomes exponentially more difficult when writing computationally heavy software... such as Sony Vegas. I am guessing we would see significantly more instability if Vegas were to attempt this.

-------------------------------------------------------

Subsequent questions:

1) My source material is often H.264 GoPro video, and I render to the Sony AVC Blu-ray templates. However, I am primarily concerned with improving the preview frame rate while I edit and will likely buy an AMD card soon. To get improved frame rates with OpenCL/GPU support, should I match the project settings to the GoPro source video, or transcode it to another format before editing so that I can make full use of the OpenCL-enabled GPU? In other words, do the project settings and source video, as long as they are matched, affect the frame rate and OpenCL's ability to improve it?

2) Which AMD cards are suggested and what should I look for in one? I have almost zero knowledge of AMD's GPU lineup other than reading positive reports of the 79XX series and the R9-290. How do FirePro cards compare to their gaming counterparts in Sony Vegas? Are they worth the premium charged for them? I have found that NVIDIA's Quadros don't seem to be any better than their comparable gaming cards. If anything, they can be slower in my experience.

Current System: Intel i7 3930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 2:14:12 pm

If you want to experiment, replace your Quadro 2000 with an AMD card (an R280 for example, if you have the required power connectors) and use it to drive the display. That way you will have AMD's OpenCL for timeline acceleration.
You will still be able to use the MainConcept encoder with CUDA (from the remaining Quadro 4000).

Personally I am using a GTX 480 (Fermi) modded into a Quadro 6000. Not the same as the real thing - more cores but less memory - but very stable and performant.
Using GPU-Z to verify GPU utilization, I realized that the video card is NOT the bottleneck - it doesn't reach 100% at any time.
Previously I was using a Quadro 2000, and that one was maxed out during rendering, so now I know that adding an even faster card won't do anything for me.

My CPU is not reaching 100% either, so at this point I blame the HDDs for being the bottleneck. I just purchased a RAID card and will try to populate it with HDDs to see if I get any improvement.



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 7:22:08 pm

Personally, I have found it very difficult to determine with any certainty why my GPU and CPU do not seem to be utilized more, for both rendering and previewing. Oftentimes my preview window frame rate is low while CPU and/or GPU utilization are also low. From what I gather, CPU utilization is not necessarily the best measure. If you think about it, the various codecs and effects may not be able to fully exploit parallel processing (hyperthreading and multiple cores). Other determinants are motherboard bus speeds, RAM speeds, GPU-to-onboard-memory bandwidth limitations, and PCI-E lane restrictions and bandwidth.

Sometimes a modern CPU can process the data faster than the total time it takes to offload it to the GPU, have the GPU compute and shuffle data through its on-board memory, and then package the result up and send it back to the CPU. Removing known bottlenecks is the best way to speed this process up. I recently upgraded to an i7-4930K and have dedicated OS, read, and write solid-state drives to work from. My working read drive is a PCI-E SSD with roughly double the read and write speeds of the best Samsung SSDs on the market, and I use SATA 3 ports for the remaining drives. I have plenty of RAM on a P9X79 Deluxe board that is still top of the line for consumer boards. Despite all this, I still have bottlenecks :). Simply put, highly complex multi-layered video with effects will tax any system. The one remaining upgrade I can see that might help real-time high-frame-rate previews is a better GPU that utilizes OpenCL.
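The offload-cost point above can be made concrete with some back-of-envelope arithmetic. A sketch; every number here is an illustrative assumption (a hypothetical 8 GB/s effective bus rate and made-up per-frame compute times), not a measurement of Vegas or any real card:

```python
# Back-of-envelope: is offloading one frame's work to the GPU worth it?
FRAME_BYTES = 1920 * 1080 * 4      # one 1080p RGBA frame, about 8.3 MB
PCIE_BYTES_PER_S = 8e9             # assumed effective PCIe bandwidth, bytes/s

def offload_time(frame_bytes, gpu_compute_s, bus=PCIE_BYTES_PER_S):
    """Time to copy a frame to the GPU, process it, and copy it back."""
    transfer_s = 2 * frame_bytes / bus   # host->GPU plus GPU->host
    return transfer_s + gpu_compute_s

def offload_wins(cpu_compute_s, gpu_compute_s, frame_bytes=FRAME_BYTES):
    """True if the GPU round trip beats just doing the work on the CPU."""
    return offload_time(frame_bytes, gpu_compute_s) < cpu_compute_s

# A light effect: CPU 2 ms vs GPU 0.5 ms -- the ~2 ms of transfers dominate.
print(offload_wins(0.002, 0.0005))   # False
# A heavy effect: CPU 50 ms vs GPU 5 ms -- offloading clearly wins.
print(offload_wins(0.050, 0.005))    # True
```

For light per-frame work the bus round trip alone can exceed the CPU's processing time, which is one plausible reason for the low GPU utilization people report.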

Current System: Intel i7 3930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)




John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 8:15:24 pm

[Kell Hymer] "What I do not know is if this limitation applies to both CUDA and OpenCL or just one of the languages. Anyone know? I have the option to render to either one "if available"."
I believe it applies to both. We have had people report that CUDA doesn't work on anything above the Fermi series and that OpenCL doesn't work for AMD above the 7000 series. I have an AMD Radeon HD 5870 so mine works.
[Kell Hymer] "1) My source material is often H.264 GoPro video and I render to the Sony AVC Blu-Ray templates. However, I am primarily concerned with improving the preview frame rate while I edit and will likely buy an AMD card soon. To get improved frame rates with OpenCL/GPU support, Should I match the project setting to the GoPro source video or transcode it to another format before editing so that I can make full use of the OpenCL enabled GPU? In otherwords, do the project settings and source video, as long as matched, impact the frame rate and OpenCL's ability to improve it?"
Yes, the project settings definitely affect timeline playback if the source doesn't match, because Vegas Pro will try to conform the source "on the fly", which can lead to stuttering playback. So matching the project to the media is recommended for improved playback. As for transcoding, that is one solution; proxies are another. Vegas Pro 13.0 has a proxy workflow, but I've never tried it because I don't need it. You might want to look into it.
[Kell Hymer] "2) Which AMD cards are suggested and what should I look for in one? I have almost zero knowledge of AMD's GPU lineup other than reading positive reports of the 79XX series and the R9-290. How do FirePro cards compare to their gaming counterparts in Sony Vegas? Are they worth the premium charged for them? I have found that NVIDIA's Quadros don't seem to be any better than their comparable gaming cards. If anything, they can be slower in my experience."
I would go for the Radeon R9 290X. If you can't afford that, get the R9 280. They won't accelerate MainConcept AVC, but they will accelerate everything else and provide good value for the money over a FirePro, whose "workstation" features Vegas Pro won't really take advantage of.

~jr

http://www.johnrofrano.com
http://www.vasst.com




Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 9:48:55 pm

Thanks John,

Much of the information I gathered for this thread came from you and a handful of others.

[John Rofrano] "Yes the project settings definitely affect timeline playback if the source doesn't match because Vegas Pro will try and conform the source "on-the-fly" which could lead to stuttering playback. So matching the project to the media is recommended for improved playback. As for transcoding, that is one solution. Proxies are another. Vegas Pro 13.0 has a proxy workflow but I've never tried it because I don't need it You might want to look into it."


Makes sense, but does the source material itself make a difference even when the project settings match it? I just dropped some 1080p 29.97 fps GoPro footage onto a test timeline and matched the settings. When I click on the media properties, it tells me the format is AVC. How do I know whether this is MainConcept AVC or Sony's version of AVC, which has better OpenCL support? When rendering to MainConcept AVC, newer GPUs are not supported, as mentioned previously. However, does this change when, instead of rendering to it, one is editing MainConcept AVC material and needs a decent preview fps? I know Vegas itself will utilize OpenCL here, but I am not sure whether it will be hampered by MainConcept AVC source material. If so, I could render to a different format before editing, but transcoding the material multiple times might cause quality loss. Proxy editing may solve this dilemma and is on my list of subjects to read up on.

[John Rofrano] "I would go for the Radeon R9 290x. If you can't afford that get the R9 280. They won't accelerate MainConcept AVC but they will accelerate everything else and provide good value for the money over a FirePro which Vegas Pro won't really take advantage of the "workstation' features."


Considering the price difference, the R9 290X is a much more appealing option :). What do you mean that Vegas will not take advantage of "workstation" features? Based on my experience with NVIDIA Quadro and GTX cards, it seems to me the workstation drivers are geared more toward CAD and 3D modeling applications than toward NLE software. Additionally, gaming cards must be able to render video-editing previews on the fly in the same manner they do for video games. Are the FirePro cards similar? It makes me wonder if video-editing workloads are more similar to video games than to the "workstation" workloads associated with CAD and 3D modeling.

Current System: Intel i7 3930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 11:07:38 pm

[Kell Hymer] "Makes sense but does the source material itself make a difference even when the project setting match it? "
Yes, absolutely! The codec that is used has a big influence over how fast it can be decoded. HDV compression is much easier to decode than AVCHD compression so HDV will playback smoother than AVCHD.
[Kell Hymer] " I just dropped some 1080p 29.97fps GoPro footage onto a test timeline and then matched the settings. When I click on the media properties, it tells me the format is AVC. How do I know if this is MainConcept AVC or Sony's version of AVC which has better OpenCL support?"
It's neither. It's GoPro's implementation of AVC/H.264. That's the important part. Not all implementations are the same and Vegas Pro plays back some smoother than others.
[Kell Hymer] "I know Vegas itself will utilize OpenCL here but I am not sure if it will be hampered by using MainConcept AVC source material. If so, I could render to a different format before editing but transcoding the material multiple times might result in a quality loss. Proxy editing may solve this dilemma and is on my list of subjects to read up on."
If I were you, I would use GoPro Studio to convert the GoPro footage to CineForm Digital Intermediary format. There will be no quality loss and playback will be very smooth.
[Kell Hymer] "What do you mean that Vegas will not take advantage of "workstation" features? Based on my experience with NVIDIA Quadro and GTX cards, it seems to me that the drivers are more geared for CAD and 3d modeling applications more than for NLE software. "
That's exactly what I mean. Workstation graphics cards have features designed to aid in CAD and 3D modeling that Vegas Pro doesn't use. So you are paying for features that you don't need when you buy these cards. If you also use After Effects or other 3D applications that's a different story. But if you are just using Vegas Pro, you won't be taking advantage of these features.
[Kell Hymer] "Additionally, gaming cards must be able to render video editing previews on the fly in the same manner in which they do for video games. Are the FirePro cards similar?"
FirePros are AMD's workstation line, just like Quadros are NVIDIA's workstation line. Same thing... different company.
[Kell Hymer] " It makes me wonder if video editing processes are more similar to video games than the "workstation" processes associated with CAD and 3d modeling."
People don't buy Quadros and FirePros because they perform better when video editing. They buy them because the drivers are more stable and you get better support. I have to admit that people have a lot of trouble finding the right consumer card for stability, because the drivers are tweaked for games and not for video editing.

The bottom line is that Vegas Pro makes use of OpenCL and AMD has the best implementation of OpenCL in their drivers which is why many recommend AMD cards for Vegas Pro editors.

~jr

http://www.johnrofrano.com
http://www.vasst.com





Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 6, 2014 at 11:23:07 pm

About workstation features - they might be useful in an NLE too; it just depends on the NLE.
For example, when I modded my GTX 480 into a Quadro 6000, it acquired a bidirectional async engine between video memory and system memory, whereas before it was one-way only (from system memory to video memory). That helped the encoding speed in MainConcept.
There are other limitations that don't necessarily apply to Vegas (the NVENC encoder has more capabilities on newer Quadro cards than on their gaming equivalents, such as more than 2 streams and interlaced support).

However, getting a "faster" video card doesn't mean that Vegas will be "accelerated". In my experience, upgrading from the Quadro 2000 to the Quadro 6000 didn't speed up my original quad-core system; GPU utilization just dropped from 80-90% to 20%, because the CPU was already pegged at 100%.
I then upgraded to a six-core CPU and it got faster: GPU utilization rose to 46-50% and the CPU is no longer hitting 100%, so the limitation just moved somewhere else.



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 7, 2014 at 12:09:15 am

Thank you John and Sorin. This has been a very helpful thread!!!

Current System: Intel i7 3930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 15, 2014 at 6:49:07 am

[John Rofrano] "If I were you, I would use GoPro Studio to convert the GoPro footage to CineForm Digital Intermediary format. There will be no quality loss and playback will be very smooth"

As an amateur editor, I decided to look this up along with "transcoding" and "intermediate codecs". Wow! My mind is blown! So if I understand correctly, the GoPro records in a highly compressed H.264 format. The compression must be intense, as QuickTime can't even play back the 1080p 60 fps files without dropping frames. This confirms the online reports of various software not being able to play back GoPro footage. In fact, GoPro's website highly recommends using GoPro Studio to convert to CineForm before using the content in other NLEs.

So, converting to CineForm will decompress the video into a much larger, but more manageable, file as far as editing is concerned. I read that it effectively reconstructs the full frames that the GoPro camera's GOP compression had reduced to references, so Vegas does not have to decode the surrounding GOP on top of applying effects and other edits. This is why the transcode is visually lossless. Wow! This is awesome stuff! The tech side of video editing is just as intriguing as the creative side of the work.

Current System: Intel i7 4930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)




John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 15, 2014 at 1:58:57 pm

[Kell Hymer] "So, converting to CineForm will decompress the video into a much larger, but more manageable file as far as editing is concerned. I read that it literally copies the reference GOP and places it back into the video where it was once removed for compression by the GoPro camera. This way Vegas does not have to reference the original GOP on top of adding effects and other edits. This is why transcoding is lossless. Wow! This is awesome stuff! The tech side of video editing is just as intriguing as the creative side of the work."
Yup, you understand correctly. CineForm uses wavelet compression and you can re-render those files again and again and again and see no visual loss in quality. That's what a good Digital Intermediary should do.

~jr

http://www.johnrofrano.com
http://www.vasst.com




Rob James
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Mar 29, 2015 at 3:31:20 am

John, I know this is an old post, but my search results took me here. I am finally building a new system, and I would love your input on one of my components. Is the R9 290X still the way to go? That's the one I've ordered, but it's not too late to change my mind. If you have any new suggestions, I'm all ears.

Rob,
http://www.robjames.net



John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Mar 29, 2015 at 11:31:20 am

[Rob James] " Is the R9 290X still the way to go?"
Yes, that seems to be the card everyone says gives the best performance with Vegas Pro. I would buy one myself if they made a Mac version, but they don't make cards for my old 2010 Mac Pro anymore. From what others are saying, that's still the card to get right now.

~jr

http://www.johnrofrano.com
http://www.vasst.com





mark thompson
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 7, 2014 at 1:11:54 am

Hi,
just adding my 2 cents. I'm not saying anything is wrong, but I would shift the emphasis slightly.
CUDA is an architecture. You program it in C or CUDA C, which is an extension of C along with libraries for C. That enables you to program the CUDA-based chips from NVIDIA.
OpenCL is a standard that implementers can use to provide a common programming interface across multiple GPU architectures.
In theory, OpenCL should be a little less efficient than programming a chip's native interface. There is no fundamental reason you couldn't provide an OpenCL interface on top of CUDA.

In keeping with the horse theme :-) there is an expression: "you can lead a horse to water, but you can't make him drink!" In this case, I think that means that whatever architecture/interface is used, it all boils down to how efficiently it was programmed. So benchmarks are the best way to determine how good an implementation is.
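The "benchmarks are the best way" point can be sketched as a tiny timing harness. The two functions compared here are generic stand-ins (a hypothetical sum-of-squares job), not anything from Vegas or a codec; the pattern is what matters:

```python
import time

def benchmark(fn, *args, repeats=5):
    """Best wall-clock time in seconds over several runs of fn(*args)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Two stand-in implementations of the same job: sum of squares below n.
def impl_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def impl_builtin(n):
    return sum(i * i for i in range(n))

# Identical results; only measurement tells you which is faster on your box.
assert impl_loop(10_000) == impl_builtin(10_000)
print(f"loop: {benchmark(impl_loop, 10_000):.6f}s  "
      f"builtin: {benchmark(impl_builtin, 10_000):.6f}s")
```

The same discipline applies to GPU paths: two codecs (or drivers) exposing the same interface can differ hugely in measured throughput.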
mark



Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 7, 2014 at 2:02:43 am
Last Edited By Sorin Nicu on Dec 7, 2014 at 3:38:34 am

nVidia supports OpenCL too... it works for timeline acceleration.
It's just harder to test in Vegas.
It's only the MainConcept implementation of OpenCL that is limited to ATI - because they chose CUDA for nVidia.

PS: A generic video composition test for OpenCL performance, is this:

http://compubench.com/result.jsp?benchmark=compu20&data-source=1&version=al...

My Quadro 6000 (modded from a GTX 480) gets 55 fps in "video composition". I don't know how relevant that is for Vegas - maybe not at all...



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 15, 2014 at 12:24:39 am
Last Edited By Kell Hymer on Dec 15, 2014 at 12:36:41 am

Anyone familiar with OpenCL 2.0? Most gaming GPUs seem to support only up to OpenCL 1.2, not OpenCL 2.0. I assume Vegas uses OpenCL 1.2?

In another thread I asked for advice on GPUs. We discussed that the AMD R9 series gaming cards are my best choice and that the FirePro and Quadro cards are designed more for 3D applications than for NLE work. I do notice, though, that the FirePro supports OpenCL 2.0 while the R9 gaming cards do not. I am almost certain that Vegas does not use OpenCL 2.0, but when/if they do, it looks like it would be a huge improvement. The following link explains that OpenCL 2.0 allows the CPU and GPU to reference the same memory, improving performance:

http://www.cnet.com/news/opencl-2-0-brings-new-graphics-chip-power-to-software/


AMD has beta OpenCL 2.0 drivers for their gaming cards, but they have limitations and only work with Windows 8:

http://support.amd.com/en-us/kb-articles/Pages/OpenCL2-Driver.aspx


I am about ready to buy an AMD R9 290X, but part of me wonders how long it will be until Vegas supports OpenCL 2.0 and whether I should wait for a compatible card. That may still be a long way down the road, but it does have me thinking.

Current System: Intel i7 4930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)




John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 15, 2014 at 1:52:38 pm

This is the only information that Sony has on GPU hardware from their Release Notes:

NVIDIA
  • Requires a CUDA-enabled GPU and driver 270.xx or later.
  • GeForce GPUs: GeForce GTX 4xx Series or higher (or GeForce GT 2xx Series or higher with driver 297.03 or later).
  • Quadro GPUs: Quadro 600 or higher (or Quadro FX 1700 or higher with driver 297.03 or later).
  • NVIDIA recommends NVIDIA Quadro for professional applications and recommends use of the latest boards based on the Fermi architecture.

AMD/ATI
  • Requires an OpenCL-enabled GPU and Catalyst driver 11.7 or later with a Radeon HD 57xx or higher GPU.
  • If using a FirePro GPU, FirePro unified driver 8.85 or later is required.
  • Radeon HD 7xxx or higher recommended for native 4K editing.

Intel
  • Requires an OpenCL-enabled GPU (such as HD Graphics 4000 or higher).

There is no mention of which OpenCL version Sony requires.

~jr

http://www.johnrofrano.com
http://www.vasst.com




Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Dec 18, 2014 at 7:30:23 am

I emailed Sony and they confirmed that they use OpenCL 1.2 and 2.0. However, they did not provide details about the specifics of the OpenCL support, or whether the shared GPU/CPU memory makes a significant performance difference when using an OpenCL 2.0 GPU.

On a side note, the following article does a good job of explaining the difference between a professional GPU and a gaming GPU. I am posting it here so others have access to it should they have questions.

http://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CC-Professiona...

Current System: Intel i7 4930K | Asus P9X79 Deluxe | Nvidia Quadro 4000 & 2000 | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 8, 2015 at 9:59:39 am

[John Rofrano] "I would go for the Radeon R9 290x"

I recently upgraded my CPU from an i7 3820 to an i7 4930K. In light of the discussions in these forums about the AMD R9-290 series, I also pulled the trigger on the XFX R9-290X 8GB model. 8GB is overkill for Vegas, but I wanted the extra headroom to run 3 monitors from the card and to dive into 3D applications. I wanted to know how much better it would perform when compared to the CPU, my NVIDIA GTX 460, or my NVIDIA Quadro 4000. I ran a series of rendering benchmarks using the Sony Vegas Red Car project and codecs that I frequently use. However, I am more concerned about real-time preview performance. To test this, I ran FPS tests by setting my preview window to maximum settings. I hope this information is helpful for others.

Current System: Intel i7 4930K | Asus P9X79 Deluxe | 32GB RAM | AMD R9-290X w/8GB RAM | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)
Overclocking Note: CPU OC'd to 4.339 GHz during CPU and NVIDIA GPU renders; OC'd to 4.608 GHz for R9-290X renders. A more accurate test would maintain a constant CPU speed, but the 4.339 GHz OC failed after the AMD GPU installation.
GPU Drivers Employed: Quadro Driver 341.05 | GTX Driver 344.75 | Radeon Driver 14.501.1003.0
Software: Sony Vegas Pro Version 12, Build 770


Render Benchmarks

[Screenshot #1: render benchmark results]

[Screenshot #2: render benchmark results]

The R9-290X significantly improved rendering times. I also noted that setting the dynamic RAM preview to 0 significantly lengthens the render time. However, no improvement is gained by increasing the allocated RAM amount once it is above zero (5MB in these tests).


Preview Frame Rate Benchmarks

Note: Sony's Red Car project is split into 7 regions. I noted the minimum and maximum frame rate observed during each region. The values are accurate give or take a frame, because the fps indicator fluctuated rapidly and was difficult to read.

[Screenshot: preview frame rate results]

Note: As I was able to attain fairly high frame rates, I did not test the R9-290X here. To test the new GPU, I modified Sony's Red Car project by adding the effects below (Kelken version below):

[Screenshot: modified project effects]

During the course of the testing I changed the following setting and found it had no impact on frame rates:

[Screenshot: setting tested]

Additionally, I found that the fps increased with subsequent playbacks/previews. It appears the footage gets loaded into the dynamic RAM preview in the process.

Conclusion:

The R9-290X drastically improved performance and is far more stable than the GTX or the Quadro cards. I am very happy with the upgrade!

Current System: Intel i7 4930K OC'd to 4.6 GHz| Asus P9X79 Deluxe | 32GB RAM | AMD R9-290X w/8GB RAM | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)




John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 8, 2015 at 4:01:30 pm

[Kell Hymer] "The R9-290X drastically improved performance and is far more stable than the GTX or the Quadro cards. I am very happy with the upgrade!"
Thanks for providing definitive data to back up what we all suspected. This makes sense, since AMD and Sony were working closely together on the GPU acceleration in Vegas Pro.

~jr

http://www.johnrofrano.com
http://www.vasst.com




Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 9, 2015 at 12:06:05 am
Last Edited By Sorin Nicu on Jan 9, 2015 at 12:11:58 am

My experience has not been that good. I just replaced my nVidia Quadro 6000 (actually modded from a GTX 480) with an ATI HD 7970 (equivalent to an R9 280X).

Encoding times for my test file @ 1080-60p on a PC with a Xeon X5650 CPU and 15 GB of memory:
1:25 min. GPU Quadro 2000 @50% MainConcept using CUDA
1:18 min. GPU Quadro 6000 @34% MainConcept using CUDA
1:16 min. GPU HD 7970 (GHz edition) @39% Sony Encoder
4:18 min. GPU HD 7970 (GHz edition) @0% MainConcept OpenCL (didn't make use of GPU)
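The timings above can be turned into relative speeds with a little arithmetic. A sketch; the labels are just shorthand for the runs listed, and the parser only handles the m:ss format used here:

```python
def to_seconds(mmss):
    """Parse an 'm:ss' duration like '1:25' into seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Encoding times from the list above.
runs = {
    "Quadro 2000 / MainConcept CUDA": "1:25",
    "Quadro 6000 / MainConcept CUDA": "1:18",
    "HD 7970 / Sony encoder":         "1:16",
    "HD 7970 / MainConcept OpenCL":   "4:18",
}

baseline = to_seconds(runs["HD 7970 / Sony encoder"])  # fastest run
for name, t in runs.items():
    ratio = to_seconds(t) / baseline
    print(f"{name}: {to_seconds(t)}s ({ratio:.2f}x the fastest run)")
```

The unaccelerated MainConcept OpenCL run works out to roughly 3.4x slower than the fastest, which is consistent with the 0% GPU reading.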



John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 9, 2015 at 2:34:35 pm

[Sorin Nicu] "4:18 min. GPU HD 7970 (GHz edition) @0% MainConcept OpenCL (didn't make use of GPU)"
That's correct. The MainConcept encoder stops at the HD 6000 series of GPUs, so it won't use your 7970. This is why I've stayed with the Radeon HD 5870. I already said that in this thread when I recommended "the Radeon R9 290x. If you can't afford that get the R9 280. They won't accelerate MainConcept AVC but they will accelerate everything else..."

~jr

http://www.johnrofrano.com
http://www.vasst.com




Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 9, 2015 at 11:13:23 pm
Last Edited By Sorin Nicu on Jan 9, 2015 at 11:24:16 pm

Yes John, I was aware; I just wanted the HD 7970 for other things too and reported my findings in Vegas...

I have one more x16 slot available; I might add an HD 6950/6970 just to use it for MainConcept, even though the Sony encoder seems to do a decent job.
I wonder when Sony will push DivX to update their encoder software or... drop it altogether from Vegas.

However, I am baffled why the GPU utilization cannot go higher (the CPU wasn't at its maximum either).
I wonder if Windows 8.1 can use the video card more efficiently (WDDM 1.3 versus WDDM 1.1 in Windows 7)?



John Rofrano
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 10, 2015 at 12:29:42 pm

[Sorin Nicu] "However I am baffled why the utilization of GPU cannot go higher (CPU wasn't used at max either). "
I agree. I've seen renders where my drives were hardly being accessed, the CPU was at 13%, and the GPU at 40%, and I sat there wondering why I wasted my money on all this compute power when obviously none of it is being taken advantage of by Vegas Pro. :(

~jr

http://www.johnrofrano.com
http://www.vasst.com




Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 10, 2015 at 1:50:00 pm

I just tested a random file, rendering it with the Sony encoder, which supports OpenCL.
During this render I used Windows Resource Monitor and GPU-Z to monitor the usage of various parts of my PC.

1. HD 7970 GPU utilization - a maximum of 24%, but variable (sometimes dropping to zero). The average is maybe 15%.
With my previous NVIDIA GTX 480 (modded as a Quadro 6000) and MainConcept, it was at 35-40%, but very steady during rendering.
2. CPU - my 6-core Xeon with HT (12 logical cores) was at 38%.
3. Memory - of my 15GB of DDR3, Vegas was using only 1.2GB (this can be shown in Resource Monitor).
4. Disk utilization - below 0.5 MB/s. My render drive is a RAID 5 of three HDDs that can push 70-90 MB/s.

I am puzzled about where the bottleneck is...



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 11, 2015 at 12:45:05 am

Sony once told me this is hard to diagnose and is likely a bottleneck somewhere in the system. However, many of us have some wicked fast systems, and simple Main Concept projects should not be too taxing. I used to have a GTX 460 with my i7 3820, and I noticed that CPU-only renders were faster than CPU + GPU renders! After some research, my best hypothesis was that my CPU was too fast. Or rather, the CPU could process the data faster by itself than the time it took to determine what could be sent to the GPU, offload it from cached memory, send it to the GPU cache and/or VRAM, have the GPU process it, unload it from the GPU cache and/or VRAM, send it back to the CPU memory cache, and then have the CPU patch it into the appropriate place in the data stream. I think for some codecs, video that is not too compressed, or less complex projects, it is simply easier for the CPU to do it all. The GPU really helps when heavy processing is required on supported codecs. In such cases, the extra time required to transfer the data back and forth between the CPU and GPU is worth it, because the CPU alone would take significantly longer to process it.
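The trade-off described above can be sketched as a toy cost model. All numbers here (rates, transfer overhead) are made up for illustration, not measurements from Vegas: offloading only pays off once the GPU's compute saving exceeds the fixed round-trip transfer cost.

```python
# Toy cost model for the CPU-only vs. CPU+GPU render trade-off.
# All rates and overheads are hypothetical illustrative numbers.

def cpu_only_time(work_units, cpu_rate=1.0):
    """Seconds for the CPU to process everything itself."""
    return work_units / cpu_rate

def cpu_plus_gpu_time(work_units, gpu_rate=4.0, transfer_overhead=3.0):
    """Seconds when the GPU does the work, paying a fixed round-trip
    transfer cost (RAM -> VRAM -> RAM) before any speedup is seen."""
    return transfer_overhead + work_units / gpu_rate

for work in (1, 10, 100):
    cpu = cpu_only_time(work)
    gpu = cpu_plus_gpu_time(work)
    winner = "GPU" if gpu < cpu else "CPU"
    print(f"{work:>3} units: CPU {cpu:6.2f}s  GPU {gpu:6.2f}s  -> {winner} wins")
```

For a tiny job (1 unit) the CPU wins because the transfer overhead dominates; for large jobs the GPU wins despite it, which matches the behavior described with the GTX 460.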

I think it was on this thread that I mentioned OpenCL 2.0. I am very intrigued by OpenCL 2.0 because it allows the CPU and GPU (or other acceleration hardware/cards) to share the same memory. So instead of transferring data between RAM and caches, both can access the same memory region. I do not know whether the GPU would have to access the CPU cache, or whether both can use the GPU cache and RAM. Either way, both processors would be able to access the same data, process it, and write it back to the same location. At this time, only Quadros and FirePros support OpenCL 2.0; however, AMD just released a new driver supporting it. Sony confirmed that Vegas 12 does support OpenCL 2.0, but does not take advantage of the shared-memory enhancement. OpenCL 2.0 is still very new, and software developers are only experimenting with it. Additionally, the firmware for various hardware would almost certainly have to support this as well. As such, shared memory might not be around for some time in Sony Vegas, and it will likely require much newer hardware yet to be released.

Current System: Intel i7 4930K OC'd to 4.6 GHz| Asus P9X79 Deluxe | 32GB RAM | AMD R9-290X w/8GB RAM | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 11, 2015 at 1:53:17 am

Haha, Sony "supports OpenCL 2.0" - translation: "it will work, because it is backwards compatible".

About the transfers between GPU and CPU: I had modded a GTX 480 into a Quadro 6000. That activated a hidden bi-directional DMA between system memory and card memory - as a gaming card it has it active in only one direction, from system to video memory. I think that helps slightly with compute tasks.

However, even on pure CPU encoding, my CPU (6 cores, HT, with 12MB of shared L3 cache) is utilized at only 50%. Stranger still, of my 12 virtual cores, two are constantly "parked" and another toggles between "parked" and "in use".

I always thought that video encoding is a highly parallel process and would scale perfectly. After all, you can process hundreds or thousands of segments at once, split between I-frames (keyframes).



Kell Hymer
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 11, 2015 at 11:21:55 pm

So how difficult was it to turn the GTX into a Quadro? I have seen a few things online, but nothing that breaks it down too well.

Current System: Intel i7 4930K OC'd to 4.6 GHz| Asus P9X79 Deluxe | 32GB RAM | AMD R9-290X w/8GB RAM | OCZ Revo 480 GB PCI Express SSD | Windows 7 64 bit | Vegas Pro 12 (64)



Sorin Nicu
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 12, 2015 at 2:25:25 am

Well, after a lot of reading, I eventually got it done. But the GTX 480 remains a power-supply hog; I needed a bigger power supply for it anyway.

Also, the GTX has only 1.5GB, not 6GB of ECC like the real Quadro 6000 - it's not like the memory will grow magically :)



Dave Haynie
Re: Beating a Dead Horse For Good Measure: OpenCL, CUDA, and GPUs in Sony Vegas
on Jan 19, 2015 at 5:37:37 am

[mark thompson] "just adding my 2 cents. I'm not saying anything is wrong but I would shift the emphasis slightly.
CUDA is an architecture. You program to it in C or CUDA C, which is an extension to C/libraries for C. That enables you to program the CUDA-based chips from NVIDIA."


CUDA is actually more like a language than anything else. Yes, you program it via a C API. That creates a functional graph, which will be compiled when loaded on any given nVidia GPU. You are creating a high level program when you write CUDA code. Yes, it is designed for nVidia GPUs... but their architecture has changed quite a bit over the years, while CUDA programs still run.

[mark thompson] "OpenCL is a standard that implementers can use to provide a common programming interface across multiple GPU architectures.
In theory OpenCL should be a little less efficient than programming the chip's native interface. There is no fundamental reason that you couldn't provide an OpenCL interface on top of CUDA."


OpenCL works much the same way, only it's completely architecture independent. Again, there's a run-time compiler in your OpenCL subsystem and graphics driver that targets any OpenCL program to the device of choice. OpenCL runs on GPUs, naturally. You can also run it on a CPU (AMD offers an OpenCL driver for AMD CPUs). And non-GPU compute engines -- Intel's Phi board runs OpenCL. Xilinx and Altera have OpenCL compilers for their FPGAs.

Both CUDA and OpenCL may be somewhat less efficient than programming "to the metal" on a GPU (keep in mind, CUDA isn't directly programming a GPU either) - much like programming in C vs. assembler. On the other hand, modern CPUs are actually often more efficient when programmed in C or other HLLs, simply because a good compiler knows everything about the CPU architecture in use. It's difficult to keep all of the various complexities of a modern CPU in one's mind (virtual register allocation, the performance of every instruction, prefetch efficiencies for different instruction orderings, the various parallel integer, floating-point, and special units in each CPU core that run concurrently, pipeline efficiencies and stalls, etc.). GPUs are simpler processors, but more complex as a system. You can get very "low level" with CUDA, as there's basically an assembly-language mechanism available - and in fact, some developers have found that necessary to get the best performance out of Kepler processors.

As far as CUDA vs. OpenCL, your mileage may vary. CUDA may in general be a little more tightly coupled to the hardware, but it's also largely synchronous. OpenCL is largely asynchronous, which allows it to run faster on some algorithms. And while CUDA may outperform OpenCL on some things on NVIDIA hardware, it's really difficult to get an honest comparison. AMD has long had better OpenCL implementations, and NVIDIA intentionally cripples some OpenCL and CUDA operations on standard and workstation cards, since their "compute engine" cards also use basically the same processors. So you can find some benchmarks in which the AMD is totally clobbering the NVIDIA - this has been the case with Bitcoin mining, for example (well, at least back when it made economic sense to mine Bitcoins on GPUs - and curiously, it was really hard to get a top AMD GPU card for a short time as a result).

-Dave



© 2017 CreativeCOW.net All Rights Reserved