Graphics Card Benchmark for Sony Vegas Pro 13 or 12?
Is there a graphics card benchmark somewhere for Sony Vegas Pro 13 or 12?
I've heard that buying a very powerful graphics card doesn't necessarily make Vegas faster, because Vegas only takes advantage of certain models.
Thanks for your help :)
EDIT: I've found the beginning of an answer; the AMD Radeon R9 290 seems to get some of the best rendering times with SVP12:
This benchmark was made by SONY, so it's reliable.
AMD Radeon R9 290 : 22s
AMD Radeon R9 290X : 22s
AMD Radeon R9 280 : 23s
AMD Radeon HD 7950B : 24s
AMD Radeon HD 7970 : 24s
AMD Radeon HD 6970 : 34s
AMD Radeon HD 5870 : 35s
NVIDIA GeForce GTX Titan : 38s
NVIDIA GeForce GTX 780 : 39s
NVIDIA GeForce GTX 770 : 45s
Do IMPROVED render times equate to IMPROVED preview FPS?
Video Content Creator and Potter
PC: Windows 7 64-bit, 16GB * Intel® Core™ i7-2600K Quad Core 3.40GHz * 2GB NVIDIA GeForce GTX 560 Ti
Cameras: Canon XF300 + PowerShot SX50HS Bridge
Those are some very interesting figures Cedric!
I have always been told that Nvidia cards are superior for video editing; but I think that is just in regard to Adobe products.
My current card, an old AMD Radeon 5870, is faster for SVP than the best Nvidia card.
Which, considering how slow my system is, is quite depressing.
[Ryan McRobb] "I have always been told that Nvidia cards are superior for video editing; but I think that is just in regard to Adobe products."
One problem with memes and other things "everyone knows"... they die a hard death.
When GPGPU computing was young (oh so much younger than today), it was pretty much just CUDA. AMD had this thing called Stream that no one supported, and once Adobe figured out how to have the GPU help you out, it made nVidia the clear leader. nVidia was also faster at gaming for the most part, and maybe had an edge in OpenGL, at least from time to time, so no one really questioned this idea.
But looking at least at OpenCL benchmarks, and in particular at Vegas benchmarks, that's no longer the case. If it ever was. As I've mentioned before, I bought both the nVidia GTX570 and the AMD HD6970 right after Vegas 11 came out, did lots of benchmarks, and found the AMD faster at everything, and bugs in some of the OpenCL stuff (non-Vegas) on the nVidia. They cost the same, so this was a no-brainer.
But it's not just that. Ok, sure, you'd expect the newer AMDs and nVidias to do better -- both nVidia's Kepler architecture and AMD's GCN architecture were designed from the ground up to do this General Purpose GPU computing, as well as the usual stuff. Look at the latest stuff... the nVidia Titan should, by all rights, be wiping the floor with my 2011-vintage HD6970, like the R9 290 is. But it's not... the HD6970 is actually beating it on the Sony benchmark.
But here's the weird part: nVidia hitting a wall. What you'd expect to see is something like the DirectCompute benchmarks (first and last in that linked article): a gradual drop in performance as you go to older or lesser cards. And sure, there's that double-precision Folding@Home run that has the Titan pull way out in front of everyone else... the Titan does have the same chip as the Tesla K20X, which is sold specifically for "compute" applications. But look at any of the other OpenCL benchmarks -- nVidia just hits a wall. Sure, the HD5870 and HD6970 lead most of those benchmarks, but keep in mind, the HD6970 is from 2011; it's a contemporary of the GTX5xx series. And the HD5870 is from late 2009. They're not even testing the nVidia GTX5xx or GTX6xx devices from that era.
Hitting a wall like that usually tells you there's a problem of some kind. No telling if it's intentional or not, but it would sure be interesting to see a Tesla K20X run those same benchmarks... same chip as the Titan, the Kepler GK110. The Titan is higher clocked, too. But the Tesla's 3x the price. I wouldn't be surprised if the Tesla did better on those benchmarks anyway. Could be chip yield, could just be software holding the Titan back. Which never looks good if the other guy isn't doing that.
[Ryan McRobb] "I have always been told that Nvidia cards are superior for video editing; but I think that is just in regard to Adobe products."
Yes, NVIDIA is only good with products that have bought into their proprietary CUDA architecture. They are not very good, however, with open standards like OpenCL.
[Ryan McRobb] "My current card, an old AMD Radeon 5870 is faster for SVP than the best Nvidia card. Which, considering how slow my system is, is quite depressing."
How slow your system is may have more to do with your other components than your graphics card. I have the same AMD Radeon 5870 in my 2008 Mac Pro 8-core and it plays back the Sony "Red Car" project at full frame rates, which my brand new Core i7-3930K with NVIDIA Quadro 4000 can't even do! So I don't consider my 2008 Mac Pro system slow. It's old... but it's not slow.
The secret is to balance your components. If you have some really slow CPU then that might be your problem and a new CPU with your old AMD Radeon 5870 may be extremely fast... faster than with an NVIDIA card I would guess.
Those results are correct but slightly misleading. If you read the description on the website, the renders were tested with the XDCAM EX format. I bought a 290X mainly to render videos for YouTube, and the best format for YouTube is MainConcept AVC/AAC (.mp4). It compresses videos really well, with minimal loss in quality. If you don't mind rendering in XDCAM EX, then the 290 cards are perfect: they render that format fast and it's high quality. The only problem is that the file size will be dramatically bigger, almost twice the size. XDCAM has very few options, and you can only render at two set bitrates. If you are looking for render acceleration of MainConcept and Sony AVC, then the GTX 570 and the AMD Radeon HD 6850 are still your best options. It's sad to say, but Sony still hasn't updated compatibility. Newer cards do help accelerate the preview, but they do not accelerate the render for the most commonly used format.
[Miguel Peraza] "It's sad to say, but Sony still hasn't updated compatibility. Newer cards do help accelerate the preview, but it does not accelerate the render for the most commonly used format."
In fairness to Sony, it's not under their control. MainConcept as a company has been bought out twice and apparently is not updating their codec to be compatible with the newer cards.
[Miguel Peraza] "It's sad to say, but Sony still hasn't updated compatibility."
As others have stated, the Main Concept issues are strictly Main Concept. I actually checked their web site, and the H.264 CODEC they're advertising is exactly what Sony's shipping today... unchanged since Vegas 11 or so. That's the problem when you get acquired. When Main Concept was Main Concept, their primary function was developing CODECs to sell, mostly embedded, as with Sony, in other folks' products. When DivX bought them, the focus certainly shifted, at least in part, to incorporating their CODEC technology in other products. When Rovi bought DivX, even more products, and pretty much zero development on the CODEC. And even with the spin-off of DivX and Main Concept to Parallax, I wouldn't hold my breath for improvement.
If you want to fault Sony, go ahead, but the real problem is that they've stuck with Main Concept for AVC. They really should find another CODEC provider, and they should fully embrace the use of Windows-standard DirectShow CODECs... much better than Video for Windows, because they can output any format, not just AVI. Or fully document the Sony plug-in interface for CODECs. Or support a built-in frame server. This isn't uniquely an H.264 problem, since there are a bunch of new CODECs up and coming that may become important soon enough.
[Miguel Peraza] " Newer cards do help accelerate the preview, but it does not accelerate the render for the most commonly used format."
That's not strictly true. If you see Vegas speeding up during preview, that same compositing engine is used -- and sped up -- during rendering. That doesn't just feed whatever CODEC you're using faster, but also frees up more CPU cycles for that CODEC. So no, the GPU doesn't speed up the rendering CODEC itself, but it does speed up rendering. Try a CPU-only render versus OpenCL (as set in video preferences -- don't forget to restart) to see the effect of the GPU with any un-accelerated CODEC.
And sure, Main Concept does an excellent job of using the GPU for H.264... I see about a 6x speedup over no GPU at all. They're doing practically everything except entropy encoding on the GPU. But the quality is not as good as the CPU-only version.
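That ~6x figure implies most of the encode really does run on the GPU. As a rough sanity check (a sketch using Amdahl's law, not anything published by Vegas or Main Concept):

```python
def min_parallel_fraction(speedup):
    """Amdahl's law lower bound: an observed overall speedup S implies
    at least 1 - 1/S of the work was moved off the CPU, even if the
    accelerated part became infinitely fast."""
    return 1 - 1 / speedup

# A 6x render speedup means at least ~83% of the encode ran on the GPU,
# consistent with "everything except entropy encoding".
print(min_parallel_fraction(6))  # 0.8333...
```

The same bound explains why a more typical 2-3x speedup is still consistent with heavy GPU use: CPU-side stages and I/O eat into the gain quickly.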
Does anyone know if Sony Vegas Pro 12 or 13 supports the NVIDIA GeForce GTX 760M for GPU rendering? I'd appreciate it if someone could answer my question.
I am trying to buy an Asus N56JR-S4075H 15.6" laptop which has an NVIDIA GeForce GTX 760M graphics card.
[Sonam Sherpa] "Does anyone know if Sony Vegas pro 12 or 13 supports NVIDIA GeForce GTX 760M for GPU rendering? I appreciate if someone answers my question."
Yes, it "supports" it, but the real question is: will it make your renders faster or slower? Vegas Pro has two areas that will be accelerated by your GPU. One is timeline playback and the other is encoding while rendering. The MainConcept AVC encoder will not take full advantage of the GTX 760M while rendering, but timeline playback will. However, our testing has found that ATI cards work better for timeline playback, as they have a better implementation of OpenCL, which Vegas Pro uses for timeline playback GPU acceleration. So the GTX 760M will work, just not as well as other GPUs.
I figured I'd use this thread instead of creating a new one. I did a search and found this here. So my issue is: I was using a GeForce GTX 260 and I used the Kuda cores from it to render, and it was, if I recall, 3x faster than the CPU. I since upgraded (so I thought) to a GeForce GTX 750 Ti, but it seems that it doesn't take advantage of Kuda cores at all. There is no difference when using the CPU vs. KUDA if available.
with the GTX 260 I had 192 Kuda Cores
with the GTX 750Ti I have 640 Kuda cores
To do a basic 720p render of a 20-minute video from my Lumix GH3, it takes 45 minutes using the CPU.
I find this very frustrating. There are no crazy effects or anything. Why is the rendering taking so long? Over twice the length of the actual footage.
AMD FX-8320 Eight Core Processor 3.5GHz (maybe I should have gone Intel?)
16GB DDR3 RAM
OCZ Vertex2 SSD (OS and NLE on it)
3TB USB 3.0 Seagate (output files go to this)
I don't have a beast or anything, but I thought it would be decent enough for basic editing and rendering.
I guess I am not sure what to do. Use the old video card to get faster rendering speeds? But then I can't play 1080p YouTube videos, etc.
I am kinda bummed out :(
The problem is that the newer NVIDIA cards (600/700) use a new architecture and Vegas Pro doesn't take advantage of all the processing power which causes them to actually be slower than other 400/500 series cards. I would recommend you upgrade your 200 series card to a 500 series card (570). That will give you faster rendering.
Thank you for your response. I will be testing a 580 that my buddy has for sale. That should be OK since it's a 5-series, I assume... he has 2 of them for sale. Does it make a difference if I ran both? Would it be even faster than one card at rendering?
Ok.. there are a couple of things at work here. You probably won't like any of them.
1) It's CUDA, not KUDA. Stands for Compute Unified Device Architecture. If you're German, I'll give you a pass on this one. But nVidia calls it CUDA. Use the right name so you don't sound ignorant.
2) Your CPU. AMD FX 8320. That's actually a very good value these days, so you're probably getting more CPU per dollar than most folks. You're seeing a CPU Passmark of 8079. My six core Intel i7-3930K delivers 12135 on the same benchmark. But it'll actually do better on video rendering. My old CPU, an AMD Phenom 1090T, did 5703 on the Passmark, on six cores. The problem with the new AMD architecture is that you don't have an actual 8-core processor, you have a processor composed of four "compute modules". Each compute module contains two integer processors and one floating point unit that's shared. They also have individual L1 caches and a shared L2 cache. The idea is that you'll get better performance than a single core, which is true... but less than two individual cores, also true. Yours is a Piledriver chip, at least, which did make a bunch of improvements over the Bulldozer architecture -- an 8-core Bulldozer wasn't necessarily any faster than the six core 1090T.
So in reality, I can render some stuff faster than realtime on my faster system. Other stuff, not so much. It's dependent on the job. Even if you add little things, like level or color adjustments to a video, you'll add what can be very noticeable overhead. And when you do a just-plain render of video to output, there's not much help that Vegas' built-in GPGPU stuff can give you (see #3).
So should you have gone Intel? It's a price-performance thing. Your system was probably pretty close to what I have long described as the "knee of the commodity curve". If you're looking for value, when an item is a commodity, you basically get at least twice as "much" for twice the money... so you spent on an 8-core CPU, and you more or less get 4x the performance of a 2-core CPU (one compute module) from the same architecture. Thing is, as you pass that "knee", you start having to pay exponentially more money for increased performance. So for example, my CPU cost $500 new in 2013. It's certainly not twice as fast as yours, but I probably paid several times as much. And more for the motherboard, since it's a rarer part. You have to be the judge of your cash vs. performance needs, but without more information, I don't think you made the wrong decision. Are you under very strict timelines delivering the final video product? In that case, and particularly when there's pay involved, you want the faster CPU despite the cost. If it's more of a casual thing, you did well saving the money.
3) The Vegas Architecture. Vegas is actually a collection of plug-ins plus the main program. When you render a video, you are setting parameters for Vegas itself, and for the particular CODEC that's doing the rendering. If you find a setting in the CODEC for "CUDA" (and you know that's the right spelling if you set it), OpenCL, or CPU-Only, you have found the controls for just that CODEC, the Main Concept AVC CODEC most likely. We'll get to the reason (one of those things you don't want to hear and no one's happy about), but consider the other control.
So fire up Vegas, go to Preferences, and click on the Video tab. Look at the third line down, "GPU acceleration of video processing". That should not say "Off"; it should have your nVidia driver listed. That's an OpenCL thing... Vegas doesn't actually use CUDA internally. This is the internal control. Some plug-ins get their GPGPU settings from Vegas (like all of the plug-ins that come with Vegas); others make you adjust parameters independently of Vegas.
4) Geforce GTX 750Ti vs Main Concept. So as I mentioned, there are TWO places to set your GPU. If you go to Vegas' Preference and set your GPU, Vegas will use the GPU for anything a GPU can do for Vegas internally, and it'll pass that GPU selection on to plug-ins that use Vegas' preference information. So you want to do that. If you don't see your nVidia there, get some recent drivers that support OpenCL. This WILL make every video render faster, and it's also the only thing that'll enable the GPU to make editing and preview faster.
Now back to that AVC plug-in. Main Concept was a company that just did video CODECs; Sony used their stuff going way back. In Vegas 11, released in 2011, Sony first bought the version of the Main Concept AVC plug-in that did a pretty good job of using the GPU for rendering. The problems were already at work then. While Main Concept was established (in Germany) to make video CODEC technology, they were a little too successful. So they were acquired in 2007 by DivX, at the time a very successful company selling MPEG-4 ASP products, looking to get more advanced video CODEC technology. This wasn't a huge problem yet. But in 2010, Sonic Solutions, soon after part of consumer video products company Rovi, bought DivX. And they haven't put much into the company. So Sony engaged with Main Concept/Rovi to get the video CODEC technology for AVC into Vegas. It was pretty good back then. But it was essentially the same for Vegas 12 and Vegas 13.
And none of that would itself have been a problem, except Main Concept did a bad and very evil implementation of the GPGPU computing. The whole point of both CUDA and OpenCL is to allow any kind of device with sufficient math performance, particularly for OpenCL, to do computing with applications that know nothing about the specifics of that processor. In fact, AMD has a version of OpenCL for its CPUs, just to allow OpenCL development/support without a supporting GPU. Intel has a few non-GPU massively parallel processor boards, like the Xeon Phi, that use OpenCL. It's a very good thing. Any recent nVidia card can do OpenCL in addition to CUDA -- CUDA is a proprietary, nVidia-only system.
Here's the thing: Main Concept hard-wired their CODEC to only work with a list of very specific GPU chips. No good reason... presumably, they did this to force folks like Sony to pony up more cash for the new version of their CODEC for new versions of Vegas. Only, Main Concept was the one that failed to deliver. As a result, the Main Concept CODECs only support GPUs that were around in 2010 and perhaps up to mid-2011. My AMD Radeon HD6970 helps me render video up to 6x faster than I'd get just using my 6-core i7... usually it's more like 2-3x. Newer GPUs will do the part of the GPU acceleration that Vegas handles faster than mine will, and that's both editing performance as well as the rendering pipeline for any CODEC. But they'll take longer in the actual rendering component of your video, since MC refuses to use your perfectly good GPU that should be able to render with either CUDA or OpenCL.
5) The complexity of your project. That matters, big time. It's not just video rendering... how many files is Vegas loading for your project? Where are they -- which HDDs or SSDs? Particularly for HDDs, you don't want too much media coming from the same disk, or you may thrash. Check your Task Manager/Performance display, and look for around 90% CPU being used during a render. If it's significantly less, you have a bottleneck that's not the CPU. You don't want that, ever... the CPU (and GPU, if it's helping) need to be the bottleneck, because they're the fastest things in the system.
And even finely tuned, some projects are just too big. I had a couple of music videos with animation, composited in Vegas with 40-70 layers, that took 4-8 hours to render. For a 2-3 minute video. Nothing I could do about it but bite the bullet.
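The ~90% rule of thumb from point 5 can be sketched as a trivial check (the threshold is illustrative, not something Vegas reports itself):

```python
def diagnose_render(cpu_busy_pct, threshold=90):
    """Classify a render by sampled CPU utilization: if the CPU (and GPU)
    aren't nearly saturated, something slower -- usually disk I/O --
    is the real bottleneck."""
    if cpu_busy_pct >= threshold:
        return "CPU-bound: expected; the CPU/GPU should be the limiting factor"
    return "under-utilized: look for disk thrashing or starved media reads"
```

So a render showing, say, 55% CPU in Task Manager is a cue to spread source media across drives before blaming the processor.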
WOW! Thank you very much for schooling me. I really do appreciate this in-depth response from you. This was amazing. OK, first: I got it now. CUDA! Not kuda. I wanted to just say that I noticed, when I had GPU acceleration of video processing set to use, SV wasn't stable; it would crash all the time. Once I disabled it, it didn't crash. So I am a bit hesitant to set that back. I understand that rendering times can vary depending on layers etc., but my example is taking my footage from a GH3 in AVCHD 1080p and adding a 12-second intro that was already rendered, plus 2 lower thirds that last 6 seconds, one at the start, one near the end. The video in total was 18 minutes long, and it took 40 minutes to render using the Internet 720p stock template. I just find that frustrating. But again, that must just be using the CPU, as there is no difference in time when I select CUDA or OpenCL in the option before rendering.
I will get the 580 series card in a couple of days from my buddy and install it, render the same file and see what happens. It's a shame that the newer cards are not benefiting.
Thanks again for your response. I will be reading over it a few times.
"As a result, the Main Concept CODECs only support GPUs that were around in 2010 and perhaps up to mid-2011."
I have to add to this that for nVidia, that means Fermi-based cards.
The Main Concept CUDA encoding works very well on those cards, but Main Concept OpenGL encoding doesn't work well on nVidia (several times less GPU utilization).
Also, newer-generation nVidia cards have FP64 floating point capability crippled by design to 1/24 in Kepler and 1/32 in Maxwell (compared to FP32).
Fermi had 1/8 for gaming cards, up to 1/2 in the professional line.
I think that makes a difference in the encoding process.
[Sorin C. Nicu] "I have to add to this that for nVidia, that means Fermi-based cards. "
Yup... that's one big reason that Vegas renders faster on those than on the newer Keplers.
[Sorin C. Nicu] "The Main Concept CUDA encoding works very well on those cards, but Main Concept OpenGL encoding doesn't work well on nVidia (several times less GPU utilization)."
Actually, no one has any idea how well Main Concept's OpenCL works on the nVidia cards, because of that whole chip keying thing I mentioned. They only enable CUDA or OpenCL for specific chips, despite the whole point of CUDA and OpenCL being that nothing chip-specific is needed. They only enable OpenCL for those older AMD/ATi GPUs, the HD5xxx and HD6xxx series. Any other card -- any nVidia, the GCN AMDs, etc. -- is running on the CPU only when set to OpenCL.
OpenGL is something different.... "Open Graphics Library" versus "Open Computing Language". Vegas will use OpenGL for some 3D graphics plug-ins. OpenCL is used by Vegas internally for some compositing work, and of course, some plug-ins use it for things like rendering.
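The chip keying described above amounts to something like this hypothetical sketch (the chip names and the exact behavior are illustrative; Main Concept's actual list isn't public):

```python
# Hypothetical device whitelists, per the behavior described above:
# OpenCL enabled only for HD5xxx/HD6xxx-era AMD chips, CUDA only for
# Fermi-era nVidia chips; everything else silently falls back to CPU.
OPENCL_WHITELIST = {"Cypress", "Cayman"}   # illustrative AMD chip names
CUDA_WHITELIST = {"GF100", "GF110"}        # illustrative Fermi chip names

def encoder_path(api, chip_name):
    """Pick the encode path the way a chip-keyed CODEC would."""
    if api == "OpenCL" and chip_name in OPENCL_WHITELIST:
        return "GPU"
    if api == "CUDA" and chip_name in CUDA_WHITELIST:
        return "GPU"
    return "CPU fallback"  # even on a perfectly capable newer GPU
```

This is exactly the opposite of what OpenCL's device-neutral model is for: a GK110 Titan reports itself as a valid OpenCL device, but a name-keyed encoder never asks.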
You are right, I meant OpenCL. In my tests I see some minimal GPU utilization if I select OpenCL, as opposed to CPU only, but that might be due to playback of the resulting video, not the encoding itself.
Main Concept encoders page states that OpenCL works only for ATI.
Thank you for the eloquent and in-depth explanation of Sony Vegas Pro. I've been using SVP on and off for 5 years with some of the exact questions your response answered.
Best regards, Glenn Marino
Thanks for the clarification... but I still have some doubts.
I have been using SVP for personal projects, using only the CPU.
I am thinking of doing video work more frequently, and a decrease in render times is a must.
So, with SVP 13 and new cards (AMD R9 xxx, NVIDIA GTX 7xx, or GTX 9xx), is GPGPU viable?
From what I have read, AMD (with OpenCL) helps in play mode.
But are they useful for render times?
Or should I switch to Premiere?
[Rui Alberto] "By what i have read amd (with openCL) help in the play mode. But are they useful in render times?"
By definition... yes! You need to play back in order to render. So if your playback is 5 fps, your render will never be faster than 5 fps. If, however, your playback is 25 fps with GPU, you just increased your render speed by 5x! So increasing your playback rate definitely helps during render.
I think your question was whether the encoders will use the GPU to encode the video, and the answer depends on which encoder you use. They are all different: some have no GPU acceleration at all, others work with all GPUs, and still others only work with older GPUs (e.g., MainConcept AVC).
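The playback-rate argument above is just arithmetic: the compositing pipeline caps the render, so frames divided by playback fps is a floor on render time. A minimal sketch:

```python
def min_render_seconds(clip_seconds, playback_fps, clip_fps=25):
    """Lower bound on render time: the encoder can't consume frames
    faster than the timeline pipeline delivers them."""
    total_frames = clip_seconds * clip_fps
    return total_frames / playback_fps

# 60 s of 25 fps footage: 5 fps playback vs. 25 fps playback with GPU help
print(min_render_seconds(60, 5))   # 300.0 -> at least 5 minutes
print(min_render_seconds(60, 25))  # 60.0  -> real time at best
```

The encoder adds its own time on top of this floor, which is why a GPU-accelerated timeline helps every render, even with a CPU-only CODEC.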
"They are all different and some have no GPU acceleration at all, while others work with all GPU's, still others only work with older GPU's (e.g., MainConcept AVC)"
Thanks. I see it more clearly now.
With all the criticism "against" MainConcept, I got the impression these were the only encoders.
[Rui Alberto] "With all the critics "against" MainConcept, i got the impression these were the only enconders."
Well... it's the most popular one, because everyone is delivering MP4 these days, so it gets a lot of attention.
I just bought a Radeon HD 7950, which I assume will not work with MainConcept, but I'm more concerned with timeline playback. I don't expect a big boost over my Radeon HD 5870, but it's the fastest card that's supported by my 2010 Mac Pro and I wanted to upgrade before they're not available anymore. I'm hoping Boris FX will take better advantage of the 3GB memory of the 7950 vs. the 1GB memory of the 5870. We'll see when it gets here.
I created an account just to add to this discussion. I was searching for answers to the same question many are having: how come I can't see a noticeable difference in render time with my new PC?
I had a small sample clip that I rendered with my old PC; sadly, my new PC rendered it just a few seconds faster. Same Vegas project file, same Sony AVC 1080p output.
Old Dell PC: 3:25 render time
Nvidia GTX 560
rendering to separate WD Black HD.
New PC: 3:14 render time
Asus z97 Mobo
rendering to separate WD Black HD.
New PC: also no noticeable render time difference with GPU on vs. CPU only.
In summary, nope... no big difference in render time with a brand spanking new system. =(
[Ty Yang] "In summary, nope.. no big difference in render time with brand spanking new system. =("
What did you specifically upgrade that you thought was going to reduce render times? Simply "buying a new PC" isn't a plan. So let's look at your configuration.
Intel i7 920 vs i7 4790k
That is an upgrade in architecture and a bump in clock speed from 2.8GHz to 4.0GHz, so you should see a small boost from that upgrade, but you still only have 4 cores, so don't expect much.
Nvidia GTX 560 vs Nvidia 750ti
Vegas Pro uses OpenCL for timeline GPU acceleration and NVIDIA has poor support for this, so I wouldn't expect you to see any benefit in timeline GPU performance from this upgrade. A better choice would have been to switch to an ATI Radeon card, which excels at OpenCL support.
Sony also seems to have trouble with newer NVIDIA cards for render GPU acceleration so again, I would not expect you to see any benefit from upgrading your NVIDIA GPU.
24GB RAM vs 16GB RAM
That's actually a downgrade, but I don't believe it would have any impact anyway because 16GB is more than enough memory for Vegas Pro to use.
Old Dell PC: 3:25sec render time vs New PC: 3:14sec render time
It looks like the only thing buying a new PC did for you was increase your CPU clock speed from 2.8GHz to 4.0GHz. Other than that, there is no reason to believe that your new PC would perform any better than your old one.
If you really wanted better performance you should have upgraded to more cores (i.e., 6-core, 8-core, etc.). When I went from 4 cores to 6 cores I saw a pretty good improvement. I'm currently using a Mac Pro 8-core and my next purchase will be a 12-core. Just buying another 4-core computer with a 1GHz bump in processor speed is not going to help much, as you have seen.
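Plugging in the numbers from this exchange (render times from the post above; clock speeds as quoted, treating the render as if it were purely clock-bound, which it clearly wasn't):

```python
old_render_s = 3 * 60 + 25   # 3:25 on the old i7 920 @ 2.8 GHz
new_render_s = 3 * 60 + 14   # 3:14 on the new i7 4790K @ 4.0 GHz

observed = old_render_s / new_render_s  # ~1.06x actual speedup
clock_only = 4.0 / 2.8                  # ~1.43x if purely clock-bound
```

The gap between ~1.06x and ~1.43x says the render wasn't even fully limited by CPU clock: the encoder, GPU path, or I/O was the real ceiling, which matches the analysis above.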
Sony Vegas supports GPU encoding only for Fermi generation cards. Your 'new' card therefore is unsupported, good for gaming but crippled in Vegas.
Also, you have to select CUDA (or OpenCL for ATI) manually, otherwise it will default to CPU encoding (rendering). Even if OpenCL is available for nVidia rendering, use CUDA...
OpenCL is very useful for timeline view (supported there by both nVidia and AMD), but your tests measured basically only the encoding (rendering) process.
There are so many variables that could affect this, and you didn't describe your situation very well, so I will assume you don't know any of what I'm going to write.
Have you already compared your processors?
It seems your old one didn't have onboard graphics and the new one does, so you should make sure there is no configuration conflict between your off-board and onboard graphics going on here.
Also, I could see that your old processor had 3 memory channels (not quite sure about this one), and that could have some effect depending on how many channels you were using with your last CPU.
Also, I will assume that you didn't compare the CPU usage of both CPUs while rendering, so the bottleneck could be anywhere.
Besides that, as another member already mentioned, you must choose to use CUDA in the rendering options to take advantage of NVIDIA technology for rendering.
There are also HDs, memory channels, memory frequency, etc...
I have an i7-4770 and I am planning to buy a GTX 750 Ti. We can share experiences later on if you want.
Can someone tell me if the 750 would be an upgrade from the Intel Graphics 4000 (integrated)?
I realize it doesn't use OpenCL well, and neither does the Intel Graphics 4000, but I'm not too impressed with what I have.
Is this a good deal, and if not, can someone suggest a better deal on a GPU in the same price range, please?
Windows 7, 64 bit, Vegas Pro 12
Intel Core i7-3770S
Asus P8Z77-V LK mobo
2X8GB Corsair XMS3 memory
180GB Corsair Force series 3 SSD
Intel Graphics 4000 (integrated)
Presonus Audiobox 22vsl
Thanks very much,