Ae Render "Failures"
I apologize for the length of this message. I am working on a motion graphics animation tutorial for a board game company. For some reason my machine is not releasing RAM between shots in my Render Queue, which causes my shots to "fail" (the message Ae gives me) overnight when moving from one shot in the queue to the next. I posted this same message over at Video Copilot and have not gotten any replies.
I have included a screenshot of one of my comps so you all can see the layers. The "Decks" are pre-comps with 25 layers each. The three pre-comps near the top are overlay layers (animated drop-downs that explain rules when the VO talks about them) and are not 3D. I have 6 Element 3D layers: the bottom one is a table and chairs in the shot, and the other 5 are token pools (3 of which are animated tokens moving from one area to another). For the most part, in E3D, I am leaving the particle size at the default 10 and upscaling my 3D objects in the E3D panel. The bottom pre-comp is a room (floor, walls, windows, and a door; all 2D layers).
I would prefer to have shadow casting on the card layers when animating so I can lift the cards from the play surface and have them cast a shadow. The renders take longer than they should, and it does not seem to matter whether I have shadow casting on or off for my card layers. On or off, Ae gives me "failure" messages when going from one shot to the next in the queue.
I thought there might be a bug with E3D in CC 2017, but I opened the project in CC 2015 and ran into the same issue. I have double-checked my hardware and run a RAM test ("Option-D" on startup) on my machine, and it came back clean. I am giving Ae 48 GB of my 64 GB of RAM. I tried using the "secret" menu to force Ae to dump the RAM after 25, 10, and 5 frames, but it does not seem to do anything: no change in the render "failures," and it does not seem to dump the memory. I am monitoring my RAM using the SystemPal app that runs in the menu bar.
I went back to all of my E3D models and made sure they are not too heavy in vertices and faces. Right now my largest model (some curtains in the background) has 10,000 vertices and 4,000 faces. All of my tokens on the table are in the hundreds, so I know they are small enough.
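For anyone wanting to double-check model complexity outside of E3D, the vertex and face counts can be read straight out of the exported .obj file. A minimal sketch (the filename is just a placeholder):

```python
def count_obj_geometry(obj_text):
    """Count vertices and faces in Wavefront .obj source text.

    In the .obj format, lines beginning with "v " define vertices and
    lines beginning with "f " define faces ("vt"/"vn" lines are UVs and
    normals and are deliberately excluded).
    """
    lines = obj_text.splitlines()
    vertices = sum(1 for line in lines if line.startswith("v "))
    faces = sum(1 for line in lines if line.startswith("f "))
    return vertices, faces

# Usage (path is a placeholder):
# with open("curtains.obj") as fh:
#     v, f = count_obj_geometry(fh.read())
#     print(v, f)
```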
I double-checked my textures for size. Nothing is over 2K, and all of them are 72 dpi and in the RGB color space.
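If it helps anyone doing the same audit, texture dimensions can be checked in bulk without opening each file in an image editor. Here is a minimal sketch for PNGs only, reading the width and height out of the IHDR header with no third-party libraries assumed:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data):
    """Return (width, height) of a PNG from its raw bytes.

    After the 8-byte signature, 4-byte chunk length, and 4-byte "IHDR"
    tag, bytes 16-24 hold the big-endian width and height.
    """
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])

# Usage (path is a placeholder):
# with open("token_diffuse.png", "rb") as fh:
#     w, h = png_dimensions(fh.read(24))
#     if max(w, h) > 2048:
#         print("texture is over 2K")
```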
I have troubleshot this thing all to hell and I cannot seem to find the issue. So now, here I am, asking you all. I thank you in advance for your time. And, just to answer this before someone asks, I am doing all of this in Ae because that is what my company has given me. I don't have access to a dedicated 3D program.
Late 2013 MacPro running 10.11.6
3.5 GHz 6-core Xeon E5
64 GB 1866 MHz DDR3 ECC
AMD FirePro D500 3072 MB
After Effects CC 2017
Ae is utilizing 48 GB leaving 16 GB (memory preference)
Monitoring system RAM using SystemPal App (runs in menu bar)
Thank you again, everyone. I look forward to hearing your thoughts.
Mind you, I've never used Element 3D. However, I do notice you're mixing 2D and 3D layers, which generally isn't a good idea.
You're also displaying a lot of tokens and cards. They may be small, but it still takes RAM to hold them while creating the image for each frame.
Instead of doing all this in one huge comp, I would try to prerender anything I could, reducing the amount of RAM needed to create elements of the image: the background, the table, things on the table that don't move, anything you don't have to time out to the audio. You appear to have been ready to render anyway, so these prerendered elements shouldn't be a problem. Set the prerenders as the bottommost layers (obviously in a duplicated comp), keep them 2D, and no one should know the difference.
KGAN (CBS) & KFXA (Fox) Cedar Rapids, IA
Thanks for the reply, Dave.
I have not had a problem in the past with mixing 3D and 2D layers in the same shot. Below is a link to one of my previous videos using not only 3D and 2D images, but baked-in animation out of E3D. This is why I am so confused: I did not encounter this "fail" issue in any of the shots I created for that video, and some of the shots in there are 2+ minutes long with just as many overlays.
I do think you are correct on doing pre-renders. I have a feeling that will be the only solution to this issue. I edit and compile in Premiere anyway, so it is not a problem to do. My OCD just likes to have everything in the one shot. LOL. I will probably be kicking out BG, main, and overlay renders and just put them together in Pr.
Thanks for your input.
You also have to keep in mind that E3D uses your vRAM, not just your RAM. Maybe you're reaching a limitation there. I've had projects that would not render on GPUs with less than 4 GB. Try rendering with only some E3D layers enabled to see if you can find exactly what jams the system.
And if it's just a matter of freeing RAM, can't you render to an image sequence first and keep restarting from the failure frame until you're done?
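The resume-from-failure idea can even be semi-automated: scan the output folder for the highest frame number already on disk, then restart the render from the next frame. A rough sketch, assuming filenames like `shot_0123.png` (the naming pattern is an assumption):

```python
import os
import re

def last_rendered_frame(render_dir):
    """Return the highest frame number found in an image-sequence
    folder (filenames ending in _NNNN.ext), or None if none exist.
    A failed render can then be restarted at that frame + 1."""
    frame_re = re.compile(r"_(\d+)\.\w+$")
    frames = [int(m.group(1))
              for name in os.listdir(render_dir)
              if (m := frame_re.search(name))]
    return max(frames) if frames else None
```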
Thanks for your reply. I did not know that E3D uses vRAM. Is there a way (if you know of one) to verify what my vRAM is on my machine, or any way to optimize it so I don't encounter this "failure" issue? I have not yet tried rendering out just the E3D elements to see if they are the culprits, but I will be attempting that tonight.
Dave (the post above yours) suggested that I just kick out background, main-area, and overlay comps and compile them in Premiere. If I could not find a solution to this, I was going to do just that. I am just OCD about my shots and like to have everything together so I don't have to add layers in Pr if I don't have to.
Thanks for your thoughts.
Well, you said it yourself in your system specs: if you're actually using an AMD FirePro with 3 GB, that's your vRAM capacity. E3D should state that, along with its usage, in the top-right part of its Scene Setup window. And since you are using more than one instance of the plugin, each will have to load, compute, and free the same vRAM.
I'd guess it's only one instance that, when previewed in full, causes the failure. If you find which one, you may be able to adjust it and keep everything in the project.
And now that I think of it, you should check Video Copilot's site for info on known errors with AMD GPUs. (I'm an NVIDIA user, and I know for sure I can currently use only one driver if I want raytraced stuff.)
Perhaps you're just one driver update away from having to adjust nothing.
Hurray! I found the answer. Bear with me while I attempt to explain what is going on. It ended up being a scaling issue with the 3D models inside of E3D. I am including an image so it will be easier to understand. The smaller token (the one on the right) is a new model.
Top panel: In this image there are two tokens. The token with the arrow pointing to it is the original model as created by the 3D modeler in my department. When importing it, leaving the scale at 100%, and turning on "Normalize Size," it is massive.
Bottom panel: When you turn off "Normalize Size" (for the original token model), you can see that the scale becomes tiny.
The issue was the scale of the original model. I did not think anything of this when originally creating my scene, since 3D models are vector-based objects. I just shrunk the normalized size down to 4%. Conversely, the non-normalized size would have to be scaled up past 250% to be in scale with everything else (not shown) in my scene.
And this was the problem: Ae was using all of my vRAM attempting to deal with the scale of the tokens for some reason. I went back to my modeler and had him export the tokens again. First he upscaled the object 261%, which put it at 100% (non-normalized) in my E3D scene. He then used a self-made script to reset all aspects of the object to zero and exported a new .obj for me. This reset the token to a perfect 100% for me.
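The export fix comes down to simple arithmetic: whatever percentage the model needs inside E3D to look right is the factor the source geometry should be rescaled by before export, so it lands at a clean 100% in the scene. A sketch of that math (percentages taken from my case; the function name is just illustrative):

```python
def export_rescale_factor(scale_needed_in_comp_pct):
    """If a model must be scaled to N% inside E3D to fit the scene,
    rescaling the source geometry by N/100 before export lets it
    sit at 100% in the plugin."""
    return scale_needed_in_comp_pct / 100.0

# My original token needed roughly 261% in the comp, so the
# modeler upscaled the source by 2.61x before re-exporting.
```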
I ran a series of renders over the weekend, and none of them "failed." My RAM sat at around 50% usage (where it should be), and the RAM cleared after each shot rendered (as it should). So there you go: when working with E3D, keep an eye on the model sizes in your shots; they can kill your vRAM and cause "failed" renders.