Re: EPISODE QUESTIONS re. Distributed Rendering

COW Forums : Telestream Episode

Craig Seeman
Re: EPISODE QUESTIONS re. Distributed Rendering
on Jan 14, 2011 at 5:18:13 pm

Round Robin does not take into consideration load or hardware characteristics of nodes when distributing jobs in the cluster.

Load balancing takes into account how much work each node is currently doing when distributing jobs.

Hardware balancing adds the capabilities of the machine (number and speed of cores, and RAM) on top of load.
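The three strategies above can be sketched roughly like this. This is an illustrative toy, not Episode's actual scheduler: the node names, fields, and the hardware-scoring formula are all assumptions made for the example.

```python
# Toy sketch of the three job-distribution strategies (illustrative only).
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Node:
    name: str
    active_jobs: int   # current load on the node
    cores: int         # hardware capability
    ram_gb: int

nodes = [Node("A", 1, 4, 8), Node("B", 0, 8, 16), Node("C", 2, 4, 8)]

# Round robin: ignore load and hardware entirely, just rotate through nodes.
rr = cycle(nodes)
round_robin_pick = next(rr)

# Load balanced: pick the node currently doing the least work.
load_pick = min(nodes, key=lambda n: n.active_jobs)

# Hardware balanced: weigh capability (cores, RAM) against current load.
# The scoring formula here is a made-up example of the idea.
hw_pick = max(nodes, key=lambda n: (n.cores + n.ram_gb / 4) / (n.active_jobs + 1))

print(round_robin_pick.name, load_pick.name, hw_pick.name)
```

With these numbers, round robin picks A simply because it is first in rotation, while both load and hardware balancing prefer the idle, more capable node B.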

Here's a deeper explanation of clustering from Telestream's Episode Product Manager Kevin Louden.

The first is load balancing of multiple encodes across a cluster of multiple machines. Let's use the cluster you have as an example.

Your cluster has 3 machines, each with Episode Pro installed. Episode Pro is capable of executing any 2 encodes at a time, so your cluster is capable of a total of 6 simultaneous encodes (2 per machine × 3). The encoding and processing engine in Episode is fully multi-threaded, so any encode process will utilize more than one core on a multi-core machine such as yours. That said, not all codecs are multi-core aware. If you submit a single H.264 encode, you will notice that the cores are not fully saturated, but you should see quite a bit of activity across the board. Now take that same encode and change only the video codec to VP6, and you will see a different activity level: you will still see activity across multiple cores (this is audio and filtering such as de-interlacing and resizing), plus a spike on a single core. The difference you are seeing is that the MainConcept H.264 codec is heavily multithreaded, while the VP6 codec operates on only a single physical core. So a single encode in Episode will never run on just a single core if there is more than one available.

Now this is where you get the efficiency of scale with load balancing and job distribution. You have 6 possible encodes at one time across the 3 machines. If you submit 6 source files at once, each to a single output, then depending on how you have the load balancing set, you will get jobs running on all three machines. As more jobs are submitted to the cluster, they will be distributed and queued for processing. If you added another Episode Pro machine, the cluster would be capable of encoding any 8 jobs at once (again, 2 jobs per machine × 4 this time).
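The slot arithmetic above is simple enough to sketch. `SLOTS_PER_NODE` and the queueing helper are illustrative assumptions, not part of any Episode API:

```python
# Each Episode Pro node runs 2 simultaneous encodes, so cluster
# capacity scales linearly with node count.
SLOTS_PER_NODE = 2

def cluster_capacity(num_nodes: int) -> int:
    return SLOTS_PER_NODE * num_nodes

# The examples from the post:
print(cluster_capacity(3))  # 6 simultaneous encodes on 3 machines
print(cluster_capacity(4))  # 8 after adding a fourth Episode Pro machine

# Jobs beyond capacity are simply queued until a slot frees up.
def queued(jobs_submitted: int, num_nodes: int) -> int:
    return max(0, jobs_submitted - cluster_capacity(num_nodes))

print(queued(10, 3))  # 4 jobs wait in the queue
```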

The second method of accelerating your work is for a single encode. For example, you may wish to use all 6 of the encode slots available on the cluster for the encode of a single file. Episode has a function called Split and Stitch to do this. Split and Stitch is only available when an Episode Engine is included in the cluster. If one of the machines in your cluster were an Engine, you could execute a Split and Stitch across all of the machines. Split and Stitch distributes multiple sub-encodes to available encode slots on machines in the cluster and then stitches the results together into a complete output file.

You can try this on your Episode Pro cluster; it will just act as a demo encode and watermark the output. Simply enable Split and Stitch in the encode inspector and then set how many splits you want to do. If you want to use all the licensed slots available in your cluster (6), set the max number of splits to 5 (one slot is reserved for audio).
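The Split and Stitch idea can be sketched as follows: divide the source timeline into N video segments (with one slot reserved for the audio pass), encode each segment in its own slot, then stitch the results back in order. The function names and segment plan below are illustrative assumptions, not Episode's internals:

```python
# Sketch of Split and Stitch segment planning (illustrative only).
def plan_split_and_stitch(duration_s: float, licensed_slots: int):
    """Split a source of the given duration into (start, end) segments,
    reserving one of the licensed slots for the audio pass."""
    video_splits = licensed_slots - 1          # one slot handles audio
    seg = duration_s / video_splits
    return [(i * seg, (i + 1) * seg) for i in range(video_splits)]

# A 60-second source on the 6-slot cluster: 5 video splits + 1 audio pass.
segments = plan_split_and_stitch(60.0, 6)
print(len(segments))               # 5
print(segments[0], segments[-1])   # (0.0, 12.0) (48.0, 60.0)

def stitch(encoded_segments):
    # The real product concatenates the encoded pieces into one output file;
    # here we just join placeholder segment labels in order.
    return "+".join(encoded_segments)

print(stitch([f"seg{i}" for i in range(len(segments))]))
```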
