I have a scenario that requires high-speed realtime encoding at 658x492 @ 120 fps. We have been using the FFDShow_tryouts DirectShow filter, but it falls just short of keeping up on many computers, so we were hoping to parallelize the encoding. Currently we see one core saturated; if we could distribute the work across two or more cores, I think we could achieve the necessary performance increase.
I don't have experience with the FFmpeg code, and the comments in the source seem fairly minimal, so I was hoping there were resources available describing the various sections of the code. Does such an introductory/overview document exist?
As a disclaimer, I am working independently with a small company. I would at least like to understand the necessary components, and possibly recruit someone to help out (for hire, of course). For the most part, I am looking for information on the current codebase.
The MJPEG encoder is currently single-threaded, but it should be fairly trivial to make it multi-threaded, since the codec is intra-frame only: no frame depends on any other, so frames can be encoded independently. (Round-robin distribution of frames to encoder threads, plus a writer thread that receives the encoded frames from the encoder threads and writes them out in the correct order.)
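To make the pattern concrete, here is a minimal sketch (in Python, not FFmpeg's C, and not actual FFmpeg code) of distributing independent frames across encoder threads while keeping the output in frame order. `encode_frame` is a hypothetical stand-in for the real MJPEG encode call; `ThreadPoolExecutor.map` hands frames to worker threads but yields results in submission order, which plays the role of the ordered writer:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    # Placeholder for the real intra-frame MJPEG encode of one frame.
    return b"JPEG:" + bytes(frame)

def encode_stream(frames, workers=4):
    """Encode frames on a thread pool, emitting results in frame order."""
    out = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() distributes work across the pool but yields results in
        # the order frames were submitted, so no reorder buffer is
        # needed here; the loop body acts as the "writer".
        for encoded in pool.map(encode_frame, frames):
            out.append(encoded)
    return out
```

A real implementation in the FFmpeg codebase would instead use explicit worker threads and a small reorder queue keyed by frame number, but the ordering guarantee is the same idea.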
Michael Niedermayer is listed as the maintainer of the mjpeg codec, so you might start by asking him if he is interested in the contract. If he isn't, then you could ask for a recommendation of another developer to work on it.
Also, be aware that about a year ago there was a big disagreement between developers, and the project was forked. There are now two separate development projects: ffmpeg and libav. They passive-aggressively co-exist and pull in developments from each other's codebases, but over time their code will no doubt diverge to the point where that is no longer easy to do. I'm not sure which codebase ffdshow pulls from. There is no MAINTAINERS file in the libav tree, so I'm not sure who the MJPEG maintainer is in that project.