I am currently evaluating CatDV Enterprise Server for handling backup of P2 cards. We produce approx. 1 TB of P2 material every day, and a P2 card backup normally lives for 30 days. The 1 TB of data is copied to 3 different folders, with a subfolder named with the time and date for each card. With the worker node I see that I can generate a new catalog automatically every day, and I believe the size of those catalogs will be manageable. Some metadata is added from the camera (it took a while to convince the photographers). So far so good, I hope. My problem right now is that we made the switch to P2 a while ago, so I already have 3 top folders containing quite a lot of date-named folders, each containing one P2 card. When I create a watch for these folders it starts scanning, but slows down and gets unmanageably big. The question is: is there a more sophisticated way to auto-generate catalogs, e.g. so that a new catalog is created after a certain size, or only contains files from a certain time range?
That's something that would have to be done in an outside script.
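To make the idea of an outside script concrete, here is a minimal sketch of the kind of partitioning it would do: group card folders into catalog-sized batches, either by a size cap or by ISO week. This is not CatDV API code, just the batching logic; the folder-name format `YYYY-MM-DD_HHMM` is an assumption, so adapt the parsing to your actual time-and-date naming.

```python
# Sketch of an outside script's batching logic (not CatDV API code).
# Assumes card folders are named with a "YYYY-MM-DD_HHMM" prefix.
from datetime import datetime

def batch_by_size(cards, max_bytes):
    """Group (name, size_bytes) pairs into batches no larger than max_bytes."""
    batches, current, total = [], [], 0
    for name, size in cards:
        if current and total + size > max_bytes:
            batches.append(current)       # start a new catalog batch
            current, total = [], 0
        current.append(name)
        total += size
    if current:
        batches.append(current)
    return batches

def batch_by_week(cards):
    """Group card folder names by ISO (year, week), one catalog per week."""
    weeks = {}
    for name in cards:
        day = datetime.strptime(name[:10], "%Y-%m-%d")
        weeks.setdefault(day.isocalendar()[:2], []).append(name)
    return weeks
```

Each batch (or week group) would then become one catalog, so no single catalog ever grows past the cap.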
Probably the easiest way to deal with it, if you didn't want to or couldn't script it, would be to have your folders spill over and import into catalogs based on the folder names or something like that.
Also, doing an auto-import of a large folder that's already full might not be best as it would take a while to run the initial import whereas doing it incrementally wouldn't tax the system. If this is the case, try doing the first imports manually and then let the system import as the new files show up.
Getting the catalogs generated in a timely manner is the key in my view, even into generic ingest catalogs, for example. Worker 5 can run multiple processes, which should help speed things up with the volume you are talking about. Once the clips are in CatDV in those ingest catalogs, you can use the Worker and server queries to organize them into catalogs automatically. Sometimes breaking these processes up into steps makes them easier to manage and spreads the workload. Your integrator should be able to get you eval keys to test this in your real-world environment; if not, let us know. Are you shooting time-of-day TC?
I do have evaluation licenses and am testing this out. I hope Worker node 5 can speed things up, because this is really taking a long time to import.
Instead of having all cards produced up to now in one top folder with one time-named folder per card, I have now organized the cards into week folders like //ENG/2012/1, where 1 is the week number, and cards produced within that week are put in this folder. My plan is to create a catalog for each week, and hopefully that will keep the catalogs manageable.
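A small helper for that weekly layout might look like the sketch below: given a card's date-named folder, compute the //ENG/&lt;year&gt;/&lt;week&gt; destination. The `week_folder` name and the `YYYY-MM-DD_HHMM` folder-name format are my assumptions, not anything CatDV-specific.

```python
# Hypothetical helper for the weekly layout described above.
# Assumes card folders are named with a "YYYY-MM-DD_HHMM" prefix.
from datetime import datetime
from pathlib import PurePosixPath

def week_folder(card_name, root="//ENG"):
    """Return the //ENG/<year>/<week>/<card> path for a date-named card folder."""
    day = datetime.strptime(card_name[:10], "%Y-%m-%d")
    year, week, _ = day.isocalendar()
    return str(PurePosixPath(root, str(year), str(week), card_name))
```

Note that `isocalendar()` uses ISO week numbering, where a week starts on Monday; if you number weeks differently, the two schemes can disagree around New Year.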
I then create a watch for each week (it looks like I can just edit the worker.xml files to create watchers), but will each watch create a new catalog? The catalog generation is a little confusing to me, but hopefully I will get it in the end.
The worker can create catalogs automatically based on the time and date when the script runs, which is useful once your workflow is running normally and new files trickle in slowly.
You can also create catalogs in other ways however. One approach is to use CatDV Pro rather than the worker node, and import one folder at a time manually. That way you can organise clips into catalogs however you want.
The other approach, if you use the worker, might be to specify the catalog name based on part of the filename, for example the top level folder after the watch folder root or something like that.
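The path-parsing behind that second approach is simple; as a sketch (this is just the string logic, not the worker's own variable syntax), taking the first folder component after the watch-folder root as the catalog name looks like:

```python
# Illustration of deriving a catalog name from a file path:
# use the first folder component below the watch-folder root.
from pathlib import PurePosixPath

def catalog_name(file_path, watch_root):
    """Name the catalog after the top-level folder under the watch root."""
    rel = PurePosixPath(file_path).relative_to(watch_root)
    return rel.parts[0]
```

With the weekly layout above, every file under //ENG/2012/1/... would then land in a catalog named for that week folder.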
I do have to agree with Rolf, manual cataloging is the way I'd do this initial ingest and then you could set up a watch for just the new files.
Once you have the metadata in, the files can move, and do all sorts of things through automation which is where the benefit really kicks in.
But the initial ingest is a lot of time for maybe not a ton of benefit.
Also, his point about moving files after ingest and getting them out of the watch folder is critical. We never recommend anything living in a folder that is being watched, and note that almost every other MAM behaves the same way: watch a folder, find data, process it, move it to another location where it will live, then update the database with the new location. That way you aren't scanning a huge folder structure looking for changes.
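The watch/process/move/record pattern described above can be sketched in a few lines. Everything here is a placeholder for your real ingest step and database update; the point is only that completed cards leave the watched tree, so the scan stays small.

```python
# Minimal sketch of the watch -> process -> move -> record pattern.
# drain_watch_folder and the record callback are hypothetical names.
import shutil
from pathlib import Path

def drain_watch_folder(watch_dir, archive_dir, record):
    """Move every card folder out of the watch folder, then record
    its new location (e.g. update the database)."""
    watch, archive = Path(watch_dir), Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    for card in sorted(p for p in watch.iterdir() if p.is_dir()):
        dest = archive / card.name
        shutil.move(str(card), str(dest))   # get it out of the watched tree
        record(card.name, str(dest))        # update the db with the move
```

In a real deployment you would also want to check that a card has finished copying before moving it (e.g. by comparing sizes over time), which is omitted here.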