Serious concerns regarding reviewing platform
So I have started to use the review platform for records from my ancestral states in Brazil, and in theory I liked the idea of computer-generated indexing helping to catch up on the many records that (I suppose) the indexing process isn't completing fast enough for demand. In practice, however, it's been awful. I am spending hours doing advanced editing on a single image (which would take me 10-30 minutes on the indexing platform) because there are SO MANY errors: not only in the computer's guesses at names, but in words and names attached from all over the document image. Many entries also don't indicate the record type or location. There is no way most of these images' indexes will be useful to researchers, even experienced ones. The platform itself is really clunky to use, and its improvements seem to have come only from the 1950 Census effort. I can appreciate that that's a much larger demand, but the equity implications of this process concern me.
So, can the indexing platform (which I actually quite like) be integrated with the reviewing one to make these edits easier?
If a document is computer-reviewed, does that mean it won't go through the more careful and intensive indexing process? And if so, why don't those documents deserve the same treatment?
If I'm not supposed to index/review documents from the specific collections I'm interested in, doesn't the reviewing platform contradict that to some degree, since I can choose a name specific to an area that hits the towns I'm looking for?
When the computer gets SO MUCH wrong, whether from too little training data on region-specific handwriting styles or from inadequate parameters set by those who built the system, is this approach actually more equitable and efficient?