
Tuesday, August 23, 2005

Top Down or Bottom Up...or both?

I have an idea for a translation process that can produce a better translation. It assumes the truth of two linguistic fundamentals:
  1. The meaning of a text is greater than the sum of its parts (bottom-up)
  2. The context of each part selects the meaning of each part (top-down)
In other words, both understanding and constructing a text involve a top-down cognitive mechanism as well as a bottom-up one, and the two processes happen simultaneously.

Here is the translation process in a nutshell.
  1. Draft or use an "as literal as possible, as free as necessary" translation. Since it is quite difficult to think in ancient Koine Greek (we have no one to converse with), the next step needs an English translation that gets you most of the way there. I don't think one needs to be created; however, it would be beneficial to have an analytic translation alongside the one being produced by this process. See point 6 below.
  2. Write up a structured outline of the book. The outline is linguistically based and fairly high level: its elements refer to the book, its sections and subsections, and its paragraphs. It is not verse oriented (poetry excepted). The outline is developed from existing English translations, since people synthesize information best in their own language. Even so, this won't be easy, since the majority of English translations (and commentaries) don't flow well to begin with. (For example, NT Wright is the only person I know who ties Romans 9-11 in with the rest of the Pauline letter! In fact, some commentators think 9-11 was redacted in at a later date.) Note, however, that what we are trying to accomplish is a synthetic translation that allows people simply to read the text. That requires the text to just flow. The primary purpose of this step and the next is to get a good feel for how the original author develops the point (or points) of the book. There is a lot of work to be done in this step and the next, and it will not be easy.
  3. Associate a precis with each line of the outline. For example, the precis for a paragraph would look very much like a topic sentence. Larger linguistic constituents will require a precis longer than a sentence. This part of the process helps force coherence.
  4. The outline is peer-reviewed for coherence. In other words, it should not read like a list of unsorted daily devotionals, nor should it look like a list of headings. It should, pretty much, read like a synopsis (or abstract) of the entire book, and it should be clear and natural, with logical/rhetorical transitions between each element. For pragmatic as well as authentic reasons, multiple outlines would be allowed at this point. Quality control feedback loops will feed back to this step, so the outlines will gravitate toward greater accuracy. Someone who is familiar with the underlying book should understand the flow as given in the outline. Scholars might (very likely would) hold strong reservations about agreeing with it without further research, but the flow would "make sense." The quality control for the outline--that is, does it authentically reproduce the author's original intent--would be hammered out in the steps outlined next. It is very important to understand that the metric of success for this step is this: Is the annotated outline coherent?
  5. Each paragraph is translated. This is fundamentally a bottom-up procedure. The precis would be adjusted and frequently reconsidered as the paragraph is translated. Note, however, that the flow of the overall document must be upheld--coherence must be maintained. That is, the coherence of the outline must be maintained even if the outline itself is modified. The precis become extremely valuable in this step since they define the context within which the cognitive processes disambiguate linguistic and translation choices. Each of the multiple outlines would be considered against the word-level choices in order to better assess the value of each outline. A complete rewrite of the precis would be allowed, and changes to the outlines would be allowed, but those changes require redoing step 4 above and maintaining the metric of coherence.
  6. If a "literal as possible, as free as necessary" translation was produced in step one, it would also be adjusted. That translation is meant to serve a more analytic audience than the synthetic translation being developed by the processed outlined here. However, the analytic translation would also serve as support for the more synthetic translation.
  7. Field test the language of the synthetic translation. Note: this is not a field test of the content, and certainly not a field test of the accuracy or authenticity of the content; it is a test of communicative accuracy. In other words, whether the test taker agrees with the content is irrelevant. Does the average person "get the point" of the text? That's the question. This step would result in appropriately revisiting any or all of the steps above.
Lastly, each of the above steps would produce various kinds of supporting documentation. These artifacts would be openly available so the translators remain ultimately accountable to those being served by the translation.
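
Purely as an illustration of the workflow just described (and not anything proposed in the post itself), here is a minimal Python sketch of the outline-plus-precis structure from steps 2 and 3 and the coherence feedback loop of steps 4 and 5. Every name in it (OutlineNode, is_coherent, translate, render) is hypothetical, and the "coherence check" is reduced to a trivial stand-in for what is really human peer review.

    # Hypothetical sketch of the outline-plus-precis data model and the
    # coherence feedback loop (steps 2-5 above). Analogy only; the real
    # process is editorial, not software.
    from dataclasses import dataclass, field
    from typing import Callable, List


    @dataclass
    class OutlineNode:
        """One element of the structured outline: book, section, or paragraph."""
        label: str                       # e.g. "Romans 9-11" (step 2)
        precis: str                      # topic-sentence-like summary (step 3)
        children: List["OutlineNode"] = field(default_factory=list)
        draft: str = ""                  # paragraph-level translation (step 5)


    def is_coherent(node: OutlineNode) -> bool:
        """Stand-in for the peer review of step 4: the concatenated precis should
        read like a synopsis, not a list of headings. Here we only check that
        every element actually has a precis."""
        return bool(node.precis) and all(is_coherent(c) for c in node.children)


    def translate(node: OutlineNode, render: Callable[[str], str]) -> None:
        """Step 5: translate bottom-up, revisiting each precis (top-down) as the
        paragraph drafts take shape. `render` is a placeholder for the human
        translator's work on a single paragraph."""
        for child in node.children:
            translate(child, render)
        if not node.children:            # paragraphs are the leaves
            node.draft = render(node.precis)
        # If the draft no longer fits its precis, the precis (and possibly the
        # outline) is revised and step 4's coherence review is repeated.
        if not is_coherent(node):
            raise ValueError(f"Revisit outline around {node.label!r} (back to step 4)")


    # Usage (labels and precis are invented for the example):
    romans = OutlineNode(
        "Romans",
        "Paul argues that the gospel reveals God's righteousness for Jew and Gentile alike",
        [OutlineNode("Romans 9-11", "Israel's unbelief does not mean God's word has failed")],
    )
    translate(romans, render=lambda precis: f"[draft paragraph guided by: {precis}]")

The recursion mirrors the top-down decomposition of the outline, while the leaf-first drafting mirrors the bottom-up translation of paragraphs; a failed coherence check sends the work back to step 4, as described above.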


4 Comments:

At Tue Aug 23, 02:14:00 PM, Blogger Trevor Jenkins said...

As an ex-computing scientist I can see the virtue in Mike's proposal. It follows the style of program development that I use, which is based on Don Knuth's Literate Programming, the approach he used for his TeX typesetting system. Although I can't recall him using the term to describe his literate programming regime, my term for it is middle-out: design from the top down proceeding simultaneously with implementation from the bottom up.

 
At Tue Aug 23, 09:32:00 PM, Blogger Tim Bulkeley said...

As a Hebrew Bible teacher my reaction to the scheme is mixed. On the one hand it makes really good sense, and should (I'd think) lead to a good readable and useable translation. BUT it seems to assume one thing that the last several generations of biblical scholars all claimed could not be assumed - that the text is coherent. Now, I know we have been moving away from an assumption of incoherence (what I think David Clines called "the hypothesis of the idiot redactor"), but is it not methodologically equally suspect to assume coherence before it is demonstrated? So should there somewhere be a step that examines the coherence of the text? One that is open to at least the possibility that there are significant breaks between parts of the whole...

 
At Wed Aug 24, 04:29:00 AM, Blogger Peter Kirk said...

I suspect that Mike mainly had in mind New Testament books. These are attested as single books from quite soon after their composition, and there is very little evidence that any of them have an overall structure different from that intended by their original authors - except perhaps for John 7:53-8:11 and the various endings of Mark. For these books Mike's assumption of coherence is surely justified at least in general terms.

Tim, as a Hebrew expert, is naturally looking more at Old Testament books. For some of these there is much more reason to believe that there is less large-scale coherence. The Psalms are a collection of separate writings, not randomly arranged, but even so coherence between them should not be assumed. Large parts of Proverbs consist of separate proverbs with little clear coherence. The same may be true of some collections of prophetic oracles. Nevertheless, it is reasonable to assume that each book, in the form in which we have it, is the product of an intelligent editor, not of Clines' idiot redactor or of someone shuffling pages as has sometimes been suggested. The underlying coherence may not be simple and superficially obvious, but surely there is something there.

 
At Wed Aug 24, 11:18:00 AM, Blogger Mike Sangrey said...

Tim,

You make a good point. I think Wayne and Peter have given my answers. Thanks guys. Though I'll add some epistemological assumptions I make: one, truth is coherent; and, two, the Bible is true. The combination of those two assumptions drives exegesis and, in fact, gives it purpose. Obviously, these assumptions have considerable ramifications, and there's a whole pile of modern and postmodern issues. I'm willing to brush all of that aside because of the following.

To directly answer your question of "is it not methodologically equally suspect to assume coherence?": the proof is in the pudding.

I believe that the process I outlined, if followed, will prove out the coherence. In other words, too many things will just sync up. The reason for this is that the coherence of communication (or maybe more precisely, truth) appears to be a very highly complex network. My sub-title on Exegetitor is "A network of highly cohesive details reveals the truth." And that about sums it up.

I might add something which at first blush appears quite scary: deception, by definition, is a combination of truth and error. So, there is a corollary to my hypothesis that essentially says that inaccurate Bible translation is deceptive. Now it is very important at this point to realize that there are two attributes of language that ricochet off of each other: one, language constituents are ambiguous; and, two, the context of those constituents disambiguates them. In other words, the human mind synthesizes information by chunking, and the chunks exhibit coherence. I personally think a major issue we have today is that we build too large an inference from too small a detail. And that is the reason I started Exegetitor. In short, accurate Bible translation is very important. I simply believe that accurate Bible translation requires much, much more than simply looking up words in a dictionary (if I may make a point by stating the extreme).

Just to be quite clear (since it will be easy for someone to quote me out of context and thereby prove what I'm saying here!): I'm not saying some or many Bible translations are deceptive (however, several are too ambiguous). I'm simply saying that the way we've handled the text has created significant exegetical and interpretive problems. As I said above, I believe the Bible is true. And, yes, I know there is a Pilatism in there, and I'm certainly not like Jesus enough to know the answer. But, that is an epistemological assumption I'm willing to make, since I think the Bible will prove itself to be true if we just let communication work the way communication works. What can I say? Here I stand.

 
