Subtitles
- Jul 12, 2019
I’m realizing that something I need to research is the standard practice for subtitling English in Japanese. I’m thinking about this right now because of a project we just finished that involved three languages: the first interviewee spoke in French, the second interviewee spoke in Japanese, and we needed to produce both an English and a Japanese version.

Subtitling French in English was not such a big deal. For the most part the grammar, the number of words per sentence, pauses, phrasing, etc. all lined up in both languages. Excluding a few instances, most of what was spoken was represented in the subtitle directly beneath it in real time. Doing the same when subbing in Japanese was much more difficult, though. For obvious reasons it was harder to align the grammar, phrasing, and even some of the particular nuances between the audio and the translated subtitles.

Of course, from the perspective of somebody watching, this is totally meaningless. If you’re reading the subtitles, you’re probably not registering the audio in terms of meaning so much as tone. Likewise, if you understand the audio, you’re probably not really paying attention to the subtitles. I suppose the occasional bilingual French-Japanese viewer might be thrown off by the rhythmic dissonance, but I think my qualm with the slight mismatch comes from an editor’s perspective. I would prefer to get the subtitles to match the rhythm of the interviewee’s speech perfectly. Understanding that it’s not possible to perfectly mold two languages that rely on different sets of structural rules, I’m wondering what standards editors have come up with for putting subtitles on videos. It might seem a bit nitpicky, but as a company focused on international video production, I think it’s worth thinking about.
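One measurable piece of this is reading speed: professional subtitling guidelines typically cap how many characters per second (CPS) a viewer is asked to read, and the ceiling for dense scripts like Japanese is usually set much lower than for English. As a rough sketch of what checking a file against such a cap might look like, here’s a small Python script that flags cues in an .srt file exceeding a given CPS threshold. The file name and the threshold below are placeholder assumptions, not figures from any particular style guide, and real guidelines differ on details like whether spaces and punctuation count.

```python
import re
from datetime import timedelta

TIME = re.compile(r"(\d+):(\d+):(\d+),(\d+)")

def parse_time(stamp):
    """Turn an SRT timestamp like 00:01:02,500 into a timedelta."""
    h, m, s, ms = map(int, TIME.match(stamp).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)

def over_limit(path, max_cps):
    """Yield (cue number, cps, text) for cues that read faster than max_cps."""
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed cue blocks
        number = lines[0].strip()
        start, _, end = lines[1].partition(" --> ")
        seconds = (parse_time(end) - parse_time(start)).total_seconds()
        text = " ".join(lines[2:])
        # Simplification: counts every character, including spaces and
        # punctuation, and treats full-width and half-width the same.
        cps = len(text) / seconds if seconds > 0 else float("inf")
        if cps > max_cps:
            yield number, cps, text

if __name__ == "__main__":
    # "interview_ja.srt" and the 4.0 CPS cap are hypothetical values.
    for number, cps, text in over_limit("interview_ja.srt", max_cps=4.0):
        print(f"cue {number}: {cps:.1f} cps -- {text}")
```

Whether this is even the right metric for the rhythm problem is another question; CPS caps limit how fast a viewer has to read, not how closely a subtitle tracks the cadence of the speech.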