The Globalese team, led by CEO Gábor Bessenyei, is looking forward to meeting you at GALA's annual Language of Business conference in sunny Munich.
Neural Machine Translation (NMT) was a remarkable breakthrough in many respects. It improved the overall quality of machine translation compared to pre-neural times and, for the first time, delivered genuinely usable, sound-quality output for the language industry. It also opened up opportunities for languages such as Japanese, Chinese, and Russian, which had performed poorly with Statistical MT technology.
The downside of the Neural Machine Translation revolution: terminology

As with every groundbreaking invention, NMT technology has its limitations. One of the major issues is handling terminology, and this challenge stems from the very thing that makes NMT so exciting. With statistical MT, users could provide a terminology list that the system would reliably apply during translation; with NMT, there is no direct way to supply a master terminology for the translation process. Technically, you can of course include a glossary in an engine's training corpora, but it will not behave the way you would expect: the glossary's translations will not be prioritized over the content in the rest of the training data. With current NMT technology, there is no way to directly influence terminology choices during the machine translation process.
Are you a content owner or an LSP? Give Globalese a go now and grow your business with the power of Neural MT! Click here and start your free trial now!

That doesn't mean developers haven't made attempts to solve this issue. One solution we have seen from many MT providers is terminology replacement based on a glossary after the machine translation phase. While this certainly sounds promising, the results are unfortunately not always encouraging: the replacement step runs a considerable risk of destroying grammatical information. Just imagine the problems a changed grammatical gender can cause in German. In the better cases, you have to spend many hours of editing to fish out the problematic bits; in the worse ones, you end up with output of limited usability that leaves you, your clients, and your translators disappointed.
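To see why post-translation replacement is risky, here is a minimal sketch of the naive approach described above. All names and the glossary are illustrative, not any vendor's actual implementation; the example swaps a masculine German noun for a feminine glossary term and leaves the article behind, breaking agreement.

```python
import re

def replace_terms(translation: str, glossary: dict) -> str:
    """Blindly swap MT-produced terms for approved glossary terms,
    ignoring the grammatical context around them."""
    for mt_term, approved_term in glossary.items():
        translation = re.sub(rf"\b{re.escape(mt_term)}\b",
                             approved_term, translation)
    return translation

# Hypothetical glossary: the client mandates "Rechenanlage" (feminine)
# instead of "Computer" (masculine).
glossary = {"Computer": "Rechenanlage"}
mt_output = "Der Computer wird neu gestartet."
print(replace_terms(mt_output, glossary))
# -> "Der Rechenanlage wird neu gestartet."
# The masculine article "Der" no longer agrees with the feminine noun.
```

Fixing the dangling article would require morphological analysis of the whole sentence, which is exactly the hard part glossary replacement skips.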
Introducing automated in-domain adaptation (AIDA)

Globalese answers this challenge with its proprietary technology, automated in-domain adaptation, which delivers an as-yet unparalleled improvement. So what is it all about? With AIDA, a Globalese user can mark content in an engine's training data as the most important in-domain content. For example, if a user has a Translation Memory (TM) of medical device documentation, it can be marked as the master TM. Globalese analyzes the content of the master TM(s) and extends the engine only with similar and related training data from the auxiliary TMs. Additionally, the engine is tuned on the master TM. The result is a highly customized engine focused on the content of the master TM.
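Globalese has not published AIDA's internals, so the following is only a generic sketch of how in-domain data selection of this kind can work: auxiliary TM segments are scored by their vocabulary overlap with the master TM, and only sufficiently similar segments are kept for training. The threshold and tokenization here are assumptions for illustration.

```python
import re
from collections import Counter

def tokenize(segment: str) -> list:
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"\w+", segment.lower())

def build_vocab(segments: list) -> Counter:
    vocab = Counter()
    for seg in segments:
        vocab.update(tokenize(seg))
    return vocab

def select_in_domain(master: list, auxiliary: list, threshold: float = 0.5) -> list:
    """Keep auxiliary segments whose share of words already seen
    in the master TM meets the threshold."""
    master_vocab = build_vocab(master)
    selected = []
    for seg in auxiliary:
        words = tokenize(seg)
        if not words:
            continue
        overlap = sum(1 for w in words if w in master_vocab) / len(words)
        if overlap >= threshold:
            selected.append(seg)
    return selected

# Illustrative data: a medical-device master TM and mixed auxiliary content.
master_tm = ["Insert the catheter into the access port.",
             "Sterilize the device before each use."]
aux_tm = ["Flush the access port before inserting the catheter.",
          "Quarterly revenue exceeded expectations."]
print(select_in_domain(master_tm, aux_tm))
# -> ['Flush the access port before inserting the catheter.']
```

Real systems would use far richer similarity measures than word overlap, but the principle is the same: the master TM defines the domain, and only related auxiliary material makes it into the engine.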
Maxing out terminological accuracy while keeping quality

The result of this process is an engine where the wording and style of the master TM take priority over the rest of the training data, even where terms conflict. This way, you can reach maximum terminology accuracy without losing grammatical information or degrading overall language quality. Naturally, the cleaner and more up-to-date your master TM is in the relevant topic or domain, the better the overall quality will be. This innovative Globalese solution to the terminology barrier of Neural MT paves the way to better-optimized workflows, meaning content owners and Language Service Providers can save considerable time and resources on post-editing.
Join us for a coffee in Munich!