Lingvanex Translator

Translation Quality Report. March 2024

The goal of this report is to showcase the translation quality of Lingvanex language models according to the two most popular machine translation evaluation metrics, BLEU and COMET. The evaluation uses Flores, an open-source, publicly available test set released by Facebook Research that offers the broadest language-pair coverage.

Quality metrics description

BLEU

BLEU is an automatic metric based on n-grams. It measures the precision of n-grams of the machine translation output compared to the reference, weighted by a brevity penalty to punish overly short translations. We use a particular implementation of BLEU, called sacreBLEU. It outputs corpus scores, not segment scores.
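
The arithmetic behind BLEU can be sketched in a few lines. This is a toy single-sentence illustration of modified n-gram precision plus the brevity penalty, not the production metric; real evaluations should use the sacreBLEU package, which standardizes tokenization and reports corpus-level scores.

```python
# Toy illustration of the BLEU idea: clipped n-gram precision combined with
# a brevity penalty. Real evaluations use sacreBLEU; this sketch only shows
# the arithmetic on a single sentence pair.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def toy_bleu(hypothesis, reference, max_n=2):
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # geometric mean of the n-gram precisions
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * geo_mean

print(toy_bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```

A perfect match scores 100, while a hypothesis that is correct but too short is pulled down by the brevity penalty.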

References

  • Papineni, Kishore, S. Roukos, T. Ward and Wei-Jing Zhu. “Bleu: a Method for Automatic Evaluation of Machine Translation.” ACL (2002).
  • Post, Matt. “A Call for Clarity in Reporting BLEU Scores.” WMT (2018).

COMET

COMET (Crosslingual Optimized Metric for Evaluation of Translation) is a metric for automatic evaluation of machine translation that calculates the similarity between a machine translation output and a reference translation using token or sentence embeddings. Unlike other metrics, COMET is trained on predicting different types of human judgments in the form of post-editing effort, direct assessment, or translation error analysis.
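
The embedding-similarity intuition behind COMET can be sketched with toy vectors. COMET itself is a trained neural model (available via Unbabel's `comet` Python package); the three-dimensional "embeddings" below are invented purely for illustration.

```python
# Toy illustration of embedding similarity, the idea underlying COMET.
# Real COMET scores come from a trained neural model; these 3-dimensional
# vectors are hypothetical, standing in for encoder outputs.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical sentence embeddings (in practice produced by an encoder)
mt_output = [0.9, 0.1, 0.3]   # machine translation
reference = [0.8, 0.2, 0.3]   # human reference
unrelated = [0.1, 0.9, 0.1]   # unrelated sentence

# A good translation embeds closer to the reference than an unrelated sentence
print(cosine_similarity(mt_output, reference) > cosine_similarity(mt_output, unrelated))  # True
```

Unlike this sketch, COMET does not simply compare raw embeddings: it is trained end-to-end to predict human quality judgments from the source, hypothesis, and reference.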

References

  • Rei, Ricardo, Craig Stewart, Ana C. Farinha and Alon Lavie. “COMET: A Neural Framework for MT Evaluation.” EMNLP (2020).

On-premise Private Software Updates

New version - 1.22.0.

Changes in functionality:

  • Added support for audio tracks in video files in the Speech Recognizer.

New version - 1.21.1.

Changes in functionality:

  • Fixed speech recognition in *.wma and *.flv files.

Improved Language Models

[Chart: BLEU metrics for the improved language models, March 2024]

[Chart: COMET metrics for the improved language models, March 2024]

Language pairs

Note: A smaller model size on the hard drive means lower GPU memory consumption, which reduces deployment costs. Smaller models also translate faster. Approximate GPU memory usage is calculated as the hard drive model size × 1.2.
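
The rule of thumb above can be expressed directly in code. The model sizes used here are hypothetical examples, not actual Lingvanex model sizes.

```python
# Sketch of the deployment-sizing rule of thumb: approximate GPU memory
# usage is the on-disk model size multiplied by 1.2.
def estimate_gpu_memory_gb(model_size_on_disk_gb: float) -> float:
    return model_size_on_disk_gb * 1.2

# Hypothetical model sizes for illustration
for size_gb in (0.5, 2.0, 10.0):
    print(f"{size_gb:.1f} GB on disk -> ~{estimate_gpu_memory_gb(size_gb):.1f} GB GPU memory")
```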

Conclusion

The Lingvanex machine translation models have been evaluated using two popular automatic evaluation metrics: BLEU and COMET. BLEU measures the precision of n-grams in the machine translation output compared to reference translations, while COMET calculates the similarity between the machine translation and the reference using embedding-based techniques. The report showcases the translation quality of the Lingvanex models across a wide range of language pairs from the publicly available Flores dataset. The models demonstrate strong performance according to these industry-standard evaluation metrics. Key advantages of the Lingvanex models include:

  • Smaller model size leading to lower GPU memory consumption and faster translation times
  • Continuous improvements in the language models over successive software updates

Overall, the results indicate that the Lingvanex machine translation models provide high-quality translations that meet or exceed the performance of other leading MT systems. The use of robust evaluation metrics provides confidence in the reliability and accuracy of the Lingvanex translation capabilities.


Frequently Asked Questions (FAQ)

How to evaluate the quality of translation?

The quality of translation can be assessed through manual and automatic approaches. Manual evaluation involves human translators checking texts for accuracy and errors. The automatic approach relies on specific metrics such as BLEU, COMET, METEOR, and others.

Why do we need translation quality assessment?

Translation quality assessment ensures that translated texts meet the required standards. It allows linguists to evaluate the accuracy and fluency of a translated text and how well it serves its intended purpose. For machine translation systems, quality assessment is important for improving engines, comparing different MT providers, and identifying strengths and weaknesses for future development.

How can you improve translation quality?

There are many ways to improve the quality of your translations:
1. Set clear standards or guidelines
2. Hold quality checks at multiple stages of the translation process
3. Ensure human review of translated texts
4. Hire professional translators with the appropriate skills
5. Continuously train and improve MT models
6. Use advanced NLP techniques to ensure accuracy
7. Combine MT with human post-editing for the best results
8. Collect and analyze feedback from your clients


