Analyzing Results after Two Years of Implementation
After researching and evaluating the existing quality assurance processes and best practices available in the market, Ccaps implemented its quality control process in April 2013 with three clear purposes: to systematically assess the quality of the work provided by vendors, to support project managers during the quality control stage of ongoing projects, and to contribute to the technical and professional development of vendors through structured and consistent feedback.
The purpose of this article is to analyze the results from the 2013-2015 period. By sharing these results, we expect to contribute both to the professional development of our collaborators and to the overall knowledge about the localization market.
In 2012, Ccaps decided it was time to invest in a more structured quality assurance process that could provide us with solid results — both for internal and external use — through feedback on the quality of the translations provided by our collaborators.
This work resulted in the Language Quality Assurance process known as Ccaps LQA. The Ccaps LQA process was fully implemented in April 2013 and today is the basis for the technical evaluation to determine the quality of the translation we offer to our customers.
In this article, we will focus on a subset of results accumulated between 2013 and 2015, as well as on the analysis of those results. We present a breakdown of the mistakes made by our vendors and suggest measures for subsequent improvement.
Our quality control process classifies errors into the categories and subcategories listed below:
Ccaps LQA Categories and Subcategories
Our internal team records and categorizes the errors found in translation samples, and the number of mistakes determines a score. The approval criterion (Pass) is a score greater than or equal to 80; scores below this threshold are considered insufficient and classified as Fail.
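The deduction-based rule above can be sketched in a few lines of Python. Note that the article does not publish the actual scoring formula, so the severity levels, penalty weights and per-1,000-word normalization below are illustrative assumptions; only the 80-point Pass threshold comes from the text.

```python
# Hypothetical sketch of a Pass/Fail scoring rule: errors deduct points
# from 100, and a score >= 80 passes. Penalty weights and the per-1,000-word
# normalization are assumptions, not the actual Ccaps LQA formula.

PASS_THRESHOLD = 80

# Assumed penalty per error, by severity (illustrative only).
PENALTIES = {"minor": 1, "major": 5, "critical": 10}

def lqa_score(errors: dict, sample_words: int = 1000) -> float:
    """Return a 0-100 score for a sample, normalized to 1,000 words."""
    deduction = sum(PENALTIES[sev] * count for sev, count in errors.items())
    normalized = deduction * 1000 / sample_words
    return max(0.0, 100.0 - normalized)

def verdict(score: float) -> str:
    return "Pass" if score >= PASS_THRESHOLD else "Fail"

sample = {"minor": 6, "major": 2, "critical": 0}  # 6*1 + 2*5 = 16 points
score = lqa_score(sample, sample_words=1000)      # 100 - 16 = 84.0
print(score, verdict(score))                      # 84.0 Pass
```

Normalizing by sample size keeps a 500-word sample and a 1,500-word sample comparable under the same threshold.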
The average sample size is 1,000 words, and projects with fewer than 1,500 words may undergo full review; i.e., no specific volume is predefined for the LQA process. To ensure consistency between evaluations, reviews are performed by only two Language Specialists. In addition, one of the department's goals is to have at least four evaluations per year for each active vendor in our database.
This article presents the results of the LQA process for only one language pair, where English is the source language and Portuguese is the target language. Sampling of the results comprises the period between April 2013 and July 2015. The chart below shows the monthly percentage rates of Passes and Fails in this period:
Percentage of Fails and Passes
The months of March 2014 and May 2015 showed the highest percentages of Fails (66.67% and 45.45%, respectively). The results reveal a certain instability, with high variation and an increasing pattern of Fails in the first two quarters of 2015.
This trend led us to investigate more closely the factors behind the vendors' poorer performance. We therefore decided to review the LQA process and implement small changes, making it more efficient and informative while minimizing the month-to-month variation.
The goal was to ensure that vendor deliveries reached acceptable percentage levels, ultimately providing our customers with steadier and superior quality.
The distribution of errors by category over the reporting period can be summarized in the chart below:
Error Distribution by Category
We found that the highest error rates among our vendors fall within the Accuracy and Language categories.
Accuracy errors range from incorrect understanding of the source text to formatting errors, text left untranslated, and omissions from or additions to the original text. The Language category comprises grammar, spelling and punctuation errors.
A deeper analysis of each error category showed that errors in the Accuracy category were distributed as follows:
Errors in Accuracy Category
The vast majority (84%) of errors in this category stem from misunderstanding of the source text (Mistranslation), as well as from unnecessary omissions and additions.
When evaluating errors in the Language category, we have the following scenario:
Errors in Language Category
In this case, errors are predominantly (72%) related to unfamiliarity with the rules of Portuguese as the target language. A more detailed analysis reveals that grammar errors are mainly related to government and agreement, while the incorrect use of commas is the predominant punctuation error.
The analysis of these results led us to discuss the incidence of errors in these two categories in two different ways:
Accuracy: Errors related to the task performance itself — translation done quickly or concurrently with other jobs — and vendor failure to review and perform a final check before delivery.
Language: Errors related to the vendor’s knowledge of language rules and/or failures during translation. These can be minimized with the efficient use of automated quality assurance tools, such as Xbench, which check spelling, consistency and terminology compliance.
Our efforts to reduce the occurrences of these error types included:
- More frequent, accurate and detailed feedback
- Careful monitoring of vendors with high incidence of errors in these categories
- Greater focus on the implementation of the feedback we submitted
- Sharing information related to grammar and spell checkers available in the various CAT tools we use
- Development of in-house quick check processes to monitor the most common error types during the project lifecycle
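A quick-check process of the kind mentioned in the last item above can be sketched as a small segment-level pass. The specific rules shown (untranslated segment, doubled spaces, space before a comma, mismatched terminal punctuation) are illustrative assumptions, not Ccaps's actual checklist:

```python
# Minimal sketch of an in-house "quick check" pass over translated
# segments. The rules below are illustrative assumptions chosen to
# resemble common Accuracy and Language error types.

import re

def quick_check(source: str, target: str) -> list:
    issues = []
    # Target identical to source often means the segment was left untranslated.
    if source.strip() and source.strip() == target.strip():
        issues.append("possibly untranslated segment")
    if "  " in target:
        issues.append("doubled space")
    if re.search(r"\s,", target):
        issues.append("space before comma")
    # Compare the terminal punctuation of source and target.
    last = lambda s: s.rstrip()[-1:] if s.rstrip() else ""
    if last(source) in ".!?" and last(target) != last(source):
        issues.append("terminal punctuation differs from source")
    return issues

print(quick_check("Save the file.", "Salve o arquivo"))
# -> ['terminal punctuation differs from source']
```

Running checks like these on every segment during the project lifecycle catches the mechanical errors early, leaving the Language Specialists free to focus on mistranslation and style.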
From the early development phases of our quality assurance process, our goal was to ensure this process would not only meet internal requirements, but also contribute to the professional growth of our vendors. The analysis presented in this article aims to take this goal one step further, consolidating the data collected after two years of implementation. It also contains our interpretation of the key findings. By doing so we hope that these results will guide our vendors to further technical development and improve the Ccaps internal processes, ultimately helping us achieve our goals for ongoing quality improvement.
Similarly, given the shortage of research, surveys and analyses on the localization market in Brazil, we expect to contribute to the enrichment of this subject, enabling industry professionals, educators and researchers to benefit from our samples. We are aware that the data compiled and analyzed herein does not represent the totality of the localization market in Brazil. Nonetheless, considering the Ccaps position among the top ten language service providers in Latin America, we believe we have made a positive contribution to this incipient body of knowledge.