December 11, 2024
At the heart of Mozilla’s localization efforts lies Pontoon, our in-house translation management system. Powered by our vibrant volunteer community, Pontoon thrives on their commitment to submitting and reviewing translations across all our products.
As part of our ongoing efforts to further recognize the contributions of Pontoon’s volunteers, the localization team has been exploring new ways to celebrate their achievements. We know that the success of localization at Mozilla hinges on the dedication of our community, and it’s important to not only acknowledge this effort but to also create an environment that encourages even greater participation.
That’s why we’re excited to introduce achievement badges in Pontoon! Whether you’re new to Pontoon or a seasoned contributor, achievement badges not only recognize your contribution but also encourage participation and promote good habits amongst our community.
With achievement badges, we aim to make contributing to Pontoon more rewarding and fun while reinforcing Mozilla’s mission of building an open and accessible web for everyone, everywhere.
What are achievement badges?
Achievement badges are a symbol recognizing your hard work in keeping the internet accessible and open, no matter where users are located. These badges are displayed on your Pontoon profile page.
In collaboration with Mozillian designer Céline Villaneau, we’ve created three distinct badges to promote different behaviors within Pontoon:
- Translation Champion, awarded for submitting translations.
- Review Master, awarded for reviewing translations.
- Community Builder, awarded for promoting users to higher roles.
Receiving a badge
When the threshold required to receive a badge is crossed, you’ll receive a notification along with a pop-up tooltip (complete with confetti!). The tooltip will display details about the badge you’ve just earned.
To give you more of a challenge, each badge comes with multiple levels, encouraging continued contributions to Pontoon. You’ll receive similar notifications and celebratory tooltips whenever you unlock a new badge level.
Start collecting!
Badges are more than just icons — they’re a celebration of your dedication to keeping the web accessible to all. Ready to make your mark? All users will begin with a blank slate, so start contributing and begin your badge collection today!
December 11, 2024 04:38 PM
November 05, 2024
We’re excited to announce that Thunderbird Desktop will soon offer monthly releases through the Release channel as a supported alternative to the ESR channel. This means a new major version of Thunderbird will be available every month, providing the following benefits for our users:
- Frequent Feature Updates: New features will be available each month, rather than waiting for the annual Extended Support Release (ESR).
- Smoother Transitions: Moving from one monthly release to the next will be less disruptive than updating between ESR versions.
- Consistent Bug Fixes: Users will receive all available bug fixes, rather than relying on patch uplifts, as is the case with ESR.
Expanding Thunderbird’s Channels
Currently, Thunderbird offers three release channels: Daily, Beta, and ESR. With the addition of the Release channel, we’ll soon provide stable, monthly releases. Over time, this Release channel will become the default channel.
Current Status
The Thunderbird Release channel is currently available for testing purposes only. We have been publishing monthly releases for a few months now, and we will continue publishing new releases as we progress toward officially supporting the Thunderbird Release channel.
Translation Support
We are immensely grateful to our translators for their ongoing contributions in localizing Thunderbird. If you have any questions regarding translations for Thunderbird, please feel free to reach out to corey@thunderbird.net.
November 05, 2024 09:36 PM
October 24, 2024
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New community/locales added
We’re grateful for the Abkhaz community’s initiative in reaching out to localize our products. Thank you for your valuable involvement!
New content and projects
What’s new or coming up in Firefox desktop
Search Mode Switcher
A new feature in development, the Search Mode Switcher, has become available (behind a flag) with the release of the latest Nightly version 133. You may have already seen strings for it land in Pontoon: the feature lets you enter a search term into the address bar and run it through multiple search engines. After entering the search term and selecting a provider, the search term will persist (instead of being replaced by the site’s URL), and you can then select a different provider by clicking an icon on the left of the bar.
Firefox Search Mode Switcher
You can test this now in version 133 of Nightly: enter about:config in the address bar and press Enter, proceed past the warning, and search for the following flag: browser.urlbar.scotchBonnet.enableOverride. Toggling the flag to true will enable the feature.
New profile selector
Starting in version 134 of Nightly, a new feature to easily select, create, and change profiles within Firefox will begin rolling out to a small number of users worldwide. Strings are planned to be made available for localization soon.
Sidebar and Vertical Tabs
Finally, as mentioned in the previous L10n Report, features for a new sidebar with expanded functionality, along with the ability to change your tab layout from horizontal to vertical, are available to test in Nightly through the Firefox Labs feature in your settings. Just go to your Nightly settings, select the Firefox Labs section on the left, and enable each feature by clicking its checkbox. Since these are experimental, there may continue to be occasional string changes or additions. While you check out these features in your languages, if you have thoughts on the features themselves, we welcome you to share feedback through Mozilla Connect.
What’s new or coming up in web projects
AMO and AMO Frontend
To improve user experience, the AMO team plans to implement changes that will enable only locales meeting a specific completion threshold. Locales with very low completion percentages will be disabled in production but will remain available on Pontoon for teams to continue working on them. The exact details and timeline will be communicated once the plan is finalized.
Mozilla Accounts
Mozilla Accounts is currently redesigning the user experience of some of its login pages, so we will continue to see small updates here and there for the rest of the year. There is also a planned update to the Mozilla Accounts payments sub-platform. We expect a new file to be added to the project before the end of the year, but a large number of the strings will be the same as now. We will migrate those translations so they don’t need to be translated again, though there will be a number of new strings as well.
Mozilla.org
The Mozilla.org site is undergoing a series of redesigns, starting with updates to the footer and navigation bars. These changes will continue through the rest of the year and beyond. The next update will focus on the About page. Additionally, the team is systematically removing obsolete strings and replacing them with updated or new strings, ensuring you have enough time to catch up while minimizing effort on outdated content.
There are a few new Welcome pages made available to a select few locales. Each of these pages has a different deadline, so make sure to complete them before they are due.
What’s new or coming up in SUMO
The SUMO platform got a navigation redesign in July to improve navigation for users and contributors. The team also introduced new topics that are standardized across products, which lay the foundation for better data analysis and reporting. Most of the old topics, and their associated articles and questions, have been mapped to the new taxonomy, but a few remain that will be manually mapped to their new topics.
On the community side, we also introduced improvements and fixes to the messaging feature, changed the KB display time to a locale-appropriate format, fixed a bug so that page view numbers display properly in the KB dashboard, and added a spam tag to questions marked as spam to make moderation easier for the forum moderators.
There will be a community call coming up on Oct 30 at 5pm UTC where we will be talking about Firefox 20th anniversary celebration and Firefox 132 release. Check out the agenda for more detail.
What’s new or coming up in Pontoon
Enhancements to Pontoon Search
We’re excited to announce that Pontoon now allows for more sophisticated searches for strings, thanks to the addition of the new search panel!
When searching for a string, clicking on the magnifying glass icon will open a dropdown, allowing users to select any combination of search options to help refine their search. Please note that the default search behavior has changed, as string identifiers must now be explicitly enabled in search options.
Pontoon Enhanced Search Options
User status banners
As part of the effort to introduce badges/achievements into Pontoon, we’ve added status banners under user avatars in the translation workspace. Status banners reflect the permissions of the user within the respective locale and project, eliminating the need to visit their profile page to view their role.
Namely, team managers will get the ‘MNGR’ tag, translators get the ‘TRNSL’ tag, project managers get the ‘PM’ tag, and those with site-wide admin permissions receive the ‘ADMIN’ tag. Users who have joined within the last three months will get the ‘NEW USER’ tag for their banner. Status banners also appear in comments made under translations.
New Pontoon logo
We hope you love the new Pontoon logo as much as we do! Thanks to all of you who expressed your preference by participating in the survey.
Pontoon New Logo
Friends of the Lion
Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!
Useful Links
Questions? Want to get involved?
If you want to get involved, or have any question about l10n, reach out to:
Did you enjoy reading this report? Let us know how we can improve it.
October 24, 2024 10:22 PM
August 28, 2024
When I began my 16-month journey as a Software Engineer intern at Mozilla, I had no idea how enriching the experience would be. I had just finished my third year as a computer science student at the University of Toronto, passionate about Artificial Intelligence (AI), Machine Learning (ML), and software engineering, with a thirst for hands-on experience. Mozilla, with its commitment to the open web and global community, was the perfect place for me to grow, learn, and contribute meaningfully.
Starting off strong on day one at Mozilla—calling the shots from the big screen :)!
Integrating into a Global Team
Joining Mozilla felt like being welcomed into a global family. Mozilla’s worldwide presence meant that asynchronous communication was not just a convenience but a necessity. My team was scattered across various time zones around the world—from Berlin to Helsinki, Slovenia to Seattle, and everywhere in between. Meanwhile, I was located in Toronto, where morning standups became my lifeline. The early hours of the day were crucial; I had to ensure all my questions were answered before my teammates signed off for the day. Collaborating across continents with a diverse team honed my adaptability and proficiency in asynchronous communication, ensuring smooth project progress despite time zone differences. This taught me the art of clear, concise communication and the importance of being proactive in a globally distributed team.
Our weekly team meeting, connecting from all corners of the globe!
Working on localization with such a diverse team gave me a unique perspective. I learned that while we all used the same technology, the challenges and solutions were as diverse as the locales we supported. This experience underscored the importance of creating technology that is not just globally accessible but also locally relevant.
Who knew software engineering could be so… circus-y? Meeting the team in style at Mozilla’s All Hands event in Montréal!
Building Success Through Teamwork
During my internship, I was treated as a full-fledged engineer, entrusted with significant responsibilities that allowed me to lead projects. This experience honed my strategic thinking and built my confidence, but it also taught me the importance of collaboration. Working closely with a team of three engineers, I quickly learned that effective communication was essential to our success. I actively participated in code reviews, feature assessments, and bug resolutions, always keeping my team informed through regular updates in standups and Slack. This open communication not only fostered strong relationships but also made me an effective team player, ensuring that our collective efforts were aligned and that we could achieve our goals together.
Driving Innovation
One of the things I quickly realized at Mozilla was that innovation isn’t just about coming up with new ideas—it’s about identifying areas for improvement and enhancing them. My interest in AI led me to spot an opportunity to elevate the translation process in Pontoon, Mozilla’s localization platform. After thorough research and discussions with my mentor and team, I proposed integrating large language models to boost the platform’s capabilities. This proactive approach not only enhanced the platform but also showcased my ability to think critically and solve problems effectively.
Diving into the Tech Stack
Mozilla gave me the opportunity to dive deep into a tech stack that was both challenging and exciting. I worked extensively with Python using the Django framework, React, TypeScript, and JavaScript, along with HTML and CSS. But it wasn’t just about the tools—it was about applying them in ways that would have a lasting impact.
One of my most significant projects was leading the integration of GPT-4 into Pontoon. This wasn’t just about adding another tool to the platform; it was about enhancing the translation process in a way that captured the subtle nuances of language, something that traditional machine translation tools often missed. The result? A feature that allowed localizers to rephrase text, or make text more formal or informal as needed, ultimately ensuring that Mozilla’s products resonated with users worldwide.
This project was a full-stack adventure. From prompt engineering on the backend to crafting a seamless frontend interface, I was involved in every stage of the development process. The impact was immediate and widespread—by August 2024, the feature had been used over 2,000 times across 52 distinct locales. Seeing something I worked on make such a tangible difference was incredibly rewarding. You can read more about this feature in my blog post here.
Another project that stands out is the implementation of a light theme in Pontoon, aimed at promoting accessibility and enhancing user experience. Recognizing that a single dark theme could be straining for some users, I spearheaded the development of a light theme and system theme option that adhered to accessibility standards and catered to diverse user preferences. Within the first six months of its launch, the feature was adopted by over 14% of users who logged in within the last 12 months, significantly improving usability and demonstrating Mozilla’s commitment to inclusive design.
Building a Stronger Community
Mozilla’s commitment to community is one of the things that drew me to the organization, and I was thrilled to contribute to it in meaningful ways. One of my proudest achievements was initiating the introduction of gamification elements in Pontoon. The goal was to enhance community engagement by recognizing and rewarding contributions through badges. By analyzing user data and drawing inspiration from platforms like Duolingo and GitHub, I helped design a system that not only motivated contributors but also enhanced the trustworthiness of translations.
But my impact extended beyond that. I had the opportunity to interact with our global audience and participate in various virtual events focused on engaging with our localization community. For instance, I took part in the “Three Women in Localization” interview, where I shared my experiences as a female engineer in the tech industry. I also participated in a fireside chat with the localization tech team to discuss our work and the future of localization at Mozilla. More recently, I organized a live virtual interview featuring the Firefox Translations team, which turned out to be our most engaging online event to date. It was an incredible opportunity to connect with Mozilla’s global community, discuss important topics like privacy and AI, and facilitate real-time interaction. These experiences not only allowed me to share my insights but also deepened my understanding of the broader community that powers Mozilla’s mission.
Joining forces with the inspiring women of Mozilla’s localization team during the “Three Women in Localization” interview, where we shared our experiences and insights as females in the tech industry.
From Mentee to Mentor
During the last four months of my internship, I had the opportunity to mentor and onboard our new intern, Harmit Goswami, who would be taking over my role once I returned to my last semester of university. My team entrusted me with this responsibility, and I guided him through the onboarding process—helping him get everything set up, introducing him to the codebase, and supporting him as he tackled his first bugs.
Mentoring our new intern, Harmit, as he joins our weekly tech team call for the first time from the Toronto office—welcoming him to the Mozilla family, one Zoom call at a time!
This experience taught me the importance of clear communication, setting expectations, and creating a learning path for his growth and success. I was fortunate to have an amazing mentor, Matjaž Horvat, throughout my internship, and it was incredibly rewarding to take what I had learned from him and pass it on. In the process, I also gained a deeper understanding of my own skills and how to teach and guide others effectively.
Learning and Growing Every Day
The fast-paced, collaborative environment at Mozilla pushed me to learn new technologies and skills on a tight schedule. Whether it was diving into Django for backend development or mastering the intricacies of version control with Git and GitHub, I was constantly learning and growing. More importantly, I learned the value of adaptability and how to thrive in an open-source work culture that was vastly different from my previous experiences in the financial sector.
Reflecting on the Journey
As I wrap up my internship, I can’t help but reflect on how much I’ve grown—both as an engineer and as a person.
As a person, I was able to step out of my comfort zone and host virtual events that were open to both the company and the public, enhancing my confidence and public speaking skills. Engaging with a diverse audience and facilitating meaningful discussions taught me the importance of effective communication and community engagement.
As an engineer, I had the opportunity to lead my own projects from the initial idea to deployment, which allowed me to fully immerse myself in the software development lifecycle and project management. This experience sharpened my technical acumen and taught me how to provide constructive feedback during senior code reviews, ensuring code quality and adherence to best practices. Beyond technical development, I expanded my expertise by adopting a user-centric approach—writing proposal documents, conducting research, analyzing user data, and drafting detailed specification documents. This comprehensive approach required me to blend technical skills with strategic thinking and user-focused design, ultimately refining my problem-solving, research, and communication abilities. These experiences made me a more versatile and well-rounded engineer.
This journey has been about more than just writing code. It’s been about building something that matters, connecting with a global community, and growing into the kind of engineer who not only solves problems but also embraces challenges with creativity and resilience. As I look ahead to the future, I’m excited to continue this journey, armed with the knowledge, skills, and passion that Mozilla has helped me cultivate.
Acknowledgments
I want to extend my deepest gratitude to my manager, Francesco Lodolo, and my mentor, Matjaž Horvat, for their unwavering support and guidance throughout my internship. To my incredible team and the entire Mozilla community, thank you for fostering an environment of learning, collaboration, and innovation. This experience has been invaluable, and I will carry these lessons and memories with me throughout my career.
Thank you for reading about my journey! If you have any questions or would like to discuss my experiences further, feel free to reach out via LinkedIn.
August 28, 2024 02:50 PM
August 02, 2024
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New content and projects
What’s new or coming up in Firefox desktop
Last month you may have seen “Firefox Labs” while translating in the Firefox project. In the coming months a number of new experimental features are being made available in Firefox through Firefox Labs, allowing users to test out and provide feedback (through Mozilla Connect) on in-development features. You will be able to turn those features on and off by navigating to your about:settings page and clicking “Firefox Labs.” You can test it out yourself in Nightly right now.
Starting from the upcoming Firefox version 131 you should start seeing strings to localize for a number of new experimental features.
AI Chatbot
You may have noticed this feature in the current version of Nightly already. With this enabled, you can add AI chatbots such as ChatGPT to the sidebar. When added, users can also select text on a page and use the context menu to choose a pre-generated prompt. This feature is being opened for localization in version 131, and in addition to the regular UI strings you would expect, the prompts for sending to the chatbot will also be available to localize.
Localizing chatbot prompts
You can localize these prompts as usual, but you may want to test potential prompts out to see the quality of the results returned and tweak if necessary. Please find some additional background information from the development team to help you when localizing these:
Starting with Firefox version 130, users can choose to add an AI chatbot to their browser. This feature will be added to the Settings > Firefox Labs page, where interested users can choose to try it out. The chatbots users can choose from are Anthropic Claude, ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral.
In addition to having the chatbot in the sidebar, when users select text on a webpage, we will suggest several actions the user can ask the chatbot to perform. Selecting an action sends the selection, the page title, and a prompt that we have written to the chatbot provider.
Prompts are the plain language ‘instructions’ for the chatbot and will be visible in the provider’s interface.
About our prompts
This table lists the actions, their purpose, and the prompt.
| Action | Purpose | Prompt |
| --- | --- | --- |
| Summarize | Help users understand what a selection covers at a glance | Please summarize the selection using precise and concise language. Use headers and bulleted lists in the summary, to make it scannable. Maintain the meaning and factual accuracy. |
| Explain this | Help users understand unfamiliar words and topics | Please explain the key concepts in this selection, using simple words. Also, use examples. |
| Simplify language | Make a selection easier to read | Please rewrite the selection using short sentences and simple words. Maintain the meaning and factual accuracy. |
| Quiz me | Test understanding of selection in an interactive way | Please quiz me on this selection. Ask me a variety of types of questions, for example multiple choice, true or false, and short answer. Wait for my response before moving on to the next question |
Writing style of prompts
In English, we have made the prompts concise and direct for a few reasons:
- Some providers have character restrictions around how much can be input into their chat interface (the ‘context window’). The length of the prompt plus the length of the selection are included in this character count.
- Being direct provides less room for misinterpretation of the instructions.
When localizing, please strive also for being concise and direct, but not at the expense of losing meaning. We understand this style may feel more “formal” than some of our other strings.
Sidebar customization / Vertical tabs
In addition to the AI chatbot mentioned above, more changes to the sidebar are in the works including the addition of vertical tabs. Keep your eye out for this experiment and associated strings coming in 131.
Upcoming features
In addition to the experiments planned for 131, there are more new features we can look forward to in later versions. Currently in active development are features related to profile management as well as creation of encrypted backups of your Firefox data.
What’s new or coming up in mobile
Firefox for Android has two exciting new features, and we’d love your help testing them out! Please use the Nightly version in both cases (which is the version you should be using anyway in order to test your localization work).
The first one is the Translation feature, which you can access by navigating to any website, and then going to Settings > Translate page. Play around with the feature, for example you can translate a page from English to French, and then from French to another language you may speak.
If you encounter any problems whatsoever, please file a bug here, under the Component “Translations”. Under “Type”, choose “Defect”.
Secondly, there is an entire toolbar menu redesign! This is not available by default on Nightly yet, so you will have to enable it through Secret Settings. To do so, go to Settings > About Firefox Nightly, and click 5 times on the Firefox Nightly logo. This will enable the Secret Settings, which you can access by clicking on the back arrow (which brings you back to Settings). Scroll down until you see “Secret Settings”. Then select both “Enable Navigation Toolbar” and “Enable Menu Redesign”. You’ll immediately notice the difference once you navigate via the bottom toolbar.
Please play around with this new feature as much as possible in your language – look out especially for truncations, as we expect to see quite a few.
If you encounter any problems whatsoever, please file a bug here, under the Component “Toolbar”. Under “Type”, choose “Defect”.
Firefox for iOS is expected to incorporate these changes in the future; however, that work has not started yet.
What’s new or coming up in SUMO
The next community call is coming up on August 7, 2024. We’ll talk about what’s coming in Firefox 129 as well as have a discussion with the lead editor of the IRL podcast to talk about their next season, “AI and Me.” Join us on Wednesday, August 7, 5pm UTC!
If you want to get updated on the upcoming Firefox release, check out our release wiki page for Firefox 129 to stay updated with known issues/dot releases. We’ve been doing this since Firefox 126 and it’s pretty well-received by the community.
Recently, we also teamed up with the Firefox team to organize the Firefox third-party installer campaign. As a result, we received 1,844 reports in total, identified 683 unique third-party websites and 105 unique download links. The Firefox team is currently conducting further investigations with the QA team based on these reports.
Apart from that, check out the contributor spotlight content that we published recently, and learn more about what we’ve done in Q2 from this blog post.
Events
This month we hosted Erik Nordin, Marco Castelluccio, and Greg Tatum from the Firefox Translations team for a virtual interview. We covered topics such as how the Firefox translation feature works, privacy features, incorporating LLMs and AI, and more. The stream recording will be available to view at any time. You can watch the recording on Air Mozilla or YouTube.
Please provide your feedback on this event through this form so we can make our future events even better!
In June we also hosted a Pontoon demo, which covers all the basic functionality you’ll need to get started translating on Pontoon, plus handy tips and tricks to help you get the most out of this easy to use tool.
Come check out all our event videos here!
Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.
Useful Links
Questions? Want to get involved?
If you want to get involved, or have any question about l10n, reach out to:
August 02, 2024 10:47 PM
May 28, 2024
Image generated by DALL-E 3
Imagine a world where language barriers do not exist; a tool so intuitive that it can understand the subtleties of every dialect and the jargon of any industry.
While we’re not quite there yet, advancements in Large Language Models (LLMs) are bringing us closer to this vision.
What are LLMs: Beyond the Buzz
2024 is buzzing with talk about “AI,” but what does it actually mean? Artificial Intelligence, especially LLMs, isn’t just a fad — it’s a fundamental shift in how we interface with technology. You’ve likely interacted with AI without even realizing it — when Google auto-completes your searches, when Facebook suggests who to tag in a photo, or when Netflix recommends what you should watch next.
LLMs are a breed of AI designed to understand and generate human language by analyzing vast amounts of text. They can compose poetry, draft legal agreements, and yes, translate languages. They’re not just processing language; they’re understanding context, tone, and even the subtext of what’s being written or said.
The Evolution of Translation: From Machine Translation to LLMs
Remember the early days of Google Translate? You’d input a phrase in English and get a somewhat awkward French equivalent. This was typical of statistical machine translation, which relied on vast amounts of bilingual text to make educated guesses. It was magic for its time, but it was just the beginning.
As technology advanced, we saw the rise of neural machine translation, which used AI to better understand context and nuance, resulting in more accurate translations. However, even these neural models have their limitations.
Enter LLMs, which look at the big picture, compare multiple interpretations, and can even consider cultural nuances before suggesting a translation.
Pontoon: The Heart of Mozilla’s Localization Efforts
Pontoon isn’t just any translation tool; it’s the backbone of Mozilla’s localization efforts, where a vibrant community of localizers breathes life into strings of text, adapting Mozilla’s products for global audiences. However, despite integrating various machine translation sources, these tools often struggle with capturing the subtleties essential for accurate translation.
How do we make localizers’ jobs easier? By integrating LLMs to assist not just in translating text but in understanding the spirit of what’s being conveyed. And crucially, this integration doesn’t replace our experienced localizers who supervise and refine these translations; it supports and enhances their invaluable work.
Leveraging Research: Making the Case for LLMs
Our journey began with a question: How can we enhance Pontoon with the latest AI technologies? Diving into research, we explored various LLM applications, from simplifying complex translation tasks to handling under-represented languages with grace.
To summarize the research:
- Performance in Translation: Studies like “Large Language Models Are State-of-the-Art Evaluators of Translation Quality” by Tom Kocmi and Christian Federmann demonstrated that LLMs, specifically GPT-3.5 and larger models, exhibit state-of-the-art capabilities in translation quality assessment. These models outperform other automatic metrics in quality estimation without a reference translation, especially at the system level.
- Robustness and Versatility: The paper “How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation” by Amr Hendy et al. highlighted the competitive performance of GPT models in translating high-resource languages. It also discussed the limited capabilities for low-resource languages and the benefits of hybrid approaches that combine GPT models with other translation systems.
- Innovative Approaches: Research on new trends in machine translation, such as “New Trends in Machine Translation using Large Language Models: Case Examples with ChatGPT” explored innovative directions like stylized and interactive machine translation. These approaches allow for translations that match specific styles or genres and enable user participation in the translation process, enhancing accuracy and fluency.
The findings were clear — LLMs present a significant opportunity to enhance Pontoon and improve translation quality.
Why We Chose This Path
Why go through this transformation? Because language is personal. Take the phrase “Firefox has your back.” In English, it conveys reliability and trust. A direct translation might miss this idiomatic expression, interpreting it literally as “someone has ownership of your back”, which could confuse or mislead users. LLMs can help maintain the intended meaning and nuance, ensuring that every translated phrase feels as though it was originally crafted in the user’s native language.
We can utilize the in-context learning of LLMs to help with this. This is a technique that informs the model about your data and preferences as it generates its responses via an engineered prompt.
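To make the idea of an engineered prompt concrete, here is a minimal sketch using the OpenAI Python client and GPT-4 (the model named in this post). The prompt wording below is illustrative only, not the prompt used in Pontoon, and it assumes an OPENAI_API_KEY is configured in the environment.

```python
# Hypothetical example of in-context learning for translation: the engineered
# system prompt carries the style and terminology requirements alongside the
# string to translate. This is not Pontoon's production prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a professional localizer. Translate the user's string from "
    "English to {target_language}. Preserve the intended meaning of idioms "
    "rather than translating them literally, keep brand names such as "
    "'Firefox' untranslated, and reply with the translation only."
)

def translate(text: str, target_language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT.format(target_language=target_language)},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(translate("Firefox has your back", "Bengali"))
```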
Experimenting: A Case Study with ChatGPT and GPT-4
To illustrate the effectiveness of our approach, I conducted a practical experiment with OpenAI’s ChatGPT, powered by GPT-4. I asked ChatGPT to translate the string “Firefox has your back” to Bengali. The initial translation roughly translates to “Firefox is behind you”, which doesn’t convey the original meaning of the string.
Asking GPT-4 to translate the string “Firefox has your back” to Bengali.
Now, it seems our friendly ChatGPT decided to go rogue and translated “Firefox” despite being told not to! Additionally, instead of simply providing the translation as requested, it gave a verbose introduction and even threw in an English pronunciation guide. This little mishap underscores a crucial point: the quality of the output heavily depends on how well the input is framed. It appears the AI got a bit too eager and forgot its instructions.
This experiment shows that even advanced models like GPT-4 can stumble if the prompt isn’t just right. We’ll dive deeper into the art and science of prompt engineering later, exploring how to fine-tune prompts to guide the model towards more accurate and contextually appropriate translations.
Next, I asked ChatGPT to translate the same string to Bengali, this time I specified to keep the original meaning of the string.
Asking GPT-4 to translate the string “Firefox has your back” to Bengali, while maintaining the original meaning of the string.
Adjusting the prompt, the translation evolved to “Firefox is with you”—a version that better captured the essence of the phrase.
I then used Google Translate to translate the same string.
Using Google Translate to translate the string “Firefox has your back” to Bengali.
For comparison, Google Translate offered a similar translation to the first attempt by GPT-4, which roughly translates to “Firefox is behind you”. This highlights the typical challenges faced by conventional machine translation tools.
This experiment underscores the potential of stylized machine translation to enhance translation quality, especially for idiomatic expressions or specific styles like formal or informal language.
The Essential Role of Prompt Engineering in AI Translation
Building on these insights, we dove deeper into the art of prompt engineering, a critical aspect of working with LLMs. This process involves crafting inputs that precisely guide the AI to generate accurate and context-aware outputs. Effective prompt engineering enhances the accuracy of translations, streamlines the translation process by reducing the need for revisions, and allows for customization to meet specific cultural and stylistic preferences.
Working together with the localization team, we tested a variety of prompts in languages like Italian, Slovenian, Japanese, Chinese, and French. We assessed each translation on its clarity and accuracy, categorizing them as unusable, understandable, or good. After several iterations, we refined our prompts to ensure they consistently delivered high-quality results, preparing them for integration into Pontoon’s Machinery tab.
How It Works: Bringing LLMs to Pontoon
Above is a demonstration of using the “Rephrase” option on the string “Firefox has your back” for the Italian locale. The original suggestion from Google’s Machine Translation meant “Firefox covers your shoulders”, while the rephrased version means “Firefox protects you”.
After working on the prompt engineering and implementation, we’re excited to announce the integration of LLM-assisted translations into Pontoon. For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab — the reason for limiting the feature to these locales is to gather insights on usage patterns before considering broader integration. Opening this dropdown will reveal three options:
- REPHRASE: Generate an alternative to this translation.
- MAKE FORMAL: Generate a more formal version of this translation.
- MAKE INFORMAL: Generate a more informal version of this translation.
After selecting an option, the revised translation will replace the original suggestion. Once a new translation is generated, another option, SHOW ORIGINAL, will be available in the dropdown menu. Selecting it will revert to the original suggestion.
The Future of Translation is Here
As we continue to integrate Large Language Models (LLMs) into Mozilla’s Pontoon, we’re not just transforming our translation processes — we’re redefining how linguistic barriers are overcome globally. By enhancing translation accuracy, maintaining cultural relevance, and capturing the nuances of language through the use of LLMs, we’re excited about the possibilities this opens up for users worldwide.
However, it’s important to emphasize that the role of our dedicated community of localizers remains central to this process. LLMs and machine translation tools are not used without the supervision and expertise of experienced localizers. These tools are designed to support, not replace, the critical work of our localizers who ensure that translations are accurate and culturally appropriate.
We are eager to hear your thoughts. How do you see this impacting your experience with Mozilla’s products? Do the translations meet your expectations for accuracy? Your feedback is invaluable as we strive to refine and perfect this technology. Please share your thoughts and experiences in the comments below or reach out to us on Matrix, or file an issue. Together, we can make the web a place without language barriers.
May 28, 2024 03:00 PM
May 02, 2024
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New content and projects
What’s new or coming up in Firefox desktop
To start, a “logistical” announcement: on April 29 we changed the configuration of the Firefox project in Pontoon to use a different repository for source (English) strings. This is part of a larger change that will move Firefox development from Mercurial to Git.
While the change was mostly transparent for localizers, there is an added benefit: as part of the Firefox project, you will now be able to localize about 40 strings that are used by GeckoView, the core of our Android browsers (Firefox, Focus). For your convenience, these are grouped in a specific tag called GeckoView. Since these are mostly old strings dating back to Fennec (Firefox for Android up to version 68), you will also find that existing translations have been imported — in fact, we imported over 4 thousand translations.
Going back to Firefox desktop, version 127 is currently in Nightly and will move to Beta on May 13. Over the past few weeks there have been a few new features and updates that are worth testing to ensure the best experience for users.
You are probably aware of the Firefox Translations feature available for a growing number of languages. While this feature was originally available for full-page translation, now it’s also possible to select text in the page and translate it through the context menu.
Screenshot of the Translation selection feature in Firefox.
Reader Mode is also in the process of getting a redesign, with more controls to customize the user experience.
Screenshot of the Reader Mode settings in Firefox Nightly.
The New Tab page has a new wallpaper function: in order to test it, go to about:config (see this page if you’re unfamiliar), search for browser.newtabpage.activity-stream.newtabWallpapers.enabled and flip its value to true (double-click will work). At this point, open a new tab and click the gear icon in the top-right corner. Note that the available wallpapers change depending on the current theme (dark vs light).
Screenshot of New Tab wallpaper selection in Nightly.
Last but not least, make sure to test the new features available in the integrated PDF Reader, in particular the dialog to add images and highlight elements in the page.
Screenshot of the PDF Viewer in Firefox, with the “Add image” UI.
What’s new or coming up in mobile
The mobile team is currently redesigning the app menus in Firefox Android and iOS. There will be many new menu strings landing in the upcoming versions (you may have already noticed some prelanding), including some dynamic menu text that may get truncated for some locales – especially on smaller screens.
Testing for this type of localization issue will be a focus: we’ll set expectations for it soon and send testing instructions (the v130 or v131 releases are currently the target). Strings will make their way incrementally into the new menus available through Firefox Nightly, allowing enough time for localizers to translate and test continuously.
What’s new or coming up in web projects
Mozilla.org
The mozilla.org team is creating a regular cleanup routine by labeling soon-to-be-replaced strings with an expiration date, usually two months after the string has become obsolete. This approach minimizes the time communities spend localizing strings that are no longer used. In other words, if you see a string labeled with a date, please skip it. Below is an example; in this case, you want to localize the v2 string:
example-v2 = Security, reliability and speed — on every device, anywhere you go.
# Obsolete string (expires: 2024-03-18)
example = Security, reliability and speed — from a name you can trust.
Relay Website
This product is in maintenance mode and it will not be open for new locales until we remove obsolete strings and revert the content migration to mozilla.org (see also l10n report from November 2023).
What’s new or coming up in SUMO
- Konstantina is joining the SUMO force! She moved from the Marketing team to the Customer Experience team in late Q1. If you haven’t gotten to know her yet, please don’t hesitate to say hi!
- AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
- We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
- Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.
- Want to know more about what we’ve done in Q1 2024? Read the recap here.
What’s new or coming up in Pontoon
Large Language Model (LLM) Integration
We’re thrilled to announce the integration of LLM-assisted translations into Pontoon! For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab. This feature enhances Google Translate outputs by leveraging a Large Language Model (LLM). Users can now tailor translations to be more formal or informal and rephrase text for clarity and tone.
Since January, our team has conducted extensive research to explore how other localization services are utilizing AI. We specifically focused on comparing the capabilities of Large Language Models (LLMs) against traditional machine translation methods and identifying industry best practices.
Our findings revealed that while tools like Google Translate provide a solid foundation, they sometimes fall short, often translating text too literally. Recognizing the potential for improvement, we introduced functionality within Pontoon to adjust the tone and refine phrases directly.
For example, consider the phrase “Firefox has your back” translated in the Italian locale. The suggestion provided by Google’s machine translation is literal and incorrect (“Firefox covers your shoulders”). The images below demonstrate the use of the “Rephrase” option:
Dropdown to use the LLM feature
Enhanced translation output from the LLM rephrasing the initial Google Translate result.
Furthering our community engagement, on April 29th, we hosted a Localization Fireside Chat. During this session, we discussed the new feature in depth and provided a live demonstration. Catch the highlights of our discussion at the following recordings (the LLM feature is discussed at the 7:22 mark):
Performance improvements
At the end of last year we asked Mozilla localizers which areas of Pontoon they would like to see improved. Performance optimizations were among the top-voted requests, and we’re happy to report we’ve landed several speedups since the beginning of the year.
The most notable improvements were made to the dashboards, with the Contributors, Insights, and Tags pages now loading in a fraction of the time they took earlier in the year. We’ve also improved the loading times of the Permissions tab, the Notifications page, and some filters.
As shown in the chart below, almost all the pages and actions will now take less time to load.
Chart showing the improved apdex score of several views in Pontoon.
Events
Watch our latest localization virtual events here.
Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.
Useful Links
Questions? Want to get involved?
If you want to get involved, or have any question about l10n, reach out to:
Did you enjoy reading this report? Let us know how we can improve it.
May 02, 2024 04:01 PM
February 07, 2024
Quite often, an imperfect translation is better than no translation. So why even publish untranslated content when high-quality machine translation systems are fast and affordable? Why not immediately machine-translate content and progressively ship enhancements as they are submitted by human translators?
At Mozilla, we call this process pretranslation. We began implementing it in Pontoon before COVID-19 hit, thanks to Vishal, who landed the first patches. Then we hit some headwinds and didn’t make much progress until 2022, when the project received a significant development boost; we finally launched it for the general audience in September 2023.
So far, 20 of our localization teams (locales) have opted to use pretranslation across 15 different localization projects. Over 20,000 pretranslations have been submitted and none of the teams have opted out of using it. These efforts have resulted in a higher translation completion rate, which was one of our main goals.
In this article, we’ll take a look at how we developed pretranslation in Pontoon. Let’s start by exploring how it actually works.
How does pretranslation work?
Pretranslation is enabled upon a team’s request (it’s off by default). When a new string is added to a project, it gets automatically pretranslated using a 100% match from translation memory (TM), which also includes translations of glossary entries. If a perfect match doesn’t exist, a locale-specific machine translation (MT) engine is used, trained on the locale’s translation memory.
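The selection order described above can be summarized in a simplified, self-contained sketch. The in-memory dictionary and the stubbed machine_translate function below are stand-ins for Pontoon’s real translation memory and custom MT engines, not its actual code.

```python
# Simplified sketch of the pretranslation source selection: prefer a 100%
# translation memory match (glossary entries included), otherwise fall back
# to the locale-specific machine translation engine.
from typing import Optional

TRANSLATION_MEMORY = {
    # (source string, locale) -> stored translation
    ("Cancel", "sl"): "Prekliči",
}

def tm_perfect_match(source: str, locale: str) -> Optional[str]:
    return TRANSLATION_MEMORY.get((source, locale))

def machine_translate(source: str, locale: str) -> str:
    # Placeholder for a call to the locale's custom-trained MT engine.
    return f"<MT translation of '{source}' for {locale}>"

def pretranslate(source: str, locale: str) -> str:
    return tm_perfect_match(source, locale) or machine_translate(source, locale)

print(pretranslate("Cancel", "sl"))        # translation memory hit
print(pretranslate("Save changes", "sl"))  # machine translation fallback
```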
After pretranslations are retrieved and saved in Pontoon, they get synced to our primary localization storage (usually a GitHub repository) and hence immediately made available for shipping. Unless they fail our quality checks. In that case, they don’t propagate to repositories until errors or warnings are fixed during the review process.
Until reviewed, pretranslations are visually distinguishable from user-submitted suggestions and translations. This makes post-editing much easier and more efficient. Another key factor that influences pretranslation review time is, of course, the quality of pretranslations. So let’s see how we picked our machine translation provider.
Choosing a machine translation engine
We selected the machine translation provider based on two primary factors: quality of translations and the number of supported locales. To make translations match the required terminology and style as much as possible, we were also looking for the ability to fine-tune the MT engine by training it on our translation data.
In March 2022, we compared Bergamot, Google’s Cloud Translation API (generic), and Google’s AutoML Translation (with custom models). Using these services we translated a collection of 1,000 strings into 5 locales (it, de, es-ES, ru, pt-BR), and used automated scores (BLEU, chrF++) as well as manual evaluation to compare them with the actual translations.
Performance of tested MT engines for Italian (it).
Google’s AutoML Translation outperformed the other two candidates in virtually all tested scenarios and metrics, so it became the clear choice. It supports over 60 locales. Google’s Generic Translation API supports twice as many, but we currently don’t plan to use it for pretranslation in locales not supported by Google’s AutoML Translation.
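For reference, automated scores like the BLEU and chrF++ figures mentioned above can be computed with the sacrebleu library. The post does not say which tooling was actually used for the evaluation, so treat this as one possible setup; the Italian strings are illustrative examples taken from elsewhere in this report.

```python
# Hedged sketch: scoring MT output against reference translations with
# sacrebleu (BLEU and chrF++). Not necessarily the evaluation pipeline Mozilla used.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["Firefox ti protegge.", "Apri una nuova scheda."]             # MT output
references = ["Firefox ti copre le spalle.", "Apri una nuova scheda."]      # human translations

bleu = BLEU().corpus_score(hypotheses, [references])
chrf_pp = CHRF(word_order=2).corpus_score(hypotheses, [references])  # word_order=2 -> chrF++

print(bleu.score, chrf_pp.score)
```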
Making machine translation actually work
Currently, around 50% of pretranslations generated by Google’s AutoML Translation get approved without any changes. For some locales, the rate is around 70%. Keep in mind however that machine translation is only used when a perfect translation memory match isn’t available. For pretranslations coming from translation memory, the approval rate is 90%.
To reach that approval rate, we had to make a series of adjustments to the way we use machine translation.
For example, we convert multiline messages to single-line messages before machine-translating them. Otherwise, each line is treated as a separate message and the resulting translation is of poor quality.
Multiline message:
Make this password unique and different from any others you use.
A good strategy to follow is to combine two or more unrelated
words to create an entire pass phrase, and include numbers and symbols.
Multiline message converted to a single-line message:
Make this password unique and different from any others you use. A good strategy to follow is to combine two or more unrelated words to create an entire pass phrase, and include numbers and symbols.
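A minimal sketch of that conversion is shown below: collapse the multiline message into a single line before sending it to machine translation, so the engine sees one message instead of several unrelated fragments. The exact normalization Pontoon applies may differ.

```python
# Collapse a multiline message into a single line, folding any run of
# whitespace (including newlines) into a single space.
def to_single_line(message: str) -> str:
    return " ".join(message.split())

multiline = (
    "Make this password unique and different from any others you use.\n"
    "A good strategy to follow is to combine two or more unrelated\n"
    "words to create an entire pass phrase, and include numbers and symbols."
)
print(to_single_line(multiline))
```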
Let’s take a closer look at two of the more time-consuming changes.
The first one is specific to our machine translation provider (Google’s AutoML Translation). During initial testing, we noticed it would often take a long time for the MT engine to return results, up to a minute. Sometimes it even timed out! Such a long response time not only slows down pretranslation, it also makes machine translation suggestions in the translation editor less useful – by the time they appear, the localizer has already moved to translate the next string.
After further testing, we began to suspect that our custom engine shuts down after a period of inactivity, thus requiring a cold start for the next request. We contacted support and our assumption was confirmed. To overcome the problem, we were advised to send a dummy query to the service every 60 seconds just to keep the system alive.
Of course, it’s reasonable to shut down inactive services to free up resources, but the way to keep them alive isn’t. We have to make (paid) requests to each locale’s machine translation engines every minute just to make sure they work when we need them. And sometimes even that doesn’t help – we still see about a dozen ServiceUnavailable errors every day. It would be so much easier if we could just customize the default inactivity period or pay extra for an always-on service.
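The keep-alive workaround amounts to something like the sketch below: ping each locale’s custom engine every 60 seconds so it never goes cold. The translate_dummy function is a hypothetical stand-in for the real (paid) Google AutoML Translation request, which is not shown here.

```python
# Hedged sketch of the keep-alive loop described above.
import threading
import time

LOCALES_WITH_MT_ENGINES = ["it", "de", "sl"]

def translate_dummy(locale: str) -> None:
    # Stand-in for a minimal request to the locale's custom MT model.
    print(f"keep-alive ping sent for {locale}")

def keep_engines_warm(interval_seconds: int = 60) -> None:
    while True:
        for locale in LOCALES_WITH_MT_ENGINES:
            translate_dummy(locale)
        time.sleep(interval_seconds)

# Run the keep-alive loop in the background.
threading.Thread(target=keep_engines_warm, daemon=True).start()
```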
The other issue we had to address is quite common in machine translation systems: they are not particularly good at preserving placeholders. In particular, extra space often gets added to variables or markup elements, resulting in broken translations.
Message with variables:
{ $partialSize } of { $totalSize }
Message with variables machine-translated to Slovenian (adding space after $ breaks the variable):
{$ partialSize} od {$ totalSize}
We tried to mitigate this issue by wrapping placeholders in <span translate="no">…</span>, which tells Google’s AutoML Translation to not translate the wrapped text. This approach requires the source text to be submitted as HTML (rather than plain text), which triggers a whole new set of issues — from adding spaces in other places to escaping quotes — and we couldn’t circumvent those either. So this was a dead-end.
The solution was to store every placeholder in the Glossary with the same value for both source string and translation. That approach worked much better and we still use it today. It’s not perfect, though, so we only use it to pretranslate strings for which the default (non-glossary) machine translation output fails our placeholder quality checks.
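In outline, the glossary workaround looks like the sketch below: every placeholder found in the source string becomes a glossary entry whose source and target are identical, so the MT engine is told to copy it verbatim. The regex and the entry format are illustrative, not Pontoon’s exact implementation.

```python
# Build source-equals-target glossary entries for every placeholder in a string.
import re

PLACEHOLDER_RE = re.compile(r"\{\s*\$?\w+\s*\}")  # e.g. { $partialSize }

def placeholder_glossary(source: str) -> list[dict]:
    return [
        {"source": placeholder, "target": placeholder}
        for placeholder in PLACEHOLDER_RE.findall(source)
    ]

print(placeholder_glossary("{ $partialSize } of { $totalSize }"))
# [{'source': '{ $partialSize }', 'target': '{ $partialSize }'},
#  {'source': '{ $totalSize }', 'target': '{ $totalSize }'}]
```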
Making pretranslation work with Fluent messages
On top of the machine translation service improvements we also had to account for the complexity of Fluent messages, which are used by most of the projects we localize at Mozilla. Fluent is capable of expressing virtually any imaginable message, which means it is the localization system you want to use if you want your software translations to sound natural.
As a consequence, Fluent message format comes with a syntax that allows for expressing such complex messages. And since machine translation systems (as seen above) already have trouble with simple variables and markup elements, their struggles multiply with messages like this:
shared-photos =
{ $photoCount ->
[one]
{ $userGender ->
[male] { $userName } added a new photo to his stream.
[female] { $userName } added a new photo to her stream.
*[other] { $userName } added a new photo to their stream.
}
*[other]
{ $userGender ->
[male] { $userName } added { $photoCount } new photos to his stream.
[female] { $userName } added { $photoCount } new photos to her stream.
*[other] { $userName } added { $photoCount } new photos to their stream.
}
}
That means Fluent messages need to be pre-processed before they are sent to the pretranslation systems. Only relevant parts of the message need to be pretranslated, while syntax elements need to remain untouched. In the example above, we extract the following message parts, pretranslate them, and replace them with pretranslations in the original message:
- { $userName } added a new photo to his stream.
- { $userName } added a new photo to her stream.
- { $userName } added a new photo to their stream.
- { $userName } added { $photoCount } new photos to his stream.
- { $userName } added { $photoCount } new photos to her stream.
- { $userName } added { $photoCount } new photos to their stream.
To be more accurate, this is what happens for languages like German, which uses the same CLDR plural forms as English. For locales without plurals, like Chinese, we drop plural forms completely and only pretranslate the remaining three parts. If the target language is Slovenian, two additional plural forms need to be added (two, few), which in this example results in a total of 12 messages needing pretranslation (four plural forms, with three gender forms each).
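A simplified sketch of this pre-processing step is shown below, using the fluent.syntax Python library: parse the message, walk its pattern, and collect the innermost variant patterns as plain strings so that only those parts are sent for pretranslation while the select-expression syntax stays untouched. This is not Pontoon’s actual implementation, and it handles only text, variable references, and select expressions.

```python
# Extract the translatable parts of a Fluent message with select expressions.
from fluent.syntax import FluentParser, ast

def pattern_to_text(pattern, collected):
    """Flatten a pattern into text, descending into select expressions."""
    text = ""
    for element in pattern.elements:
        if isinstance(element, ast.TextElement):
            text += element.value
        elif isinstance(element, ast.Placeable):
            expr = element.expression
            if isinstance(expr, ast.SelectExpression):
                # Each variant is collected as a separate translatable part.
                for variant in expr.variants:
                    pattern_to_text(variant.value, collected)
            elif isinstance(expr, ast.VariableReference):
                text += "{ $" + expr.id.name + " }"
    if text.strip():
        collected.append(text.strip())
    return collected

source = """
shared-photos =
    { $photoCount ->
        [one] { $userName } added a new photo to his stream.
       *[other] { $userName } added { $photoCount } new photos to his stream.
    }
"""
message = FluentParser().parse(source).body[0]
print(pattern_to_text(message.value, []))
```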
Finally, the Pontoon translation editor uses a custom UI for translating access keys. That means it's capable of detecting which part of the message is an access key and which is the label the access key belongs to. The access key should ideally be one of the characters included in the label, so the editor generates a list of candidates that translators can choose from. In pretranslation, the first candidate is directly used as the access key, so no TM or MT is involved.
Access keys (not to be confused with shortcut keys) are used for accessibility to interact with all controls or menu items using the keyboard. Windows indicates access keys by underlining the access key assignment when the Alt key is pressed. Source: Microsoft Learn.
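To sketch the candidate generation (a simplified stand-in, not Pontoon's actual code), one can collect the distinct characters of the translated label and pick the first one as the pretranslated access key; the Slovenian label below is a made-up example:
def access_key_candidates(label):
    # Distinct alphanumeric characters from the label, in order of appearance.
    seen = set()
    candidates = []
    for char in label:
        if char.isalnum() and char.lower() not in seen:
            seen.add(char.lower())
            candidates.append(char)
    return candidates

label = "Uvozi zaznamke"        # hypothetical translated label
candidates = access_key_candidates(label)
access_key = candidates[0] if candidates else ""
print(candidates, "->", access_key)  # ['U', 'v', 'o', 'z', 'i', 'a', 'n', 'm', 'k', 'e'] -> U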
Looking ahead
With every enhancement we shipped, the case for publishing untranslated text instead of pretranslations became weaker and weaker. And there's still room for improvement in our pretranslation system.
Ayanaa has done extensive research on the impact of Large Language Models (LLMs) on translation efficiency. She’s now working on integrating LLM-assisted translations into Pontoon’s Machinery panel, from which localizers will be able to request alternative translations, including formal and informal options.
If the target locale could set the tone to formal or informal on the project level, we could benefit from this capability in pretranslation as well. We might also improve the quality of machine translation suggestions by providing existing translations into other locales as references in addition to the source string.
If you are interested in using pretranslation or already use it, we’d love to hear your thoughts! Please leave a comment, reach out to us on Matrix, or file an issue.
February 07, 2024 10:55 AM
February 02, 2024
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New content and projects
What’s new or coming up in Firefox desktop
While the amount of content has been relatively small over the last few months in Firefox, there have been some UI changes and updates to privacy-settings-related text, such as form autofill, Cookie Banner Blocker, passwords (about:logins), and cookie and site data*. One change happening here (and across all Mozilla products) is the move away from using the term "login" to describe the credentials for accessing websites, and toward using "password(s)" instead.
In addition, while the number of strings is low, Firefox’s PDF viewer will soon have the ability to highlight content. You can test this feature now in Nightly.
Most of these strings and translations can be previewed by checking a Nightly build. If you’re new to localizing Firefox or if you missed our deep dive, please check out our blog post from July to learn more about the Firefox release schedule.
*Recently in our l10n community Matrix channel, someone from our community asked how the new strings for clearing browsing history and data (see screenshot below) from Cookies and Site Data could be shown in Nightly.
In order to show the strings in Nightly, the privacy.sanitize.useOldClearHistoryDialog preference needs to be set to false. To set the preference, type about:config in your URL bar and press Enter. A warning page may appear advising you to proceed with caution; click the button to continue. On the page that follows, paste privacy.sanitize.useOldClearHistoryDialog into the search field, then click the toggle button to change the value to false.
You can then trigger the new dialog by clicking "Clear Data…" from the Cookies and Site Data setting or "Clear History…" from the History section. (You may need to quit Firefox and open it again for the change to take effect.)
In case of doubts about managing about:config, you can consult the Configuration Editor guide on SUMO.
What’s new or coming up in mobile
Much like desktop, mobile land has been pretty calm recently.
Having said that, we would like to call out the new Translation feature that is now available to test on the latest Firefox for Android v124 Nightly builds (this is possible only through the secret settings at the moment). It’s a built-in full page translation feature that allows you to seamlessly browse the web in your preferred language. As you navigate the site, Firefox continuously translates new content.
Check your Pontoon notifications for instructions on how to test it out. Note that the feature is not available on iOS at the moment.
In the past couple of months you may have also noticed strings mentioning a new shopping feature called “Review Checker” (that we mentioned for desktop in our November edition). The feature is still a bit tricky to test on Android, but there are instructions you can follow – these can also be found in your Pontoon notification archive.
For testing on iOS, you just need to have the latest Beta version installed and navigate to product pages on the US sites of amazon.com, bestbuy.com, and walmart.com. A logo will appear in the URL bar along with a notification, letting you launch and test the feature.
Finally, another notable change that has already been called out under the Firefox desktop section above: we are moving away from using the term "login" to describe the credentials for accessing websites, and will use "password(s)" instead.
What’s new or coming up in Foundation projects
New languages have been added to Common Voice in 2023: Tibetan, Chichewa, Ossetian, Emakhuwa, Laz, Pular Guinée, Sindhi. Welcome!
What’s new or coming up in Pontoon
Improved support for mobile devices
The Pontoon translation workspace is now responsive, which means you can finally use Pontoon on your mobile device to translate and review strings! We developed a single-column layout for mobile phones and a two-column layout for tablets.
Screenshot of Pontoon UI on a smartphone running Firefox for Android
2024 Pontoon survey
Thanks again to everyone who participated in the 2024 Pontoon survey. The three top-voted features we commit to implementing are:
- Add ability to edit Translation Memory entries (611 votes).
- Improve performance of Pontoon translation workspace and dashboards (603 votes).
- Add ability to propose new Terminology entries (595 votes).
Friends of the Lion
We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!
Also, is there someone in your l10n community who's been doing a great job and should appear in this section? Contact us and we'll make sure they get a shout-out!
Useful Links
Questions? Want to get involved?
If you want to get involved, or have any questions about l10n, reach out to:
Did you enjoy reading this report? Let us know how we can improve it.
February 02, 2024 08:07 AM
January 18, 2024
After the previous post highlighting what the Mozilla community and Localization Team achieved in 2023, it’s time to dive deeper on the work the team does in the area of localization technologies and standards.
A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:
We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.
To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.
That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.
In 2023, a large part of our work on localization standards has been focused on Unicode MessageFormat 2 (aka "MF2"), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial "technology preview" release as part of the Unicode CLDR release in spring 2024.
Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.
So, besides the long-term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so predicting results has not been, and will not be, completely possible. One tangible benefit that we've already been able to identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: rather than showing a message in pieces, we've adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.
Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!
At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.
January 18, 2024 07:33 AM
March 31, 2023
March 31, or “three thirty-one,” is something of a talisman in the Mozilla community. It’s the date that, back in 1998, Mozilla first came into being — the date that we open-sourced the Netscape code for the world to use.
This year, “three thirty-one” is especially meaningful: It’s Mozilla’s 25 year anniversary.
A lot has changed since 1998. Mozilla is no longer just a bold idea. We're a family of organizations — a nonprofit, a public benefit corporation, and others — that builds products, fuels movements, and invests in responsible tech.
And we’re no longer a small group of engineers in Netscape’s Mountain View office. We’re technologists, researchers, and activists located around the globe — not to mention tens of thousands of volunteers.
But if a Mozillian from 1998 stepped into a Mozilla office (or joined a Mozilla video call) in 2023, I think they’d quickly feel something recognizable. A familiar spirit, and a familiar set of values.
When Mozilla open-sourced our browser code 25 years ago, the reason was the public interest: We wanted to spark more innovation, more competition, and more choice online. Technology in the public interest has been our manifesto ever since — whether releasing Firefox 1.0 in 2004, or launching Mozilla.ai earlier this year.
Right now, technology in the public interest seems more important than ever before. The internet today is deeply entwined with our personal lives, our professional lives, and society at large. The internet today is also flawed. Centralized control reduces choice and competition. A focus on “engagement” magnifies outrage, and bad actors are thriving.
Right now — and over the next 25 years — Mozilla can do something about this.
Mozilla's mission and principles are evergreen, and we will continue to evolve to meet the needs and challenges of the modern internet. How people use the internet will change over time, but the need for innovative products that give individuals agency and choice on the internet is a constant. Firefox has evolved from a faithful and efficient renderer of web pages on PCs to a cross-platform agent that acts on behalf of the individual, protecting them from bad actors and surveillance capitalists as they navigate the web. Mozilla has introduced new products, such as Firefox Relay and Mozilla VPN, to keep people's identity protected and activity private as they use the internet. Mozilla is contributing to healthy public discourse, with Pocket enabling discovery of amazing content and the mozilla.social Mastodon instance supporting decentralized, community-driven social media.
We’re constantly exploring ways to apply new technologies so that people feel the benefits in their everyday lives, as well as inspire others to responsibly innovate on behalf of humanity. As AI emerges as a core building block for the future of computing, we’ll turn our attention in that direction and ask: How can we make products and technologies like machine learning work in the public interest? We’ve already started this work via Mozilla.ai, a new Mozilla organization focusing on a trustworthy, independent, and open-source AI ecosystem. And via the Responsible AI Challenge, where we’re convening (and funding) bright people and ambitious projects building trustworthy AI.
And we will continue to champion public policy that keeps the internet healthy. There is proposed legislation around the world that seeks to maintain the internet in the public interest: the Platform Accountability and Transparency Act (PATA) in the U.S., the Digital Services Act (DSA) in the EU. Mozilla has helped shape these laws, and we will continue to follow along closely with their implementation and enforcement.
On this “three thirty-one,” I’m realistic about the challenges facing the internet. But I’m also optimistic about Mozilla’s potential to address them. And I’m looking forward to another 25 years of not just product, but also advocacy, philanthropy, and policy in service of a better internet.
March 31, 2023 04:46 PM
March 06, 2023
As Mozilla reaches its 25th anniversary this year, we're working hard to set up our 'next chapter' — thinking bigger and being bolder about how we can shape the coming era of the internet. We're working to expand our product offerings, creating multiple options for consumers, audiences and business models. We're growing our philanthropic and advocacy work that promotes trustworthy AI. And, we're creating two new Mozilla companies: Mozilla.ai, to develop a trustworthy open source AI stack, and Mozilla Ventures, to invest in responsible tech companies. Across all of this, we've been actively recruiting new leaders who can help us build Mozilla for this next era.
With all of this in mind, we are seeking three new members for the Mozilla Foundation Board of Directors. These Board members will help grow the scope and impact of the Mozilla Project overall, working closely with the Boards of the Mozilla Corporation, Mozilla.ai and Mozilla Ventures. At least one of the new Board members will play a central role in guiding the work of the Foundation’s charitable programs, which focuses on movement building and trustworthy AI.
What is the role of a Mozilla board member?
I’ve written in the past about the role of the Board of Directors at Mozilla.
At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for executives to do their jobs. We feel differently. We work openly and transparently, and want Board members to be part of the team and part of the community.
It’s worth noting that Mozilla is an unusual organization. As I wrote in our most recent annual report:
Mozilla is a rare organization. We’re activists for a better internet, one where individuals and societies benefit more from the effects of technology, and where competition brings consumers choices beyond a small handful of integrated technology giants.
We’re activists who champion change by building alternatives. We build products and compete in the consumer marketplace. We combine this with advocacy, policy, and philanthropic programs connecting to others to create change. This combination is rare.
It’s important that our Board members understand all this, including why we build consumer products and why we have a portfolio of organizations playing different roles. It is equally important that the Boards of our commercial subsidiaries understand why we run charitable programs within Mozilla Foundation that complement the work we do to develop products and invest in responsible tech companies.
What are we looking for?
At the highest level, we are seeking people who can help our global organization grow and succeed — and who ensure that we advance the work of the Mozilla Manifesto over the long run. Here is the full job description: https://mzl.la/MofoBoardJD2023
There are a variety of qualities that we seek in all Board members, including a cultural sense of Mozilla and a commitment to an open, transparent, community driven approach. We are also focused on ensuring the diversity of the Board, and fostering global perspectives.
As we recruit, we typically look to add specific skills or domain expertise to the Board. Current examples of areas where we’d like to add expertise include:
- Mission-based business — experience creating, running or overseeing organizations that combine public benefit and commercial activities towards a mission.
- Global, public interest advocacy – experience leading successful, large-scale public interest advocacy organizations with online mobilization and shaping public discourse on key issues at the core.
- Effective ‘portfolio’ organizations – experience running or overseeing organizations that include a number of divisions, companies or non-profits under one umbrella, with an eye to helping the portfolio add up to more than the sum of its parts.
Finding the right people who match these criteria and who have the skills we need takes time. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.
We want your suggestions
We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to msurman@mozillafoundation.org. We will use real discretion with the names you send us.
March 06, 2023 07:19 PM
January 08, 2020
Mozilla is a global community that is building an open and healthy internet. We do so by building products that improve internet life, giving people more privacy, security and control over the experiences they have online. We are also helping to grow the movement of people and organizations around the world committed to making the digital world healthier.
As we grow our ambitions for this work, we are seeking new members for the Mozilla Foundation Board of Directors. The Foundation’s programs focus on the movement building side of our work and complement the products and technology developed by Mozilla Corporation.
What is the role of a Mozilla board member?
I’ve written in the past about the role of the Board of Directors at Mozilla.
At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for the Executive Director to do his or her job. I wrote in my previous post that “We feel differently”. This is still true today. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.
It’s worth noting that Mozilla is an unusual organization. We’re a technology powerhouse with broad internet openness and empowerment at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the technology industry.
It’s important that our board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.
What are we looking for?
Last time we opened our call for board members, we created a visual role description. Below is an updated version reflecting the current needs for our Mozilla Foundation Board.
Here is the full job description: https://mzl.la/MoFoBoardJD
Here is a short explanation of how to read this visual:
- In the vertical columns, we have the particular skills and expertise that we are looking for right now. We expect new board members to have at least one of these skills.
- The horizontal lines speak to things that every board member should have. For instance, to be a board member, you have to have some cultural sense of Mozilla. They are a set of things that are important for every candidate. In addition, there is a set of things that are important for the board as a whole. For instance, international experience. The board makeup overall should cover these areas.
- The horizontal lines will not change too much over time, whereas the vertical lines will change, depending on who joins the Board and who leaves.
Finding the right people who match these criteria and who have the skills we need takes time. We hope to have extensive discussions with a wide range of people. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.
We want your suggestions
We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to msurman@mozillafoundation.org. We will use real discretion with the names you send us.
January 08, 2020 05:18 PM
May 02, 2019
Introduction
A couple of weeks ago the Localization Team at Mozilla released the Fluent Syntax specification. As mentioned in our announcement, we already have over 3000 Fluent strings in Firefox. You might wonder how we introduced Fluent to a running project. In this post I'll detail how the design of Fluent plays into that effort, and how we pulled it off.
Fluent’s Design for Simplicity
Fluent abstracts away the complexities of human languages from programmers. At the same time, Fluent makes easy things easy for localizers, while making complex things possible.
When you migrate a project to Fluent, you build on both of those design principles. You will simplify your code, and move the string choices from your program into the Fluent files. Only then will you expose Fluent to localizers, so they can actually take advantage of the capabilities of Fluent and perfect the localizations of your project.
Fluent’s Layered Design
When building runtime implementations, we created several layers to tightly own particular tasks.
- Fluent source files are parsed into Resources.
- Multiple resources are aggregated in Bundles, which expose APIs to resolve single strings. Message and Term references resolve inside Bundles, but not necessarily inside Resources. A Bundle is associated with a single language, as well as fallback languages for i18n libraries.
- Language negotiation and language fallback happen in the Localization level. Here you’d implement that someone looking for Frisian would get a Frisian string. If that’s missing or has a runtime problem, you might want to try Dutch, and then English.
- Bindings use the Localization API, and integrate it into the development stack. They marshal data models from the programming language into Fluent data models like strings, numbers, and dates. Declarative bindings also apply the localizations to the rendered UI.
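As a rough illustration of these layers (a sketch, with the file layout and locale chain assumed for the example), here is how the Python implementation in the fluent.runtime package exposes Bundles and the Localization level:
from fluent.runtime import (FluentBundle, FluentResource,
                            FluentLocalization, FluentResourceLoader)

# Resource + Bundle: parse one resource and resolve a single message.
bundle = FluentBundle(["fy-NL"], use_isolating=False)
bundle.add_resource(FluentResource("hello = Goeie, { $name }!"))
message = bundle.get_message("hello")
value, errors = bundle.format_pattern(message.value, {"name": "Anne"})
print(value)

# Localization: language negotiation and fallback (Frisian, then Dutch, then
# English), assuming l10n/fy-NL/main.ftl and friends exist on disk.
loader = FluentResourceLoader("l10n/{locale}")
l10n = FluentLocalization(["fy-NL", "nl", "en-US"], ["main.ftl"], loader)
print(l10n.format_value("hello", {"name": "Anne"}))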
Invest in Bindings
Bindings integrate Fluent into your development workflow. For Firefox, we focused on bindings to generate localized DOM. We also have bindings for React. These bindings determine how fluent Fluent feels to developers, but also how much Fluent can help with handling the localized return values. To give an example, integrating Fluent into Android app development would probably focus on a LayoutInflater. In the bindings we use at Mozilla, we decided to localize as close to the actual display of the strings as possible.
If you have declarative UI generation, you want to look into a declarative binding for Fluent. If your UI is generated programmatically, you want a programmatic binding.
The Localization classes also integrate I/O into your application runtime, and making the right choices here has a strong impact on performance characteristics: not just speed, but also whether untranslated strings are shown briefly.
Migrate your Code
Migrating your code will often be a trivial change from one API to another. Most of your code will get a string and show it, after all. You might convert several different APIs into just one in Fluent, in particular dedicated plural APIs will go away.
You will also move platform-specific terminology into the localization side, removing conditional code. You should also be able to stop stitching several localized strings together in your application logic.
As we’ll go through the process here, I’ll show an example of a sentence with a link. The project wants to be really sure the link isn’t broken, so it’s not exposed to localizers at all. This is shortened from an actual example in Firefox, where we link to our privacy policy. We’ll convert to DOM overlays, to separate localizable and non-localizable aspects of the DOM in Fluent. Let’s just look at the HTML code snippet now, and look at the localizations later.
Before:
<li>&msg-start;<a href="https://example.com">&msg-middle;</a>&msg-end;</li>
After:
<li data-l10n-id="msg"><a href="https://example.com" data-l10n-name="msg-link"></a></li>
Migrate your Localizations
Last but not least, we'll want to migrate the localizations. While migrating code is work, losing all your existing localizations is just an outright bad idea.
For our work on Firefox, we use a Python package named fluent.migrations. It builds on top of the fluent.syntax package, and programmatically creates Fluent files from existing localizations.
It allows you to copy and paste existing localizations into a Fluent string for the simplest cases. It can also concatenate several strings into a single result, as you used to do in your code. For these very simple cases, it even uses Fluent syntax, with specialized global functions to copy strings.
Example:
msg = {COPY(from_path,"msg-start")}<a data-l10n-name="msg-link">{COPY(from_path,"msg-middle")}</a>{COPY(from_path,"msg-end")}
Then there are a bit more complicated tasks, notably involving variable references. Fluent only supports its built-in variable placement, so you need to migrate away from printf and friends. That involves first normalizing the various ways that a printf parameter can be formatted and placed, and then the code can do a simple replacement of text like %2$S with a Fluent variable reference like { $user-name }.
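As a hedged sketch of that normalization-and-replacement step (the position-to-name mapping is something you would define per message, not part of the package):
import re

def printf_to_fluent(text, names):
    # Rewrite %S / %1$S style parameters as named Fluent variable references.
    position = 0
    def repl(match):
        nonlocal position
        index = match.group(1)
        i = int(index) - 1 if index else position
        position += 1
        return "{ $" + names[i] + " }"
    # Handles %S and %2$S; real legacy strings need more conversion specifiers.
    return re.sub(r"%(?:(\d+)\$)?S", repl, text)

print(printf_to_fluent("%2$S added %1$S new photos.", ["photo-count", "user-name"]))
# { $user-name } added { $photo-count } new photos.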
We also have logic to read our Mozilla-specific plural logic from legacy files, and to write them out as select-expressions in Fluent, with a variant for each plural form.
These transforms are implemented as pseudo nodes in a template AST, which is then evaluated against the legacy translations and creates an actual AST, which can then be serialized.
Concluding our example, before:
<!ENTITY msg-start "This is a link to an ">
<!ENTITY msg-middle "example">
<!ENTITY msg-end ".">
After:
msg = This is a link to an <a data-l10n-name="msg-link">example</a> site.
Find out more about this package and its capabilities in the documentation.
Given that we're open source, we also want to carry over attribution. Thus our code not only migrates all the data, but also splits the migration into individual commits, one for each author of the migrated translations.
Once the baseline is migrated, localizers can dive in and improve. They can then start using parameterized Terms to adjust grammar, for example. Or add a plural form where English didn't need one. Or introduce platform-specific terminology that only exists in their language.
May 02, 2019 08:24 AM
August 07, 2018
Gerv was Mozilla’s first intern. He arrived in the summer of 2001, when Mozilla staff was still AOL employees. It was a shock that AOL had allocated an intern to the then-tiny Mozilla team, and we knew instantly that our amazingly effective volunteer in the UK would be our choice.
When Gerv arrived a few things about him jumped out immediately. The first was a swollen, shiny, bright pink scar on the side of his neck. He quickly volunteered that the scar was from a set of surgeries for his recently discovered cancer. At the time Gerv was 20 or so, and had less than a 50% chance of reaching 35. He was remarkably upbeat.
The second thing that immediately became clear was Gerv’s faith, which was the bedrock of his response to his cancer. As a result the scar was a visual marker that led straight to a discussion of faith. This was the organizing principle of Gerv’s life, and nearly everything he did followed from his interpretation of how he should express his faith.
Eventually Gerv felt called to live his faith by publicly judging others in politely stated but damning terms. His contributions to expanding the Mozilla community would eventually become shadowed by behaviors that made it more difficult for people to participate. But in 2001 all of this was far in the future.
Gerv was a wildly active and effective contributor almost from the moment he chose Mozilla as his university-era open source project. He started as a volunteer in January 2000, doing QA for early Gecko builds in return for plushies, including an early program called the Gecko BugAThon. (With gratitude to the Internet Archive for its work archiving digital history and making it publicly available.)
Gerv had many roles over the years, from volunteer to mostly-volunteer to part-time, to full-time, and back again. When he went back to student life to attend Bible College, he worked a few hours a week, and many more during breaks. In 2009 or so, he became a full time employee and remained one until early 2018 when it became clear his cancer was entering a new and final stage.
Gerv’s work varied over the years. After his start in QA, Gerv did trademark work, a ton of FLOSS licensing work, supported Thunderbird, supported Bugzilla, Certificate Authority work, policy work and set up the MOSS grant program, to name a few areas. Gerv had a remarkable ability to get things done. In the early years, Gerv was also an active ambassador for Mozilla, and many Mozillians found their way into the project during this period because of Gerv.
Gerv’s work life was interspersed with a series of surgeries and radiation as new tumors appeared. Gerv would methodically inform everyone he would be away for a few weeks, and we would know he had some sort of major treatment coming up.
Gerv’s default approach was to see things in binary terms — yes or no, black or white, on or off, one or zero. Over the years I worked with him to moderate this trait so that he could better appreciate nuance and the many “gray” areas on complex topics. Gerv challenged me, infuriated me, impressed me, enraged me, surprised me. He developed a greater ability to work with ambiguity, which impressed me.
Gerv's faith did not have ambiguity, at least none that I ever saw. Gerv was crisp. He had very precise views about marriage, sex, gender and related topics. He was adamant that his interpretation was correct, and that his interpretation should be encoded into law. These views made their way into the Mozilla environment. They have been traumatic and damaging, both to individuals and to Mozilla overall.
The last time I saw Gerv was at FOSDEM, Feb 3 and 4. I had seen Gerv only a few months before in December and I was shocked at the change in those few months. Gerv must have been feeling quite poorly, since his announcement about preparing for the end was made on Feb 16. In many ways, FOSDEM is a fitting final event for Gerv — free software, in the heart of Europe, where impassioned volunteer communities build FLOSS projects together.
To memorialize Gerv’s passing, it is fitting that we remember all of Gerv — the full person, good and bad, the damage and trauma he caused, as well as his many positive contributions. Any other view is sentimental. We should be clear-eyed, acknowledge the problems, and appreciate the positive contributions. Gerv came to Mozilla long before we were successful or had much to offer besides our goals and our open source foundations. As Gerv put it, he’s gone home now, leaving untold memories around the FLOSS world.
August 07, 2018 06:19 PM
July 12, 2018
TL;DR: Is there research bringing together Software Analysis and Machine Translation to yield Machine Localization of Software?
I’m Telling You, There Is No Word For ‘Yes’ Or ‘No’ In Irish
from Brendan Caldwell
The art of localizing a piece of software with a Yes button is to know what that button will do. This is an example of software UI that makes assumptions about language that hold for English, but might not for other languages. A more frequent example, in terms of both the UI patterns and the languages it affects, is piecing together text and UI controls:
In the localization tool, you'll find each of those entries as individual strings. The localizer will recognize that they're part of one flow, and will move fragments from the shared string to the drop-down as needed. Merely translating the individual segments is not going to be a proper localization of that piece of UI.
If we were to build a rule-based machine localization system, we’d find rules like
Now that’s rule-based, and it’d be tedious to maintain these rules. Neural Machine Translation (NMT) has all the buzz now, and Machine Learning in general. There is plenty of research that improves how NMT systems learn about the context of the sentence they’re translating. But that’s all text.
It’d be awesome if we could bring Software Analysis into the mix, and train NMT to localize software instead of translating fragments.
For Firefox, could one train on English and localized DOM? Could a similar approach work for Android's XML layouts? For projects with automated screenshots, could one train on those? Is there enough software out there to successfully train a neural network?
Do you know of existing research in this direction?
July 12, 2018 01:25 PM
September 16, 2017
This week I had the opportunity to share Mozilla’s vision for an Internet that is open and accessible to all with the audience at MWC Americas.
I took this opportunity because we are at a pivotal point in the debate between the FCC, companies, and users over the FCC’s proposal to roll back protections for net neutrality. Net neutrality is a key part of ensuring freedom of choice to access content and services for consumers.
Earlier this week Mozilla’s Heather West wrote a letter to FCC Chairman Ajit Pai highlighting how net neutrality has fueled innovation in Silicon Valley and can do so still across the United States.
The FCC claims these protections hamper investment and are bad for business. And they may vote to end them as early as October. Chairman Pai calls his rule rollback “restoring internet freedom” but that’s really the freedom of the 1% to make decisions that limit the rest of the population.
At Mozilla we believe the current rules provide vital protections to ensure that ISPs don't act as gatekeepers for online content and services. Millions of people commented on the FCC docket, including those who commented through Mozilla's portal, arguing that removing these core protections will hurt consumers and small businesses alike.
Mozilla is also very much focused on the issues preventing people from coming online beyond the United States. Before addressing the situation in the U.S., journalist Rob Pegoraro asked me what we discovered in the research we recently funded in seven other countries into the impact of zero rating on Internet use:
(Video courtesy: GSMA)
If you happen to be in San Francisco on Monday 18th September please consider joining Mozilla and the Internet Archive for a special night: The Battle to Save Net Neutrality. Tickets are available here.
You’ll be able to watch a discussion featuring former FCC Chairman Tom Wheeler; Representative Ro Khanna; Mozilla Chief Legal and Business Officer Denelle Dixon; Amy Aniobi, Supervising Producer, Insecure (HBO); Luisa Leschin, Co-Executive Producer/Head Writer, Just Add Magic (Amazon); Malkia Cyril, Executive Director of the Center for Media Justice; and Dane Jasper, CEO and Co-Founder of Sonic. The panel will be moderated by Gigi Sohn, Mozilla Tech Policy Fellow and former Counselor to Chairman Wheeler. It will discuss how net neutrality promotes democratic values, social justice and economic opportunity, what the current threats are, and what the public can do to preserve it.
September 16, 2017 04:00 AM
August 18, 2017
For the past year and a half I have been serving as one of two co-chairs of the U.S. Commerce Department Digital Economy Board of Advisors. The Board was appointed in March 2016 by then-Secretary of Commerce Penny Pritzker to serve a two-year term. On Thursday I sent the letter below to Secretary Ross.
Dear Secretary Ross,
I am resigning from my position as a member and co-chair of the Commerce Department’s Digital Economy Board of Advisors, effective immediately.
It is the responsibility of leaders to take action and lift up each and every American. Our leaders must unequivocally denounce bigotry, racism, sexism, hate, and violence.
The digital economy is fundamental to creating an economy that offers opportunity to all Americans. It has been an honor to serve as member and co-chair of this board and to work with the Commerce Department staff.
Sincerely,
Mitchell Baker
Executive Chairwoman
Mozilla
August 18, 2017 07:12 PM
April 28, 2017
Today, I’m thrilled to announce that Mohamed Nanabhay and Nicole Wong have joined the Mozilla Foundation Board of Directors.
Over the last few years, we've been working to expand the boards for both the Mozilla Foundation and the Mozilla Corporation. Our goals for the Foundation board roles were to grow Mozilla's capacity to move our mission forward; expand the number and diversity of people on our boards; and add specific skills in areas related to movement building and organizational excellence. Adding Mohamed and Nicole represents a significant move forward on these goals.
We met Mohamed about seven years ago through former board member and then Creative Commons CEO Joi Ito. Mohamed was at Al Jazeera at the time and hosted one of Mozilla’s first Open News fellows. Mohamed Nanabhay currently serves as the Deputy CEO of the Media Development Investment Fund (MDIF), which invests in independent media around the world providing the news, information and debate that people need to build free, thriving societies.
Nicole is an attorney specializing in Internet, media and intellectual property law. She served as President Obama’s deputy chief technology officer (CTO) and has also worked as the vice president and deputy general counsel at Google to arbitrate issues of censorship. Nicole has already been active in helping Mozilla set up a new fellows program gathering people who have worked in government on progressive tech policy. That program launches in June.
Talented and dedicated people are the key to building an Internet as a global public resource that is open and accessible to all. Nicole and Mohamed bring expertise, dedication and new perspectives to Mozilla. I am honored and proud to have them as our newest Board members.
Please join me in welcoming Mohamed and Nicole to the Board. You can read more about why Mohamed chose to join the Board here, and why Nicole joined us here.
Mitchell
April 28, 2017 08:29 PM
March 23, 2017
I’m going to just recreate blame, he said. It’s going to be easy, he said.
We have a project to migrate the localization of Firefox to one repository for all channels, nick-named cross-channel, or x-channel in short. The plan is to create one repository that holds all the en-US strings we need for Firefox and friends on all channels. One repository to rule them all, if you wish. So you need to get the contents of mozilla-central, comm-central, *-aurora, *-beta, *-release, and also some of *-esr?? together in one repository, with, say, one toolkit/chrome/global/customizeToolbar.dtd file that has all the strings that are used by any of the apps on any branch.
We do have some experience with merging the content of localization files as part of l10n-merge which is run at Firefox build time. So this shouldn’t be too hard, right?
Enter version control, and the fact that quite a few of our localizers are actually following the development of Firefox upstream, patch by patch. That they’re trying to find the original bug if there’s an issue or a question. So, it’d be nice to have the history and blame in the resulting repository reflect what’s going on in mozilla-central and its dozen siblings.
Can’t we just hg convert and be done with it? Sadly, that only converts one DAG into another hg DAG, and we have a dozen. We have a dozen heads, and we want a single head in the resulting repository.
Thus, I’m working on creating that repository. One side of the task is to update that target repository as we see updates to our 12 original heads. I’m pretty close to that one.
The other task is to create a good starting point. Or, good enough. Maybe if we could just create a repo that had the same blame as we have right now? Like, not the hex or integer revisions, but annotate to the right commit message etc? That’s easy, right? Well, I thought it was, and now I’m learning.
To understand the challenges here, one needs to understand the data we’re throwing at any algorithm we write, and the mercurial code that creates the actual repository.
As of FIREFOX_AURORA_45_BASE, just the blame for the localized files for Firefox and Firefox for Android includes 2597 hg revisions. And that's not even getting CVS history, but just what's in our usual hg repository. Also, not including comm-central in that number. If that history was linear, things would probably be pretty easy. At least, I blame the problems I see in blame on things not being linear.
So, how non-linear is that history? The first attempt is to look at the revision set with hg log -G -r .... That creates a graph where the maximum number of parents of a single changeset is at 1465. Yikes. We can't replay that history in the target repository, as hg commits can only have 2 parents. Also, that's clearly not real, we've never had that many parallel threads of development. Looking at the underlying mercurial code, it's showing all reachable roots as parents of a changeset, if you have a sparse graph. That is, it gives you all possible connections through the underlying full graph to the nodes in your selection. But that's not what we're interested in. We're interested in the graph of just our nodes, going just through our nodes.
In a first step, I wrote code that removes all grandchildren from our parents. That reduces the maximum number of parents to 26. Much better, but still bad. At least it’s at a size where I can start to use graphviz to create actual visuals to inspect and analyze. Yes, I can graph that graph.
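For the curious, here's a rough sketch of that pruning step, generalized to a full transitive reduction of each node's parent set; the graph is assumed to be a plain dict from changeset to its selected parents, and a real repository would need an iterative traversal plus caching instead of naive recursion:
def ancestors(graph, node, seen=None):
    # All nodes reachable from `node` through parent edges.
    seen = set() if seen is None else seen
    for parent in graph.get(node, set()):
        if parent not in seen:
            seen.add(parent)
            ancestors(graph, parent, seen)
    return seen

def prune_redundant_parents(graph):
    pruned = {}
    for node, parents in graph.items():
        keep = set(parents)
        for parent in parents:
            # Drop the edge if this parent is already reachable via a sibling.
            if any(parent in ancestors(graph, other)
                   for other in parents if other != parent):
                keep.discard(parent)
        pruned[node] = keep
    return pruned

graph = {"d": {"a", "b", "c"}, "c": {"b"}, "b": {"a"}, "a": set()}
print(prune_redundant_parents(graph))  # d keeps only c; a and b are reachable through it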
The resulting graph has a few features that are actually already real. mozilla-central has multiple roots. One is the initial hg import of the Firefox code. Another is including Firefox for Android in mozilla-central, which used to be an independent repository. Yet another is the merge of services/sync. And then I have two heads, which isn't much of a problem, it's just that their merge commit didn't create anything to blame for, and thus doesn't show up in my graph. Easy to get to, too.
Looking at a subset of the current graph, it’s clear that there are more arcs to remove:
Anytime you have an arc that just leap-frogs to an ancestor, you can safely remove it. I indicated some in the graph above, and you'll find more – I was just tired of annotating in Preview. As said before, I already did that for grandchildren. Writing this post, I realize that it's probably easy enough to do it for great-grandchildren, too. But it's also clear from the full graph that that algorithm probably won't scale up. It seems I need to find a good spot at which to write an explicit loop detection.
This endeavour sounds a bit academic at first: why would you care? There are two reasons:
Blame in mercurial depends on the diff that's stored in the backend, and the diff depends on the previous content. So replaying the blame in some way out of band doesn't actually create the same blame. My current algorithm is to just add the final lines one by one to the files, and commit. Whitespace and recurring lines get all confused by that algorithm, sadly.
Also, this isn’t a one-time effort. The set of files we need to expose in the target depends on the configuration, and often we fix the configuration of Firefox l10n way after the initial landing of the files to localize. So having a sound code-base to catch up on missed history is an important step to make the update algorithm robust. Which is really important to get it run in automation.
PS: The tune for this post is “That Smell” by Lynyrd Skynyrd.
March 23, 2017 02:01 PM
March 13, 2017
There are a set of topics that are important to Mozilla and to what we stand for in the world — healthy communities, global communities, multiculturalism, diversity, tolerance, inclusion, empathy, collaboration, technology for shared good and social benefit. I spoke about them at the Mozilla All Hands in December, if you want to (re)listen to the talk you can find it here. The sections where I talk about these things are at the beginning, and also starting at about the 14:30 minute mark.
These topics are a key aspect of Mozilla’s worldview. However, we have not set them out officially as part of who we are, what we stand for and how we describe ourselves publicly. I’m feeling a deep need to do so.
My goal is to develop a small set of principles about these aspects of Mozilla's worldview. We have clear principles stating that Mozilla stands for topics such as security and free and open source software (principles 4 and 7 of the Manifesto). Similarly clear principles about topics such as global communities and multiculturalism will serve us well as we go forward. They will also give us guidance as to the scope and public voice of Mozilla, spanning official communications from Mozilla to the unofficial ways each of us describes Mozilla.
Currently, I’m working on a first draft of the principles. We are working quickly, as quickly as we can have rich discussions and community-wide participation. If you would like to be involved and can potentially spend some hours reviewing and providing input please sign up here. Jascha and Jane are supporting me in managing this important project.
I’ll provide updates as we go forward.
March 13, 2017 06:28 PM
March 03, 2017
Or, how to change everything and nobody sees a difference.
Heads up: All I’m writing about here is running on non-web-facing VMs behind VPN.
tl;dr: I changed 5 VMs, landed 76 changesets in 7 repositories, resolving 12 bugs, got two issues in docker fixed, and took a couple of days of downtime. If automation is your cup of tea, I have some open questions at the end, too.
To set the stage: Behind the scenes of the elmo website, there’s a system that generates the data that it shows. That system consists of two additional VMs, which help with the automation.
One is nick-named a10n, and is responsible for polling all those mercurial repositories that we use for l10n, and for updating the elmo database with information about these repositories as it comes in. elmo basically keeps a copy of the mercurial metadata for quicker access.
The other is running buildbot to do the actual data collection jobs about the l10n status in our source repositories. This machine runs both a master and one slave (the actual workhorse, not my naming).
This latter machine is an old VM, on an old OS, with old Python (2.6); it never had real IT support, and is all around historic. And it needed to go.
With the help of IT, I had a new VM, with a new shiny python 2.7.x, and a new storage. Something that can actually run current versions of compare-locales, too. So I had to create an update for:
- Python 2.6 → Python 2.7.x
- globally installed python modules → virtualenv
- Django 1.4.18 → Django 1.8.x
- Ubuntu → CentOS
- Mercurial 3.7.3 → Mercurial 4.0.1 and hglib
- individual local clones → unified local clones
- No working stage → docker-compose up
At the same time, we also changed hg.m.o from http to https all over the place, which also required a handful of code changes.
One thing that I did not change is buildbot. I’m using a heavily customized version of buildbot 0.7.12, which is incompatible with later buildbot changes. So I’m tied to my branch of 0.7.12 for now, and with that to Twisted 8.2.0. That will change, but in a different blog post.
Unified Repositories
One thing I wanted and needed for a long time was to use unified clones of our mercurial repositories. Aside from the obvious win in terms of disk usage, it allows using mercurial directly to create a diff from a revision that's only on aurora against a revision that's only on beta. Sadly, I did think otherwise when I wrote the first parts of elmo and the automation behind it, often falling back to default instead of an actual hash revision, if I didn't know anything ad-hoc. So that had to go, and it required a surprising amount of changes. I also changed the way that comparisons are triggered, making them fully reproducible. They also got more robust. I used to run hg id -ir . to get the revision, which worked OK, unless you had extension errors in stdout/stderr. Meh. Good that that's gone.
As I noted, the unified repositories also benefit doing diffs, which is one of the features of elmo for reviewing localizations. Now that we can just use plain mercurial to get those diffs, I could remove a bunch of code that created diffs between aurora and beta by creating diffs between each head and some ancestor, and then sticking those diffs back together. Good that that’s gone.
Testing
Testing an automation with that many moving parts is hard. Some things can be tested via unit tests, but more often, you just need integration tests. I still have to find a way to write automated integration tests, but even manual integration tests require a ton of set-up:
- elmo
- MySQL
- ElasticSearch
- RabbitMQ
- Mercurial upstream repositories
- Mercurial web server
- a10n get-pushes poller
- a10n data ingestion worker
- Buildbot master
- Buildbot slave
Doing this manually is evil, and on Macs, it's not even possible, because Twisted 8.2.0 doesn't build anymore. I used to have a script that did many of these things, but that's … you guessed it. Good that that's gone. Now I have a docker-compose test setup that has most things running with just a docker-compose up. I'm still running elmo and MySQL on my host machine; fixing that is for another day. Also, I haven't found a good way to do initial project setup like database creations. Anyway, after finding a couple of bugs in docker, this system now fires up quickly and lets me do various changes and see how they pass through the system. One particularly nice artifact is that the output of docker-compose is actually all the logs together in one stream. So as you're pushing things through the system, you just have one log to watch.
As part of this work, I also greatly simplified the code structure, and moved the buildbot integration from three repositories into one. Good that those are gone.
snafus
Sadly there were a few bits and pieces where my local testing didn’t help:
Changing the URL schemes for hg.m.o to https alongside this change triggered a couple of problems where Twisted 8.2 and modern Python/OpenSSL can't get a connection up. Had to replace the requests to websites with synchronous urllib2.urlopen calls.
Installing mercurial in a virtualenv to be used via hglib is good, but WSGI doesn't activate the virtualenv, and thus PATH isn't set. My fix still needs some server-side changes to work.
I didn’t have enough local testing for the things that Thunderbird needs. That left that setup burning for longer than I anticipated. The fix wasn’t hard, just badly timed.
Every now and then, Django 1.8.x and MySQL decide that it's a good idea to throw away the connection, and die badly. In the case of long-running automation jobs, that's really hard to prevent, in particular because I still haven't fully understood what change actually made that happen, and what the right fix is. I just plaster connection.close() into every other function, and see if it stops dying.
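A slightly less invasive variant of that plastering, sketched here as an assumption rather than what actually landed, is to wrap the long-running job entry points so stale connections are discarded before and after each run; close_old_connections() ships with django.db since Django 1.6:
import functools
from django.db import close_old_connections

def fresh_db_connection(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        close_old_connections()  # drop connections that are broken or past their age limit
        try:
            return func(*args, **kwargs)
        finally:
            close_old_connections()
    return wrapper

@fresh_db_connection
def compare_job(tree, locale):  # hypothetical automation job entry point
    ...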
On Saturday morning I woke up, and the automation didn’t process Firefox for a locale on aurora. I freaked out, and added tons of logging. Good logging that is. Best logging. Found a different bug. Also found out that the locale was Belarus, and that wasn’t part of the build on Saturday. Hit my head against a wall or two.
Said logging made uncaught exceptions in some parts of the code actually show up in logs, and I discovered that I hadn't tested my work against bad configurations. And we have those: Thunderbird just builds everything on central, regardless of whether the repositories it should use for that exist or not. I'm not really happy yet with the way I fixed this.
Open Questions
- Anyone got taskcluster running on something resembling docker-compose for local testing and development? You know, to get off of buildbot.
- Initial setup steps for the docker-compose staging environment are best done … ?
- Test https connections in docker-compose? Can I? Which error cases would that cover?
March 03, 2017 08:01 PM
January 31, 2017
Today I want to say thank you to Reid Hoffman for 11 years as a Mozilla Corporation board member. Reid’s normal “tour of duty” on a board is much shorter. Reid joined Mozilla as an expression of his commitment to the Open Internet and the Mozilla mission, and he’s demonstrated that regularly. Almost five years ago I asked Reid if he would remain on the Mozilla board even though he had already been a member for six years. Reid agreed. When Chris Beard joined us Reid agreed to serve another two years in order to help Chris get settled and prime Mozilla for the new era.
Mozilla is in a radically better place today than we were two, three, or five years ago, and is poised for a next phase of growth and influence. Take a look at the Annual Report we published Dec 1, 2016 to get a picture of our financial and operational health. Or look at The Glass Room, or our first Internet Health Report, or the successful launch of Firefox Focus (or Walt Mossberg’s article about Mozilla) to see what we’ve done the last few months.
And so after an extended “tour of duty” Reid is leaving the Mozilla Corporation board and becoming an Emeritus board member. He remains a close friend and champion of Mozilla and the Open Internet. He continues to help identify technologists, entrepreneurs, and allies who would be a good fit to join Mozilla, including at the board level. He also continues to meet with and provide support to our key executives.
A heartfelt thank you to Reid.
January 31, 2017 05:06 PM
December 05, 2016
This post was originally posted on the Mozilla.org website.
Helen Turvey, new Mozilla Foundation Board member
Today, we’re welcoming Helen Turvey as a new member of the Mozilla Foundation Board of Directors. Helen is the CEO of the Shuttleworth Foundation. Her focus on philanthropy and openness throughout her career makes her a great addition to our Board.
Throughout 2016, we have been focused on board development for both the Mozilla Foundation and the Mozilla Corporation boards of directors. Our recruiting efforts for board members have been geared towards building a diverse group of people who embody the values and mission that bring Mozilla to life. After extensive conversations, it is clear that Helen brings the experience, expertise and approach that we seek for the Mozilla Foundation Board.
Helen has spent the past two decades working to make philanthropy better, over half of that time with the Shuttleworth Foundation, an organization that funds people engaged in social change and helps them have a sustained impact. During her time with the Shuttleworth Foundation, Helen has driven the evolution from traditional funder to the current co-investment Fellowship model.
Helen was educated in Europe, South America and the Middle East and has 15 years of experience working with international NGOs and agencies. She is driven by the belief that openness has benefits beyond the obvious, and that it offers huge value to education, economies and communities in both the developed and developing worlds.
Helen’s contribution to Mozilla has a long history: Helen chaired the digital literacy alliance that we ran in the UK in 2013 and 2014; she’s played a key role in re-imagining MozFest; and she’s been an active advisor to the Mozilla Foundation executive team during the development of the Mozilla Foundation ‘Fuel the Movement’ 3-year plan.
Please join me in welcoming Helen Turvey to the Mozilla Foundation Board of Directors.
Mitchell
You can read Helen’s message about why she’s joining Mozilla here.
Background:
Twitter: @helenturvey
December 05, 2016 02:30 AM
October 05, 2014
I’ve given the team pages on l10n.mozilla.org a good whack in the past few days. Time to share some details and get feedback before I roll it out.
The gist of it: more data in less screen space. I folded things into rows and made the rows slimmer. Sign-off status is displayed better, too; I separated status from progress and actions, and actions are now ordered chronologically.
The details? Well, see my recording where I walk you through:
View it on youtube.
Comments here or in bug 1077988.
October 05, 2014 02:02 PM
June 17, 2014
We have a lot of data around localizations, but it’s hard to know what people might be looking for.
I just switched a new feature live: edit your own dashboard.
You can select branches of products, as well as the localizations you’re interested in, and get the data you want.
Say you’re looking for mobile and India. You’d want Firefox OS and Firefox for Android, aka Fennec. The latter is actively localized on aurora, so you’d want the gaia tree and fennec_aurora. You want Assamese, Bengali, Gujarati… and 9 other languages. Select gu and pa, too, ’cause why not.
Or are you keen on Desktop in Latin America? Again we’re looking at Aurora, so fx_aurora is our tree of choice this time. Locales are Spanish in its American variants, and Brazilian Portuguese.
Select generously; you can always reduce your selection through the controls on the right side of the resulting dashboard.
Play around, and compare the Status and History columns. Try to find stories, and share them in the comments below.
A bit more detail on fx-aurora vs Firefox 32: right now, Firefox 32 is on the Aurora channel and repository branch, so selecting either gives you the same data today. In six weeks, though, 32 is going to be on beta, so if you bookmark a link, it’d give you different data then. That’s why you can only select one or the other.
June 17, 2014 03:38 PM
May 22, 2014
Today we’re launching an update to l10n.mozilla.org (elmo).
Team pages and the project overview tables now contain sparklines, indicating the progress over the past 50 days.
Want to see how a localization team is doing? Now with 100% more self-serve.
If the sparklines go up like so, the localization is making good progress. Each spark is an update (either en-US or the locale), so sparks going up frequently show that the team is actively working on this one.
If the sparklines are more like this, then, well, not so much.
The sparklines always link to an interactive page, where you can get more details, and look at smaller or larger time windows for that project and that locale.
You should also look at the bugzilla section. A couple of bugs with recent activity is good. More bugs with no activity for a long time, not so much.
Known issues: we still have localizations showing status for central/nightly, even though those teams don’t work on central. Some teams do, but not all. Also, the sparklines start at some point within the past 50 days; that’s because we don’t compute the status before that point. We could.
May 22, 2014 11:44 AM
September 19, 2013
Or, how I made converting gaia to gaia-l10n suck less.
Background: For Firefox OS, we’re exposing a modified repository to localizers, so that it’s easier to find out what to work on, and to get support from the l10n dashboards. Files in the main gaia repository on github like apps/browser/locales/browser.en-US.properties should become apps/browser/browser.properties, and the localizable sections in manifest.webapp,
{
  …
  "locales": {
    "en-US": {
      "name": "Browser",
      "description": "Gaia Web Browser"
    }
  …
}
are exposed in manifest.properties as
name: Browser
description: Gaia Web Browser
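For illustration, here’s a standalone sketch of those two rewrites; it’s not the actual gaiaconv code, and the function names and the regular expression are made up.

# Sketch of the gaia -> gaia-l10n rewrites described above.
import json
import re
from collections import OrderedDict

def map_filename(path, locale='en-US'):
    # apps/browser/locales/browser.en-US.properties -> apps/browser/browser.properties
    return re.sub(r'locales/([^/]+)\.%s\.properties$' % re.escape(locale),
                  r'\1.properties', path)

def manifest_to_properties(manifest_json, locale='en-US'):
    # Pull the en-US strings out of manifest.webapp and emit "key: value" lines.
    manifest = json.loads(manifest_json, object_pairs_hook=OrderedDict)
    entries = manifest.get('locales', {}).get(locale, {})
    return ''.join('%s: %s\n' % (key, value) for key, value in entries.items())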
We’re also not supporting git on the l10n dashboard yet, so we need hg repositories.
I haven’t come across a competitor to hg convert on the git side yet, so I looked on the mercurial side of life. I started by glancing at the code in hgext/convert in the upstream mercurial code. That does a host of things to get parents and graphs right, and I didn’t feel like replicating that. It doesn’t offer hooks for dynamic file maps, though, let alone content rewriting. But it’s python, and it’s open-source. So I forked it.
With hg convert. Isn’t that meta? That gives me a good path to update the extension with future updates to upstream mercurial. I started out with a conversion of mercurial 2.7.1, then removed all the stuff I don’t need, like bzr support etc. Then I made the mercurial code do what I need for gaia. I had to disable some checks that try to avoid commits that don’t actually change the contents, because I don’t mind that happening. And last but not least I added the filemap and the shamap of the initial conversion of hgext/convert, so that future updates don’t depend on my local disk.
Now I could just run hg gaiaconv and get what I want. Enter the legacy repositories for en-US. We only want fast-forward merges in hg, and in the conversion to git. No history editing allowed. But as you can probably guess, the new history is completely incompatible with the old, from changeset one. But I don’t mind, I hacked that.
I did run the regular hg gaiaconv up to revision 21000 of the integration/gaia-central repository. That ended up with the graph for revision 4af36780fb5d.
I pulled the old conversion for v1-train, which is the graph for revision ad14a618e815.
Then I did a no-op merge of the old graph into the new one.
That’s all good, but now future conversions via gaiaconv would still pick up the non-merged revision. Well, unless one just edits the generated shamap, and replaces all references to 4af36780fb5d with cfb28f851111. And yes, that actually works.
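That edit is simple enough to sketch; assuming the convert extension’s usual one-mapping-per-line shamap format, and using the short ids above as stand-ins for the full hashes, it’s just a string replacement over the file.

# Sketch: rewrite the shamap in place so future conversions build on the
# merged revision (short ids stand in for the full 40-character hashes).
OLD = '4af36780fb5d'
NEW = 'cfb28f851111'

def rewrite_shamap(path, old=OLD, new=NEW):
    with open(path) as f:
        lines = [line.replace(old, new) for line in f]
    with open(path, 'w') as f:
        f.writelines(lines)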
Tadaaa, a fully automated conversion process, and only forward merges.
Repositories involved in this post:
September 19, 2013 02:49 PM
February 15, 2013
Let me share some recent revelations I had. It all started with the infamous Berlin airport. Not the nice one in Tegel, but the BBI disaster. The one we thought we’d open last year, and now we don’t know which year.
Part of the news coverage here in Germany was all about how they didn’t do any risk analysis and are doomed, and how that other project, for the Olympics in London, did do risk analysis and came in under budget, ahead of time.
So what’s good for the Olympics can’t be bad for Firefox, and I started figuring out the math behind our risk to ship Firefox, at a given time, with loads of localizations. How likely is it that we’ll make it?
Interestingly enough, the same algorithm can also be applied to a set of features that are scheduled for a particular Firefox release. Locales, features, blockers, product-managers, developers, all the same thing :-). Any bucket of N things trying to make a single deadline has similar risks. And the same cure. So bear with me. I’ll sprinkle graphs as we go to illustrate. They’ll link to a site that I’ve set up to play with the numbers, reproducing the shown graphs.
The setup is like this: Every single item (a localization, for example) has a risk, and I’m assuming the same risk across the board. I’m trying to do that N times, and I’m interested in how likely it is that I’ll get all of them. And then I evaluate the impact of different numbers of freeze cycles. If you’re like me, and don’t believe any statistics unless they’re done by throwing dice, check out the dice demo.
Anyway, let’s start with 20% risk per locale, no freeze, and up to 100 locales.
Ouch. We’re crossing 50-50 at 3 items already, and anything at scale is a pretty flat zero-chance. Why’s that? What we’re seeing is an exponential decay, the base being 80%, and the power being how often we do that. This is revelation one I had this week.
How can we help this? If only our teams would fail less often? Feel free to play with the numbers, like setting the success rate from 80% to 90%. Better, but the system at large still doesn’t scale. To fight an exponential risk, we need a cure that’s exponential.
Turns out freezes are just that. And that’d be revelation two I had this week. Let’s add up to 5 additional frozen development cycles.
Oh hai. At small scales, even just one frozen cycle kills risks. Three features without freeze have a 50-50 chance, but with just one freeze cycle we’re already at 88%, which is better than the risk of each individual feature. At large scales like we’re having in l10n, 2 freezes control the risk to mostly linear, 3 freezes being pretty solid. If I’m less confident and go down to 70% per locale, 4 or 5 cycles create a winning strategy. In other words, for a base risk of 20-30%, 4-5 freeze cycles make the problem for a localized release scale.
It’s actually intuitive that freezes are (kinda) exponentially good. The math is a tad more complicated, but simplified: if your per-item success rate is 70%, you only have to solve your problem for 30% of your items in the next cycle, and for 9% in the cycle after that. Thus, you’re fighting scale with scale. You can see this in action in the dice demo, which plays through this each time you “throw” the dice.
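Simplified to code, the model above looks roughly like this; a minimal sketch, with a made-up function name, assuming each item gets one extra attempt per frozen cycle and all N items have to make it.

# Minimal model: an item misses only if it misses in every one of the
# (freeze_cycles + 1) attempts, and all n_items have to ship.
def p_all_ship(n_items, risk, freeze_cycles=0):
    p_item = 1 - risk ** (freeze_cycles + 1)
    return p_item ** n_items

# Reproduces the numbers above (roughly):
#   p_all_ship(3, 0.2)     -> 0.512   (three items, no freeze, about 50-50)
#   p_all_ship(100, 0.2)   -> ~2e-10  (100 items, no freeze, flat zero)
#   p_all_ship(3, 0.2, 1)  -> 0.885   (one freeze cycle, the ~88% figure)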
Now onwards to my third revelation while looking at this data. Features and blockers are just like localizations. Going into the rapid release cycle with Firefox 5 etc., we made two rules:
- Feature-freeze and string-freeze are on migration day from central to aurora
- Features not making the freeze take the next train
That worked fine for a while, but since then, Mozilla has grown as an organization. We’ve also built out dependencies inside our organization that make us want particular features in particular releases. That’s actually a good situation to be in. It’s good that people care, and it’s good that we’re working on things that have organizational context.
But this changed the risks in our release cycle. We started off having a single risk of exponential scale after the migration date (l10n). Today, we have features going into the cycle, and localizations thereof. At this point, having feature-freeze and string-freeze be the same thing becomes a risk for the release cycle at large. We should think about how to separate the two to mitigate the risk for each effectively, and ship awesome and localized software.
I learned quite a bit looking at our risks; I hope I could share some of that.
February 15, 2013 03:26 PM
September 27, 2012
Language packs are add-ons that you can install to add additional localizations to our desktop applications.
Starting with tomorrow’s nightly, and thus following the Firefox 18 train, language packs will be restartless. That was bug 677092, landed as 812d0ba83175.
To change your UI language, you just need to install a language pack, set your language (*), and open a new window. This also works for updates to an installed language pack. Opening a new window is the workaround for not having a reload button on the chrome window.
The actual patch turned out to be one line to make language packs restartless, and one line so that they don’t try to call into bootstrap.js. I was optimistic that the chrome registry was already working, and rightfully so. There are no changes to the language packs themselves.
Tests were tricky, but Blair talked me through most of it, thanks for that.
(*) Language switching UI is bug 377881, which has a mock-up for those interested. Do not be scared, it only shows if you have language packs installed.
September 27, 2012 11:04 AM