Existing machine translation software must wait for pauses in speech. The new software to be developed by the National Institute of Information and Communications Technology will be able to start interpreting while a person is talking, making for a smoother conversation, officials say.
Trained on reams of text in languages such as Japanese, English and Chinese, the prototype software will go through a machine-learning process, comparing words and phrases with those that come later in a sentence.
In this way, it will learn to start interpreting speech a few seconds after the speaker starts talking.
AI-powered interpretation could find its way to a range of venues, including hotels, tourist attractions, business meetings and international conferences.
The Ministry of Internal Affairs and Communications believes the tool can take the lead in this field if a practical version is made available by 2025.
After the software is developed, the institute will open up any patented technologies to the private sector. Mobile phone carriers, electronics makers and others will be able to use them to develop their own products and services.
The ministry will request 2 billion yen (US$18.9 million) for the effort in the fiscal 2020 budget.