Abstract
The article explores the potential of neural network models in translating English- and German-language texts into Ukrainian. Today, these models form the basis of the most popular machine translation systems, having replaced outdated statistical and rule-based approaches. Thanks to the Transformer architecture, it has become possible to reproduce complex syntactic structures and account for context more effectively. The study analyzes the performance of three systems – DeepL, Google Translate, and ChatGPT – with a comparative overview of their results in translating texts of different genres: scientific, journalistic, and literary. This made it possible to identify the cases in which machine translation systems perform most successfully and those where human intervention is still required. The research presents examples demonstrating both the strengths and weaknesses of the models. It was found that the most accurate results appear in scientific and journalistic materials, where semantic precision is of primary importance. At the same time, the translation of literary works proves far more challenging: the systems often fail to preserve irony, stylistic nuances, or wordplay. It was also emphasized that translation quality depends on the training corpus and the text domain: results are considerably better in familiar domains than in specialized or less common areas. The article concludes that neural network translation models are an important tool in modern translation practice. They save time and help overcome language barriers, but they are still unable to fully replace human translators. The most promising direction is the combination of automatic translation with subsequent human post-editing and quality control.
References
Bahdanau D., Cho K., Bengio Y. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Pp. 1–15.
Johnson M., Schuster M., Le Q. V., et al. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics, 5. 2017. Pp. 339–351.
Müller M., Freitag M., Al-Onaizan Y. Domain Robustness in Neural Machine Translation. Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP, 2019. Pp. 42–50.
Stahlberg F. Neural machine translation: A review. Journal of Artificial Intelligence Research, 69. 2020. Pp. 343–418.
Vaswani A., Shazeer N., Parmar N., et al. Attention Is All You Need. Advances in Neural Information Processing Systems, 30. 2017. Pp. 5998–6008.
Zhang J., Zong C. Neural machine translation: Challenges, progress and future. Science China Technological Sciences, 63(10). 2020. Pp. 2028–2050.
Yurchak I., Khich A., Kychuk O., Oksentiuk V. Capabilities and limitations of large language models. Computer Systems and Network, 6 (2). 2024. Pp. 267–280.
ChatGPT. URL: https://chatgpt.com/ (accessed: 15.07.2025)
DeepL. URL: https://www.deepl.com (accessed: 15.07.2025)
Google Translate. URL: https://translate.google.com (accessed: 15.07.2025)
Rowling J. K. Harry Potter and the Philosopher’s Stone. London: Bloomsbury Publishing, 2014. 342 p.
Bild. URL: https://www.bild.de/politik/inland/schueler-zu-oft-online-forscher-schlagen-alarm-wegensocial-media-sucht-689c4ef7f3b330071984022e (accessed: 15.07.2025)

This work is licensed under a Creative Commons Attribution 4.0 International License.
