On the difficulty of reading numbers in different languages

This blog post illustrates how difficult it is for a simple seq2seq model to learn how to translate numbers from different languages (e.g. French, English, Chinese, Malay) to their digits (base 10) representation. It is based on the very good deep learning tutorials by Olivier Grisel and Charles Ollion. Note that this is a very simple seq2seq model; cf. fairseq or sockeye for more sophisticated ones.

The training examples are generated with a package that can convert numbers to words in many languages: it takes a given number and converts it to a text string spelling the number in one of many languages. Reading such strings back can be surprisingly hard; in French, for instance, quatre-vingt-onze mille (91,000) has to be interpreted as (4 * 20 + 11) * 1000.

The experiment: we illustrate the convergence of the model to perfect prediction on the test set as a function of the training set size. Faster-increasing accuracy indicates an easier learning task, i.e. the model requires fewer training examples.

To check whether the model is able to acquire some basic arithmetic skills, we have also added the task of translating from hexadecimal to base 10 digits. However, this task is more difficult (implicit base 16, and exponentiation based on the digit position). Considering its poor results, it is unlikely that the model learns any arithmetic at all for performing its translation task.

The best model for each language is checkpointed on validation loss during training:

```python
import numpy as np
from keras.callbacks import ModelCheckpoint

best_model_fname = "{}_simple_seq2seq_checkpoint.h5".format(language)
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
                                save_best_only=True, verbose=0)
history = simple_seq2seq.fit(X_train, np.expand_dims(Y_train, -1),
                             validation_data=(X_validation,
                                              np.expand_dims(Y_validation, -1)),
                             callbacks=[best_model_cb])
```

More on that in later posts…
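The number-to-words package used to generate the data is not named in this excerpt, so here is only a toy, English-only flavor of what such a conversion does for numbers below 100 (illustrative sketch, not the post's actual data-generation code):

```python
# Toy number-to-words conversion, English only, for n < 100.
UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def to_words(n):
    # Numbers under twenty are irregular and looked up directly.
    if n < 20:
        return UNITS[n]
    tens, unit = divmod(n, 10)
    return TENS[tens] + ("-" + UNITS[unit] if unit else "")

print(to_words(91))  # → "ninety-one"
```

Inverting this mapping (words back to digits) is exactly the task the seq2seq model is asked to learn, character by character, without ever seeing the rules.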
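The compositional reading (4 * 20 + 11) * 1000 mentioned above can be checked in a couple of lines; this is just arithmetic for illustration, not anything the model computes explicitly:

```python
# French "quatre-vingt-onze mille": a vigesimal (base-20) sub-expression
# ("quatre-vingt-onze" = four-twenty-eleven = 91) multiplied by "mille" (1000).
quatre_vingt_onze = 4 * 20 + 11
value = quatre_vingt_onze * 1000
print(value)  # → 91000
```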
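To see why the hexadecimal task described above involves an implicit base of 16 and exponentiation by digit position, here is what the model would in effect have to compute (a plain-Python sketch, not part of the post's code):

```python
def hex_to_decimal(s):
    # Each hex digit is weighted by 16 raised to its position from the right,
    # then the weighted digits are summed.
    digits = "0123456789abcdef"
    return sum(digits.index(d) * 16 ** i
               for i, d in enumerate(reversed(s.lower())))

print(hex_to_decimal("2af3"))  # 10995, same as int("2af3", 16)
```

Unlike spelled-out numbers, nothing in the input string hints at the base or the positional weights, which is presumably why the model fares poorly on this task.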
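The experimental protocol of measuring test accuracy as a function of training set size can be sketched as a simple loop; the helper names and the stand-in scoring function below are hypothetical, since the post's actual experiment code is not shown here:

```python
# Hypothetical sketch of a learning-curve experiment: train on n examples,
# record test accuracy, repeat for growing n. A steeper curve means an
# easier task (fewer examples needed to reach perfect prediction).
def learning_curve(train_and_score, sizes):
    """train_and_score(n) trains on n examples and returns test accuracy."""
    return [(n, train_and_score(n)) for n in sizes]

# Toy stand-in scorer: accuracy grows linearly with data, capped at 1.0.
curve = learning_curve(lambda n: min(1.0, n / 8000),
                       [1000, 2000, 4000, 8000, 16000])
print(curve)
```

In the real experiment each call would fit the seq2seq model from scratch on a random subset of size n and evaluate exact-match accuracy on a held-out test set.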