The current model is a fine-tuned Seq2Seq model, whereas my model is a transformer-based machine translation model.
One of the drawbacks of Seq2Seq modeling is that, since recurrent neural networks (RNNs) are used to build the encoder and the decoder, the encoder must compress the entire source sentence into a fixed-length representation. As a result, performance degrades as sentences get longer.
Transformers, on the other hand, use attention to process all the tokens of the source sentence in parallel rather than one at a time, which makes them much more efficient. The attention mechanism lets the decoder see the entire input sequence at once.
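To illustrate the point about the decoder seeing the whole input at once, here is a minimal NumPy sketch of scaled dot-product attention, the core transformer operation. Each row of the attention-weight matrix spans every input position, so no fixed-length bottleneck is needed (the sequence length, dimensions, and random inputs below are illustrative, not taken from either model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Every query attends to every key: the full input sequence is visible
    # at once, unlike an RNN encoder that squeezes the sentence into one state.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # (len_q, len_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                    # a toy 5-token "sentence"
X = rng.normal(size=(seq_len, d_model))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape, w.shape)                  # (5, 8) (5, 5)
```

Each of the 5 output vectors is a weighted mix of all 5 input positions, which is exactly the property that lets transformers handle long sentences better than an RNN-based Seq2Seq model.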
For example, the translation of 'overgrown weeds' by each model:
- Current model: 'muy de la ciudad', which Google Translate renders back as 'very from the city'
- My model: 'Hierbas sobrecrecidas', which Google Translate renders back as 'overgrown grasses'
The results from my model are closer to accurate than those of the current model. With further fine-tuning, I believe the generated results will become more accurate still.
Could we use the Gemini API (with an API key) and have it perform the translation?
I have created a generative AI model that translates sentences from Hindi to English.
The same approach can be applied to a variety of languages in one go and can be integrated into our code.
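A minimal sketch of what a Gemini-backed translator could look like, assuming the `google-generativeai` Python package and a `GEMINI_API_KEY` environment variable; the model name `gemini-1.5-flash` and the helper names below are my own assumptions, not part of the existing code:

```python
import os

def build_translation_prompt(text, source="Hindi", target="English"):
    """Compose a plain instruction prompt for an LLM translator."""
    return (f"Translate the following {source} sentence into {target}. "
            f"Reply with only the translation.\n\n{text}")

def translate(text, source="Hindi", target="English"):
    # Assumes `pip install google-generativeai` and a valid API key.
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        build_translation_prompt(text, source, target))
    return response.text.strip()

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    print(translate("overgrown weeds", source="English", target="Spanish"))
```

Because the language pair is just a parameter of the prompt, the same function covers many languages at once, which is the "variety of languages in one go" idea above.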