From the code and the algorithm presented (https://github.com/twitter/torch-distlearn/blob/master/lua/AllReduceEA.md), it seems like the all-reduce step involves synchronization between workers.
The algorithm published in the original paper does not require such synchronization (sec 3.1: each worker maintains its own clock, t_i). Is AllReduceEA then an implementation of synchronous EASGD, for which only the formulation was presented in the paper (sec 3)?
If so, are there any comparisons between synchronous and asynchronous EASGD?
Apologies if I have misunderstood this.
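To make my reading concrete, here is a rough sketch of the distinction I have in mind, written in plain Python with made-up names. It is not the torch-distlearn code, just my understanding of the elastic-averaging step (with alpha = eta * rho from the paper):

```python
import numpy as np

# Rough illustration of my reading of the two variants; all names here are
# hypothetical and only meant to show where the synchronization would occur.

def synchronous_round(workers, center, alpha):
    """What AllReduceEA appears to do: every worker takes part in the same
    elastic-averaging step, so all workers must reach this point together."""
    diffs = [x_i - center for x_i in workers]
    workers = [x_i - alpha * d for x_i, d in zip(workers, diffs)]
    center = center + alpha * sum(diffs)
    return workers, center

def asynchronous_step(t_i, tau, x_i, center, alpha):
    """Paper, sec 3.1: worker i keeps its own clock t_i and, whenever
    t_i % tau == 0, exchanges with the center on its own, without waiting
    for any other worker."""
    if t_i % tau == 0:
        diff = x_i - center
        x_i = x_i - alpha * diff
        center = center + alpha * diff
    return x_i, center

# Toy usage: three workers with 2-d parameters and a shared center variable.
workers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
center = np.zeros(2)
workers, center = synchronous_round(workers, center, alpha=0.1)
```

If I have the sketch above right, the synchronous variant requires a barrier (the all-reduce), while the asynchronous one only needs each worker to talk to the center independently.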