ENMF should compute over a batch of users instead of all users #999
-
Referring to the ENMF paper, the loss is always computed over a batch of users.
Replies: 5 comments
-
@rowedenny Thanks for your attention! ENMF is a specific model that differs from other general recommender models.
-
Hi, thanks for your quick response. However, I am afraid that I cannot agree with you. We can investigate it from two aspects:
From these two aspects, I believe the current implementation has a minor issue, though I am not sure whether computing one term over the full set of users while the other is computed over a batch could result in a loss imbalance. Correct me if I have any misunderstandings.
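To make the batch-versus-full-users distinction concrete, here is a minimal NumPy sketch of an ENMF-style non-sampling loss in which both terms are computed over the same batch of users. The function name, tensor shapes, and the `neg_weight` parameter are illustrative assumptions for this sketch, not RecBole's actual API.

```python
import numpy as np

def enmf_batch_loss(p, q_pos, q_all, neg_weight=0.1):
    """Sketch of an ENMF-style non-sampling loss over a BATCH of users.

    p:      (B, d) embeddings of the users in the current batch
    q_pos:  (B, L, d) embeddings of each batch user's positive items
    q_all:  (N, d) embeddings of all items
    neg_weight: uniform negative-instance weight (the paper's c0)
    """
    # Positive part: predictions on the observed (user, item) pairs in the batch.
    pos_scores = np.einsum('bld,bd->bl', q_pos, p)                   # (B, L)
    pos_part = ((1.0 - neg_weight) * pos_scores**2 - 2.0 * pos_scores).sum()
    # Whole-data part: neg_weight * tr((P^T P)(Q^T Q)). Crucially, P here
    # holds only the batch users, matching the positive part above.
    all_part = neg_weight * ((p.T @ p) * (q_all.T @ q_all)).sum()
    return pos_part + all_part
```

The key identity behind the second term is `tr((P^T P)(Q^T Q)) = sum_{u,i} (p_u . q_i)^2`, which is why it must use the same set of users `P` as the positive term: mixing batch users in one term with all users in the other weights the two parts inconsistently.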
-
Hi @rowedenny. After consideration, I think you are right. It is indeed a bug in our implementation. We will fix it as soon as possible. Thanks again for your careful review of the code.
-
If my understanding is correct, we only need to update the … Is that correct?
-
By the way, I may also point out one more minor issue. Recall that the padding item's embedding should be all zeros; however, xavier_uniform_initialization initializes all the embeddings, including the padding item's, unless we use a mask to remove the padding's influence. In short, I assume it is more appropriate to remove the xavier_normal_initialization initialization.
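As a hedged illustration of the fix being suggested here, one alternative to dropping the initializer entirely is to re-zero the padding row after the Xavier-style initialization. The helper name `init_item_embeddings` and the convention that index 0 is the padding id are assumptions for this sketch, not RecBole's actual code.

```python
import numpy as np

def init_item_embeddings(n_items, dim, padding_idx=0, rng=None):
    """Xavier-uniform-style init that keeps the padding row at zero.

    Assumes (as in this discussion) that item id `padding_idx` is padding.
    """
    rng = rng or np.random.default_rng(0)
    bound = np.sqrt(6.0 / (n_items + dim))   # Xavier/Glorot uniform bound
    emb = rng.uniform(-bound, bound, size=(n_items, dim))
    emb[padding_idx] = 0.0                   # re-zero the padding embedding
    return emb
```

In PyTorch the same effect can be had by passing `padding_idx` to `nn.Embedding`, which zeros that row and keeps its gradient at zero.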