-
When the number of users and items changes, the shape of the model's embedding tables also changes, so the checkpoint cannot be loaded directly. I think what you want is for the old users and items to keep their pretrained embeddings, while the new users and items are initialized randomly and trained from scratch. My suggestion is to concatenate the pretrained embedding parameters with the randomly initialized parameters for the new rows, and then load the result into the new model. For example:
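A minimal sketch of that idea (the helper name `load_pretrained_embeddings` and the checkpoint path are placeholders; the `user_embedding.weight` / `item_embedding.weight` keys are taken from the error message below, and the sketch assumes the checkpoint is a plain `state_dict` and that the old IDs occupy the first rows of the new embedding tables):

```python
import torch

def load_pretrained_embeddings(new_model, checkpoint_path,
                               keys=('user_embedding.weight', 'item_embedding.weight')):
    """Copy pretrained embedding rows into a larger model, keeping the
    randomly initialized rows for the newly added users/items."""
    old_state = torch.load(checkpoint_path, map_location='cpu')
    new_state = new_model.state_dict()
    for key in keys:
        old_emb = old_state[key]      # e.g. [494, 64] or [1609, 64]
        new_emb = new_state[key]      # e.g. [587, 64] or [1622, 64]
        num_old = old_emb.size(0)
        # Pretrained rows for the old IDs, random rows for the new IDs.
        new_state[key] = torch.cat([old_emb, new_emb[num_old:]], dim=0)
    new_model.load_state_dict(new_state)

# Usage (new_model is the LightGCN instance built on the new, larger dataset):
# load_pretrained_embeddings(new_model, 'lightgcn_pretrained.pth')
```

Note that this only works if the old users/items keep the same internal IDs (the first rows) in the new dataset. If the ID mapping changes between splits, build an old-to-new index mapping and copy the pretrained rows one by one instead of concatenating.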
-
Hello! I am currently trying to train the LightGCN model on splits of the MovieLens dataset.
When I save the model with torch and then load it for re-training, training fails because the node embedding sizes do not match.
(The values 494 and 1609 in the error below are the numbers of unique user_id and movie_id values, i.e. nodes, in the dataset used in the pre-training stage.)
In other words, since I am splitting the same dataset, I want to train in a setting where the schema is the same but the set of unique IDs, and therefore the number of nodes, changes at re-training time.
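For context, here is a minimal reproduction of the mismatch, with plain PyTorch embedding tables standing in for LightGCN's user/item embeddings (class name, file name, and sizes are placeholders taken from the error below):

```python
import torch
import torch.nn as nn

class TinyLightGCN(nn.Module):
    """Stand-in for LightGCN: only the embedding tables matter here."""
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_embedding = nn.Embedding(n_users, dim)
        self.item_embedding = nn.Embedding(n_items, dim)

# Pre-training split: 494 users, 1609 items.
pretrained = TinyLightGCN(494, 1609)
torch.save(pretrained.state_dict(), 'lightgcn_split1.pth')

# Re-training split: 587 users, 1622 items.
new_model = TinyLightGCN(587, 1622)
state = torch.load('lightgcn_split1.pth')
new_model.load_state_dict(state)   # RuntimeError: size mismatch (see below)
```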
```
RuntimeError: Error(s) in loading state_dict for LightGCN:
	size mismatch for user_embedding.weight: copying a param with shape torch.Size([494, 64]) from checkpoint, the shape in current model is torch.Size([587, 64]).
	size mismatch for item_embedding.weight: copying a param with shape torch.Size([1609, 64]) from checkpoint, the shape in current model is torch.Size([1622, 64]).
```
As a way to load and reuse the pretrained model, I looked at https://recbole.io/docs/user_guide/usage/load_pretrained_embedding.html, but I am having difficulty applying it to this case. Any help would be appreciated.