Calling methods via instantiate API #205
This doesn't work with the code above:

```python
>>> instantiate(builds(torch.Generator, zen_wrappers=call_method("manual_seed", 42)))
...
TypeError: Error instantiating 'hydra_zen.funcs.zen_processing' : outer() got multiple values for argument 'mthd_name'
```
Ah, whoops. Yeah, things get a little complicated with
I think, for now, we should provide this functionality as-is. It provides useful functionality, even if it probably won't be used all that much.
Doesn't
I agree it would be nicer to have a more user-friendly interface (and one we can validate), but maybe we can wait until we see more use cases. One idea out of left field: is there any possible way to code this API:
The generated dataclass would need a special

Edit: Actually, I think you might need a metaclass
@jgbos I think you are right. Since this is independent of
I think this would get hairy pretty fast; a purely functional approach would definitely be easier to support.
Yep, I think you are right. I don't think
#219 prototypes a potential solution to the above that is much more powerful than what I had initially proposed. This is inspired by @jgbos' out-of-left-field idea 😄 Let's see:

```python
>>> from hydra_zen import just, instantiate, like
>>> import torch

>>> GenLike = like(torch.Generator)  # 'memorizes' sequence of interactions with `torch.Generator`
>>> seeded = GenLike().manual_seed(42)
>>> seeded
Like(<class 'torch._C.Generator'>)().manual_seed(42)

>>> Conf = just(seeded)  # creates Hydra-compatible config for performing interactions via instantiation
>>> generator = instantiate(Conf)  # calls torch.Generator().manual_seed(42)
>>> generator
<torch._C.Generator at 0x14efb92c770>
```

What is going on here?
Let's use this to make a config that actually returns the `manual_seed` function itself:

```python
>>> manual_seed_fn = GenLike().manual_seed
>>> instantiate(just(manual_seed_fn))
<function Generator.manual_seed>
```

Here is a silly example, just to help give more of a feel for this:

```python
>>> listy = like(list)
>>> x = listy([1, 2, 3])
>>> y = x.pop()

# note that `x` *does not* reflect any mutation
>>> instantiate(just(x)), instantiate(just(y))
([1, 2, 3], 3)
```

Current thoughts on this prototype

The implementation in #219 is super crude, but it seems to cover ~80% (?) of the desired use cases already! One nice thing about

Some obvious to-dos would include:
I would definitely like to get feedback on this. I only just thought of it, and will have to think carefully about potential pitfalls as well as about possible use cases that I haven't anticipated yet.
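(To give a rough sense of the mechanism, here is a bare-bones sketch of the kind of record-and-replay proxy this involves. The names `Recorder`/`replay` are made up for illustration; the actual #219 prototype additionally has to serialize the recorded operations into a Hydra-compatible config.)

```python
# Illustrative sketch only -- not the #219 implementation.
class Recorder:
    def __init__(self, target, ops=()):
        self._target = target  # the class/function being proxied
        self._ops = ops        # recorded ("call", args, kwargs) / ("getattr", name) steps

    def __call__(self, *args, **kwargs):
        return Recorder(self._target, self._ops + (("call", args, kwargs),))

    def __getattr__(self, name):
        return Recorder(self._target, self._ops + (("getattr", name),))

    def replay(self):
        obj = self._target
        for op in self._ops:
            obj = obj(*op[1], **op[2]) if op[0] == "call" else getattr(obj, op[1])
        return obj

# e.g. Recorder(list)([1, 2, 3]).pop().replay() returns 3
```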
This idea reminds me of `unittest.mock.MagicMock`:

```python
>>> from unittest.mock import MagicMock
>>> GenLike = MagicMock(name="GenLike")
>>> seeded = GenLike("init").foo(42)["bar"].baz
>>> GenLike.mock_calls
[call('init'), call().foo(42), call().foo().__getitem__('bar')]
>>> seeded._mock_new_name  # _mock_new_name is undocumented
'baz'
>>> str(seeded)
"<MagicMock name='GenLike().foo().__getitem__().baz' id='139896475473856'>"
```
Oh yeah, that is a great observation! I will see if I might leverage any of that functionality directly, or if some of their design decisions might be worth copying.
Had to throw this at you, the interface made me try it 😵‍💫

```python
Module = builds(...)  # lightning module
LightningTrainer = like(Trainer)(gpus=2, strategy="ddp")
LightningFit = just(LightningTrainer.fit(Module))

# automatically start fitting
instantiate(LightningFit)
```

This then begs the question: can we do something like this?

```python
LightningTrainer = builds_like(Trainer, gpus=2, strategy="ddp", populate_signature=True)
LightningFit = just(LightningTrainer.fit(Module))
```
Nice! This got me thinking... While I do like

(That all being said, it is pretty crazy how things "just work" like your example above.)
Yeah, this is definitely something I have been thinking about... What are the arguments to

As I mentioned in my previous comment, I think we should take some time to come up with guidelines for what are and aren't "best practices", and to have those guidelines inform how we design
Heh... I just thought of this:

```python
import torch as tr

tr = like(tr)
tr.tensor([1., 2.])  # is equivalent to `like(tr.tensor)([1.0, 2.0])`
```

That is, this would effectively auto-apply `like`.

(Kind of funny how I post this immediately after my "now now, we should take care to only recommend moderate use cases" post.)
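Just to spell out what that auto-application could mean mechanically, here is a bare-bones sketch (purely illustrative; `like` is the prototype from #219, and `LikeModule` is a made-up name):

```python
from hydra_zen import like  # per the #219 prototype; not in a released hydra-zen

# Illustrative only: a proxy whose attribute access wraps each module attribute in `like`.
class LikeModule:
    def __init__(self, module):
        self._module = module

    def __getattr__(self, name):
        # e.g. LikeModule(torch).tensor  ->  like(torch.tensor)
        return like(getattr(self._module, name))
```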
It would be nice to support

The annotation for

But resorting to

The downside of 1 is that requiring an extra

The downside of 2 is that it promotes code patterns that will light IDEs/type-checkers up like a Christmas tree. Obviously, it is bad to have code that runs without issue get consistently flagged by static checks. I really don't want hydra-zen to promote this sort of thing.

Is there some nice trick I am missing here? Something with
Based on the

I have a scenario where I want to expose the default parameter(s) of a class method in Hydra's config, but rather than instantiation calling the method, I want to receive the initialized object with a modified method. For a concrete example, PyTorch Lightning Flash offers finetuning strategies with their `Trainer.finetune` method:

```python
import flash
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Callback


class MyCustomCallback(Callback):
    pass


def train(trainer: flash.Trainer, model: pl.LightningModule, datamodule: pl.LightningDataModule):
    """This function would be a Hydra _target_ with the parameters supplied via recursive instantiation"""
    # Interacting with ``trainer`` before and after calling the method whose params are to be altered
    trainer.callbacks.insert(0, MyCustomCallback(...))
    trainer.finetune(model, datamodule=datamodule)  # TODO configure default value for ``strategy`` param
    trainer.test()
```

As @jgbos points out, I could combine the interactions with

Given that (and assuming my interpretation of

```python
import inspect

from hydra.utils import instantiate
from hydra_zen import builds
import flash
from omegaconf import OmegaConf

config_strategy = 'foobar'

TrainerConf = builds(
    flash.Trainer,
    zen_meta={'finetune': builds(
        flash.core.trainer.Trainer.finetune,
        strategy=config_strategy,
        zen_partial=True)},
    max_epochs=10,
    gpus=-1)

# FIXME ``builds`` changes ``_target_`` path --> current workaround is to manually specify
TrainerConf.finetune._target_ = 'flash.core.trainer.Trainer.finetune'

print(OmegaConf.to_yaml(TrainerConf))

trainer = instantiate(TrainerConf)
sig = inspect.signature(trainer.finetune)
instantiated_strategy = sig.parameters['strategy'].default

# FIXME ``zen_meta`` attr disappears after instantiation so the ``strategy`` override has no effect
# Can't move the field outside of meta since it doesn't match the signature for Trainer.__init__
if instantiated_strategy != config_strategy:
    raise AssertionError(
        f'TrainerConf.finetune.strategy ({config_strategy}) != '
        f'trainer.finetune.strategy ({instantiated_strategy})')
```

Seeing the wrappers approach in @rsokl's initial comment, I now think that would be the better approach. I could write a wrapper function to replace the
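For what it's worth, a rough sketch of what such a wrapper could look like with `zen_wrappers` (the wrapper itself and the idea of rebinding `finetune` are illustrative assumptions, not an existing hydra-zen feature, and this glosses over how the wrapper would be serialized into the yaml):

```python
import functools

import flash
from hydra_zen import builds, instantiate

config_strategy = 'foobar'


def override_finetune_strategy(target):
    """zen-wrapper sketch: build the Trainer as usual, then rebind ``finetune``
    so that its ``strategy`` parameter defaults to ``config_strategy``."""
    @functools.wraps(target)
    def inner(*args, **kwargs):
        trainer = target(*args, **kwargs)
        trainer.finetune = functools.partial(trainer.finetune, strategy=config_strategy)
        return trainer
    return inner


TrainerConf = builds(
    flash.Trainer,
    max_epochs=10,
    gpus=-1,
    zen_wrappers=override_finetune_strategy,
)

trainer = instantiate(TrainerConf)  # trainer.finetune(...) now defaults to strategy='foobar'
```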
Sorry for the delayed response. I'm finally going through your example here and see what you are trying to do. I think this is the equivalent using `like`:

```python
from hydra_zen import to_yaml, like, just, instantiate
import flash

config_strategy = 'foobar'
TrainerConf = like(flash.Trainer)(max_epochs=10, gpus=-1)
FineTunerConf = TrainerConf.finetune(strategy=config_strategy, zen_partial=True)

# works
print(to_yaml(just(TrainerConf)))
instantiate(just(TrainerConf))

# does not work with partials
print(to_yaml(just(FineTunerConf)))
instantiate(just(FineTunerConf))
```

It currently doesn't support

@rsokl I believe you had reasons for not pursuing
I'm wondering now if your use case would work with this approach. There would be a configuration for the finetuner:

```python
import flash
import pytorch_lightning as pl
from pytorch_lightning.callbacks import Callback
from typing import Protocol


class FineTuner(Protocol):
    def __call__(
        self,
        model,
        train_dataloader=None,
        val_dataloaders=None,
        datamodule=None,
        strategy="no_freeze",
        train_bn=True,
    ):
        ...


class MyCustomCallback(Callback):
    pass


def train(
    trainer: flash.Trainer,
    finetuner: FineTuner,
    model: pl.LightningModule,
    datamodule: pl.LightningDataModule,
):
    trainer.callbacks.insert(0, MyCustomCallback(...))
    finetuner(model, datamodule=datamodule)
    trainer.test()
```

We may need a better approach still. Here is a temporary solution:

```python
from hydra_zen import to_yaml, instantiate, builds, make_config
import flash

config_strategy = "foobar"
TrainerConf = builds(flash.Trainer, max_epochs=10, gpus=-1, populate_full_signature=True)

Config = make_config(
    finetuner_kwargs=make_config(strategy="foobar"),
    trainer=TrainerConf,
    model=...,
    datamodule=...,
)


def train(
    trainer: flash.Trainer,
    model: pl.LightningModule,
    datamodule: pl.LightningDataModule,
    finetuner_kwargs: dict,
):
    trainer.callbacks.insert(0, MyCustomCallback(...))
    trainer.finetune(model, datamodule=datamodule, **finetuner_kwargs)
    trainer.test()
```
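Presumably the pieces would then be wired up roughly like this (a usage sketch building on the block above; it assumes real model/datamodule configs are filled in, and the keyword wiring is an illustrative assumption):

```python
# Usage sketch for the "temporary solution" above.
cfg = instantiate(Config)  # recursively instantiates the nested trainer config
train(
    trainer=cfg.trainer,
    model=cfg.model,            # assumes a real model config was supplied
    datamodule=cfg.datamodule,  # assumes a real datamodule config was supplied
    finetuner_kwargs=dict(cfg.finetuner_kwargs),
)
```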
This is just a prototype. We can use zen-wrappers to let people specify a method call via instantiate:

I can't say I love this form factor, or the readability of this. Specifying the method via its name (a string) is not very zen. I would want to see if I can make the yaml more legible:

Some things for us to consider:

- whether `builds(A, x=-11, zen_wrappers=call_method("x_plus_y", y=2))` can validate that the method call will be valid upon instantiation?
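As an illustration of the general idea, here is a rough sketch of what such a `call_method` zen-wrapper could look like (this is not the prototype's actual code, and it glosses over how the wrapper and its arguments would be represented in the yaml):

```python
import functools


def call_method(method_name, *args, **kwargs):
    """Sketch of a zen-wrapper factory: build the target, then call the named
    method on the result. Whether to return the built object (as here) or the
    method's return value is one of the open design questions."""
    def wrapper(target):
        @functools.wraps(target)
        def inner(*target_args, **target_kwargs):
            obj = target(*target_args, **target_kwargs)
            getattr(obj, method_name)(*args, **kwargs)  # e.g. obj.x_plus_y(y=2)
            return obj
        return inner
    return wrapper
```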