
[Bug] cyclegan: UnboundLocalError: local variable 'init_info' referenced before assignment #2109

Open
xvjiawen opened this issue Dec 28, 2023 · 3 comments

Labels: kind/bug (something isn't working)

@xvjiawen
Prerequisite

Task

I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.

Branch

main branch https://github.com/open-mmlab/mmagic

Environment

System environment:
sys.platform: linux
Python: 3.8.17 (default, Jul 5 2023, 21:04:15) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2022
GPU 0,1,2,3,4,5,6,7,8,9: NVIDIA GeForce RTX 2080 Ti
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.99
GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
PyTorch: 2.0.1+cu117
PyTorch compiling details: PyTorch built with:

  • GCC 9.3

  • C++ Version: 201703

  • Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)

  • OpenMP 201511 (a.k.a. OpenMP 4.5)

  • LAPACK is enabled (usually provided by MKL)

  • NNPACK is enabled

  • CPU capability usage: AVX2

  • CUDA Runtime 11.7

  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86

  • CuDNN 8.5

  • Magma 2.6.1

  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.15.2+cu117
OpenCV: 4.8.1
MMEngine: 0.8.4

Runtime environment:
cudnn_benchmark: True
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 2022
diff_rank_seed: True
Distributed launcher: none
Distributed training: False
GPU number: 1

Reproduces the problem - code sample

```python
def init_func(m):
    """Initialization function.

    Args:
        m (nn.Module): Module to be initialized.
    """
    classname = m.__class__.__name__
    if hasattr(m, 'weight') and (classname.find('Conv') != -1
                                 or classname.find('Linear') != -1):
        if init_type == 'normal':
            normal_init(m, 0.0, init_gain)
        elif init_type == 'xavier':
            xavier_init(m, gain=init_gain, distribution='normal')
        elif init_type == 'kaiming':
            kaiming_init(
                m,
                a=0,
                mode='fan_in',
                nonlinearity='leaky_relu',
                distribution='normal')
        elif init_type == 'orthogonal':
            init.orthogonal_(m.weight, gain=init_gain)
            init.constant_(m.bias.data, 0.0)
        else:
            raise NotImplementedError(
                f"Initialization method '{init_type}' is not implemented")
        init_info = (f'Initialize {m.__class__.__name__} by \'init_type\' '
                     f'{init_type}.')
    elif classname.find('BatchNorm2d') != -1:
        # BatchNorm Layer's weight is not a matrix;
        # only normal distribution applies.
        normal_init(m, 1.0, init_gain)
        init_info = (f'{m.__class__.__name__} is BatchNorm2d, initialize '
                     'by Norm initialization with mean=1, '
                     f'std={init_gain}')

    if hasattr(m, '_params_init_info'):
        update_init_info(module, init_info)

module.apply(init_func)
```

Reproduces the problem - command or script

```shell
./tools/dist_train.sh ./configs/cyclegan/cyclegan_lsgan-id0-resnet-in_1xb1-270kiters_hk2flir.py 8 --work-dir ./work_dirs/demo
```

Reproduces the problem - error message

```
Traceback (most recent call last):
  File "/data/xjw/share/projects/mmagic/tools/train.py", line 114, in <module>
    main()
  File "/data/xjw/share/projects/mmagic/tools/train.py", line 107, in main
    runner.train()
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1723, in train
    self._init_model_weights()
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/runner/runner.py", line 906, in _init_model_weights
    model.init_weights()
  File "/data/xjw/share/projects/mmagic/mmagic/models/base_models/base_translation_model.py", line 110, in init_weights
    gen.init_weights()
  File "/data/xjw/share/projects/mmagic/mmagic/models/editors/cyclegan/cyclegan_generator.py", line 139, in init_weights
    generation_init_weights(
  File "/data/xjw/share/projects/mmagic/mmagic/models/utils/model_utils.py", line 146, in generation_init_weights
    module.apply(init_func)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 885, in apply
    fn(self)
  File "/data/xjw/share/projects/mmagic/mmagic/models/utils/model_utils.py", line 144, in init_func
    update_init_info(module, init_info)
UnboundLocalError: local variable 'init_info' referenced before assignment
```

Additional information

I ONLY modified the dataset, whose format is the same as horse2zebra.

xvjiawen added the kind/bug (something isn't working) label on Dec 28, 2023
@zhangliukun

zhangliukun commented Jan 31, 2024

I have the same error. It looks like neither the `if` branch nor the `elif` branch runs for some modules, so `init_info` is never assigned, which leads to the problem.
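For illustration, here is a minimal, self-contained sketch (plain Python, not the actual mmagic code) of that failure mode: when a module's class name matches neither the Conv/Linear branch nor the BatchNorm2d branch, `init_info` is never bound, and the later reference raises UnboundLocalError.

```python
# Minimal reproduction of the failure mode (illustrative only, not mmagic code).
def init_func(classname):
    if 'Conv' in classname or 'Linear' in classname:
        init_info = f'Initialize {classname} by normal init.'
    elif 'BatchNorm2d' in classname:
        init_info = f'{classname} is BatchNorm2d, initialize by Norm initialization.'
    # For any other module (e.g. ReLU), neither branch ran, so the
    # local name `init_info` was never bound, and this reference fails:
    return init_info


try:
    init_func('ReLU')
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError
```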

@fsbarros98

fsbarros98 commented Feb 28, 2024

Any updates on this? This seems to work with the MMGeneration repository...

@TolgaOzdmir

I ran into the same error and tried a quick fix by changing the code below, but I know it's probably not the right solution.

```python
def init_func(m):
    """Initialization function.

    Args:
        m (nn.Module): Module to be initialized.
    """
    classname = m.__class__.__name__
    init_info = ""
    if hasattr(m, 'weight') and (classname.find('Conv') != -1
                                 or classname.find('Linear') != -1):
        if init_type == 'normal':
            normal_init(m, 0.0, init_gain)
        elif init_type == 'xavier':
            xavier_init(m, gain=init_gain, distribution='normal')
        elif init_type == 'kaiming':
            kaiming_init(
                m,
                a=0,
                mode='fan_in',
                nonlinearity='leaky_relu',
                distribution='normal')
        elif init_type == 'orthogonal':
            init.orthogonal_(m.weight, gain=init_gain)
            init.constant_(m.bias.data, 0.0)
        else:
            raise NotImplementedError(
                f"Initialization method '{init_type}' is not implemented")
        init_info = (f'Initialize {m.__class__.__name__} by \'init_type\' '
                     f'{init_type}.')
    elif classname.find('BatchNorm2d') != -1:
        # BatchNorm Layer's weight is not a matrix;
        # only normal distribution applies.
        normal_init(m, 1.0, init_gain)
        init_info = (f'{m.__class__.__name__} is BatchNorm2d, initialize '
                     'by Norm initialization with mean=1, '
                     f'std={init_gain}')

    if hasattr(m, '_params_init_info'):
        update_init_info(module, init_info)

module.apply(init_func)
```
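A possible alternative along the same lines (also a sketch with simplified names, not a verified mmagic patch): use `None` as a sentinel and skip the reporting step entirely when no initialization branch ran, so untouched modules record nothing rather than an empty string.

```python
# Sketch of a guarded variant (simplified names; not the actual mmagic patch).
def describe_init(classname):
    init_info = None
    if 'Conv' in classname or 'Linear' in classname:
        init_info = f'Initialize {classname} by normal init.'
    elif 'BatchNorm2d' in classname:
        init_info = f'{classname} is BatchNorm2d, initialize by Norm initialization.'
    # Only report when one of the initialization branches actually ran;
    # in the real code this is where update_init_info would be guarded.
    if init_info is not None:
        return init_info
    return None  # module left untouched; skip reporting


print(describe_init('ReLU'))    # None
print(describe_init('Conv2d'))  # Initialize Conv2d by normal init.
```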
