image_augmentation.py does not work #2

Open
chappy0205 opened this issue Oct 28, 2021 · 4 comments

Comments

@chappy0205

chappy0205 commented Oct 28, 2021

I added List 5 from page 49 to image_augmentation.py, and updated train_ds in List 1 on page 45 according to page 50.
However, running the updated List 1 raises the following error.

TypeError                                 Traceback (most recent call last)
<ipython-input-36-620ad43f4a23> in <module>()
      4     .shuffle(len(train_dataset), reshuffle_each_iteration=True)
      5     .map(lambda image, label: (image_preprocess_with_augment(image), label_preprocess(label)),
----> 6          num_parallel_calls=tf.data.AUTOTUNE)
      7     .batch(BATCH_SIZE)
      8     .prefetch(8)

10 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    693       except Exception as e:  # pylint:disable=broad-except
    694         if hasattr(e, 'ag_error_metadata'):
--> 695           raise e.ag_error_metadata.to_exception(e)
    696         else:
    697           raise

TypeError: in user code:

    <ipython-input-36-620ad43f4a23>:5 None  *
        lambda image, label: (image_preprocess_with_augment(image), label_preprocess(label)),
    <ipython-input-13-d6d29bfe65b7>:16 image_preprocess_with_augment  *
        image = image_augmentation.augment(image, AUGMENT_N, AUGMENT_M)
    /content/drive/My Drive/SoftwareDesign/image_augmentation.py:182 augment  *
        image = augment_funcs[j](image, M)
    /content/drive/My Drive/SoftwareDesign/image_augmentation.py:152 sharpness  *
        image = tfa.image.sharpness(image, factor)
    /usr/local/lib/python3.7/dist-packages/tensorflow_addons/image/color_ops.py:138 sharpness  *
        image = _sharpness_image(image, factor=factor)
    /usr/local/lib/python3.7/dist-packages/tensorflow_addons/image/color_ops.py:101 _sharpness_image  *
        kernel = tf.tile(kernel, [1, 1, image_channels, 1])
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_array_ops.py:11532 tile  **
        "Tile", input=input, multiples=multiples, name=name)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:525 _apply_op_helper
        raise err
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:515 _apply_op_helper
        preferred_dtype=default_dtype)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/profiler/trace.py:163 wrapped
        return func(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1566 convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:346 _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:272 constant
        allow_broadcast=True)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:290 _constant_impl
        allow_broadcast=allow_broadcast))
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_util.py:553 make_tensor_proto
        "supported type." % (type(values), values))

    TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [1, 1, None, 1]. Consider casting elements to a supported type.
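The final TypeError pinpoints the cause: inside Dataset.map the image tensor's static shape is often (None, None, None), so tensorflow_addons' _sharpness_image builds tile multiples containing None. A minimal illustration of that static-shape bookkeeping in plain Python (it only mimics the logic, it is not TensorFlow code):

```python
# Mimic how tfa's _sharpness_image derives the tf.tile multiples from
# the image's *static* shape (illustration only, not TensorFlow).
def tile_multiples(static_shape):
    image_channels = static_shape[-1]  # None when the shape is statically unknown
    return [1, 1, image_channels, 1]

# With a concrete shape (e.g. eager execution) everything works:
print(tile_multiples((224, 224, 3)))       # [1, 1, 3, 1]

# Inside Dataset.map the static shape can be (None, None, None), so
# tf.tile is handed [1, 1, None, 1] and raises the TypeError above:
print(tile_multiples((None, None, None)))  # [1, 1, None, 1]
```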

When I use image_preprocess() instead of image_preprocess_with_augment(), the error is not raised.
I don't understand why image_preprocess_with_augment() fails.
Could you please tell me the cause and, if possible, how to fix it?

@nadare881
Owner

I can't see all of the code in question, but the error seems to come from a difference here:

    <ipython-input-13-d6d29bfe65b7>:16 image_preprocess_with_augment  *
        image = image_augmentation.augment(image, AUGMENT_N, AUGMENT_M)

Would you please refer to the code in List 5 on page 49 and try again?

@tosiyuki

tosiyuki commented Nov 1, 2021

After removing sharpness from augument_funcs in List 5, I was able to run it. Is there anything wrong with the sharpness method?

import random
#augument_funcs = [identity, crop_and_resize, shrink_and_pad, rotate, shear_x, shear_y, translate_xy, change_aspect, auto_contrast, contrast, 
#                  brightness, posterize, mean_blur, median_blur, cutout, sharpness]

augument_funcs = [identity, crop_and_resize, shrink_and_pad, rotate, shear_x, shear_y, translate_xy, change_aspect, auto_contrast, contrast, 
                  brightness, posterize, mean_blur, median_blur, cutout]

@tf.function(experimental_relax_shapes=True)
def augument(image, N, M):
    # shuffle the augmentations and pick N of them
    _, ixs = tf.math.top_k(tf.random.uniform([len(augument_funcs)]), k=N)
    for i in range(N):
        for j in range(len(augument_funcs)):
            if ixs[i] == j:
                image = augument_funcs[j](image, M)
                
    image = tf.clip_by_value(image, 0., 255.)
    return image
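An alternative to dropping sharpness may be to pin the image's static channel count before augmentation (for example with TensorFlow's image.set_shape([None, None, 3]), assuming 3-channel RGB input), so that tfa.image.sharpness sees a concrete channel dimension. The sketch below reproduces only the static-shape merge in plain Python; FakeTensor is a hypothetical stand-in, not TensorFlow:

```python
# Pure-Python illustration of why image.set_shape([None, None, 3]) avoids
# the error: once the channel entry is a concrete int, the tile multiples
# that tfa builds contain no None. FakeTensor only mimics the static-shape
# bookkeeping; it is not TensorFlow.
class FakeTensor:
    def __init__(self, static_shape):
        self.shape = list(static_shape)

    def set_shape(self, shape):
        # Merge: keep already-known dims, fill unknown ones from the hint.
        self.shape = [old if old is not None else new
                      for old, new in zip(self.shape, shape)]

img = FakeTensor((None, None, None))  # shape as seen inside Dataset.map
img.set_shape([None, None, 3])        # pin the channel count to 3
multiples = [1, 1, img.shape[-1], 1]
print(multiples)  # [1, 1, 3, 1] -> tf.tile would succeed
```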

@nadare881
Owner

@tosiyuki
The sharpness code posted on GitHub is the same as the one that worked in my environment. The cause is probably something different, so could you open another issue with the error message?

@chappy0205
Author

@tosiyuki
Thanks a lot for your comment. I was also able to run it after removing sharpness.
