Idea: on the design of the structure inside a DenseNet block #24

Open
ghost opened this issue Dec 7, 2018 · 3 comments

Comments

ghost commented Dec 7, 2018

(image attachment: diagram of the two candidate block structures discussed below)

ghost commented Dec 7, 2018

These two structures correspond to two ways of writing the construction code:

    # Not sure this is right, but the logic follows the first structure:
    # keep every intermediate output in a list and re-concatenate all of
    # them before each new convolution.
    # short_cuts = []
    # short_cuts.append(tf.identity(inputs))
    # for i in range(num_repeated):
    #   inputs = self.dense_conv2d(inputs, self.k, 3, 1)
    #   short_cuts.append(tf.identity(inputs))
    #   # concatenate all saved outputs in one step (concatenating them
    #   # one by one would duplicate the newest output, which is already
    #   # in the list)
    #   inputs = tf.concat(short_cuts, axis=-1)

    # Viewed from the shortcut branch, this is just the branch
    # gradually growing wider.
    internel_out = tf.identity(inputs)
    for i in range(self.per_block_num):
      # the convolution takes the concatenated output so far as its input
      inputs = self.dense_conv2d(internel_out, out_channel=self.k,
                                  kernel_size=3)
      # the concatenation joins the conv output with the shortcut path
      internel_out = tf.concat((inputs, internel_out), axis=-1)
    return internel_out

ghost commented Dec 7, 2018

The second version actually reuses the previous result, so it comes out in a more concise form.
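That equivalence can be sanity-checked with a toy model (hypothetical code; the "convs" are stand-in callables that each emit three labeled channels): both structures end with the same set of channels, structure 2 just never rebuilds the concatenation from scratch.

```python
def structure_1(x, convs):
    # first diagram: save every intermediate output, re-concatenate all
    short_cuts = [list(x)]
    for conv in convs:
        x = conv(x)
        short_cuts.append(x)
        x = [c for part in short_cuts for c in part]  # concat all saved parts
    return x

def structure_2(x, convs):
    # second diagram: reuse the running concatenation
    internal_out = list(x)
    for conv in convs:
        internal_out = conv(internal_out) + internal_out
    return internal_out

# stand-in "convs": each ignores its input and emits 3 labeled channels
convs = [lambda feats, i=i: [f"conv{i}_c{j}" for j in range(3)]
         for i in range(2)]
a = structure_1(["in0"], convs)
b = structure_2(["in0"], convs)
# same channels in both results; only the concat order differs
```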

lartpang commented Sep 12, 2019

A similar PyTorch implementation:

# init part
self.denseops = nn.ModuleList()
for i in range(self.pre_block_num):
    self.denseops.append(DenseOps[i])

# forward part
for ops in self.denseops:
    # the convolution takes the concatenated output so far as its input
    inter_out = ops(inputs)
    # join the conv output with the running tensor along the channel dim
    inputs = torch.cat((inter_out, inputs), dim=1)
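The same pattern, framework-free (hypothetical names; plain callables stand in for the `nn.ModuleList` entries): build the ops once, then let the forward loop fold each op's output into the running input, as `torch.cat(..., dim=1)` does along the channel axis.

```python
def make_op(i, k):
    # stand-in for DenseOps[i]: emits k new channel labels
    return lambda feats: [f"op{i}_c{j}" for j in range(k)]

# "init" part: a plain list plays the role of nn.ModuleList
denseops = [make_op(i, k=4) for i in range(3)]

# "forward" part
inputs = ["in0", "in1"]
for op in denseops:
    inter_out = op(inputs)       # the op sees the concat accumulated so far
    inputs = inter_out + inputs  # channel-dim concatenation
# inputs now holds 2 + 3 * 4 = 14 channels
```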
