I am currently working on registration for deep learning using the Keras backend. The task is to register two images, `fixed` and `moving`. At the end I obtain a deformation field `D` of shape `(200, 200, 2)`, where `200` is the image size and `2` holds the per-pixel offsets `dx, dy`. I should apply `D` to `moving` and compute the loss against `fixed`.

The question is: is there a way to rearrange the pixels of `moving` according to `D` inside a Keras model?
You should be able to implement the deformation with `tf.contrib.resampler.resampler`. It would look like `tf.contrib.resampler.resampler(moving, D)`, although note that `moving` must be in `(batch_size, height, width, num_channels)` format, and `D[..., 0]` must then contain the width coordinates and `D[..., 1]` the height coordinates (absolute sampling positions, not offsets). The op implements gradients for both of its inputs, so it works well for training in any case.
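Since `D` in the question holds per-pixel offsets while the resampler expects absolute sampling coordinates (width first), the field has to be converted before the call. A minimal NumPy sketch of that conversion (variable names are illustrative, not from any library):

```python
import numpy as np

# D holds per-pixel offsets (dh, dw); the resampler wants absolute
# coordinates with the width (x) component first.
h, w = 200, 200
D = np.zeros((1, h, w, 2), dtype=np.float32)      # batch of one offset field
grid_h, grid_w = np.meshgrid(np.arange(h, dtype=np.float32),
                             np.arange(w, dtype=np.float32), indexing='ij')
abs_h = grid_h + D[..., 0]                        # absolute row coordinates
abs_w = grid_w + D[..., 1]                        # absolute column coordinates
coords = np.stack([abs_w, abs_h], axis=-1)        # width first, then height
# coords would now be the second argument to tf.contrib.resampler.resampler
```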
If you prefer not to use `tf.contrib`, since it is going to be removed from TensorFlow, you can roll your own bilinear interpolation implementation. It could look something like this:
import tensorflow as tf

def deform(moving, deformation):
    # Bilinear interpolation of moving at the regular grid positions
    # shifted by deformation (per-pixel (dh, dw) offsets)
    s = tf.shape(moving)
    b, h, w = s[0], s[1], s[2]
    grid_b, grid_h, grid_w = tf.meshgrid(
        tf.range(b), tf.range(h), tf.range(w), indexing='ij')
    idx = tf.cast(tf.stack([grid_h, grid_w], axis=-1), deformation.dtype) + deformation
    idx_floor = tf.floor(idx)
    clip_low = tf.zeros([2], dtype=tf.int32)
    clip_high = tf.cast([h, w], dtype=tf.int32) - 1  # clip to the last valid index
    # 0 0
    idx_00 = tf.clip_by_value(tf.cast(idx_floor, tf.int32), clip_low, clip_high)
    idx_batch = tf.expand_dims(grid_b, -1)
    idx_batch_00 = tf.concat([idx_batch, idx_00], axis=-1)
    moved_00 = tf.gather_nd(moving, idx_batch_00)
    # 0 1
    idx_01 = tf.clip_by_value(idx_00 + [0, 1], clip_low, clip_high)
    idx_batch_01 = tf.concat([idx_batch, idx_01], axis=-1)
    moved_01 = tf.gather_nd(moving, idx_batch_01)
    # 1 0
    idx_10 = tf.clip_by_value(idx_00 + [1, 0], clip_low, clip_high)
    idx_batch_10 = tf.concat([idx_batch, idx_10], axis=-1)
    moved_10 = tf.gather_nd(moving, idx_batch_10)
    # 1 1
    idx_11 = tf.clip_by_value(idx_00 + 1, clip_low, clip_high)
    idx_batch_11 = tf.concat([idx_batch, idx_11], axis=-1)
    moved_11 = tf.gather_nd(moving, idx_batch_11)
    # Interpolate
    alpha = idx - idx_floor
    alpha_h = alpha[..., :1]  # keep a trailing axis so it broadcasts over channels
    alpha_h_1 = 1 - alpha_h
    alpha_w = alpha[..., 1:]
    alpha_w_1 = 1 - alpha_w
    moved_0 = moved_00 * alpha_w_1 + moved_01 * alpha_w
    moved_1 = moved_10 * alpha_w_1 + moved_11 * alpha_w
    moved = moved_0 * alpha_h_1 + moved_1 * alpha_h
    return moved
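To sanity-check the interpolation logic, here is an equivalent single-channel NumPy reference (no batch dimension, for brevity). This is a sketch for verification only, not part of the TensorFlow graph; `deform_np` is a hypothetical helper name:

```python
import numpy as np

def deform_np(moving, deformation):
    # moving: (h, w) image; deformation: (h, w, 2) with channel 0 = dh
    # (row offset) and channel 1 = dw (column offset), as in the TF version.
    h, w = moving.shape
    grid_h, grid_w = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    idx = np.stack([grid_h, grid_w], axis=-1) + deformation
    idx_floor = np.floor(idx)
    alpha = idx - idx_floor                              # fractional part
    i0 = np.clip(idx_floor.astype(int), [0, 0], [h - 1, w - 1])
    i1 = np.clip(i0 + 1, [0, 0], [h - 1, w - 1])
    v00 = moving[i0[..., 0], i0[..., 1]]
    v01 = moving[i0[..., 0], i1[..., 1]]
    v10 = moving[i1[..., 0], i0[..., 1]]
    v11 = moving[i1[..., 0], i1[..., 1]]
    ah, aw = alpha[..., 0], alpha[..., 1]
    return ((v00 * (1 - aw) + v01 * aw) * (1 - ah)
            + (v10 * (1 - aw) + v11 * aw) * ah)
```

A zero deformation returns the image unchanged, and a half-pixel shift along the width averages each pixel with its right neighbor, which is the expected bilinear behavior.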
Interestingly, this is not really how it should work, but it will probably work fine in practice. The gradients will be estimated from the pixel values at the interpolated coordinates, which means they are more accurate when the deformation values fall close to the midpoint between two pixels than when they land exactly on pixel positions. For most images, however, the difference is probably negligible.

If you want a more principled approach, you can use `tf.custom_gradient` to provide a better per-pixel gradient estimate of the interpolation:
import tensorflow as tf

@tf.custom_gradient
def deform(moving, deformation):
    # Same code as before...
    # Gradient function
    def grad(dy):
        moving_pad = tf.pad(moving, [[0, 0], [1, 1], [1, 1], [0, 0]], 'SYMMETRIC')
        # Diff H
        moving_dh_ref = moving_pad[:, 1:, 1:-1] - moving_pad[:, :-1, 1:-1]
        moving_dh_ref = (moving_dh_ref[:, :-1] + moving_dh_ref[:, 1:]) / 2
        moving_dh_0 = tf.gather_nd(moving_dh_ref, idx_batch_00)
        moving_dh_1 = tf.gather_nd(moving_dh_ref, idx_batch_10)
        moving_dh = moving_dh_0 * alpha_h_1 + moving_dh_1 * alpha_h
        # Diff W
        moving_dw_ref = moving_pad[:, 1:-1, 1:] - moving_pad[:, 1:-1, :-1]
        moving_dw_ref = (moving_dw_ref[:, :, :-1] + moving_dw_ref[:, :, 1:]) / 2
        moving_dw_0 = tf.gather_nd(moving_dw_ref, idx_batch_00)
        moving_dw_1 = tf.gather_nd(moving_dw_ref, idx_batch_01)
        moving_dw = moving_dw_0 * alpha_w_1 + moving_dw_1 * alpha_w
        # Gradient of deformation
        deformation_grad = tf.stack([tf.reduce_sum(dy * moving_dh, axis=-1),
                                     tf.reduce_sum(dy * moving_dw, axis=-1)], axis=-1)
        # Gradient of moving would be computed by applying the inverse deformation
        # to dy, or just by resorting to the standard TensorFlow gradient, if needed
        return None, deformation_grad
    return moved, grad
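The centered-difference estimate used in `grad` above (forward differences on a symmetrically padded image, averaged in adjacent pairs) can be illustrated in plain NumPy. On a linear ramp image, whose derivative along the height axis is exactly 1, the interior of the estimate recovers that slope; `central_diff_h` is a hypothetical helper for illustration, not from the answer's code:

```python
import numpy as np

def central_diff_h(img):
    # Centered difference along the height axis with symmetric padding,
    # mirroring what the custom gradient computes from moving_pad.
    pad = np.pad(img, ((1, 1), (0, 0)), mode='symmetric')
    d = pad[1:, :] - pad[:-1, :]          # forward differences, h + 1 rows
    return (d[:-1, :] + d[1:, :]) / 2     # average adjacent pairs -> centered

ramp = np.arange(5.0)[:, None] * np.ones((1, 4))   # ramp[i, j] = i
est = central_diff_h(ramp)
# Interior rows have slope exactly 1; the mirrored border halves the estimate.
```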