I am trying to build a VGG16 model with PyTorch in order to export it to ONNX. I want to force my own weights and biases into the model, but during this process my computer quickly runs out of memory.
Here is the approach I would like to use (this is just a test; in the real version I read the weights and biases from a set of files). This example simply forces every value to 0.5:
# Create an empty VGG16 model (random weights)
from torchvision import models
from torchsummary import summary

vgg16 = models.vgg16()
# the structure is: vgg16.__dict__
summary(vgg16, (3, 224, 224))

# Convolutional layers
for layer in vgg16.features:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        dim = layer.weight.shape
        print(dim)
        print(str(dim[0] * (dim[1] * dim[2] * dim[3] + 1)) + ' params')
        # Replace the weights and biases
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                for k in range(dim[2]):
                    for l in range(dim[3]):
                        layer.weight[i][j][k][l] = 0.5

# Dense layers
for layer in vgg16.classifier:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        dim = layer.weight.shape
        print(str(dim) + ' --> ' + str(dim[0] * (dim[1] + 1)) + ' params')
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                layer.weight[i][j] = 0.5
When I watch my computer's memory usage, it grows linearly while the first dense layer is being processed and saturates the 16 GB of RAM. Then Python crashes...
Is there a better way to do this, keeping in mind that I want to export the model to ONNX afterwards? Thanks for your help.
The memory growth is caused by autograd tracking every individual weight and bias change. Try setting the .requires_grad attribute to False before the updates and restoring it afterwards. Example:
for layer in vgg16.features:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        # suppress .requires_grad
        layer.bias.requires_grad = False
        layer.weight.requires_grad = False
        dim = layer.weight.shape
        print(dim)
        print(str(dim[0] * (dim[1] * dim[2] * dim[3] + 1)) + ' params')
        # Replace the weights and biases
        for i in range(dim[0]):
            layer.bias[i] = 0.5
            for j in range(dim[1]):
                for k in range(dim[2]):
                    for l in range(dim[3]):
                        layer.weight[i][j][k][l] = 0.5
        # restore .requires_grad
        layer.bias.requires_grad = True
        layer.weight.requires_grad = True
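A related sketch, not from the original thread: instead of toggling .requires_grad, the assignments can be wrapped in torch.no_grad(), and each parameter tensor filled in place with .fill_(), which avoids both the per-element Python loops and any autograd tracking. The small nn.Sequential below is a hypothetical stand-in for models.vgg16(); the same loop over model.parameters() applies to the real network.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for torchvision's models.vgg16():
# one conv block followed by one dense layer.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),
)

# torch.no_grad() disables autograd recording inside the block,
# so no computation graph accumulates while parameters are overwritten.
with torch.no_grad():
    for p in model.parameters():
        p.fill_(0.5)  # in-place fill of the whole tensor, no Python-level loops

# .requires_grad is untouched, so the model is still trainable afterwards.
print(model[0].weight.requires_grad)
```

After the parameters are set this way, the usual torch.onnx.export(model, dummy_input, "model.onnx") call can be used for the ONNX export.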