[Implementation] Transfer Learning with ResNet for Image Classification
Finetune all parameters
Load resnet18 via torchvision. Its last layer is the fully connected layer `fc`; reading its `in_features` makes it easy to replace `model_ft.fc` with a new head.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Finetuning the ConvNet
model_ft = models.resnet18(weights="IMAGENET1K_V1")
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
```

Finetune final layer
The key steps are annotated in the comments.
```python
import torchvision

model_conv = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the backbone: the model no longer records gradients
for param in model_conv.parameters():
    param.requires_grad = False

num_ftrs = model_conv.fc.in_features
# A newly constructed Module has requires_grad=True by default; once it is
# attached to the model as an attribute, its weights are registered as
# Parameter objects
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
```

Conclusion
In this experiment, finetuning only the final FC layer performed better than finetuning all parameters.
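The freeze-then-replace pattern used above can be verified in isolation: after freezing all parameters and assigning a fresh `fc`, only the new head should receive gradient updates. A minimal sketch, using a tiny stand-in network instead of resnet18 so no pretrained weights need downloading (`TinyNet`, the layer sizes, and the `backbone` name are illustrative, not from the tutorial):

```python
import torch
import torch.nn as nn

# Tiny stand-in for resnet18; "backbone" and "fc" are illustrative names.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 4)
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(self.backbone(x))

model = TinyNet()

# Freeze everything, as done for model_conv
for param in model.parameters():
    param.requires_grad = False

# Replacing fc with a fresh layer re-enables gradients for it only
model.fc = nn.Linear(4, 2)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']

# One optimizer step leaves the frozen backbone untouched
opt = torch.optim.SGD(model.fc.parameters(), lr=0.1)
before = model.backbone.weight.clone()
loss = model(torch.randn(3, 8)).sum()
loss.backward()
opt.step()
assert torch.equal(before, model.backbone.weight)
```

Backpropagation stops recording gradients for the frozen layers, so this setup also trains faster per step than full finetuning.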
Source
Transfer Learning for Computer Vision Tutorial — PyTorch Tutorials 2.4.0+cu121 documentation
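As a side note on the scheduler used in both setups: `StepLR` multiplies the learning rate by `gamma` every `step_size` epochs. A minimal sketch of how it plugs into a training loop, with the bare `nn.Linear` model and the 15-epoch count chosen only for illustration:

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

opt = optim.SGD(nn.Linear(2, 2).parameters(), lr=0.001, momentum=0.9)
sched = lr_scheduler.StepLR(opt, step_size=7, gamma=0.1)

lrs = []
for epoch in range(15):
    # ... one epoch of training; a real loop calls opt.step() per batch ...
    opt.step()
    sched.step()  # called once per epoch, after the optimizer steps
    lrs.append(opt.param_groups[0]["lr"])

# lr stays at 0.001 for epochs 0-6, drops to 1e-4 at epoch 7, 1e-5 at 14
print(lrs)
```

Note that `sched.step()` must come after `opt.step()` within an epoch; recent PyTorch versions warn if the order is reversed.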