I. torch.nn.modules
A network model is built from a series of basic operations such as nn.Conv2d, AvgPool2d, ReLU6, BatchNorm2d, CrossEntropyLoss, and so on. We first look at the operation types, then at how they are composed into a network.
1. Operation types in nn.modules
① class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
padding (int or tuple, optional) – number of zero-padding rows/columns added to each side of the input
dilation (int or tuple, optional) – spacing between kernel elements, used for dilated (atrous) convolution
torch.nn.UpsamplingNearest2d(size=None, scale_factor=None) – upsampling; the output image size is given either by size or by scale_factor. shape (N, C, H_in, W_in) -> shape (N, C, H_out, W_out).
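As a quick check of the shapes involved, here is a minimal sketch; the tensor sizes are chosen only for illustration:
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                       # (N, C, H, W)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # padding=1 keeps H and W unchanged
up = nn.UpsamplingNearest2d(scale_factor=2)         # doubles H and W

y = conv(x)        # -> (1, 16, 32, 32)
z = up(y)          # -> (1, 16, 64, 64)
print(y.shape, z.shape)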
② class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False) # output H×W is fixed to output_size, regardless of the input H×W
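A small sketch of the difference between the two pooling styles; the sizes are illustrative only:
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)     # output size depends on the input: 32 -> 16
ada_pool = nn.AdaptiveMaxPool2d(output_size=(7, 7))  # output size is always 7x7, whatever the input

print(max_pool(x).shape)   # torch.Size([1, 16, 16, 16])
print(ada_pool(x).shape)   # torch.Size([1, 16, 7, 7])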
③ class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
For each mini-batch, the mean and standard deviation of every input channel are computed. gamma and beta are learnable parameter vectors of size C (C is the number of input channels).
m = nn.BatchNorm2d(3)                      # num_features must equal the channel dimension of the input
input = torch.randn(16, 3, 1024, 1024)     # (batch, channels, height, width)
output = m(input)
④ Loss functions: they compute the loss between pred_y and true_y and are, strictly speaking, not part of the network structure. Typical usage:
loss = nn.L1Loss()
output = loss(m(input), target)   # the two arguments may have any shape, as long as both contain the same number n of elements
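Since CrossEntropyLoss was mentioned above as the usual classification loss, here is a minimal sketch of the shapes it expects; the sizes are illustrative:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 10)            # (batch, num_classes): raw scores, no softmax needed
target = torch.randint(0, 10, (8,))    # (batch,): class indices in [0, num_classes)
loss = criterion(logits, target)
print(loss.item())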
⑤ Fully connected layers (nn.Linear) and dropout (nn.Dropout); see the sketch below.
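A minimal sketch of the two layers; the feature sizes are arbitrary:
import torch
import torch.nn as nn

fc = nn.Linear(in_features=128, out_features=10)   # fully connected layer
drop = nn.Dropout(p=0.5)                           # randomly zeroes elements during training

x = torch.randn(4, 128)
y = fc(drop(x))            # -> (4, 10)
print(y.shape)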
2. How a network is composed
A network is a torch.nn.Module container: the operations ①②③... above are added to the container as attributes, and in forward each intermediate out is produced by calling one of these nn.Module sub-modules.
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU())
        self.MaxPool = nn.MaxPool2d(kernel_size=2, stride=2)
        # for a 28x28 input: layer1 keeps 28x28 (padding=2), MaxPool halves it to 14x14, 16 channels
        self.fc = nn.Linear(14 * 14 * 16, num_classes)

    def forward(self, x):                        # data flow
        out = self.layer1(x)
        out = self.MaxPool(out)
        out = out.reshape(out.size(0), -1)       # flatten to (batch, features)
        out = self.fc(out)
        return out
nn.Module has four similar iterator attributes – named_parameters(), named_buffers(), named_children(), named_modules() – each yielding {name: value} pairs, i.e. dict-like; dropping the named_ prefix (parameters(), buffers(), children(), modules()) gives the corresponding list-like iterators that yield only the values.
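A short sketch of the difference, using the ConvNet class defined above:
net = ConvNet()

for name, p in net.named_parameters():      # dict-like: (name, value) pairs
    print(name, p.shape)                    # e.g. layer1.0.weight torch.Size([16, 1, 5, 5])

params = list(net.parameters())             # list-like: values only
children = list(net.children())             # immediate sub-modules: layer1, MaxPool, fc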
II. Saving and loading models
torch.save and torch.load are implemented on top of Python's pickle.
① The two common save/load approaches
# Approach 1: save / load only the weights (state_dict)
net = ConvNet()
torch.save(net.state_dict(), path)
state_dict = torch.load(model_weight_path, map_location=torch.device("cpu"))   # or "cuda:0"
_ = net.load_state_dict(state_dict, strict=False)

# Approach 2: save / load the whole model object
torch.save(net, path)
net = torch.load(path)
net.to(torch.device("cpu"))   # or "cuda:0"
Setting strict=False ignores the keys that cannot be matched (useful when the checkpoint and the model differ in some layers).
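A hedged sketch of where strict=False helps, using the ConvNet from section I and pretending the checkpoint is missing the classifier head:
net = ConvNet(num_classes=10)
state_dict = net.state_dict()
del state_dict["fc.weight"], state_dict["fc.bias"]     # simulate a checkpoint without the fc layer

new_net = ConvNet(num_classes=10)
missing, unexpected = new_net.load_state_dict(state_dict, strict=False)
print(missing)      # e.g. ['fc.weight', 'fc.bias'] – reported instead of raising an error
print(unexpected)   # []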
② Saving hyper-parameter / training-state information for resume or inference
# save
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
    # ...
}, PATH)

# load
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)

checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']

model.eval()   # or model.train() when resuming training
③ Saving a torch.nn.DataParallel model
torch.save(model.module.state_dict(), PATH)
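A minimal sketch of why model.module is used: the DataParallel wrapper stores the real model in its .module attribute and prefixes parameter names with "module.". This assumes a CUDA machine for DataParallel itself and reuses the ConvNet class and PATH from above; the saved state_dict can then be loaded into a plain, non-parallel model:
model = nn.DataParallel(ConvNet(num_classes=10))   # wrapper: its own state_dict keys carry a "module." prefix
torch.save(model.module.state_dict(), PATH)        # save the inner model, without the prefix

plain = ConvNet(num_classes=10)                    # later: load into a non-DataParallel model
plain.load_state_dict(torch.load(PATH))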