
    Implementing and Training a LeNet-5 Model with PyTorch

    Posted by Yanjun on 2024-01-20 15:04:40

    LeNet-5 is a convolutional neural network proposed by Yann LeCun. Its structure is described in the paper "Gradient-Based Learning Applied to Document Recognition" and is shown in the figure below:
    [Figure: LeNet-5 network architecture]
    As the figure shows, the layers of the network are connected in order from left to right:

    1. Input layer: 32×32 images
    2. Convolutional layer 1: 6 kernels of size 5×5, stride 1
    3. Pooling layer 1: 2×2 filter, stride 2
    4. Convolutional layer 2: 16 kernels of size 5×5, stride 1
    5. Pooling layer 2: 2×2 filter, stride 2
    6. Fully connected layer 1: 120 nodes
    7. Fully connected layer 2: 84 nodes
    8. Fully connected layer 3: 10 nodes

    All we need to do is prepare a dataset and, following the connection structure in the figure above, build this CNN in PyTorch; we can then train and use the model.

    Implementing the LeNet-5 Model

    The basic environment is as follows:
    Python: 3.11.3
    PyTorch: 2.0.1 (torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2)
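
    To confirm that the installed versions match the ones above, a quick optional sanity check:

    import torch
    import torchvision

    # Print the versions of the installed packages.
    print(torch.__version__, torchvision.__version__)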

    1 Preparing the Dataset

    We use the classic handwritten-digit dataset MNIST, which can be downloaded and prepared directly through PyTorch's datasets.MNIST:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets
    from torchvision.transforms import ToTensor
    
    # Download training/test data from open datasets.
    training_data = datasets.MNIST(root="data", train=True, download=True, transform=ToTensor(),)
    test_data = datasets.MNIST(root="data", train=False, download=True, transform=ToTensor(),)
    
    batch_size = 64
    
    # Create data loaders.
    train_dataloader = DataLoader(training_data, batch_size=batch_size)
    test_dataloader = DataLoader(test_data, batch_size=batch_size)
    
    for X, y in test_dataloader:
        print(f"Shape of X [N, C, H, W]: {X.shape}")
        print(f"Shape of y: {y.shape} {y.dtype}")
        break
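
    To eyeball one of the samples before training, here is a minimal sketch (it assumes matplotlib is installed; matplotlib is not needed anywhere else in this article):

    import matplotlib.pyplot as plt

    # Each element of the dataset is an (image, label) pair;
    # after ToTensor() the image is a [1, 28, 28] float tensor.
    img, label = training_data[0]
    plt.imshow(img.squeeze(), cmap="gray")
    plt.title(f"Label: {label}")
    plt.show()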
    

    The images in the MNIST dataset are single-channel and 28×28, slightly smaller than the 32×32 input described in the paper, which changes the feature-map sizes inside the network (see the sketch below).
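
    Working through the layers with the usual no-padding output-size formula, floor((W - K) / S) + 1, a 28×28 input shrinks to 16 feature maps of size 4×4 before the first fully connected layer, which is why fc1 below takes 4*4*16 = 256 inputs. A small sketch of the arithmetic (the helper out_size is only for illustration):

    def out_size(w, k, s):
        # Output width of a conv/pool layer with kernel size k, stride s, no padding.
        return (w - k) // s + 1

    w = 28                 # MNIST input width/height
    w = out_size(w, 5, 1)  # conv1: 28 -> 24
    w = out_size(w, 2, 2)  # pool1: 24 -> 12
    w = out_size(w, 5, 1)  # conv2: 12 -> 8
    w = out_size(w, 2, 2)  # pool2:  8 -> 4
    print(w * w * 16)      # 256 features going into fc1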

    2 Implementing the Model

    Implementing the LeNet-5 model mainly consists of configuring each layer of the CNN and implementing the forward-pass computation, as shown below:

    # Get cpu, gpu or mps device for training.
    device = (
        "cuda"
        if torch.cuda.is_available()
        else "mps"
        if torch.backends.mps.is_available()
        else "cpu"
    )
    print(f"Using {device} device")
    
    # Define model
    class LeNet5Model(nn.Module):
        def __init__(self):
            super().__init__()
            self._conv1 = nn.Conv2d(1, 6, 5, 1)
            self._pool1 = nn.MaxPool2d(2)
            self._conv2 = nn.Conv2d(6, 16, 5, 1)
            self._pool2 = nn.MaxPool2d(2)
            self._fc1 = nn.Linear(4*4*16, 120)
            self._fc2 = nn.Linear(120, 84)
            self._fc3 = nn.Linear(84, 10)
    
        def forward(self, x):
            x = self._conv1(x)
            x = self._pool1(x)
            x = self._conv2(x)
            x = self._pool2(x)
            x = x.view(-1, 4 * 4 * 16)
            x = self._fc1(x)
            x = self._fc2(x)
            x = self._fc3(x)
            return x
    
    # Create model
    model = LeNet5Model().to(device)
    print(model)
    

    The printed LeNet-5 model structure looks like this:

    LeNet5Model(
      (_conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
      (_pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (_conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
      (_pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (_fc1): Linear(in_features=256, out_features=120, bias=True)
      (_fc2): Linear(in_features=120, out_features=84, bias=True)
      (_fc3): Linear(in_features=84, out_features=10, bias=True)
    )
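
    To double-check that the layer dimensions line up (in particular that fc1's in_features of 4*4*16 = 256 is correct), we can push a dummy batch through the model. This is only a sanity-check sketch, not part of the training code:

    # One fake single-channel 28x28 image.
    dummy = torch.randn(1, 1, 28, 28).to(device)
    print(model(dummy).shape)  # expected: torch.Size([1, 10])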
    

    3 Training the Model

    The training process is as follows: define the loss function and optimizer, run a training loop over the training set, and evaluate on the test set after every epoch. The code is shown below:

    # Define loss function and optimizer
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    
    # training loop
    def train(dataloader, model, loss_fn, optimizer):
        size = len(dataloader.dataset)
        model.train()
        for batch, (X, y) in enumerate(dataloader):
            X, y = X.to(device), y.to(device)
    
            # Compute prediction error
            pred = model(X)
            loss = loss_fn(pred, y)
    
            # Backpropagation
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    
            if batch % 100 == 0:
                loss, current = loss.item(), (batch + 1) * len(X)
                print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
                
    # test loop
    def test(dataloader, model, loss_fn):
        size = len(dataloader.dataset)
        num_batches = len(dataloader)
        model.eval()
        test_loss, correct = 0, 0
        with torch.no_grad():
            for X, y in dataloader:
                X, y = X.to(device), y.to(device)
                pred = model(X)
                test_loss += loss_fn(pred, y).item()
                correct += (pred.argmax(1) == y).type(torch.float).sum().item()
        test_loss /= num_batches
        correct /= size
        print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
        
    # execute model training and testing
    epochs = 10
    for t in range(epochs):
        print(f"Epoch {t+1}\n-------------------------------")
        train(train_dataloader, model, loss_fn, optimizer)
        test(test_dataloader, model, loss_fn)
    print("Done!")
    

    Running the code, you can watch the training iterate; sample output:

    Epoch 1
    -------------------------------
    loss: 2.317153  [   64/60000]
    loss: 2.305442  [ 6464/60000]
    loss: 2.306234  [12864/60000]
    loss: 2.324416  [19264/60000]
    loss: 2.302907  [25664/60000]
    loss: 2.297269  [32064/60000]
    loss: 2.305612  [38464/60000]
    loss: 2.291686  [44864/60000]
    loss: 2.301056  [51264/60000]
    loss: 2.307510  [57664/60000]
    Test Error: 
     Accuracy: 11.9%, Avg loss: 2.291815 
    
    Epoch 2
    -------------------------------
    loss: 2.293081  [   64/60000]
    loss: 2.284958  [ 6464/60000]
    loss: 2.288021  [12864/60000]
    loss: 2.292037  [19264/60000]
    loss: 2.281588  [25664/60000]
    loss: 2.278919  [32064/60000]
    loss: 2.274934  [38464/60000]
    loss: 2.274108  [44864/60000]
    loss: 2.271511  [51264/60000]
    loss: 2.271358  [57664/60000]
    Test Error: 
     Accuracy: 41.4%, Avg loss: 2.261012 
    
    ... ...
    
    Epoch 9
    -------------------------------
    loss: 0.506072  [   64/60000]
    loss: 0.397812  [ 6464/60000]
    loss: 0.355343  [12864/60000]
    loss: 0.390962  [19264/60000]
    loss: 0.431304  [25664/60000]
    loss: 0.475698  [32064/60000]
    loss: 0.307452  [38464/60000]
    loss: 0.511966  [44864/60000]
    loss: 0.481164  [51264/60000]
    loss: 0.522870  [57664/60000]
    Test Error: 
     Accuracy: 88.4%, Avg loss: 0.397837 
    
    Epoch 10
    -------------------------------
    loss: 0.454751  [   64/60000]
    loss: 0.365487  [ 6464/60000]
    loss: 0.319525  [12864/60000]
    loss: 0.371037  [19264/60000]
    loss: 0.389002  [25664/60000]
    loss: 0.452506  [32064/60000]
    loss: 0.273858  [38464/60000]
    loss: 0.487788  [44864/60000]
    loss: 0.447199  [51264/60000]
    loss: 0.497519  [57664/60000]
    Test Error: 
     Accuracy: 89.1%, Avg loss: 0.369253 
    
    Done!
    

    After training, we have the final model we want, so we save it for later use:

    saved_model_path = "LeNet5.pth"
    torch.save(model.state_dict(), saved_model_path)
    print("Saved PyTorch Model State to ", saved_model_path)
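
    Only the model's state_dict is saved here, which is enough for inference. If you also want to be able to resume training later, a common pattern is to save the optimizer state and epoch count as well (a sketch; the file name is just an example):

    checkpoint = {
        "epoch": epochs,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }
    torch.save(checkpoint, "LeNet5_checkpoint.pth")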
    

    4 Loading and Using the Model

    To use the saved model, we recreate the network, load the saved weights, and run a few samples from the test set through it:

    model = LeNet5Model().to(device)
    model.load_state_dict(torch.load("LeNet5.pth"))
    
    classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
    
    model.eval()
    for i in range(10):
        x, y = test_data[i][0], test_data[i][1]
        with torch.no_grad():
            x = x.to(device)
            pred = model(x)
            predicted, actual = classes[pred[0].argmax(0)], classes[y]
            print(f'Predicted: "{predicted}", Actual: "{actual}"')
    

    Sample results:

    Predicted: "7", Actual: "7"
    Predicted: "2", Actual: "2"
    Predicted: "1", Actual: "1"
    Predicted: "0", Actual: "0"
    Predicted: "4", Actual: "4"
    Predicted: "1", Actual: "1"
    Predicted: "4", Actual: "4"
    Predicted: "9", Actual: "9"
    Predicted: "6", Actual: "5"
    Predicted: "9", Actual: "9"
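
    The saved model can also be tried on images outside the MNIST test set. A minimal sketch, assuming a local digit image at a hypothetical path digit.png; the grayscale conversion and the resize to 28×28 are there to match the model's expected input:

    from PIL import Image
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=1),  # ensure a single channel
        transforms.Resize((28, 28)),                  # match the MNIST input size
        transforms.ToTensor(),
    ])

    img = Image.open("digit.png")                # hypothetical input image
    x = preprocess(img).unsqueeze(0).to(device)  # shape: [1, 1, 28, 28]
    with torch.no_grad():
        pred = model(x)
    print("Predicted:", classes[pred[0].argmax(0)])

    Keep in mind that MNIST digits are white strokes on a black background, so predictions on images with a different appearance may be unreliable without similar preprocessing.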
    

    References

    • https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html
    • https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d

