[Remote Sensing Image Classification] Building ResNet50 with the Paddle API for a Remote Sensing Classification Task
This article works through a remote sensing classification task. Using the 45-class land-use remote sensing image dataset released by Northwestern Polytechnical University in 2016, we build a custom RESISC45Dataset, construct a ResNet50 model, train and validate it to an accuracy of roughly 0.83, and finally run prediction and visualize the results.

Preface
①. About the task
Remote sensing classification refers to dividing remote sensing into different types according to the classification criteria used and the focus of detection and application. Computer classification of remote sensing images is based on the similarity between image pixels, commonly measured with distance metrics and correlation coefficients. Common classification methods include supervised and unsupervised classification.
By platform, remote sensing can be divided into spaceborne, airborne, and ground (near-surface) remote sensing. By the electromagnetic band detected, it can be divided into visible-light, infrared, microwave remote sensing, and so on.
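To make "distance and correlation coefficient" concrete, here is a minimal sketch comparing two pixel vectors. The band values are made up for illustration; they are not taken from the dataset:

```python
import numpy as np

# Two hypothetical pixel spectral vectors (e.g. band reflectances)
a = np.array([0.2, 0.4, 0.6, 0.8])
b = np.array([0.25, 0.35, 0.65, 0.75])

# Euclidean distance: smaller means the pixels are more similar
dist = np.linalg.norm(a - b)

# Pearson correlation coefficient: closer to 1 means more similar in shape
corr = np.corrcoef(a, b)[0, 1]

print(round(dist, 4), round(corr, 4))
```

A classifier based on these measures would assign a pixel to whichever class prototype it is closest to (or most correlated with).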
②. About the dataset
The dataset was released by Northwestern Polytechnical University in 2016 and contains remote sensing images of 45 land-use classes extracted from Google Earth.
It contains 45 class folders, each holding 700 remote sensing images, for a total of 31,500 images.
Each image is a three-channel, 256×256 JPG file.
Data preparation
Unzip the pre-split dataset
In [2]
# Unzip the dataset
!unzip -oq /home/aistudio/data/data131697/NWPU-RESISC45.zip

In [3]
# Inspect the dataset's file structure
!tree NWPU-RESISC45 -L 1
Custom dataset
In [1]
# Import packages
import paddle
from PIL import Image
import os
import numpy as np
import random
# Print the paddle version
print(paddle.__version__)

2.2.2

In [3]
class RESISC45Dataset(paddle.io.Dataset):
    def __init__(self, mode='train', label_path='NWPU-RESISC45/train_list.txt'):
        """
        Constructor: read the (image path, label) pairs from the list file
        """
        assert mode in ['train', 'eval', 'test'], 'mode is one of train, eval, test.'
        self.mode = mode.lower()
        self.label_path = label_path
        self.data = []
        with open(label_path) as f:
            for line in f.readlines():
                info = line.strip().split(' ')
                if len(info) > 1:
                    image_root = label_path.split('/')[0]
                    info[0] = os.path.join(image_root, info[0])
                    self.data.append([info[0].strip(), info[1].strip()])

    def preprocess(self, image):
        """
        Data augmentation and normalization
        """
        # Resize to the model's input size
        image = image.resize((224, 224), Image.BICUBIC)
        # Augmentation is applied in training mode only
        if self.mode == 'train':
            # Random horizontal flip
            if random.randint(0, 1) == 1:
                image = image.transpose(Image.FLIP_LEFT_RIGHT)
            # Random vertical flip
            if random.randint(0, 1) == 1:
                image = image.transpose(Image.FLIP_TOP_BOTTOM)
        # Normalize: scale to [0, 1], then subtract the per-channel mean and divide by the std
        image = np.asarray(image).astype('float32')
        mean = np.asarray([0.485, 0.456, 0.406], dtype=np.float32)[np.newaxis, np.newaxis, :]
        std = np.asarray([0.229, 0.224, 0.225], dtype=np.float32)[np.newaxis, np.newaxis, :]
        image = image / 255.0
        image = (image - mean) / std
        # HWC -> CHW, the layout Conv2D expects
        return paddle.to_tensor(image.transpose((2, 0, 1)))

    def __getitem__(self, index):
        """
        Fetch a single sample by index
        """
        image_file, label = self.data[index]
        image = Image.open(image_file)
        # Ensure three channels
        if image.mode != 'RGB':
            image = image.convert('RGB')
        # Apply preprocessing / augmentation
        image = self.preprocess(image)
        return image, np.array(label, dtype='int64')

    def __len__(self):
        """
        Total number of samples
        """
        return len(self.data)

Instantiate the datasets
In [4]
train_dataset = RESISC45Dataset(mode='train', label_path='NWPU-RESISC45/train_list.txt')
val_dataset = RESISC45Dataset(mode='eval', label_path='NWPU-RESISC45/val_list.txt')
test_dataset = RESISC45Dataset(mode='test', label_path='NWPU-RESISC45/test_list.txt')
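For reference, the normalization inside preprocess() reduces to: scale pixel values to [0, 1], subtract the per-channel ImageNet mean, divide by the per-channel std, then transpose HWC to CHW. A minimal numpy-only sketch of that math on a dummy image (no Paddle or image files required):

```python
import numpy as np

# ImageNet channel statistics used in the dataset's preprocess()
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

# A dummy 4x4 RGB "image" with every pixel at mid-gray (value 128)
image = np.full((4, 4, 3), 128, dtype=np.float32)

# Scale to [0, 1], subtract the mean, divide by the std (per channel, via broadcasting)
normalized = (image / 255.0 - mean) / std

# HWC -> CHW, the layout Conv2D layers expect
chw = normalized.transpose((2, 0, 1))
print(chw.shape)
```

The same statistics must be applied at train, eval, and test time, which is why the dataset class uses them in every mode.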
Model construction
The model built here is ResNet50. Paper: Deep_Residual_Learning_for_Image_Recognition
About ResNet
ResNet (Residual Neural Network) was proposed by Kaiming He and colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won ILSVRC 2015 with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Residual structure greatly speeds up network training and noticeably improves accuracy, and it generalizes well: residual connections can even be dropped directly into Inception-style networks.
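The key mechanism is the identity shortcut: each block learns a residual F(x), and the block output is ReLU(F(x) + x). A minimal numpy sketch of that shortcut, where F is a hypothetical linear map standing in for the block's conv+BN stack:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

# Stand-in residual branch F(x); in ResNet this is a stack of conv+BN layers
W = np.array([[0.1, -0.2],
              [0.3,  0.4]])

def residual_branch(x):
    return W @ x

x = np.array([1.0, 2.0])
# Identity shortcut: add the input back before the final activation
y = relu(residual_branch(x) + x)
print(y)
```

Because the shortcut passes x through unchanged, the block only has to learn the (often small) correction F(x), which is what makes very deep networks trainable.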
Implementation
In [13]
import paddle
import paddle.nn as nn
from paddle.nn import Conv2D, MaxPool2D, AdaptiveAvgPool2D, Linear, ReLU, BatchNorm2D
import paddle.nn.functional as F

# Conv + BatchNorm block
class ConvBNLayer(paddle.nn.Layer):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, act=None):
        super(ConvBNLayer, self).__init__()
        # Convolution layer
        self._conv = Conv2D(
            in_channels=in_channels,
            out_channels=out_channels,
            kernel_size=kernel_size,
            stride=stride,
            padding=(kernel_size - 1) // 2,
            bias_attr=False)
        # BatchNorm layer
        self._batch_norm = BatchNorm2D(out_channels)
        # Activation
        self.act = act

    def forward(self, inputs):
        y = self._conv(inputs)
        y = self._batch_norm(y)
        if self.act == 'relu':
            y = F.relu(x=y)
        return y

# Bottleneck residual block
class Bottleneckblock(paddle.nn.Layer):
    def __init__(self, inplane, in_channel, out_channel, stride=1, start=False):
        super(Bottleneckblock, self).__init__()
        self.stride = stride
        self.start = start
        self.conv0 = ConvBNLayer(in_channel, inplane, 1, stride=stride, act='relu')
        self.conv1 = ConvBNLayer(inplane, inplane, 3, stride=1, act='relu')
        self.conv2 = ConvBNLayer(inplane, out_channel, 1, stride=1, act=None)
        # Projection shortcut, used only in the first block of each stage
        self.conv3 = ConvBNLayer(in_channel, out_channel, 1, stride=stride, act=None)
        self.relu = nn.ReLU()

    def forward(self, inputs):
        y = inputs
        x = self.conv0(inputs)
        x = self.conv1(x)
        x = self.conv2(x)
        if self.start:
            y = self.conv3(y)
        z = self.relu(x + y)
        return z

class Resnet50(paddle.nn.Layer):
    def __init__(self, num_classes=45):
        super().__init__()
        # Stem layers
        self.stem = nn.Sequential(
            nn.Conv2D(3, out_channels=64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2D(64),
            nn.ReLU(),
            nn.MaxPool2D(kernel_size=3, stride=2, padding=1))
        # Residual stages
        self.layer1 = self.add_bottleneck_layer(3, 64, start=True)
        self.layer2 = self.add_bottleneck_layer(4, 128)
        self.layer3 = self.add_bottleneck_layer(6, 256)
        self.layer4 = self.add_bottleneck_layer(3, 512)
        # Head layers
        self.avgpool = nn.AdaptiveAvgPool2D(1)
        self.classifier = nn.Linear(2048, num_classes)

    def add_bottleneck_layer(self, num, inplane, start=False):
        layer = []
        if start:
            # First stage: no downsampling, input channels equal inplane
            layer.append(Bottleneckblock(inplane, inplane, inplane * 4, start=True))
        else:
            # Later stages: stride-2 downsampling in the first block
            layer.append(Bottleneckblock(inplane, inplane * 2, inplane * 4, stride=2, start=True))
        for i in range(num - 1):
            layer.append(Bottleneckblock(inplane, inplane * 4, inplane * 4))
        return nn.Sequential(*layer)

    def forward(self, inputs):
        x = self.stem(inputs)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.flatten(1)
        x = self.classifier(x)
        return x
Instantiate Resnet50 and print the model structure
In [14]
resnet50 = Resnet50(num_classes=45)

W0310 10:31:31.892053 141 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.0, Runtime API Version: 10.1
W0310 10:31:31.896260 141 device_context.cc:465] device: 0, cuDNN Version: 7.6.

In [15]
paddle.summary(resnet50, (1, 3, 224, 224))
-------------------------------------------------------------------------------
    Layer (type)         Input Shape          Output Shape         Param #
===============================================================================
      Conv2D-1        [[1, 3, 224, 224]]   [1, 64, 112, 112]        9,472
    BatchNorm2D-1     [[1, 64, 112, 112]]  [1, 64, 112, 112]         256
       ReLU-1         [[1, 64, 112, 112]]  [1, 64, 112, 112]          0
     MaxPool2D-1      [[1, 64, 112, 112]]   [1, 64, 56, 56]           0
        ...                  ...                  ...                ...
 Bottleneckblock-16    [[1, 2048, 7, 7]]    [1, 2048, 7, 7]           0
 AdaptiveAvgPool2D-1   [[1, 2048, 7, 7]]    [1, 2048, 1, 1]           0
      Linear-1            [[1, 2048]]           [1, 45]            92,205
===============================================================================
Total params: 23,653,421
Trainable params: 23,547,181
Non-trainable params: 106,240
-------------------------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 328.09
Params size (MB): 90.23
Estimated Total Size (MB): 418.89
-------------------------------------------------------------------------------
{'total_params': 23653421, 'trainable_params': 23547181}

Model training
Training setup
In [22]
from paddle.optimizer import Momentum
from paddle.optimizer.lr import CosineAnnealingDecay
from paddle.regularizer import L2Decay
from paddle.nn import CrossEntropyLoss
from paddle.metric import Accuracy
import math

# Total number of training epochs
Epochs = 30
# Batch size for the data loaders
Batch_size = 64
# Training steps per epoch
Step_each_epoch = math.ceil(len(train_dataset.data) / Batch_size)
# Learning rate schedule
Lr = CosineAnnealingDecay(learning_rate=0.06, T_max=Step_each_epoch * Epochs)
# Optimizer
Optimizer = Momentum(learning_rate=Lr, momentum=0.9, weight_decay=L2Decay(1e-4), parameters=resnet50.parameters())
# Loss function
Loss_fn = CrossEntropyLoss()
# Data loaders
Train_loader = paddle.io.DataLoader(train_dataset, batch_size=Batch_size, shuffle=True)
Val_loader = paddle.io.DataLoader(val_dataset, batch_size=Batch_size)
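CosineAnnealingDecay follows half a cosine period, decaying from the initial rate toward a minimum (0 by default) at step T_max: lr(t) = lr_min + ½ · (lr₀ − lr_min) · (1 + cos(π · t / T_max)). A self-contained sketch of that schedule, using the same initial rate of 0.06:

```python
import math

def cosine_annealing(lr0, t, t_max, lr_min=0.0):
    """Cosine-annealed learning rate at step t (0 <= t <= t_max)."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * t / t_max))

lr0, t_max = 0.06, 1000
print(cosine_annealing(lr0, 0, t_max))           # start: the full learning rate
print(cosine_annealing(lr0, t_max // 2, t_max))  # midpoint: half the rate
print(cosine_annealing(lr0, t_max, t_max))       # end: decayed to lr_min
```

The schedule starts flat, decays fastest in the middle of training, and flattens out again near the end, which tends to help the final epochs settle into a good minimum.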
Training
In [11]
def train(model, epochs, train_loader, val_loader, optimizer, loss_fn):
    '''
    Training loop
    '''
    acc_history = [0]
    for epoch in range(epochs):
        model.train()  # training mode
        for batch_id, data in enumerate(train_loader()):
            # Read one batch
            x_data = data[0]  # training images
            y_data = data[1]  # training labels
            y_data = paddle.reshape(y_data, (-1, 1))
            predicts = model(x_data)          # forward pass
            loss = loss_fn(predicts, y_data)  # compute the loss
            loss.backward()                   # backpropagation
            optimizer.step()                  # update parameters
            optimizer.clear_grad()            # reset gradients
        print("[TRAIN] epoch: {}/{}, loss is: {}".format(epoch + 1, epochs, loss.numpy()))
        model.eval()  # evaluation mode
        loss_list = []
        acc_list = []
        for batch_id, data in enumerate(val_loader()):
            # Read one batch
            x_data = data[0]  # validation images
            y_data = data[1]  # validation labels
            y_data = paddle.reshape(y_data, (-1, 1))
            predicts = model(x_data)               # forward pass
            loss = loss_fn(predicts, y_data)       # compute the loss
            acc = paddle.metric.accuracy(predicts, y_data)  # compute the accuracy
            loss_list.append(np.mean(loss.numpy()))
            acc_list.append(np.mean(acc.numpy()))
        print("[EVAL] Finished, Epoch={}, loss={}, acc={}".format(epoch + 1, np.mean(loss_list), np.mean(acc_list)))
        # Save a checkpoint whenever validation accuracy improves on the best so far
        if max(acc_history) < np.mean(acc_list):
            paddle.save(model.state_dict(), 'output/resnet50.pdparams')
        acc_history.append(np.mean(acc_list))

In [16]
# Run training
train(resnet50, Epochs, Train_loader, Val_loader, Optimizer, Loss_fn)
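For intuition about the optimizer.step() call, one common formulation of SGD with momentum is v ← μ·v + g, θ ← θ − lr·v (the framework's exact internals may differ in details such as where weight decay is applied). A minimal numpy sketch with illustrative values:

```python
import numpy as np

def momentum_step(param, grad, velocity, lr=0.06, mu=0.9):
    """One SGD-with-momentum update: v <- mu*v + g, theta <- theta - lr*v."""
    velocity = mu * velocity + grad
    param = param - lr * velocity
    return param, velocity

theta = np.array([1.0])   # a single illustrative parameter
v = np.array([0.0])       # velocity starts at zero
g = np.array([0.5])       # a hypothetical gradient

theta, v = momentum_step(theta, g, v)
print(theta, v)
```

The velocity term accumulates past gradients, so updates keep moving in a consistent direction and small oscillations cancel out.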
Model validation
The code below shows that the model reaches an accuracy of roughly 0.83.
In [12]
def val(model, val_loader):
    '''
    Validation loop
    '''
    model.eval()  # evaluation mode
    acc_list = []
    for batch_id, data in enumerate(val_loader()):
        x_data = data[0]  # validation images
        y_data = data[1]  # validation labels
        y_data = paddle.reshape(y_data, (-1, 1))
        predicts = model(x_data)                        # forward pass
        acc = paddle.metric.accuracy(predicts, y_data)  # batch accuracy
        acc_list.append(np.mean(acc.numpy()))
    print("Eval finished, acc={}".format(np.mean(acc_list)))

In [13]
# Load the saved weights
resnet50.set_state_dict(paddle.load('output/resnet50.pdparams'))
# Run validation
val(resnet50, Val_loader)

Eval finished, acc=0.8262536525726318
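paddle.metric.accuracy with the default k computes top-1 accuracy: the fraction of samples whose highest-scoring class equals the true label. An equivalent numpy sketch with made-up logits:

```python
import numpy as np

# Hypothetical logits for 4 samples over 3 classes, with their true labels
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.3, 0.2, 0.9],
                   [1.2, 0.8, 0.1]])
labels = np.array([0, 1, 2, 1])

# Top-1 accuracy: argmax of each row compared against the label
preds = np.argmax(logits, axis=1)
acc = np.mean(preds == labels)
print(acc)  # 3 of 4 correct -> 0.75
```

Note that the loop above averages per-batch accuracies; with a batch size that does not divide the dataset evenly, the last (smaller) batch is slightly over-weighted, which is usually negligible.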
Model prediction
The predicted labels are stored in the list results.
In [14]
def test(model, test_loader):
    model.eval()  # evaluation mode
    result_list = []
    for batch_id, data in enumerate(test_loader()):
        x_data = data[0]          # test images
        predicts = model(x_data)  # predicted scores
        result_list.append(np.argmax(predicts.numpy(), axis=1))  # keep the class indices
    print("predict finished")
    return result_list

In [15]
# Build the test data loader
Test_loader = paddle.io.DataLoader(test_dataset, batch_size=64)
# Load the saved weights
resnet50.set_state_dict(paddle.load('output/resnet50.pdparams'))
# Run prediction
results = test(resnet50, Test_loader)

predict finished
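Each entry of results is an array of class indices for one batch. Turning indices into class names only requires the lines of labels.txt. A sketch with a made-up three-class list standing in for the real 45-class file:

```python
import numpy as np

# Stand-in for the lines of NWPU-RESISC45/labels.txt (the real file has 45 names)
class_names = ['airplane', 'airport', 'baseball_diamond']

# Stand-in for one batch of predicted indices returned by test()
batch_preds = np.array([2, 0, 1])

# Map each predicted index to its class name
named = [class_names[i] for i in batch_preds]
print(named)
```

The visualization cell below does exactly this lookup when it sets each subplot title.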
Visualizing results
In [18]
# Imports
%matplotlib inline
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Build the label-index -> class-name mapping from labels.txt
test_list = []
with open('NWPU-RESISC45/labels.txt', 'r') as labels:
    for line in labels:
        test_list.append(line.strip())

# Plot five test images with their real and predicted labels
fig, axs = plt.subplots(nrows=5, ncols=1, figsize=(20, 20))
for i in range(5):
    img = cv2.imread(test_dataset.data[i + 10][0], 1)  # read the image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB
    ax = axs[i]
    ax.get_yaxis().set_visible(False)
    ax.get_xaxis().set_visible(False)
    ax.imshow(img)  # show the image
    # Title: real label (from the dataset's label index) vs predicted label
    ax.set_title('Real: %s \n Predict: %s' % (test_list[int(test_dataset.data[i + 10][1])], test_list[results[0][i + 10]]))





