Date: 2025-07-24  Author: 游乐小编
This article introduces Paddle.Hub, a new API in Paddle 2.1.0 for quickly loading external, extended models. Using a PaddleClas pretrained model to build a 12-class cat classifier as the example, it walks through syncing the code, listing and loading models, preprocessing the data, training the model, and running prediction, and it also notes a few issues in this release.
Latest documentation: direct link
API overview:
How to use: simply put, this is an API for quickly calling external, extended models. As long as the model code is hosted on GitHub or Gitee, or stored locally, it can be called through this API, which makes it easy for developers to share model code that others can use right away.

# Sync the PaddleClas code
!git clone https://gitee.com/PaddlePaddle/PaddleClas -b develop --depth 1

In [ ]
import paddle

# List the models available in the repo
model_list = paddle.hub.list('PaddleClas', source='local', force_reload=False)
print(model_list)

# Show the help document of a model
model_help = paddle.hub.help('PaddleClas', 'mobilenetv3_large_x1_25', source='local', force_reload=False)
print(model_help)

# Load the model
model = paddle.hub.load('PaddleClas', 'mobilenetv3_large_x1_25', source='local', force_reload=False)

# Quick test of the model
data = paddle.rand((1, 3, 224, 224))
out = model(data)
print(out.shape)  # [1, 1000]
['alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'densenet264', 'googlenet', 'inceptionv3', 'inceptionv4', 'mobilenetv1', 'mobilenetv1_x0_25', 'mobilenetv1_x0_5', 'mobilenetv1_x0_75', 'mobilenetv2_x0_25', 'mobilenetv2_x0_5', 'mobilenetv2_x0_75', 'mobilenetv2_x1_5', 'mobilenetv2_x2_0', 'mobilenetv3_large_x0_35', 'mobilenetv3_large_x0_5', 'mobilenetv3_large_x0_75', 'mobilenetv3_large_x1_0', 'mobilenetv3_large_x1_25', 'mobilenetv3_small_x0_35', 'mobilenetv3_small_x0_5', 'mobilenetv3_small_x0_75', 'mobilenetv3_small_x1_0', 'mobilenetv3_small_x1_25', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'resnext101_32x4d', 'resnext101_64x4d', 'resnext152_32x4d', 'resnext152_64x4d', 'resnext50_32x4d', 'resnext50_64x4d', 'shufflenetv2_x0_25', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg13', 'vgg16', 'vgg19']

MobileNetV3_large_x1_25
Args:
    pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
    class_dim: int=1000. Output dim of last fc layer.
Returns:
    model: nn.Layer. Specific `MobileNetV3_large_x1_25` model depends on args.

[1, 1000]
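The cells above load the repo from a local clone. paddle.hub can also pull the code straight from GitHub or Gitee without cloning it yourself. A minimal sketch, assuming this Paddle version accepts the 'repo_owner/repo_name:branch' repo string for remote sources:

import paddle

# Load the same model directly from the remote repo; paddle.hub downloads and caches it.
# source may be 'github', 'gitee' or 'local'; force_reload=True re-downloads the repo.
remote_model = paddle.hub.load('PaddlePaddle/PaddleClas:develop',
                               'mobilenetv3_large_x1_25',
                               source='gitee',
                               force_reload=False)

data = paddle.rand((1, 3, 224, 224))
print(remote_model(data).shape)  # expected: [1, 1000]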
# Unzip the cat-12 training and test images
!unzip -q -d /home/aistudio/data/data10954 /home/aistudio/data/data10954/cat_12_train.zip
!unzip -q -d /home/aistudio/data/data10954 /home/aistudio/data/data10954/cat_12_test.zip
For any dataset, the first thing to understand is what it consists of:
The unzipped dataset contains two image folders (cat_12_train and cat_12_test) and one data list file (train_list.txt).
To measure a model's performance properly, a dataset cannot consist of only a training set and a test set, so a portion of the training set is usually split off to serve as a validation set.
With the above in mind, we can start preprocessing the dataset in code.
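Before running the split below, it is worth peeking at the raw list file and the unzipped folders. A quick sketch, assuming the dataset sits under /home/aistudio/data/data10954 as above; the list file is expected to hold tab-separated image paths and labels (which is why the next cell splits on '\t'):

import os

data_root = '/home/aistudio/data/data10954'

# Show the first few raw lines of the label file (expected format: <image path>\t<label>)
with open(os.path.join(data_root, 'train_list.txt'), 'r', encoding='UTF-8') as f:
    for i, line in enumerate(f):
        if i >= 5:
            break
        print(repr(line))

# Count the images in each unzipped folder
for folder in ['cat_12_train', 'cat_12_test']:
    print(folder, len(os.listdir(os.path.join(data_root, folder))), 'images')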
In [ ]
import os
import paddle
import random

total = []

# Read the data labels
with open('/home/aistudio/data/data10954/train_list.txt', 'r', encoding='UTF-8') as f:
    for line in f:
        # Convert the format: strip the trailing newline and replace the tab with a space
        line = line[:-1].split('\t')
        total.append(' '.join(line) + '\n')

# Shuffle the data
random.shuffle(total)

'''
Split the dataset:
95% of the data is used for training,
5% of the data is used for validation.
'''
split_num = int(len(total) * 0.95)

# Write the training list
with open('/home/aistudio/data/data10954/train.txt', 'w', encoding='UTF-8') as f:
    for line in total[:split_num]:
        f.write(line)

# Write the validation list
with open('/home/aistudio/data/data10954/dev.txt', 'w', encoding='UTF-8') as f:
    for line in total[split_num:]:
        f.write(line)

# Write the test list
with open('/home/aistudio/data/data10954/test.txt', 'w', encoding='UTF-8') as f:
    for line in ['cat_12_test/%s\n' % img for img in os.listdir('/home/aistudio/data/data10954/cat_12_test')]:
        f.write(line)
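After the cell above has run, a quick sanity check (same paths as above, purely illustrative) confirms that the 95%/5% split and the test list look right:

data_root = '/home/aistudio/data/data10954'

# Count the entries written to each generated list file
for name in ['train.txt', 'dev.txt', 'test.txt']:
    with open(f'{data_root}/{name}', 'r', encoding='UTF-8') as f:
        print(name, sum(1 for _ in f), 'samples')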
The general steps of model training are:
1. Build the model
2. Build the dataset and the data loader
3. Configure the parameters
4. Build the training task
5. Start training

Note: please restart the Notebook kernel before starting training.
Note: at the moment, only the CPU environment runs the code below correctly.
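Given that note, it can help to pin the compute device explicitly before building the training task, so the code below does not try to run on a GPU place. A minimal sketch using paddle.set_device:

import paddle

# Force all subsequent tensors and layers onto the CPU
paddle.set_device('cpu')
print(paddle.get_device())  # expected: 'cpu'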
In [ ]
import os
import paddle
import random
import paddle.nn as nn
import paddle.vision.transforms as T

# Build the dataset
class CatDataset(paddle.io.Dataset):
    def __init__(self, transforms, dataset_path='/home/aistudio/data/data10954', mode='train'):
        self.mode = mode
        self.dataset_path = dataset_path
        self.transforms = transforms
        self.num_classes = 12  # 12 cat categories
        if self.mode == 'train':
            self.file = 'train.txt'
        elif self.mode == 'dev':
            self.file = 'dev.txt'
        else:
            self.file = 'test.txt'
        self.file = os.path.join(dataset_path, self.file)
        with open(self.file, 'r') as file:
            self.data = file.read()[:-1].split('\n')

    def __getitem__(self, idx):
        if self.mode in ['train', 'dev']:
            img_path, grt = self.data[idx].split(' ')
            img_path = os.path.join(self.dataset_path, img_path)
            im = paddle.vision.image_load(img_path)
            im = im.convert("RGB")
            im = self.transforms(im)
            return im, int(grt)
        else:
            img_path = self.data[idx]
            img_path = os.path.join(self.dataset_path, img_path)
            im = paddle.vision.image_load(img_path)
            im = im.convert("RGB")
            im = self.transforms(im)
            return im

    def __len__(self):
        return len(self.data)

# Build the data transforms and load the datasets
train_transforms = T.Compose([
    T.Resize(256),
    T.RandomCrop(224),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
test_transforms = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])
train_dataset = CatDataset(train_transforms, mode='train')
dev_dataset = CatDataset(test_transforms, mode='dev')
test_dataset = CatDataset(test_transforms, mode='test')

# Load the model with pretrained weights and a 12-class output layer
model = paddle.hub.load('PaddleClas', 'mobilenetv3_large_x0_5', source='local', force_reload=False, class_dim=12, pretrained=True)
model = paddle.Model(model)

# Define the optimizer
opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

# Configure the model
model.prepare(optimizer=opt, loss=nn.CrossEntropyLoss(), metrics=paddle.metric.Accuracy(topk=(1, 5)))

# Start training
model.fit(
    train_data=train_dataset,
    eval_data=dev_dataset,
    batch_size=32,
    epochs=2,
    eval_freq=1,
    log_freq=1,
    save_dir='save_models',
    save_freq=1,
    verbose=1,
    drop_last=False,
    shuffle=True,
    num_workers=0
)
2024-05-18 12:43:59 INFO: unique_endpoints {''}
2024-05-18 12:43:59 INFO: Downloading MobileNetV3_large_x0_5_pretrained.pdparams from https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams
100%|██████████| 15875/15875 [00:00<00:00, 18983.36it/s]
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1297: UserWarning: Skip loading for out.weight. out.weight receives a shape [1280, 1000], but the expected shape is [1280, 12].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1297: UserWarning: Skip loading for out.bias. out.bias receives a shape [1000], but the expected shape is [12].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/2
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
step 65/65 [==============================] - loss: 2.7684 - acc_top1: 0.6628 - acc_top5: 0.9464 - 3s/step
save checkpoint at /home/aistudio/save_models/0
Eval begin...
step 4/4 [==============================] - loss: 0.8948 - acc_top1: 0.7685 - acc_top5: 0.9907 - 732ms/step
Eval samples: 108
Epoch 2/2
step 65/65 [==============================] - loss: 0.5738 - acc_top1: 0.8397 - acc_top5: 0.9942 - 3s/step
save checkpoint at /home/aistudio/save_models/1
Eval begin...
step 4/4 [==============================] - loss: 0.5484 - acc_top1: 0.8611 - acc_top5: 0.9907 - 779ms/step
Eval samples: 108
save checkpoint at /home/aistudio/save_models/final
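Once training finishes, the checkpoint written by model.fit can be reloaded into the high-level API and re-scored on the validation split. A sketch, assuming the save_models/final checkpoint and the dev_dataset from the training cell are still in scope:

# Restore the final checkpoint (save_dir='save_models' in model.fit)
model.load('save_models/final')

# Re-evaluate on the held-out validation set
eval_result = model.evaluate(dev_dataset, batch_size=32, verbose=1)
print(eval_result)  # e.g. loss, acc_top1, acc_top5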
The general steps of model prediction are:
1. Read the data
2. Run the model prediction
3. Post-process the prediction results
4. Write out the final results

In [ ]
import numpy as np

# Run prediction on the test set
results = model.predict(test_dataset, batch_size=32, num_workers=0, stack_outputs=True, callbacks=None)

# Post-process the predictions: keep the image file name and the argmax class
total = []
for img, result in zip(test_dataset.data, np.argmax(results[0], 1)):
    total.append('%s,%s\n' % (img.split('/')[-1], result))

# Write the result file
with open('result.csv', 'w') as f:
    for line in total:
        f.write(line)
Predict begin...
step 8/8 [==============================] - 805ms/step
Predict samples: 240
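For a quick spot check outside Model.predict, a single image can also be pushed through the underlying network directly. A sketch, assuming the trained model (a paddle.Model) and test_transforms from the training cell are still in scope; the sample path here is just the first file found in the test folder:

import os
import numpy as np
import paddle

data_root = '/home/aistudio/data/data10954'

# Take the first image in the test folder as an example input
test_dir = os.path.join(data_root, 'cat_12_test')
sample_path = os.path.join(test_dir, sorted(os.listdir(test_dir))[0])

im = paddle.vision.image_load(sample_path).convert('RGB')
im = test_transforms(im)           # same preprocessing as the test set
im = paddle.unsqueeze(im, axis=0)  # add a batch dimension -> [1, 3, 224, 224]

model.network.eval()               # model.network is the wrapped nn.Layer
logits = model.network(im)
print('predicted class:', int(np.argmax(logits.numpy(), axis=1)[0]))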