iFLYTEK Face Keypoint Detection Challenge: A Baseline Approach (MAE 2.2)


热心网友 · Repost · 2025-07-17
This article presents a baseline solution for a face keypoint detection competition involving 4 facial keypoints. The data consists of 5,000 annotated training images and about 2,000 test images, each sample pairing an image with coordinate labels. Two models are built, a fully connected network and a CNN; after data loading, preprocessing, training, and validation, the CNN performs better, reaching a validation MAE of about 0.061 after 40 epochs. Finally, the model predicts on the test set and the results are visualized.


Competition Background

Face recognition is a biometric technology that identifies a person from facial feature information; finance and security are currently its two most widespread application areas. Face keypoint detection is a core technique within face recognition: it must locate specified positions on a face, such as the coordinates of the eyebrows, eyes, nose, mouth, and facial contour.



Task

Given a face image, locate 4 facial keypoints; the task can be treated as a keypoint detection (regression) problem.

Training set: 5,000 face images with keypoint annotations.

Test set: about 2,000 face images for which the keypoint positions must be predicted.

Data Description

The data comprises a training set and a test set: train.csv holds the training-set annotations, while train.npy and test.npy hold the training and test images respectively and can be read with numpy.load.

Each row of train.csv gives the left-eye, right-eye, nose-tip, and mouth (bottom-lip center) coordinates: 4 points, i.e. 8 values per face.

left_eye_center_x,left_eye_center_y,right_eye_center_x,right_eye_center_y,nose_tip_x,nose_tip_y,mouth_center_bottom_lip_x,mouth_center_bottom_lip_y
66.3423640449,38.5236134831,28.9308404494,35.5777725843,49.256844943800004,68.2759550562,47.783946067399995,85.3615820245
68.9126037736,31.409116981100002,29.652226415100003,33.0280754717,51.913358490600004,48.408452830200005,50.6988679245,79.5740377358
68.7089943925,40.371149158899996,27.1308201869,40.9406803738,44.5025226168,69.9884859813,45.9264269159,86.2210093458
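Each annotation row can be reshaped into four (x, y) pairs. A small self-contained check, using the first sample row above:

```python
import io
import pandas as pd

# First sample row from train.csv, as shown above
csv_text = (
    "left_eye_center_x,left_eye_center_y,right_eye_center_x,right_eye_center_y,"
    "nose_tip_x,nose_tip_y,mouth_center_bottom_lip_x,mouth_center_bottom_lip_y\n"
    "66.3423640449,38.5236134831,28.9308404494,35.5777725843,"
    "49.2568449438,68.2759550562,47.7839460674,85.3615820245\n"
)
df = pd.read_csv(io.StringIO(csv_text))
points = df.iloc[0].values.reshape(-1, 2)  # 8 values -> 4 (x, y) keypoints
print(points.shape)  # (4, 2)
```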

评审规则

Submissions are scored by regression MAE (mean absolute error): the smaller the value, the better, with a best possible score of 0. Reference evaluation code:

from sklearn.metrics import mean_absolute_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_absolute_error(y_true, y_pred)
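For intuition, MAE can also be computed directly with NumPy; on the four sample values above it evaluates to (0.5 + 0.5 + 0 + 1) / 4 = 0.5:

```python
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error: the average of |y_true - y_pred|
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

print(mae([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # 0.5
```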

Step 1: Extract the Dataset

In [1]
!echo y | unzip -O CP936 /home/aistudio/data/data117050/人脸关键点检测挑战赛_数据集.zip
!mv 人脸关键点检测挑战赛_数据集/* ./
!echo y | unzip test.npy.zip
!echo y | unzip train.npy.zip
Archive:  /home/aistudio/data/data117050/人脸关键点检测挑战赛_数据集.zip
  inflating: 人脸关键点检测挑战赛_数据集/sample_submit.csv
  inflating: 人脸关键点检测挑战赛_数据集/test.npy.zip
  inflating: 人脸关键点检测挑战赛_数据集/train.csv
  inflating: 人脸关键点检测挑战赛_数据集/train.npy.zip
Archive:  test.npy.zip
replace test.npy? [y]es, [n]o, [A]ll, [N]one, [r]ename:   inflating: test.npy
Archive:  train.npy.zip
replace train.npy? [y]es, [n]o, [A]ll, [N]one, [r]ename:   inflating: train.npy

Step 2: Load the Data

In [2]
import pandas as pd
import numpy as np
train.csv: the eight keypoint coordinate values for each training image.
train.npy: training-set images.
test.npy: test-set images.
In [3]
# Load the annotations; fill missing values with 48 (half of the 96-pixel image size)
train_df = pd.read_csv('train.csv')
train_df = train_df.fillna(48)
train_df.head()
   left_eye_center_x  left_eye_center_y  right_eye_center_x  right_eye_center_y  nose_tip_x  nose_tip_y  mouth_center_bottom_lip_x  mouth_center_bottom_lip_y
0          66.342364          38.523613           28.930840           35.577773   49.256845   68.275955                  47.783946                  85.361582
1          68.912604          31.409117           29.652226           33.028075   51.913358   48.408453                  50.698868                  79.574038
2          68.708994          40.371149           27.130820           40.940680   44.502523   69.988486                  45.926427                  86.221009
3          65.334176          35.471878           29.366461           37.767684   50.411373   64.934767                  50.028780                  74.883241
4          68.634857          29.999486           31.094571           29.616429   50.247429   51.450857                  47.948571                  84.394286
In [4]
# Load the image arrays; the .npy files store the image index on the last axis
train_img = np.load('train.npy')
test_img = np.load('test.npy')
train_img = np.transpose(train_img, [2, 0, 1])
train_img = train_img.reshape(-1, 1, 96, 96)
test_img = np.transpose(test_img, [2, 0, 1])
test_img = test_img.reshape(-1, 1, 96, 96)
print(train_img.shape, test_img.shape)
(5000, 1, 96, 96) (2049, 1, 96, 96)
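The transpose/reshape step moves the image index from the last axis to the first; a minimal NumPy sketch with a 5-image stand-in array:

```python
import numpy as np

stack = np.arange(96 * 96 * 5, dtype=np.float32).reshape(96, 96, 5)  # (H, W, N) stand-in
imgs = np.transpose(stack, [2, 0, 1]).reshape(-1, 1, 96, 96)         # -> (N, 1, H, W)
print(imgs.shape)  # (5, 1, 96, 96)
assert np.array_equal(imgs[3, 0], stack[:, :, 3])  # image 3 survives intact
```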

Step 3: Visualize the Dataset

In [5]
%pylab inline
idx = 409
xy = train_df.iloc[idx].values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(train_img[idx, 0, :, :], cmap='gray')
Populating the interactive namespace from numpy and matplotlib
In [6]
idx = 4090
xy = train_df.iloc[idx].values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(train_img[idx, 0, :, :], cmap='gray')
In [7]
# Mean keypoint positions; 96 - coords flips the axes so the scatter matches image orientation
xy = 96 - train_df.mean(0).values.reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')

Step 4: Build the Datasets and Models

In [8]
import paddle
paddle.__version__
'2.2.2'

Fully Connected Model

In [9]
from paddle.io import DataLoader, Dataset
from PIL import Image

# Custom dataset: scales pixels to [0, 1] and keypoints to [0, 1] (images are 96x96)
class MyDataset(Dataset):
    def __init__(self, img, keypoint):
        super(MyDataset, self).__init__()
        self.img = img
        self.keypoint = keypoint

    def __getitem__(self, index):
        img = Image.fromarray(self.img[index, 0, :, :])
        return np.asarray(img).astype(np.float32) / 255, self.keypoint[index] / 96.0

    def __len__(self):
        return len(self.keypoint)

# Training set: all but the last 500 images
train_dataset = MyDataset(
    train_img[:-500, :, :, :],
    paddle.to_tensor(train_df.values[:-500].astype(np.float32))
)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

# Validation set: the last 500 images
val_dataset = MyDataset(
    train_img[-500:, :, :, :],
    paddle.to_tensor(train_df.values[-500:].astype(np.float32))
)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Test set: dummy zero labels, one row per test image
test_dataset = MyDataset(
    test_img,
    paddle.to_tensor(np.zeros((test_img.shape[0], 8)))
)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
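The dataset normalizes both pixels and keypoints to [0, 1]; predictions are later mapped back to pixel coordinates by multiplying by 96. A quick round-trip check of that convention:

```python
import numpy as np

keypoints_px = np.array([66.34, 38.52, 28.93, 35.58, 49.26, 68.28, 47.78, 85.36])
normed = keypoints_px / 96.0   # what __getitem__ returns as the target
restored = normed * 96.0       # what is applied to model predictions
assert np.allclose(restored, keypoints_px)
```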
In [10]
# Fully connected model
model = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(96 * 96, 128),
    paddle.nn.LeakyReLU(),
    paddle.nn.Linear(128, 8)
)
paddle.summary(model, (64, 96, 96))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Flatten-1       [[64, 96, 96]]         [64, 9216]             0
   Linear-1         [[64, 9216]]          [64, 128]          1,179,776
  LeakyReLU-1       [[64, 128]]           [64, 128]              0
   Linear-2         [[64, 128]]            [64, 8]             1,032
===========================================================================
Total params: 1,180,808
Trainable params: 1,180,808
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 2.25
Forward/backward pass size (MB): 4.63
Params size (MB): 4.50
Estimated Total Size (MB): 11.38
---------------------------------------------------------------------------
{'total_params': 1180808, 'trainable_params': 1180808}
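The parameter count in the summary can be verified by hand: each Linear layer has in_features × out_features weights plus out_features biases:

```python
fc1 = 96 * 96 * 128 + 128  # Linear(9216, 128): weights + biases
fc2 = 128 * 8 + 8          # Linear(128, 8)
print(fc1, fc2, fc1 + fc2)  # 1179776 1032 1180808
```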
In [11]
# Loss function and optimizer
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.MSELoss()

from sklearn.metrics import mean_absolute_error

for epoch in range(0, 40):
    Train_Loss, Val_Loss = [], []
    Train_MAE, Val_MAE = [], []

    # Training
    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        # note: scaled by 96 and additionally divided by the batch size,
        # so this is not a plain pixel-space MAE
        Train_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    # Validation
    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'MAE {np.mean(Train_MAE):3.5f}/{np.mean(Val_MAE):3.5f}')
Epoch: 0   Loss 0.05956/0.02340   MAE 0.25278/0.18601
Epoch: 1   Loss 0.02075/0.02269   MAE 0.17376/0.17984
Epoch: 2   Loss 0.01832/0.01881   MAE 0.16236/0.16371
Epoch: 3   Loss 0.01752/0.01729   MAE 0.15944/0.15727
Epoch: 4   Loss 0.01630/0.01783   MAE 0.15351/0.16075
Epoch: 5   Loss 0.01535/0.01593   MAE 0.14883/0.15059
Epoch: 6   Loss 0.01489/0.01655   MAE 0.14582/0.15519
Epoch: 7   Loss 0.01469/0.01596   MAE 0.14487/0.14971
Epoch: 8   Loss 0.01362/0.01582   MAE 0.13930/0.15087
Epoch: 9   Loss 0.01355/0.01506   MAE 0.13915/0.14637
Epoch: 10  Loss 0.01293/0.01490   MAE 0.13586/0.14514
Epoch: 11  Loss 0.01289/0.01367   MAE 0.13555/0.13847
Epoch: 12  Loss 0.01187/0.01372   MAE 0.12944/0.13950
Epoch: 13  Loss 0.01184/0.01281   MAE 0.12905/0.13358
Epoch: 14  Loss 0.01181/0.01534   MAE 0.12995/0.14891
Epoch: 15  Loss 0.01124/0.01334   MAE 0.12593/0.13727
Epoch: 16  Loss 0.01083/0.01371   MAE 0.12342/0.14003
Epoch: 17  Loss 0.01057/0.01181   MAE 0.12188/0.12769
Epoch: 18  Loss 0.01041/0.01207   MAE 0.12105/0.12884
Epoch: 19  Loss 0.01017/0.01149   MAE 0.11868/0.12613
Epoch: 20  Loss 0.00965/0.01348   MAE 0.11610/0.13499
Epoch: 21  Loss 0.00993/0.01133   MAE 0.11817/0.12543
Epoch: 22  Loss 0.00906/0.01080   MAE 0.11226/0.12200
Epoch: 23  Loss 0.00883/0.01117   MAE 0.11127/0.12394
Epoch: 24  Loss 0.00865/0.01064   MAE 0.10986/0.12086
Epoch: 25  Loss 0.00924/0.01023   MAE 0.11396/0.11844
Epoch: 26  Loss 0.00850/0.01001   MAE 0.10874/0.11812
Epoch: 27  Loss 0.00801/0.00998   MAE 0.10525/0.11665
Epoch: 28  Loss 0.00809/0.00978   MAE 0.10666/0.11558
Epoch: 29  Loss 0.00743/0.01073   MAE 0.10161/0.12184
Epoch: 30  Loss 0.00752/0.00916   MAE 0.10146/0.11186
Epoch: 31  Loss 0.00715/0.00982   MAE 0.09895/0.11673
Epoch: 32  Loss 0.00717/0.00907   MAE 0.09980/0.11068
Epoch: 33  Loss 0.00718/0.00967   MAE 0.09976/0.11560
Epoch: 34  Loss 0.00677/0.01463   MAE 0.09663/0.14721
Epoch: 35  Loss 0.00764/0.00852   MAE 0.10249/0.10766
Epoch: 36  Loss 0.00650/0.00916   MAE 0.09434/0.11061
Epoch: 37  Loss 0.00644/0.00840   MAE 0.09397/0.10676
Epoch: 38  Loss 0.00642/0.00852   MAE 0.09410/0.10684
Epoch: 39  Loss 0.00611/0.00798   MAE 0.09161/0.10284
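The MAE logged during training is computed on [0, 1]-normalized coordinates, multiplied by 96, and then divided by the batch size. On normalized values alone, multiplying by 96 recovers a pixel-space error; a small sketch with hypothetical values:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# hypothetical normalized targets/predictions for two samples, two values each
y_true = np.array([[0.50, 0.25], [0.75, 0.50]])
y_pred = np.array([[0.50, 0.30], [0.70, 0.50]])
pixel_mae = mean_absolute_error(y_true, y_pred) * 96  # back to pixel units
print(round(pixel_mae, 2))  # 2.4
```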
In [13]
# Prediction helper: run the model over a loader and stack the batch outputs
def make_predict(model, loader):
    model.eval()
    predict_list = []
    for i, (x, y) in enumerate(loader):
        pred = model(x)
        predict_list.append(pred.numpy())
    return np.vstack(predict_list)

test_pred = make_predict(model, test_loader) * 96  # back to pixel coordinates
In [14]
idx = 40
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
In [15]
idx = 42
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')

CNN Model

In [17]
from paddle.io import DataLoader, Dataset
from PIL import Image

# Same dataset, but images keep an explicit channel axis for the CNN
class MyDataset(Dataset):
    def __init__(self, img, keypoint):
        super(MyDataset, self).__init__()
        self.img = img
        self.keypoint = keypoint

    def __getitem__(self, index):
        img = Image.fromarray(self.img[index, 0, :, :])
        return np.asarray(img).reshape(1, 96, 96).astype(np.float32) / 255, self.keypoint[index] / 96.0

    def __len__(self):
        return len(self.keypoint)

train_dataset = MyDataset(
    train_img[:-500, :, :, :],
    paddle.to_tensor(train_df.values[:-500].astype(np.float32))
)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

val_dataset = MyDataset(
    train_img[-500:, :, :, :],
    paddle.to_tensor(train_df.values[-500:].astype(np.float32))
)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)

# Dummy zero labels, one row per test image
test_dataset = MyDataset(
    test_img,
    paddle.to_tensor(np.zeros((test_img.shape[0], 8)))
)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
In [18]
# Convolutional model: three conv/pool blocks, then a linear head
model = paddle.nn.Sequential(
    paddle.nn.Conv2D(1, 10, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Conv2D(10, 20, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Conv2D(20, 40, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),
    paddle.nn.Flatten(),
    paddle.nn.Linear(2560, 8),
)
paddle.summary(model, (64, 1, 96, 96))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-4      [[64, 1, 96, 96]]     [64, 10, 92, 92]         260
    ReLU-4       [[64, 10, 92, 92]]    [64, 10, 92, 92]          0
  MaxPool2D-4    [[64, 10, 92, 92]]    [64, 10, 46, 46]          0
   Conv2D-5      [[64, 10, 46, 46]]    [64, 20, 42, 42]        5,020
    ReLU-5       [[64, 20, 42, 42]]    [64, 20, 42, 42]          0
  MaxPool2D-5    [[64, 20, 42, 42]]    [64, 20, 21, 21]          0
   Conv2D-6      [[64, 20, 21, 21]]    [64, 40, 17, 17]       20,040
    ReLU-6       [[64, 40, 17, 17]]    [64, 40, 17, 17]          0
  MaxPool2D-6    [[64, 40, 17, 17]]     [64, 40, 8, 8]           0
   Flatten-3      [[64, 40, 8, 8]]        [64, 2560]             0
   Linear-4         [[64, 2560]]           [64, 8]            20,488
===========================================================================
Total params: 45,808
Trainable params: 45,808
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 2.25
Forward/backward pass size (MB): 145.54
Params size (MB): 0.17
Estimated Total Size (MB): 147.97
---------------------------------------------------------------------------
{'total_params': 45808, 'trainable_params': 45808}
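The 2560 input size of the final Linear layer follows from the spatial dimensions: each block applies a 5x5 'valid' convolution (size − 4) and then a 2x2 max-pool (floor halving):

```python
def block_out(size, kernel=5, pool=2):
    # valid convolution followed by 2x2 max-pooling
    return (size - kernel + 1) // pool

s = 96
for _ in range(3):
    s = block_out(s)  # 96 -> 46 -> 21 -> 8
print(s, 40 * s * s)  # 8 2560
```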
In [19]
# Loss function and optimizer
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.MSELoss()

from sklearn.metrics import mean_absolute_error

for epoch in range(0, 40):
    Train_Loss, Val_Loss = [], []
    Train_MAE, Val_MAE = [], []

    # Training
    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        Train_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    # Validation
    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_MAE.append(mean_absolute_error(y.numpy(), pred.numpy()) * 96 / y.shape[0])

    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'MAE {np.mean(Train_MAE):3.5f}/{np.mean(Val_MAE):3.5f}')
Epoch: 0   Loss 0.23343/0.03865   MAE 0.44735/0.23946
Epoch: 1   Loss 0.03499/0.03301   MAE 0.22689/0.22072
Epoch: 2   Loss 0.03006/0.02846   MAE 0.20913/0.20492
Epoch: 3   Loss 0.02614/0.02548   MAE 0.19541/0.19341
Epoch: 4   Loss 0.02270/0.02314   MAE 0.18112/0.18211
Epoch: 5   Loss 0.01965/0.01952   MAE 0.16927/0.16763
Epoch: 6   Loss 0.01704/0.01763   MAE 0.15715/0.15866
Epoch: 7   Loss 0.01492/0.01483   MAE 0.14711/0.14516
Epoch: 8   Loss 0.01260/0.01268   MAE 0.13498/0.13350
Epoch: 9   Loss 0.01034/0.00996   MAE 0.12187/0.11828
Epoch: 10  Loss 0.00855/0.00836   MAE 0.11041/0.10738
Epoch: 11  Loss 0.00751/0.00737   MAE 0.10320/0.10133
Epoch: 12  Loss 0.00644/0.00657   MAE 0.09478/0.09471
Epoch: 13  Loss 0.00592/0.00626   MAE 0.09048/0.09321
Epoch: 14  Loss 0.00556/0.00568   MAE 0.08704/0.08790
Epoch: 15  Loss 0.00518/0.00538   MAE 0.08444/0.08551
Epoch: 16  Loss 0.00491/0.00524   MAE 0.08204/0.08433
Epoch: 17  Loss 0.00474/0.00495   MAE 0.08087/0.08178
Epoch: 18  Loss 0.00450/0.00476   MAE 0.07885/0.08041
Epoch: 19  Loss 0.00431/0.00460   MAE 0.07685/0.07922
Epoch: 20  Loss 0.00421/0.00458   MAE 0.07596/0.07887
Epoch: 21  Loss 0.00393/0.00421   MAE 0.07302/0.07515
Epoch: 22  Loss 0.00387/0.00419   MAE 0.07282/0.07502
Epoch: 23  Loss 0.00373/0.00416   MAE 0.07131/0.07482
Epoch: 24  Loss 0.00354/0.00385   MAE 0.06945/0.07177
Epoch: 25  Loss 0.00347/0.00386   MAE 0.06882/0.07173
Epoch: 26  Loss 0.00340/0.00368   MAE 0.06781/0.06999
Epoch: 27  Loss 0.00323/0.00363   MAE 0.06601/0.06949
Epoch: 28  Loss 0.00320/0.00349   MAE 0.06580/0.06794
Epoch: 29  Loss 0.00307/0.00349   MAE 0.06427/0.06842
Epoch: 30  Loss 0.00300/0.00336   MAE 0.06357/0.06692
Epoch: 31  Loss 0.00291/0.00329   MAE 0.06240/0.06611
Epoch: 32  Loss 0.00287/0.00326   MAE 0.06206/0.06594
Epoch: 33  Loss 0.00280/0.00323   MAE 0.06119/0.06572
Epoch: 34  Loss 0.00276/0.00312   MAE 0.06076/0.06427
Epoch: 35  Loss 0.00268/0.00304   MAE 0.05994/0.06345
Epoch: 36  Loss 0.00262/0.00301   MAE 0.05915/0.06306
Epoch: 37  Loss 0.00256/0.00294   MAE 0.05834/0.06231
Epoch: 38  Loss 0.00256/0.00288   MAE 0.05833/0.06166
Epoch: 39  Loss 0.00246/0.00284   MAE 0.05717/0.06128
In [20]
# Prediction helper: run the model over a loader and stack the batch outputs
def make_predict(model, loader):
    model.eval()
    predict_list = []
    for i, (x, y) in enumerate(loader):
        pred = model(x)
        predict_list.append(pred.numpy())
    return np.vstack(predict_list)

test_pred = make_predict(model, test_loader) * 96  # back to pixel coordinates
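The archive also contains sample_submit.csv. A hedged sketch of writing the pixel-space predictions to a submission file, assuming the columns match train.csv; the actual required format should be checked against sample_submit.csv:

```python
import io
import numpy as np
import pandas as pd

# assumed column order, taken from train.csv
cols = ['left_eye_center_x', 'left_eye_center_y',
        'right_eye_center_x', 'right_eye_center_y',
        'nose_tip_x', 'nose_tip_y',
        'mouth_center_bottom_lip_x', 'mouth_center_bottom_lip_y']
pred_px = np.random.rand(4, 8) * 96          # stand-in for the real test_pred
submit = pd.DataFrame(pred_px, columns=cols)
buf = io.StringIO()
submit.to_csv(buf, index=False)              # pass 'submit.csv' instead to write a file
print(submit.shape)  # (4, 8)
```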
In [21]
idx = 40
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
In [22]
idx = 42
xy = test_pred[idx, :].reshape(-1, 2)
plt.scatter(xy[:, 0], xy[:, 1], c='r')
plt.imshow(test_img[idx, 0, :, :], cmap='gray')
Source: https://www.php.cn/faq/1412436.html