
Cat-12 Classification with Paddle 2.0

热心网友 · Reposted · 2025-07-21
This project uses Paddle 2.0rc to solve a 12-class cat classification problem, with the goal of learning Paddle 2.0's image-classification workflow. The dataset contains 2,160 training images and 240 test images. The notebook unzips the data, loads the required libraries, configures global parameters, splits the data into training and validation sets, defines a dataset class, initializes a mobilenet_v1 model, configures the optimizer, trains for 10 epochs, and finally evaluates, predicts on the test set, and saves the model and results.


Project description

This project uses the Paddle 2.0rc release to solve a 12-class cat classification problem.

The main goal is to run the full pipeline end to end and learn how Paddle 2.0rc handles image classification.

Dataset introduction

The training set contains 2,160 images and the test set 240 images, spanning 12 cat classes.



Unzip the raw dataset

In [ ]
!unzip -q data/data63045/cat_12_train.zip -d work/
!unzip -q data/data63045/cat_12_test.zip -d work/
!cp data/data63045/train_list.txt work/

Load the required libraries

In [ ]
import os
import shutil
import numpy as np
import paddle
from paddle.io import Dataset
from paddle.vision.datasets import DatasetFolder, ImageFolder
from paddle.vision.transforms import Compose, Resize, Transpose

Global parameter configuration

In [ ]
'''Parameter configuration'''
train_parameters = {
    "class_dim": 12,  # number of classes
    "target_path": "/home/aistudio/work/",
    'train_image_dir': '/home/aistudio/work/trainImages',
    'eval_image_dir': '/home/aistudio/work/evalImages',
    'test_image_dir': '/home/aistudio/work/cat_12_test',
    'train_list_path': '/home/aistudio/work/train_list.txt',
}

Split the training and validation sets

In [ ]
def create_train_eval():
    '''
    Split the data into training and validation sets.
    '''
    train_dir = train_parameters['train_image_dir']
    eval_dir = train_parameters['eval_image_dir']
    train_list_path = train_parameters['train_list_path']
    target_path = train_parameters['target_path']
    print('creating training and eval images')
    if not os.path.exists(train_dir):
        os.mkdir(train_dir)
    if not os.path.exists(eval_dir):
        os.mkdir(eval_dir)
    with open(train_list_path, 'r') as f:
        data = f.readlines()
        for i in range(len(data)):
            img_path = data[i].split('\t')[0]
            class_label = data[i].split('\t')[1][:-1]
            if i % 8 == 0:  # every 8th image goes to the validation set
                eval_target_dir = os.path.join(eval_dir, str(class_label))
                eval_img_path = os.path.join(target_path, img_path)
                if not os.path.exists(eval_target_dir):
                    os.mkdir(eval_target_dir)
                shutil.copy(eval_img_path, eval_target_dir)
            else:
                train_target_dir = os.path.join(train_dir, str(class_label))
                train_img_path = os.path.join(target_path, img_path)
                if not os.path.exists(train_target_dir):
                    os.mkdir(train_target_dir)
                shutil.copy(train_img_path, train_target_dir)
    print('Finished splitting into training and validation sets!')
In [ ]
create_train_eval()
creating training and eval images
Finished splitting into training and validation sets!
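The function above routes every 8th image (index % 8 == 0) to the validation set. A minimal, framework-free sketch of that modulo rule, using synthetic file names:

```python
def modulo_split(items, eval_every=8):
    """Send every eval_every-th item to the eval set, the rest to train."""
    train, eval_ = [], []
    for i, item in enumerate(items):
        (eval_ if i % eval_every == 0 else train).append(item)
    return train, eval_

# Synthetic file names standing in for the real image paths.
files = ["cat_%04d.jpg" % i for i in range(2160)]
train_files, eval_files = modulo_split(files)
print(len(train_files), len(eval_files))  # 1890 270
```

With the 2,160 training images this yields 1,890 training and 270 validation samples, which matches the "Eval samples: 270" seen later in the training log.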

Define the dataset

In [ ]
class CatDataset(Dataset):
    """
    Step 1: subclass paddle.io.Dataset.
    """
    def __init__(self, mode='train'):
        """
        Step 2: implement the constructor, define how data is read,
        and select the train / eval / test split.
        """
        super(CatDataset, self).__init__()
        train_image_dir = train_parameters['train_image_dir']
        eval_image_dir = train_parameters['eval_image_dir']
        test_image_dir = train_parameters['test_image_dir']
        transform_train = Compose([Resize(size=(112, 112)), Transpose()])
        transform_eval = Compose([Resize(size=(112, 112)), Transpose()])
        train_data_folder = DatasetFolder(train_image_dir, transform=transform_train)
        eval_data_folder = DatasetFolder(eval_image_dir, transform=transform_eval)
        test_data_folder = ImageFolder(test_image_dir, transform=transform_eval)
        self.mode = mode
        if self.mode == 'train':
            self.data = train_data_folder
        elif self.mode == 'eval':
            self.data = eval_data_folder
        elif self.mode == 'test':
            self.data = test_data_folder

    def __getitem__(self, index):
        """
        Step 3: implement __getitem__ to return a single sample
        (image, label) for the given index.
        """
        data = np.array(self.data[index][0]).astype('float32')
        if self.mode == 'test':
            return data
        else:
            label = np.array([self.data[index][1]]).astype('int64')
            return data, label

    def __len__(self):
        """
        Step 4: implement __len__ to return the total dataset size.
        """
        return len(self.data)
In [ ]
train_dataset = CatDataset(mode='train')
val_dataset = CatDataset(mode='eval')
test_dataset = CatDataset(mode='test')
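The Transpose() transform in the pipeline above converts images from HWC to CHW layout, the channel ordering Paddle's conv layers expect. Below is a framework-free sketch of the same __getitem__/__len__ protocol over synthetic numpy data; the class name and sizes are illustrative, not part of the original project:

```python
import numpy as np

class ToyCatDataset:
    """Mimics the __getitem__/__len__ protocol of paddle.io.Dataset
    using random HWC images and integer labels (synthetic data)."""
    def __init__(self, n=16, size=112, num_classes=12):
        self.images = np.random.rand(n, size, size, 3).astype('float32')  # HWC
        self.labels = np.random.randint(0, num_classes, size=n)

    def __getitem__(self, index):
        img = self.images[index].transpose(2, 0, 1)        # HWC -> CHW, like Transpose()
        label = np.array([self.labels[index]]).astype('int64')
        return img, label

    def __len__(self):
        return len(self.images)

ds = ToyCatDataset()
img, label = ds[0]
print(img.shape, label.dtype)  # (3, 112, 112) int64
```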

Initialize the model

In [ ]
# Use a built-in model; several networks are available, mobilenet_v1 is chosen here.
# The fc.weight/fc.bias warnings below are expected: the pretrained 1000-class
# head is replaced by a freshly initialized 12-class one.
model = paddle.vision.models.mobilenet_v1(pretrained=True,
                                          num_classes=train_parameters["class_dim"])
model = paddle.Model(model)
100%|██████████| 25072/25072 [00:00<00:00, 28070.41it/s]
UserWarning: Skip loading for fc.weight. fc.weight receives a shape [1024, 1000], but the expected shape is [1024, 12].
UserWarning: Skip loading for fc.bias. fc.bias receives a shape [1000], but the expected shape is [12].
In [ ]
# Inspect the model structure
model.summary((-1, 3, 224, 224))
(full model.summary layer table elided: mobilenet_v1 stacks ConvBNLayer and
DepthwiseSeparable blocks from 3x224x224 down to 1024x7x7, then an
AdaptiveAvgPool2D and a Linear(1024 -> 12) head with 12,300 parameters)
=================================================================================
Total params: 3,241,164
Trainable params: 3,197,388
Non-trainable params: 43,776
---------------------------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 174.57
Params size (MB): 12.36
Estimated Total Size (MB): 187.51
---------------------------------------------------------------------------------
{'total_params': 3241164, 'trainable_params': 3197388}
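The per-layer parameter counts in the summary follow from MobileNet's building blocks: a bias-free k×k conv holds k·k·C_in·C_out weights (the following BatchNorm supplies the bias), a depthwise conv holds k·k·C weights, and the final Linear layer holds 1024·12 weights plus 12 biases. A quick arithmetic check against the first rows of the table:

```python
def conv_params(k, c_in, c_out):
    """Weights of a bias-free standard conv (BatchNorm supplies the bias)."""
    return k * k * c_in * c_out

def depthwise_params(k, c):
    """Weights of a bias-free depthwise conv (one k x k filter per channel)."""
    return k * k * c

# First conv layers of mobilenet_v1, as listed in the summary table:
print(conv_params(3, 3, 32))    # 864   -> Conv2D-160 (3x3, 3 -> 32)
print(depthwise_params(3, 32))  # 288   -> Conv2D-161 (3x3 depthwise, 32 ch)
print(conv_params(1, 32, 64))   # 2048  -> Conv2D-162 (1x1 pointwise, 32 -> 64)
print(1024 * 12 + 12)           # 12300 -> Linear-3 (weights + biases)
```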

Model configuration: prepare for training by setting the optimizer, loss function, and accuracy metric

In [ ]
# Use Paddle's VisualDL callback to log training info to a directory.
callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')
In [ ]
model.prepare(optimizer=paddle.optimizer.Adam(learning_rate=0.001,
                                              parameters=model.parameters()),
              loss=paddle.nn.CrossEntropyLoss(),
              metrics=paddle.metric.Accuracy())
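paddle.metric.Accuracy reports top-1 accuracy: the fraction of samples whose highest-scoring class equals the label. A minimal, framework-free re-implementation of that computation, with made-up logits:

```python
def top1_accuracy(logits, labels):
    """Fraction of rows whose argmax equals the label."""
    correct = sum(1 for row, y in zip(logits, labels)
                  if max(range(len(row)), key=row.__getitem__) == y)
    return correct / len(labels)

logits = [[0.1, 0.7, 0.2],
          [0.9, 0.05, 0.05],
          [0.2, 0.3, 0.5]]
labels = [1, 0, 0]
print(top1_accuracy(logits, labels))  # 0.666... (2 of 3 correct)
```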

Model training

In [ ]
model.fit(train_dataset,
          val_dataset,
          epochs=10,
          batch_size=32,
          callbacks=callback,
          verbose=1)
UserWarning: When training, we now always track global mean and variance.

Epoch  1/10  train: loss 2.2463, acc 0.6090 | eval: loss 0.4193, acc 0.6704
Epoch  2/10  train: loss 1.6789, acc 0.8571 | eval: loss 1.3652, acc 0.7630
Epoch  3/10  train: loss 2.7546, acc 0.9069 | eval: loss 0.5788, acc 0.7704
Epoch  4/10  train: loss 2.8266, acc 0.9212 | eval: loss 1.0253, acc 0.7630
Epoch  5/10  train: loss 2.9948, acc 0.9111 | eval: loss 1.0942, acc 0.6963
Epoch  6/10  train: loss 0.8608, acc 0.9101 | eval: loss 1.5086, acc 0.7444
Epoch  7/10  train: loss 2.5132, acc 0.9392 | eval: loss 1.0570, acc 0.7593
Epoch  8/10  train: loss 0.8880, acc 0.9508 | eval: loss 1.3956, acc 0.7111
Epoch  9/10  train: loss 0.0301, acc 0.9370 | eval: loss 1.6584, acc 0.7333
Epoch 10/10  train: loss 8.0418, acc 0.9757 | eval: loss 0.2750, acc 0.7630
(60 train steps and 9 eval steps per epoch; 270 eval samples; ~295 ms/step train, ~280 ms/step eval)

Visualization results

(VisualDL training-curve screenshot elided)

Run model evaluation: specify the dataset and the logging verbosity

In [ ]
model.evaluate(val_dataset, verbose=1)
Eval begin...
step 270/270 [==============================] - loss: 1.6444 - acc: 0.7630 - 16ms/step
Eval samples: 270
{'loss': [1.6443943], 'acc': 0.762962962962963}

Run model prediction on the specified test set

In [ ]
results = model.predict(test_dataset)
Predict begin...
step 240/240 [==============================] - 14ms/step
Predict samples: 240

Write the predictions to a CSV file

In [ ]
labels = []
for result in results[0]:
    lab = np.argmax(result)
    labels.append(lab)
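model.predict returns a nested list: results[0] holds one output array per batch, and np.argmax over each array picks the predicted class. The sketch below mimics that layout with synthetic logits; the (1, 12) shape assumes one image per batch, as the step count in this notebook's predict log suggests:

```python
import numpy as np

# One (1, 12)-shaped logits array per test image, standing in for the
# arrays model.predict would return (synthetic values here).
fake_results = [np.random.rand(1, 12).astype('float32') for _ in range(5)]

# np.argmax flattens (1, 12), returning the winning class id per image.
pred_ids = [int(np.argmax(r)) for r in fake_results]
print(pred_ids)  # five class ids in [0, 12)
```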
In [ ]
# Sort so the file order is deterministic and stays aligned with the
# order in which the test dataset read the images.
test_paths = sorted(os.listdir('work/cat_12_test'))
In [ ]
final_result = []
for i in range(len(labels)):
    final_result.append(test_paths[i] + ',' + str(labels[i]) + '\n')
In [ ]
with open("work/result.csv", "w") as f:
    f.writelines(final_result)
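The same write-out can also be done with the stdlib csv module; a self-contained sketch with synthetic file names and labels (io.StringIO stands in for the real output file):

```python
import csv
import io

# Synthetic stand-ins for test_paths and labels.
sample_paths = ["cat_001.jpg", "cat_002.jpg", "cat_003.jpg"]
sample_labels = [7, 0, 11]

buf = io.StringIO()  # stands in for open("work/result.csv", "w", newline="")
writer = csv.writer(buf)
writer.writerows(zip(sample_paths, sample_labels))

print(buf.getvalue())
# cat_001.jpg,7
# cat_002.jpg,0
# cat_003.jpg,11
```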

Save the model

In [ ]
model.save('work/model')  # save for training
Source: https://www.php.cn/faq/1419595.html
