
[Financial Risk Control Series] [2] Fraud Detection

Repost · 2025-07-22
This article works through the IEEE-CIS Fraud Detection competition, whose goal is to identify fraudulent transactions. It introduces the training and test data, which comprise transaction and identity fields; explains the key strategies, such as building a unique client identifier and aggregation features; and covers feature selection, encoding, the validation strategy, and model training. The final public leaderboard score is 0.959221. The aim is to learn feature construction.


IEEE-CIS Fraud Detection

This competition comes from Kaggle and is used here for learning and exchange only.

The main goal of the competition is to identify whether each transaction is fraudulent.


The training set contains about 590k samples (3.5% fraud); the test set contains about 500k samples.

The data falls into two classes: transaction data and identity data.

This article is mainly a collection and summary of the reference materials listed below.


Field Tables

Transaction Table


Categorical features:

- ProductCD
- card1 - card6
- addr1, addr2
- P_emaildomain
- R_emaildomain
- M1 - M9

Identity Table

The variables in this table are identity information: network connection information (IP, ISP, proxy, etc.) and digital signatures (UA/browser/OS/version, etc.) associated with the transactions.

They are collected by Vesta's fraud protection system and its digital security partners.

(The field names are masked and no pairwise data dictionary is provided, for privacy protection and contractual reasons.)


Categorical features:

- DeviceType
- DeviceInfo
- id_12 - id_38

References:

[1] https://zhuanlan.zhihu.com/p/85947569

[2] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111284

[3] https://www.kaggle.com/c/ieee-fraud-detection/discussion/111308

[4] https://www.kaggle.com/c/ieee-fraud-detection/discussion/101203

Main Strategies

- Build a unique client identifier (UID); this is extremely important
- Use the UID to construct aggregation features
- Encode categorical features (mainly frequency encoding and label encoding)
- Horizontal direction: model ensembling; vertical direction: per-client post-processing

Definition of Fraudulent Behavior

The labeling logic defines transactions with a chargeback reported on the card as fraudulent (isFraud=1); later transactions whose user account, e-mail address, or billing address is directly linked to those attributes are also labeled as fraud. If none of the above occurs within 120 days, the transaction is defined as legitimate (isFraud=0).

You might think that after 120 days a card would go back to isFraud=0. We rarely see this in the training data (perhaps fraudulent credit cards are terminated). The training set has 73,838 clients (credit cards) with 2 or more transactions. Of these, 71,575 (96.9%) are always isFraud=0 and 2,134 (2.9%) are always isFraud=1. Only 129 (0.2%) have a mixture of isFraud=0 and isFraud=1.

From this we can infer the business logic of fraud: once a user has committed fraud, the probability that their next transaction is also fraudulent is very high, and we should exploit this.
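The "vertical" per-client post-processing from the strategy list builds directly on this observation. A minimal sketch, assuming a frame that already carries the uid column constructed in the next section and a preds column of model probabilities (both column names are illustrative, not from the original solution):

import pandas as pd

def postprocess_by_uid(df):
    # Replace each transaction's score with the mean score of its client (uid)
    # group: if a client's other transactions look fraudulent, this one is
    # pulled upward as well, matching the labeling logic described above.
    df = df.copy()
    df['preds'] = df.groupby('uid')['preds'].transform('mean')
    return df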


Unique Client Identifier

The raw data does not contain a unique UID, so we must identify clients ourselves. The key columns for identifying a client are card1, addr1, and D1.

The D1 column is "days since the client (credit card) began".

The card1 column is "the first digits of the bank card".

The addr1 column is "the user's address code".

Having determined the unique client identifier, we cannot simply add it to the model as a feature: analysis shows that 68.2% of the clients in the test set are new and do not appear in the training set. Instead we use the `UID` indirectly, constructing aggregation features from it.
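The construction itself is a single line; the snippet below mirrors the logic in the reproduction code further down. Because D1 counts days since the card began, day - D1 is constant per card (its opening day), so combined with card1_addr1 it approximates one client:

# 'day' is the transaction time expressed in days
X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
# day - D1 recovers the card's opening day; card1_addr1 plus this value
# serves as a proxy for a unique client
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)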

Feature Selection

- Forward feature selection (with a single feature or a group of features)
- Recursive feature elimination (with a single feature or a group of features)
- Permutation importance
- Adversarial validation
- Correlation analysis
- Time consistency
- Client consistency
- Train/test distribution analysis

An interesting trick called "time consistency" is to train a single model on the first month of the training data using a single feature (or a small group of features) and predict isFraud on the last month of the training data. This evaluates whether the feature itself is consistent over time. 95% of them are, but we found that 5% of the columns hurt our model: their training AUC was around 0.60 while their validation AUC was 0.40.
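A minimal sketch of this test, assuming the month index DT_M built in the reproduction code below (the function name and hyperparameters here are illustrative, not from the original):

def time_consistency_auc(feature, train=X_train, target=y_train):
    # Train on the first month with one feature, validate on the last month.
    first = train['DT_M'] == train['DT_M'].min()
    last = train['DT_M'] == train['DT_M'].max()
    clf = xgb.XGBClassifier(n_estimators=500, max_depth=4, eval_metric='auc')
    clf.fit(train.loc[first, [feature]], target[first])
    auc_tr = roc_auc_score(target[first], clf.predict_proba(train.loc[first, [feature]])[:, 1])
    auc_val = roc_auc_score(target[last], clf.predict_proba(train.loc[last, [feature]])[:, 1])
    # A feature whose validation AUC collapses toward or below 0.5 is not
    # stable over time and is a candidate for removal.
    return auc_tr, auc_val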


Validation Strategy

- Train two months / skip two months / predict two months
- Train four months / skip one month / predict one month
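A sketch of the second scheme, assuming the DT_M month index built in the reproduction code below (the month offsets are illustrative):

months = sorted(X_train['DT_M'].unique())
train_mask = X_train['DT_M'].isin(months[:4])   # train on the first four months
valid_mask = X_train['DT_M'] == months[5]       # skip months[4], validate on months[5]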

Feature Encoding

The following five feature-encoding methods are the main ones used.

Frequency encoding: replace each value with how often it occurs (here, its relative frequency over train and test combined).

def encode_FE(df1, df2, cols):
    # Frequency encode: map each value to its relative frequency across
    # train and test combined; missing keys are later filled with -1.
    for col in cols:
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)
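A toy example of what encode_FE produces (made-up values, not competition data):

df1 = pd.DataFrame({'card1': ['a', 'a', 'b']})
df2 = pd.DataFrame({'card1': ['a', 'c']})
encode_FE(df1, df2, ['card1'])
# df1['card1FE'] -> [0.6, 0.6, 0.2]: 'a' occurs 3 of 5 times across both frames
# df2['card1FE'] -> [0.6, 0.2]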

Label encoding: map the original values to a set of ordinal integers. This resembles one-hot encoding, except that pd.factorize maps values to integer codes 0, 1, 2, ..., whereas pd.get_dummies() maps them to one-hot vectors [1,0,0], [0,1,0], [0,0,1].

def encode_LE(col, train=X_train, test=X_test, verbose=True):
    # Label encode train and test together so the codes are consistent.
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    if df_comb.max() > 32000:
        # too many categories to store safely in float16
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)
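A small demo of the difference on toy data:

s = pd.Series(['visa', 'master', 'visa'])
codes, uniques = pd.factorize(s)
# codes  -> array([0, 1, 0]): one compact integer column, friendly to tree models
onehot = pd.get_dummies(s)
# onehot -> two 0/1 columns, 'master' and 'visa', one per category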

Statistical features: mainly use pd.groupby to group by a variable, then agg to compute statistics of each group.

def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # compute the mean/std of main_column within each group (e.g. each uid)
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # set the group key as the index
                temp_df.index = list(temp_df[col])
                temp_df = temp_df[new_column].to_dict()
                # temp_df is now a mapping dict
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)
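For example, the call below (mirroring the usage in the full pipeline) adds TransactionAmt_card1_mean and TransactionAmt_card1_std, the mean and standard deviation of the transaction amount within each card1 group, to both train and test:

encode_AG(['TransactionAmt'], ['card1'], ['mean', 'std'])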

Cross features: recombine two columns into a new feature, then label-encode it.

def encode_CB(col1, col2, df1=X_train, df2=X_test):
    # Concatenate two columns as strings, then label-encode the result.
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')

Unique-value features: after grouping, return the number of unique values of the target attribute within each group.

def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    # For each group (e.g. each uid), count the distinct values of main_column.
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')
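For example, the call below (as used later with the uid column) adds uid_P_emaildomain_ct, the number of distinct e-mail domains seen per client; intuitively, a legitimate client uses only a few domains:

encode_AG2(['P_emaildomain'], ['uid'])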

Reproduction Code

Because the dataset file name contains spaces, first manually rename the dataset under /data104475 to IEEE_CIS_Fraud_Detection.zip.

In [2]
# Unzip the dataset; run only on the first run
!unzip -q -o data/data104475/IEEE_CIS_Fraud_Detection.zip -d /home/aistudio/data
unzip:  cannot find or open data/data104475/IEEE_CIS_Fraud_Detection.zip, data/data104475/IEEE_CIS_Fraud_Detection.zip.zip or data/data104475/IEEE_CIS_Fraud_Detection.zip.ZIP.
In [3]
# Install dependencies
!pip install xgboost
In [6]
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import os, gc
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score
import xgboost as xgb
import datetime
In [4]
path_train_transaction = "./data/raw_data/train_transaction.csv"
path_train_id = "./data/raw_data/train_identity.csv"
path_test_transaction = "./data/raw_data/test_transaction.csv"
path_test_id = "./data/raw_data/test_identity.csv"
path_sample_submission = './data/raw_data/sample_submission.csv'
path_submission = 'sub_xgb_95.csv'
In [7]
BUILD95 = False
BUILD96 = True
# cols with strings
str_type = ['ProductCD', 'card4', 'card6', 'P_emaildomain', 'R_emaildomain', 'M1', 'M2', 'M3', 'M4', 'M5',
            'M6', 'M7', 'M8', 'M9', 'id_12', 'id_15', 'id_16', 'id_23', 'id_27', 'id_28', 'id_29', 'id_30',
            'id_31', 'id_33', 'id_34', 'id_35', 'id_36', 'id_37', 'id_38', 'DeviceType', 'DeviceInfo']
# first 53 columns
cols = ['TransactionID', 'TransactionDT', 'TransactionAmt',
        'ProductCD', 'card1', 'card2', 'card3', 'card4', 'card5', 'card6',
        'addr1', 'addr2', 'dist1', 'dist2', 'P_emaildomain', 'R_emaildomain',
        'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11',
        'C12', 'C13', 'C14', 'D1', 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8',
        'D9', 'D10', 'D11', 'D12', 'D13', 'D14', 'D15', 'M1', 'M2', 'M3', 'M4',
        'M5', 'M6', 'M7', 'M8', 'M9']
# V COLUMNS TO LOAD DECIDED BY CORRELATION EDA
# https://www.kaggle.com/cdeotte/eda-for-columns-v-and-id
v = [1, 3, 4, 6, 8, 11]
v += [13, 14, 17, 20, 23, 26, 27, 30]
v += [36, 37, 40, 41, 44, 47, 48]
v += [54, 56, 59, 62, 65, 67, 68, 70]
v += [76, 78, 80, 82, 86, 88, 89, 91]
# v += [96, 98, 99, 104]  # relates to groups, no NAN
v += [107, 108, 111, 115, 117, 120, 121, 123]  # maybe group, no NAN
v += [124, 127, 129, 130, 136]  # relates to groups, no NAN
# LOTS OF NAN BELOW
v += [138, 139, 142, 147, 156, 162]  # b1
v += [165, 160, 166]  # b1
v += [178, 176, 173, 182]  # b2
v += [187, 203, 205, 207, 215]  # b2
v += [169, 171, 175, 180, 185, 188, 198, 210, 209]  # b2
v += [218, 223, 224, 226, 228, 229, 235]  # b3
v += [240, 258, 257, 253, 252, 260, 261]  # b3
v += [264, 266, 267, 274, 277]  # b3
v += [220, 221, 234, 238, 250, 271]  # b3
v += [294, 284, 285, 286, 291, 297]  # relates to groups, no NAN
v += [303, 305, 307, 309, 310, 320]  # relates to groups, no NAN
v += [281, 283, 289, 296, 301, 314]  # relates to groups, no NAN
# v += [332, 325, 335, 338]  # b4 lots NAN
cols += ['V' + str(x) for x in v]
dtypes = {}
for c in cols + ['id_0' + str(x) for x in range(1, 10)] + ['id_' + str(x) for x in range(10, 34)]:
    dtypes[c] = 'float32'
for c in str_type:
    dtypes[c] = 'category'
# load data and merge
print("load data...")
X_train = pd.read_csv(path_train_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols + ["isFraud"])
train_id = pd.read_csv(path_train_id, index_col="TransactionID", dtype=dtypes)
X_train = X_train.merge(train_id, how="left", left_index=True, right_index=True)
X_test = pd.read_csv(path_test_transaction, index_col="TransactionID", dtype=dtypes, usecols=cols)
test_id = pd.read_csv(path_test_id, index_col="TransactionID", dtype=dtypes)
X_test = X_test.merge(test_id, how="left", left_index=True, right_index=True)
# target
y_train = X_train["isFraud"]
del train_id, test_id, X_train["isFraud"]
print("X_train shape:{}, X_test shape:{}".format(X_train.shape, X_test.shape))
load data...
X_train shape:(590540, 213), X_test shape:(506691, 213)
In [21]
# transform D feature "time delta" as "time point"
for i in range(1, 16):
    if i in [1, 2, 3, 5, 9]:
        continue
    X_train["D" + str(i)] = X_train["D" + str(i)] - X_train["TransactionDT"] / np.float32(60 * 60 * 24)
    X_test["D" + str(i)] = X_test["D" + str(i)] - X_test["TransactionDT"] / np.float32(60 * 60 * 24)

# encoding functions
# frequency encode
def encode_FE(df1, df2, cols):
    for col in cols:
        df = pd.concat([df1[col], df2[col]])
        vc = df.value_counts(dropna=True, normalize=True).to_dict()
        vc[-1] = -1
        nm = col + "FE"
        df1[nm] = df1[col].map(vc)
        df1[nm] = df1[nm].astype("float32")
        df2[nm] = df2[col].map(vc)
        df2[nm] = df2[nm].astype("float32")
        print(col)

# label encode
def encode_LE(col, train=X_train, test=X_test, verbose=True):
    df_comb = pd.concat([train[col], test[col]], axis=0)
    df_comb, _ = pd.factorize(df_comb)
    nm = col
    if df_comb.max() > 32000:
        train[nm] = df_comb[0: len(train)].astype("float32")
        test[nm] = df_comb[len(train):].astype("float32")
    else:
        train[nm] = df_comb[0: len(train)].astype("float16")
        test[nm] = df_comb[len(train):].astype("float16")
    del df_comb
    gc.collect()
    if verbose:
        print(col)

def encode_AG(main_columns, uids, aggregations=["mean"], df_train=X_train, df_test=X_test, fillna=True, usena=False):
    for main_column in main_columns:
        for col in uids:
            for agg_type in aggregations:
                new_column = main_column + "_" + col + "_" + agg_type
                temp_df = pd.concat([df_train[[col, main_column]], df_test[[col, main_column]]])
                if usena:
                    temp_df.loc[temp_df[main_column] == -1, main_column] = np.nan
                # compute the mean/std of main_column within each group (e.g. each uid)
                temp_df = temp_df.groupby([col])[main_column].agg([agg_type]).reset_index().rename(
                    columns={agg_type: new_column})
                # set the group key as the index
                temp_df.index = list(temp_df[col])
                temp_df = temp_df[new_column].to_dict()
                # temp_df is now a mapping dict
                df_train[new_column] = df_train[col].map(temp_df).astype("float32")
                df_test[new_column] = df_test[col].map(temp_df).astype("float32")
                if fillna:
                    df_train[new_column].fillna(-1, inplace=True)
                    df_test[new_column].fillna(-1, inplace=True)
                print(new_column)

# COMBINE FEATURES (cross features)
def encode_CB(col1, col2, df1=X_train, df2=X_test):
    nm = col1 + '_' + col2
    df1[nm] = df1[col1].astype(str) + '_' + df1[col2].astype(str)
    df2[nm] = df2[col1].astype(str) + '_' + df2[col2].astype(str)
    encode_LE(nm, verbose=False)
    print(nm, ', ', end='')

# GROUP AGGREGATION NUNIQUE
def encode_AG2(main_columns, uids, train_df=X_train, test_df=X_test):
    for main_column in main_columns:
        for col in uids:
            comb = pd.concat([train_df[[col] + [main_column]], test_df[[col] + [main_column]]], axis=0)
            mp = comb.groupby(col)[main_column].agg(['nunique'])['nunique'].to_dict()
            train_df[col + '_' + main_column + '_ct'] = train_df[col].map(mp).astype('float32')
            test_df[col + '_' + main_column + '_ct'] = test_df[col].map(mp).astype('float32')
            print(col + '_' + main_column + '_ct, ', end='')

print("encode cols...")
# TRANSACTION AMT CENTS
X_train['cents'] = (X_train['TransactionAmt'] - np.floor(X_train['TransactionAmt'])).astype('float32')
X_test['cents'] = (X_test['TransactionAmt'] - np.floor(X_test['TransactionAmt'])).astype('float32')
print('cents, ', end='')
encode cols...
cents,
In [19]
# FREQUENCY ENCODE: ADDR1, CARD1, CARD2, CARD3, P_EMAILDOMAIN
encode_FE(X_train, X_test, ['addr1', 'card1', 'card2', 'card3', 'P_emaildomain'])
# COMBINE COLUMNS CARD1+ADDR1, CARD1+ADDR1+P_EMAILDOMAIN
encode_CB('card1', 'addr1')
encode_CB('card1_addr1', 'P_emaildomain')
# FREQUENCY ENCODE
encode_FE(X_train, X_test, ['card1_addr1', 'card1_addr1_P_emaildomain'])
# GROUP AGGREGATE
encode_AG(['TransactionAmt', 'D9', 'D11'], ['card1', 'card1_addr1', 'card1_addr1_P_emaildomain'], ['mean', 'std'],
          usena=False)
for col in str_type:
    encode_LE(col, X_train, X_test)

"""
Feature Selection - Time Consistency
We added 28 new features above. We have already removed 219 V columns from the correlation
analysis done here. So we currently have 242 features. We will now check each of our 242
features for "time consistency". We will build 242 models. Each model will be trained on the
first month of the training data and will only use one feature. We will then predict the last
month of the training data. We want both training AUC and validation AUC to be above AUC = 0.5.
It turns out that 19 features fail this test, so we will remove them. Additionally we will
remove 7 D columns that are mostly NAN. More techniques for feature selection are listed here.
"""
cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    cols.remove(c)
# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')

# CHRIS - TRAIN 75% PREDICT 25%
idxT = X_train.index[:3 * len(X_train) // 4]
idxV = X_train.index[3 * len(X_train) // 4:]

print(X_train.info())
# X_train = X_train.convert_objects(convert_numeric=True)
# X_test = X_test.convert_objects(convert_numeric=True)
for col in str_type:
    print(col)
    X_train[col] = X_train[col].astype(int)
    X_test[col] = X_test[col].astype(int)
print("after transform:")
print(X_train.info())
# fillna
for col in cols:
    X_train[col].fillna(-1, inplace=True)
    X_test[col].fillna(-1, inplace=True)
In [22]
START_DATE = datetime.datetime.strptime('2017-11-30', '%Y-%m-%d')
X_train['DT_M'] = X_train['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_train['DT_M'] = (X_train['DT_M'].dt.year - 2017) * 12 + X_train['DT_M'].dt.month
X_test['DT_M'] = X_test['TransactionDT'].apply(lambda x: (START_DATE + datetime.timedelta(seconds=x)))
X_test['DT_M'] = (X_test['DT_M'].dt.year - 2017) * 12 + X_test['DT_M'].dt.month

print("training...")
if BUILD95:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))
    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)
        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB95 OOF CV=', roc_auc_score(y_train, oof))

if BUILD95:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)

# build the UID: card1_addr1 plus the card's opening day (day - D1)
X_train['day'] = X_train.TransactionDT / (24 * 60 * 60)
X_train['uid'] = X_train.card1_addr1.astype(str) + '_' + np.floor(X_train.day - X_train.D1).astype(str)
X_test['day'] = X_test.TransactionDT / (24 * 60 * 60)
X_test['uid'] = X_test.card1_addr1.astype(str) + '_' + np.floor(X_test.day - X_test.D1).astype(str)

# FREQUENCY ENCODE UID
encode_FE(X_train, X_test, ['uid'])
# AGGREGATE
encode_AG(['TransactionAmt', 'D4', 'D9', 'D10', 'D15'], ['uid'], ['mean', 'std'], fillna=True, usena=True)
# AGGREGATE
encode_AG(['C' + str(x) for x in range(1, 15) if x != 3], ['uid'], ['mean'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG(['M' + str(x) for x in range(1, 10)], ['uid'], ['mean'], fillna=True, usena=True)
# AGGREGATE
encode_AG2(['P_emaildomain', 'dist1', 'DT_M', 'id_02', 'cents'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG(['C14'], ['uid'], ['std'], X_train, X_test, fillna=True, usena=True)
# AGGREGATE
encode_AG2(['C13', 'V314'], ['uid'], train_df=X_train, test_df=X_test)
# AGGREGATE
encode_AG2(['V127', 'V136', 'V309', 'V307', 'V320'], ['uid'], train_df=X_train, test_df=X_test)
# NEW FEATURE
X_train['outsider15'] = (np.abs(X_train.D1 - X_train.D15) > 3).astype('int8')
X_test['outsider15'] = (np.abs(X_test.D1 - X_test.D15) > 3).astype('int8')
print('outsider15')

cols = list(X_train.columns)
cols.remove('TransactionDT')
for c in ['D6', 'D7', 'D8', 'D9', 'D12', 'D13', 'D14']:
    if c in cols:
        cols.remove(c)
for c in ['oof', 'DT_M', 'day', 'uid']:
    if c in cols:
        cols.remove(c)
# FAILED TIME CONSISTENCY TEST
for c in ['C3', 'M5', 'id_08', 'id_33']:
    if c in cols:
        cols.remove(c)
for c in ['card4', 'id_07', 'id_14', 'id_21', 'id_30', 'id_32', 'id_34']:
    if c in cols:
        cols.remove(c)
for c in ['id_' + str(x) for x in range(22, 28)]:
    if c in cols:
        cols.remove(c)
print('NOW USING THE FOLLOWING', len(cols), 'FEATURES.')
print(np.array(cols))

if BUILD96:
    oof = np.zeros(len(X_train))
    preds = np.zeros(len(X_test))
    skf = GroupKFold(n_splits=6)
    for i, (idxT, idxV) in enumerate(skf.split(X_train, y_train, groups=X_train['DT_M'])):
        month = X_train.iloc[idxV]['DT_M'].iloc[0]
        print('Fold', i, 'withholding month', month)
        print(' rows of train =', len(idxT), 'rows of holdout =', len(idxV))
        clf = xgb.XGBClassifier(
            n_estimators=5000,
            max_depth=12,
            learning_rate=0.02,
            subsample=0.8,
            colsample_bytree=0.4,
            missing=-1,
            eval_metric='auc',
            # USE CPU
            # nthread=4,
            # tree_method='hist'
            # USE GPU
            tree_method='gpu_hist'
        )
        h = clf.fit(X_train[cols].iloc[idxT], y_train.iloc[idxT],
                    eval_set=[(X_train[cols].iloc[idxV], y_train.iloc[idxV])],
                    verbose=100, early_stopping_rounds=200)
        oof[idxV] += clf.predict_proba(X_train[cols].iloc[idxV])[:, 1]
        preds += clf.predict_proba(X_test[cols])[:, 1] / skf.n_splits
        del h, clf
        x = gc.collect()
    print('#' * 20)
    print('XGB96 OOF CV=', roc_auc_score(y_train, oof))

if BUILD96:
    sample_submission = pd.read_csv(path_sample_submission)
    sample_submission.isFraud = preds
    sample_submission.to_csv(path_submission, index=False)

Summary

This project mainly collects and summarizes materials related to the IEEE-CIS Fraud Detection competition, with the goal of learning how the features are constructed.

The submission result is as follows (final public score: 0.959221; submitting requires a VPN to reach Kaggle).

Source: https://www.php.cn/faq/1421593.html
