Building a Keras LSTM model with time_step

Published July 20, 2023

When I previously built LSTM models directly in TensorFlow, I was endlessly fighting tensor shapes. I recently tried Keras and found it much easier to use, but the many tutorials I found online all set time_step to 1, which has almost no practical value for real time-series prediction. After two days of tinkering I wrote the test code below as a memo; I hope it also helps others, and I welcome discussion.

Test data

The data is Beijing air-pollution data from 2010 to 2014; it is available from many places online, so just search and download it. It contains the following fields: 'No', 'year', 'month', 'day', 'hour', 'pm2.5', 'DEWP', 'TEMP', 'PRES', 'cbwd', 'Iws', 'Is', 'Ir'. The field to predict is pm2.5, and I set time_step = 72: the model uses the past 72 hours of measurements and pollution readings to predict the pollution level of the next hour.

Field glosses: No = row number; year/month/day/hour = timestamp; pm2.5 = PM2.5 concentration; DEWP = dew point; TEMP = temperature; PRES = air pressure; cbwd = wind direction; Iws = wind speed; Is = cumulative snowfall; Ir = cumulative rainfall.

Data preprocessing

1. Handle the date fields: merge year, month, day, and hour into a single field.
2. Add a new column y holding the raw target values (processed further below).
3. Drop rows containing NaN.
4. Convert string columns in the raw data to numeric values, then normalize all columns. The scaler fitted on y must be saved so that the final normalized predictions can be converted back to real values.

Date handling

```python
# Format the date: zero-pad the components and build a timestamp string
def parse_date(year, month, day, hour):
    if len(str(month)) == 1:
        month = '0' + str(month)
    if len(str(day)) == 1:
        day = '0' + str(day)
    if len(str(hour)) == 1:  # the hour must be padded too, or '%Y%m%d%H' parsing is ambiguous
        hour = '0' + str(hour)
    return datetime.strptime(str(year) + str(month) + str(day) + str(hour),
                             '%Y%m%d%H').strftime('%Y-%m-%d %H:00:00')  # formatted date
```
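Since the saved y scaler is what makes the final predictions interpretable, here is a minimal numpy sketch of the fit-on-train / transform-on-test idea. The toy arrays and helper names are mine, not the post's code:

```python
import numpy as np

def minmax_fit(x):
    # Learn per-column min/max from the training data only
    return x.min(axis=0), x.max(axis=0)

def minmax_transform(x, lo, hi):
    # Scale into [0, 1] using the training statistics
    return (x - lo) / (hi - lo)

def minmax_inverse(x, lo, hi):
    # Map normalized values back to the real scale
    return x * (hi - lo) + lo

train = np.array([[0.0], [50.0], [100.0]])   # toy pm2.5 training column
test = np.array([[25.0], [75.0]])
lo, hi = minmax_fit(train)                   # fit on train only
train_n = minmax_transform(train, lo, hi)
test_n = minmax_transform(test, lo, hi)      # transform test with train stats
restored = minmax_inverse(test_n, lo, hi)    # saved stats recover real values
```

Keeping `lo` and `hi` around plays the same role as saving the y scaler: without them, the model's normalized outputs cannot be converted back to concentrations.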

Data conversion (partial code)

```python
# Turn the raw data into model-ready train/test sets
def make_train_test_data(path, ts, train_end, datasetsCount=None):
    air_data = pd.read_csv(path, header='infer')
    air_data['date'] = air_data.apply(
        lambda r: parse_date(r['year'], r['month'], r['day'], r['hour']),
        axis=1)  # handle the date
    # Encode the wind direction cbwd
    encoder = LabelEncoder()
    air_data['cbwd'] = encoder.fit_transform(air_data['cbwd'])
    # Convert column dtypes
    feature_cols = ['pm2.5', 'DEWP', 'TEMP', 'PRES', 'cbwd', 'Iws', 'Is', 'Ir']
    air_data[feature_cols] = air_data[feature_cols].astype('float64')
    air_data = air_data.dropna().reset_index(drop=True)
    # Use the past N = ts hours of data to predict the next hour's pollution
    air_data['y'] = air_data['pm2.5']
    # Optionally shrink the dataset
    if datasetsCount:
        air_data = air_data[:datasetsCount]
    # Split into training and test sets
    train_all = air_data[:train_end]
    test_all = air_data[train_end:]
    # Normalize the training data (fit the scalers here)
    train_x = scalar_x.fit_transform(train_all[feature_cols])
    train_y = scalar_y.fit_transform(train_all[['y']])
    # Normalize the test data (transform only)
    test_x = scalar_x.transform(test_all[feature_cols])
    test_y = scalar_y.transform(test_all[['y']])
```

At this point the data is normalized by fitting on the training set and then only transforming the test set, which helps model quality.

Building the training and test tensors

The flat 2-D data must be converted into the 3-D shape the LSTM expects, shape = (batch_size, time_step, feature_dim). The sliding-window approach:

1. Call the features X and the targets Y, with row index i.
2. For each window, concatenate time_step consecutive rows of X, giving a 2-D array X' of shape (None, feature_dim * time_step).
3. The label for the window starting at row i is the Y value at index i + time_step.
4. Reshape X' to (sample_size, time_step, feature_dim); the result can be fed directly to the model.

```python
    # ############# Build the windowed train/test sets ################
    ts_train_x = np.array([])
    ts_train_y = np.array([])
    ts_test_x = np.array([])
    ts_test_y = np.array([])
    # Training windows
    print('raw training shape:', train_x.shape)
    for i in range(train_x.shape[0]):
        if i + ts == train_x.shape[0]:
            break
        ts_train_x = np.append(ts_train_x, train_x[i: i + ts, :])
        ts_train_y = np.append(ts_train_y, train_y[i + ts])
    # Test windows
    print('raw test shape:', test_x.shape)
    for i in range(test_x.shape[0]):
        if i + ts == test_x.shape[0]:
            break
        ts_test_x = np.append(ts_test_x, test_x[i: i + ts, :])
        ts_test_y = np.append(ts_test_y, test_y[i + ts])
    return (ts_train_x.reshape((train_x.shape[0] - ts, ts, train_x.shape[1])),
            ts_train_y,
            ts_test_x.reshape((test_x.shape[0] - ts, ts, test_x.shape[1])),
            ts_test_y,
            scalar_y)
```
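As a side note, the sliding-window construction can also be done without repeated np.append calls (which copy the whole array each time) by stacking slices. A minimal sketch with toy shapes of my own choosing, not the post's code:

```python
import numpy as np

def make_windows(x, y, ts):
    # Stack ts-row slices of x into shape (n - ts, ts, feature_dim),
    # pairing each window with the y value immediately after it
    n = x.shape[0]
    wx = np.stack([x[i: i + ts] for i in range(n - ts)])
    wy = y[ts:]
    return wx, wy

x = np.arange(20.0).reshape(10, 2)   # toy data: 10 hours, 2 features
y = np.arange(10.0)                  # toy targets
wx, wy = make_windows(x, y, ts=3)
# wx[0] holds rows 0..2 of x; wy[0] is the target at hour 3
```

The output shape matches what the loop-and-reshape version produces, but each window is built by slicing rather than flattening and re-growing a 1-D buffer.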
Data preview

(figure: sample of the processed dataset)

Model construction

Building a sequential model with Keras is fairly simple. Initialize a Sequential model, then add the LSTM layers you need. Since this model does regression, finish with a Dense(1) layer and the whole model is complete. If overfitting is a concern, Dropout layers can be inserted between layers. My one open question: every LSTM layer before the last one must have return_sequences=True (the default is False), which I still need to understand properly.

```python
# Build the model
def build_model(ts, fea_dim):
    model = Sequential()
    model.add(LSTM(64, input_shape=(ts, fea_dim), activation='sigmoid',
                   return_sequences=True, dropout=0.01))
    model.add(LSTM(128, activation='sigmoid', return_sequences=True, dropout=0.01))
    model.add(Dropout(rate=0.01))
    model.add(LSTM(128, activation='sigmoid', dropout=0.01))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer=Adam(lr=0.002, decay=0.01))
    return model
```
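The return_sequences question can be made concrete with shapes alone. This is a toy recurrent loop of my own (not a real LSTM cell, and not Keras code): return_sequences=True emits one hidden vector per time step, so the next recurrent layer has a sequence to consume, while return_sequences=False emits only the final hidden state, which only suits the last LSTM before Dense.

```python
import numpy as np

def toy_rnn(x, units, return_sequences):
    # Minimal recurrent pass: one hidden vector per time step
    batch, time_step, fea_dim = x.shape
    rng = np.random.default_rng(0)
    w = rng.normal(size=(fea_dim + units, units))  # toy weights
    h = np.zeros((batch, units))
    outputs = []
    for t in range(time_step):
        h = np.tanh(np.concatenate([x[:, t, :], h], axis=1) @ w)
        outputs.append(h)
    seq = np.stack(outputs, axis=1)                # (batch, time_step, units)
    return seq if return_sequences else seq[:, -1, :]

x = np.zeros((4, 72, 8))                     # (batch, time_step, feature_dim)
a = toy_rnn(x, 64, return_sequences=True)    # (4, 72, 64): a sequence, stackable
b = toy_rnn(x, 64, return_sequences=False)   # (4, 64): final state only
```

A layer expecting 3-D sequence input cannot consume the 2-D output `b`, which is why stacking LSTMs without return_sequences=True raises a shape error.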

Prediction results

On the normalized scale the prediction mse = 0.001675884; on real values the mse = 1606.236424. Overall, the result is not especially good. In the plot, the red line is the prediction and the yellow line is the ground truth: the trend is fitted reasonably well, but the point-wise values leave plenty of room for improvement.

Thoughts & possible improvements

1. Why must return_sequences=True be set (the default is False) on every LSTM layer before the last one, or the model raises an error? See my other memo for the explanation: /p/7f0c7d3d67af
2. This model only uses Keras's default CPU parallelism; running it on a Spark cluster has not been considered.

The full code:

```python
# coding:utf-8
# Normalization refactored: fit the scalers on the training set, then only
# transform the test set. Redundant code removed.
import os
from datetime import datetime

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import GRU, LSTM, Dense, Dropout
from keras.optimizers import Adam
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
from sklearn.metrics import mean_squared_error


# Format the date: zero-pad the components and build a timestamp string
def parse_date(year, month, day, hour):
    if len(str(month)) == 1:
        month = '0' + str(month)
    if len(str(day)) == 1:
        day = '0' + str(day)
    if len(str(hour)) == 1:  # the hour must be padded too, or parsing is ambiguous
        hour = '0' + str(hour)
    return datetime.strptime(str(year) + str(month) + str(day) + str(hour),
                             '%Y%m%d%H').strftime('%Y-%m-%d %H:00:00')


# Build train/test sets; fit the scalers on train, transform test
def make_train_test_data(path, ts, train_end, datasetsCount=None):
    air_data = pd.read_csv(path, header='infer')
    air_data['date'] = air_data.apply(
        lambda r: parse_date(r['year'], r['month'], r['day'], r['hour']), axis=1)
    # Encode the wind direction cbwd
    encoder = LabelEncoder()
    air_data['cbwd'] = encoder.fit_transform(air_data['cbwd'])
    # Convert column dtypes
    feature_cols = ['pm2.5', 'DEWP', 'TEMP', 'PRES', 'cbwd', 'Iws', 'Is', 'Ir']
    air_data[feature_cols] = air_data[feature_cols].astype('float64')
    air_data = air_data.dropna().reset_index(drop=True)
    # Use the past N = ts hours of data to predict the next hour's pollution
    air_data['y'] = air_data['pm2.5']
    # Optionally shrink the dataset
    if datasetsCount:
        air_data = air_data[:datasetsCount]
    # Split into training and test sets
    train_all = air_data[:train_end]
    test_all = air_data[train_end:]
    # Normalize: fit on train, transform test
    train_x = scalar_x.fit_transform(train_all[feature_cols])
    train_y = scalar_y.fit_transform(train_all[['y']])
    test_x = scalar_x.transform(test_all[feature_cols])
    test_y = scalar_y.transform(test_all[['y']])

    # ############# Build the windowed train/test sets ################
    ts_train_x = np.array([])
    ts_train_y = np.array([])
    ts_test_x = np.array([])
    ts_test_y = np.array([])
    print('raw training shape:', train_x.shape)
    for i in range(train_x.shape[0]):
        if i + ts == train_x.shape[0]:
            break
        ts_train_x = np.append(ts_train_x, train_x[i: i + ts, :])
        ts_train_y = np.append(ts_train_y, train_y[i + ts])
    print('raw test shape:', test_x.shape)
    for i in range(test_x.shape[0]):
        if i + ts == test_x.shape[0]:
            break
        ts_test_x = np.append(ts_test_x, test_x[i: i + ts, :])
        ts_test_y = np.append(ts_test_y, test_y[i + ts])
    return (ts_train_x.reshape((train_x.shape[0] - ts, ts, train_x.shape[1])),
            ts_train_y,
            ts_test_x.reshape((test_x.shape[0] - ts, ts, test_x.shape[1])),
            ts_test_y,
            scalar_y)


# Build the model
def build_model(ts, fea_dim):
    model = Sequential()
    model.add(LSTM(64, input_shape=(ts, fea_dim), activation='sigmoid',
                   return_sequences=True, dropout=0.01))
    model.add(LSTM(128, activation='sigmoid', return_sequences=True, dropout=0.01))
    model.add(Dropout(rate=0.01))
    model.add(LSTM(128, activation='sigmoid', dropout=0.01))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer=Adam(lr=0.002, decay=0.01))
    return model


# Keras learning script
if __name__ == '__main__':
    os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
    # Hyperparameters
    batch_size = 60
    data_dim = 8
    time_step = 72
    # Scalers
    scalar_x = MinMaxScaler(feature_range=(0, 1))
    scalar_y = MinMaxScaler(feature_range=(0, 1))
    # Load data: 365 days (8760 hours) as the raw training set,
    # using only the first 10000 hours in total
    x_train, y_train, x_test, y_test, scalar_Y = make_train_test_data(
        '/Users/getui/Data/rnn_', 72, 8760, 10000)  # path truncated in the source
    print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
    # Build and train the model, holding out 20% as a validation set
    lstm_model = build_model(time_step, data_dim)
    lstm_model.fit(x_train, y_train, epochs=50, batch_size=60, validation_split=0.2)
    # Predict, then convert back to real values
    pred_y = lstm_model.predict(x_test)
    pred_y_inverse = scalar_Y.inverse_transform(pred_y)
    true_y_inverse = scalar_Y.inverse_transform(y_test.reshape(len(y_test), 1))
    # MSE on normalized values and on real values
    minmax_mse = mean_squared_error(y_pred=pred_y, y_true=y_test)
    true_mse = mean_squared_error(y_pred=pred_y_inverse, y_true=true_y_inverse)
    print('normalized mse and real mse:', minmax_mse, true_mse)
    # Plot predictions against ground truth
    plt.figure(figsize=(20, 8))
    plt.plot(pred_y_inverse, 'r', label='prediction')
    plt.plot(true_y_inverse, 'y', label='true')
    plt.legend(loc='best')
    plt.show()
```
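A sanity check on the two MSE figures: with min-max scaling, inverse-transforming multiplies every error by the training range r = max - min, so the real-valued MSE should equal the normalized MSE times r². A toy sketch with numbers of my own (by this relation the reported pair implies a training pm2.5 range of roughly 979):

```python
import numpy as np

rng = np.random.default_rng(42)
lo, hi = 0.0, 979.0                 # hypothetical pm2.5 training range
r = hi - lo

y_true_n = rng.uniform(size=1000)   # toy normalized targets
y_pred_n = y_true_n + rng.normal(scale=0.04, size=1000)  # toy predictions

mse_norm = np.mean((y_pred_n - y_true_n) ** 2)
# Inverse transform is y = y_n * r + lo, so every error is scaled by r
mse_real = np.mean(((y_pred_n * r + lo) - (y_true_n * r + lo)) ** 2)
# mse_real equals mse_norm * r**2, since the constant lo cancels
```

So the large real-valued MSE is partly just the r² factor; comparing the normalized MSE against a naive baseline would say more about model quality.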
!! Everything I write on Jianshu is my own original work; reposting requires my permission. Sharing is welcome, but please credit the source !!
