Some background on my project: I am studying how various bullet parameters affect a projectile's ballistic coefficient (i.e. bullet performance). I have different parameters such as weight, caliber, sectional density, and so on. I feel like I am going about this all wrong; I have just been reading tutorials and applying whatever seemed useful and relevant to my project.
The output of my regression model looks off: during the model.fit() step of my program, the trained model keeps reporting 0.0201 as the MSE. On top of that, model.predict(X) appears to be 100% accurate, which doesn't seem right; I borrowed some code from a tutorial on describing Keras models to display the model's output alongside the expected output.
Here is the program that builds the model and trains it:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard
from pandas.plotting import scatter_matrix
import time
name = 'Bullet Database Analysis v2-{}'.format(int(time.time()))
tensorboard = TensorBoard(log_dir='logs/{}'.format(name))
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
df = pd.read_csv(r'Bullet Optimization\ShootForum Bullet DB_2.csv')  # raw string so the backslash is not treated as an escape
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
dataset = df.values
X = dataset[:,0:12]
X = np.asarray(X).astype(np.float32)
y = dataset[:,13]
y = np.asarray(y).astype(np.float32)
X_train, X_val_and_test, y_train, y_val_and_test = train_test_split(X, y, test_size=0.3, shuffle=True)
X_val, X_test, y_val, y_test = train_test_split(X_val_and_test, y_val_and_test, test_size=0.5)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization
model = Sequential(
    [
        #2430 is the shape of X_train
        #BatchNormalization(axis=-1, momentum = 0.1),
        Dense(2430, activation='relu'),
        Dense(32, activation='relu'),
        Dense(1),
    ]
)
model.compile(loss='mse', metrics=['mse'])
history = model.fit(X_train, y_train,
                    batch_size=64,
                    epochs=20,
                    validation_data=(X_val, y_val),
                    #callbacks = [tensorboard]
                    )
# plt.plot(history.history['loss'],'r')
# plt.plot(history.history['val_loss'],'m')
plt.plot(history.history['mse'],'b')
plt.show()
model.summary()
model.save(r"Bullet Optimization\Bullet Database Analysis.h5")
Here is my code that loads the previously trained model from the .h5 file:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
import pandas as pd
df = pd.read_csv(r'Bullet Optimization\ShootForum Bullet DB_2.csv')
model = load_model(r'Bullet Optimization\Bullet Database Analysis.h5')
dataset = df.values
X = dataset[:,0:12]
y = dataset[:,13]
model.fit(X,y, epochs=10)
#predictions = np.argmax(model.predict(X), axis=-1)
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
Here is the output:
Epoch 1/10
2021-03-09 10:38:06.372303: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-03-09 10:38:07.747241: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
109/109 [==============================] - 2s 4ms/step - loss: 0.0201 - mse: 0.0201
Epoch 2/10
109/109 [==============================] - 1s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 3/10
109/109 [==============================] - 0s 4ms/step - loss: 0.0201 - mse: 0.0201
Epoch 4/10
109/109 [==============================] - 0s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 5/10
109/109 [==============================] - 1s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 6/10
109/109 [==============================] - 1s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 7/10
109/109 [==============================] - 1s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 8/10
109/109 [==============================] - 0s 4ms/step - loss: 0.0201 - mse: 0.0201
Epoch 9/10
109/109 [==============================] - 1s 5ms/step - loss: 0.0201 - mse: 0.0201
Epoch 10/10
109/109 [==============================] - 0s 4ms/step - loss: 0.0201 - mse: 0.0201
[0.314, 7.9756, 100.0, 100.0, 31.4, 0.00314, 318.4713376, 6.480041472000001, 0.51, 12.95400001, 4.067556004, 0.145] => 0 (expected 0)
[0.358, 9.0932, 148.0, 148.0, 52.983999999999995, 0.002418919, 413.4078212, 9.590461379, 0.635, 16.12900002, 5.774182006, 0.165] => 0 (expected 0)
[0.313, 7.9502, 83.0, 83.0, 25.979, 0.003771084, 265.1757188, 5.378434422000001, 0.504, 12.80160001, 4.006900804, 0.121] => 0 (expected 0)
[0.251, 6.3754, 50.0, 50.0, 12.55, 0.00502, 199.20318730000002, 3.2400207360000004, 0.4, 10.16000001, 2.5501600030000002, 0.113] => 0 (expected 0)
[0.251, 6.3754, 50.0, 50.0, 12.55, 0.00502, 199.20318730000002, 3.2400207360000004, 0.41, 10.41400001, 2.613914003, 0.113] => 0 (expected 0)
Here is the link to my training set. In my code I use train_test_split to create the training and test datasets.
Finally, is there a way to visualize in TensorBoard how well the model fits the dataset? I really feel that, even though my model is training, it is not fitting the data in any meaningful way, even though the MSE loss has decreased.
That is because there are nan values in your dataset. You can check for them before splitting with df.isna().sum(). They can negatively affect your network. Here I simply dropped them (df.dropna(inplace=True, axis=0)), but you could use an imputation technique to replace them instead.
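A minimal sketch of that check and cleanup, run before train_test_split (it assumes the same CSV as in the question and that all columns are numeric; the SimpleImputer lines are just one possible imputation option):
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.read_csv(r'Bullet Optimization\ShootForum Bullet DB_2.csv')
print(df.isna().sum())           # number of missing values in each column

# Option 1: drop every row that contains a nan (what was done here)
df.dropna(inplace=True, axis=0)

# Option 2 (instead of dropping): fill nans with the column mean
# imputer = SimpleImputer(strategy='mean')
# df[:] = imputer.fit_transform(df)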
Also, 2430 neurons is probably overkill for this data; start with fewer:
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(1),
    ]
)
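For completeness, a sketch of how this smaller model could be compiled and trained on the cleaned data; the answer does not show the compile call, so the 'adam' optimizer below is an assumption and the fit settings are carried over from the question:
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
history = model.fit(X_train, y_train,
                    batch_size=64,
                    epochs=20,
                    validation_data=(X_val, y_val))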
Here is the last epoch:
Epoch 20/20
27/27 [==============================] - 0s 8ms/step - loss: 8.2077e-04 - mse: 8.2077e-04 - val_loss: 8.5023e-04 - val_mse: 8.5023e-04
Computing accuracy directly is not a valid option when doing regression. You can use model.evaluate(X_test, y_test), or, once you have predictions from model.predict, use other regression metrics to measure how close your predictions are.
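As a rough sketch (assuming X_test and y_test from the earlier train_test_split), that evaluation could look like this, with scikit-learn's regression metrics in place of accuracy:
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# evaluate() returns [loss, mse] because the model was compiled with loss='mse', metrics=['mse']
print(model.evaluate(X_test, y_test))

preds = model.predict(X_test).flatten()
print('MAE:', mean_absolute_error(y_test, preds))
print('MSE:', mean_squared_error(y_test, preds))
print('R^2:', r2_score(y_test, preds))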