How to teach a Keras neural network to solve sqrt


I am learning machine learning with Python and Keras. I built a neural network that is supposed to predict the square root of the square numbers {1, 4, 9, 16, 25, 36, ..., 100}. I have written the code for this, but the results are far from correct (no matter which number I feed to the network, it predicts 1.0).

I have tried changing the number of layers, the number of neurons per layer, and the activation functions, but nothing helps.

This is the code I have written so far:

from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# load dataset
# dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
dataset = loadtxt('sqrt.csv', delimiter=',')

# split into input (X) and output (y) variables
X = dataset[:,0:1] * 1.0
y = dataset[:,1] * 1.0

# define the keras model
model = Sequential()
model.add(Dense(6, input_dim=1, activation='relu'))
model.add(Dense(1, activation='linear'))

# compile the keras model
opt = optimizers.adam(lr=0.01)
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['accuracy'])

# fit the keras model on the dataset (CPU)
model.fit(X, y, epochs=150, batch_size=10, verbose=0)

# evaluate the keras model
_, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f' % (accuracy*100))

# make class predictions with the model
predictions = model.predict_classes(X)

# summarize the first 10 cases
for i in range(10):
    print('%s => %.2f (expected %.2f)' % (X[i].tolist(), predictions[i], y[i]))

Here is the dataset:

1,1
4,2
9,3
16,4
25,5
36,6
49,7
64,8
81,9
100,10
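
For completeness, a minimal sketch (my addition, assuming NumPy is installed) that writes exactly this sqrt.csv in the comma-separated format the loadtxt() call expects:

import numpy as np

# the ten (square, root) pairs listed above
roots = np.arange(1, 11)
data = np.column_stack((roots ** 2, roots))

# write as comma-separated integers, matching loadtxt('sqrt.csv', delimiter=',')
np.savetxt('sqrt.csv', data, fmt='%d', delimiter=',')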

When I run this network, I get the following results:

[1.0] => 0.00 (expected 1.00)
[4.0] => 0.00 (expected 2.00)
[9.0] => 1.00 (expected 3.00)
[16.0] => 1.00 (expected 4.00)
[25.0] => 1.00 (expected 5.00)
[36.0] => 1.00 (expected 6.00)
[49.0] => 1.00 (expected 7.00)
[64.0] => 1.00 (expected 8.00)
[81.0] => 1.00 (expected 9.00)
[100.0] => 1.00 (expected 10.00)

What exactly am I doing wrong?

Answer

This is a regression problem, so you should use model.predict() rather than model.predict_classes().
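
To see why the question's output collapses to 0 and 1: in the Keras versions where Sequential.predict_classes() exists, a model with a single output unit has its raw predictions thresholded at 0.5 (argmax is used only for multi-unit outputs). A rough sketch of that behaviour, reusing the model and X from the question's code:

# continuous outputs of the regression model
raw = model.predict(X)

# roughly what predict_classes() does for a single output unit: threshold at 0.5,
# which yields only 0s and 1s -- exactly the values reported in the question
classes = (raw > 0.5).astype('int32')

# for a regression target, use the raw predictions directly
print(raw[:3])
print(classes[:3])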

The dataset is also not big enough. Still, you can get some sensible predictions with the code below.

from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# load dataset
# dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
dataset = loadtxt('sqrt.csv', delimiter=',')

# split into input (X) and output (y) variables
X = dataset[:,0:1] * 1.0
y = dataset[:,1] * 1.0

# define the keras model
model = Sequential()
model.add(Dense(6, input_dim=1, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))

# compile the keras model
opt = optimizers.adam(lr=0.001)
model.compile(loss='mean_squared_error', optimizer=opt)

# fit the keras model on the dataset (CPU)
model.fit(X, y, epochs=1500, batch_size=10, verbose=0)

# evaluate the keras model (no metrics were compiled, so evaluate() returns the MSE loss)
loss = model.evaluate(X, y, verbose=0)
print('MSE: %.4f' % loss)

# make predictions with the model
predictions = model.predict(X)

# summarize the first 10 cases
for i in range(10):
    print('%s => %.2f (expected %.2f)' % (X[i].tolist(), predictions[i], y[i]))

Output:

[1.0] => 1.00 (expected 1.00)
[4.0] => 2.00 (expected 2.00)
[9.0] => 3.32 (expected 3.00)
[16.0] => 3.89 (expected 4.00)
[25.0] => 4.61 (expected 5.00)
[36.0] => 5.49 (expected 6.00)
[49.0] => 6.52 (expected 7.00)
[64.0] => 7.72 (expected 8.00)
[81.0] => 9.07 (expected 9.00)
[100.0] => 10.58 (expected 10.00)
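
Since the answer notes that the ten-row dataset is quite small, here is a hedged sketch (my addition, not part of the original answer) that generates a denser set of (x, sqrt(x)) training pairs in the same CSV format, so the script above can be rerun unchanged against a larger file:

import numpy as np

# denser training data: every integer from 1 to 100 paired with its square root
xs = np.arange(1, 101, dtype=float)
data = np.column_stack((xs, np.sqrt(xs)))
np.savetxt('sqrt.csv', data, fmt='%.6f', delimiter=',')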

Edit:

As @desertnaut pointed out in the comments, the accuracy metric is meaningless in a regression task. A custom R_squared (AKA the coefficient of determination) is therefore commonly used as the metric instead; the R_squared value indicates the goodness of fit of a regression model. Here is the code to compute R_squared:

from keras import backend as K

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot; epsilon() guards against division by zero
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())

Now you can compile the model with this metric:

model.compile(loss='mean_squared_error', optimizer=opt, metrics=[r_squared])
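
With the metric compiled in, model.evaluate() returns the loss followed by r_squared. A minimal usage sketch based on the answer's training setup:

model.fit(X, y, epochs=1500, batch_size=10, verbose=0)
loss, r2 = model.evaluate(X, y, verbose=0)
print('MSE: %.4f, R_squared: %.4f' % (loss, r2))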
