Predict the next number (n + 1) in a sequence with AI

Asked: 2019-05-02 17:27:19

Tags: python numpy machine-learning artificial-intelligence

The AI has to predict the next number in a given sequence of increasing integers using Python, but so far I haven't gotten the expected result. I have tried changing the learning rate and the number of iterations, but without any luck so far.

It should predict the next number based on this PATTERN:

The first number in the sequence (1) is a random integer drawn from the interval [2^0, 2^1] (that is, from 2 raised to the current index up to 2 raised to the next index); the second is drawn from [2^1, 2^2], and so on.

The AI should be able to decide which number to pick from each interval.
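
To illustrate, such a sequence can be generated like this (just a sketch of the pattern, not part of my model; the draws are random, so it won't reproduce the exact numbers below):

import random

def generate_sequence(n):
    # the i-th term (0-based) is drawn uniformly from [2**i, 2**(i + 1)]
    return [random.randint(2 ** i, 2 ** (i + 1)) for i in range(n)]

print(generate_sequence(10))  # e.g. [1, 3, 7, 8, 21, ...]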

The problem I'm having is implementing the above pattern in the AI so that it can predict n + 1. Since I'm fairly new to machine learning, I don't know how to feed this pattern to the AI with the libraries I'm working with.

Here is the code I'm using:

import numpy as np

# Init sequence
data =\
    [
        [1, 3, 7, 8, 21, 49, 76, 224, 467, 514, 1155, 2683, 5216, 10544, 51510, 95823,
        198669, 357535, 863317, 1811764, 3007503, 5598802, 14428676, 33185509, 54538862,
        111949941, 227634408, 400708894, 1033162084, 2102388551, 3093472814, 7137437912, 14133072157,
        20112871792, 42387769980, 100251560595, 146971536592, 323724968937, 1003651412950, 1458252205147,
        2895374552463, 7409811047825, 15404761757071, 19996463086597, 51408670348612, 119666659114170,
        191206974700443, 409118905032525, 611140496167764, 2058769515153876, 4216495639600700, 6763683971478124,
        9974455244496710, 30045390491869460, 44218742292676575, 138245758910846492, 199976667976342049,
        525070384258266191]
    ]

# Features are the positions 1..m of the terms; targets are the sequence values
dataX = np.matrix(np.arange(1, len(data[0]) + 1)).T  # 58 x 1 column of indices
y = np.matrix(data).T                                # 58 x 1 column of values

def J(X, y, theta):
    # Cost function: half the mean squared error
    theta = np.matrix(theta).T
    m = len(y)
    predictions = X * theta
    sqError = np.power(predictions - y, 2)
    return (1 / (2 * m)) * np.sum(sqError)

# Design matrix: a bias column of ones plus the index column
X = np.ones((len(dataX), 2))
X[:, 1:] = dataX

# gradient descent function
def gradient(X, y, alpha, theta, iters):
    J_history = np.zeros(iters)
    m = len(y)
    theta = np.matrix(theta).T
    for i in range(iters):
        h0 = X * theta                          # current hypothesis
        delta = (1 / m) * (X.T * h0 - X.T * y)  # gradient of the cost
        theta = theta - alpha * delta           # update step
        J_history[i] = J(X, y, theta.T)
    return J_history, theta
print('\n'+40*'=')

# Theta initialization
theta = np.matrix([np.random.random(), np.random.random()])

# Learning rate (note: with raw target values this large the updates
# overflow to inf/nan; the data would need scaling or a far smaller alpha)
alpha = 0.02

# Iterations
iters = 1000000

print('\n== Model summary ==\nLearning rate: {}\nIterations: {}\n'
      'Initial theta: {}\nInitial J: {:.2f}\n'
      .format(alpha, iters, theta, J(X, y, theta).item()))
print('Training model... ')

# Train model and find optimal Theta value
J_history, theta_min = gradient(X, y, alpha, theta, iters)
print('Done, model is trained')
print('\nModelled prediction function is:\ny = {:.2f} * x + {:.2f}'
  .format(theta_min[1].item(), theta_min[0].item()))
print('Cost is: {:.2f}'.format(J(X, y, theta_min.T).item()))

# Calculate the prediction for a given index
def predict(pop):
    return [1, pop] * theta_min

# Predict the next index (n + 1)
p = len(data[0]) + 1
print('\n'+40*'=')
print('Initial sequence was:\n', *np.array(data)[0])
print('\nNext number should be: {:,.1f}'
      .format(predict(p).item()))

1 Answer:

Answer 0 (score: 0)

I don't think AI is necessary here; a linear regression model is up to this task.
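
A minimal sketch of the missing setup, assuming the features Z are the sequence indices as a column vector and y are the sequence values (Z was left undefined in the original snippet):

import numpy as np

Z = np.arange(1, len(data[0]) + 1).reshape(-1, 1)  # assumed: indices 1..n as features
y = np.array(data[0])                              # assumed: sequence values as targets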

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

Input = [('scale', StandardScaler()), ('model', LinearRegression())]  # standardizes the data
pipe = Pipeline(Input)
# perform prediction using a linear regression model with features Z and targets y
pipe.fit(Z, y)
ypipe = pipe.predict(Z)
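
The fitted pipeline can then be queried at the next index to get the asked-for n + 1 prediction (again assuming Z holds the indices 1..n):

next_index = np.array([[len(y) + 1]])  # index of the term to predict
print(pipe.predict(next_index))

Keep in mind that a straight line is a coarse fit for a sequence that approximately doubles at each step, so predictions beyond the observed data should be treated with caution.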