pandas_datareader._utils.RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/CBOE

Asked: 2019-01-17 11:29:40

Tags: python pandas pickle

I'm trying to pull the list of S&P 500 companies from Wikipedia; however, when I run the code it only gets through 90 companies and then throws this huge error:

 Traceback (most recent call last):
  File "D:/Python projects/Pandas_1/S&P500 Tickers.py", line 46, in <module>
    get_data_from_yahoo()
  File "D:/Python projects/Pandas_1/S&P500 Tickers.py", line 37, in get_data_from_yahoo
    df = web.DataReader(ticker, 'yahoo', start, end)
  File "C:\Users\UserX\venv\lib\site-packages\pandas_datareader\data.py", line 310, in DataReader
    session=session).read()
  File "C:\Users\UserX\venv\lib\site-packages\pandas_datareader\base.py", line 210, in read
    params=self._get_params(self.symbols))
  File "C:\Users\UserX\venv\lib\site-packages\pandas_datareader\yahoo\daily.py", line 129, in _read_one_data
    resp = self._get_response(url, params=params)
  File "C:\Users\UserX\venv\lib\site-packages\pandas_datareader\base.py", line 155, in _get_response
    raise RemoteDataError(msg)
pandas_datareader._utils.RemoteDataError: Unable to read URL: https://finance.yahoo.com/quote/CBOE: CBOE/history?period1=1262311200&period2=1547776799&interval=1d&frequency=1d&filter=history

along with the response body, which is too large to fit in this post. I'm new to this, so I don't know what to do. My code is:

import bs4 as bs
import datetime as dt
import os
import pandas_datareader.data as web
import pickle
import requests


def save_sp500_tickers():
    resp = requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
    soup = bs.BeautifulSoup(resp.text, 'lxml')
    table = soup.find('table', {'class': 'wikitable sortable'})
    tickers = []
    for row in table.findAll('tr')[1:]:
        ticker = row.findAll('td')[0].text
        tickers.append(ticker)
    with open("sp500tickers.pickle", "wb") as f:
        pickle.dump(tickers, f)
    return tickers


def get_data_from_yahoo(reload_sp500=False):
    if reload_sp500:
        tickers = save_sp500_tickers()
    else:
        with open("sp500tickers.pickle", "rb") as f:
            tickers = pickle.load(f)
    if not os.path.exists('stock_dfs'):
        os.makedirs('stock_dfs')

    start = dt.datetime(2010, 1, 1)
    end = dt.datetime.now()
    for ticker in tickers:
        if not os.path.exists('stock_dfs/{}.csv'.format(ticker)):
            df = web.DataReader(ticker, 'yahoo', start, end)
            df.reset_index(inplace=True)
            df.set_index("Date", inplace=True)
            df = df.drop("Symbol", axis=1)
            df.to_csv('stock_dfs/{}.csv'.format(ticker))
        else:
            print('Already have {}'.format(ticker))


get_data_from_yahoo()
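
One likely culprit (an assumption based on the code above, not a confirmed diagnosis): row.findAll('td')[0].text keeps the trailing newline from each Wikipedia table cell, and a few S&P 500 symbols such as BRK.B use a dot where Yahoo's history URL expects a dash, so the request URL ends up malformed for some tickers. A minimal sketch of the same two functions with the symbols cleaned and failed downloads skipped, rather than a definitive fix, could look like this; the only new import is RemoteDataError from pandas_datareader._utils, which already appears in the traceback:

# A sketch under the assumptions above -- not a verified fix.
import bs4 as bs
import datetime as dt
import os
import pickle

import pandas_datareader.data as web
import requests
from pandas_datareader._utils import RemoteDataError


def save_sp500_tickers():
    resp = requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
    soup = bs.BeautifulSoup(resp.text, 'lxml')
    table = soup.find('table', {'class': 'wikitable sortable'})
    tickers = []
    for row in table.findAll('tr')[1:]:
        # .text keeps the cell's trailing "\n", so strip it off
        ticker = row.findAll('td')[0].text.strip()
        # Yahoo uses "-" where the Wikipedia list uses "." (e.g. BRK.B -> BRK-B)
        tickers.append(ticker.replace('.', '-'))
    with open("sp500tickers.pickle", "wb") as f:
        pickle.dump(tickers, f)
    return tickers


def get_data_from_yahoo(reload_sp500=False):
    if reload_sp500:
        tickers = save_sp500_tickers()
    else:
        with open("sp500tickers.pickle", "rb") as f:
            tickers = pickle.load(f)
    if not os.path.exists('stock_dfs'):
        os.makedirs('stock_dfs')

    start = dt.datetime(2010, 1, 1)
    end = dt.datetime.now()
    for ticker in tickers:
        csv_path = 'stock_dfs/{}.csv'.format(ticker)
        if os.path.exists(csv_path):
            print('Already have {}'.format(ticker))
            continue
        try:
            # DataReader returns a frame already indexed by Date
            df = web.DataReader(ticker, 'yahoo', start, end)
        except RemoteDataError as exc:
            # Skip symbols Yahoo still refuses instead of killing the whole run
            print('Skipping {}: {}'.format(ticker, exc))
            continue
        df.to_csv(csv_path)


get_data_from_yahoo()

Since DataReader already returns the data indexed by Date, the sketch writes each frame straight to CSV and leaves out the reset_index / set_index / drop("Symbol") steps from the original loop.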

0 Answers:

No answers yet