Running python manage.py test gives a "maximum recursion depth exceeded" error

Time: 2017-06-27 10:03:44

Tags: python django django-upgrade

So I have a Django project based on 1.6.5, and I am now migrating it to 1.9.5. I successfully migrated it to 1.7.0 and then to 1.8.0. When going from 1.8.0 to 1.9.0, I had to replace SortedDict with collections.OrderedDict. Now, when I run python manage.py test, I get this error:
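The SortedDict-to-OrderedDict swap mentioned above, in miniature: django.utils.datastructures.SortedDict was removed in Django 1.9, and collections.OrderedDict from the standard library is its replacement. The keys and values below are purely illustrative.

```python
from collections import OrderedDict

# Like the old SortedDict, OrderedDict preserves insertion order.
d = OrderedDict()
d["b"] = 2
d["a"] = 1

print(list(d.keys()))  # ['b', 'a']
```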

  File "forum/models/base.py", line 134, in iterator
    key_list = [v[0] for v in self.values_list(*values_list)]
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
    self._fetch_all()
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
    self._result_cache = list(self.iterator())
  File "forum/models/base.py", line 134, in iterator
    key_list = [v[0] for v in self.values_list(*values_list)]
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 258, in __iter__
    self._fetch_all()
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 1074, in _fetch_all
    self._result_cache = list(self.iterator())
  File "forum/models/base.py", line 134, in iterator
    key_list = [v[0] for v in self.values_list(*values_list)]
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 725, in values_list
    clone = self._values(*fields)
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 671, in _values
    clone = self._clone()
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/query.py", line 1059, in _clone
    query = self.query.clone()
  File "venv/mystuff/lib/python2.7/site-packages/django/db/models/sql/query.py", line 298, in clone
    obj._annotations = self._annotations.copy() if self._annotations is not None else None
  File "/opt/python/python-2.7/lib64/python2.7/collections.py", line 194, in copy
    return self.__class__(self)
  File "/opt/python/python-2.7/lib64/python2.7/collections.py", line 57, in __init__
    self.__update(*args, **kwds)
  File "venv/mystuff/lib64/python2.7/abc.py", line 151, in __subclasscheck__
    if subclass in cls._abc_cache:
  File "venv/mystuff/lib64/python2.7/_weakrefset.py", line 72, in __contains__
    wr = ref(item)
RuntimeError: maximum recursion depth exceeded

Solutions to other, similar questions say to upgrade Python to 2.7.5, but I am already running this on 2.7.11.

Edit: forum/models/base.py

def iterator(self):
    cache_key = self.model._generate_cache_key("QUERY:%s" % self._get_query_hash())
    on_cache_query_attr = self.model.value_to_list_on_cache_query()

    to_return = None
    to_cache = {}

    with_aggregates = len(self.query.aggregates) > 0
    key_list = self._fetch_from_query_cache(cache_key)

    if key_list is None:
        if not with_aggregates:
            values_list = [on_cache_query_attr]

            if len(self.query.extra):
                values_list += self.query.extra.keys()

            key_list = [v[0] for v in self.values_list(*values_list)] #Line 134
            to_cache[cache_key] = (datetime.datetime.now(), key_list)
        else:
            to_return = list(super(CachedQuerySet, self).iterator())
            to_cache[cache_key] = (datetime.datetime.now(), [
                (row.__dict__[on_cache_query_attr], dict([(k, row.__dict__[k]) for k in self.query.aggregates.keys()]))
                for row in to_return])
    elif with_aggregates:
        tmp = key_list
        key_list = [k[0] for k in tmp]
        with_aggregates = [k[1] for k in tmp]
        del tmp

    if (not to_return) and key_list:
        row_keys = [self.model.infer_cache_key({on_cache_query_attr: attr}) for attr in key_list]
        cached = cache.get_many(row_keys)

        to_return = [
            (ck in cached) and self.obj_from_datadict(cached[ck]) or ToFetch(force_unicode(key_list[i])) for i, ck in enumerate(row_keys)
        ]

        if len(cached) != len(row_keys):
            to_fetch = [unicode(tr) for tr in to_return if isinstance(tr, ToFetch)]

            fetched = dict([(force_unicode(r.__dict__[on_cache_query_attr]), r) for r in
                          models.query.QuerySet(self.model).filter(**{"%s__in" % on_cache_query_attr: to_fetch})])

            to_return = [(isinstance(tr, ToFetch) and fetched[unicode(tr)] or tr) for tr in to_return]
            to_cache.update(dict([(self.model.infer_cache_key({on_cache_query_attr: attr}), r._as_dict()) for attr, r in fetched.items()]))

        if with_aggregates:
            for i, r in enumerate(to_return):
                r.__dict__.update(with_aggregates[i])


    if len(to_cache):
        cache.set_many(to_cache, 60 * 60)

    if to_return:
        for row in to_return:
            if hasattr(row, 'leaf'):
                row = row.leaf

            row.reset_original_state()
            yield row

django/db/models/query.py:

def _fetch_all(self):
    if self._result_cache is None:
        self._result_cache = list(self.iterator()) #Line 1074
    if self._prefetch_related_lookups and not self._prefetch_done:
        self._prefetch_related_objects()

django/db/models/query.py:

def __iter__(self):
    """
    The queryset iterator protocol uses three nested iterators in the
    default case:
        1. sql.compiler:execute_sql()
           - Returns 100 rows at time (constants.GET_ITERATOR_CHUNK_SIZE)
             using cursor.fetchmany(). This part is responsible for
             doing some column masking, and returning the rows in chunks.
        2. sql/compiler.results_iter()
           - Returns one row at time. At this point the rows are still just
             tuples. In some cases the return values are converted to
             Python values at this location.
        3. self.iterator()
           - Responsible for turning the rows into model objects.
    """
    self._fetch_all() #Line 258
    return iter(self._result_cache)

Update: My Django log shows this:

/forum/settings/base.py TIME: 2017-06-27 06:49:53,410 MSG: base.py:value:65 Error retrieving setting from database (FORM_EMPTY_QUESTION_BODY): maximum recursion depth exceeded in cmp
/forum/settings/base.py TIME: 2017-06-27 06:49:53,444 MSG: base.py:value:65 Error retrieving setting from database (FORM_MIN_NUMBER_OF_TAGS): maximum recursion depth exceeded

1 Answer:

Answer 0 (score: 3)

So you call values_list() from your iterator, which clones the QuerySet, which iterates the QuerySet, which calls your iterator, which clones the QuerySet...
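The cycle can be reproduced without Django at all. This is a stripped-down, hypothetical sketch (FakeQuerySet stands in for CachedQuerySet; none of its methods are Django's real implementations) of the mutual recursion described above:

```python
class FakeQuerySet(object):
    """Stand-in for CachedQuerySet; names are illustrative only."""

    def iterator(self):
        # The custom iterator fetches its keys via values_list()...
        return [v[0] for v in self.values_list("pk")]

    def values_list(self, *fields):
        # ...but values_list() works on a clone of the SAME subclass,
        # and iterating that clone runs _fetch_all()...
        return self._fetch_all()

    def _fetch_all(self):
        # ...which calls iterator() again, closing the cycle.
        return list(self.iterator())

try:
    FakeQuerySet().iterator()
except RuntimeError as exc:  # RecursionError is a RuntimeError subclass
    print(exc)
```

Running this prints the same "maximum recursion depth exceeded" family of message as the traceback in the question.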

The API was changed in this commit. It should give you enough information to reimplement your QuerySet.

On the other hand, it looks like Django itself has since implemented query caching, so before refactoring you may want to check whether your CachedQuerySet is obsolete.
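As a hedged illustration of one way to break the cycle (pure Python, not Django's actual API): route the key query through the base queryset class so that values_list() never re-enters the caching subclass's iterator. This mirrors the plain models.query.QuerySet(self.model) construction that the question's code already uses for the cache-miss fetch; BaseQuerySet and the data list here are stand-ins.

```python
class BaseQuerySet(object):
    """Stand-in for django.db.models.query.QuerySet."""

    def __init__(self, data):
        self.data = data

    def iterator(self):
        return iter(self.data)

    def values_list(self):
        # Iterates *self*, so whichever class owns iterator() gets called.
        return [(x,) for x in self.iterator()]


class CachedQuerySet(BaseQuerySet):
    def iterator(self):
        # Calling self.values_list() here would re-enter this very method.
        # Instead, build a fresh *base* queryset for the key fetch, so the
        # recursion never starts.
        base = BaseQuerySet(self.data)
        for value in base.values_list():
            yield value[0]


print(list(CachedQuerySet([1, 2, 3]).iterator()))  # [1, 2, 3]
```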
