Flask and Celery

Time: 2017-06-21 22:46:59

Tags: python flask celery

I am using a Celery chord, and some of the tasks in the header succeed while others fail with this error:

celery.backends.base.ChordError: NotRegistered("'app.tasks.grab_articles'",)
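
For reference, the chord is dispatched roughly like this (an illustrative sketch, not my exact call; the id batches and the choice of calculate_best_matches as the callback are stand-ins):

from celery import chord
from app.tasks import grab_articles, calculate_best_matches

# header: one grab_articles task per batch of ids;
# the callback fires only after every header task has finished
batches = [[1, 2], [3, 4]]  # stand-in id batches
result = chord(grab_articles.s(ids) for ids in batches)(calculate_best_matches.s())
result.ready()  # True once the callback itself has completed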

However, when my Celery worker starts up, I can see the task being loaded:

(latentcall)
2017-06-21T22:24:04.505411+00:00 app[worker.1]: ---- **** ----- 
2017-06-21T22:24:04.505412+00:00 app[worker.1]: --- * ***  * -- Linux-3.13.0-121-generic-x86_64-with-debian-stretch-sid 2017-06-21 22:24:04
2017-06-21T22:24:04.505413+00:00 app[worker.1]: -- * - **** --- 
2017-06-21T22:24:04.505414+00:00 app[worker.1]: - ** ---------- [config]
2017-06-21T22:24:04.505415+00:00 app[worker.1]: - ** ---------- .> app:         app:0x7f3db1959208
2017-06-21T22:24:04.505418+00:00 app[worker.1]: - ** ---------- .> transport:   amqp://...
2017-06-21T22:24:04.505419+00:00 app[worker.1]: - ** ---------- .> results:     postgres:...
2017-06-21T22:24:04.505420+00:00 app[worker.1]: - *** --- * --- .> concurrency: 1 (prefork)
2017-06-21T22:24:04.505421+00:00 app[worker.1]: -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
2017-06-21T22:24:04.505421+00:00 app[worker.1]: --- ***** ----- 
2017-06-21T22:24:04.505422+00:00 app[worker.1]:  -------------- [queues]
2017-06-21T22:24:04.505423+00:00 app[worker.1]:                 .> celery           exchange=celery(direct) key=celery
2017-06-21T22:24:04.505423+00:00 app[worker.1]:                 
2017-06-21T22:24:04.505424+00:00 app[worker.1]: 
2017-06-21T22:24:04.505425+00:00 app[worker.1]: [tasks]
2017-06-21T22:24:04.505425+00:00 app[worker.1]:   . app.tasks.calculate_best_matches
2017-06-21T22:24:04.505426+00:00 app[worker.1]:   . app.tasks.grab_articles
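
To double-check which names the running worker has actually registered, the worker can be queried directly (using the same --app path as in the Procfile below):

celery --app=celery_worker.celery inspect registered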

My app is set up the same way as the one here:

https://github.com/miguelgrinberg/flasky-with-celery

Here is the structure of my application:

proj
 |---app
   |---__init__.py
   |---tasks.py
 |---celery_worker.py
 |---config.py
 |---manage.py

__init__.py

from celery import Celery
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from config import Config

db = SQLAlchemy()
celery = Celery(__name__, broker=Config.CELERY_BROKER_URL, backend=Config.CELERY_RESULT_BACKEND)

def create_app():
    app = Flask(__name__)
    app.config.from_object(Config)
    Config.init_app(app)

    db.init_app(app)
    celery.conf.update(app.config)

    from .main import main as main_blueprint
    app.register_blueprint(main_blueprint)

    return app
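
config.py itself is not shown here; the Celery-related part looks roughly like this (a sketch with placeholder env var names and URLs, the real values come from my environment):

import os

class Config:
    # placeholder names; my real broker is amqp, as the worker banner shows
    CELERY_BROKER_URL = os.environ.get('BROKER_URL', 'amqp://localhost//')
    # a database-backed result store, which is where the celery_taskmeta table comes from
    CELERY_RESULT_BACKEND = os.environ.get('RESULT_BACKEND', 'db+postgresql://localhost/mydb')

    @staticmethod
    def init_app(app):
        pass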

tasks.py

from . import celery
from sqlalchemy.sql import select
from sklearn.metrics.pairwise import pairwise_distances
from datetime import datetime
import os
import json
import redis
import pandas as pd
from . import helpers
from . import models

r = redis.from_url(os.environ['REDIS_URL'])

@celery.task(name='task_grab_articles', bind=True)
def grab_articles(self, ids):
    print(self.request.id)
    article = models.Article
    query = article.query.with_entities(article.id, article.tfidf).filter(article.id.in_(ids)).all()
    df = pd.DataFrame(query, columns=['id', 'tfidf']).to_json()
    return df  # return the serialized frame rather than the raw result rows
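
One thing I notice when comparing the traceback with the decorator: passing name='task_grab_articles' makes Celery register the task under that custom name rather than the default app.tasks.grab_articles, so anything that builds the chord from the default name would raise NotRegistered (though the worker banner above does show app.tasks.grab_articles, so the snippet and the running worker may be out of sync). The registry can be checked directly (a sketch, assuming the package imports cleanly outside the worker):

from app import celery
import app.tasks  # ensure the task module is imported so its tasks register

# celery.tasks is the live task registry; its keys are the registered names
print(sorted(name for name in celery.tasks if not name.startswith('celery.')))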

celery_worker.py:

#!/usr/bin/env python
from app import celery, create_app

app = create_app()
# push an application context so tasks run by this worker can use
# Flask extensions (e.g. Flask-SQLAlchemy) bound to the app
app.app_context().push()

manage.py

from app import create_app

app = create_app()

Procfile:

web: gunicorn manage:app
worker: celery worker --app=celery_worker.celery --loglevel=info --concurrency=1

After some debugging, I found that Celery creates 8 tasks but only sends three or four of them to my worker process (see the Procfile). Once those four tasks complete, my callback task's .ready() returns True. However, when I check my celery_taskmeta table, the other four tasks are still PENDING; apart from the backend, they are not visible anywhere. I am wondering whether they are being sent to some phantom worker/node spawned by multiprocessing.
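
To see where the other four tasks end up, the worker can be interrogated while the chord is running (same --app path as the Procfile):

celery --app=celery_worker.celery inspect active     # tasks currently executing
celery --app=celery_worker.celery inspect reserved   # tasks prefetched but not yet started
celery --app=celery_worker.celery inspect scheduled  # tasks held back by an ETA/countdown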

0 Answers:

No answers