Why do I have 4 celery processes instead of the 2 I expected?

Time: 2017-04-27 17:54:39

Tags: celery django-celery

I have configured celery to run 2 workers, each with a concurrency of 1. My /etc/default/celeryd file contains (among other settings):

CELERYD_NODES="worker1 worker2"
CELERYD_OPTS="-Q:worker1 central -c:worker1 1 -Q:worker2 RetailSpider -c:worker2 1"

In other words, I expect 2 worker processes, since concurrency is 1 process per worker: one worker consumes from the queue 'central' and the other from the queue named 'RetailSpider', each with concurrency 1.
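For reference, the generic celeryd init script turns CELERYD_NODES and CELERYD_OPTS into roughly the following celery multi invocation (a sketch built from the settings above; the init script also appends the logfile and pidfile options visible in the ps output further down):

celery multi start worker1 worker2 --app=evofrontend \
    -Q:worker1 central -c:worker1 1 \
    -Q:worker2 RetailSpider -c:worker2 1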

Likewise, sudo service celeryd status shows:

celery init v10.1.
Using config script: /etc/default/celeryd
celeryd (node worker1) (pid 46610) is up...
celeryd (node worker2) (pid 46621) is up...

What confuses me, however, is the output of ps aux | grep 'celery worker', namely:

scraper  34384  0.0  1.0 348780 77780 ?        S    13:07   0:00 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q central -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker1.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker1.pid --hostname=worker1@scraping0-evo
scraper  34388  0.0  1.0 348828 77884 ?        S    13:07   0:00 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q RetailSpider -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker2.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker2.pid --hostname=worker2@scraping0-evo
scraper  46610  0.1  1.2 348780 87552 ?        Sl   Apr26   1:55 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q central -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker1.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker1.pid --hostname=worker1@scraping0-evo
scraper  46621  0.1  1.2 348828 87920 ?        Sl   Apr26   1:53 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q RetailSpider -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker2.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker2.pid --hostname=worker2@scraping0-evo

What are the 2 extra processes, the ones with PIDs 34384 and 34388?

(This is a Django project.)
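One way to see how the four processes relate is to include the parent PID in the ps output; a prefork pool child reports the PID of its main worker process as its PPID. A diagnostic sketch (not from the original post; the brackets in the grep pattern just keep grep from matching itself):

ps -eo pid,ppid,stat,cmd | grep '[c]elery worker'

If 34384 and 34388 list 46610 and 46621 as their parents, they are the pool child processes of the two workers rather than two additional workers.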

Edit:

I wonder whether this is related to the fact that celery by default starts as many concurrent worker processes as there are CPUs/cores available. This machine has 2 cores, hence 2 processes per worker. However, I expected the -c:worker1 1 and -c:worker2 1 options to override that.
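To confirm what pool size each node actually ended up with, the running workers can be queried with the inspect command (a sketch; it assumes it is run from the Django project directory with the same virtualenv so the evofrontend app and its broker settings can be loaded):

/opt/scraper/evo-scrape/venv/bin/python -m celery -A evofrontend inspect stats

The reply from each worker includes a "pool" section whose "max-concurrency" value shows the effective concurrency.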

I added --concurrency=1 to CELERYD_OPTS and CELERYD_CONCURRENCY = 1 to settings.py. I then killed all the processes and restarted celeryd, but I still see 4 processes (2 per worker).
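For reference, a stop/verify/start cycle along these lines makes sure no old worker processes survive before restarting (a sketch; pgrep -f matches against the full command line):

sudo service celeryd stop
pgrep -f 'celery worker'            # should print no PIDs before starting again
sudo service celeryd start
ps aux | grep '[c]elery worker'     # re-check the process count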

0 Answers:

There are no answers yet.