2016-05-14

Trying to run supervisord (3.2.2) with celery multi.

It seems that supervisord can't handle it; a single celery worker works fine.

Here is my supervisord configuration:

celery multi v3.1.20 (Cipater) 
> Starting nodes... 
    > [email protected]: OK 
Stale pidfile exists. Removing it. 
    > [email protected]: OK 
Stale pidfile exists. Removing it. 

celeryd.conf

; ================================== 
; celery worker supervisor example 
; ================================== 

[program:celery] 
; Set full path to celery program if using virtualenv 
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh 
process_name = %(program_name)s%(process_num)d@%(host_node_name)s 
directory=/usr/local/src/imbue/application/imbue/conf/ 
numprocs=2 
stderr_logfile=/usr/local/src/imbue/application/imbue/log/celeryd.err 
logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log 
stdout_logfile_backups = 10 
stderr_logfile_backups = 10 
stdout_logfile_maxbytes = 50MB 
stderr_logfile_maxbytes = 50MB 
autostart=true 
autorestart=false 
startsecs=10 

I'm using the following supervisord variables to imitate the way I start celery:

  • %(program_name)s
  • %(process_num)d
  • @
  • %(host_node_name)s
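Supervisord expands these placeholders with Python %-style string interpolation, so the resulting process name can be previewed in plain Python. A minimal sketch; the values below are illustrative, not read from a live supervisord:

```python
# Sketch of supervisord's process_name expansion, which uses
# ordinary Python %-style interpolation over a value dict.
values = {
    "program_name": "celery",
    "process_num": 0,
    "host_node_name": "parzee-dev-app-sfo1",
}
template = "%(program_name)s%(process_num)d@%(host_node_name)s"
print(template % values)  # celery0@parzee-dev-app-sfo1
```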

Supervisorctl

supervisorctl 
celery:[email protected] FATAL  Exited too quickly (process log may have details) 
celery:[email protected] FATAL  Exited too quickly (process log may have details) 

I tried changing this value from 0 to 1 in /usr/local/lib/python2.7/dist-packages/supervisor/options.py:

numprocs_start = integer(get(section, 'numprocs_start', 1)) 

After changing this value I still get:

celery:[email protected] FATAL  Exited too quickly (process log may have details) 
celery:[email protected] EXITED May 14 12:47 AM 
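Incidentally, numprocs_start is an ordinary per-program configuration option, so patching supervisor/options.py shouldn't be necessary. A sketch of setting it directly, reusing the paths from the config above:

```ini
[program:celery]
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh
process_name = %(program_name)s%(process_num)d@%(host_node_name)s
numprocs=2
numprocs_start=1   ; number processes from 1 instead of 0
```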

Celery starts, but supervisord does not keep track of it.

root@parzee-dev-app-sfo1:/etc/supervisor#

ps -ef | grep celery 
root  2728  1 1 00:46 ?  00:00:02 [celeryd: [email protected]:MainProcess] -active- (worker -c 16 -n [email protected] --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/1.pid) 
root  2973  1 1 00:46 ?  00:00:02 [celeryd: [email protected]:MainProcess] -active- (worker -c 16 -n [email protected] --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/2.pid) 

celery.sh

source ~/.profile 
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log 
CELERYD_OPTS=" --loglevel=DEBUG" 
CELERY_WORKERS=2 
CELERY_PROCESSES=16 
cd /usr/local/src/imbue/application/imbue/conf 
exec celery multi start $CELERY_WORKERS -P processes -c $CELERY_PROCESSES -n [email protected]{HOSTNAME} -f $CELERY_LOGFILE $CELERYD_OPTS 

Similar: Running celeryd_multi with supervisor, How to use Supervisor + Django + Celery with multiple Queues and Workers?

Answer


Since supervisor monitors (starts/stops/restarts) processes, the process should run in the foreground (it should not be daemonized).

celery multi daemonizes itself, so it cannot be run under supervisor.

You can instead create a separate process for each worker and group them into one:

[program:worker1] 
command=celery worker -l info -n worker1 

[program:worker2] 
command=celery worker -l info -n worker2 

[group:workers] 
programs=worker1,worker2 
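If you prefer a single section, supervisord can spawn both foreground workers itself via numprocs; it expands %(process_num)d inside command as well. A sketch only; the worker naming is illustrative, and % signs meant for celery (its %h node-name substitution) must be doubled in supervisor config:

```ini
[program:worker]
command=celery worker -l info -n worker%(process_num)d@%%h
process_name=%(program_name)s%(process_num)d
numprocs=2
```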

You can also write a shell script that makes the daemon process run in the foreground, like this:

#! /usr/bin/env bash 
set -eu 

pidfile="/var/run/your-daemon.pid" 
command=/usr/sbin/your-daemon 

# Proxy signals 
function kill_app(){ 
    kill $(cat $pidfile) 
    exit 0 # exit okay 
} 
trap "kill_app" SIGINT SIGTERM 

# Launch daemon 
$command   # e.g. celery multi start 2 -l INFO 

sleep 2 

# Loop while the pidfile and the process exist 
while [ -f $pidfile ] && kill -0 $(cat $pidfile) ; do 
    sleep 0.5 
done 
exit 1000 # exit unexpected 
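The essential trick is the watch loop: the wrapper stays in the foreground as long as the daemon's recorded pid is alive. A self-contained sketch of that loop, using a short-lived subprocess as a stand-in for the daemonized worker (file names and timings are arbitrary):

```python
import os
import subprocess
import tempfile
import time

# Stand-in "daemon": a short-lived child whose pid we record,
# the way celery multi records pids in its pidfiles.
proc = subprocess.Popen(["sleep", "1"])
pidfile = os.path.join(tempfile.mkdtemp(), "demo.pid")
with open(pidfile, "w") as f:
    f.write(str(proc.pid))

# Watch loop: stay in the foreground while the pidfile exists and
# the process is alive. proc.poll() doubles as the `kill -0` probe
# (and reaps the child, so a zombie is not mistaken for alive).
while os.path.exists(pidfile) and proc.poll() is None:
    time.sleep(0.2)

print("daemon exited")
os.remove(pidfile)
```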

Thanks, I ended up breaking my multi config into individual workers – spicyramen