Running tasks of the same DAG on different servers

Date: 2019-04-12 13:54:49

Tags: docker-compose airflow airflow-scheduler

I need to run the following DAG:

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# default_args is assumed to be defined earlier in the file
# (its definition was not included in the question)
dag = DAG('dummy_for_testing',
          default_args=default_args,
          schedule_interval=None)

t1 = BashOperator(
    task_id='print_date',
    bash_command='date',
    dag=dag)

t2 = BashOperator(
    task_id='print_host',
    bash_command='hostname',
    queue='druid_queue',  # route this task to the worker listening on druid_queue
    dag=dag)

t3 = BashOperator(
    task_id='print_directory',
    bash_command='pwd',
    dag=dag)

# print_date -> print_host -> print_directory
t3.set_upstream(t2)
t2.set_upstream(t1)

where t1 and t3 run on Server A, while t2 runs on Server B (queue='druid_queue'). I am currently setting up Airflow with puckel/docker-airflow. The docker-compose yml files for the servers look like this:
Server1

version: '2.1'
services:
    redis:
        image: 'redis:3.2.7'
        ports:
            - "10.0.11.4:6999:6379"
        command: redis-server

    postgres:
        image: postgres:9.6
        container_name: postgres-airflow
        ports:
            - "10.0.11.4:5434:5432"
        environment:
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow

    webserver:
        image: puckel/docker-airflow:1.10.2
        container_name: airflow
        restart: always
        depends_on:
            - postgres
            - redis
        environment:
            - LOAD_EX=n
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - user_logs_config_loc=dags/user_logs/configurations/
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
        volumes:
            - /data/druid-data/airflow/dags:/usr/local/airflow/dags
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - "10.0.11.4:8085:8080"
        command: webserver
        healthcheck:
            test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
            interval: 30s

    flower:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - redis
        environment:
            - EXECUTOR=Celery
        ports:
            - "5555:5555"
        command: flower

    scheduler:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - webserver
        volumes:
            - /data/druid-data/airflow/dags:/usr/local/airflow/dags
        environment:
            - LOAD_EX=n
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
            - user_logs_config_loc=dags/user_logs/configurations/
        command: scheduler

    worker:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - scheduler
        volumes:
            - /data/druid-data/airflow/dags:/usr/local/airflow/dags
        environment:
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
            - user_logs_config_loc=dags/user_logs/configurations/
        command: worker

Server2

version: '2.1'
services:
    redis:
        image: 'redis:3.2.7'
        ports:
            - "10.0.11.5:6999:6379"
        command: redis-server

    postgres:
        image: postgres:9.6
        container_name: postgres-airflow
        ports:
            - "10.0.11.5:5434:5432"
        environment:
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow

    webserver:
        image: puckel/docker-airflow:latest
        container_name: airflow
        restart: always
        depends_on:
            - postgres
            - redis
        environment:
            - LOAD_EX=n
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - user_logs_config_loc=dags/user_logs/configurations/
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
        volumes:
            - /data/qa/druid-data/airflow/dags:/usr/local/airflow/dags
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - "10.0.11.5:8085:8080"
        command: webserver
        healthcheck:
            test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
            interval: 30s

    flower:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - redis
        environment:
            - EXECUTOR=Celery
        ports:
            - "5555:5555"
        command: flower

    scheduler:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - webserver
        volumes:
            - ./dags:/usr/local/airflow/dags
            - /data/qa/druid-data/airflow/dags:/usr/local/airflow/dags
        environment:
            - LOAD_EX=n
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
        command: scheduler

    worker:
        image: puckel/docker-airflow:1.10.2
        restart: always
        depends_on:
            - scheduler
        volumes:
            - ./dags:/usr/local/airflow/dags
            - /data/qa/druid-data/airflow/dags:/usr/local/airflow/dags
        environment:
            - FERNET_KEY=<>
            - EXECUTOR=Celery
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
        command: worker -q druid_queue

The variables on Server 1 look like this:
broker_url = redis://redis:6379/1
result_backend = db+postgresql://airflow:airflow@postgres:5432/airflow

The variables on Server 2 look like this:
broker_url = redis://10.0.11.4:6999/1
result_backend = db+postgresql://airflow:airflow@10.0.11.4:5434/airflow

Is there something wrong with my configuration? When I trigger the DAG from the webserver on Server A, the DAG gets stuck:
(screenshot of the stuck DAG run)

Logs captured in the scheduler container on Server A:

[2019-04-12 14:42:35,184] {{jobs.py:1215}} INFO - Setting the follow tasks to queued state:
    <TaskInstance: dummy_for_testing.print_date 2019-04-12 14:42:33.552786+00:00 [scheduled]>
[2019-04-12 14:42:35,194] {{jobs.py:1299}} INFO - Setting the following 1 tasks to queued state:
    <TaskInstance: dummy_for_testing.print_date 2019-04-12 14:42:33.552786+00:00 [queued]>
[2019-04-12 14:42:35,194] {{jobs.py:1341}} INFO - Sending ('dummy_for_testing', 'print_date', datetime.datetime(2019, 4, 12, 14, 42, 33, 552786, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1) to executor with priority 3 and queue default
[2019-04-12 14:42:35,194] {{base_executor.py:56}} INFO - Adding to queue: airflow run dummy_for_testing print_date 2019-04-12T14:42:33.552786+00:00 --local -sd /usr/local/airflow/dags/dag_test.py
[2019-04-12 14:42:35,199] {{celery_executor.py:83}} INFO - [celery] queuing ('dummy_for_testing', 'print_date', datetime.datetime(2019, 4, 12, 14, 42, 33, 552786, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1) through celery, queue=default
[2019-04-12 14:42:37,152] {{jobs.py:1559}} INFO - Harvesting DAG parsing results
[2019-04-12 14:42:39,154] {{jobs.py:1559}} INFO - Harvesting DAG parsing results
[2019-04-12 14:42:40,610] {{sqlalchemy.py:79}} WARNING - DB connection invalidated. Reconnecting...
[2019-04-12 14:42:41,156] {{jobs.py:1559}} INFO - Harvesting DAG parsing results
[2019-04-12 14:42:41,179] {{jobs.py:1106}} INFO - 1 tasks up for execution:
    <TaskInstance: dummy_for_testing.print_host 2019-04-12 14:42:33.552786+00:00 [scheduled]>
[2019-04-12 14:42:41,182] {{jobs.py:1141}} INFO - Figuring out tasks to run in Pool(name=None) with 128 open slots and 1 task instances in queue
[2019-04-12 14:42:41,184] {{jobs.py:1177}} INFO - DAG dummy_for_testing has 12/16 running and queued tasks
[2019-04-12 14:42:41,184] {{jobs.py:1215}} INFO - Setting the follow tasks to queued state:
    <TaskInstance: dummy_for_testing.print_host 2019-04-12 14:42:33.552786+00:00 [scheduled]>
[2019-04-12 14:42:41,193] {{jobs.py:1299}} INFO - Setting the following 1 tasks to queued state:
    <TaskInstance: dummy_for_testing.print_host 2019-04-12 14:42:33.552786+00:00 [queued]>
[2019-04-12 14:42:41,193] {{jobs.py:1341}} INFO - Sending ('dummy_for_testing', 'print_host', datetime.datetime(2019, 4, 12, 14, 42, 33, 552786, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1) to executor with priority 2 and queue druid_queue
[2019-04-12 14:42:41,194] {{base_executor.py:56}} INFO - Adding to queue: airflow run dummy_for_testing print_host 2019-04-12T14:42:33.552786+00:00 --local -sd /usr/local/airflow/dags/dag_test.py
[2019-04-12 14:42:41,198] {{celery_executor.py:83}} INFO - [celery] queuing ('dummy_for_testing', 'print_host', datetime.datetime(2019, 4, 12, 14, 42, 33, 552786, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1) through celery, queue=druid_queue

Server A configuration: (screenshot)
Server B configuration: (screenshot)
Server Celery broker: (screenshot)

1 Answer:

Answer 0 (score: 1):

It looks like you are running the same docker-compose stack on two different servers, the only difference being that Server B's worker is started with the command worker -q druid_queue. Typically, you want to run Airflow with only one scheduler, one database/result backend, and one message broker (redis) shared across all servers, rather than running every service on every server.
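
As a rough illustration of that layout, a minimal sketch of a Server B compose file that runs only a Celery worker and points it at Server A's broker and metadata database might look like the following. It assumes the puckel/docker-airflow entrypoint builds the broker URL and result backend from the REDIS_* and POSTGRES_* variables (worth checking against the image's entrypoint.sh), and it reuses the addresses and paths from the question:

version: '2.1'
services:
    worker:
        image: puckel/docker-airflow:1.10.2
        restart: always
        volumes:
            - /data/qa/druid-data/airflow/dags:/usr/local/airflow/dags
        environment:
            - FERNET_KEY=<>             # must match the key used on Server A
            - EXECUTOR=Celery
            - REDIS_HOST=10.0.11.4      # Server A's published redis port
            - REDIS_PORT=6999
            - POSTGRES_HOST=10.0.11.4   # Server A's published postgres port
            - POSTGRES_PORT=5434
            - POSTGRES_USER=airflow
            - POSTGRES_PASSWORD=airflow
            - POSTGRES_DB=airflow
        command: worker -q druid_queue  # listen only on druid_queue

With this layout, the scheduler, webserver, redis, and postgres run only on Server A, and every task queued to druid_queue is picked up by this worker on Server B.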

The compose file on your first server exposes redis at 10.0.1.4:6999, and below you note that the broker_url on the second server is redis://10.0.11.4:6999/1. If the networking is set up correctly, you may just need to update the broker_url to redis://10.0.1.4:6999/1 (note: 11 -> 1).
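
If you prefer to keep the existing stacks and only change the broker address, one option is Airflow's standard environment-variable override for the [celery] broker_url setting in the Server B worker service. This is only a sketch using the address suggested above; it assumes the image's entrypoint does not overwrite a pre-set value (again, verify against entrypoint.sh):

    worker:
        environment:
            # AIRFLOW__<SECTION>__<KEY> overrides the corresponding airflow.cfg value in Airflow 1.10.x
            - AIRFLOW__CELERY__BROKER_URL=redis://10.0.1.4:6999/1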