Spark - submit to YARN - multiple jobs

Time: 2016-09-19 20:17:21

Tags: hadoop apache-spark yarn

I want to submit multiple spark-submit jobs to YARN. When I run

spark-submit --class myclass --master yarn --deploy-mode cluster blah blah

as it is now, I have to wait for the job to complete before I can submit more jobs; until then the terminal just shows the client heartbeat (the repeated application report lines).

How can I tell YARN to pick up another job from the same terminal? Ultimately, I want to be able to run this from a script that can send hundreds of jobs in one go.

Thank you.

2 answers:

Answer 0 (score: 3)

Each user has a fixed capacity, specified in the YARN configuration. If you are allocated N executors (usually you are allocated some fixed number of vcores) and you want to run 100 jobs concurrently, you need to specify an allocation for each job:

spark-submit --num-executors N/100 --executor-cores 5

Otherwise, the jobs will only be accepted in turn: each submission beyond the available capacity waits in the ACCEPTED state until a running job frees up resources.
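For a concrete sense of the arithmetic (the figure of 200 executors is an assumed queue capacity, not something from the question): with 200 executors to share across 100 concurrent jobs, each submission gets 2 executors:

spark-submit --num-executors 2 --executor-cores 5 --master yarn --deploy-mode cluster blah blah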

You can launch multiple jobs in parallel by putting & at the end of each invocation:

for i in $(seq 20); do spark-submit --master yarn --num-executors N/100 --executor-cores 5 blah blah & done
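Scaled up to the hundreds of jobs the question asks about, a minimal script sketch could look like the following (NUM_JOBS, the class name, and myapp.jar are hypothetical placeholders):

#!/usr/bin/env bash
# Submit NUM_JOBS applications in the background, then wait for all of them.
NUM_JOBS=100
for i in $(seq "$NUM_JOBS"); do
  spark-submit --class myclass --master yarn --deploy-mode cluster \
    --num-executors 2 --executor-cores 5 \
    myapp.jar "$i" &   # the trailing & returns control to the script immediately
done
wait  # block until every background spark-submit exits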

Answer 1 (score: 1)

  • Check dynamic allocation in Spark
  • Check which scheduler YARN is using; if it is FIFO, change it to FAIR (a sketch of both settings follows this list)
  • How do you plan to allocate resources for the N jobs on YARN?
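A rough sketch of what the first two bullets look like in practice; the property names below are the standard Spark and Hadoop ones, but the values and placement are assumptions to adapt to your cluster:

# Dynamic allocation lets each job grow and shrink its executor count on demand.
# It requires the external shuffle service on the YARN node managers.
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  blah blah

# Switching YARN from FIFO to the fair scheduler is done in yarn-site.xml
# (restart the ResourceManager afterwards):
#   yarn.resourcemanager.scheduler.class =
#     org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler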