Spring Batch local partitioning restart problem

Date: 2014-08-11 12:34:12

Tags: java spring-batch

I have a problem with restarting a locally partitioned batch job. I throw a RuntimeException on the 101st processed item. The job fails as expected, but something goes wrong on restart: the job continues from item 150 instead of from item 100, where it should resume given the commit-interval of 50.

Here is the XML configuration:

<bean id="taskExecutor" class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor" >
    <property name="workManagerName" value="springWorkManagers" />
</bean>

<bean id="transactionManager" class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>

<batch:job id="LocalPartitioningJob">
    <batch:step id="masterStep">
        <batch:partition step="slaveStep" partitioner="splitPartitioner">
            <batch:handler grid-size="5" task-executor="taskExecutor"  />
        </batch:partition>
    </batch:step>
</batch:job>

<batch:step id="slaveStep">
    <batch:tasklet transaction-manager="transactionManager">
        <batch:chunk reader="partitionReader" processor="compositeItemProcessor" writer="sqlWriter" commit-interval="50" />
        <batch:transaction-attributes isolation="SERIALIZABLE" propagation="REQUIRED" timeout="600" />
        <batch:listeners>
            <batch:listener ref="Processor1" /> 
            <batch:listener ref="Processor2" /> 
            <batch:listener ref="Processor3" />
        </batch:listeners>
    </batch:tasklet>
</batch:step>

<bean id="jobRepository" class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
    <property name="transactionManager" ref="transactionManager" />
    <property name="tablePrefix" value="${sb.db.tableprefix}" />
    <property name="dataSource" ref="ds" />
    <property name="maxVarCharLength" value="1000"/>
</bean>

<bean id="transactionManager" class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>

<jee:jndi-lookup id="ds" jndi-name="${sb.db.jndi}" cache="true" expected-type="javax.sql.DataSource" />

splitPartitioner implements Partitioner; it splits the initial data and stores it in the list of executionContexts. The processors call a remote EJB to fetch additional data, and sqlWriter is just an org.spring...JdbcBatchItemWriter. The PartitionReader code is as follows (a rough sketch of what the partitioner itself might look like is shown after the reader):

public class PartitionReader implements ItemStreamReader<TransferObjectTO> {
    private List<TransferObjectTO> partitionItems;

    public PartitionReader() {
    }

    public synchronized TransferObjectTO read() {
        // Destructive read: items are removed from the list as they are consumed
        if(partitionItems.size() > 0) {
            return partitionItems.remove(0);
        } else {
            return null;
        }
    }

    @SuppressWarnings("unchecked")
    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        partitionItems = (List<TransferObjectTO>) executionContext.get("partitionItems");
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        // Stores the shrinking remaining-items list back into the execution context
        executionContext.put("partitionItems", partitionItems);
    }

    @Override
    public void close() throws ItemStreamException {
    }
}
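
For context, here is a minimal sketch of what a partitioner like splitPartitioner could look like. This is illustrative only: the class name, the setter, and the round-robin split strategy are assumptions, not the actual implementation. The point is that each partition's ExecutionContext carries a "partitionItems" list, which is the key the reader above reads back in open().

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

// Illustrative sketch only: the real splitPartitioner is not shown in this post.
public class SplitPartitioner implements Partitioner {

    // Assumed to be populated elsewhere (e.g. injected or loaded before the job runs)
    private List<TransferObjectTO> initialData;

    public void setInitialData(List<TransferObjectTO> initialData) {
        this.initialData = initialData;
    }

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        // One bucket per slave step
        List<List<TransferObjectTO>> buckets = new ArrayList<List<TransferObjectTO>>();
        for (int i = 0; i < gridSize; i++) {
            buckets.add(new ArrayList<TransferObjectTO>());
        }
        // Round-robin distribution of the initial data
        for (int i = 0; i < initialData.size(); i++) {
            buckets.get(i % gridSize).add(initialData.get(i));
        }
        Map<String, ExecutionContext> contexts = new HashMap<String, ExecutionContext>();
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext context = new ExecutionContext();
            // Same key that PartitionReader.open() reads back
            context.put("partitionItems", buckets.get(i));
            contexts.put("partition" + i, context);
        }
        return contexts;
    }
}

Note that each partition's ExecutionContext is serialized into the job repository tables, which is why the size of the partitionItems list matters for database disk usage (see the answer below).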

1 Answer:

Answer 0 (score: 0):

It turns out I had a few misunderstandings about Spring Batch, plus a bug in my own code. The first misunderstanding was that I assumed the readCount would be rolled back on a RuntimeException. Now I see this is not the case: Spring Batch increments this value, and when the step fails the value is still committed.
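
One way to verify this is to inspect the read counts stored in the job repository for the failed execution. A minimal sketch, assuming a JobExplorer bean and the failed execution's id are available (neither is shown in the configuration above):

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;

// Sketch only: jobExplorer and jobExecutionId are assumed to exist in the calling code.
public void printReadCounts(JobExplorer jobExplorer, Long jobExecutionId) {
    JobExecution jobExecution = jobExplorer.getJobExecution(jobExecutionId);
    for (StepExecution stepExecution : jobExecution.getStepExecutions()) {
        // readCount/commitCount are persisted when step state is saved,
        // so after a failed run they show how far each partition had read.
        System.out.println(stepExecution.getStepName()
                + " status=" + stepExecution.getStatus()
                + " readCount=" + stepExecution.getReadCount()
                + " commitCount=" + stepExecution.getCommitCount());
    }
}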

Related to the above, I thought that the update method on the ItemStreamReader would always be called, and that only the executionContext update written to the database would be committed or rolled back. But it seems that update is only called when no error has occurred, and the executionContext update is always committed.
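
To see when these callbacks actually fire, the reader can be wrapped in a small logging delegate. This is just a diagnostic sketch (the wrapper class is not part of my job); it logs each ItemStream callback so the timing of open/update/close relative to chunk commits and failures can be observed:

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemStreamReader;

// Hypothetical diagnostic wrapper: delegates all work and only logs the callbacks.
public class LoggingItemStreamReader<T> implements ItemStreamReader<T> {

    private final ItemStreamReader<T> delegate;

    public LoggingItemStreamReader(ItemStreamReader<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T read() throws Exception {
        return delegate.read();
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        System.out.println("open() called, context entries: " + executionContext.size());
        delegate.open(executionContext);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        System.out.println("update() called at chunk boundary");
        delegate.update(executionContext);
    }

    @Override
    public void close() throws ItemStreamException {
        System.out.println("close() called");
        delegate.close();
    }
}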

The third misunderstanding was that I thought the partitioned "master step" would not be re-executed on restart, only the failed slave steps. In fact, the "master step" is re-executed if one of its slave steps fails. So I guess the master and slave steps are really handled as a single step.

Then there was the buggy code in my PartitionReader, which was supposed to save disk space on the database server. Perhaps partitionItems should not have been modified during read()? (This relates to the statements above.) Anyway, here is the code of the working PartitionReader:

public class PartitionReader implements ItemStreamReader<TransferObjectTO> {
    private List<TransferObjectTO> partitionItems;
    private int index;

    public PartitionReader() {
    }

    public synchronized TransferObjectTO read() {
        // Non-destructive read: the list stays intact, only the index advances
        if(partitionItems.size() > index) {
            return partitionItems.get(index++);
        } else {
            return null;
        }
    }

    @SuppressWarnings("unchecked")
    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        partitionItems = (List<TransferObjectTO>) executionContext.get("partitionItems");
        // On a restart, resume from the last committed index; defaults to 0 on a fresh run
        index = executionContext.getInt("partitionIndex", 0);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        // Only the index is persisted at each commit, not the item list itself
        executionContext.put("partitionIndex", index);
    }

    @Override
    public void close() throws ItemStreamException {
    }
}