Apache Spark: Spring Transaction not committing

Date: 2015-06-19 06:48:09

Tags: java spring transactions apache-spark jdbctemplate

I am using Spark together with Spring's JDBC Template class to commit data to the database. The DB is MS SQL Server. From Apache Spark, `mapPartitions` sends data to the DB in batches via Spring's `batchUpdate`, inside a transaction. The transaction completes, but the data is not written to the database.

Does anyone know what is wrong here?

---Linux machine console error---
 Started by user 
[EnvInject] - Loading node environment variables.
Building in workspace /opt/jenkins/workspace/TestApp
Deleting project workspace... done
[DIMENSIONS] Running checkout on master...
[DIMENSIONS] Running build in '/opt/jenkins/workspace/TestApp'...
[DIMENSIONS] Checking out project "Test:Test"...
[DIMENSIONS] Checking out directory 'TestApp'...
FATAL: Unable to run checkout callout - Dimension command failed -
   (UPDATE /BRIEF /DIR="TestApp"/WORKSET="Test:Test" /USER_DIR="/opt/jenkins/workspace/TestApp" )     (Using Current Project
'Test:Test'.
Using '/opt/jenkins/workspace/TestApp/' as the Project work area.
COR0006326E Error: Project 'Test:Test' does not contain the specified directory 'TestApp'
Scanning repository: 0.00 sec
Getting Project: 0.00 sec
)Finished: FAILURE


-------- windows machine success----
Started by user anonymous
Building in workspace C:\Users\order\.jenkins\workspace\TestApp
[DIMENSIONS] Running checkout on master...
[DIMENSIONS] Running build in 'C:\Users\order\.jenkins\workspace\TestApp'...
[DIMENSIONS] Removing 'file:/C:/Users/order/.jenkins/workspace/TestApp/'...
[DIMENSIONS] Checking out project "Test:Test"...
[DIMENSIONS] Checking out directory 'TestApp'...
[DIMENSIONS] (Note: Dimensions command output was - 
[DIMENSIONS] SUCCESS: Using Current Project 'Test:Test'.
[DIMENSIONS] Using 'C:\Users\order\.jenkins\workspace\TestApp\' as the   Project work area.
[DIMENSIONS] Scanning repository: 0.24 sec
[DIMENSIONS] Scanning local work area: 0.28 sec
[DIMENSIONS]       Updated 'C:\Users\order\.jenkins\workspace\TestApp\TestApp\.project' using  
Item 'Test:PROJECT--1329969986.A-DAT;1'

Here `status.isCompleted()` returns true.

This code runs fine when I run Apache Spark in local[*] mode, but when I run the same code in Spark's cluster/distributed mode with 3 workers, it does not write any data to the DB.
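For reference, the flow described above can be sketched roughly as follows. This is non-runnable pseudocode, not the poster's actual code: the SQL, the `createDataSource()` helper, and all variable names are assumptions. It only illustrates the shape of a `mapPartitions` + `TransactionTemplate` + `batchUpdate` pipeline, and the common pitfall that in cluster mode the partition lambda executes on a worker JVM, so any `DataSource`/transaction state must be created inside the partition rather than captured from the driver:

```java
// Sketch only (assumes Spark's Java API and Spring JDBC on the classpath).
rdd.mapPartitions(rows -> {
    // NOTE: in cluster mode this lambda runs on a *worker* JVM, not on the
    // driver. A JdbcTemplate or transaction manager built on the driver is
    // not meaningfully shared here; construct the DataSource per partition.
    DataSource ds = createDataSource();              // hypothetical helper
    JdbcTemplate jdbc = new JdbcTemplate(ds);
    TransactionTemplate tx = new TransactionTemplate(
            new DataSourceTransactionManager(ds));

    TransactionStatus result = tx.execute(status -> {
        // Illustrative SQL and batch arguments, not from the original post.
        jdbc.batchUpdate("INSERT INTO target_table (col1) VALUES (?)",
                         toBatchArgs(rows));
        return status;
    });
    // After execute() returns, status.isCompleted() is true regardless of
    // whether the commit reached the intended database.
    return Collections.emptyIterator();
});
```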

0 Answers:

No answers yet.