Replacing a thread pool with ForkJoinPool in Java 8

Posted: 2018-10-04 12:06:35

Tags: java forkjoinpool

I have been looking into the Java fork/join pool and learned that a fork-join pool can be a more efficient way to run tasks concurrently because it uses a work-stealing algorithm. Currently, we use a thread-pool-backed TaskExecutor provided by Spring Boot. Now I would like to use a fork-join pool in place of the TaskExecutor. The problem is that tasks in the fork/join framework are expected to be recursive.

// Standard imports shown; app-specific classes (GenericConsumer, AvroSyslogMessage,
// PropertyConfig, STREAMSERDE, DedupeFilterProcessThread) are part of our project.
import java.util.Calendar;
import java.util.Date;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.core.task.TaskExecutor;

public class DedupeConsumerService {

    final Logger logger = LoggerFactory.getLogger(DedupeConsumerService.class);

    @Autowired
    private TaskExecutor taskExecutor;

    @Autowired
    private PropertyConfig config;

    @Autowired
    private ApplicationContext applicationContext;

    public void consume() {

        String topic = config.getDedupServiceConsumerTopic();
        String consGroup = config.getDedupServiceConsGroup();

        Properties props = new Properties();
        props.put("enable.auto.commit", "false");
        props.put("session.timeout.ms", "20000");
        props.put("max.poll.records", "10000");

        KafkaConsumer<String, AvroSyslogMessage> consumer = new GenericConsumer<String, AvroSyslogMessage>().initialize(topic, consGroup, STREAMSERDE.STRINGDESER, STREAMSERDE.AVRODESER, props);

        logger.info("Dedupe Kafka Consumer Initialized......");

        try {
            while (true) {
                ConsumerRecords<String, AvroSyslogMessage> records = consumer.poll(100);
                if (records.count() > 0) {
                    logger.debug(">>records count = " + records.count());
                    Date startTime = Calendar.getInstance()
                        .getTime();
                    for (ConsumerRecord<String, AvroSyslogMessage> record : records) {
                        logger.debug("record.offset() = " + record.offset() + " : record.key() = " + record.key() + " : record.partition() = " + record.partition() + " : record.topic() = " + record.topic() + " : record.timestamp() = " + record.timestamp());

                        AvroSyslogMessage avroMessage = record.value();
                        logger.debug("avro Message = " + avroMessage);

                        DedupeFilterProcessThread dedupeProcessThread = applicationContext.getBean(DedupeFilterProcessThread.class);
                        dedupeProcessThread.setMessage(avroMessage);
                        taskExecutor.execute(dedupeProcessThread);
                        consumer.commitSync();
                    }

                    Date endTime = Calendar.getInstance()
                        .getTime();
                    long durationInMilliSec = endTime.getTime() - startTime.getTime();
                    logger.info("Number of Records:: " + records.count() + " Time took to process poll :: " + durationInMilliSec);

                }
            }

        } catch (Throwable e) {
            logger.error("Error occurred while processing message", e);
        } finally {
            logger.debug("dedupe kafka consume is closing");
            consumer.close();
        }

    }

}
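For context on the recursion concern: ForkJoinPool implements ExecutorService, so ordinary Runnables can be submitted with execute()/submit() without extending RecursiveTask or RecursiveAction. A minimal standalone sketch (the parallelism of 4 and the task count are arbitrary, for illustration only):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ForkJoinSubmitDemo {
    public static void main(String[] args) throws InterruptedException {
        // ForkJoinPool is an ExecutorService: execute() accepts a plain
        // Runnable; no RecursiveTask/RecursiveAction subclass is required.
        ForkJoinPool pool = new ForkJoinPool(4); // parallelism of 4, arbitrary

        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            pool.execute(processed::incrementAndGet); // stands in for the real task
        }

        // Wait for all submitted tasks to finish before reading the counter.
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("processed = " + processed.get()); // prints "processed = 10"
    }
}
```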

有人可以帮助我解决这个问题吗?
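If the goal is to keep injecting a TaskExecutor so consume() needs no changes, one option is Spring's ConcurrentTaskExecutor, which adapts any java.util.concurrent.Executor to the TaskExecutor interface. A configuration sketch (the bean name and the choice of commonPool() are assumptions for illustration):

```java
import java.util.concurrent.ForkJoinPool;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.concurrent.ConcurrentTaskExecutor;

@Configuration
public class ExecutorConfig {
    // Expose a work-stealing ForkJoinPool behind Spring's TaskExecutor
    // abstraction; existing taskExecutor.execute(...) call sites are unchanged.
    @Bean
    public TaskExecutor taskExecutor() {
        return new ConcurrentTaskExecutor(ForkJoinPool.commonPool());
    }
}
```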

0 Answers:

No answers yet.