Optimizing a large database

Date: 2016-02-04 03:42:16

Tags: postgresql query-optimization

I am using a SQL query to pull one month of data at a time from a huge PostgreSQL replica database that stores location data. I have currently split the month into 3 parts (10 days each), and each part takes about 21 hours to complete. Is there any way to optimize the query and process the data faster?

select
  asset_dcs.registration_number,
  date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') AS bussines_date,
  min(seq_num) as min_seq_num,
  max(seq_num) as max_seq_num,
  count (*) row_count
from dcs_posn
LEFT OUTER JOIN asset_dcs on (asset_id = asset_dcs.id)
where 1=1 
and date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') > '2015-12-31'
and date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') <= '2016-01-10'
group by asset_id, bussines_date, asset_dcs.registration_number;

1 Answer:

Answer 0 (score: 1):

The most obvious improvement is in your filter:

where 1=1 
and date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') > '2015-12-31'
and date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') <= '2016-01-10'

should be rewritten as:

WHERE transmitter_received_dttm >= '2015-12-31 20:00:00'::timestamp
  AND transmitter_received_dttm < '2016-01-10 20:00:00'::timestamp
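
Putting that back into the full statement, a sketch of the query with only the filter changed (table and column names, including the bussines_date alias, are taken verbatim from the question):

-- same result set as the original query, assuming transmitter_received_dttm is a plain timestamp;
-- date_trunc() is still used for the reported business date, just not for filtering
select
  asset_dcs.registration_number,
  date_trunc('day', transmitter_received_dttm + '08:00:00' + '-04:00:00') AS bussines_date,
  min(seq_num) as min_seq_num,
  max(seq_num) as max_seq_num,
  count(*) as row_count
from dcs_posn
LEFT OUTER JOIN asset_dcs on (asset_id = asset_dcs.id)
WHERE transmitter_received_dttm >= '2015-12-31 20:00:00'::timestamp
  AND transmitter_received_dttm < '2016-01-10 20:00:00'::timestamp
group by asset_id, bussines_date, asset_dcs.registration_number;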

The way date_trunc() is used here is very wasteful: wrapping the column in an expression forces that expression to be evaluated for every row and keeps the planner from using a plain index on transmitter_received_dttm. The rewritten predicates compare the bare column instead (the two interval literals add up to a net +04:00 shift, which is why the day boundaries fall at 20:00 of the previous day), so they are sargable and an index range scan becomes possible.
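
If dcs_posn does not already have a suitable index (an assumption — the question does not say which indexes exist), a plain b-tree index on the timestamp column would let the rewritten range predicates be answered with an index scan instead of a sequential scan of the whole table. If the replica is a read-only standby, the index has to be created on the primary, from which it will replicate. The index name below is only illustrative:

-- hypothetical index on the filter column; CONCURRENTLY avoids blocking writes during the build
CREATE INDEX CONCURRENTLY idx_dcs_posn_received_dttm
  ON dcs_posn (transmitter_received_dttm);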

Otherwise, you should add the EXPLAIN ... output to your question so that we can look at the query plan and at other performance-related information, such as any existing indexes.
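
For example, plain EXPLAIN shows the estimated plan without executing anything, which matters for a statement that runs for 21 hours; the count(*) query below is only a stand-in for the full query, used here to check whether the range predicate can be satisfied by an index:

-- estimated plan only; the statement is not executed
EXPLAIN
select count(*)
from dcs_posn
where transmitter_received_dttm >= '2015-12-31 20:00:00'::timestamp
  and transmitter_received_dttm <  '2016-01-10 20:00:00'::timestamp;
-- EXPLAIN (ANALYZE, BUFFERS) also runs the statement and reports actual row counts and timings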
