Speeding up a complex Postgres query in a Rails app

Date: 2017-11-27 10:41:00

Tags: sql ruby-on-rails postgresql performance

My application has a view that visualizes a large amount of data, and the data for it is generated on the backend with this query:

DataPoint Load (20394.8ms)  
SELECT communities.id as com, 
       consumers.name as con, 
       array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, 
       array_agg(consumption ORDER BY data_points.timestamp ASC) as cons 
FROM "data_points" 
     INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" 
     INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" 
     INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" 
     INNER JOIN "clusterings" ON "clusterings"."id" = "communities"."clustering_id" 
WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) 
   AND "data_points"."interval_id" = $3 
   AND "clusterings"."id" = 1 
GROUP BY communities.id, consumers.id  
[["timestamp", "2015-11-20 09:23:00"], ["timestamp", "2015-11-27 09:23:00"], ["interval_id", 2]]

The query takes about 20 seconds to execute, which seems excessive.

The code that generates the query is:

res = {}
DataPoint.joins(consumer: {communities: :clustering} )
         .where('clusterings.id': self,
               timestamp: chart_cookies[:start_date] .. chart_cookies[:end_date],
               interval_id: chart_cookies[:interval_id])
         .group('communities.id')
         .group('consumers.id')
         .select('communities.id as com, consumers.name as con',
                'array_agg(timestamp ORDER BY data_points.timestamp asc) as tims',
                'array_agg(consumption ORDER BY data_points.timestamp ASC) as cons')
         .each do |d|
      res[d.com] ||= {}
      res[d.com][d.con] = d.tims.zip(d.cons)
      res[d.com]["aggregate"] ||= d.tims.map{|t| [t,0]}
      res[d.com]["aggregate"]  = res[d.com]["aggregate"].zip(d.cons).map{|(a,b),d| [a,(b+d)]}
end
res

The relevant database models are:

  create_table "data_points", force: :cascade do |t|
    t.bigint "consumer_id"
    t.bigint "interval_id"
    t.datetime "timestamp"
    t.float "consumption"
    t.float "flexibility"
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
    t.index ["consumer_id"], name: "index_data_points_on_consumer_id"
    t.index ["interval_id"], name: "index_data_points_on_interval_id"
    t.index ["timestamp", "consumer_id", "interval_id"], name: "index_data_points_on_timestamp_and_consumer_id_and_interval_id", unique: true
    t.index ["timestamp"], name: "index_data_points_on_timestamp"
  end

  create_table "consumers", force: :cascade do |t|
    t.string "name"
    t.string "location"
    t.string "edms_id"
    t.bigint "building_type_id"
    t.bigint "connection_type_id"
    t.float "location_x"
    t.float "location_y"
    t.string "feeder_id"
    t.bigint "consumer_category_id"
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
    t.index ["building_type_id"], name: "index_consumers_on_building_type_id"
    t.index ["connection_type_id"], name: "index_consumers_on_connection_type_id"
    t.index ["consumer_category_id"], name: "index_consumers_on_consumer_category_id"
  end

  create_table "communities_consumers", id: false, force: :cascade do |t|
    t.bigint "consumer_id", null: false
    t.bigint "community_id", null: false
    t.index ["community_id", "consumer_id"], name: "index_communities_consumers_on_community_id_and_consumer_id"
    t.index ["consumer_id", "community_id"], name: "index_communities_consumers_on_consumer_id_and_community_id"
  end

  create_table "communities", force: :cascade do |t|
    t.string "name"
    t.text "description"
    t.bigint "clustering_id"
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
    t.index ["clustering_id"], name: "index_communities_on_clustering_id"
  end

  create_table "clusterings", force: :cascade do |t|
    t.string "name"
    t.text "description"
    t.datetime "created_at", null: false
    t.datetime "updated_at", null: false
  end

How can I make the query execute faster? Is it possible to refactor the query to simplify it, or to add some extra indexes to the database schema so that it takes less time?

Interestingly, a slightly simplified version of the query that I use in another view runs much faster, taking only 1161.4 ms for the first request and 41.6 ms for subsequent requests:

DataPoint Load (1161.4ms)  
SELECT consumers.name as con, 
       array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, 
       array_agg(consumption ORDER BY data_points.timestamp ASC) as cons 
FROM "data_points" 
    INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" 
    INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" 
    INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" 
WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) 
   AND "data_points"."interval_id" = $3 
   AND "communities"."id" = 100 GROUP BY communities.id, consumers.name  
[["timestamp", "2015-11-20 09:23:00"], ["timestamp", "2015-11-27 09:23:00"], ["interval_id", 2]]

Running the query with EXPLAIN (ANALYZE, BUFFERS) in the dbconsole, I get the following output:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=12.31..7440.69 rows=246 width=57) (actual time=44.139..20474.015 rows=296 loops=1)
   Group Key: communities.id, consumers.id
   Buffers: shared hit=159692 read=6148105 written=209
   ->  Nested Loop  (cost=12.31..7434.54 rows=246 width=57) (actual time=20.944..20436.806 rows=49728 loops=1)
         Buffers: shared hit=159685 read=6148105 written=209
         ->  Nested Loop  (cost=11.88..49.30 rows=1 width=49) (actual time=0.102..6.374 rows=296 loops=1)
               Buffers: shared hit=988 read=208
               ->  Nested Loop  (cost=11.73..41.12 rows=1 width=57) (actual time=0.084..4.443 rows=296 loops=1)
                     Buffers: shared hit=396 read=208
                     ->  Merge Join  (cost=11.58..40.78 rows=1 width=24) (actual time=0.075..1.365 rows=296 loops=1)
                           Merge Cond: (communities_consumers.community_id = communities.id)
                           Buffers: shared hit=5 read=7
                           ->  Index Only Scan using index_communities_consumers_on_community_id_and_consumer_id on communities_consumers  (cost=0.27..28.71 rows=296 width=16) (actual time=0.039..0.446 rows=296 loops=1)
                                 Heap Fetches: 4
                                 Buffers: shared hit=1 read=6
                           ->  Sort  (cost=11.31..11.31 rows=3 width=16) (actual time=0.034..0.213 rows=247 loops=1)
                                 Sort Key: communities.id
                                 Sort Method: quicksort  Memory: 25kB
                                 Buffers: shared hit=4 read=1
                                 ->  Bitmap Heap Scan on communities  (cost=4.17..11.28 rows=3 width=16) (actual time=0.026..0.027 rows=6 loops=1)
                                       Recheck Cond: (clustering_id = 1)
                                       Heap Blocks: exact=1
                                       Buffers: shared hit=4 read=1
                                       ->  Bitmap Index Scan on index_communities_on_clustering_id  (cost=0.00..4.17 rows=3 width=0) (actual time=0.020..0.020 rows=8 loops=1)
                                             Index Cond: (clustering_id = 1)
                                             Buffers: shared hit=3 read=1
                     ->  Index Scan using consumers_pkey on consumers  (cost=0.15..0.33 rows=1 width=33) (actual time=0.007..0.008 rows=1 loops=296)
                           Index Cond: (id = communities_consumers.consumer_id)
                           Buffers: shared hit=391 read=201
               ->  Index Only Scan using clusterings_pkey on clusterings  (cost=0.15..8.17 rows=1 width=8) (actual time=0.004..0.005 rows=1 loops=296)
                     Index Cond: (id = 1)
                     Heap Fetches: 296
                     Buffers: shared hit=592
         ->  Index Scan using index_data_points_on_consumer_id on data_points  (cost=0.44..7383.44 rows=180 width=24) (actual time=56.128..68.995 rows=168 loops=296)
               Index Cond: (consumer_id = consumers.id)
               Filter: (("timestamp" >= '2015-11-20 09:23:00'::timestamp without time zone) AND ("timestamp" <= '2015-11-27 09:23:00'::timestamp without time zone) AND (interval_id = 2))
               Rows Removed by Filter: 76610
               Buffers: shared hit=158697 read=6147897 written=209
 Planning time: 1.811 ms
 Execution time: 20474.330 ms
(40 rows)

The bullet gem reports the following warnings:

USE eager loading detected
  Community => [:communities_consumers]
  Add to your finder: :includes => [:communities_consumers]

USE eager loading detected
  Community => [:consumers]
  Add to your finder: :includes => [:consumers]

After removing the join against the clusterings table, the new query plan is the following:

EXPLAIN for: SELECT communities.id as com, consumers.name as con, array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, array_agg(consumption ORDER BY data_points.timestamp ASC) as cons FROM "data_points" INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) AND "data_points"."interval_id" = $3 AND "communities"."clustering_id" = 1 GROUP BY communities.id, consumers.id [["timestamp", "2015-11-29 20:52:30.926247"], ["timestamp", "2015-12-06 20:52:30.926468"], ["interval_id", 2]]
                                                                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=10839.79..10846.42 rows=241 width=57)
   ->  Sort  (cost=10839.79..10840.39 rows=241 width=57)
         Sort Key: communities.id, consumers.id
         ->  Nested Loop  (cost=7643.11..10830.26 rows=241 width=57)
               ->  Nested Loop  (cost=11.47..22.79 rows=1 width=49)
                     ->  Hash Join  (cost=11.32..17.40 rows=1 width=16)
                           Hash Cond: (communities_consumers.community_id = communities.id)
                           ->  Seq Scan on communities_consumers  (cost=0.00..4.96 rows=296 width=16)
                           ->  Hash  (cost=11.28..11.28 rows=3 width=8)
                                 ->  Bitmap Heap Scan on communities  (cost=4.17..11.28 rows=3 width=8)
                                       Recheck Cond: (clustering_id = 1)
                                       ->  Bitmap Index Scan on index_communities_on_clustering_id  (cost=0.00..4.17 rows=3 width=0)
                                             Index Cond: (clustering_id = 1)
                     ->  Index Scan using consumers_pkey on consumers  (cost=0.15..5.38 rows=1 width=33)
                           Index Cond: (id = communities_consumers.consumer_id)
               ->  Bitmap Heap Scan on data_points  (cost=7631.64..10805.72 rows=174 width=24)
                     Recheck Cond: ((consumer_id = consumers.id) AND ("timestamp" >= '2015-11-29 20:52:30.926247'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926468'::timestamp without time zone))
                     Filter: (interval_id = 2::bigint)
                     ->  BitmapAnd  (cost=7631.64..7631.64 rows=861 width=0)
                           ->  Bitmap Index Scan on index_data_points_on_consumer_id  (cost=0.00..1589.92 rows=76778 width=0)
                                 Index Cond: (consumer_id = consumers.id)
                           ->  Bitmap Index Scan on index_data_points_on_timestamp  (cost=0.00..6028.58 rows=254814 width=0)
                                 Index Cond: (("timestamp" >= '2015-11-29 20:52:30.926247'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926468'::timestamp without time zone))
(23 rows)

As requested in the comments, these are the query plans for the simplified query, with and without the restriction on communities.id:

 DataPoint Load (1563.3ms)  SELECT consumers.name as con, array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, array_agg(consumption ORDER BY data_points.timestamp ASC) as cons FROM "data_points" INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) AND "data_points"."interval_id" = $3 GROUP BY communities.id, consumers.name  [["timestamp", "2015-11-29 20:52:30.926000"], ["timestamp", "2015-12-06 20:52:30.926000"], ["interval_id", 2]]
EXPLAIN for: SELECT consumers.name as con, array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, array_agg(consumption ORDER BY data_points.timestamp ASC) as cons FROM "data_points" INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) AND "data_points"."interval_id" = $3 GROUP BY communities.id, consumers.name [["timestamp", "2015-11-29 20:52:30.926000"], ["timestamp", "2015-12-06 20:52:30.926000"], ["interval_id", 2]]
                                                                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=140992.34..142405.51 rows=51388 width=49)
   ->  Sort  (cost=140992.34..141120.81 rows=51388 width=49)
         Sort Key: communities.id, consumers.name
         ->  Hash Join  (cost=10135.44..135214.45 rows=51388 width=49)
               Hash Cond: (data_points.consumer_id = consumers.id)
               ->  Bitmap Heap Scan on data_points  (cost=10082.58..134455.00 rows=51388 width=24)
                     Recheck Cond: (("timestamp" >= '2015-11-29 20:52:30.926'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926'::timestamp without time zone) AND (interval_id = 2::bigint))
                     ->  Bitmap Index Scan on index_data_points_on_timestamp_and_consumer_id_and_interval_id  (cost=0.00..10069.74 rows=51388 width=0)
                           Index Cond: (("timestamp" >= '2015-11-29 20:52:30.926'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926'::timestamp without time zone) AND (interval_id = 2::bigint))
               ->  Hash  (cost=49.16..49.16 rows=296 width=49)
                     ->  Hash Join  (cost=33.06..49.16 rows=296 width=49)
                           Hash Cond: (communities_consumers.community_id = communities.id)
                           ->  Hash Join  (cost=8.66..20.69 rows=296 width=49)
                                 Hash Cond: (consumers.id = communities_consumers.consumer_id)
                                 ->  Seq Scan on consumers  (cost=0.00..7.96 rows=296 width=33)
                                 ->  Hash  (cost=4.96..4.96 rows=296 width=16)
                                       ->  Seq Scan on communities_consumers  (cost=0.00..4.96 rows=296 width=16)
                           ->  Hash  (cost=16.40..16.40 rows=640 width=8)
                                 ->  Seq Scan on communities  (cost=0.00..16.40 rows=640 width=8)
(19 rows)

  DataPoint Load (1479.0ms)  SELECT consumers.name as con, array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, array_agg(consumption ORDER BY data_points.timestamp ASC) as cons FROM "data_points" INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) AND "data_points"."interval_id" = $3 GROUP BY communities.id, consumers.name  [["timestamp", "2015-11-29 20:52:30.926000"], ["timestamp", "2015-12-06 20:52:30.926000"], ["interval_id", 2]]
EXPLAIN for: SELECT consumers.name as con, array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, array_agg(consumption ORDER BY data_points.timestamp ASC) as cons FROM "data_points" INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) AND "data_points"."interval_id" = $3 GROUP BY communities.id, consumers.name [["timestamp", "2015-11-29 20:52:30.926000"], ["timestamp", "2015-12-06 20:52:30.926000"], ["interval_id", 2]]
                                                                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=140992.34..142405.51 rows=51388 width=49)
   ->  Sort  (cost=140992.34..141120.81 rows=51388 width=49)
         Sort Key: communities.id, consumers.name
         ->  Hash Join  (cost=10135.44..135214.45 rows=51388 width=49)
               Hash Cond: (data_points.consumer_id = consumers.id)
               ->  Bitmap Heap Scan on data_points  (cost=10082.58..134455.00 rows=51388 width=24)
                     Recheck Cond: (("timestamp" >= '2015-11-29 20:52:30.926'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926'::timestamp without time zone) AND (interval_id = 2::bigint))
                     ->  Bitmap Index Scan on index_data_points_on_timestamp_and_consumer_id_and_interval_id  (cost=0.00..10069.74 rows=51388 width=0)
                           Index Cond: (("timestamp" >= '2015-11-29 20:52:30.926'::timestamp without time zone) AND ("timestamp" <= '2015-12-06 20:52:30.926'::timestamp without time zone) AND (interval_id = 2::bigint))
               ->  Hash  (cost=49.16..49.16 rows=296 width=49)
                     ->  Hash Join  (cost=33.06..49.16 rows=296 width=49)
                           Hash Cond: (communities_consumers.community_id = communities.id)
                           ->  Hash Join  (cost=8.66..20.69 rows=296 width=49)
                                 Hash Cond: (consumers.id = communities_consumers.consumer_id)
                                 ->  Seq Scan on consumers  (cost=0.00..7.96 rows=296 width=33)
                                 ->  Hash  (cost=4.96..4.96 rows=296 width=16)
                                       ->  Seq Scan on communities_consumers  (cost=0.00..4.96 rows=296 width=16)
                           ->  Hash  (cost=16.40..16.40 rows=640 width=8)
                                 ->  Seq Scan on communities  (cost=0.00..16.40 rows=640 width=8)
(19 rows)

6 Answers:

Answer 0 (score: 3)

Have you tried adding an index on:

"data_points"."timestamp" + "data_points"."consumer_id"

OR

"data_points"."consumer_id" only?
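
For reference, a minimal sketch of such a migration, assuming a standard Rails setup (the migration class name, the Rails version in the superclass, and the index name are all illustrative):

# Illustrative migration; adjust the column order to match how the query filters.
class AddConsumerIdTimestampIndexToDataPoints < ActiveRecord::Migration[5.1]
  def change
    # Equality column (consumer_id) first, then the range column (timestamp).
    add_index :data_points, [:consumer_id, :timestamp],
              name: "index_data_points_on_consumer_id_and_timestamp"
  end
end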

Answer 1 (score: 3)

What version of Postgres are you using? Postgres 10 introduced native table partitioning. If your "data_points" table is very large, this could speed up your query significantly, since you are filtering on a time range:

WHERE (data_points.TIMESTAMP BETWEEN $1 AND $2) 

One strategy you could look into is partitioning on the DATE value of the "timestamp" column, and then modifying the query to include an extra filter so that partition pruning can kick in:

WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) 
   AND (CAST("data_points"."timestamp" AS DATE) BETWEEN CAST($1 AS DATE) AND CAST($2 AS DATE))
   AND "data_points"."interval_id" = $3 
   AND "data_points"."interval_id" = $3 
   AND "communities"."clustering_id"  = 1 

If your "data_points" table is very large and your "timestamp" filter range is small, this should help, because it can quickly exclude whole blocks of rows that never need to be processed.

I haven't done this in Postgres myself, so I'm not sure how practical or useful it is, but it's something to consider :)

https://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE
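
A rough sketch of the declarative range partitioning described in the linked docs, written as raw SQL inside a Rails migration. The table and partition names are illustrative, it requires PostgreSQL 10 or newer, and migrating the existing rows into the partitioned table is not shown:

class CreatePartitionedDataPoints < ActiveRecord::Migration[5.1]
  def up
    execute <<-SQL
      -- Parent table, partitioned by the timestamp range.
      CREATE TABLE data_points_partitioned (
        id          bigserial,
        consumer_id bigint,
        interval_id bigint,
        "timestamp" timestamp,
        consumption float,
        flexibility float,
        created_at  timestamp NOT NULL,
        updated_at  timestamp NOT NULL
      ) PARTITION BY RANGE ("timestamp");

      -- One partition per month; queries on a narrow timestamp range only touch
      -- the matching partitions.
      CREATE TABLE data_points_2015_11 PARTITION OF data_points_partitioned
        FOR VALUES FROM ('2015-11-01') TO ('2015-12-01');
      CREATE TABLE data_points_2015_12 PARTITION OF data_points_partitioned
        FOR VALUES FROM ('2015-12-01') TO ('2016-01-01');
    SQL
  end

  def down
    execute "DROP TABLE data_points_partitioned;"
  end
end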

Answer 2 (score: 2)

Do you have a foreign key on clustering_id? Also, try changing your condition to:

SELECT communities.id as com, 
       consumers.name as con, 
       array_agg(timestamp ORDER BY data_points.timestamp asc) as tims, 
       array_agg(consumption ORDER BY data_points.timestamp ASC) as cons 
FROM "data_points" 
     INNER JOIN "consumers" ON "consumers"."id" = "data_points"."consumer_id" 
     INNER JOIN "communities_consumers" ON "communities_consumers"."consumer_id" = "consumers"."id" 
     INNER JOIN "communities" ON "communities"."id" = "communities_consumers"."community_id" 
WHERE ("data_points"."timestamp" BETWEEN $1 AND $2) 
   AND "data_points"."interval_id" = $3 
   AND "communities"."clustering_id"  = 1 
GROUP BY communities.id, consumers.id 
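
If that foreign key does not exist yet, a hypothetical migration adding it could look like this (the class name is illustrative; the constraint mainly enforces integrity, while the existing index_communities_on_clustering_id index is what the join relies on):

class AddClusteringForeignKeyToCommunities < ActiveRecord::Migration[5.1]
  def change
    # Adds a foreign key on communities.clustering_id referencing clusterings.id.
    add_foreign_key :communities, :clusterings
  end
end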

Answer 3 (score: 2)

  1. You don't need the join to clusterings at all. Try removing it from the query and replacing the condition with communities.clustering_id = 1. This should cut three steps out of your query plan, and it is likely where you save the most, because the current plan runs index scans on it inside three nested loops (see the sketch after this list).

  2. You could also try changing how you aggregate the timestamp values. I assume you don't need them aggregated down to the second?

  3. I would also drop the "index_data_points_on_timestamp" index, since you already have a composite index that covers it; it is close to useless, and removing it will improve your write performance (also sketched below).
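
A sketch of what points 1 and 3 could look like in this app, assuming the same relation and chart_cookies hash as in the question (the migration class name is illustrative):

# Point 1: filter on communities.clustering_id directly instead of joining clusterings.
DataPoint.joins(consumer: :communities)
         .where('communities.clustering_id': self.id,
                timestamp: chart_cookies[:start_date]..chart_cookies[:end_date],
                interval_id: chart_cookies[:interval_id])
         .group('communities.id')
         .group('consumers.id')
         .select('communities.id as com, consumers.name as con',
                 'array_agg(timestamp ORDER BY data_points.timestamp asc) as tims',
                 'array_agg(consumption ORDER BY data_points.timestamp ASC) as cons')

# Point 3: drop the single-column timestamp index, which the composite index already covers.
class RemoveTimestampIndexFromDataPoints < ActiveRecord::Migration[5.1]
  def change
    remove_index :data_points, name: "index_data_points_on_timestamp"
  end
end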

Answer 4 (score: 0)

The index on data_points.timestamp is not being used, possibly because of the ::timestamp cast.

I wonder whether changing the column's data type or creating a functional index would help.
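
For illustration only, a hypothetical expression index on the DATE value of the column could be created with raw SQL in a migration (the index name is an assumption):

class AddTimestampDateIndexToDataPoints < ActiveRecord::Migration[5.1]
  def up
    # Expression index on the date part of "timestamp"; the cast from
    # timestamp without time zone to date is immutable, so this is allowed.
    execute <<-SQL
      CREATE INDEX index_data_points_on_timestamp_date
        ON data_points (CAST("timestamp" AS DATE));
    SQL
  end

  def down
    execute "DROP INDEX index_data_points_on_timestamp_date;"
  end
end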

Edit: I suppose the datetime in your CREATE TABLE is just how Rails chooses to display the Postgres timestamp data type, so there may be no cast after all.

Still, the index on timestamp is not being used; depending on your data distribution, that may actually be a perfectly sensible choice by the optimizer.

Answer 5 (score: 0)

So here we have Postgres 9.3 and a long-running query. Before touching the query itself, you have to make sure the database is configured sensibly: settings that fit your read/write ratio and your disk type (SSD or an older hard drive), autovacuum not switched off, tables and indexes checked for bloat, and indexes with good selectivity available for building a good plan.

Check the types and sizes of the columns you are filling. Changing a column's type can also reduce the table size and the query time.

Once you have made sure of all that, think about how Postgres executes the query and how to reduce the work it has to do. An ORM is fine for simple queries, but if you are trying to run a complex query, execute it as raw SQL and keep it in a query service object.

Write the SQL as simply as possible; Postgres also spends time parsing the query.
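
As an illustration of keeping the raw SQL in a query object, here is a minimal sketch; the class name, method signature, and bind-parameter names are assumptions rather than part of the original app, and the WHERE clause folds in the communities.clustering_id filter suggested in the earlier answers:

# Hypothetical query object wrapping the raw SQL.
class CommunityConsumptionQuery
  SQL = <<-SQL.freeze
    SELECT communities.id AS com,
           consumers.name AS con,
           array_agg(timestamp ORDER BY data_points.timestamp ASC) AS tims,
           array_agg(consumption ORDER BY data_points.timestamp ASC) AS cons
    FROM data_points
    INNER JOIN consumers             ON consumers.id = data_points.consumer_id
    INNER JOIN communities_consumers ON communities_consumers.consumer_id = consumers.id
    INNER JOIN communities           ON communities.id = communities_consumers.community_id
    WHERE data_points.timestamp BETWEEN :start_date AND :end_date
      AND data_points.interval_id = :interval_id
      AND communities.clustering_id = :clustering_id
    GROUP BY communities.id, consumers.id
  SQL

  def call(start_date:, end_date:, interval_id:, clustering_id:)
    DataPoint.find_by_sql([SQL, { start_date: start_date, end_date: end_date,
                                  interval_id: interval_id, clustering_id: clustering_id }])
  end
end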

Check the indexes on all join columns. Use EXPLAIN ANALYZE to verify that you are now getting the best scan methods.

Next point: you are doing four joins! Postgres has to search for the best query plan among 4! (four factorial) join orders. Consider using a subquery, or a predefined table, for this selection.

Use a separate query or function for the four joins (try a subquery):

SELECT *
FROM "data_points" AS predefined
INNER JOIN "consumers"
ON "consumers"."id" = predefined."consumer_id"
INNER JOIN "communities_consumers"
ON "communities_consumers"."consumer_id" = "consumers"."id"
INNER JOIN "communities"
ON "communities"."id" = "communities_consumers"."community_id"
INNER JOIN "clusterings"
ON "clusterings"."id" = "communities"."clustering_id"

WHERE predefined."interval_id" = 2
AND "clusterings"."id" = 1

2) Next, filter the predefined result (do not pass the values in as variables):

SELECT *
FROM predefined
WHERE predefined."timestamp"
BETWEEN '2015-11-20 09:23:00'
AND '2015-11-27 09:23:00'

3) You reference data_points three times in this query; you need fewer references to it:

array_agg(timestamp ORDER BY data_points.timestamp asc) as tims
array_agg(consumption ORDER BY data_points.timestamp ASC) as cons
WHERE ("data_points"."timestamp" BETWEEN $1 AND $2)

Keep in mind that a long-running query is not only about the query itself: it is also about your settings, your ORM usage, the SQL you write, and how Postgres handles all of it.
