How to improve retrieval query performance in ArangoDB 2.7 as the number of documents in a single collection increases

Time: 2016-02-08 16:31:25

Tags: python arangodb aql

I have stored my data in ArangoDB 2.7 in the following format:

    {"content": "Book.xml", "type": "string", "name": "name", "key": 102}
    {"content": "D:/XMLexample/Book.xml", "type": "string", "name": "location", "key": 102}
    {"content": "xml", "type": "string", "name": "mime-type", "key": 102}
    {"content": 4130, "type": "string", "name": "size", "key": 102}
    {"content": "Sun Aug 25 07:53:32 2013", "type": "string", "name": "created_date", "key": 102}
    {"content": "Wed Jan 23 09:14:07 2013", "type": "string", "name": "modified_date", "key": 102}
    {"content": "catalog", "type": "tag", "name": "root", "key": 102}
    {"content": "book", "type": "string", "name": "tag", "key": 103} 
    {"content": "bk101", "type": {"py/type": "__builtin__.str"}, "name": "id", "key": 103}
    {"content": "Gambardella, Matthew", "type": {"py/type": "__builtin__.str"}, "name": "author", "key": 1031} 
  {"content": "XML Developer's Guide", "type": {"py/type": "__builtin__.str"}, "name": "title", "key": 1031}
    {"content": "Computer", "type": {"py/type": "__builtin__.str"}, "name": "genre", "key": 1031}
    {"content": "44.95", "type": {"py/type": "__builtin__.str"}, "name": "price", "key": 1031}
    {"content": "2000-10-01", "type": {"py/type": "__builtin__.str"}, "name": "publish_date", "key": 1031}
    {"content": "An in-depth look at creating applications with XML.", "type": {"py/type": "__builtin__.str"}, "name": "description", "key": 1031}

As I increase the number of documents to 1,000, 10,000, 100,000, 1,000,000, 10,000,000 and so on, the average query response time grows with the document count, from about 0.2 seconds up to 3.0 seconds. I have already created a hash index on this collection. My question is whether this response time can be kept down even as the number of documents grows.
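Roughly, the hash index was created like this (a sketch only; the collection name DSP is taken from the answer below, and the indexed attribute is an assumption):

    // Sketch: assumes a collection named DSP and lookups that filter on "name".
    // Create a non-unique hash index on the "name" attribute:
    db.DSP.ensureIndex({ type: "hash", fields: [ "name" ] });

    // An equality lookup on the indexed attribute can then use the index:
    db._query('FOR d IN DSP FILTER d.name == "modified_date" RETURN d').toArray();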

On the other hand, I have also created a fulltext index on the content attribute, and the same thing happens with fulltext search: the response time grows from 0.05 seconds to 0.3 seconds.
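Again only a sketch of how such a fulltext index and query might look in arangosh; the collection name DSP and the minLength value are assumptions:

    // Sketch: fulltext index on the "content" attribute (minLength is an assumed value).
    db.DSP.ensureIndex({ type: "fulltext", fields: [ "content" ], minLength: 3 });

    // Fulltext lookups then go through the AQL FULLTEXT() function:
    db._query('FOR d IN FULLTEXT(DSP, "content", "xml") RETURN d').toArray();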

So please tell me, is there any way to reduce this response time further?

1 Answer:

Answer 0 (score: 1)

Indexes cannot be used in the first level of nested FOR statements. However, starting with ArangoDB 2.8 you can use array indexes.

The values you query are data[*].name and data[*].type, so we can create indexes for them:

    db.DSP.ensureIndex({type:"hash", fields: ['data[*].type']});
    db.DSP.ensureIndex({type:"hash", fields: ['data[*].name']});
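These array index definitions assume that the documents in DSP wrap the individual attribute objects in a nested data array, roughly like the following sketch (inferred from the query below, not shown in the question):

    // Hypothetical document layout that the data[*] index definitions refer to:
    db.DSP.save({
      key: 102,
      data: [
        { name: "modified_date", type: "string", content: "Wed Jan 23 09:14:07 2013" },
        { name: "created_date",  type: "string", content: "Sun Aug 25 07:53:32 2013" }
      ]
    });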

Now let's reformulate the query so that it can take advantage of these indexes. We start experimenting with a simple version and use explain to verify that it actually uses the index:

    db._explain('FOR k IN DSP FILTER "modified_date" IN k.data[*].name RETURN k')
    Query string:
     FOR k IN DSP FILTER "modified_date" IN k.data[*].name RETURN k

    Execution plan:
     Id   NodeType        Est.   Comment
      1   SingletonNode      1   * ROOT
      6   IndexNode          1     - FOR k IN DSP   /* hash index scan */
      5   ReturnNode         1       - RETURN k

    Indexes used:
     By   Type   Collection   Unique   Sparse   Selectivity   Fields               Ranges
      6   hash   DSP          false    false       100.00 %   [ `data[*].name` ]
                                                  ("modified_date" in k.`data`[*].`name`)

So we see that we can filter on the array condition, which means only the documents that actually need to be inspected end up in the inner loop:

    FOR k IN DSP FILTER "modified_date" IN k.data[*].name || "string" IN k.data[*].type
      FOR p IN k.data FILTER p.name == "modified_date" || p.type == "string" RETURN p
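
A quick sketch of running the reformulated query from arangosh; the same statement can also be passed to db._explain() to confirm that the array index is still used:

    db._query(
      'FOR k IN DSP FILTER "modified_date" IN k.data[*].name || "string" IN k.data[*].type ' +
      'FOR p IN k.data FILTER p.name == "modified_date" || p.type == "string" RETURN p'
    ).toArray();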