Elasticsearch: optimizing for 650 million records

Date: 2018-04-02 08:35:26

Tags: javascript node.js elasticsearch elasticsearch-5 elasticsearch-6

I am trying to find the best Elasticsearch setup for records that may grow to 658 million per year. Right now all of my records live in a single index with 2 shards and 0 replicas. I have also noticed that sorting and searching are faster with 356k records in one index than with 365 indices of 1,000 records each. The question: what is the best and fastest way to store 658 million records in Elasticsearch, given that I need to sort search results and delete records or indices older than one year?

Elasticsearch version 6.2, JavaScript API.

// Imports assumed by this snippet: the legacy 'elasticsearch' client plus
// application-specific `config` and `logger` modules (paths are placeholders).
import elasticsearch from 'elasticsearch'
import config from './config'
import logger from './logger'

const defaultPageSize = 10
const indexTemplateSettings = {
    number_of_shards: 2,
    number_of_replicas : 0,
    max_result_window: 1000000000,
    'index.routing.allocation.enable': 'all',
}

const createClient = () =>
    new elasticsearch.Client({
        host: `${config.elastic.host}:${config.elastic.port}`,
        log: config.elastic.logLevel,
        httpAuth: `${config.elastic.userName}:${config.elastic.password}`,
    })

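// Paged search against a single index; results are sorted on the `.keyword`
// and `.seconds` sub-fields of the requested sort field.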
export const get = ({index, skip = 0, pageSize = defaultPageSize, search, sort = {by: 'timestamp', direction: 'desc'}}) => new Promise(async resolve => {
    try {
        logger.silly(`getting data from elastic: index: ${index}, skip: ${skip}, pageSize: ${pageSize}`)

        let client = createClient()

        const sortSettings = {
            order: `${sort.direction.toLowerCase()}`,
            missing: '_last',
            unmapped_type: 'long',
        }

        const params = {
            from: skip,
            size: pageSize || undefined,
            index: `${index.toLowerCase()}`,
            filter_path: 'hits.hits._source,hits.total',
            body: {
                query: {'match_all': {}},
                sort: {
                    [`${sort.by}.keyword`]: sortSettings,
                    [`${sort.by}.seconds`]: sortSettings,
                },
            },
        }

        if (search) {
            params.body.query = {
                query_string : {
                    query: `*${search}* OR *${search}`,
                    analyze_wildcard: true,
                },
            }
        }

        await client.search(params,
            (e, {hits: {hits: data = [], total: totalCount} = {hits: [], total: 0}} = {}) => {
                logger.silly(`elastic searching completed. Result: contains ${totalCount} items`)

                resolve({items: data.map(t => t._source), totalCount})
            })
    } catch (e) {
        logger.error(e)
    }
})

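// Indexes one document and, on every push, runs a delete-by-query to remove
// documents older than one year from the same index.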
export const push = (message, type) => new Promise(async resolve => {
    try {
        let client = createClient()
        let oneYearAgoTime = new Date(new Date().setFullYear(new Date().getFullYear() - 1)).toISOString().substring(0, 10)
        let indexCreationTime = new Date('2016-04-27').toISOString().substring(0, 10)

        await client.deleteByQuery({
            index: type.toLowerCase(),
            body: {
                query: {
                    range: {
                        '_timestampIndex' : {
                            lte: oneYearAgoTime,
                        },
                    },
                },
            },
        } , (error, response) => {
            logger.silly('Deleted of data completed', response)
        })

        await client.index({
            index: type.toLowerCase(),
            type,
            body: {
                ...message,
                _timestampIndex: indexCreationTime,
            },
        },
        (error, response) => {
            logger.silly('Pushing of data completed', response)

            resolve(response)
        })

    } catch (e) {
        logger.error(e)
    }
})

1 Answer:

Answer 0 (score: 2)

  1. 1,000 documents per shard is far too low. As a rule of thumb, a shard should be in the GB range; depending on the use case, between 10GB (search) and 50GB (logging), assuming you have appropriately sized machines. If I read your comments correctly, you have 1.6 million documents taking 333MB of storage. So you will end up with roughly 400 times as many documents and therefore roughly 133GB of data; maybe 10 shards? If you want to benchmark this properly, start with 1 shard and see when it blows up; that should give you an idea of the maximum shard size (see the first sketch after this list).
  2. Deleting documents from an index is always expensive. Time-based indices (if your shards are large enough) or a filter (possibly even a filtered alias over the right time window) could save you from frequently deleting large numbers of documents (see the second sketch below).
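A minimal sketch of how a larger shard count could be applied, reusing the createClient helper from the question; the index name records-2018 and the 10-shard figure are illustrative assumptions based on the estimate above, not values from the original code.

// Create a yearly index with an explicit shard count (assumed name and settings).
const createYearlyIndex = async () => {
    const client = createClient()

    await client.indices.create({
        index: 'records-2018',            // hypothetical index name
        body: {
            settings: {
                number_of_shards: 10,     // ~133GB spread over ~10-15GB shards
                number_of_replicas: 0,
            },
        },
    })
}

For the benchmark mentioned in point 1, the same call with number_of_shards: 1 would show when a single shard stops performing acceptably.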
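A sketch of the time-based-index idea from point 2, again reusing createClient; the records-YYYY-MM naming scheme, the alias name, and the timestamp field are assumptions made for illustration.

// One index per month: new documents go to the current month's index,
// and expiry becomes an index deletion instead of a delete-by-query.
const monthlyIndexName = date => `records-${date.toISOString().substring(0, 7)}`    // e.g. records-2018-04

const pushToMonthlyIndex = async (message, type) => {
    const client = createClient()

    await client.index({
        index: monthlyIndexName(new Date()),
        type,
        body: message,
    })
}

const dropIndexFromOneYearAgo = async () => {
    const client = createClient()
    const oneYearAgo = new Date(new Date().setFullYear(new Date().getFullYear() - 1))

    // Dropping a whole index is far cheaper than deleting its documents one by one.
    await client.indices.delete({
        index: monthlyIndexName(oneYearAgo),
        ignore: [404],    // that month's index may already have been removed
    })
}

// Alternatively, a filtered alias limits searches to the last year without deleting anything.
const refreshLastYearAlias = () => {
    const client = createClient()

    return client.indices.putAlias({
        index: 'records-*',               // hypothetical index pattern
        name: 'records-last-year',        // hypothetical alias name
        body: {
            filter: {
                range: {timestamp: {gte: 'now-1y'}},    // assumes a mapped date field called `timestamp`
            },
        },
    })
}

Searching the records-last-year alias then behaves like a single index restricted to the last 12 months, while old monthly indices can be dropped on whatever schedule suits you.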