Azure DocumentDB bulk insert using a stored procedure

Date: 2015-02-27 16:13:51

Tags: azure bulkinsert azure-cosmosdb

Hi, I am using 16 collections to insert around 3-4 million JSON objects, each ranging from 5-10 KB. I use a stored procedure to insert these documents, and I have 22 capacity units.

function bulkImport(docs) {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();

    // The count of imported docs, also used as current doc index.
    var count = 0;

    // Validate input.
    if (!docs) throw new Error("The array is undefined or null.");

    var docsLength = docs.length;
    if (docsLength == 0) {
        getContext().getResponse().setBody(0);
    }

    // Call the CRUD API to create a document.
    tryCreateOrUpdate(docs[count], callback);

    // Note that there are 2 exit conditions:
    // 1) The createDocument request was not accepted. 
    //    In this case the callback will not be called, we just call setBody and we are done.
    // 2) The callback was called docs.length times.
    //    In this case all documents were created and we don't need to call tryCreate anymore. Just call setBody and we are done.
    function tryCreateOrUpdate(doc, callback) {
        var isAccepted = true;
        var isFound = collection.queryDocuments(collectionLink, 'SELECT * FROM root r WHERE r.id = "' + doc.id + '"', function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                isAccepted = collection.createDocument(collectionLink, doc, callback);
            }
            else {
                // The metadata document.
                var existingDoc = feed[0];
                isAccepted = collection.replaceDocument(existingDoc._self, doc, callback);
            }
        });

        // If the request was accepted, callback will be called.
        // Otherwise report current count back to the client, 
        // which will call the script again with remaining set of docs.
        // This condition will happen when this stored procedure has been running too long
        // and is about to get cancelled by the server. This will allow the calling client
        // to resume this batch from the point we got to before isAccepted was set to false
        if (!isFound && !isAccepted) getContext().getResponse().setBody(count);
    }

    // This is called when collection.createDocument is done and the document has been persisted.
    function callback(err, doc, options) {
        if (err) throw err;

        // One more document has been inserted, increment the count.
        count++;

        if (count >= docsLength) {
            // If we have created all documents, we are done. Just set the response.
            getContext().getResponse().setBody(count);
        } else {
            // Create next document.
            tryCreateOrUpdate(docs[count], callback);
        }
    }
}

My C# code looks like this:

    public async Task<int> Add(List<JobDTO> entities)
    {
        int currentCount = 0;
        int documentCount = entities.Count;

        while (currentCount < documentCount)
        {
            string argsJson = JsonConvert.SerializeObject(entities.Skip(currentCount).ToArray());
            var args = new dynamic[] { JsonConvert.DeserializeObject<dynamic[]>(argsJson) };

            // 6. execute the batch.
            StoredProcedureResponse<int> scriptResult =
                await DocumentDBRepository.Client.ExecuteStoredProcedureAsync<int>(sproc.SelfLink, args);

            // 7. Prepare for next batch.
            int currentlyInserted = scriptResult.Response;
            currentCount += currentlyInserted;
        }

        return currentCount;
    }

The problem I am facing is that out of the 400k documents I try to insert, some documents sometimes go missing without any error being reported.

The application is a worker role deployed to the cloud. If I increase the number of threads or instances inserting into DocumentDB, the number of missed documents is much higher.

How can I figure out what the problem is? Thank you.

3 Answers:

Answer 0: (score: 9)

I found that when trying this code I would get an error on docs.length stating that length was undefined.

function bulkImport(docs) {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();

    // The count of imported docs, also used as current doc index.
    var count = 0;

    // Validate input.
    if (!docs) throw new Error("The array is undefined or null.");

    var docsLength = docs.length; // length is undefined
}

After much testing (and finding nothing in the Azure documentation), I realized I could not pass an array as suggested. The parameter had to be an object. I had to modify the batch code like this to get it to run.

I also found that I could not simply pass an array of documents in the DocumentDB Script Explorer (input box) either, even though the placeholder help text suggests you can.

This code works for me:

// pseudo object for reference only
docObject = {
  "items": [{doc}, {doc}, {doc}]
}

function bulkImport(docObject) {
    var context = getContext();
    var collection = context.getCollection();
    var collectionLink = collection.getSelfLink();
    var count = 0;

    // Check input
    if (!docObject.items || !docObject.items.length) throw new Error("invalid document input parameter or undefined.");
    var docs = docObject.items;
    var docsLength = docs.length;
    if (docsLength == 0) {
        context.getResponse().setBody(0);
    }

    // Call the funct to create a document.
    tryCreateOrUpdate(docs[count], callback);

    // Obviously I have truncated this function. The above code should help you understand what has to change.
}
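For reference, a minimal client-side sketch of the matching call with the object wrapper is below. The names `entities`, `currentCount`, `sproc`, and `DocumentDBRepository.Client` are carried over from the question's code, and the `items` property matches the modified stored procedure above; adjust them to your own setup.

    // Sketch only: wrap the documents in an object so the stored procedure
    // receives a single parameter carrying an "items" array, not a bare array.
    var docObject = new { items = entities.Skip(currentCount).ToArray() };

    StoredProcedureResponse<int> scriptResult =
        await DocumentDBRepository.Client.ExecuteStoredProcedureAsync<int>(
            sproc.SelfLink,
            docObject);

    int currentlyInserted = scriptResult.Response;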

If I just missed it, hopefully the Azure documentation will catch up or become easier to find.

I will also file a bug report for Script Explorer in the hope that the Azure folks update it.

Answer 1: (score: 4)

It is important to note that stored procedures have bounded execution: all operations must complete within the server-specified request timeout. If an operation does not complete within that time limit, the transaction is automatically rolled back. To simplify development around this time limit, all CRUD (create, read, update, and delete) operations return a boolean value that indicates whether the operation will complete. This boolean can be used as a signal to wrap up execution and to implement a continuation-based model that resumes execution (this is illustrated in the code sample below).

The bulk-insert stored procedure provided above implements the continuation model by returning the number of documents successfully created. This is noted in the stored procedure's comments:

    // If the request was accepted, callback will be called.
    // Otherwise report current count back to the client, 
    // which will call the script again with remaining set of docs.
    // This condition will happen when this stored procedure has been running too long
    // and is about to get cancelled by the server. This will allow the calling client
    // to resume this batch from the point we got to before isAccepted was set to false
    if (!isFound && !isAccepted) getContext().getResponse().setBody(count);

If the output document count is less than the input document count, you will need to re-run the stored procedure with the remaining set of documents.
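Put together with the question's C# loop, the continuation pattern could look roughly like the sketch below. The names (`entities`, `sproc`, `DocumentDBRepository.Client`) come from the question's code; the zero-progress check is an extra guard I added so the loop does not spin without making progress.

    // Sketch of the client-side continuation loop: keep re-running the stored
    // procedure with the documents that were not yet accepted.
    int currentCount = 0;
    while (currentCount < entities.Count)
    {
        var remaining = entities.Skip(currentCount).ToArray();

        StoredProcedureResponse<int> response =
            await DocumentDBRepository.Client.ExecuteStoredProcedureAsync<int>(
                sproc.SelfLink, new dynamic[] { remaining });

        int inserted = response.Response;
        if (inserted == 0)
        {
            // No document was accepted in this round; back off before retrying
            // instead of hammering the collection.
            await Task.Delay(TimeSpan.FromSeconds(1));
            continue;
        }

        currentCount += inserted;
    }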

Answer 2: (score: 2)

Since May 2018 there is a new Batch SDK for Cosmos DB. There is a GitHub repo to get you started.

I have been able to import 100,000 records in 9 seconds, and by using Azure Batch to fan out the inserts, I imported 19 million records in 1 minute 15 seconds. This was on a collection provisioned at 1.66 million RU/s, which you can scale down after the import.
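As a rough illustration of that approach, a bulk import with the BulkExecutor library might look like the sketch below. The package (`Microsoft.Azure.CosmosDB.BulkExecutor`), types, and method names are my assumptions about that SDK, so verify them against the linked GitHub repo before relying on this.

    // Rough sketch with the BulkExecutor library; check names against the official samples.
    using Microsoft.Azure.CosmosDB.BulkExecutor;
    using Microsoft.Azure.CosmosDB.BulkExecutor.BulkImport;

    // 'client' is an existing DocumentClient, 'collection' the target DocumentCollection,
    // and 'documentsToImport' an IEnumerable of serializable documents (all assumed here).
    IBulkExecutor bulkExecutor = new BulkExecutor(client, collection);
    await bulkExecutor.InitializeAsync();

    BulkImportResponse importResponse = await bulkExecutor.BulkImportAsync(
        documents: documentsToImport,
        enableUpsert: true);            // upsert instead of failing on existing ids

    Console.WriteLine(
        $"Imported {importResponse.NumberOfDocumentsImported} documents, " +
        $"{importResponse.TotalRequestUnitsConsumed} RUs, in {importResponse.TotalTimeTaken}.");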