How to compare large files on a big data platform?

Asked: 2018-11-20 09:20:21

Tags: apache-spark hadoop hive apache-nifi

A few large files arrive every day (not very frequently, only 2-3 per day), and they are converted into JSON format.

The content of a file looks like this:

[
    {
        "spa_ref_data": {
            "approval_action": "New",
            "spa_ref_no": "6500781413",
            "begin_date": null,
            "end_date": "20191009",
            "doc_file_name": "LEN_SPA_6500781413.json",
            "LEN_V": "v1",
            "version_no": null,
            "spa_ref_id": null,
            "spa_ref_notes": "MC00020544",
            "vend_code": "LEN"
        },
        "cust_data": [
            {
                "cust_name": null,
                "cust_no": null,
                "cust_type": "E",
                "state": null,
                "country": null
            },
            {
                "cust_name": null,
                "cust_no": null,
                "cust_type": "C",
                "state": null,
                "country": null
            }
        ],
        "product_data": [
            {
                "mfg_partno": "40AH0135US",
                "std_price": null,
                "rebate_amt": "180",
                "max_spa_qty": null,
                "rebate_type": null,
                "min_spa_qty": null,
                "min_cust_qty": null,
                "max_cust_qty": null,
                "begin_date": "20180608",
                "end_date": null
            },
            {
                "mfg_partno": "40AJ0135US",
                "std_price": null,
                "rebate_amt": "210",
                "max_spa_qty": null,
                "rebate_type": null,
                "min_spa_qty": null,
                "min_cust_qty": null,
                "max_cust_qty": null,
                "begin_date": "20180608",
                "end_date": null
            }
        ]
    },
    {
        "spa_ref_data": {
            "approval_action": "New",
            "spa_ref_no": "5309745006",
            "begin_date": null,
            "end_date": "20190426",
            "doc_file_name": "LEN_SPA_5309745006.json",
            "LEN_V": "v1",
            "version_no": null,
            "spa_ref_id": null,
            "spa_ref_notes": "MC00020101",
            "vend_code": "LEN"
        },
        "cust_data": [
            {
                "cust_name": null,
                "cust_no": null,
                "cust_type": "E",
                "state": null,
                "country": null
            },
            {
                "cust_name": null,
                "cust_no": null,
                "cust_type": "C",
                "state": null,
                "country": null
            }
        ],
        "product_data": [
            {
                "mfg_partno": "10M8S0HU00",
                "std_price": null,
                "rebate_amt": "698",
                "max_spa_qty": null,
                "rebate_type": null,
                "min_spa_qty": null,
                "min_cust_qty": null,
                "max_cust_qty": null,
                "begin_date": "20180405",
                "end_date": null
            },
            {
                "mfg_partno": "20K5S0CM00",
                "std_price": null,
                "rebate_amt": "1083",
                "max_spa_qty": null,
                "rebate_type": null,
                "min_spa_qty": null,
                "min_cust_qty": null,
                "max_cust_qty": null,
                "begin_date": "20180405",
                "end_date": null
            }
        ]
    }
]

This is a mock data file. In reality, the array contains 30,000+ entries.

My goal is to compare the incoming products with the latest previously loaded ones and extract only the records that changed.
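
To make "changed data" concrete, here is roughly the diff I am after, sketched in PySpark (the paths are placeholders, and only rebate_amt is compared for brevity):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spa-diff").getOrCreate()

def load_products(path):
    # multiLine is needed because each file is one large JSON array
    df = spark.read.option("multiLine", "true").json(path)
    return (df
            .select(F.col("spa_ref_data.spa_ref_no").alias("spa_ref_no"),
                    F.explode("product_data").alias("p"))
            .select("spa_ref_no", "p.mfg_partno", "p.rebate_amt",
                    "p.begin_date", "p.end_date"))

latest = load_products("/data/spa/latest/*.json")        # previously loaded files
incoming = load_products("/data/spa/incoming/*.json")    # today's files

# Keep products that are new or whose rebate_amt differs from the last load
join_cond = ((F.col("n.spa_ref_no") == F.col("o.spa_ref_no")) &
             (F.col("n.mfg_partno") == F.col("o.mfg_partno")))
changed = (incoming.alias("n")
           .join(latest.alias("o"), join_cond, "left")
           .where(F.col("o.mfg_partno").isNull() |
                  ~F.col("n.rebate_amt").eqNullSafe(F.col("o.rebate_amt")))
           .select("n.*"))

changed.write.mode("overwrite").json("/data/spa/changed")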

My lead says I have to use big data technologies, and the performance has to be good.

We use Apache NiFi and Hadoop big data tools for this.

Any suggestions?

1 Answer:

Answer 0 (score: 0)

For example, you can use the ExecuteScript processor with a JS script to compare the JSON; it runs quickly. You can also use the SplitRecord processor to split the large JSON array first and then compare each chunk with ExecuteScript; that works well too.
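
This is not a ready-made JS script, but since ExecuteScript also accepts Jython (Script Engine = python), a minimal sketch of the idea could look like the following. The snapshot path and the per-SPA key are assumptions: the script simply diffs the incoming flow file against the previous day's file kept on local disk and keeps today's file as the next baseline.

import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

SNAPSHOT_PATH = '/data/spa/latest_snapshot.json'   # assumed location of the previous load

def entry_key(entry):
    # one record per SPA reference; refine this if a per-part-number diff is needed
    return entry['spa_ref_data']['spa_ref_no']

class DiffCallback(StreamCallback):
    def __init__(self):
        pass
    def process(self, inputStream, outputStream):
        incoming = json.loads(IOUtils.toString(inputStream, StandardCharsets.UTF_8))
        try:
            with open(SNAPSHOT_PATH) as f:
                previous = dict((entry_key(e), e) for e in json.load(f))
        except IOError:
            previous = {}                            # first run: everything counts as changed
        changed = [e for e in incoming if previous.get(entry_key(e)) != e]
        outputStream.write(bytearray(json.dumps(changed).encode('utf-8')))
        with open(SNAPSHOT_PATH, 'w') as f:          # today's file becomes the next baseline
            json.dump(incoming, f)

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, DiffCallback())
    flowFile = session.putAttribute(flowFile, 'mime.type', 'application/json')
    session.transfer(flowFile, REL_SUCCESS)

The local snapshot file is just one way to make "the latest data" visible to the script; an HDFS path or a lookup table would work equally well, and if you split the array with SplitRecord first, the same comparison logic can run per chunk.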