Azure Data Factory - Bulk Import from Blob to Azure SQL

Asked: 2016-03-13 00:45:48

Tags: sql-server azure azure-storage azure-storage-blobs azure-data-factory

I have a simple file, FD_GROUP.TXT, with contents like:

~0100~^~Dairy and Egg Products~
~0200~^~Spices and Herbs~
~0300~^~Baby Foods~
~0400~^~Fats and Oils~
~0500~^~Poultry Products~

I am trying to bulk import these files (some with 700,000 rows) into a SQL database using Azure Data Factory.

The strategy is to first split the columns on ^, then replace the tildes (~) with empty strings so they are stripped out, and then insert the rows.

1. SQL solution:

DECLARE @CsvFilePath NVARCHAR(1000) = 'D:\CodePurehope\Dev\NutrientData\FD_GROUP.txt';

CREATE TABLE #TempTable
 (
    [FoodGroupCode] VARCHAR(666) NOT NULL, 
    [FoodGroupDescription] VARCHAR(60) NOT NULL
 )

DECLARE @sql NVARCHAR(4000) = 'BULK INSERT #TempTable FROM ''' + @CsvFilePath + ''' WITH ( FIELDTERMINATOR =''^'', ROWTERMINATOR =''\n'' )';
EXEC(@sql);

UPDATE #TempTable
   SET [FoodGroupCode] = REPLACE([FoodGroupCode], '~', ''),
       [FoodGroupDescription] = REPLACE([FoodGroupDescription], '~', '')
GO

INSERT INTO [dbo].[FoodGroupDescriptions]
(
    [FoodGroupCode],
    [FoodGroupDescription]
)
SELECT
    [FoodGroupCode],
    [FoodGroupDescription]
FROM
    #TempTable
GO

DROP TABLE #TempTable

2. SSIS ETL package solution: [screenshot of the SSIS data flow omitted]

A Flat File Source delimited on ^ plus a Derived Column transformation to strip out the unwanted tildes (~), as shown in the screenshot above.

How can I do this with Microsoft Azure Data Factory?
I uploaded FD_GROUP.TXT to an Azure Storage blob as the input and have the table ready on Azure SQL Server for the output.

I have:
- 2 linked services: AzureStorage and AzureSQL
- 2 datasets: Blob as input, SQL as output
- 1 pipeline

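The two linked service definitions are omitted here; roughly, they look like this (a minimal sketch in ADF v1 JSON, with placeholder account, server, and credential values rather than the real ones):

{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        }
    }
}

{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>@<server>;Password=<password>;Encrypt=True;Connection Timeout=30"
        }
    }
}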

FoodGroupDescriptionsAzureBlob settings

{
    "name": "FoodGroupDescriptionsAzureBlob",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "FD_GROUP.txt",
            "folderPath": "nutrition-data/NutrientData/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "^"
            }
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}

FoodGroupDescriptionsSQLAzure settings

{
    "name": "FoodGroupDescriptionsSQLAzure",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureSqlTable",
        "linkedServiceName": "AzureSqlLinkedService",
        "typeProperties": {
            "tableName": "FoodGroupDescriptions"
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}
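
The output table on the Azure SQL side looks roughly like this (a sketch only; the INT and VARCHAR(60) types mirror the dataset structure above and the temp table in the SQL solution):

CREATE TABLE [dbo].[FoodGroupDescriptions]
(
    [FoodGroupCode] INT NOT NULL,                 -- Int32 in the dataset structure above
    [FoodGroupDescription] VARCHAR(60) NOT NULL   -- length borrowed from the temp table in the SQL solution
);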

FoodGroupDescriptionsPipeline settings

{
    "name": "FoodGroupDescriptionsPipeline",
    "properties": {
        "description": "Copy data from a blob to Azure SQL table",
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "BlobSource"
                    },
                    "sink": {
                        "type": "SqlSink",
                        "writeBatchSize": 10000,
                        "writeBatchTimeout": "60.00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "FoodGroupDescriptionsAzureBlob"
                    }
                ],
                "outputs": [
                    {
                        "name": "FoodGroupDescriptionsSQLAzure"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1,
                    "executionPriorityOrder": "NewestFirst"
                },
                "scheduler": {
                    "frequency": "Minute",
                    "interval": 15
                },
                "name": "CopyFromBlobToSQL",
                "description": "Bulk Import FoodGroupDescriptions"
            }
        ],
        "start": "2015-07-13T00:00:00Z",
        "end": "2015-07-14T00:00:00Z",
        "isPaused": false,
        "hubName": "gymappdatafactory_hub",
        "pipelineMode": "Scheduled"
    }
}

This does not work in Azure Data Factory, and I don't know how to do the replace in this context. Any help is appreciated.

1 Answer:

Answer 0 (score: 1)

I took your code and was able to get it working by doing the following:

In the FoodGroupDescriptionsAzureBlob JSON definition, you need to add "external": true to the properties node. The blob input file is created by an external source, not by an Azure Data Factory pipeline; setting this to true lets Azure Data Factory know that this input is ready to be consumed.

Also add "quoteChar": "~" to the "format" node of the blob input definition, since the values appear to be quoted with "~". This strips the tildes from the data, so the FoodGroupCode column you defined as Int32 will insert into the SQL table correctly.

Full blob def:

{
    "name": "FoodGroupDescriptionsAzureBlob",
    "properties": {
        "structure": [
            {
                "name": "FoodGroupCode",
                "type": "Int32"
            },
            {
                "name": "FoodGroupDescription",
                "type": "String"
            }
        ],
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "fileName": "FD_GROUP.txt",
            "folderPath": "nutrition-data/NutrientData/",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "^",
                "quoteChar": "~"
            }
        },
        "availability": {
            "frequency": "Minute",
            "interval": 15
        },
        "external": true,
        "policy": {}
    }
}

Since you set the interval to every 15 minutes and the pipeline's start and end dates span a whole day, you would get a slice every 15 minutes for the entire pipeline duration (96 slices for the day). Since you only want this to run once, change the end to:

  "start": "2015-07-13T00:00:00Z",
  "end": "2015-07-13T00:15:00Z",

This will create just 1 slice.

Hope this helps.
