Using handwriting recognition from the Azure Computer Vision API on a locally stored image file

Date: 2020-09-04 18:50:27

Tags: python-3.x azure azure-cognitive-services handwriting-recognition

I'm trying to improve my coding and cloud skills by exploring Azure. I want to automate some administrative tasks that involve deciphering a large volume of handwritten documents and storing the text electronically.

The Python code below is a merge of two sources:

  1. Taygan Rifat's blog at https://www.taygan.co/blog/2018/4/28/image-processing-with-cognitive-services

  2. Microsoft's own demo code at https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-hand-text

import json
import os
import sys
import requests
import time
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from PIL import Image
from io import BytesIO

subscription_key = 'XX79fdc005d542XXXb5f29ce04ab1cXXX'
endpoint = 'https://handwritng.cognitiveservices.azure.com/'
analyze_url = endpoint + "vision/v3.0/analyze"
text_recognition_url = endpoint + "/vision/v3.0/read/analyze"

image_url = "https://3j2w6t1pktei3iwq0u47sym8-wpengine.netdna-ssl.com/wp-content/uploads/2014/08/Handwriting-sample-Katie.png"

headers = {'Ocp-Apim-Subscription-Key': subscription_key}
data = {'url': image_url}
response = requests.post(
    text_recognition_url, headers=headers, json=data)
response.raise_for_status()

# Extracting text requires two API calls: One call to submit the
# image for processing, the other to retrieve the text found in the image.

# Holds the URI used to retrieve the recognized text.
operation_url = response.headers["Operation-Location"]

# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while (poll):
    response_final = requests.get(
        response.headers["Operation-Location"], headers=headers)
    analysis = response_final.json()

    print(json.dumps(analysis, indent=4))

    time.sleep(1)
    if ("analyzeResult" in analysis):
        poll = False
    if ("status" in analysis and analysis['status'] == 'failed'):
        poll = False

polygons = []
if ("analyzeResult" in analysis):
    # Extract the recognized text, with bounding boxes.
    polygons = [(line["boundingBox"], line["text"])
                for line in analysis["analyzeResult"]["readResults"][0]["lines"]]

# Display the image and overlay it with the extracted text.
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image)
for polygon in polygons:
    vertices = [(polygon[0][i], polygon[0][i + 1])
                for i in range(0, len(polygon[0]), 2)]
    text = polygon[1]
    print(text)
    patch = Polygon(vertices, closed=True, fill=False, linewidth=2, color='y')
    ax.axes.add_patch(patch)
    plt.text(vertices[0][0], vertices[0][1], text, fontsize=20, va="top")


plt.show()

What I'd like is some help modifying the script so that it can process a locally stored image file (instead of using a URL).

Currently, I'm working around this by spinning up an IIS server on an Azure virtual machine and accessing the image I want to analyze through a URL it serves. It's a bit clunky (and, for my purposes, somewhat insecure).

Thanks, WL

1 Answer:

Answer 0 (score: 1)

Here you go:

...
# You could also read the image file name from command line
# as the first argument passed to your script:

# try:
#    input_image = sys.argv[1]
# except:
#    sys.exit('No input. Pass input image file name as first argument.')

input_image = "your_input_image.jpg"
with open(input_image, 'rb') as f:
    data = f.read()
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-type': 'application/octet-stream'
    }
    response = requests.post(
        text_recognition_url, headers=headers, data=data)
    response.raise_for_status()
...

Then, later on:

# Display the image and overlay it with the extracted text.
image = Image.open(input_image)
...

Most Azure Cognitive Services that accept an image URL also accept raw bytes: send Content-Type: application/octet-stream and pass the binary image data as the POST payload.
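
For completeness, here is a minimal end-to-end sketch of the modified flow, combining the question's v3.0 Read API polling with the local-file upload shown above. The key and endpoint are the question's redacted placeholders, and your_input_image.jpg is a placeholder file name:

import time
import requests

# Placeholders -- substitute your own key, endpoint and image file.
subscription_key = 'XX79fdc005d542XXXb5f29ce04ab1cXXX'
endpoint = 'https://handwritng.cognitiveservices.azure.com/'
text_recognition_url = endpoint + "vision/v3.0/read/analyze"
input_image = "your_input_image.jpg"

# Submit the local image as raw bytes instead of a JSON URL payload.
with open(input_image, 'rb') as f:
    image_data = f.read()

headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-Type': 'application/octet-stream'
}
response = requests.post(text_recognition_url, headers=headers, data=image_data)
response.raise_for_status()

# The Read API is asynchronous: poll the Operation-Location URL until done.
operation_url = response.headers["Operation-Location"]
analysis = {}
while True:
    analysis = requests.get(operation_url, headers=headers).json()
    if "analyzeResult" in analysis or analysis.get("status") == "failed":
        break
    time.sleep(1)

# Print each recognized line of text.
for page in analysis.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"])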

See Analyze image:

Supported input methods:

Raw image binary or an image URL.

Content types:

url

octet-stream

Input requirements (a quick pre-upload check is sketched after this list):

Supported image formats: JPEG, PNG, GIF, BMP.
The image file size must be less than 4 MB.
The image dimensions must be at least 50 x 50 pixels.
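
If you want to catch bad input before spending an API call, a rough local check against those limits might look like this. check_image is a hypothetical helper name, and it assumes Pillow is installed (as in the question's script):

import os
from PIL import Image

def check_image(path):
    # Rough pre-flight check against the documented input limits.
    if os.path.getsize(path) >= 4 * 1024 * 1024:
        raise ValueError("Image file must be smaller than 4 MB")
    with Image.open(path) as img:
        if img.format not in ("JPEG", "PNG", "GIF", "BMP"):
            raise ValueError(f"Unsupported format: {img.format}")
        if img.width < 50 or img.height < 50:
            raise ValueError("Image must be at least 50 x 50 pixels")

check_image("your_input_image.jpg")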

By the way, if you ever need a quick web server for a future task, Python has you covered:

# usage:
# python3 -m http.server [-h] [--cgi] [--bind ADDRESS]
#                        [--directory DIRECTORY] [port]

$ python3 -m http.server
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
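
With that server running in the folder holding your scans, the question's original script could point image_url at something like http://<vm-address>:8000/your_input_image.jpg (both the address and file name here are placeholders); although with the octet-stream approach above, you no longer need a web server at all.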