How to accurately extract data from an image? Using PyTesseract

Time: 2019-09-10 06:41:32

Tags: python ocr tesseract python-tesseract

I am trying to accurately extract text from an image using Python.

Here is the image I am using in this case:

Image 1

Here is my Python file:

from PIL import Image
import pytesseract

# Point pytesseract at the tesseract executable
pytesseract.pytesseract.tesseract_cmd = r'C:\Users\test\AppData\Roaming\Python\Python37\site-packages\tesseract.exe'

# Open the label image and run OCR with the English language model
img = Image.open('C:/Users/test/Desktop/Everything else/work/Almonds.jpg')
text = pytesseract.image_to_string(img, lang='eng')

print(text)

Here is the output when the Python file is run from the command prompt:

INGREDIENTS: Almonds: [Nuts] Allergy Advice:
For allergens, see ingredients in Bold

Nutritional Information
TYPICALVALUES Per 100g

Energy kJ 2597.0}
Energy kcal 626.0)
Fat 50.6g|

of which Saturates 3.9g

Carbohy drate 19.7g

of which Sugars 4.89|
Fibre 3.59
Protein 21.3g|

May contain traces of
other nuts, peanut,
sesame or gluten

This product may contain
pieces of shell

Store in a cool dry place
jout of direct sunlight

Net weight:



Salt 0.ig

For Best Before & Batch see pack 1 k

As you can see, not all of the text is recognized correctly. Do you have any suggestions for improving the accuracy of the text output?

Extra

Here is some background on what I am trying to achieve. It is not directly related to the question, but it should give you an idea of what I am working towards.

I have image files for multiple products that I want to compare against an Excel sheet.

The Excel sheet is formatted as follows (one example entry):

Product Code: 0001
Product Desc: Californian Whole Almonds
Ingredients: Almonds: [Nuts]
Allergy Advice: True
etc...

I will then write a script that detects the text in the image files, compares it against the Excel sheet, and checks whether each field matches, giving an output of "True" or "False", as sketched below.
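
Here is a minimal sketch of that comparison step, assuming the sheet can be read with pandas; the column names ("Product Code", "Product Desc", "Ingredients"), the file paths, and the image naming scheme are hypothetical placeholders, not my actual data:

import pandas as pd
import pytesseract
from PIL import Image

def check_product(image_path, row):
    """Return True if every expected field appears in the OCR'd label text."""
    ocr_text = pytesseract.image_to_string(Image.open(image_path), lang='eng')
    # Normalise case and whitespace so minor OCR differences do not break matching
    ocr_text = ' '.join(ocr_text.lower().split())
    expected = [str(row['Product Desc']), str(row['Ingredients'])]
    return all(' '.join(field.lower().split()) in ocr_text for field in expected)

products = pd.read_excel('products.xlsx')        # hypothetical sheet path
for _, row in products.iterrows():
    image_path = f"{row['Product Code']}.jpg"    # hypothetical image naming scheme
    print(row['Product Code'], check_product(image_path, row))

For noisier OCR output, the substring check could be replaced with a fuzzy comparison (e.g. difflib.SequenceMatcher) and a similarity threshold.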

1 Answer:

Answer 0 (score: 0)

Preprocessing the image to smooth out/remove noise before throwing it into Pytesseract should help. Removing the horizontal/vertical lines might also improve detection.


import cv2

# Grayscale load + Otsu threshold (inverted so text and lines become white on black)
image = cv2.imread('1.jpg', 0)
thresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.fillPoly(thresh, cnts, [0,0,0])

# Remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,45))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cv2.fillPoly(thresh, cnts, [0,0,0])

# Invert back so the cleaned text is black on a white background
result = 255 - thresh

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
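
The snippet above only displays the cleaned image; as a rough follow-up (the --psm 6 page segmentation mode is an assumption on my part, not part of the code above), the result array could be passed straight to Pytesseract:

import pytesseract

# 'result' is the cleaned black-on-white image produced above.
# --psm 6 assumes a single uniform block of text; other modes may suit this label better.
cleaned_text = pytesseract.image_to_string(result, config='--psm 6')
print(cleaned_text)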