Swift 3 - How do I improve image quality for Tesseract?

Asked: 2017-03-12 01:55:33

Tags: ios swift swift3 tesseract

I'm building a mobile app with Swift 3 that lets the user take a photo and then runs Tesseract OCR over the resulting image.

However, I've been trying to improve the scan quality and it doesn't seem to help much. I've cropped the photo down to "zoom in" on the area I want recognized, and I've even tried converting it to black and white. Are there any strategies for enhancing or optimizing the picture quality/size so that Tesseract can recognize it better? Thanks!

tesseract.image = // the camera photo here
tesseract.recognize()
print(tesseract.recognizedText)

I'm getting these errors and don't know what to do about them:

Error in pixCreateHeader: depth must be {1, 2, 4, 8, 16, 24, 32}
Error in pixCreateNoInit: pixd not made
Error in pixCreate: pixd not made
Error in pixGetData: pix not defined
Error in pixGetWpl: pix not defined
2017-03-11 22:22:30.019717 ProjectName[34247:8754102] Cannot convert image to Pix with bpp = 64
Error in pixSetYRes: pix not defined
Error in pixGetDimensions: pix not defined
Error in pixGetColormap: pix not defined
Error in pixClone: pixs not defined
Error in pixGetDepth: pix not defined
Error in pixGetWpl: pix not defined
Error in pixGetYRes: pix not defined
Please call SetImage before attempting recognition.Please call SetImage before attempting recognition.2017-03-11 22:22:30.026605 EOB-Reader[34247:8754102] No recognized text. Check that -[Tesseract setImage:] is passed an image bigger than 0x0.

1 Answer:

Answer 0 (score: 6)

I've been using Tesseract in Swift 3 fairly successfully with the following:

func performImageRecognition(_ image: UIImage) {

    // Configure Tesseract for English with the combined Tesseract + Cube engine
    let tesseract = G8Tesseract(language: "eng")
    tesseract?.engineMode = .tesseractCubeCombined
    // Treat the photo as a single uniform block of text
    tesseract?.pageSegmentationMode = .singleBlock
    // Recognize the image that was passed in
    tesseract?.image = image
    tesseract?.recognize()
    let textFromImage = tesseract?.recognizedText
    print(textFromImage ?? "No text recognized")
}
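Depending on what is in the photo, tweaking the recognition settings can also make a noticeable difference. Here is a minimal sketch of a few options the G8Tesseract wrapper exposes; the whitelist string and time limit are purely illustrative values, not part of the original answer:

let tesseract = G8Tesseract(language: "eng")
tesseract?.engineMode = .tesseractCubeCombined
// .auto lets Tesseract segment the page itself; .singleLine or .singleWord
// can work better when the photo is a tight crop of one line or word
tesseract?.pageSegmentationMode = .auto
// Illustrative: restrict recognition to digits when scanning numbers only
tesseract?.charWhitelist = "0123456789"
// Illustrative: stop recognition if it takes longer than 60 seconds
tesseract?.maximumRecognitionTime = 60.0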

I also found that pre-processing the image helps. I added the following extension to UIImage:

import UIKit
import CoreImage

    extension UIImage {

        // Converts the image to grayscale using the Core Image "Noir" photo effect
        func toGrayScale() -> UIImage {

            let context = CIContext(options: nil)
            let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
            currentFilter!.setValue(CIImage(image: self), forKey: kCIInputImageKey)
            let output = currentFilter!.outputImage
            let cgimg = context.createCGImage(output!, from: output!.extent)

            return UIImage(cgImage: cgimg!)
        }

        // Runs the image through the "Mono" photo effect for a high-contrast
        // monochrome result that Tesseract finds easier to read
        func binarise() -> UIImage {

            let glContext = EAGLContext(api: .openGLES2)!
            let ciContext = CIContext(eaglContext: glContext, options: [kCIContextOutputColorSpace : NSNull()])
            let filter = CIFilter(name: "CIPhotoEffectMono")
            filter!.setValue(CIImage(image: self), forKey: kCIInputImageKey)
            let outputImage = filter!.outputImage
            let cgimg = ciContext.createCGImage(outputImage!, from: outputImage!.extent)

            return UIImage(cgImage: cgimg!)
        }

        // Scales the image so its longest side is 640 points while preserving the aspect ratio
        func scaleImage() -> UIImage {

            let maxDimension: CGFloat = 640
            var scaledSize = CGSize(width: maxDimension, height: maxDimension)
            var scaleFactor: CGFloat

            if self.size.width > self.size.height {
                scaleFactor = self.size.height / self.size.width
                scaledSize.width = maxDimension
                scaledSize.height = scaledSize.width * scaleFactor
            } else {
                scaleFactor = self.size.width / self.size.height
                scaledSize.height = maxDimension
                scaledSize.width = scaledSize.height * scaleFactor
            }

            UIGraphicsBeginImageContext(scaledSize)
            self.draw(in: CGRect(x: 0, y: 0, width: scaledSize.width, height: scaledSize.height))
            let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            return scaledImage!
        }

        // Redraws the image so its pixel data matches the .up orientation,
        // which stops Tesseract from being handed rotated text
        func orientate(img: UIImage) -> UIImage {

            if (img.imageOrientation == UIImageOrientation.up) {
                return img
            }

            UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
            let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
            img.draw(in: rect)

            let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()

            return normalizedImage
        }

    }
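If the grayscale and mono filters alone are not enough, boosting the contrast before binarising can also help. A rough sketch of a hypothetical helper, not part of the original answer, using Core Image's CIColorControls filter; the 1.5 contrast value is only an illustration to tune for your photos:

extension UIImage {

    // Hypothetical helper: raises contrast and strips colour with CIColorControls
    // so the text separates more cleanly from the background
    func boostContrast() -> UIImage {
        let context = CIContext(options: nil)
        let filter = CIFilter(name: "CIColorControls")!
        filter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        filter.setValue(1.5, forKey: kCIInputContrastKey)   // illustrative contrast boost
        filter.setValue(0.0, forKey: kCIInputSaturationKey) // drop colour entirely
        let output = filter.outputImage!
        let cgimg = context.createCGImage(output, from: output.extent)!
        return UIImage(cgImage: cgimg)
    }
}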

Then, before passing the image to performImageRecognition, call the extension methods:
func processImage() {

    // Run the pre-processing steps in sequence on the displayed image
    self.imageView.image = self.imageView.image!.toGrayScale()
    self.imageView.image = self.imageView.image!.binarise()
    self.imageView.image = self.imageView.image!.scaleImage()
}
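Putting it together, one possible way to wire this up (a sketch only: it assumes the imageView outlet from the answer and also calls the orientate helper, which the extension defines but processImage above does not use):

func scanPhoto(_ photo: UIImage) {
    // Normalise orientation first, then clean the image up for OCR
    var prepared = photo.orientate(img: photo)
    prepared = prepared.toGrayScale()
    prepared = prepared.binarise()
    // scaleImage() redraws through a UIGraphics context, which also yields a
    // standard 32-bit bitmap and should avoid the "Cannot convert image to
    // Pix with bpp = 64" error from the question
    prepared = prepared.scaleImage()

    imageView.image = prepared
    performImageRecognition(prepared)
}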

Hope this helps.
