Using RenderScript

Date: 2019-09-04 18:50:32

Tags: android android-camera2 renderscript yuv landscape-portrait

For a video image-processing project I have to rotate the incoming YUV image data so that it is displayed in portrait instead of landscape. I used this project, which gave me a good understanding of how to convert YUV image data to ARGB for real-time processing. The only drawback of that project is that it works in landscape only; there is no option for portrait mode (I don't understand why Google provides a sample that only handles landscape). I wanted to change that.

So I decided to write a custom YUV-to-RGB script that rotates the data so that it is displayed correctly in portrait mode. The following GIF demonstrates how the app displays the data before the rotation is applied.

[GIF: camera preview rendered sideways before the rotation is applied]

You have to know that in Android the YUV image data arrives in landscape orientation even when the device is held in portrait mode (I did not know this before starting the project, and I also do not understand why there is no method available to rotate a frame in a single call). This means that the origin of each frame is in the bottom-left corner even when the device is in portrait mode, whereas in portrait mode the origin of each frame should be in the top-left corner. I use matrix notation for the fields (e.g. (0,0), (0,1), and so on). Note: I took the sketch from here:

[Sketch: layout of a single YUV_420 frame in landscape orientation]

To rotate the landscape-oriented frame we have to reorganize the fields. Here is the mapping I applied to the sketch above, which shows a single yuv_420 frame in landscape mode. The mapping is supposed to rotate the frame by 90 degrees:

first column starting from the bottom-left corner and going upwards:
(0,0) -> (0,5)       // (0,0) should be at (0,5)
(0,1) -> (1,5)       // (0,1) should be at (1,5)
(0,2) -> (2,5)       // and so on ..
(0,3) -> (3,5)
(0,4) -> (4,5)
(0,5) -> (5,5)

2nd column starting at (1,0) and going upwards:
(1,0) -> (0,4)
(1,1) -> (1,4)
(1,2) -> (2,4)
(1,3) -> (3,4)
(1,4) -> (4,4)
(1,5) -> (5,4)

and so on...

So what actually happens is that the first column becomes the new first row, the second column becomes the new second row, and so on. From the mapping we can make the following observations:

  • The x coordinate of the pixel we read from is always equal to the y coordinate of the result, so we can say x = y.
  • The y coordinate of the pixel we read from must always satisfy the equation y = width - 1 - x. (I checked this against every coordinate in the sketch, and it always held.) A small plain-Java sketch of this mapping follows below.
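
To double-check these observations, here is a minimal plain-Java sketch (not part of the app) that applies exactly this inverse mapping to a tiny 3x2 grid. The class name RotateMappingDemo is made up for the example; note that for the indices to stay in bounds, the width used in the second equation has to be the width of the rotated (portrait) output, which equals the height of the landscape input:

import java.util.Arrays;

public final class RotateMappingDemo {

    // Rotates a landscape frame by 90 degrees (clockwise) using the inverse mapping
    // from the observations above: for every output pixel (x, y) read the input
    // pixel at (inX, inY) = (y, outWidth - 1 - x). Arrays are indexed [row][col].
    static int[][] rotate90(int[][] src) {
        int inH = src.length, inW = src[0].length;
        int outW = inH, outH = inW;                  // portrait dimensions
        int[][] dst = new int[outH][outW];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                int inX = y;                         // 1st observation
                int inY = outW - 1 - x;              // 2nd observation
                dst[y][x] = src[inY][inX];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] landscape = { {1, 2, 3}, {4, 5, 6} };      // 3 wide, 2 tall
        for (int[] row : rotate90(landscape)) {
            System.out.println(Arrays.toString(row));      // [4, 1] / [5, 2] / [6, 3]
        }
    }
}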

So I wrote the following RenderScript kernel function:

#pragma version(1)
#pragma rs java_package_name(com.jon.condino.testing.renderscript)
#pragma rs_fp_relaxed

rs_allocation gCurrentFrame;
int width;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uint32_t x,uint32_t y)
{

    uint32_t inX = y;             // 1st observation: set x=y
    uint32_t inY = width - 1 - x; // 2nd observation: the equation mentioned above

    // the remaining lines just retrieve the YUV pixel elements, convert them to RGB and return the result

    // Read in pixel values from latest frame - YUV color space
    // The functions rsGetElementAtYuv_uchar_? require API 18
    uchar4 curPixel;
    curPixel.r = rsGetElementAtYuv_uchar_Y(gCurrentFrame, inX, inY);
    curPixel.g = rsGetElementAtYuv_uchar_U(gCurrentFrame, inX, inY);
    curPixel.b = rsGetElementAtYuv_uchar_V(gCurrentFrame, inX, inY);

    // uchar4 rsYuvToRGBA_uchar4(uchar y, uchar u, uchar v);
    // This function uses the NTSC formulae to convert YUV to RBG
    uchar4 out = rsYuvToRGBA_uchar4(curPixel.r, curPixel.g, curPixel.b);

    return out;
}
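
The script above only shows the kernel. Since invoke_setInputImageSize(x, y) is called from Java further below, the script is assumed to also contain an invokable setter along these lines (a sketch; the exact body is not shown here):

// Invokable function assumed from the Java call invoke_setInputImageSize(x, y) below;
// it only needs to fill the global used by the kernel. The height parameter is unused here.
void setInputImageSize(int w, int h) {
    width = w;
}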

The approach seems to work, but there is a small error, as the image below shows. The camera preview is indeed in portrait mode now, but there is a very strange line of colors along the left side of the preview. Why does that happen? (Note that I am using the back-facing camera.)

[Screenshot: portrait preview with a strip of wrong colors along the left edge]

Any suggestions for solving this problem would be great. I have been dealing with this issue (rotating YUV from landscape to portrait) for two weeks now, and this is by far the best solution I could come up with on my own. I hope someone can help improve the code so that the strange color line on the left side disappears as well.

Update:

These are the allocations I create in the code:

// yuvInAlloc will be the Allocation that will get the YUV image data
// from the camera
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// here the createYuvIoInputAlloc() method
public Allocation createYuvIoInputAlloc(RenderScript rs, int x, int y, int yuvFormat) {
    return Allocation.createTyped(rs, createYuvType(rs, x, y, yuvFormat),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
}

// the custom script will convert the YUV to RGBA and put it to this Allocation
rgbInAlloc = RsUtil.createRgbAlloc(rs, x, y);

// here the createRgbAlloc() method
public Allocation createRgbAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y));
}



// the allocation to which we put all the processed image data
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// here the createRgbIoOutputAlloc() method
public Allocation createRgbIoOutputAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y),
                Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
}
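
For reference, here is a rough sketch of how these allocations are typically wired to the camera and to the preview; the TextureView and session variables are placeholders, not the actual names from my project:

// The input allocation exposes a Surface (USAGE_IO_INPUT) that is handed to the
// camera capture session as an output target, so camera frames land in yuvInAlloc.
Surface rsInputSurface = yuvInAlloc.getSurface();
// ... add rsInputSurface to the capture session / CaptureRequest targets ...

// The output allocation (USAGE_IO_OUTPUT) is bound to the view's Surface, so that
// ioSend() pushes the processed RGBA frames to the screen.
Surface previewSurface = new Surface(mTextureView.getSurfaceTexture());
rgbOutAlloc.setSurface(previewSurface);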

Some other helper methods:

public Type createType(RenderScript rs, Element e, int x, int y) {
    if (Build.VERSION.SDK_INT >= 21) {
        return Type.createXY(rs, e, x, y);
    } else {
        return new Type.Builder(rs, e).setX(x).setY(y).create();
    }
}

@RequiresApi(18)
public Type createYuvType(RenderScript rs, int x, int y, int yuvFormat) {
    boolean supported = yuvFormat == ImageFormat.NV21 || yuvFormat == ImageFormat.YV12;
    if (Build.VERSION.SDK_INT >= 19) {
        supported |= yuvFormat == ImageFormat.YUV_420_888;
    }
    if (!supported) {
        throw new IllegalArgumentException("invalid yuv format: " + yuvFormat);
    }
    return new Type.Builder(rs, createYuvElement(rs)).setX(x).setY(y).setYuvFormat(yuvFormat)
            .create();
}

public Element createYuvElement(RenderScript rs) {
    if (Build.VERSION.SDK_INT >= 19) {
        return Element.YUV(rs);
    } else {
        return Element.createPixel(rs, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
    }
}

Invoking the custom RenderScript and the allocations:

// see below how the input size is determined
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);

// receive some frames
yuvInAlloc.ioReceive();


// performs the conversion from the YUV to RGB
customYUVToRGBAConverter.forEach_convert(rgbInAlloc);

// this just does the frame manipulation, e.g. applying a particular filter
renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);


// send manipulated data to output stream
rgbOutAlloc.ioSend();
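
Since yuvInAlloc.setOnBufferAvailableListener(this) is registered above, this sequence would typically run once per camera frame inside the listener callback, roughly like this (a sketch; the pending-frame bookkeeping is omitted):

@Override
public void onBufferAvailable(Allocation a) {
    // a == yuvInAlloc here; each callback corresponds to one new camera buffer
    yuvInAlloc.ioReceive();                               // latch the frame
    customYUVToRGBAConverter.forEach_convert(rgbInAlloc); // YUV -> rotated RGBA
    renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);   // apply the filter
    rgbOutAlloc.ioSend();                                 // push to the output surface
}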

Last but not least, the input image size. The x and y coordinates in the methods shown above are based on the preview size, referred to here as mPreviewSize:

int deviceOrientation = getWindowManager().getDefaultDisplay().getRotation();
int totalRotation = sensorToDeviceRotation(cameraCharacteristics, deviceOrientation);
// determine if we are in portrait mode
boolean swapRotation = totalRotation == 90 || totalRotation == 270;
int rotatedWidth = width;
int rotatedHeight = height;

// are we in portrait mode? If yes, then swap the values
if(swapRotation){
      rotatedWidth = height;
      rotatedHeight = width;
}

// determine the preview size 
mPreviewSize = chooseOptimalSize(
                  map.getOutputSizes(SurfaceTexture.class),
                  rotatedWidth,
                  rotatedHeight);
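
sensorToDeviceRotation() is not shown here; a sketch following the pattern from Google's Camera2 samples would look like this (the ORIENTATIONS table mapping Surface.ROTATION_* constants to degrees is part of the sketch, not of my code):

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

// Combines the fixed sensor orientation with the current device rotation,
// yielding the total rotation needed to bring frames upright.
private static int sensorToDeviceRotation(CameraCharacteristics c, int deviceOrientation) {
    int sensorOrientation = c.get(CameraCharacteristics.SENSOR_ORIENTATION);
    deviceOrientation = ORIENTATIONS.get(deviceOrientation);  // Surface.ROTATION_* -> degrees
    return (sensorOrientation + deviceOrientation + 360) % 360;
}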

So, following this approach, in my case x will be mPreviewSize.getWidth() and y will be mPreviewSize.getHeight().

1 Answer:

Answer 0 (score: 0)

Please see my YuvConverter. It was inspired by android - Renderscript to convert NV12 yuv to RGB.

Its rs part is very straightforward:

#pragma version(1)
#pragma rs java_package_name(whatever)
#pragma rs_fp_relaxed

rs_allocation Yplane;
uint32_t Yline;
uint32_t UVline;
rs_allocation Uplane;
rs_allocation Vplane;
rs_allocation NV21;
uint32_t Width;
uint32_t Height;

uchar4 __attribute__((kernel)) YUV420toRGB(uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAt_uchar(Yplane, x + y * Yline);
    uchar V = rsGetElementAt_uchar(Vplane, (x & ~1) + y/2 * UVline);
    uchar U = rsGetElementAt_uchar(Uplane, (x & ~1) + y/2 * UVline);
    // https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion
    short R = Y + (512                    + 1436 * (V - 128)) / 1024; //                      1.402
    short G = Y + (512 -  352 * (U - 128) -  731 * (V - 128)) / 1024; // -0.344136  -0.714136
    short B = Y + (512 + 1815 * (U - 128)                   ) / 1024; //  1.772
    if (R < 0) R = 0; else if (R > 255) R = 255;
    if (G < 0) G = 0; else if (G > 255) G = 255;
    if (B < 0) B = 0; else if (B > 255) B = 255;
    return (uchar4){R, G, B, 255};
}

uchar4 __attribute__((kernel)) YUV420toRGB_180(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Width - 1 - x, Height - 1 - y);
}

uchar4 __attribute__((kernel)) YUV420toRGB_90(uint32_t x, uint32_t y)
{
    return YUV420toRGB(y, Width - x - 1);
}

uchar4 __attribute__((kernel)) YUV420toRGB_270(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Height - 1 - y, x);
}
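
For reference, the integer arithmetic above is a fixed-point version (scaled by 1024, with 512 acting as the rounding bias) of the JPEG YCbCr-to-RGB conversion linked in the comment:

R = Y + 1.402 * (V - 128)
G = Y - 0.344136 * (U - 128) - 0.714136 * (V - 128)
B = Y + 1.772 * (U - 128)

with 1436/1024 ≈ 1.402, 352/1024 ≈ 0.344, 731/1024 ≈ 0.714 and 1815/1024 ≈ 1.772; the result is then clamped to [0, 255].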

I use my Java wrapper from Flutter, but that does not really matter here:

public class YuvConverter implements AutoCloseable {

    private RenderScript rs;
    private ScriptC_yuv2rgb scriptC_yuv2rgb;
    private Bitmap bmp;

    YuvConverter(Context ctx, int ySize, int uvSize, int width, int height) {
        rs = RenderScript.create(ctx);
        scriptC_yuv2rgb = new ScriptC_yuv2rgb(rs);
        init(ySize, uvSize, width, height);
    }

    private Allocation allocY, allocU, allocV, allocOut;

    @Override
    public void close() {
        if (allocY != null) allocY.destroy();
        if (allocU != null) allocU.destroy();
        if (allocV != null) allocV.destroy();
        if (allocOut != null) allocOut.destroy();
        bmp = null;
        allocY = null;
        allocU = null;
        allocV = null;
        allocOut = null;
        scriptC_yuv2rgb.destroy();
        scriptC_yuv2rgb = null;
        rs = null;
    }

    private void init(int ySize, int uvSize, int width, int height) {
        if (bmp == null || bmp.getWidth() != width || bmp.getHeight() != height) {
            bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            if (allocOut != null) allocOut.destroy();
            allocOut = null;
        }
        if (allocY == null || allocY.getBytesSize() != ySize) {
            if (allocY != null) allocY.destroy();
            Type.Builder yBuilder = new Type.Builder(rs, Element.U8(rs)).setX(ySize);
            allocY = Allocation.createTyped(rs, yBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocU == null || allocU.getBytesSize() != uvSize || allocV == null || allocV.getBytesSize() != uvSize ) {
            if (allocU != null) allocU.destroy();
            if (allocV != null) allocV.destroy();
            Type.Builder uvBuilder = new Type.Builder(rs, Element.U8(rs)).setX(uvSize);
            allocU = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
            allocV = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocOut == null || allocOut.getBytesSize() != width*height*4) {
            Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
            if (allocOut != null) allocOut.destroy();
            allocOut = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);
        }
    }

    @Retention(RetentionPolicy.SOURCE)
    // Enumerate valid values for this interface
    @IntDef({Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, Surface.ROTATION_270})
    // Create an interface for validating int types
    public @interface Rotation {}

    /**
     * Converts a YUV_420 image into a Bitmap.
     * @param yPlane  byte[] of Y, with pixel stride 1
     * @param uPlane  byte[] of U, with pixel stride 2
     * @param vPlane  byte[] of V, with pixel stride 2
     * @param yLine   line stride of Y
     * @param uvLine  line stride of U and V
     * @param width   width of the output image (note that it is swapped with height for portrait rotation)
     * @param height  height of the output image
     * @param rotation  rotation to apply. ROTATION_90 is for portrait back-facing camera.
     * @return RGBA_8888 Bitmap image.
     */

    public Bitmap YUV420toRGB(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                              int yLine, int uvLine, int width, int height,
                              @Rotation int rotation) {
        init(yPlane.length, uPlane.length, width, height);

        allocY.copyFrom(yPlane);
        allocU.copyFrom(uPlane);
        allocV.copyFrom(vPlane);
        scriptC_yuv2rgb.set_Width(width);
        scriptC_yuv2rgb.set_Height(height);
        scriptC_yuv2rgb.set_Yline(yLine);
        scriptC_yuv2rgb.set_UVline(uvLine);
        scriptC_yuv2rgb.set_Yplane(allocY);
        scriptC_yuv2rgb.set_Uplane(allocU);
        scriptC_yuv2rgb.set_Vplane(allocV);

        switch (rotation) {
            case Surface.ROTATION_0:
                scriptC_yuv2rgb.forEach_YUV420toRGB(allocOut);
                break;
            case Surface.ROTATION_90:
                scriptC_yuv2rgb.forEach_YUV420toRGB_90(allocOut);
                break;
            case Surface.ROTATION_180:
                scriptC_yuv2rgb.forEach_YUV420toRGB_180(allocOut);
                break;
            case Surface.ROTATION_270:
                scriptC_yuv2rgb.forEach_YUV420toRGB_270(allocOut);
                break;
        }

        allocOut.copyTo(bmp);
        return bmp;
    }
}

The key to performance is that the RenderScript can be initialized once (that is what YuvConverter.init() is for), and the subsequent conversion calls are very fast.
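
A rough usage sketch, assuming the frames come from an android.media.ImageReader configured for YUV_420_888 (the reader, converter and toByteArray helper names are placeholders for this example):

// Sketch: feeding one ImageReader frame into YuvConverter and rotating it to portrait.
Image img = imageReader.acquireLatestImage();
if (img != null) {
    Image.Plane[] planes = img.getPlanes();
    byte[] yBytes = toByteArray(planes[0].getBuffer());
    byte[] uBytes = toByteArray(planes[1].getBuffer());
    byte[] vBytes = toByteArray(planes[2].getBuffer());
    Bitmap bmp = yuvConverter.YUV420toRGB(
            yBytes, uBytes, vBytes,
            planes[0].getRowStride(),         // yLine
            planes[1].getRowStride(),         // uvLine
            img.getHeight(), img.getWidth(),  // width/height swapped for portrait output
            Surface.ROTATION_90);
    img.close();
    // ... draw bmp ...
}

// Helper assumed by the sketch: copy a ByteBuffer into a byte[].
static byte[] toByteArray(ByteBuffer buf) {
    byte[] bytes = new byte[buf.remaining()];
    buf.get(bytes);
    return bytes;
}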