Landscape screenshot returns a portrait-mode image

Date: 2014-06-17 05:28:17

Tags: ios opengl-es avcapturesession vuforia

I am taking an OpenGL screenshot. If the camera is in portrait mode, taking a snapshot returns a portrait image, as expected. However, if I rotate the camera from portrait to landscape and then take the screenshot, it still returns only a portrait-mode screenshot, even though my camera view shows the live stream full-screen in landscape and the screenshot is saved at 1024x768.

ImageTargetsEAGLView.mm:

- (BOOL)presentFramebuffer
{
    if (_takePhotoFlag1)
    {
        // Capture the frame once and reuse the result; the original code
        // called glToUIImage1 three times, reading the framebuffer three times
        UIImage *screenshot = [self glToUIImage1];
        UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
        NSLog(@"Screenshot size: %d, %d",
              (int)screenshot.size.width, (int)screenshot.size.height);

        _takePhotoFlag1 = NO;
    }

    // setFramebuffer must have been called before presentFramebuffer, therefore
    // we know the context is valid and has been set for this (render) thread

    // Bind the colour render buffer and present it
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);

    return [context presentRenderbuffer:GL_RENDERBUFFER];
}





- (UIImage *)glToUIImage1
{
    UIImage *outputImage = nil;

    // The landscape and portrait branches were identical except for the
    // hard-coded capture rect, so only the rect selection is branched here
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    CGRect s;
    if (UIInterfaceOrientationIsLandscape(orientation))
    {
        NSLog(@"landscape screen");
        s = CGRectMake(0, 0, 1024, 768);
    }
    else
    {
        NSLog(@"portrait screen");
        s = CGRectMake(0, 0, 768, 1024);
    }

    // Read the raw RGBA pixels back from the currently bound framebuffer
    uint8_t *buffer = (uint8_t *)malloc(s.size.width * s.size.height * 4);
    glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer,
                                                         s.size.width * s.size.height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                    colorSpace, kCGBitmapByteOrderDefault, ref,
                                    NULL, true, kCGRenderingIntentDefault);

    size_t width  = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    uint32_t *pixels = (uint32_t *)malloc(width * height * 4);
    CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                  CGImageGetColorSpace(iref),
                                                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

    // OpenGL's origin is bottom-left while Core Graphics' is top-left,
    // so flip the image vertically while drawing it into the bitmap context
    CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0f, height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(context1, transform);
    CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);

    CGImageRef outputRef = CGBitmapContextCreateImage(context1);
    outputImage = [UIImage imageWithCGImage:outputRef];

    // Release everything we created (the original leaked the colour space)
    CGDataProviderRelease(ref);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(iref);
    CGContextRelease(context1);
    CGImageRelease(outputRef);
    free(pixels);
    free(buffer);

    return outputImage;
}

1 Answer:

Answer 0 (score: 0)

I didn't hit the same problem with your code, but I did get artifacts and so on. I suggest using the approach found here: http://www.unagames.com/blog/daniele/2011/10/opengl-es-screenshots-ios It worked perfectly for me as a replacement for yours (if you are using a GLKViewController, you just pass it as the eaglview self). It actually pulls the correct screenshot size from the OpenGL ES context, so you know it is always right ^^
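The key idea behind that approach, and the likely fix for the hard-coded 768x1024 / 1024x768 rects above, is to ask OpenGL ES for the actual size of the renderbuffer you are reading instead of guessing it from the interface orientation. A hedged fragment (assumes `colorRenderbuffer` is the colour renderbuffer being presented; this is a sketch, not a complete method):

```objc
// Query the real framebuffer dimensions from the bound renderbuffer
GLint fbWidth = 0, fbHeight = 0;
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &fbWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &fbHeight);

// Read exactly that many pixels; no orientation branching needed
uint8_t *buffer = (uint8_t *)malloc((size_t)fbWidth * fbHeight * 4);
glReadPixels(0, 0, fbWidth, fbHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
```

If the renderbuffer itself is not resized on rotation, the screenshot size will not change either, which matches the behaviour described in the question.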