NSOperation or GCD threading with animation

Asked: 2013-08-19 17:21:59

Tags: grand-central-dispatch nsoperation producer-consumer

I'm trying to run a series of old-school disk-based AVI animations (A, then B, then C...) back to back, with a nice transition between each pair.

I'm looking for a little guidance; I haven't done threading work in quite a while, and I've certainly never done anything with NSOperation or GCD.

These animations run at 30fps and typically last under a minute, with CoreImage-assisted transitions between them. Timing is critical and things are pretty tight, hence the need for multithreading. Since I'm reading from an SSD, my read rate is (theoretically) about double my consumption rate, but there's still a significant amount of post-read and pre-display processing that would hold up the whole pipeline if done on a single thread; not to mention that single-threading this would be a terrible idea anyway.

Here's the flow: first, read the starting frames of animation A (probably using a serial NSOperation queue for these reads). We now have the raw data in an NSMutableArray of objects. Then, for each frame that's been read, convert the data in the array to CoreImage format (using a similar-but-separate serial "render" queue, or as a completion handler on each frame read from disk); a rough sketch of this hand-off follows.
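As a minimal sketch of that two-queue hand-off (the queue names and the two helper functions are hypothetical, not from the original post), the shape being described seems to be:

// Two chained serial queues: one reads raw frames, one converts them.
// ReadNextFrameFromDisk() and ConvertRawFrameToCIImage() are hypothetical helpers.
dispatch_queue_t readQueue   = dispatch_queue_create("frameRead",   DISPATCH_QUEUE_SERIAL);
dispatch_queue_t renderQueue = dispatch_queue_create("frameRender", DISPATCH_QUEUE_SERIAL);
NSMutableArray* renderedFrames = [NSMutableArray array]; // only ever touched on renderQueue

dispatch_async(readQueue, ^{
    NSData* rawFrame = ReadNextFrameFromDisk();              // raw AVI frame bytes
    dispatch_async(renderQueue, ^{
        CIImage* image = ConvertRawFrameToCIImage(rawFrame); // CoreImage conversion
        [renderedFrames addObject: image];                   // safe: serial queue
    });
});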

(Wrinkle: if the animations aren't in AVI format, I'll use AVAssetImageGenerator's generateCGImagesAsynchronouslyForTimes: to produce the rendered results instead.)
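For reference, the AVAssetImageGenerator route might look roughly like this (movieURL and the single requested frame time are assumptions for illustration):

#import <AVFoundation/AVFoundation.h>

AVAsset* asset = [AVAsset assetWithURL: movieURL]; // movieURL: hypothetical source URL
AVAssetImageGenerator* generator = [[AVAssetImageGenerator alloc] initWithAsset: asset];

// Request the first frame at a 30fps timescale; a real caller would build one
// NSValue-wrapped CMTime per frame it wants.
NSArray* times = @[ [NSValue valueWithCMTime: CMTimeMake(0, 30)] ];

[generator generateCGImagesAsynchronouslyForTimes: times
                                completionHandler: ^(CMTime requestedTime, CGImageRef image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError* error) {
    if (result == AVAssetImageGeneratorSucceeded)
    {
        // Hand the CGImage off to the render/buffer stage here.
    }
}];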

Continue this process through the whole file as a producer-style queue, throttling once 2-3 seconds' worth of data has been loaded and converted. Treat the resulting array of image data as a circular bounded buffer.
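One common way to implement that throttle (a sketch, not from the original post; the 90-slot capacity is an assumption based on roughly 3 seconds at 30fps) is a counting semaphore sized to the buffer:

// Counting semaphore as a bounded-buffer throttle.
static const long kMaxBufferedFrames = 90; // assumption: ~3 seconds at 30fps
dispatch_semaphore_t bufferSlots = dispatch_semaphore_create(kMaxBufferedFrames);

// Producer side (on the read queue): block until the consumer frees a slot.
dispatch_semaphore_wait(bufferSlots, DISPATCH_TIME_FOREVER);
// ... read one frame, convert it, append it to the ring buffer ...

// Consumer side (after a frame is displayed): release that frame's slot.
dispatch_semaphore_signal(bufferSlots);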

Have a separate consumer queue (the CVDisplayLink vertical-blanking callback) that pulls items off the render queue. This will be the main thread at 60Hz. I'll draw the rendered image on the odd cycles and swap it in on the even cycles, for 30fps throughput.
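The odd/even pacing could be as simple as a tick counter in the display-link callback; this is only a sketch of that idea (the signature matches what CVDisplayLinkSetOutputCallback expects, but the draw/swap bodies are placeholders):

// 60Hz display-link callback paced down to 30fps via a tick counter.
static CVReturn PacedDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime, CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext)
{
    static uint64_t tick = 0; // shared across calls; fine for a single display link
    if ((tick++ % 2) == 0)
    {
        // Draw the next rendered frame off-screen.
    }
    else
    {
        // Swap the drawn frame in for display.
    }
    return kCVReturnSuccess;
}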

Once we're happy that animation "A" is running smoothly (say, after 5 seconds), start another serial queue to begin pairing frames for the upcoming transition... if animation "A" has "n" frames, read A's frame n-15 (half a second from the end), match it with the first frame of animation "B", and send both frames to CoreImage to be blended via the transition filter. Continue matching frame n-14 (A) with frame 2 (B), and so on. Obviously each of these frame reads also needs converting, and the results stored in a separate data structure. This builds up a nice upcoming half-second transition.
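For the blend itself, a CoreImage transition filter would fit; the original question's filter reference didn't survive, so CIDissolveTransition is used here purely as a plausible stand-in (frameFromA, frameFromB, i, and kFramesToOverlap are assumed to come from the pairing loop just described):

// Cross-fade one A/B frame pair with a CoreImage transition filter.
CIFilter* dissolve = [CIFilter filterWithName: @"CIDissolveTransition"];
[dissolve setValue: frameFromA forKey: kCIInputImageKey];       // frame n-15+i of A
[dissolve setValue: frameFromB forKey: kCIInputTargetImageKey]; // frame i+1 of B
[dissolve setValue: @((double)i / kFramesToOverlap) forKey: kCIInputTimeKey]; // 0..1 progress
CIImage* blendedFrame = dissolve.outputImage;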

When it's time to display the transition, display those transition frames, then continue displaying the rest of animation B... spin up animation C for its transition, and so on...

Any pointers on where to start?

1 Answer:

Answer 0 (score: 1)

Your situation is a little more complex than the one outlined in Apple's documentation, but that's still worth reading (and if you're still saying "Huh?" after reading it, read this answer) to understand the intended pattern. In short, the general idea is that the producer "drives" the chain, and GCD's hooks into the operating system help it make sure things get scheduled appropriately based on the state of various things in the kernel.

The problem with that approach, with respect to your question, is that it's not easy to let the producer side do the driving here, because your consumer is driven in real time by the vertical-blanking callback, not purely by the availability of consumable resources. The situation is further complicated by the inherently serial nature of the workflow: for instance, even if decoding frame data into images could in theory be parallelized, the images still have to be delivered serially to the next stage of the pipeline, and that's a case the GCD API doesn't handle well in a streaming scenario. (If you could have everything in memory at once, dispatch_apply would make this easy, but that cuts to the heart of the problem: you need to do this in a quasi-streaming environment.)
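For contrast, here's roughly what that easy all-in-memory dispatch_apply case would look like (rawFrames and the decode helper are hypothetical); writing to distinct slots of a plain C array keeps the parallel writes safe and preserves delivery order by index:

// Parallel decode of fully-in-memory frames; order is preserved by index.
size_t frameCount = rawFrames.count;                        // rawFrames: NSArray of NSData
CGImageRef* decoded = calloc(frameCount, sizeof(CGImageRef));
dispatch_apply(frameCount, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    decoded[i] = CreateImageFromRawFrame(rawFrames[i]);     // hypothetical decode helper
});
// decoded[] is now complete and in order; feed it serially to the next stage.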

While trying to think about how you might handle this, I came up with the following example, which attempts to simulate your situation using text files, where each line in a file is a "frame" of video and the two clips "cross-fade" by concatenating strings. There's a complete, working (at least for me) version of this code; it's intended to illustrate how you could build such a processing pipeline using GCD primitives in a (mostly) producer-driven pattern, while still interfacing with a CVDisplayLink-based consumer.

It's not bulletproof (for instance, among many other things, it doesn't tolerate a file containing fewer frames than are needed for the overlap), and it may completely fail to meet your real-time or memory-use bounds (those are hard for me to replicate and test without doing more work than I'm willing to do. :) ) It also doesn't attempt to address the issue I mentioned above, where you might be able to parallelize work that then has to be re-serialized before the next stage of the pipeline. (The code also assumes ARC.) With all those caveats, hopefully there are still some interesting/relevant ideas here. Here's the code:

static void DieOnError(int error);
static NSString* NSStringFromDispatchData(dispatch_data_t data);
static dispatch_data_t FrameDataFromAccumulator(dispatch_data_t* accumulator);
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime, CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext);

static const NSUInteger kFramesToOverlap = 15;

@implementation SOAppDelegate
{
    // Display link state
    CVDisplayLinkRef mDisplayLink;

    // State for our file reading process -- protected via mFrameReadQueue
    dispatch_queue_t mFrameReadQueue;
    NSUInteger mFileIndex; // keep track of what file we're reading
    dispatch_io_t mReadingChannel; // channel for reading
    dispatch_data_t mFrameReadAccumulator; // keep track of left-over data across read operations

    // State for processing raw frame data delivered by the read process - protected via mFrameDataProcessingQueue
    dispatch_queue_t mFrameDataProcessingQueue;
    NSMutableArray* mFilesForOverlapping;
    NSMutableArray* mFrameArraysForOverlapping;

    // State for blending frames (or passing them through)
    dispatch_queue_t mFrameBlendingQueue;

    // Delivery state
    dispatch_queue_t mFrameDeliveryQueue; // Is suspended/resumed to deliver one frame at a time
    dispatch_queue_t mFrameDeliveryStateQueue; // Protects access to the iVars
    dispatch_data_t mDeliveredFrame; // Data of the frame that has been delivered, but not yet picked up by the CVDisplayLink
    NSInteger mLastFrameDelivered; // Counter of frames delivered
    NSInteger mLastFrameDisplayed; // Counter of frames displayed
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    mFileIndex = 1;
    mLastFrameDelivered = -1;
    mLastFrameDisplayed = -1;

    mFrameReadQueue = dispatch_queue_create("mFrameReadQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDataProcessingQueue = dispatch_queue_create("mFrameDataProcessingQueue", DISPATCH_QUEUE_SERIAL);
    mFrameBlendingQueue = dispatch_queue_create("mFrameBlendingQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDeliveryQueue = dispatch_queue_create("mFrameDeliveryQueue", DISPATCH_QUEUE_SERIAL);
    mFrameDeliveryStateQueue = dispatch_queue_create("mFrameDeliveryStateQueue", DISPATCH_QUEUE_SERIAL);

    CVDisplayLinkCreateWithActiveCGDisplays(&mDisplayLink);
    CVDisplayLinkSetOutputCallback(mDisplayLink, &MyDisplayLinkCallback, (__bridge void*)self);

    [self readNextFile];
}

- (void)dealloc
{
    if (mDisplayLink)
    {
        if (CVDisplayLinkIsRunning(mDisplayLink))
        {
            CVDisplayLinkStop(mDisplayLink);
        }
        CVDisplayLinkRelease(mDisplayLink);
    }
}

- (void)readNextFile
{
    dispatch_async (mFrameReadQueue, ^{
        NSURL* url = [[NSBundle mainBundle] URLForResource: [NSString stringWithFormat: @"File%lu", mFileIndex++] withExtension: @"txt"];

        if (!url)
            return;

        if (mReadingChannel)
        {
            dispatch_io_close(mReadingChannel, DISPATCH_IO_STOP);
            mReadingChannel = nil;
        }

        // We don't care what queue the cleanup handler gets called on, because we know there's only ever one file being read at a time
        mReadingChannel = dispatch_io_create_with_path(DISPATCH_IO_STREAM, [[url path] fileSystemRepresentation], O_RDONLY|O_NONBLOCK, 0, mFrameReadQueue, ^(int error) {
            DieOnError(error);

            mReadingChannel = nil;

            // Start the next file
            [self readNextFile];
        });

        // We don't care what queue the read handlers get called on, because we know they're inherently serial
        dispatch_io_read(mReadingChannel, 0, SIZE_MAX, mFrameReadQueue, ^(bool done, dispatch_data_t data, int error) {
            DieOnError(error);

            // Grab frames
            dispatch_data_t localAccumulator = mFrameReadAccumulator ? dispatch_data_create_concat(mFrameReadAccumulator, data) : data;
            dispatch_data_t frameData = nil;
            do
            {
                frameData = FrameDataFromAccumulator(&localAccumulator);
                mFrameReadAccumulator = localAccumulator;
                [self processFrameData: frameData fromFile: url];
            } while (frameData);

            if (done)
            {
                dispatch_io_close(mReadingChannel, DISPATCH_IO_STOP);
            }
        });
    });
}

- (void)processFrameData: (dispatch_data_t)frameData fromFile: (NSURL*)file
{
    if (!frameData || !file)
        return;

    // We want the data blobs constituting each frame to be processed serially
    dispatch_async(mFrameDataProcessingQueue, ^{
        mFilesForOverlapping = mFilesForOverlapping ?: [NSMutableArray array];
        mFrameArraysForOverlapping = mFrameArraysForOverlapping ?: [NSMutableArray array];

        NSMutableArray* arrayToAddTo = nil;
        if ([file isEqual: mFilesForOverlapping.lastObject])
        {
            arrayToAddTo = mFrameArraysForOverlapping.lastObject;
        }
        else
        {
            arrayToAddTo = [NSMutableArray array];
            [mFilesForOverlapping addObject: file];
            [mFrameArraysForOverlapping addObject: arrayToAddTo];
        }

        [arrayToAddTo addObject: frameData];

        // We've gotten to file two, and we have enough frames to process the overlap
        if (mFrameArraysForOverlapping.count == 2 && [mFrameArraysForOverlapping[1] count] >= kFramesToOverlap)
        {
            NSMutableArray* fileOneFrames = mFrameArraysForOverlapping[0];
            NSMutableArray* fileTwoFrames = mFrameArraysForOverlapping[1];

            for (NSUInteger i = 0; i < kFramesToOverlap; ++i)
            {
                [self blendOneFrame:fileOneFrames[0] withOtherFrame: fileTwoFrames[0]];
                [fileOneFrames removeObjectAtIndex:0];
                [fileTwoFrames removeObjectAtIndex:0];
            }

            [mFilesForOverlapping removeObjectAtIndex: 0];
            [mFrameArraysForOverlapping removeObjectAtIndex: 0];
        }

        // We're pulling in frames from file 1, haven't gotten to file 2 yet, have more than enough to overlap
        while (mFrameArraysForOverlapping.count == 1 && [mFrameArraysForOverlapping[0] count] > kFramesToOverlap)
        {
            NSMutableArray* frameArray = mFrameArraysForOverlapping[0];
            dispatch_data_t first = frameArray[0];
            [mFrameArraysForOverlapping[0] removeObjectAtIndex: 0];
            [self blendOneFrame: first withOtherFrame: nil];
        }
    });
}

- (void)blendOneFrame: (dispatch_data_t)frameA withOtherFrame: (dispatch_data_t)frameB
{
    dispatch_async(mFrameBlendingQueue, ^{
        NSString* blendedFrame = [NSString stringWithFormat: @"%@%@", [NSStringFromDispatchData(frameA) stringByReplacingOccurrencesOfString: @"\n" withString:@""], NSStringFromDispatchData(frameB)];
        dispatch_data_t blendedFrameData = dispatch_data_create(blendedFrame.UTF8String, [blendedFrame lengthOfBytesUsingEncoding: NSUTF8StringEncoding], NULL, DISPATCH_DATA_DESTRUCTOR_DEFAULT); // use the UTF-8 byte length, not the character count
        [self deliverFrameForDisplay: blendedFrameData];
    });
}

- (void)deliverFrameForDisplay: (dispatch_data_t)frame
{
    // By suspending the queue from within the block, and by virtue of this being a serial queue, we guarantee that
    // only one task will get called for each call to dispatch_resume on the queue...

    dispatch_async(mFrameDeliveryQueue, ^{
        dispatch_suspend(mFrameDeliveryQueue);
        dispatch_sync(mFrameDeliveryStateQueue, ^{
            mLastFrameDelivered++;
            mDeliveredFrame = frame;
        });

        if (!CVDisplayLinkIsRunning(mDisplayLink))
        {
            CVDisplayLinkStart(mDisplayLink);
        }
    });
}

- (dispatch_data_t)getFrameForDisplay
{
    __block dispatch_data_t frameData = nil;
    dispatch_sync(mFrameDeliveryStateQueue, ^{
        if (mLastFrameDelivered > mLastFrameDisplayed)
        {
            frameData = mDeliveredFrame;
            mDeliveredFrame = nil;
            mLastFrameDisplayed = mLastFrameDelivered;
        }
    });

    // At this point, I've either got the next frame or I dont...
    // resume the delivery queue so it will deliver the next frame
    if (frameData)
    {
        dispatch_resume(mFrameDeliveryQueue);
    }

    return frameData;
}

@end

static void DieOnError(int error)
{
    if (error)
    {
        NSLog(@"Error in %s: %s", __PRETTY_FUNCTION__, strerror(error));
        exit(error);
    }
}

static NSString* NSStringFromDispatchData(dispatch_data_t data)
{
    if (!data || !dispatch_data_get_size(data))
        return @"";

    const char* buf = NULL;
    size_t size = 0;
    dispatch_data_t notUsed = dispatch_data_create_map(data, (const void**)&buf, &size);
#pragma unused(notUsed)
    NSString* str = [[NSString alloc] initWithBytes: buf length: size encoding: NSUTF8StringEncoding];
    return str;
}

// Peel off a frame if there is one, and put the left-overs back.
static dispatch_data_t FrameDataFromAccumulator(dispatch_data_t* accumulator)
{
    __block dispatch_data_t frameData = dispatch_data_create(NULL, 0, NULL, NULL); // empty
    __block dispatch_data_t leftOver = dispatch_data_create(NULL, 0, NULL, NULL); // empty

    __block BOOL didFindFrame = NO;
    dispatch_data_apply(*accumulator, ^bool(dispatch_data_t region, size_t offset, const void *buffer, size_t size) {
        ssize_t newline = -1;
        for (size_t i = 0; !didFindFrame && i < size; ++i)
        {
            if (((const char *)buffer)[i] == '\n')
            {
                newline = i;
                break;
            }
        }

        if (newline == -1)
        {
            if (!didFindFrame)
            {
                frameData = dispatch_data_create_concat(frameData, region);
            }
            else
            {
                leftOver = dispatch_data_create_concat(leftOver, region);
            }
        }
        else if (newline >= 0)
        {
            didFindFrame = YES;
            frameData = dispatch_data_create_concat(frameData, dispatch_data_create_subrange(region, 0, newline + 1));
            leftOver = dispatch_data_create_concat(leftOver, dispatch_data_create_subrange(region, newline + 1, size - newline - 1));
        }

        return true;
    });

    *accumulator = leftOver;

    return didFindFrame ? frameData : nil;
}

static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink, const CVTimeStamp* now, const CVTimeStamp* outputTime, CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext)
{
    SOAppDelegate* self = (__bridge SOAppDelegate*)displayLinkContext;

    dispatch_data_t frameData = [self getFrameForDisplay];

    NSString* dataAsString = NSStringFromDispatchData(frameData);

    if (dataAsString.length == 0)
    {
        NSLog(@"Dropped frame...");
    }
    else
    {
        NSLog(@"Drawing frame in CVDisplayLink. Contents: %@", dataAsString);
    }

    return kCVReturnSuccess;
}

In theory, GCD should balance these queues for you. For example, if letting the "producer" queues proceed caused memory use to climb, GCD would (in theory) start letting the other queues through while holding back the producer queue. In practice, this mechanism is opaque to us, so who knows how well it will work for you under real-world conditions, especially in the face of your real-time constraints.

If anything specific here is unclear, please post a comment and I'll try to elaborate.
