OpenCV Python, reading video from a named pipe

Date: 2016-02-02 23:56:24

Tags: python opencv stream pipe raspberry-pi2

I am trying to reproduce the result shown in this video (method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o

The goal is to stream video from a Raspberry Pi to an Ubuntu PC and process it there with OpenCV and Python.

I use the command

raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777

to stream the video to my PC. On the PC I created a named pipe 'fifo' and redirected the netcat output into it:

 nc -l -p 5777 -v > fifo
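For this redirection to work, the named pipe has to exist before the listener starts; it is created with mkfifo (a minimal setup sketch):

```shell
# Create the named pipe once; without this, the '>' redirection would
# produce a regular file rather than a FIFO.
mkfifo fifo
# The first character 'p' in the listing marks it as a named pipe.
ls -l fifo
```

After that, the nc listener can be started and will block until the Pi connects.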

Then I try to read the pipe and display the result in a Python script:

import cv2
import sys

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        continue  # avoid passing an empty frame to imshow

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

However, I end up with an error:

[mp3 @ 0x18b2940] Header missing. This error is produced by the line video_capture = cv2.VideoCapture(r'fifo').

When I instead redirect the netcat output on the PC to a file and then read that file from Python, the video works, but it plays back roughly 10 times too fast.

I know the problem lies in the Python script, because the nc transfer itself works (to a file), but I cannot find any clue.

How can I achieve the result shown in the referenced video (method 3)?

2 Answers:

Answer 0: (score: 3)

I also wanted to reproduce the result in that video. Initially I tried a similar approach, but it seems cv2.VideoCapture() cannot read from a named pipe directly; some more preprocessing is required.

ffmpeg is the way to go! You can install and compile ffmpeg by following the instructions at this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

Once the installation is complete, you can change your code:

import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [FFMPEG_BIN,
           '-i', 'fifo',          # the named pipe as input
           '-pix_fmt', 'bgr24',   # opencv uses the bgr24 pixel format
           '-vcodec', 'rawvideo',
           '-an', '-sn',          # disable audio and subtitle processing
           '-f', 'image2pipe', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame: read exactly one 640x480 bgr24 frame
    raw_image = pipe.stdout.read(640 * 480 * 3)
    if len(raw_image) < 640 * 480 * 3:
        break  # stream ended
    # Transform the bytes into a numpy array and reshape it;
    # note that height comes first, then width
    image = numpy.frombuffer(raw_image, dtype='uint8').reshape((480, 640, 3))
    cv2.imshow('Video', image)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
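A caveat worth noting with this approach: the raw bgr24 stream coming out of ffmpeg has no headers or frame delimiters, so each read must cover exactly one frame's worth of bytes, width × height × 3 (a sketch, assuming the 640x480 geometry from the question):

```python
# Bytes per frame in a raw bgr24 stream: one byte per channel,
# three channels per pixel, nothing between frames.
def frame_bytes(width, height, channels=3):
    return width * height * channels

# The 640x480 geometry used in the question:
print(frame_bytes(640, 480))  # 921600
```

If the read size and the actual ffmpeg output geometry disagree, the frames come out sheared or the reshape fails, so keep the raspivid -w/-h values and this number in sync.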

There is no need to change anything else in the script on the Raspberry Pi side.

This worked like a charm for me, and the video latency is negligible. Hope it helps.

Answer 1: (score: 0)

I was running into a similar problem. After some research I eventually stumbled upon the following:

Skip to the solution: https://stackoverflow.com/a/48675107/2355051

I ended up adapting this picamera python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera

# Connect a client socket to the server on port 777 (change 10.0.0.3 to
# the address of your processing machine)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()

            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())

            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
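The wire format used by these two scripts is simply a 4-byte little-endian length prefix followed by the JPEG bytes, with a zero length marking end-of-stream. A self-contained sketch of that framing (the helper names here are illustrative, not from the recipe):

```python
import io
import struct

def frame_message(payload):
    # Prefix the payload with its length as a 4-byte little-endian
    # unsigned int, the same framing createStream.py writes to the socket.
    return struct.pack('<L', len(payload)) + payload

def read_message(stream):
    # Read one length-prefixed message back, as processStream.py does;
    # a zero length is the end-of-stream marker.
    length = struct.unpack('<L', stream.read(struct.calcsize('<L')))[0]
    if not length:
        return None
    return stream.read(length)

# Round-trip two messages through an in-memory buffer.
buf = io.BytesIO(frame_message(b'jpeg-bytes') + struct.pack('<L', 0))
print(read_message(buf))  # b'jpeg-bytes'
print(read_message(buf))  # None (end-of-stream)
```

Sending the size first keeps the receiver simple: it always knows exactly how many bytes the next JPEG occupies, so no delimiter scanning is needed.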

On the machine that processes the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the JPEG bytes directly with OpenCV; the intermediate
        # PIL Image step from the original recipe is not needed here
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not display a frame unless waitKey is called
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up the display window once the stream ends

This solution gives results comparable to the video I referenced in the original question. Larger frame resolutions increase the latency of the feed, but this is tolerable for the purposes of my application.

You need to run processStream.py first, and then execute createStream.py on the Raspberry Pi.
