
How To Share OpenCV Images Between Two Python Programs?

I have three Python files: glob_var.py, read_cam.py, read_globVar.py. Their contents are as below: glob_var.py: globVar = {} def set(name, val): globVar[name] = val def get(na

Solution 1:

You could use Redis to do this. It is a very fast, in-memory data structure server that can store strings, hashes, lists (which work well as queues), sets and sorted sets; images and other binary data are stored as ordinary string values. It is free and simple to install on macOS, Linux and Windows.

Also, you can read and write Redis values from bash, Python, PHP, C/C++ and many other languages. Furthermore, you can read or write to a server across the network or across the world; just change the IP address in the initial connection. So, effectively, you could acquire images in Python on a Raspberry Pi under Linux, then store and process them on your PC under Windows in C/C++.
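
For example, with redis-py the only change needed to talk to a server on another machine is the host in the initial connection (the address below is just a placeholder):

import redis

# Connect to a Redis server elsewhere on the network (placeholder address)
r = redis.Redis(host='192.168.0.10', port=6379, db=0)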

Then you just put your images into Redis, named as Camera1 or Entrance, or put them in a sorted set so you can buffer images by frame number. You can also give images (or other data structures) a "Time-To-Live" so that your RAM doesn't fill up.
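
For example, redis-py's set() call accepts an expiry time, so stale frames clean themselves up (a sketch; the key name and the 10-second TTL are arbitrary choices):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
# Store the frame bytes and let Redis expire the key after 10 seconds
r.set('Camera1', b'...frame bytes...', ex=10)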

Here's the bones of your code roughly rewritten to use Redis. No serious error checking or flexibility built in for the moment. It all runs fine.

Here is read_cam.py:

#!/usr/bin/env python3
import cv2
import struct
import redis
import numpy as np

def toRedis(r, a, n):
    """Store given Numpy array 'a' in Redis under key 'n'"""
    h, w = a.shape[:2]
    shape = struct.pack('>II', h, w)
    encoded = shape + a.tobytes()

    # Store encoded data in Redis
    r.set(n, encoded)
    return

if __name__ == '__main__':

    # Redis connection
    r = redis.Redis(host='localhost', port=6379, db=0)

    cam = cv2.VideoCapture(0)
    key = 0
    while key != 27:
        ret, img = cam.read()
        cv2.imshow('img', img)

        key = cv2.waitKey(1) & 0xFF
        toRedis(r, img, 'image')

And here is read_globVar.py:

#!/usr/bin/env python3
import cv2
import struct
import redis
import numpy as np

def fromRedis(r, n):
    """Retrieve Numpy array from Redis key 'n'"""
    encoded = r.get(n)
    h, w = struct.unpack('>II', encoded[:8])
    a = np.frombuffer(encoded, dtype=np.uint8, offset=8).reshape(h, w, 3)
    return a

if __name__ == '__main__':
    # Redis connection
    r = redis.Redis(host='localhost', port=6379, db=0)

    key = 0
    while key != 27:
        img = fromRedis(r, 'image')

        print(f"read image with shape {img.shape}")
        cv2.imshow('image', img)
        key = cv2.waitKey(1) & 0xFF

Note that you could equally store the image height and width in a JSON and store that in Redis instead of the struct.pack and struct.unpack stuff I did.
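
If you went the JSON route, a minimal sketch might look like this (the separate 'image_shape' key is a hypothetical name of mine, and uint8 pixels are assumed):

import json
import numpy as np
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def toRedisJSON(r, a, n):
    """Store array bytes under key 'n' and its shape as JSON under 'n_shape'"""
    r.set(n, a.tobytes())
    r.set(n + '_shape', json.dumps(a.shape))

def fromRedisJSON(r, n):
    """Rebuild the array from the raw bytes plus the JSON shape"""
    shape = tuple(json.loads(r.get(n + '_shape')))
    return np.frombuffer(r.get(n), dtype=np.uint8).reshape(shape)

Note that storing the pixels and the shape under two separate keys is not atomic, so a reader could see a mismatched pair; packing everything into a single value, as above, avoids that.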

Note too that you could encode your image as a JPEG in memory and store the JPEG in Redis (instead of a Numpy array) and that might save memory and network bandwidth.
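
A sketch of that JPEG variant, using OpenCV's in-memory codec (the 'image_jpg' key name is an assumption):

import cv2
import numpy as np
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def jpegToRedis(r, img, n):
    """JPEG-encode 'img' in memory and store the bytes under key 'n'"""
    ok, jpg = cv2.imencode('.jpg', img)
    if ok:
        r.set(n, jpg.tobytes())

def jpegFromRedis(r, n):
    """Fetch JPEG bytes from key 'n' and decode back to a BGR image"""
    data = np.frombuffer(r.get(n), dtype=np.uint8)
    return cv2.imdecode(data, cv2.IMREAD_COLOR)

The trade-off is encode/decode CPU time against memory and network bandwidth.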

Either way, the concept of using Redis is the same.

Solution 2:

You can use a shared array from Python's multiprocessing module to quickly share large volumes of data between processes. Unlike my Redis answer above, I don't have complete, tested code for you, but I have enough to hopefully get you started.

So you would use:

from multiprocessing import Process, Queue
from multiprocessing.sharedctypes import Array
from ctypes import c_uint8
import numpy as np

Then in your main, you would declare a large Array, probably big enough for, say, 2-4 of your large images:

bufShape = (1080, 1920, 3)  # 1080p frames

and

# Create zeroed out shared array
buffer = Array(c_uint8, bufShape[0] * bufShape[1] * bufShape[2])
# Make into numpy array
buf_arr = np.frombuffer(buffer.get_obj(), dtype=c_uint8)
buf_arr.shape = bufShape

# Create a list of workers
workers = [Worker(1, buffer, str(i)) for i in range(2)]

# Start the workers
for worker in workers:
    worker.start()

Then you would derive your workers from the Process class like this:

class Worker(Process):
    def __init__(self, q_size, buffer, name=''):
        super().__init__()
        self.queue = Queue(q_size)
        self.buffer = buffer
        self.name = name

    def run(self):
        buf_arr = np.frombuffer(self.buffer.get_obj(), dtype=c_uint8)
        buf_arr.shape = bufShape
        while True:
            item = self.queue.get()
            ...

You can see at the start of run() that the worker simply wraps the big shared buffer in a Numpy array, so it reads the same memory the main program writes. You still need to synchronise access, e.g. so that a worker is reading frame 1 while main is writing frames 2-4.

The main program can then tell a worker that a frame of data is ready by putting just the frame index on the worker's queue (rather than sending the whole frame itself):

worker.queue.put(i)
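
Putting the pieces together, a minimal runnable sketch might look like the following. The ring-buffer depth nFrames, the sentinel shutdown and the synthetic frames are my assumptions, and the synchronisation is deliberately naive:

from multiprocessing import Process, Queue
from multiprocessing.sharedctypes import Array
from ctypes import c_uint8
import numpy as np

bufShape = (1080, 1920, 3)    # 1080p frames
nFrames = 4                   # assumed ring-buffer depth

class Worker(Process):
    def __init__(self, q_size, buffer, name=''):
        super().__init__()
        self.queue = Queue(q_size)
        self.buffer = buffer
        self.name = name

    def run(self):
        # Wrap the shared buffer as a ring of nFrames images
        buf_arr = np.frombuffer(self.buffer.get_obj(), dtype=c_uint8)
        buf_arr.shape = (nFrames,) + bufShape
        while True:
            i = self.queue.get()       # frame index, not the frame itself
            if i is None:              # sentinel: time to stop
                break
            frame = buf_arr[i % nFrames]
            print(f"worker {self.name}: frame {i}, mean {frame.mean():.1f}")

if __name__ == '__main__':
    # One zeroed shared array holding nFrames images back-to-back
    buffer = Array(c_uint8, nFrames * bufShape[0] * bufShape[1] * bufShape[2])
    buf_arr = np.frombuffer(buffer.get_obj(), dtype=c_uint8)
    buf_arr.shape = (nFrames,) + bufShape

    workers = [Worker(1, buffer, str(i)) for i in range(2)]
    for worker in workers:
        worker.start()

    for i in range(8):                 # stand-in for the capture loop
        buf_arr[i % nFrames][:] = i    # pretend this is a captured frame
        workers[i % 2].queue.put(i)    # hand over just the index

    for worker in workers:
        worker.queue.put(None)         # tell each worker to finish
        worker.join()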

Solution 3:

I have written an example of how to share images using memory-mapped file here: https://github.com/off99555/python-mmap-ipc

Memory-mapped files are available in most languages. The basic idea is that we write the image to a virtual file and then read it from another process. The latency is around 3-4 ms, which is small compared to the latency inherent in the camera itself. This approach is faster than network protocols such as TCP/IP or HTTP; I have also tested gRPC and ZeroMQ, and both are slower than the memory-mapped file approach.
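
The linked repository has the complete example; a minimal sketch of the same idea using Python's built-in mmap module (the file name, frame size and single-frame layout are my assumptions, and there is no reader/writer handshake) might look like this:

# --- writer.py: capture side ---
import mmap
import numpy as np

shape = (480, 640, 3)                          # assumed frame size
nbytes = shape[0] * shape[1] * shape[2]

with open('frame.dat', 'wb') as f:             # pre-size the backing file
    f.write(b'\x00' * nbytes)

with open('frame.dat', 'r+b') as f, mmap.mmap(f.fileno(), nbytes) as mm:
    frame = np.full(shape, 128, dtype=np.uint8)  # stand-in for cam.read()
    mm[:] = frame.tobytes()                      # publish the frame

# --- reader.py: consumer side ---
import mmap
import numpy as np

shape = (480, 640, 3)
nbytes = shape[0] * shape[1] * shape[2]

with open('frame.dat', 'rb') as f, mmap.mmap(f.fileno(), nbytes, access=mmap.ACCESS_READ) as mm:
    img = np.frombuffer(bytes(mm), dtype=np.uint8).reshape(shape)
    print(img.shape, img.mean())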
