
Preferred Block Size When Reading/Writing Big Binary Files

I need to read and write huge binary files. Is there a preferred or even optimal number of bytes (what I call BLOCK_SIZE) that I should read() at a time? One byte is certainly too little.
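For concreteness, this is the kind of loop I mean; BLOCK_SIZE and process() here are just placeholders for whatever value and per-block work I end up with:

def process(block):
    ...                                 # placeholder for the real per-block work

BLOCK_SIZE = 64 * 1024                  # an arbitrary guess; what should this be?

with open('input_file', 'rb') as f:
    while True:
        block = f.read(BLOCK_SIZE)      # read at most BLOCK_SIZE bytes
        if not block:                   # b'' signals end of file
            break
        process(block)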

Solution 1:

Let the OS make the decision for you. Use the mmap module:

https://docs.python.org/3/library/mmap.html

It uses your OS's underlying memory-mapping mechanism to map the contents of a file into your process's address space.

Be aware that there's a 2GB file size limit if you're using 32-bit Python, so be sure to use the 64-bit version if you decide to go this route.
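If you need to check which build you are running, the Python docs suggest looking at sys.maxsize, which exceeds 2**32 only on a 64-bit interpreter. A minimal sketch:

import sys

# On a 64-bit build of CPython, sys.maxsize is larger than 2**32.
if sys.maxsize <= 2**32:
    raise RuntimeError("32-bit Python: mapping multi-GB files will fail")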

For example:

import mmap
import os

f1 = open('input_file', 'r+b')
m1 = mmap.mmap(f1.fileno(), 0)          # map the whole input file
f2 = open('out_file', 'a+b')            # 'a+b': create if missing, never truncate
if os.fstat(f2.fileno()).st_size == 0:
    f2.write(b'\0')                     # mmap cannot map an empty file
    f2.flush()
m2 = mmap.mmap(f2.fileno(), 0)
m2.resize(len(m1))                      # grow the output mapping to the input's size
m2[:] = m1                              # copy input_file to out_file
m2.flush()                              # push the changes back to disk

Note that you never had to call read() or decide how many bytes to bring into RAM. This example just copies one file into another, but, as in your example, you can do whatever processing you need in between. Also note that while the entire file is mapped into an address space, that doesn't mean it has actually been copied into RAM: it is paged in piecewise, at the discretion of the OS.
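For instance, here is a minimal sketch of doing some processing through the mapping, reusing m1 from the example above; the newline count is just a stand-in for whatever work you actually need:

# Scan the mapping a slice at a time; the OS pages the file in on demand,
# so the whole file is never resident at once.
newlines = 0
view_size = 1 << 20                     # look at 1 MiB of the mapping per step
for offset in range(0, len(m1), view_size):
    newlines += m1[offset:offset + view_size].count(b'\n')
print(newlines)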
