Memory of copied PIL Images is not released #7935
Pillow's memory allocator doesn't necessarily release the memory in the pool back as soon as an image is destroyed, as it uses that memory pool for future allocations. See Storage.c (https://github.com/python-pillow/Pillow/blob/main/src/libImaging/Storage.c#L310) for the implementation. If you repeatedly open and close an image, you should not see the memory increase, but it won't necessarily drop between destruction and the next allocation.
It looks like it caches 0 blocks by default though (see src/libImaging/Storage.c, lines 260 to 271 at aeeb596).
And you can set the number of blocks to cache with the `PILLOW_BLOCKS_MAX` environment variable or `PIL.Image.core.set_blocks_max()` (see lines 3624 to 3656 at aeeb596).
There's a docs page for this actually: https://pillow.readthedocs.io/en/stable/reference/block_allocator.html
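For reference, a minimal sketch (not from the thread) of inspecting and tuning the block allocator at runtime; the value 64 is just a placeholder, and the same effect can be had at startup via the environment variables described on that docs page:

```python
import PIL.Image

# Current block-allocator settings (blocks max defaults to 0, i.e. no cached blocks).
print('alignment :', PIL.Image.core.get_alignment())
print('block size:', PIL.Image.core.get_block_size())
print('blocks max:', PIL.Image.core.get_blocks_max())

# Let Pillow keep up to 64 freed blocks in its pool for reuse instead of
# returning them immediately; PILLOW_BLOCKS_MAX does the same at startup.
PIL.Image.core.set_blocks_max(64)
```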
Thanks for the quick answers! Indeed, when I set the `PILLOW_BLOCKS_MAX` environment variable I can reproduce the behaviour described above.
This may or may not help - in your original code you open an image and don't close it. It is recommended instead that you either call `close()` explicitly or use the image as a context manager (see src/PIL/Image.py, lines 560 to 565 at e8ab564).
Edit: I see you've mentioned 'closing images' in your comments, so this remark can just be for reference to others.
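For readers landing here, a minimal sketch of the two recommended patterns (the file name is just a placeholder):

```python
from PIL import Image

# Option 1: the context manager closes the file automatically.
with Image.open('test_image.png') as img:
    img.load()  # force the decode while the file is still open

# Option 2: close explicitly when done.
img = Image.open('test_image.png')
try:
    img.load()
finally:
    img.close()
```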
I didn't catch this before, but what you're doing is basically opening 50 copies of an image and keeping them all. Can you show us a flow where you expect constant memory usage?
Yes, in the first loop I open the image 50 times and hold 50 copies, so the memory usage increases, which is OK. Manually closing the image copy via `image.close()` does not release the memory either. Here is the full script:

```python
import gc
import os
import sys
import time

import PIL.Image
import psutil

FILE = './test_image.png'


def LogMemory():
    pid = os.getpid()
    rss = 0
    for mmap in psutil.Process(pid).memory_maps():
        # All memory that this process holds in RAM. RSS = USS + Shared.
        rss += mmap.rss
    return rss


class Container:
    def __init__(self):
        self.value = None

    def SetValue(self, value):
        self.value = self._copyValue(value)

    def GetValue(self):
        return self._copyValue(self.value)

    def _copyValue(self, value):
        return value.copy()


if __name__ == '__main__':
    print(f'Using Python {sys.version}, PIL {PIL.__version__}')
    print(f'PILLOW_ALIGNMENT: {PIL.Image.core.get_alignment()}')
    print(f'PILLOW_BLOCK_SIZE: {PIL.Image.core.get_block_size()}')
    print(f'PILLOW_BLOCKS_MAX: {PIL.Image.core.get_blocks_max()}')

    containers = []
    for i in range(50):
        before = LogMemory()
        img = PIL.Image.open(FILE)
        # img = [1] * (1920*1080*3)  # this works!
        container = Container()
        container.SetValue(img)
        containers.append(container)
        after = LogMemory()
        print(f'Loaded image {i} took {after-before} bytes')

    for i in range(len(containers)):
        before = LogMemory()
        containers.pop()
        time.sleep(0.1)
        after = LogMemory()
        print(f'popped container {i} released {before-after} bytes')

    print('Delete list')
    before = LogMemory()
    containers = None
    gc.collect()
    after = LogMemory()
    print(f'Finally released {before-after} bytes')
```
Running the code with `PILLOW_BLOCKS_MAX` left at its default of 0, the memory usage decreases with every `containers.pop()`. But when I run with a larger `PILLOW_BLOCKS_MAX`, the used memory doesn't decrease while popping the containers from the list. So setting `PILLOW_BLOCKS_MAX` explains the difference.
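For comparison with the script above, a flow like the following, which keeps no reference to the opened images, would be expected to show roughly constant memory usage (a sketch, assuming the same `test_image.png` and the default `PILLOW_BLOCKS_MAX` of 0):

```python
import PIL.Image

FILE = './test_image.png'

for i in range(50):
    # Open, decode and close the image inside the loop body; nothing keeps a
    # reference across iterations, so the pixel buffer can be freed each time.
    with PIL.Image.open(FILE) as img:
        img.load()
        print(f'iteration {i}: size={img.size}')
```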
I have a similar problem when loading and rotating around 2k images. My code is similar to the following:

```python
frames: list[PIL.Image.Image] = load_images(...)
# After loading the frames, my memory usage is under 500MB

for idx, frame in enumerate(frames):
    # This is where a lot of memory is allocated
    rotated_frame = frame.rotate(-90, expand=True)
    rotated_frame.save(work_dir / f"{idx:0>4d}.jpeg")
    # I expect the memory to be freed for every loop iteration (per image)
```

Below is a flamegraph of my program created with memray. As you can see, the memory is not freed and reaches over 6GB at the end.
I think you've got something else going on, depending on what's in `load_images`. If you do this:

```python
frames: list[PIL.Image.Image] = load_images(...)

for idx, frame in enumerate(frames):
    frame.load()
```

Does the memory profile look the same?
Essentially, `open()` is lazy: it only reads the metadata, and the actual image data is read and allocated when `load()` is called (which happens implicitly during operations like `rotate()` or `save()`).
Good point, it looks the same. So I guess this means that the memory is used by the decoded image data itself, not by `rotate()`. Either way, it seems that I can only mitigate the high memory usage by not having all images in a list, right?
I was able to bring down the memory usage to ~500MB by calling `close()` on the images once they are no longer needed:

```python
frames: list[PIL.Image.Image] = load_images(...)

for idx, frame in enumerate(frames):
    rotated_frame = frame.rotate(-90, expand=True)
    rotated_frame.save(work_dir / f"{idx:0>4d}.jpeg")
    # UPDATE: Free the memory
    frame.close()
    rotated_frame.close()
```

Thanks for the clarification and your help, @wiredfool! 🙂
The other thing you could do is make `load_images` return a generator instead of a list, so that you're only actually opening one image at a time.
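A sketch of what that could look like, assuming the frames are PNG files in a directory; `iter_images`, the paths and the glob pattern are made up for illustration:

```python
from collections.abc import Iterator
from pathlib import Path

import PIL.Image


def iter_images(directory: Path) -> Iterator[PIL.Image.Image]:
    """Yield one decoded image at a time instead of building a full list."""
    for path in sorted(directory.glob('*.png')):
        with PIL.Image.open(path) as img:
            img.load()  # decode now, while the file is open
            yield img   # the file handle is released when the generator resumes


work_dir = Path('out')
work_dir.mkdir(exist_ok=True)
for idx, frame in enumerate(iter_images(Path('frames'))):
    rotated = frame.rotate(-90, expand=True)
    rotated.save(work_dir / f'{idx:0>4d}.jpeg')
    rotated.close()
    frame.close()
```

This way only one decoded source frame (plus its rotated copy) is alive at any point, instead of the whole list.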
Sorry, I also have to dig out the topic again because we still have some problems on Linux. I created this simple script, which loads an image (4000x2250 pixels) 100 times and saves a copy to a list. Afterwards the elements are popped from the list one after another. On Windows the used memory of the process slowly decreases while the image copies are deleted in the second loop (as intended). On Linux the memory stays in use and is only released when the Python process finishes:

```python
import gc
import time

import PIL
from PIL import Image

FILE = './test_image_medium.png'

if __name__ == '__main__':
    print(f'Using Pillow version: {PIL.__version__} from {PIL.__file__}')

    buffer = []
    for i in range(100):
        with Image.open(FILE) as img:
            img.load()  # open is a lazy operation
            buffer.append(img.copy())
        print(f'Loaded image {i}')

    for i in range(len(buffer)):
        item = buffer.pop()
        # item.close()
        print(f'Popped {i}')
        gc.collect()
        time.sleep(0.1)

    print('Delete buffer')
    del buffer
    time.sleep(1)
    gc.collect()
    time.sleep(5)
    print('Done')
```

The really weird thing is: when I delete the compiled *.pyc byte code (the `__pycache__` folders) and run the script again, the memory is released on Linux as well. Could this be a problem of the CPython byte code compiler, or does Pillow do some special stuff when loaded from *.pyc byte code?
What did you do?
Our application works with PIL images and holds a list of containers. Every container has a copy of the last image to track manipulations of the image data. When we delete the containers, the memory reserved by the PIL images is not released. Even closing the image manually via `image.close()` in the container's destructor and calling the garbage collector does not release the memory. If I replace the PIL image with a Python list (line 55 of the script) the memory gets freed when a container is popped from the list.
What did you expect to happen?
The memory taken by a PIL image copy should be released after each `containers.pop()`.
What actually happened?
The memory isn't released.
Script output
What are your OS, Python and Pillow versions?