Memory of copied PIL Images is not released #7935

Open
TTMaDe opened this issue Apr 2, 2024 · 12 comments

@TTMaDe

TTMaDe commented Apr 2, 2024

What did you do?

Our application works with PIL images and holds a list of containers. Every container has a copy of the last image to track manipulations of the image data. When we delete the containers, the memory reserved by the PIL images is not released. Even closing the image manually via image.close() in the container's destructor and calling the garbage collector does not release the memory.

If I replace the PIL image with a Python list (see the commented-out line in the script below), the memory gets freed when a container is popped from the list.

import gc
import os
import sys
import time

import PIL.Image
import psutil

FILE = './test_image.png'


def LogMemory():
    pid = os.getpid()
    rss = 0
    for mmap in psutil.Process(pid).memory_maps():
        # All memory that this process holds in RAM. RSS = USS + Shared.
        rss += mmap.rss
    print(f'RSS: {rss}')


class Container:

    def __init__(self):
        self.value = None

    def __del__(self):
        if isinstance(self.value, PIL.Image.Image):
            LogMemory()
            self.value.close()
            self.value = None
            gc.collect()
            print('closed image in destructor')
            LogMemory()

    def SetValue(self, value):
        self.value = self._copyValue(value)

    def GetValue(self):
        return self._copyValue(self.value)

    def _copyValue(self, value):
        return value.copy()


if __name__ == '__main__':

    print(f'Using Python {sys.version}, PIL {PIL.__version__}')

    containers = []

    for i in range(50):
        print(f'Load image {i}')
        LogMemory()
        img = PIL.Image.open(FILE)
        # img = [1] * (1920*1080*3)  # this works!
        container = Container()
        container.SetValue(img)
        containers.append(container)
        LogMemory()

    for i in range(len(containers)):
        print(f'pop container {i}')
        LogMemory()
        containers.pop()
        time.sleep(0.1)
        LogMemory()

    print('Delete list')
    LogMemory()
    containers = None
    gc.collect()
    LogMemory()

[attached: test_image.png]

What did you expect to happen?

The memory taken by a PIL image copy should be released after each containers.pop().

What actually happened?

The memory isn't released.

Script output
```text
Using Python 3.11.8 (main, Feb 25 2024, 16:41:26) [GCC 9.4.0], PIL 10.3.0
Load image 0
RSS: 21696512
RSS: 40095744
Load image 1
RSS: 40108032
RSS: 48123904
Load image 2
RSS: 48123904
RSS: 56512512
...
Load image 49
RSS: 437927936
RSS: 446222336
pop container 0
RSS: 446222336
RSS: 446222336
pop container 1
RSS: 446222336
RSS: 446222336
closed image in destructor
RSS: 446222336
RSS: 446226432
pop container 2
RSS: 446226432
RSS: 446226432
closed image in destructor
RSS: 446226432
RSS: 446226432
...
pop container 48
RSS: 446226432
RSS: 446226432
closed image in destructor
RSS: 446226432
RSS: 446226432
pop container 49
RSS: 446226432
RSS: 446226432
closed image in destructor
RSS: 437927936
RSS: 437927936
Delete list
RSS: 437927936
RSS: 437927936
```

What are your OS, Python and Pillow versions?

  • OS: Ubuntu 20.04 (WSL2)
  • Python: Python 3.11.8
  • Pillow: 10.3.0
--------------------------------------------------------------------
Pillow 10.3.0
Python 3.11.8 (main, Feb 25 2024, 16:41:26) [GCC 9.4.0]
--------------------------------------------------------------------
Python executable is /home/kolbe/.cache/pypoetry/virtualenvs/tts-vtKShpo8-py3.11/bin/python3
Environment Python files loaded from /home/kolbe/.cache/pypoetry/virtualenvs/tts-vtKShpo8-py3.11
System Python files loaded from /usr
--------------------------------------------------------------------
Python Pillow modules loaded from /home/kolbe/.cache/pypoetry/virtualenvs/tts-vtKShpo8-py3.11/lib/python3.11/site-packages/PIL
Binary Pillow modules loaded from /home/kolbe/.cache/pypoetry/virtualenvs/tts-vtKShpo8-py3.11/lib/python3.11/site-packages/PIL
--------------------------------------------------------------------
--- PIL CORE support ok, compiled for 10.3.0
*** TKINTER support not installed
--- FREETYPE2 support ok, loaded 2.13.2
--- LITTLECMS2 support ok, loaded 2.16
--- WEBP support ok, loaded 1.3.2
--- WEBP Transparency support ok
--- WEBPMUX support ok
--- WEBP Animation support ok
--- JPEG support ok, compiled for libjpeg-turbo 3.0.2
--- OPENJPEG (JPEG2000) support ok, loaded 2.5.2
--- ZLIB (PNG/ZIP) support ok, loaded 1.2.11
--- LIBTIFF support ok, loaded 4.6.0
--- RAQM (Bidirectional Text) support ok, loaded 0.10.1, fribidi 1.0.8, harfbuzz 8.4.0
*** LIBIMAGEQUANT (Quantization method) support not installed
--- XCB (X protocol) support ok
--------------------------------------------------------------------
@wiredfool
Member

wiredfool commented Apr 2, 2024

Pillow's memory allocator doesn't necessarily release pooled memory back to the system as soon as an image is destroyed, since it reuses that memory pool for future allocations. See Storage.c (https://github.com/python-pillow/Pillow/blob/main/src/libImaging/Storage.c#L310) for the implementation.

If you repeatedly open and close an image, you should not see the memory increase, but it won't necessarily drop between destruction and allocation again.

(edit: related: #5401, #3610)
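For illustration, a minimal sketch of that point (reusing the test image and psutil from the report above; this is not code from the thread): repeatedly opening, loading and closing the same image should make RSS plateau after the first iteration, even though it does not drop between iterations.

import os

import PIL.Image
import psutil

FILE = './test_image.png'

process = psutil.Process(os.getpid())
for i in range(20):
    with PIL.Image.open(FILE) as img:
        img.load()  # force decoding so image memory is actually allocated
    # the memory of the destroyed image is typically reused for the next
    # allocation rather than returned to the OS right away
    print(f'iteration {i}: RSS {process.memory_info().rss}')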

@Yay295
Contributor

Yay295 commented Apr 2, 2024

It looks like it caches 0 blocks by default though.

struct ImagingMemoryArena ImagingDefaultArena = {
    1,                // alignment
    16 * 1024 * 1024, // block_size
    0,                // blocks_max
    0,                // blocks_cached
    NULL,             // blocks_pool
    0,
    0,
    0,
    0,
    0                 // Stats
};

And you can set the number of blocks to cache with the PILLOW_BLOCKS_MAX environment variable.

Pillow/src/PIL/Image.py

Lines 3624 to 3656 in aeeb596

def _apply_env_variables(env=None) -> None:
    if env is None:
        env = os.environ

    for var_name, setter in [
        ("PILLOW_ALIGNMENT", core.set_alignment),
        ("PILLOW_BLOCK_SIZE", core.set_block_size),
        ("PILLOW_BLOCKS_MAX", core.set_blocks_max),
    ]:
        if var_name not in env:
            continue

        var = env[var_name].lower()

        units = 1
        for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]:
            if var.endswith(postfix):
                units = mul
                var = var[: -len(postfix)]

        try:
            var = int(var) * units
        except ValueError:
            warnings.warn(f"{var_name} is not int")
            continue

        try:
            setter(var)
        except ValueError as e:
            warnings.warn(f"{var_name}: {e}")


_apply_env_variables()

There's a docs page for this actually: https://pillow.readthedocs.io/en/stable/reference/block_allocator.html
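For example (a sketch, not from the thread, using only the setter names visible in the excerpt above), the cache limit can be raised either through the environment before Python starts or at runtime before any images are allocated:

# via the environment (shell):
#   PILLOW_BLOCKS_MAX=16 python my_script.py
#
# or at runtime:
import PIL.Image

PIL.Image.core.set_blocks_max(16)  # cache up to 16 blocks (16 MiB each by default)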

@TTMaDe
Author

TTMaDe commented Apr 3, 2024

Thanks for the quick answers!

Indeed, when I set the PILLOW_BLOCKS_MAX=5 environment variable, the used memory decreases when releasing/closing the images. But after reading the linked docs page, I would expect that if I manually set the environment variable to 0 (or just leave it unset), the memory pool is disabled, no block caching occurs, and the memory of closed images is freed immediately.
But with these default settings our application ran out of memory on a 16 GB Linux system after reading, modifying and closing images in a loop.

@radarhere
Member

radarhere commented Apr 3, 2024

This may or may not help - in your original code you open an image and don't close it. It is recommended instead that you either call img.close() when you are done or use a context manager for the image. See https://pillow.readthedocs.io/en/stable/deprecations.html#image-del and

Pillow/src/PIL/Image.py

Lines 560 to 565 in e8ab564

def close(self) -> None:
    """
    Closes the file pointer, if possible.

    This operation will destroy the image core and release its memory.
    The image data will be unusable afterward.
Edit: I see you've mentioned 'closing images' in your comments, so this remark is just for reference for others.
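For reference, a minimal sketch of the recommended pattern (assuming the same test image path as in the report):

import PIL.Image

with PIL.Image.open('./test_image.png') as img:
    copy = img.copy()  # work with the copy; the original is closed when the block exits
# the copy still needs to be closed (or dropped) when you are done with it
copy.close()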

@wiredfool
Member

I didn't catch this before but what you're doing is basically opening 50 copies of an image and keeping them all.

Can you show us a flow where you expect constant memory usage?

@TTMaDe
Author

TTMaDe commented Apr 3, 2024

Yes, in the first loop I open the image 50 times and hold 50 copies, so the memory usage increases, which is expected.
In the second loop I delete a container holding an image copy in each iteration, so I would expect the memory usage to decrease after each iteration. But the used memory only decreases if I manually set PILLOW_BLOCKS_MAX to a value > 0.

Manually closing the image copy via self.value.close() in the destructor of the Container class doesn't make a difference, so I removed it:

import gc
import os
import sys
import time

import PIL.Image
import psutil

FILE = './test_image.png'


def LogMemory():
    pid = os.getpid()
    rss = 0
    for mmap in psutil.Process(pid).memory_maps():
        # All memory that this process holds in RAM. RSS = USS + Shared.
        rss += mmap.rss
    return rss


class Container:

    def __init__(self):
        self.value = None

    def SetValue(self, value):
        self.value = self._copyValue(value)

    def GetValue(self):
        return self._copyValue(self.value)

    def _copyValue(self, value):
        return value.copy()


if __name__ == '__main__':

    print(f'Using Python {sys.version}, PIL {PIL.__version__}')
    print(f'PILLOW_ALIGNMENT: {PIL.Image.core.get_alignment()}')
    print(f'PILLOW_BLOCK_SIZE: {PIL.Image.core.get_block_size()}')
    print(f'PILLOW_BLOCKS_MAX: {PIL.Image.core.get_blocks_max()}')

    containers = []

    for i in range(50):
        before = LogMemory()
        img = PIL.Image.open(FILE)
        # img = [1] * (1920*1080*3)  # this works!
        container = Container()
        container.SetValue(img)
        containers.append(container)
        after = LogMemory()
        print(f'Loaded image {i} took {after-before} bytes')

    for i in range(len(containers)):
        before = LogMemory()
        containers.pop()
        time.sleep(0.1)
        after = LogMemory()
        print(f'popped container {i} released {before-after} bytes')

    print('Delete list')
    before = LogMemory()
    containers = None
    gc.collect()
    after = LogMemory()
    print(f'Finally released {before-after} bytes')

Running the code with PILLOW_BLOCKS_MAX=1 prints:

Using Python 3.11.8 (main, Feb 25 2024, 16:41:26) [GCC 9.4.0], PIL 10.2.0
PILLOW_ALIGNMENT: 1
PILLOW_BLOCK_SIZE: 16777216
PILLOW_BLOCKS_MAX: 1
Loaded image 0 took 17731584 bytes
Loaded image 1 took 8339456 bytes
Loaded image 2 took 8298496 bytes
Loaded image 3 took 8298496 bytes
...
Loaded image 49 took 8298496 bytes
popped container 0 released -12288 bytes
popped container 1 released 0 bytes
popped container 2 released 8298496 bytes
popped container 3 released 8298496 bytes
popped container 4 released 8298496 bytes
popped container 5 released 8298496 bytes
popped container 6 released 8298496 bytes
popped container 7 released 8298496 bytes
...
popped container 48 released 8298496 bytes
popped container 49 released 8298496 bytes
Delete list
Finally released 0 bytes

The memory usage decreases with every containers.pop().

But when I run with PILLOW_BLOCKS_MAX=0 or just leave the environment variable unset I get:

Using Python 3.11.8 (main, Feb 25 2024, 16:41:26) [GCC 9.4.0], PIL 10.2.0
PILLOW_ALIGNMENT: 1
PILLOW_BLOCK_SIZE: 16777216
PILLOW_BLOCKS_MAX: 0
Loaded image 0 took 17735680 bytes
Loaded image 1 took 8114176 bytes
Loaded image 2 took 8069120 bytes
Loaded image 3 took 8036352 bytes
Loaded image 4 took 8044544 bytes
Loaded image 5 took 8052736 bytes
Loaded image 6 took 8052736 bytes
...
Loaded image 48 took 8052736 bytes
Loaded image 49 took 8065024 bytes
popped container 0 released 0 bytes
popped container 1 released 0 bytes
popped container 2 released 0 bytes
popped container 3 released 0 bytes
popped container 4 released 0 bytes
popped container 5 released 0 bytes
popped container 6 released 0 bytes
popped container 7 released 0 bytes
...
popped container 48 released 0 bytes
popped container 49 released 8298496 bytes
Delete list
Finally released 0 bytes

and the used memory doesn't decrease while popping the containers from the list.

So setting PILLOW_BLOCKS_MAX to a value > 0 fixes my problem because the memory is freed. But after reading the linked docs, I would expect that setting PILLOW_BLOCKS_MAX to 0 disables the cache, so the memory should also be freed on each iteration.

@RafaelWO

RafaelWO commented Nov 18, 2024

I have a similar problem when loading and rotating around 2k images. My code is similar to the following:

frames: list[PIL.Image.Image] = load_images(...)
# After loading the frames, my memory usage is under 500MB

for idx, frame in enumerate(frames):
    # This is where a lot of memory is allocated
    rotated_frame = frame.rotate(-90, expand=True)
    rotated_frame.save(work_dir / f"{idx:0>4d}.jpeg")
    # I expect the memory to be freed for every loop iteration (per image)

Below is a flamegraph of my program created with memray. As you can see, the memory is not freed and reaches over 6GB at the end.

[memray flamegraph screenshot]

@wiredfool
Member

wiredfool commented Nov 18, 2024

@RafaelWO

I think you've got something else going on, depending on what's in load_images.

If you do this:

frames: list[PIL.Image.Image] = load_images(...)

for idx, frame in enumerate(frames):
    frame.load()

Does the memory profile look the same?

@RafaelWO

@wiredfool

I think you've got something else going on, depending on what's in load_images.

Essentially, load_images(...) downloads data (read: bytes) via HTTP GET and creates an image via PIL.Image.open(io.BytesIO(data)) for every image.

If you do this (...) Does the memory profile look the same?

Good point, it looks the same:

[memray flamegraph screenshot]

So I guess this means that PIL.Image.open is lazy and the image is only loaded if you first "touch" it?


Either way, it seems that I can only mitigate the high memory usage by not having all images in a list, right?
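(On the laziness point, a small sketch that is not from the thread: Image.open() only reads the file header; the pixel data is decoded, and its memory allocated, when load() is called or when the first operation needs it.)

import PIL.Image

img = PIL.Image.open('./test_image.png')
print(img.size)  # size and mode come from the header; no pixel data decoded yet
img.load()       # now the pixel data is decoded and image memory is allocated
img.close()      # releases the image core again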

@RafaelWO

RafaelWO commented Nov 18, 2024

Either way, it seems that I can only mitigate the high memory usage by not having all images in a list, right?

I was able to bring down the memory usage to ~500MB by calling .close() on the frames in the loop:

frames: list[PIL.Image.Image] = load_images(...)

for idx, frame in enumerate(frames):
    rotated_frame = frame.rotate(-90, expand=True)
    rotated_frame.save(work_dir / f"{idx:0>4d}.jpeg")

    # UPDATE: Free the memory
    frame.close()
    rotated_frame.close()
[memory profile screenshots: before and after adding close()]

Thanks for the clarification and your help, @wiredfool ! 🙂

@wiredfool
Member

The other thing you could do is make load_images return a generator instead of a list, so that you're only actually opening one image at a time.
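A rough sketch of that idea (hypothetical, since the real download code in load_images isn't shown in this thread; plain urllib stands in for it):

import io
import urllib.request

import PIL.Image


def iter_images(urls):
    """Yield one decoded image at a time instead of keeping them all in a list."""
    for url in urls:
        data = urllib.request.urlopen(url).read()  # stands in for the real HTTP GET
        with PIL.Image.open(io.BytesIO(data)) as img:
            img.load()  # decode now; open() alone is lazy
            yield img   # the image is closed once the consumer asks for the next one

# usage sketch:
# for idx, frame in enumerate(iter_images(urls)):
#     rotated = frame.rotate(-90, expand=True)
#     rotated.save(work_dir / f"{idx:0>4d}.jpeg")
#     rotated.close()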

@TTMaDe
Author

TTMaDe commented Nov 22, 2024

Sorry, I have to dig up this topic again because we still have some problems on Linux.

I created this simple script, which loads an image (4000x2250 pixels) 100 times and saves a copy to a list. Afterwards the elements are popped from the list one after another. On Windows the memory used by the process slowly decreases while the image copies are deleted in the second loop (as intended). On Linux the memory stays in use and is only released when the Python process finishes:

import gc
import time

import PIL
from PIL import Image

FILE = './test_image_medium.png'

if __name__ == '__main__':
    print(f'Using Pillow version: {PIL.__version__} from {PIL.__file__}')
    buffer = []

    for i in range(100):
        with Image.open(FILE) as img:
            img.load()  # Image.open() is lazy; force decoding here
            buffer.append(img.copy())
            print(f'Loaded image {i}')

    for i in range(len(buffer)):
        item = buffer.pop()
        # item.close()
        print(f'Popped {i}')
        gc.collect()
        time.sleep(0.1)

    print('Delete buffer')
    del buffer
    time.sleep(1)
    gc.collect()
    time.sleep(5)
    print('Done')

The really weird thing is: when I delete the .venv/lib/python3.12/site-packages/PIL/__pycache__/ directory with all the *.pyc files in my Linux Python environment, the script works as intended and the memory is freed in the second loop. When I run the same script again, the problem reappears until I delete the *.pyc files again.

Could this be a problem with the CPython bytecode compiler, or does Pillow do something special when it is loaded from *.pyc bytecode?
I tested with Python 3.11 and Python 3.12 on Ubuntu 20.04 (WSL) and Pillow 11.0.0 from PyPI (same behavior when I compile the library myself).
