Added caching decorator #554
Conversation
Why don't you use … directly?

Yes, I started out using that, but apparently there are some issues with memory leaks.

Then maybe include that link in the function doc

Did those leaks show up for you? Or why did you start implementing this on your own in the first place?

Ruff complains about this; the link leads to the Ruff documentation of the checks they run. So I told ChatGPT to write something, as one does, and it was quickly implemented. I added the link to the Ruff documentation explaining why I don't just use ….

But if I understood correctly, the problem comes when you use the …. Although, just to be safe, can you add the following test to ensure that the cache does not keep a reference to the object, preventing it from being deallocated?

```python
track = [0]

class KeepTrack:
    def __init__(self):
        track[0] += 1

    @cache
    def method(self, a, b, c=1, d=2):
        return f"{a},{b},c={c},d={d}"

    def __del__(self):
        track[0] -= 1

def function():
    obj = KeepTrack()
    for i in range(10):
        obj.method(1, 2, d=2)

for i in range(3):
    function()

assert track[0] == 0, "possible memory leak with the @cache"
```

I've tested it and it works with your decorator, but not with ….
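For context on the leak this test guards against: a cache that lives outside the instance and holds `self` in a strong key keeps every instance alive for the lifetime of the cache, so `__del__` never runs. One way a method-friendly caching decorator can avoid this is to key the cache weakly on the instance. The sketch below illustrates that idea only; it is not necessarily the implementation in this PR:

```python
import functools
import weakref

def cache(func):
    # Weak-keyed store: an instance's entry disappears as soon as the
    # instance is garbage collected, so the cache never keeps `self` alive.
    caches = weakref.WeakKeyDictionary()

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Build a hashable key from the call arguments (assumes they
        # are hashable, as with functools.lru_cache).
        key = (args, tuple(sorted(kwargs.items())))
        store = caches.setdefault(self, {})
        if key not in store:
            store[key] = func(self, *args, **kwargs)
        return store[key]

    return wrapper
```

With this scheme, once an instance goes out of scope nothing holds a strong reference to it: its entry is dropped from the weak-keyed store and `__del__` fires, so the test above passes.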
* Implemented caching wrapper for spectral helper
* Added test for caching decorator
This is one feature that I added during the large refactor and one that can be merged independently. It is not particularly useful as used in this PR, but is used extensively in the refactored version.
The caching decorator returns a cached result if the function arguments match a previous call and can be used like this:
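The original usage example was not captured in this excerpt. The sketch below illustrates the intended call pattern, using `functools.lru_cache` as a stand-in for the PR's `cache` decorator (the real import path is not shown here, and `spectral_helper` is a hypothetical name):

```python
from functools import lru_cache as cache  # stand-in for the PR's decorator

calls = []

@cache
def spectral_helper(a, b, c=1):
    calls.append((a, b, c))  # record how often the body actually runs
    return a + b + c

spectral_helper(1, 2)  # first call: the body runs
spectral_helper(1, 2)  # same arguments: result served from the cache
assert len(calls) == 1  # the body executed only once
```

Repeated calls with matching arguments return the cached result instead of recomputing, which is what makes the decorator worthwhile in the refactored version.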