We've added a check in v4 (https://github.com/TkTech/pysimdjson/blob/master/simdjson/csimdjson.pyx#L437) that prevents parsing new documents while references still exist to the old one. This is correct, in that it ensures no errors. I wasn't terribly happy with this, but it's better than segfaulting.
It has downsides:

- You must `del` the old objects, even if you didn't intend to use them again. Very un-pythonic.
- `del` is unreliable. The objects may not be garbage collected until much later.

Brainstorming welcome. Alternatives:

- Probably the easiest approach would be for a Parser to keep a list of the Object and Array proxies that hold a reference to it, and set a dirty bit on them when `parse()` is called with a different document. The performance of this would probably be unacceptable, but I might be wrong.
- Use the new `parse_into_document()` and create a new document for every parse. This is potentially both slow and very wasteful with memory, but would let us keep a document around and valid for as long as an Object or Array references it.
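To make the first alternative concrete, here's a minimal pure-Python sketch of the dirty-bit idea. The `Parser` and `ObjectProxy` names are hypothetical stand-ins, not pysimdjson's real classes; it tracks live proxies through a `weakref.WeakSet` (so tracking doesn't keep them alive) and flips a flag on each one when `parse()` is called again:

```python
# Sketch only: Parser/ObjectProxy are illustrative names, not pysimdjson's
# actual classes. The real proxies wrap C++ objects, not a Python dict.
import weakref


class ObjectProxy:
    """Stand-in for a C-backed Object/Array proxy."""

    def __init__(self, parser, data):
        self._data = data
        self._invalid = False  # the "dirty bit"
        parser._register(self)

    def get(self, key):
        if self._invalid:
            raise RuntimeError("proxy invalidated by a subsequent parse()")
        return self._data[key]


class Parser:
    def __init__(self):
        # Weak references: tracking must not extend proxy lifetimes.
        self._proxies = weakref.WeakSet()

    def _register(self, proxy):
        self._proxies.add(proxy)

    def parse(self, document):
        # Invalidate every live proxy from the previous document, then
        # hand out a fresh proxy for the new one.
        for proxy in self._proxies:
            proxy._invalid = True
        self._proxies.clear()
        return ObjectProxy(self, document)


parser = Parser()
first = parser.parse({"hello": "world"})
assert first.get("hello") == "world"

second = parser.parse({"hello": "again"})
try:
    first.get("hello")  # stale proxy: raises instead of segfaulting
except RuntimeError:
    pass
assert second.get("hello") == "again"
```

The cost concern is the per-`parse()` walk over the live proxy set; for documents where users hold many proxies that could be exactly the hot path.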