The README only suggests:

```
synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n 100
```
As I understand it (as a layman), this means it will only check 100 rooms and then quit.
I got this result after 100 chunks:

```
INFO synapse_auto_compressor::manager] Finished running compressor. Saved 15891 rows. Skipped 4/100 chunks
```
This seems to have worked.
Then I started it again with `-n 10000` to try to compress my 56GB database.
Is this the best option? Or could you provide more information in the README, please?
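For completeness, the exact command I'm using for this second run is the README example with only `-n` changed (the connection string is a placeholder, and I kept the same `-c 500` chunk size since the README doesn't say whether it should scale with database size):

```shell
# Same flags as the README example, but processing up to 10000 chunks
# instead of 100 (user/pass/host are placeholders for my real credentials).
synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n 10000
```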