
PREFACE training set size + best FFY estimation #17

@leonorpalmeira


Dear colleagues,

We have trained PREFACE on a set of 496 NIPT samples (250 male fetuses, 246 female fetuses, sexed by Yfrac) using the following FFY estimation:

FFY = 2 * (median bin reads on chrY) / (median bin reads on autosomes)
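For concreteness, here is a minimal Python sketch of that estimate. The input layout (a tab-separated per-bin file with the read count in the fourth column) and the function name are assumptions for illustration, not PREFACE's actual format:

```python
import statistics

def estimate_ffy(bins_path: str) -> float:
    """FFY = 2 * median(chrY bin reads) / median(autosomal bin reads).

    Assumes a tab-separated file with columns chrom, start, end, reads;
    adapt the column indices to your own per-bin files.
    """
    autosomes = {f"chr{i}" for i in range(1, 23)} | {str(i) for i in range(1, 23)}
    chry_reads, auto_reads = [], []
    with open(bins_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            chrom, reads = fields[0], float(fields[3])
            if chrom in ("chrY", "Y"):
                chry_reads.append(reads)
            elif chrom in autosomes:
                auto_reads.append(reads)
    return 2 * statistics.median(chry_reads) / statistics.median(auto_reads)
```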

Our ${sample}_infile.bed files were prepared from the ${sample}_bins.bed produced by WisecondorX predict. Training seemed to converge, but performance isn't as good as expected:
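As a hedged sketch of that preparation step, assuming the `_bins.bed` carries a header line with chr, start, end, id, ratio and zscore columns, and that the PREFACE infile wants chr, start, end and ratio (verify both against the formats of your installed versions):

```python
import csv

def bins_to_infile(bins_bed: str, infile_bed: str) -> None:
    """Keep only the chr/start/end/ratio columns from a bins.bed file."""
    with open(bins_bed, newline="") as src, open(infile_bed, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        writer.writerow(["chr", "start", "end", "ratio"])
        for row in reader:
            writer.writerow([row["chr"], row["start"], row["end"], row["ratio"]])
```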

[Image: PREFACE training performance]

We then ran the trained PREFACE model on an independent set of 119 NIPT samples (56 male fetuses, 63 female fetuses, sexed by Yfrac), but the Pearson correlation between FFY and PREFACE_FF for male fetuses is only 0.64:

[Image: FFY vs. PREFACE_FF on the validation set]
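For reference, a minimal sketch of such a correlation check; the file names are hypothetical placeholders for the per-sample male-fetus vectors:

```python
import numpy as np
from scipy.stats import pearsonr

ffy = np.loadtxt("validation_ffy.txt")                 # hypothetical path
preface_ff = np.loadtxt("validation_preface_ff.txt")   # hypothetical path
r, p = pearsonr(ffy, preface_ff)                       # Pearson r and p-value
print(f"Pearson r = {r:.2f} (p = {p:.2g})")
```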

We are trying to understand how to improve PREFACE's training. Do you have any recommendations for the training set size? Is there a specific FFY estimation formula that you recommend over the one we used?
