Medium

---

### 5. LLM-Assisted Extraction of Agronomic and Ecological Experiments into Structured Data {#llm-betydb}

Manual extraction of agronomic and ecological experiments from scientific literature into a structured format that can be used to calibrate and validate models is slow, error-prone, and labor-intensive. Researchers must interpret complex experimental designs, reconstruct management timelines, identify treatments and controls, handle factorial structures, and link outcomes with correct covariates and uncertainty estimates. Data are often reported as summary statistics (for example, mean and standard error) in text, tables, or figures, and require additional context from disturbance or management time series. These tasks require scientific judgment beyond simple text extraction.

Current manual workflows can take hours per paper and introduce inconsistencies that compromise downstream data quality and meta-analyses.

This project proposes a human-supervised, LLM-based system to accelerate data extraction while preserving scientific rigor and traceability. It will leverage existing labeled training data (scientific papers with ground-truth entries), including aligned PDF-to-structured-data records from BETYdb and ForC, which represent expert-curated, production-quality datasets. Combined, these resources include over 80,000 plant and ecosystem observations from more than 1,000 sources and provide high-quality supervision for extraction from text, tables, and figures. Evaluation should include held-out, out-of-sample papers. The system will ingest PDFs of scientific papers and produce tables compatible with the [spreadsheet used to upload data to BETYdb](https://docs.google.com/spreadsheets/d/e/2PACX-1vSAa7jBHSaas-bH0ARxQjVLKhz3Iq03t97wrxMZrgVVi98L5bYQi5ZUC0b57xIZBlHEkPH9qYf22xQS/pubhtml) (sites, treatments, management time series, and a bulk traits-and-yields upload table), with every field labeled as extracted, inferred, or unresolved and linked to provenance evidence in the source document.

The architecture follows a two-layer design: (1) a schema-validated intermediate representation (IR) that preserves evidence links, confidence scores, and flagged conflicts, and (2) a BETYdb materialization layer that enforces BETYdb semantics and validation rules and generates upload-ready CSVs or API payloads with full audit trails.

Implementation is flexible, ranging from agentic LLM workflows to fine-tuned specialist models to an adaptive hybrid, and should be informed by empirical evaluation during the project.

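To make the IR concrete, here is one minimal sketch of what a schema-validated IR record with per-field status and provenance might look like. All class, field, and value names (including the 0.8 review threshold) are illustrative assumptions, not a defined schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class FieldStatus(Enum):
    EXTRACTED = "extracted"    # value taken verbatim from the source document
    INFERRED = "inferred"      # value deduced by the model; needs reviewer scrutiny
    UNRESOLVED = "unresolved"  # value could not be determined from the paper

@dataclass
class IRField:
    name: str                  # e.g. "planting_date" (illustrative field name)
    value: Optional[str]
    status: FieldStatus
    confidence: float          # model-reported confidence in [0, 1]
    evidence: str              # provenance pointer, e.g. "p. 4, Table 2"

@dataclass
class IRRecord:
    record_type: str           # e.g. "site", "treatment", "management", "trait"
    fields: list = field(default_factory=list)

    def needs_review(self):
        """Fields a human reviewer must resolve before materialization."""
        return [f for f in self.fields
                if f.status is FieldStatus.UNRESOLVED or f.confidence < 0.8]
```

A design like this lets the materialization layer refuse to emit upload-ready rows while any field is unresolved, and lets the review interface surface only the flagged fields alongside their evidence pointers.
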
**Expected outcomes:**

A successful project would complete the following tasks:

* Independent validators for BETYdb semantics, unit consistency, temporal logic, and required fields
* BETYdb export module producing upload-ready management CSVs and bulk trait upload formats with full provenance preservation
* Scientist-in-the-loop review interface for approving, correcting, or rejecting extracted entries with inline evidence and confidence scores
* Evaluation harness with automated metrics for extraction accuracy, inference quality, coverage, and time savings relative to manual curation on held-out test papers
* Documentation covering IR schema specification, developer guidance for adding new extraction components, and user guidance for the review interface

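As a sketch of what one independent validator might look like, the function below checks a single management record for required fields, unit consistency, and temporal logic. The field names, the `kg/ha` unit convention, and the rules themselves are hypothetical illustrations, not BETYdb's actual validation logic:

```python
from datetime import date

def validate_management_row(row):
    """Return a list of validation errors for one management record.

    Hypothetical checks illustrating required-field, unit-consistency,
    and temporal-logic validation; the real BETYdb rules differ.
    """
    errors = []
    # Required fields must be present and non-empty
    for key in ("site", "mgmttype", "date"):
        if not row.get(key):
            errors.append("missing required field: %s" % key)
    # Unit consistency: assume N fertilization is reported in kg/ha
    if row.get("mgmttype") == "fertilizer_N" and row.get("units") != "kg/ha":
        errors.append("unexpected units for fertilizer_N: %r" % row.get("units"))
    # Temporal logic: a management event cannot precede planting
    planting, event = row.get("planting_date"), row.get("date")
    if planting and event and event < planting:
        errors.append("management event precedes planting date")
    return errors
```

Keeping each validator independent in this way means new checks can be added without touching the extraction layer, and every rejected row carries a human-readable reason for the review interface.
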
**Prerequisites:**

- Required: Python; familiarity with natural language processing, information extraction, and machine learning
- Helpful: experience with LLM APIs and fine-tuning frameworks, knowledge of the BETYdb schema and workflows, familiarity with scientific writing and agronomic or ecological experimental design and analysis

**Contact persons:**

Nihar Sanda (@koolgax99), David LeBauer (@dlebauer)