
Commit 68d26c3

docs: demo

1 parent eeca43c

2 files changed: 171 additions & 3 deletions

demo/readme.md (17 additions & 3 deletions)

```diff
@@ -21,13 +21,27 @@
 <details>
 <summary>Connect PaperDebugger to Model Context Protocol servers</summary>

-- Paper Scoring Agents: https://github.com/PaperDebugger/paperdebugger-mcp
-- XtraMCP: https://github.com/4ndrelim/academic-paper-mcp-server
-
-Prebuilt Docker images are published at https://github.com/orgs/PaperDebugger/packages for quick deployment.
+### Paper Scoring
+
+https://github.com/PaperDebugger/paperdebugger-mcp
+
+Paper Scoring is a multi-agent tool.
+
+- **Prebuilt Docker image:** https://github.com/orgs/PaperDebugger/packages/container/package/paperdebugger-mcp-server
+- **Prompt template:** https://github.com/PaperDebugger/paperdebugger-mcp/tree/main/src/templates
+- **Agent Flow:** https://github.com/PaperDebugger/paperdebugger-mcp/blob/main/src/agents/paper-score.ts#L23
+
+### XtraMCP
+
+https://github.com/4ndrelim/academic-paper-mcp-server
+
+- **Prebuilt Docker image:** https://github.com/orgs/PaperDebugger/packages/container/package/xtragpt-mcp-server
+- **Prompt template:** [./xtramcp/readme.md](./xtramcp/readme.md) (inspect the Docker image for the full prompt templates)

 </details>

 ## Kubernetes

 <details>
```

demo/xtramcp/readme.md (154 additions & 0 deletions, new file)

# XtraMCP Server - Orchestration Prompts

This directory contains MCP prompts that orchestrate complex workflows by guiding the AI on how to use multiple tools together effectively.

## Available Prompts

### 1. `analyze_paper_find_similar`

**Purpose**: Analyze existing research papers (PDF/LaTeX) and find similar work in the academic literature.

**Use Cases**:
- Finding papers similar to your own research
- Identifying related work for a paper you're writing
- Comparing your approach with existing methods in the literature
- Building a collection of papers related to a specific source paper

**Arguments**:
- `paper_path` (required): Path to the PDF or LaTeX file to analyze
- `analysis_focus` (optional): Focus area - 'methodology', 'application domain', 'theoretical contributions', or 'all' (default: 'all')
- `comparison_type` (optional): Type of comparison - 'similar_methods', 'related_problems', 'same_domain', or 'theoretical_connections' (default: 'related_problems')
- `venues` (optional): Conference venues to search (default: ICLR.cc, NeurIPS.cc, ICML.cc)
- `years` (optional): Years to search (default: last 3 years)
- `max_papers` (optional): Maximum number of papers to find (default: 12)

**Example Usage**:
```
paper_path: "./papers/my_research_paper.pdf"
analysis_focus: "methodology"
comparison_type: "similar_methods"
max_papers: 15
```
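As a point of reference, here is a minimal Python sketch of how a client might assemble the full argument set for this prompt, filling in the documented defaults for any omitted optional arguments. The helper name and defaults dict are illustrative assumptions, not part of the XtraMCP server:

```python
# Illustrative helper (not part of XtraMCP): merge user-supplied arguments
# with the documented defaults for analyze_paper_find_similar.
ANALYZE_DEFAULTS = {
    "analysis_focus": "all",
    "comparison_type": "related_problems",
    "venues": ["ICLR.cc", "NeurIPS.cc", "ICML.cc"],
    "max_papers": 12,
}

def build_analyze_args(paper_path: str, **overrides) -> dict:
    """Return the complete argument dict, applying documented defaults."""
    if not paper_path:
        raise ValueError("paper_path is required")
    args = {"paper_path": paper_path, **ANALYZE_DEFAULTS}
    args.update(overrides)  # caller-supplied values win over defaults
    return args

# Mirrors the example usage above.
args = build_analyze_args(
    "./papers/my_research_paper.pdf",
    analysis_focus="methodology",
    comparison_type="similar_methods",
    max_papers=15,
)
```

Unspecified arguments (here `venues` and the default year range) keep their documented defaults.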
### 2. `literature_review`

**Purpose**: Conduct comprehensive and systematic literature reviews with topic-based discovery.

**Use Cases**:
- Systematic literature reviews for research proposals
- Comprehensive coverage of a research area
- Finding papers on a specific topic or research question
- Multi-faceted topic exploration with related areas
- Building reference collections for academic writing

**Arguments**:
- `main_topic` (required): Main research topic, research question, or paper description to investigate
- `source_context` (optional): Context from existing work, abstracts, or a specific research focus to guide keyword extraction
- `related_topics` (optional): Comma-separated list of related topics, subtopics, or alternative terms to explore
- `research_scope` (optional): 'focused' (10 papers, specific), 'standard' (15 papers, balanced), or 'comprehensive' (25 papers, broad coverage) (default: 'standard')
- `venues` (optional): Conference venues to search (default: ICLR.cc, NeurIPS.cc, ICML.cc)
- `time_range` (optional): 'recent' (2 years), 'standard' (3 years), or 'comprehensive' (5 years) (default: 'standard')

**Example Usage**:
```
main_topic: "multimodal machine learning for medical imaging"
related_topics: "vision-language models, medical AI, cross-modal attention"
research_scope: "comprehensive"
time_range: "comprehensive"
```
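The named scopes and time ranges map to concrete search parameters. A small Python sketch of that mapping, assumed from the documented values rather than taken from the server's code:

```python
# Illustrative mapping (assumed from the documented scopes, not the
# server's actual implementation): resolve research_scope and time_range
# into concrete search parameters.
SCOPE_TO_MAX_PAPERS = {"focused": 10, "standard": 15, "comprehensive": 25}
TIME_RANGE_TO_YEARS = {"recent": 2, "standard": 3, "comprehensive": 5}

def resolve_review_params(research_scope="standard", time_range="standard"):
    """Translate the named presets into max_papers and a year window."""
    return {
        "max_papers": SCOPE_TO_MAX_PAPERS[research_scope],
        "years_back": TIME_RANGE_TO_YEARS[time_range],
    }

# The example usage above resolves to 25 papers over the last 5 years.
params = resolve_review_params("comprehensive", "comprehensive")
```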
## Key Differences

| Aspect | `analyze_paper_find_similar` | `literature_review` |
|--------|------------------------------|---------------------|
| **Input** | Existing paper file (PDF/LaTeX) | Research topic/question |
| **Approach** | Paper content analysis → keyword extraction | Topic analysis → keyword strategy |
| **Focus** | Finding work similar to a specific paper | Comprehensive topic coverage |
| **Output** | Papers similar to the source paper | Systematic literature collection |
| **Tools Used** | `search_papers_on_openreview` → `export_papers` | `search_papers_on_openreview` → `export_papers` |
| **Export Dir** | `./papers/openreview_exports/similar_papers/` | `./papers/openreview_exports/literature_review/` |
| **Search Strategy** | High precision (min_score 0.8) | Balanced coverage (min_score 0.75) |
| **Loop Prevention** | May repeat searches, but must avoid loops and proceed with the results it has | May repeat searches, but must avoid loops and proceed with the results it has |
## Workflow Overview

Both prompts follow a structured approach:

### `analyze_paper_find_similar` Workflow:
1. **Source Paper Analysis**: Extract content from the PDF/LaTeX file
2. **Keyword Extraction**: Identify key concepts based on the analysis focus
3. **Strategic Search**: Use the `search_papers_on_openreview` tool with the extracted keywords
4. **Export Collection**: Use the `export_papers` tool for an organized download
5. **Similarity Report**: Analyze how the found papers relate to the source

### `literature_review` Workflow:
1. **Topic Analysis**: Extract effective search terms from the research topic
2. **Keyword Strategy**: Develop a comprehensive search approach
3. **Systematic Search**: Use the `search_papers_on_openreview` tool with strategic keywords
4. **Export Organization**: Use the `export_papers` tool with systematic naming
5. **Research Synthesis**: Provide a structured literature analysis
## Default Configuration

The prompts use these optimized defaults:

| Parameter | `analyze_paper_find_similar` | `literature_review` |
|-----------|------------------------------|---------------------|
| **Venues** | ICLR.cc, NeurIPS.cc, ICML.cc | ICLR.cc, NeurIPS.cc, ICML.cc |
| **Search Fields** | title, abstract | title, abstract |
| **Match Mode** | threshold | threshold |
| **Match Threshold** | 0.6 | 0.5 |
| **Min Score** | 0.8 (high precision) | 0.75 (balanced) |
| **Max Papers** | 12 | 10-25 (scope dependent) |
| **Years** | Last 3 years | 2-5 years (time_range dependent) |
| **Search Strategy** | May repeat searches, but avoid loops | May repeat searches, but avoid loops |
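The two `min_score` defaults in the table amount to a post-filter on scored search hits. A short Python sketch, assuming a hit shape of `{"id", "score"}` for illustration:

```python
# Illustrative post-filter (assumed behavior): keep only search hits at or
# above the workflow's min_score threshold.
def filter_by_min_score(results, min_score):
    return [r for r in results if r["score"] >= min_score]

# Hypothetical scored hits for illustration.
hits = [
    {"id": "abc1", "score": 0.82},
    {"id": "abc2", "score": 0.77},
    {"id": "abc3", "score": 0.61},
]

# analyze_paper_find_similar uses high precision (0.8); literature_review
# trades some precision for coverage (0.75).
similar = filter_by_min_score(hits, 0.8)
review = filter_by_min_score(hits, 0.75)
```

With these example scores, the stricter threshold keeps one hit and the balanced threshold keeps two.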
## Output Structure

Each workflow creates:

- **JSON Files**: Structured metadata about the found papers
- **PDF Downloads**: Full paper downloads for offline reading
- **Organized Exports**: Papers saved to specific subdirectories
- **Analysis Reports**: Key findings and research insights

### File Organization:
```
papers/openreview_exports/
├── similar_papers/        # analyze_paper_find_similar outputs
│   └── [source_paper]_similar_[comparison_type].json
└── literature_review/     # literature_review outputs
    └── [topic]_review_[scope].json
```
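The `[topic]_review_[scope].json` naming pattern can be sketched in Python as follows; the slugification rule here is an assumption for illustration, not necessarily what the server does:

```python
import re

# Illustrative name builder (assumption: the server slugifies similarly):
# produce "[topic]_review_[scope].json" under the literature_review directory.
def review_export_path(topic: str, scope: str) -> str:
    # Lowercase the topic and collapse non-alphanumeric runs to underscores.
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    return f"papers/openreview_exports/literature_review/{slug}_review_{scope}.json"

path = review_export_path("Multimodal ML for Medical Imaging", "comprehensive")
```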
## Integration with Tools

These prompts orchestrate the following MCP tools in a two-step workflow:

1. **`search_papers_on_openreview`**: Find relevant papers based on keywords and venues, returning paper IDs
2. **`export_papers`**: Download PDFs and create organized JSON collections using the paper IDs from the search results

The prompts provide precise instructions on:
- Sequential tool execution (search first, then export)
- Paper ID extraction from search results
- Tool parameter configuration
- Error handling and validation
- Output organization and naming
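The two-step orchestration can be sketched in Python with stubbed tools. The real `search_papers_on_openreview` and `export_papers` are MCP tools; these stand-ins only model their documented inputs and outputs, and the result shapes are assumptions:

```python
# Stub (not the real MCP tool): a real call returns scored papers
# with OpenReview IDs.
def search_papers_on_openreview(keywords, venues, min_score):
    return [{"id": "oR1", "score": 0.83}, {"id": "oR2", "score": 0.79}]

# Stub (not the real MCP tool): a real call downloads PDFs and writes
# a JSON collection to the export directory.
def export_papers(paper_ids, export_dir):
    return {"exported": paper_ids, "dir": export_dir}

# Step 1: search, then extract paper IDs from the results.
results = search_papers_on_openreview(
    ["multimodal", "medical imaging"],
    venues=["ICLR.cc", "NeurIPS.cc", "ICML.cc"],
    min_score=0.75,
)
paper_ids = [r["id"] for r in results if r["score"] >= 0.75]

# Step 2: export using those IDs (search first, then export).
report = export_papers(paper_ids, "papers/openreview_exports/literature_review/")
```

The key point the prompts enforce is the ordering: IDs only exist after the search step, so export must always come second.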
## Tips for Effective Use

### For `analyze_paper_find_similar`:
1. **File Access**: Ensure the paper path is accessible and readable
2. **Analysis Focus**: Choose a specific focus for more targeted results
3. **Comparison Type**: Select based on which aspect of similarity you want
4. **File Formats**: Works with both PDF and LaTeX source files

### For `literature_review`:
1. **Topic Clarity**: Use precise, technical terminology in your main topic
2. **Scope Selection**: Match the scope to your research needs (focused/standard/comprehensive)
3. **Related Topics**: Include synonyms and alternative terms for broader coverage
4. **Context Utilization**: Provide source context to guide keyword extraction

### General Best Practices:
1. **Venue Selection**: Add domain-specific venues for specialized topics
2. **Time Range**: Adjust based on field evolution and research currency
3. **Quality Thresholds**: Raise min_score for more precise results
4. **Export Organization**: Use descriptive names for easy file management
