feat(scraping): add 8 new profile sections for comprehensive reading #311
Gabrcodes wants to merge 768 commits into
Conversation
docs: sync manifest.json tools and features with current capabilities
…ance chore(deps): lock file maintenance
Lock file already has 3.1.0 since #166; align pyproject.toml floor to prevent accidental downgrades to v2. Resolves: #190
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
This PR tightens the `fastmcp` minimum version constraint from `>=2.14.0` to `>=3.0.0` in `pyproject.toml` (and the corresponding `uv.lock` metadata), preventing any future resolver from backtracking to the incompatible v2 series. The lock file has already been pinning `fastmcp==3.1.0` since PR #166, so there is no runtime impact — this is purely a spec/metadata alignment.
- `pyproject.toml`: `fastmcp` floor raised to `>=3.0.0`
- `uv.lock`: `package.metadata.requires-dist` updated to match; the resolved package entry (`3.1.0`) is unchanged
- No upper-bound cap (`<4.0.0`) is set, which is consistent with the project's existing open-ended constraints for all other dependencies
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge — it is a pure metadata alignment with no functional or runtime impact.
- The locked version was already `3.1.0` before this PR; the only change is raising the declared floor to match. Both modified lines are trivially correct, consistent with each other, and have no side-effects on the installed environment.
- No files require special attention.
<h3>Important Files Changed</h3>
| Filename | Overview |
|----------|----------|
| pyproject.toml | Single-line change updating the `fastmcp` floor constraint from `>=2.14.0` to `>=3.0.0`, aligning with the already-resolved version in the lock file. |
| uv.lock | Auto-generated lock file metadata updated to reflect the new `>=3.0.0` specifier; the resolved `fastmcp` version (3.1.0) was already correct and unchanged. |
<h3>Flowchart</h3>
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A["pyproject.toml\nfastmcp >=3.0.0"] -->|uv resolves| B["uv.lock\nfastmcp 3.1.0 (pinned)"]
B --> C["Installed environment\nfastmcp 3.1.0"]
D["Old constraint\nfastmcp >=2.14.0"] -. "could resolve to" .-> E["fastmcp 2.x\n(incompatible)"]
style D fill:#f9d0d0,stroke:#c00
style E fill:#f9d0d0,stroke:#c00
style A fill:#d0f0d0,stroke:#060
style B fill:#d0f0d0,stroke:#060
style C fill:#d0f0d0,stroke:#060
```
<sub>Last reviewed commit: 7d2363e</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Replace dict-returning handle_tool_error() with raise_tool_error(), which raises FastMCP ToolError for known exceptions. Unknown exceptions are re-raised as-is so that mask_error_details=True can handle them. Resolves: #185
Add logger.error with exc_info for unknown exceptions before re-raising, and add test coverage for AuthenticationError and ElementNotFoundError.
Re-add optional context parameter to raise_tool_error() for log correlation, and add test for base LinkedInScraperException branch.
Add catch-all comment on base exception branch and NoReturn inline comments on all raise_tool_error() call sites.
…mcp_constraint_to_3.0.0 refactor(error-handler): replace handle_tool_error with ToolError
Replace repeated ensure_authenticated/get_or_create_browser/ LinkedInExtractor boilerplate in all 6 tool functions with FastMCP Depends()-based dependency injection via a single get_extractor() factory in dependencies.py. Resolves: #186
Updated the get_extractor function to route errors through raise_tool_error, ensuring that MCP clients receive structured ToolError responses for authentication failures. Added a test to verify that authentication errors are correctly handled and produce the expected ToolError response.
…epends_to_inject_extractor refactor(tools): Use Depends() to inject extractor
Replace ToolAnnotations(...) with plain dicts, move title to top-level @mcp.tool() param, and add category tags to all tools. Resolves: #189
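A before/after sketch of the decorator change. `tool_stub` below is a stand-in for `@mcp.tool()` (just recording its keyword arguments) so the example runs without FastMCP; the metadata values mirror the commit description.

```python
def tool_stub(**meta):
    """Stand-in for @mcp.tool(); records metadata on the decorated function."""
    def wrap(fn):
        fn.meta = meta
        return fn
    return wrap


# Before (Pydantic wrapper):
#   @mcp.tool(annotations=ToolAnnotations(title="Search Jobs", readOnlyHint=True,
#                                         destructiveHint=False, openWorldHint=True))
# After (FastMCP 3.x style): title as a top-level param, a plain dict for
# annotations, and category tags. destructiveHint is dropped because it is
# only meaningful when readOnlyHint is false.
@tool_stub(
    title="Search Jobs",
    annotations={"readOnlyHint": True, "openWorldHint": True},
    tags={"job", "search", "scraping"},
)
def search_jobs(query: str) -> dict:
    return {"query": query}
```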
<!-- greptile_comment -->
<h3>Greptile Summary</h3>
This PR is a clean, well-scoped refactoring that modernises tool metadata across all four changed files to align with the FastMCP 3.x API. It introduces no functional or behavioural changes.
Key changes:
- Removes the `ToolAnnotations(...)` Pydantic wrapper in `company.py`, `job.py`, and `person.py`, replacing it with plain `dict` syntax for the `annotations` parameter — the simpler form supported by FastMCP 3.x.
- Moves `title` from inside `ToolAnnotations` to a top-level keyword argument on `@mcp.tool()`, matching the updated FastMCP 3.x decorator signature.
- Drops the now-redundant `destructiveHint=False` from all read-only tools. Per the MCP spec, `destructiveHint` is only meaningful when `readOnlyHint` is `false`, so omitting it from tools that already declare `readOnlyHint=True` is semantically equivalent.
- Adds `tags` (as Python `set` literals) to every tool for categorisation (`"company"`, `"job"`, `"person"`, `"scraping"`, `"search"`, `"session"`).
- Enriches the previously unannotated `close_session` tool in `server.py` with a title, `destructiveHint=True`, and the `"session"` tag — accurately describing its destructive nature.
The existing test suite in `tests/test_tools.py` covers all tool functions but does not assert on annotation metadata, so no test changes are required. The refactoring is consistent across all tool files and fits naturally within the project's layered registration pattern.
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge — it is a pure metadata/annotation refactoring with no changes to tool logic, inputs, outputs, or error handling.
- All changes are limited to decorator parameters (`title`, `annotations`, `tags`). The `annotations` dict values are semantically equivalent to the removed `ToolAnnotations` objects, `destructiveHint=False` is correctly dropped only for `readOnlyHint=True` tools, and the new `close_session` annotations accurately reflect its destructive nature. No business logic, scraping behaviour, or error paths were altered.
- No files require special attention.
<h3>Flowchart</h3>
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A["@mcp.tool() decorator"] --> B{Annotation style}
B -->|Before| C["ToolAnnotations(title=..., readOnlyHint=..., destructiveHint=False, openWorldHint=...)"]
B -->|After| D["title='...' (top-level param)\nannotations={'readOnlyHint': True, 'openWorldHint': True}\ntags={'category', 'type'}"]
D --> E["person tools\n(get_person_profile, search_people)"]
D --> F["company tools\n(get_company_profile, get_company_posts)"]
D --> G["job tools\n(get_job_details, search_jobs)"]
D --> H["session tool\n(close_session)\nannotations={'destructiveHint': True}"]
```
<sub>Last reviewed commit: c5bf554</sub>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->
Use lowercase dict instead of Dict, add auth validation log line
…t_lifespan_into_composable_browser_auth_lifespans refactor(server): Split lifespan into composable browser + auth lifespans
# Conflicts:
#	linkedin_mcp_server/server.py
#	linkedin_mcp_server/tools/company.py
#	linkedin_mcp_server/tools/job.py
#	linkedin_mcp_server/tools/person.py
# Conflicts:
#	linkedin_mcp_server/server.py
…_timeouts feat(tools): add global 90s tool timeouts
…_jobs Extract job IDs from href attributes (the one thing innerText can't capture), scroll the job sidebar instead of the main page, and paginate through multiple result pages with dynamic offsets. Resolves: #195
chore(deps): update ci dependencies
- Replace custom _secure_profile_dirs/_set_private_mode with thin _harden_linkedin_tree that uses secure_mkdir from common_utils
- Fix export_storage_state: chmod 0o600 after Playwright writes
- Add test for export_storage_state permission hardening
- Add test for no-op outside .linkedin-mcp tree
- Revert unrelated loaders.py change
Harden .linkedin-mcp profile/cookie permissions
- Remove unused selector constants (_MESSAGING_THREAD_LINK_SELECTOR, _MESSAGING_RESULT_ITEM_SELECTOR, _MESSAGING_SEND_SELECTOR)
- Remove dead _conversation_thread_cache (new extractor per tool call)
- Add AuthenticationError handling to get_sidebar_profiles and all messaging tools
- Pass CSS selector as evaluate() arg instead of f-string interpolation
- Replace deprecated execCommand with press_sequentially
- Guard sidebar container walk against depth-limit exhaustion
- Update scrape_person docstring to document profile_urn return key
- Add messaging tools to README tool-status table
LinkedIn redirects /messaging/ to the most recent thread; capture baseline_thread_id after the SPA settles so search-selected threads can be distinguished from the auto-opened one.
feat: linkedin messaging, get sidebar profiles
…IDs (#300)

* fix(scraping): Respect --timeout for messaging, recognize thread URLs

  Remove all hardcoded timeout=5000 from the send_message flow and messaging helpers so they fall through to the page-level default set from BrowserConfig.default_timeout (configurable via --timeout). Also add /messaging/thread/ URL recognition to classify_link so conversation thread references are captured when they appear in search results or conversation detail views. Raise inbox reference cap to 30 and add proper section context labels.

  Resolves: #296
  See also: #297

* fix(scraping): Extract conversation thread IDs from inbox via click-and-capture

  LinkedIn's conversation sidebar uses JS click handlers instead of <a> tags, so anchor extraction cannot capture thread IDs. Click each conversation item and read the resulting SPA URL change to build conversation references with thread_id and participant name.

  Before: get_inbox returned 2 references (active conversation only)
  After: get_inbox returns all conversation thread IDs (10+ refs)

  Resolves: #297

* fix(scraping): Respect --timeout across all remaining scraping methods

  Remove the remaining 10 hardcoded timeout=5000 from profile scraping, connection flow, modal detection, sidebar profiles, conversation resolution, and job search. All Playwright calls now use the page-level default from BrowserConfig.default_timeout.

  Resolves: #299

* fix: Address PR review feedback

  - Use saved inbox URL instead of self._page.url (P1: wrong URL after clicks)
  - Fix docstring to clarify 2s recipient-picker probe is intentional
  - Replace class-name selectors with aria-label discovery + minimal class fallback
  - Dedupe references after merging conversation and anchor refs
First-time uvx runs download ~77 Python packages including the 39MB patchright wheel. On slow connections, uv's default 30s HTTP timeout can cause silent failures before the server process starts. Co-authored-by: Daniel Sticker <sticker@ngenn.net>
Move UV_HTTP_TIMEOUT=300 into the main uvx config example so it's the default, not an optional troubleshooting step. Fix grammar in the troubleshooting note. Co-authored-by: Daniel Sticker <sticker@ngenn.net>
* docs: use @latest tag in uvx config for auto-updates

  Without @latest, uvx caches the first downloaded version forever. Adding @latest ensures uvx checks PyPI on each client launch and pulls new versions automatically.

* docs: apply @latest consistently to all uvx invocations

  Update --login examples in README.md and docs/docker-hub.md to use linkedin-scraper-mcp@latest for consistency with the MCP config.

---------

Co-authored-by: Daniel Sticker <sticker@ngenn.net>
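Put together, the two docs changes above amount to an invocation like this (a sketch; adapt to your MCP client's config format):

```shell
# UV_HTTP_TIMEOUT=300 raises uv's default 30s HTTP timeout so the ~77-package
# first-run download (including the 39MB patchright wheel) survives slow links.
# @latest makes uvx re-check PyPI on each launch instead of caching forever.
UV_HTTP_TIMEOUT=300 uvx linkedin-scraper-mcp@latest
```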
Add skills, certifications, volunteer, projects, publications, courses,
recommendations, and organizations to PERSON_SECTIONS. These map to
LinkedIn's /details/{section}/ URLs and follow the existing extraction
pattern — no new extractor methods needed.
Also exports ALL_PERSON_SECTION_NAMES for convenience when scraping
every section at once.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
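A sketch of the registry this commit extends. The `SectionSpec` shape and field names are assumptions (the real `PERSON_SECTIONS` lives in `scraping/fields.py` and has 16 entries; this table is abbreviated), but the 8 new `/details/{section}/` entries and the `ALL_PERSON_SECTION_NAMES` derivation follow the commit message.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SectionSpec:
    url_suffix: str           # appended to /in/{username}/
    is_overlay: bool = False  # contact_info uses an overlay URL instead


PERSON_SECTIONS = {  # abbreviated; the real table has 16 entries
    "main_profile": SectionSpec(""),
    "experience": SectionSpec("details/experience/"),
    "education": SectionSpec("details/education/"),
    # 8 new entries, all following the existing /details/{section}/ pattern:
    "skills": SectionSpec("details/skills/"),
    "certifications": SectionSpec("details/certifications/"),
    "volunteer": SectionSpec("details/volunteering-experiences/"),
    "projects": SectionSpec("details/projects/"),
    "publications": SectionSpec("details/publications/"),
    "courses": SectionSpec("details/courses/"),
    "recommendations": SectionSpec("details/recommendations/"),
    "organizations": SectionSpec("details/organizations/"),
    "contact_info": SectionSpec("overlay/contact-info/", is_overlay=True),
}

# Convenience constant: every section name except main_profile.
ALL_PERSON_SECTION_NAMES = [n for n in PERSON_SECTIONS if n != "main_profile"]
```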
<h3>Greptile Summary</h3>
This PR adds 8 new entries to `PERSON_SECTIONS`.
<h3>Confidence Score: 5/5</h3>
- This PR is safe to merge — it is a purely additive, low-risk change with no logic modifications and comprehensive test coverage.
- All changes follow the existing extraction pattern.
- No files require special attention.
<h3>Important Files Changed</h3>
<h3>Flowchart</h3>
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[get_person_profile\ncalled with sections string] --> B[parse_person_sections]
B --> C{section name\nin PERSON_SECTIONS?}
C -- yes --> D[add to requested set]
C -- no --> E[add to unknown list\nlog warning]
D --> F{is_overlay?}
F -- False --> G[extract_page\n/in/username/suffix/]
F -- True --> H[_extract_overlay\n/in/username/overlay/contact-info/]
G --> I[sections result dict]
H --> I
subgraph PERSON_SECTIONS [PERSON_SECTIONS - 16 entries]
P0[main_profile]
P1[experience]
P2[education]
P3[skills NEW]
P4[certifications NEW]
P5[volunteer NEW]
P6[projects NEW]
P7[publications NEW]
P8[courses NEW]
P9[recommendations NEW]
P10[organizations NEW]
P11[interests]
P12[honors]
P13[languages]
P14[contact_info overlay=True]
P15[posts]
end
```
<sub>Last reviewed commit: "style: move PERSON_SECTIONS import to mo..."</sub>
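The parse step in the flowchart above can be sketched as follows. The function name and behavior (known names go into the requested set, unknown names are collected and logged) mirror the chart; the signature and the comma-separated input format are assumptions.

```python
import logging

logger = logging.getLogger(__name__)

# Abbreviated stand-in for the 16-entry table in scraping/fields.py.
PERSON_SECTIONS = {
    "experience": {"suffix": "details/experience/", "is_overlay": False},
    "skills": {"suffix": "details/skills/", "is_overlay": False},
    "contact_info": {"suffix": "overlay/contact-info/", "is_overlay": True},
}


def parse_person_sections(sections: str) -> set[str]:
    """Split a comma-separated sections string into the set of known sections."""
    requested: set[str] = set()
    unknown: list[str] = []
    for name in (s.strip() for s in sections.split(",") if s.strip()):
        if name in PERSON_SECTIONS:
            requested.add(name)
        else:
            unknown.append(name)
    if unknown:
        # Unknown names are skipped rather than failing the whole request.
        logger.warning("Ignoring unknown sections: %s", unknown)
    return requested
```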
… test
- test_all_sections now derives section names from ALL_PERSON_SECTION_NAMES so future additions only need a single update in fields.py
- Add test_all_person_section_names_excludes_main_profile to verify the constant excludes main_profile and matches PERSON_SECTIONS

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Uses PERSON_SECTIONS/ALL_PERSON_SECTION_NAMES to derive the full set dynamically. Updates page count from 7 to 15 and adds URL assertions for all 8 new sections. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
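A sketch of the derived-test idea described above, with a toy `PERSON_SECTIONS` standing in for the real table: because the test asserts against `ALL_PERSON_SECTION_NAMES` rather than a hardcoded list, a future section addition only needs the single edit in fields.py.

```python
# Toy stand-ins for the real constants in scraping/fields.py.
PERSON_SECTIONS = {"main_profile": {}, "experience": {}, "skills": {}}
ALL_PERSON_SECTION_NAMES = [n for n in PERSON_SECTIONS if n != "main_profile"]


def test_all_person_section_names_excludes_main_profile():
    # The convenience constant must exclude main_profile...
    assert "main_profile" not in ALL_PERSON_SECTION_NAMES
    # ...and otherwise match PERSON_SECTIONS exactly, so the two can't drift.
    assert set(ALL_PERSON_SECTION_NAMES) | {"main_profile"} == set(PERSON_SECTIONS)
```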
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Adds 8 new entries to `PERSON_SECTIONS` so that `get_person_profile` can scrape every profile section LinkedIn offers. No new extractor methods needed — they follow the existing `/details/{section}/` URL pattern:

- `/details/skills/`
- `/details/certifications/`
- `/details/volunteering-experiences/`
- `/details/projects/`
- `/details/publications/`
- `/details/courses/`
- `/details/recommendations/`
- `/details/organizations/`

Also exports `ALL_PERSON_SECTION_NAMES` from `scraping/__init__.py` for convenience.

Changes

- `scraping/fields.py` — 8 new section entries + `ALL_PERSON_SECTION_NAMES` list
- `scraping/__init__.py` — export the new constant
- `tools/person.py` — updated docstring listing available sections
- `tests/test_fields.py` — updated expected keys and all-sections test

Test plan

- `ruff check` and `ruff format` pass