
fix Incorrect overload selection for itertools.accumulate #2876 (#2947)

Draft
asukaminato0721 wants to merge 2 commits into facebook:main from asukaminato0721:2876

Conversation

@asukaminato0721 (Contributor) commented Mar 27, 2026

Summary

Fixes #2876

When matching a generic callable target that still contains unsolved inference variables, Pyrefly now checks the return type before the parameter list.
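The pattern behind #2876 can be sketched in plain Python (this is an illustration of the reported case, not the exact reproducer from the issue): `"{}/{}".format` is an overloaded bound method, with a `LiteralString` overload and a plain-`str` overload, and `accumulate`'s `func` parameter is generic in a still-unsolved type variable. Matching parameters first could pin that variable from the wrong `str.format` overload; matching the return type first resolves it to `str`, which succeeds.

```python
from itertools import accumulate

# "{}/{}".format has two typeshed overloads (LiteralString and str);
# accumulate's func parameter is generic, so overload selection must
# pick the str overload here. The runtime behavior is unambiguous:
parts = ["a", "b", "c"]
paths = list(accumulate(parts, "{}/{}".format))
print(paths)  # ['a', 'a/b', 'a/b/c']
```

At runtime this has always worked; the PR only changes which `str.format` overload the type checker selects when inferring the element type.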

Test Plan

Added a regression test covering itertools.accumulate with a bound str.format.

@meta-cla meta-cla Bot added the cla signed label Mar 27, 2026
@asukaminato0721 asukaminato0721 marked this pull request as ready for review March 27, 2026 14:37
Copilot AI review requested due to automatic review settings March 27, 2026 14:37

Copilot AI left a comment


Pull request overview

Fixes pyrefly’s overload selection so that failed tentative overload candidates don’t leave behind solver state that biases later candidates (e.g., "{}/{}".format used with itertools.accumulate), and adjusts callable subtyping to avoid prematurely pinning inferred vars when the target callable still contains unsolved vars.

Changes:

  • Add solver snapshot/restore support and use it to roll back state after failed tentative overload checks.
  • Update callable subtyping to compare return types before parameters when the target callable contains inferred vars.
  • Add a regression test covering itertools.accumulate with a bound str.format.
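The snapshot/restore bullet can be sketched as follows (a hypothetical illustration; the PR implements this in Rust inside pyrefly's solver, and the names here are illustrative): each overload candidate is checked against a trial copy of the inference-variable bindings, and the bindings are committed only when a candidate succeeds, so a failed tentative check cannot leak solver state that biases later candidates.

```python
def select_overload(candidates, check, bindings):
    """Try candidates in order; commit inference bindings only on success."""
    for candidate in candidates:
        trial = dict(bindings)        # snapshot of solver state
        if check(candidate, trial):   # tentative check may add bindings
            bindings.update(trial)    # commit the successful candidate's state
            return candidate
        # on failure, `trial` is simply dropped: rollback to the snapshot
    return None
```

The design point is that rollback is what makes candidate order irrelevant to correctness: without it, bindings pinned while rejecting an earlier candidate would still constrain the check of the next one.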

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

File Description
pyrefly/lib/test/overload.rs Adds regression test for bound-method overload behavior via itertools.accumulate.
pyrefly/lib/solver/subset.rs Wraps overload candidate checks with rollback and adjusts callable subtyping comparison order when vars are present.
pyrefly/lib/solver/solver.rs Introduces solver snapshot/restore (variables + instantiation errors) to support rollback.


Comment on lines +1515 to +1517
let want_has_vars =
    Type::Callable(Box::new(u.clone())).may_contain_quantified_var();
if want_has_vars {

Copilot AI Mar 27, 2026


want_has_vars currently builds a fresh Type::Callable(Box::new(u.clone())) just to call may_contain_quantified_var(), which allocates and clones the full callable signature. If this check is on a hot path, consider a cheaper predicate (e.g., a helper that inspects u.params/u.ret directly, or adding a Callable::may_contain_quantified_var() method) to avoid the extra clone/allocation.

@github-actions github-actions Bot added size/s and removed size/m labels Mar 27, 2026
@github-actions

Diff from mypy_primer, showing the effect of this PR on open source code:

anyio (https://github.com/agronholm/anyio)
- ERROR src/anyio/_core/_eventloop.py:77:34-38: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_]` in function `anyio.abc._eventloop.AsyncBackend.run` [bad-argument-type]
+ ERROR src/anyio/_core/_eventloop.py:77:34-38: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval]` in function `anyio.abc._eventloop.AsyncBackend.run` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:480:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:480:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:522:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:522:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> str` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:551:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:551:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:557:41-59: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:557:41-59: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:637:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:637:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> str` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/from_thread.py:91:9-13: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_]` in function `anyio.abc._eventloop.AsyncBackend.run_async_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:91:9-13: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval]` in function `anyio.abc._eventloop.AsyncBackend.run_async_from_thread` [bad-argument-type]
- ERROR src/anyio/from_thread.py:119:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.abc._eventloop.AsyncBackend.run_sync_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:119:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> T_Retval` in function `anyio.abc._eventloop.AsyncBackend.run_sync_from_thread` [bad-argument-type]
- ERROR src/anyio/from_thread.py:378:38-42: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval] | T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_] | @_` in function `BlockingPortal._spawn_task_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:378:38-42: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval] | T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval] | T_Retval` in function `BlockingPortal._spawn_task_from_thread` [bad-argument-type]
- ERROR src/anyio/to_thread.py:64:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.abc._eventloop.AsyncBackend.run_sync_in_worker_thread` [bad-argument-type]
+ ERROR src/anyio/to_thread.py:64:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> T_Retval` in function `anyio.abc._eventloop.AsyncBackend.run_sync_in_worker_thread` [bad-argument-type]

bokeh (https://github.com/bokeh/bokeh)
+ ERROR src/bokeh/layouts.py:575:34-68: Argument `list[LayoutDOM | grid.col | grid.row]` is not assignable to parameter `children` with type `list[grid.col | grid.row]` in function `col.__init__` [bad-argument-type]
+ ERROR src/bokeh/layouts.py:575:34-68: Argument `list[LayoutDOM | grid.col | grid.row]` is not assignable to parameter `children` with type `list[grid.col | grid.row]` in function `row.__init__` [bad-argument-type]
- ERROR src/bokeh/layouts.py:575:43-51: Argument `((children: list[LayoutDOM], level: int = 0) -> grid.col | grid.row) | ((item: LayoutDOM, top_level: bool = False) -> LayoutDOM | grid.col | grid.row)` is not assignable to parameter `func` with type `(list[LayoutDOM]) -> grid.col | grid.row` in function `map.__new__` [bad-argument-type]
+ ERROR src/bokeh/layouts.py:575:43-51: Argument `((children: list[LayoutDOM], level: int = 0) -> grid.col | grid.row) | ((item: LayoutDOM, top_level: bool = False) -> LayoutDOM | grid.col | grid.row)` is not assignable to parameter `func` with type `(list[LayoutDOM]) -> LayoutDOM | grid.col | grid.row` in function `map.__new__` [bad-argument-type]

Expression (https://github.com/cognitedata/Expression)
- ERROR expression/collections/block.py:268:30-37: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*@_]) -> @_` in function `starmap` [bad-argument-type]
+ ERROR expression/collections/block.py:268:30-37: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*@_]) -> _TResult` in function `starmap` [bad-argument-type]
- ERROR expression/core/option.py:449:27-33: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*_P]) -> @_` in function `Option.starmap` [bad-argument-type]
+ ERROR expression/core/option.py:449:27-33: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*_P]) -> _TResult` in function `Option.starmap` [bad-argument-type]
- ERROR tests/test_array.py:47:36-50: Argument `(TypedArray[object]) -> TypedArray[str]` is not assignable to parameter `fn1` with type `(TypedArray[int]) -> @_` in function `expression.core.pipe.pipe` [bad-argument-type]
+ ERROR tests/test_array.py:47:36-50: Argument `(TypedArray[object]) -> TypedArray[str]` is not assignable to parameter `fn1` with type `(TypedArray[int]) -> TypedArray[str]` in function `expression.core.pipe.pipe` [bad-argument-type]

freqtrade (https://github.com/freqtrade/freqtrade)
+ ERROR freqtrade/data/metrics.py:333:12-40: Returned type `tuple[Series[float] | Series | float, Series[float] | Series | float]` is not assignable to declared return type `tuple[float, float]` [bad-return]
+ ERROR freqtrade/data/metrics.py:411:20-65: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/data/metrics.py:434:12-24: Returned type `Literal[-100] | Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series[str]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:44:20-33: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:45:16-57: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:45:16-57: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown_relative.py:40:24-37: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown_relative.py:43:20-33: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:67:25-69: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:32-96:10: `-` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:48-88: `*` is not supported between `Literal[0]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:48-88: `*` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_onlyprofit.py:26:16-33: `*` is not supported between `Literal[-1]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_onlyprofit.py:26:16-33: Returned type `Series | int` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:36:16-38:10: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:13-92: `-` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:29-69: `*` is not supported between `Literal[0]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:29-69: `*` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_short_trade_dur.py:49:34-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_short_trade_dur.py:52:16-22: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:87:20-65: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:89:21-66: `+` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:93:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:128:65-78: Argument `Series | float` is not assignable to parameter `final_balance` with type `float` in function `freqtrade.data.metrics.calculate_cagr` [bad-argument-type]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:273:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:560:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:582:25-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:583:30-99: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/rpc/rpc.py:1523:45-55: Argument `Series[str] | Series | int` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR freqtrade/templates/sample_hyperopt_loss.py:54:34-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/templates/sample_hyperopt_loss.py:57:16-22: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]

more-itertools (https://github.com/more-itertools/more-itertools)
- ERROR more_itertools/more.py:950:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR more_itertools/more.py:950:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> map[Unknown]` in function `map.__new__` [bad-argument-type]

pwndbg (https://github.com/pwndbg/pwndbg)
- ERROR pwndbg/commands/onegadget.py:27:1-63: Argument `(((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None)` is not assignable to parameter with type `((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None` [bad-argument-type]
+ ERROR pwndbg/commands/onegadget.py:27:1-63: Argument `(((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None)` is not assignable to parameter with type `((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | None` [bad-argument-type]

setuptools (https://github.com/pypa/setuptools)
- ERROR setuptools/_vendor/more_itertools/more.py:917:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR setuptools/_vendor/more_itertools/more.py:917:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> map[Unknown]` in function `map.__new__` [bad-argument-type]

steam.py (https://github.com/Gobot1234/steam.py)
+ ERROR steam/ext/tf2/currency.py:91:18-45: Object of class `int` has no attribute `as_tuple` [missing-attribute]

schemathesis (https://github.com/schemathesis/schemathesis)
- ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1139:45-60: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> @_` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
+ ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1139:45-60: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> dict[str, Any]` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
- ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1152:45-74: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> @_` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
+ ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1152:45-74: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> dict[str, Any]` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]

pip (https://github.com/pypa/pip)
- ERROR src/pip/_vendor/rich/text.py:1285:16-27: Returned type `Literal[1] | SupportsIndex` is not assignable to declared return type `int` [bad-return]

openlibrary (https://github.com/internetarchive/openlibrary)
- ERROR openlibrary/core/imports.py:144:24-34: Argument `type[ImportItem]` is not assignable to parameter `func` with type `(@_) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR openlibrary/core/imports.py:144:24-34: Argument `type[ImportItem]` is not assignable to parameter `func` with type `(@_) -> ImportItem` in function `map.__new__` [bad-argument-type]

egglog-python (https://github.com/egraphs-good/egglog-python)
- ERROR python/egglog/thunk.py:50:20-49: Argument `Unresolved[@_, *TS]` is not assignable to parameter `state` with type `Error | Resolved[@_] | Resolving | Unresolved[@_, *TS]` in function `Thunk.__init__` [bad-argument-type]
+ ERROR python/egglog/thunk.py:50:20-49: Argument `Unresolved[T, *TS]` is not assignable to parameter `state` with type `Error | Resolved[T] | Resolving | Unresolved[T, *TS]` in function `Thunk.__init__` [bad-argument-type]
- ERROR python/egglog/thunk.py:50:31-33: Argument `(**tuple[*TS]) -> T` is not assignable to parameter `fn` with type `(**tuple[*TS]) -> @_` in function `Unresolved.__init__` [bad-argument-type]
+ ERROR python/egglog/thunk.py:50:31-33: Argument `(**tuple[*TS]) -> T` is not assignable to parameter `fn` with type `(**tuple[*TS]) -> T` in function `Unresolved.__init__` [bad-argument-type]

antidote (https://github.com/Finistere/antidote)
- ERROR tests/core/test_inject.py:549:13-27: Argument `() -> object` is not assignable to parameter `__arg` with type `(Any, ParamSpec(@_)) -> @_` in function `antidote.core.Inject.method` [bad-argument-type]
+ ERROR tests/core/test_inject.py:549:13-27: Argument `() -> object` is not assignable to parameter `__arg` with type `(Any, ParamSpec(@_)) -> object` in function `antidote.core.Inject.method` [bad-argument-type]

aiohttp (https://github.com/aio-libs/aiohttp)
- ERROR aiohttp/cookiejar.py:410:37-80: No matching overload found for function `itertools.accumulate.__new__` called with arguments: (type[accumulate[_T]], list[str], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString
-   (self: LiteralString, *args: object, **kwargs: object) -> str
- ]) [no-matching-overload]

pandas-stubs (https://github.com/pandas-dev/pandas-stubs)
+ ERROR tests/series/test_agg.py:26:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:26:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:27:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:27:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:28:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:28:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:29:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:29:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:34:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:34:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:35:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:35:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:36:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:36:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:37:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:37:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:42:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:42:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:43:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:43:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:44:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:44:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:45:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:45:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:52:22-46: assert_type(Unknown, complex) failed [assert-type]
+ ERROR tests/series/test_agg.py:52:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:97:22-51: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:97:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:98:22-53: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:98:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:99:22-50: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:99:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_cumul.py:35:20-60: assert_type(Series[Series | complex], Series[complex]) failed [assert-type]

bandersnatch (https://github.com/pypa/bandersnatch)
- ERROR src/bandersnatch/mirror.py:814:70-81: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR src/bandersnatch/mirror.py:814:70-81: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]

hydpy (https://github.com/hydpy-dev/hydpy)
+ ERROR hydpy/core/seriestools.py:326:31-36: Argument `Index` is not assignable to parameter `date` with type `Date | datetime | str` in function `hydpy.core.timetools.Date.__new__` [bad-argument-type]

pandas (https://github.com/pandas-dev/pandas)
- ERROR pandas/core/computation/align.py:86:12-19: Returned type `(terms: Unknown) -> tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | tuple[Unknown, None] | Unknown` is not assignable to declared return type `(F) -> F` [bad-return]
+ ERROR pandas/core/computation/align.py:86:12-19: Returned type `(terms: Unknown) -> tuple[dtype | Unknown, None] | tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | Unknown` is not assignable to declared return type `(F) -> F` [bad-return]
- ERROR pandas/core/computation/common.py:46:36-60: No matching overload found for function `pandas.core.dtypes.cast.find_common_type` called with arguments: (list[_Buffer | _HasDType[dtype] | _HasNumPyDType[dtype] | _NestedSequence[bytes | complex | str] | _NestedSequence[_SupportsArray[dtype]] | _SupportsArray[dtype] | bytes | complex | dtype | list[Any] | str | _DTypeDict | tuple[Any, Any] | type[Any] | Unknown | None]) [no-matching-overload]
- ERROR pandas/core/computation/expr.py:172:9-173:41: Argument `Generator[object]` is not assignable to parameter `iterable` with type `Iterable[_Token]` in function `tokenize.untokenize` [bad-argument-type]
+ ERROR pandas/core/computation/expr.py:172:9-173:41: Argument `Generator[object | Unknown]` is not assignable to parameter `iterable` with type `Iterable[_Token]` in function `tokenize.untokenize` [bad-argument-type]
+ ERROR pandas/core/computation/ops.py:266:27-28: Argument `type[bool] | dtype | Unknown` is not assignable to parameter `cls` with type `type` in function `issubclass` [bad-argument-type]

pytest-robotframework (https://github.com/detachhead/pytest-robotframework)
- ERROR tests/type_tests.py:64:5-40: Argument `() -> None` is not assignable to parameter `fn` with type `() -> AbstractContextManager[@_]` in function `pytest_robotframework._WrappedContextManagerKeywordDecorator.__call__` [bad-argument-type]
+ ERROR tests/type_tests.py:64:5-40: Argument `() -> None` is not assignable to parameter `fn` with type `(ParamSpec(@_)) -> AbstractContextManager[@_]` in function `pytest_robotframework._WrappedContextManagerKeywordDecorator.__call__` [bad-argument-type]

mypy (https://github.com/python/mypy)
- ERROR mypy/typeshed/stdlib/builtins.pyi:227:5-17: Argument `(metacls: Self@type, name: str, bases: tuple[type[Any], ...], /, **kwds: Any) -> MutableMapping[str, object]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:227:5-17: Argument `(metacls: Self@type, name: str, bases: tuple[type[Any], ...], /, **kwds: Any) -> MutableMapping[str, object]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> MutableMapping[str, object]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:277:9-21: Argument `(cls: Self@int, bytes: Buffer | Iterable[SupportsIndex] | SupportsBytes, byteorder: Literal['big', 'little'] = 'big', *, signed: bool = False) -> Self@int` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:277:9-21: Argument `(cls: Self@int, bytes: Buffer | Iterable[SupportsIndex] | SupportsBytes, byteorder: Literal['big', 'little'] = 'big', *, signed: bool = False) -> Self@int` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@int` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:370:5-17: Argument `(cls: Self@float, string: str, /) -> Self@float` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:370:5-17: Argument `(cls: Self@float, string: str, /) -> Self@float` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@float` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:640:9-21: Argument `(cls: Self@bytes, string: str, /) -> Self@bytes` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:640:9-21: Argument `(cls: Self@bytes, string: str, /) -> Self@bytes` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@bytes` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:750:9-21: Argument `(cls: Self@bytearray, string: str, /) -> Self@bytearray` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:750:9-21: Argument `(cls: Self@bytearray, string: str, /) -> Self@bytearray` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@bytearray` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1117:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: None = None, /) -> builtins.dict[_T, Any | None]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1117:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: None = None, /) -> builtins.dict[_T, Any | None]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> builtins.dict[_T, Any | None]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1119:9-17: `fromkeys` has type `classmethod[Unknown, Ellipsis, Unknown]` after decorator application, which is not callable [invalid-overload]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1119:9-17: `fromkeys` has type `classmethod[Unknown, Ellipsis, dict[_T, Any | None]]` after decorator application, which is not callable [invalid-overload]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1120:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: _S, /) -> builtins.dict[_T, _S]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1120:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: _S, /) -> builtins.dict[_T, _S]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> builtins.dict[_T, _S]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1122:9-17: `fromkeys` has type `classmethod[@_, @_, @_]` after decorator application, which is not callable [invalid-overload]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1122:9-17: `fromkeys` has type `classmethod[@_, @_, dict[_T, _S]]` after decorator application, which is not callable [invalid-overload]

werkzeug (https://github.com/pallets/werkzeug)
- ERROR tests/test_utils.py:285:5-27: Argument `() -> Literal[42]` is not assignable to parameter `fget` with type `(Any) -> @_` in function `werkzeug.utils.cached_property.__init__` [bad-argument-type]
+ ERROR tests/test_utils.py:285:5-27: Argument `() -> Literal[42]` is not assignable to parameter `fget` with type `(Any) -> int` in function `werkzeug.utils.cached_property.__init__` [bad-argument-type]

rich (https://github.com/Textualize/rich)
- ERROR rich/text.py:1287:16-27: Returned type `Literal[1] | SupportsIndex` is not assignable to declared return type `int` [bad-return]

core (https://github.com/home-assistant/core)
- ERROR homeassistant/components/backup/backup.py:101:56-74: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/backup/backup.py:101:56-74: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/image_upload/__init__.py:223:54-73: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/image_upload/__init__.py:223:54-73: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/media_source/local_source.py:305:49-67: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/media_source/local_source.py:305:49-67: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/panasonic_viera/__init__.py:251:62-66: Argument `(**tuple[*_Ts]) -> _R` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/panasonic_viera/__init__.py:251:62-66: Argument `(**tuple[*_Ts]) -> _R` is not assignable to parameter `target` with type `(**tuple[*@_]) -> _R` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/rfxtrx/entity.py:123:48-51: Argument `(**tuple[Unknown, *_Ts]) -> None` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/rfxtrx/entity.py:123:48-51: Argument `(**tuple[Unknown, *_Ts]) -> None` is not assignable to parameter `target` with type `(**tuple[*@_]) -> None` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/core.py:842:48-54: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR homeassistant/core.py:842:48-54: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> _T` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
- ERROR homeassistant/core.py:859:64-70: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR homeassistant/core.py:859:64-70: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> _T` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]

trio (https://github.com/python-trio/trio)
- ERROR src/trio/_tests/test_signals.py:73:38-57: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_signals.py:73:38-57: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:921:44-86: Unpacked argument `tuple[() -> Task]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> @_, *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:921:44-86: Unpacked argument `tuple[() -> Task]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Task, *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:922:44-81: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, Task]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:922:44-81: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, Task]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Task], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:931:31-72: Unpacked argument `tuple[() -> int]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> @_, *Unknown]` in function `trio._threads.from_thread_run` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:990:33-67: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:990:33-67: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:1121:37-89: Unpacked argument `tuple[(seconds: float) -> Coroutine[Unknown, Unknown, None], Literal[0]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:1121:37-89: Unpacked argument `tuple[(seconds: float) -> Coroutine[Unknown, Unknown, None], Literal[0]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]

xarray (https://github.com/pydata/xarray)
- ERROR xarray/core/groupby.py:544:28-37: Object of class `bytes` has no attribute `data`
- Object of class `complex` has no attribute `data`
- Object of class `str` has no attribute `data` [missing-attribute]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `indexers` with type `Mapping[Any, Any] | None` in function `xarray.core.dataarray.DataArray.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `indexers` with type `Mapping[Any, Any] | None` in function `xarray.core.dataarray.DataArray.reindex` [bad-argument-type]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `labels` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `labels` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.reindex` [bad-argument-type]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `index` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.series.Series.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `index` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.series.Series.reindex` [bad-argument-type]

spark (https://github.com/apache/spark)
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:204:47-49: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:206:45-47: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:207:51-59: No matching overload found for function `pandas.core.series.Series.var` called with arguments: (ddof=Literal[0]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:208:51-59: No matching overload found for function `pandas.core.series.Series.var` called with arguments: (ddof=Literal[2]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:209:45-47: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:210:51-59: No matching overload found for function `pandas.core.series.Series.std` called with arguments: (ddof=Literal[0]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:211:51-59: No matching overload found for function `pandas.core.series.Series.std` called with arguments: (ddof=Literal[2]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/series/test_compute.py:155:63-67: No matching overload found for function `pandas.core.series.Series.diff` called with arguments: (Literal[-1]) [no-matching-overload]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:31:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:31:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:44:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:44:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:57:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:57:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1028:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1046:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1354:17-49: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1455:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:551:17-38: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:552:20-25: Returned type `Series | int` is not assignable to declared return type `int` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:940:17-40: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:942:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:972:17-41: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:973:20-64: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:997:17-38: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:998:20-25: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:1006:17-33: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:1007:20-25: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:397:17-40: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:399:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:420:17-41: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:421:20-64: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/test_udf_profiler.py:551:17-35: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/test_udf_profiler.py:553:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
- ERROR python/pyspark/sql/types.py:2760:25-2768:26: Argument `Generator[DataType]` is not assignable to parameter `iterable` with type `Iterable[StructType]` in function `functools.reduce` [bad-argument-type]

attrs (https://github.com/python-attrs/attrs)
- ERROR tests/test_converters.py:25:23-26: Argument `type[int]` is not assignable to parameter `converter` with type `(ConvertibleToInt, AttrsInstance, Attribute[Unknown]) -> @_` in function `attr.Converter.__init__` [bad-argument-type]
+ ERROR tests/test_converters.py:25:23-26: Argument `type[int]` is not assignable to parameter `converter` with type `(ConvertibleToInt, AttrsInstance, Attribute[Unknown]) -> int` in function `attr.Converter.__init__` [bad-argument-type]
- ERROR tests/test_converters.py:104:23-30: Argument `(_: Unknown, __: Unknown, ___: Unknown) -> float` is not assignable to parameter `converter` with type `(Unknown) -> @_` in function `attr.Converter.__init__` [bad-argument-type]
+ ERROR tests/test_converters.py:104:23-30: Argument `(_: Unknown, __: Unknown, ___: Unknown) -> float` is not assignable to parameter `converter` with type `(Unknown) -> float` in function `attr.Converter.__init__` [bad-argument-type]

prefect (https://github.com/PrefectHQ/prefect)
- ERROR src/prefect/server/database/_migrations/env.py:190:35-52: Argument `(connection: AsyncEngine) -> None` is not assignable to parameter with type `(Connection, ParamSpec(@_)) -> @_` in function `sqlalchemy.ext.asyncio.engine.AsyncConnection.run_sync` [bad-argument-type]
+ ERROR src/prefect/server/database/_migrations/env.py:190:35-52: Argument `(connection: AsyncEngine) -> None` is not assignable to parameter with type `(Connection, ParamSpec(@_)) -> None` in function `sqlalchemy.ext.asyncio.engine.AsyncConnection.run_sync` [bad-argument-type]

jax (https://github.com/google/jax)
+ ERROR jax/_src/pallas/mosaic/interpret/interpret_pallas_call.py:2032:53-67: Argument `DynamicGridDim | int | Unknown` is not assignable to parameter `y` with type `Array | builtins.bool | numpy.bool | complex | float | int | ndarray | number` in function `jax.numpy.BinaryUfunc.__call__` [bad-argument-type]

scipy-stubs (https://github.com/scipy/scipy-stubs)
+ ERROR tests/sparse/test_construct.pyi:291:12-130: assert_type(coo_array[_Numeric, tuple[int, int]] | coo_matrix, coo_array[floating[_32Bit], tuple[int, int]] | coo_matrix[floating[_32Bit]]) failed [assert-type]

@github-actions

Primer Diff Classification

❌ 9 regression(s) | ✅ 2 improvement(s) | ➖ 18 neutral | 29 project(s) total | +153, -61 errors

9 regression(s) across bokeh, freqtrade, steam.py, openlibrary, pandas-stubs, hydpy, spark, jax, scipy-stubs. Dominant error kinds: bad-argument-type, unsupported-operation on pandas Series arithmetic, and bad-return from pandas computations; the steam.py regression is a missing-attribute error on as_tuple(). 2 improvement(s) across pip, rich.
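The motivating call from issue #2876 is well-typed at runtime: a bound `str.format` method used as the binary function of `itertools.accumulate`, which should match the `(str, str) -> str` shape the str overload expects.

```python
from itertools import accumulate

# Bound "{}/{}".format acts as the binary accumulate function:
# each step is format(total, element).
parts = list(accumulate(["a", "b", "c"], "{}/{}".format))
# parts == ["a", "a/b", "a/b/c"]
```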

| Project | Verdict | Changes | Error Kinds | Root Cause |
|---|---|---|---|---|
| anyio | ➖ Neutral | +10, -10 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| bokeh | ❌ Regression | +2, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| Expression | ➖ Neutral | +3, -3 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| freqtrade | ❌ Regression | +30 | unsupported-operation on pandas Series arithmetic | pyrefly/lib/solver/subset.rs |
| more-itertools | ➖ Neutral | +1, -1 | bad-argument-type | |
| pwndbg | ➖ Neutral | +1, -1 | bad-argument-type | |
| setuptools | ➖ Neutral | +1, -1 | bad-argument-type | |
| steam.py | ❌ Regression | +1 | missing-attribute | as_tuple() |
| schemathesis | ➖ Neutral | +2, -2 | bad-argument-type | |
| pip | ✅ Improvement | -1 | bad-return | pyrefly/lib/solver/subset.rs |
| openlibrary | ❌ Regression | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| egglog-python | ➖ Neutral | +2, -2 | bad-argument-type | |
| antidote | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| pandas-stubs | ❌ Regression | +33 | no-matching-overload on Series aggregation methods | pyrefly/lib/solver/subset.rs |
| bandersnatch | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| hydpy | ❌ Regression | +1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| pandas | ➖ Neutral | +3, -3 | bad-argument-type, bad-return | pyrefly/lib/solver/subset.rs |
| pytest-robotframework | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| mypy | ➖ Neutral | +9, -9 | bad-argument-type, invalid-overload | pyrefly/lib/solver/subset.rs |
| werkzeug | ➖ Neutral | +1, -1 | bad-argument-type | |
| rich | ✅ Improvement | -1 | bad-return | pyrefly/lib/solver/subset.rs |
| core | ➖ Neutral | +7, -7 | bad-argument-type with @_ inference failure (new) | pyrefly/lib/solver/subset.rs |
| trio | ➖ Neutral | +5, -6 | bad-argument-type | |
| xarray | ➖ Neutral | +1, -1 | bad-argument-type | |
| spark | ❌ Regression | +31, -4 | no-matching-overload on Series.mean/var | pyrefly/lib/solver/subset.rs |
| attrs | ➖ Neutral | +2, -2 | bad-argument-type | |
| prefect | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| jax | ❌ Regression | +1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| scipy-stubs | ❌ Regression | +1 | assert-type | pyrefly/lib/solver/subset.rs |
Detailed analysis

❌ Regression (9)

bokeh (+2, -1)

The PR improved overload resolution for generic callables. The new error on col.__init__ (line 575:34-68) correctly identifies that list(map(traverse, item.children)) produces list[LayoutDOM | grid.col | grid.row] but col expects list[grid.col | grid.row]. This is a genuine type issue — traverse can return a bare LayoutDOM in its else branch, which doesn't match row | col. The map.__new__ error is essentially unchanged (same root cause, slightly different inferred target type). Pyright confirms both errors. The removed error was replaced by a more accurate version.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types now checks return type before parameter list when the target has inference variables. This changes which overload of map.__new__ is selected, leading to a more accurate return type inference (LayoutDOM | grid.col | grid.row instead of grid.col | grid.row), which in turn reveals the col.__init__ children type mismatch.
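The bokeh pattern can be reconstructed in miniature (class and function names simplified from the report; this is not bokeh's actual code):

```python
class LayoutDOM: ...
class row(LayoutDOM): ...
class col(LayoutDOM): ...

def traverse(item) -> LayoutDOM:
    # Returns the item unchanged when it is already a row/col ...
    if isinstance(item, (row, col)):
        return item
    # ... but can return a bare LayoutDOM in the else branch, so
    # list(map(traverse, children)) has element type
    # LayoutDOM | row | col, not just row | col.
    return LayoutDOM()

children = list(map(traverse, [row(), "not a layout"]))
```

A constructor expecting `list[row | col]` is therefore right to reject this list, which is the error the improved overload selection now surfaces.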

freqtrade (+30)

unsupported-operation on pandas Series arithmetic: 18 errors claiming operations like / are not supported between Series[str] and float. These are false positives — the pandas-stubs annotations declare DataFrame.__getitem__ with a str key as returning Series[str], where the type parameter incorrectly reflects the key type rather than the column's value dtype. The PR's overload resolution change causes pyrefly to now select this overload where it previously selected a different one, surfacing this pandas-stubs limitation. At runtime, trades["profit_abs"] returns a numeric Series that fully supports division by float. 0/18 co-reported by mypy/pyright.
bad-return from pandas computations: 10 errors where functions returning computed pandas values (e.g., expectancy, expectancy_ratio, calmar_ratio) are flagged as returning Series[float] | Series | float instead of float. Inside conditional blocks, these variables are reassigned to results of pandas arithmetic (e.g., (winrate * average_win) - (loserate * average_loss)) where intermediate values like winrate and average_win come from pandas Series operations. The changed overload resolution now causes pyrefly to infer these intermediate results as union types including Series, which propagates to the final return type. At runtime these are scalar floats. 0/10 co-reported by mypy/pyright.
bad-argument-type passing pandas results as float: 2 errors where pandas computation results are passed to functions expecting float. Same root cause — incorrect overload resolution makes pyrefly think the values are Series | float instead of float. 0/2 co-reported by mypy/pyright.

Overall: The PR fixes a real bug (#2876) for itertools.accumulate overload selection, but as a side effect, it changes overload resolution for pandas operations in a way that selects incorrect overloads. The 30 new errors are all false positives — pandas operations like Series.sum() / float work fine at runtime and are not flagged by mypy or pyright. The root cause is that checking return type before params for generic callables with inference vars causes different (wrong) overload matches for pandas stubs.
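The flagged pattern can be sketched without pandas itself (hypothetical minimal Series/DataFrame stand-ins; the Series[str] annotation mimics the pandas-stubs declaration):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Series(Generic[T]):
    """Stand-in for pandas.Series (illustration only)."""
    def __init__(self, values):
        self.values = values
    def __truediv__(self, other: float) -> "Series[float]":
        return Series([v / other for v in self.values])

class DataFrame:
    def __init__(self, cols):
        self.cols = cols
    # Stub-style annotation: the return type parameter echoes the
    # key type (str), not the column's value dtype.
    def __getitem__(self, key: str) -> "Series[str]":
        return self.cols[key]

trades = DataFrame({"profit_abs": Series([2.0, 4.0])})
# Fine at runtime (numeric column); the Series[str] annotation is what
# makes the regressed resolution report `/` as unsupported.
halved = trades["profit_abs"] / 2.0
```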


Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (the Callable matching branch) now checks return type before params when the target has inference variables (want_has_vars). This changes which overloads get selected for pandas stub methods like __getitem__, sum(), __truediv__, etc. The new order causes pyrefly to select overloads that return Series[str] instead of numeric types, cascading into unsupported-operation, bad-return, and bad-argument-type errors throughout freqtrade's pandas-heavy code.

steam.py (+1)

The analysis is factually correct. Decimal implements __round__ returning Decimal, so round(fractional, 2) where fractional: Decimal should return Decimal via the round(__number: SupportsRound[_T], __ndigits: int) -> _T overload. Pyrefly incorrectly infers the return type as int (likely selecting the wrong overload), causing a false positive when .as_tuple() is called on the result. The code is correct at runtime and other type checkers do not flag it.
Attribution: The change to is_subset_params/is_subset_eq ordering in pyrefly/lib/solver/subset.rs altered how generic callable matching works when the target has inference variables. This changed the overload resolution order (return type checked before parameters). The round() builtin has overloads: round(number, ndigits: int) -> ... — the overload that takes ndigits should return the same type as the input (for Decimal, it returns Decimal), but the changed matching order likely causes pyrefly to select the wrong overload, resolving round(Decimal, int) to int instead of Decimal. This causes the downstream missing-attribute error since int doesn't have as_tuple().
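The runtime behavior described above is easy to confirm: Decimal implements __round__, so round() with an ndigits argument returns Decimal, and as_tuple() is valid on the result.

```python
from decimal import Decimal

fractional = Decimal("3.14159")
# Resolves via the round(number: SupportsRound[_T], ndigits: int) -> _T
# overload: the result is a Decimal, not an int.
rounded = round(fractional, 2)
digits = rounded.as_tuple()  # valid: Decimal has as_tuple()
# rounded == Decimal("3.14"); digits.exponent == -2
```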

openlibrary (+1, -1)

This is a false positive that improved but was not fully resolved. The old error said type[ImportItem] is not assignable to (@_) -> @_, and the new error says it's not assignable to (@_) -> ImportItem. Both are false positives because type[ImportItem] IS a valid callable that returns ImportItem — calling a class constructor produces an instance of that class. The @_ in the parameter position indicates an unresolved inference variable for the iterable element type. The PR improved inference: the return type now correctly resolves to ImportItem instead of remaining as the unresolved @_. This is a meaningful improvement — the new error message is more informative and shows that return type inference is working correctly. The remaining issue is that the parameter type @_ is still not being matched against type[ImportItem]'s constructor signature. Pyright correctly accepts this code (as noted in the error annotation), confirming this is a false positive. Overall this represents an improvement: the false positive persists but with better type inference (partially resolved rather than fully unresolved).
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset callable matching logic reorders the checking: when the target callable has quantified variables (want_has_vars), it now checks return type before parameters. This changed the overload resolution for map.__new__, resulting in a different overload being selected (one where the return type is now ImportItem instead of @_), but the parameter inference still fails, producing the @_ in the parameter position.
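That type[ImportItem] is a valid callable returning ImportItem can be checked directly (hypothetical minimal ImportItem, standing in for openlibrary's class):

```python
class ImportItem:
    """Minimal stand-in for openlibrary's ImportItem (illustration only)."""
    def __init__(self, record):
        self.record = record

# A class object is a callable producing instances of itself, so passing
# it where a (@_) -> ImportItem function is expected is sound.
items = list(map(ImportItem, [{"title": "A"}, {"title": "B"}]))
```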

pandas-stubs (+33)

no-matching-overload on Series aggregation methods: 16 new no-matching-overload errors on series.mean(), series.median(), series.std(), series.var() calls. These are valid calls on pd.Series with bool/int/float/complex/Timestamp/Timedelta element types. The pandas-stubs define overloads for these methods that mypy and pyright resolve correctly. The PR's change to check return type before params when inference vars are present causes pyrefly to fail overload matching. These are false positives — regression.
assert-type failures cascading from wrong overload: 17 new assert-type errors like assert_type(Unknown, float) failed. These cascade from the no-matching-overload failures — when pyrefly can't find a matching overload, it infers Unknown as the return type, which then fails the assert_type check against float. The Unknown type in the error message is a direct consequence of the overload resolution failure, not a real type issue. These are false positives — regression.

Overall: This is a type stubs project (pandas-stubs) that is extensively tested against mypy and pyright. The test code is correct — pd.Series([True, False, True]).mean() should return float, and the stubs are designed to express this. The PR's change to overload resolution order (checking return type before params when inference vars are present) fixes the specific itertools.accumulate case from issue #2876 but breaks overload resolution for pandas Series aggregation methods. The 16 no-matching-overload errors indicate pyrefly can no longer find a valid overload for these well-typed calls, and the 17 assert-type failures are cascading from the wrong type being inferred. Since 0/33 errors are confirmed by mypy or pyright, these are all false positives introduced by the PR.

Attribution: The change in pyrefly/lib/solver/subset.rs at the is_subset_params/is_subset_eq block reverses the order of checking when the target callable (want) contains quantified variables — it now checks return type before parameters. This change in overload resolution order causes pyrefly to select the wrong overload for pandas Series methods like mean(), median(), std(), and var(). These methods have multiple overloads in pandas-stubs (e.g., overloads for different Series element types like bool, int, float, complex, Timestamp, Timedelta). The new checking order causes pyrefly to fail to match the correct overload, producing both no-matching-overload errors (can't find any matching overload) and assert-type errors (the inferred return type doesn't match float).
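The shape of the stubs involved can be illustrated with a toy overloaded method (a hypothetical `Box` class standing in for `pd.Series`, not the actual pandas-stubs definitions): the overloads differ only in the element type of `self`, so a checker must match self/parameter types and return types in a consistent order to select the right one.

```python
from typing import Generic, TypeVar, overload

T = TypeVar("T")

class Box(Generic[T]):
    """Toy stand-in for pd.Series: overloads keyed on the element type of self."""
    def __init__(self, items: list[T]) -> None:
        self.items = items

    @overload
    def mean(self: "Box[bool]") -> float: ...
    @overload
    def mean(self: "Box[int]") -> float: ...
    @overload
    def mean(self: "Box[complex]") -> complex: ...
    def mean(self):
        return sum(self.items) / len(self.items)

# A checker should select the Box[bool] overload here and infer float.
m = Box([True, False, True]).mean()
assert m == 2 / 3
```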

hydpy (+1)

The variable date1 comes from iterating over reversed(dataframe_resampled.index) where dataframe_resampled is a pandas DataFrame. The .index property returns a DatetimeIndex (since the index was set via pandas.date_range), and iterating over it yields pandas.Timestamp objects, which are subclasses of datetime.datetime. The Date.__new__ method accepts datetime, so Timestamp should be assignable. Pyrefly is reporting the type as Index (the base pandas Index class) rather than recognizing the more specific DatetimeIndex/Timestamp type. This is a type inference regression — the PR's change to check return types before parameters in generic callable subset checking has caused a different (worse) type to be inferred for the iteration variable. Neither mypy nor pyright flag this, confirming it's a false positive.
Attribution: The PR changed the order of subset checking in pyrefly/lib/solver/subset.rs — when the target callable has inference variables, it now checks the return type before parameters. This reordering of constraint solving could change how types are inferred in complex generic contexts. The reversed() call on a pandas DatetimeIndex likely involves generic type resolution (e.g., reversed returns Iterator[T] where T is inferred from the input). The changed order of return-type-first checking may have caused pyrefly to infer a less specific type (Index instead of Timestamp) for the iteration variable date1, leading to this false positive.
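The runtime relationship the stubs should capture is easy to verify: iterating `reversed()` over a sequence yields the element type, and a `datetime` subclass (standing in for `pandas.Timestamp`, which subclasses `datetime.datetime`) is accepted wherever `datetime` is expected. A minimal stdlib sketch, where `Stamp` and `to_iso` are hypothetical stand-ins:

```python
from datetime import datetime

class Stamp(datetime):
    """Hypothetical stand-in for pandas.Timestamp (a datetime subclass)."""

def to_iso(d: datetime) -> str:  # accepts any datetime, like Date.__new__
    return d.isoformat()

dates = [Stamp(2024, 1, day) for day in (1, 2, 3)]
# reversed() is generic: iterating yields the element type, here Stamp
last_first = [to_iso(d) for d in reversed(dates)]
assert last_first[0] == "2024-01-03T00:00:00"
assert isinstance(dates[0], datetime)  # Stamp is assignable where datetime is expected
```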

spark (+31, -4)

no-matching-overload on Series.mean/var: False positives. Calling .mean() or .var() on a boolean pd.Series is valid and works at runtime. The changed overload resolution fails to match any overload.
unsupported-operation += int and Series[str]: Cascade false positives. The root cause is wrong overload selection for .sum() which now returns Series[str] instead of a scalar, making += with int appear invalid.
bad-return Series | int not assignable to int: Cascade false positives. Same root cause — .sum() resolving to wrong overload that includes Series in return type.
bad-argument-type on .sum() self parameter: Changed false positives. The old errors were already false positives but the new ones have worse type inference (Series[str] | Series | float instead of float).

Overall: The PR fixes a real bug with itertools.accumulate overload selection but introduces 31 new false positives across pyspark's pandas test files. The changed overload resolution order (return-type-first when inference vars present) causes pyrefly to select wrong overloads for pandas Series statistical methods, producing cascading errors. All 31 new errors are pyrefly-only (0 confirmed by mypy/pyright), and the code is clearly correct at runtime.

Attribution: The change in pyrefly/lib/solver/subset.rs at the is_subset_params/is_subset_eq reordering (lines ~1481-1490) causes pyrefly to check return types before parameters when the target callable has quantified variables. This changes which overload gets selected for pandas Series methods like .sum(), .mean(), .var(), leading to incorrect overload selection that cascades into all 31 new errors.

jax (+1)

This is a false positive introduced by the PR. The variable num_iterations at line 2032 is the result of functools.reduce(jnp.multiply, grid) (line 1972). The grid tuple is constructed on lines 1799-1803 where DynamicGridDim sentinels are replaced with actual dynamic values, but the resulting tuple type still includes DynamicGridDim in the union because the type checker can't track the conditional replacement. At runtime, num_iterations will always be a valid numeric type (JAX array or int). The jnp.minimum call is perfectly valid. The PR's change to check return types before parameters in callable subset checking (in subset.rs) caused a different overload of BinaryUfunc.__call__ to be selected, one that is more restrictive about the argument type. Neither mypy nor pyright flag this, confirming it's a pyrefly-specific false positive.
Attribution: The change in pyrefly/lib/solver/subset.rs modifies the order of subset checking for callable types when the target has quantified variables — it now checks return type before parameters. This change affects how jnp.minimum (which is a BinaryUfunc.__call__ with overloads) resolves its overload. The new resolution order likely selects a different (more restrictive) overload for jnp.minimum, causing it to reject DynamicGridDim | int | Unknown as the second argument where previously a more permissive overload was selected. The num_iterations variable has type DynamicGridDim | int | Unknown because it comes from functools.reduce(jnp.multiply, grid) where grid contains DynamicGridDim | int elements (line 1799-1803), and the reduce doesn't fully narrow the type. The Unknown likely comes from the untyped functools.reduce return.

scipy-stubs (+1)

This is a regression. The error shows that pyrefly now infers coo_array[_Numeric, tuple[int, int]] | coo_matrix instead of the expected coo_array[floating[_32Bit], tuple[int, int]] | coo_matrix[floating[_32Bit]]. Two problems are visible: (1) _Numeric appears instead of the properly resolved scalar type — _Numeric is likely an internal type variable from the scipy stubs that should have been resolved to the concrete scalar type (here floating[_32Bit], which is the instantiation of ScalarType for the test variables any_arr and any_mat). This indicates a type variable resolution failure during overload matching. (2) coo_matrix appears without type parameters, whereas the expected type is coo_matrix[floating[_32Bit]] (i.e., coo_matrix[ScalarType] in the source). This is a type stubs project where the assert_type tests define the expected behavior that mypy/pyright agree with. The PR's change to check return types before parameters when the target has inference variables fixes the itertools.accumulate case but breaks overload resolution for this scipy sparse function, specifically the block_diag function when called with a mix of array and matrix types. The fix improved one case but regressed another.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for Type::Callable cases reverses the order of checking: when the target callable has quantified variables (want_has_vars), it now checks return type before parameters. This reordering changes how type inference variables get resolved during overload selection, which causes sparse.block_diag([any_arr, any_mat]) to resolve to a different overload. Previously pyrefly inferred coo_matrix[ScalarType] | coo_array[ScalarType, tuple[int, int]] (matching the assert_type), but now it infers coo_array[_Numeric, tuple[int, int]] | coo_matrix — note the _Numeric instead of ScalarType and the unparameterized coo_matrix, indicating the type variables are not being properly resolved for this particular overload.

✅ Improvement (2)

pip (-1)

The PR fixes overload/generic callable resolution by checking return types before parameters when the target has inference variables. This corrects the inference for functools.reduce(math.gcd, list[int]) from SupportsIndex to int, removing a false positive bad-return error. The method detect_indentation correctly returns int.
Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs — when matching a generic callable target with inference variables, pyrefly now checks the return type before the parameter list. This improved inference for reduce(gcd, ...) by allowing the return type constraint to guide type variable resolution, correctly yielding int instead of SupportsIndex.
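The fixed inference corresponds to this runtime behavior: `math.gcd` is annotated to take `SupportsIndex` arguments but returns `int`, so the return type should drive the type variable of `reduce` (a minimal runnable check of the `reduce(gcd, ...)` pattern):

```python
import functools
import math

# math.gcd(*ints: SupportsIndex) -> int; reduce's type variable should
# resolve to int (the return type), not widen to SupportsIndex.
indentations = [12, 18, 24]
unit = functools.reduce(math.gcd, indentations)
assert unit == 6
assert isinstance(unit, int)
```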

rich (-1)

The removed error was a false positive caused by incorrect overload/generic callable resolution. reduce(gcd, list[int]) should return int, but pyrefly was incorrectly inferring SupportsIndex due to wrong overload selection order. The PR fix correctly resolves this by checking return types first when the target has inference variables.
Attribution: The change in pyrefly/lib/solver/subset.rs reorders subset checking for callable types: when the target callable has quantified (inference) variables, the return type is checked before parameters. This fixes overload/generic resolution for cases like reduce(gcd, list[int]), where checking the return type first allows correct inference of the type variable, yielding int instead of SupportsIndex.

➖ Neutral (18)

anyio (+10, -10)

This is a net-neutral change in terms of error count (10 added, 10 removed), but the errors themselves are all false positives — both old and new. The @_ in the error messages like (**tuple[*@_]) -> Awaitable[T_Retval] indicates inference failure for TypeVarTuple parameters. The code is correct: to_thread.run_sync is called with bound methods like self._path.exists (which takes no args) and abandon_on_cancel=True as a keyword argument, and async_backend.run is called with properly typed arguments. Neither mypy nor pyright flags any of these. The PR improved inference for the return type (previously showed @_, now shows T_Retval — meaning the TypeVar is properly propagated rather than failing entirely) but still fails on the parameter type with @_, so the false positives persist with slightly different messages. The old errors showed (**tuple[*@_]) -> Awaitable[@_] while the new errors show (**tuple[*@_]) -> Awaitable[T_Retval] — a marginal improvement in TypeVarTuple handling but the end result is the same number of incorrect errors. Both old and new errors are false positives being replaced by slightly better-diagnosed false positives.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types reorders the checking: when the target callable (want) has quantified variables, it now checks is_subset_eq on return types first, then is_subset_params on parameters. Previously it checked parameters first, then return types. This change successfully resolves the return type variable (changing Awaitable[@_] to Awaitable[T_Retval] in the error messages) but the parameter inference still fails (still showing tuple[*@_]), producing slightly different but equally wrong error messages.

Expression (+3, -3)

This is a net-neutral change. 3 false positives were removed and 3 equivalent false positives were added at the exact same locations. The error messages changed slightly (return type is now resolved to _TResult instead of @_), but the fundamental issue — pyrefly's inability to unify TypeVarTuple inference variables — remains. The code is correct per the variadic generics spec. Since neither mypy nor pyright flags these, and the types genuinely should unify, these remain false positives.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types now checks return types before parameters when the target has inference vars. This resolves the return type (changing @_ to _TResult in the error message) but the parameter matching still fails for TypeVarTuple patterns, producing slightly different but equivalent false positive errors.

more-itertools (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

pwndbg (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

setuptools (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

schemathesis (+2, -2)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

egglog-python (+2, -2)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

antidote (+1, -1)

This is essentially a message wording change. Both before and after the PR, pyrefly reports a bad-argument-type error at the same location (line 549). The only difference is that the expected type in the error message changed from (Any, ParamSpec(@_)) -> @_ to (Any, ParamSpec(@_)) -> object. The @_ inference variable in the return type position got resolved to object due to the PR's change in checking order (return type checked before params when the target has inference vars). The underlying error is the same: a zero-argument function () -> object cannot satisfy a method signature that expects a self parameter. This is a correct error — the test code intentionally passes an invalid function to inject.method inside a pytest.raises(TypeError) block. The slight improvement in error message clarity (showing object instead of @_) is neutral-to-positive.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types reverses the order of checking: when the target callable has quantified variables, it now checks the return type before the parameter list. This causes the return type to be resolved first (yielding object instead of @_), which changes the error message but not the error itself. The error was present both before and after the PR.

bandersnatch (+1, -1)

Both the old and new errors are false positives caused by pyrefly's inability to properly handle the run_in_executor overloads when passed a bound method like path.exists. The @_ types in both error messages confirm inference failures. The PR changed the order of checking (return type before params for generic callables with inference vars), which resolved the return type from @_ to bool but didn't fix the underlying parameter inference issue. The net effect is neutral — one false positive was replaced by a slightly different false positive with a marginally better error message (at least the return type is now concrete). Neither mypy nor pyright flag this code.
Attribution: The change in pyrefly/lib/solver/subset.rs in is_subset_params/is_subset_eq ordering (checking return type before parameters when the target has inference vars) caused the return type to be resolved to bool instead of @_, while the parameter types remain unresolved (@_). This is why the error message changed from -> @_ to -> bool — the return type is now checked first and successfully resolved, but the parameter matching still fails due to the inference variables.
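The runtime validity of the flagged pattern can be confirmed with the stdlib directly: `loop.run_in_executor` accepts a zero-argument bound method such as `path.exists` and resolves to its `bool` result (a minimal sketch of the bandersnatch pattern, using a local path rather than bandersnatch's own code):

```python
import asyncio
from pathlib import Path

async def check(path: Path) -> bool:
    loop = asyncio.get_running_loop()
    # path.exists is a bound method () -> bool, a valid run_in_executor callable
    return await loop.run_in_executor(None, path.exists)

assert asyncio.run(check(Path("."))) is True
```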

pandas (+3, -3)

This is a mixed bag. The removal of the no-matching-overload false positive in common.py is a clear improvement. The bad-argument-type in expr.py changed from Generator[object] to Generator[object | Unknown] — essentially the same error with slightly worse inference (adding Unknown). The bad-return in align.py similarly changed slightly in wording (the union member tuple[Unknown, None] became tuple[dtype | Unknown, None]). The new error in ops.py at line 266 is about issubclass(t, (datetime, np.datetime64)) where t is inferred to potentially include Unknown, causing a bad-argument-type because issubclass expects a type as its first argument. This is a new false positive — at runtime t is always a type (obtained from self.return_type.type or self.return_type which are dtype/type objects), but the checker cannot verify this due to Unknown propagation. Overall: 1 false positive removed, 1 new false positive added, 2 errors changed wording with slightly degraded inference (Unknown contamination). Roughly neutral with a slight lean toward regression due to the Unknown contamination in inferred types.
Attribution: The change to is_subset_eq/is_subset_params ordering in pyrefly/lib/solver/subset.rs when want_has_vars is true (checking return type before params) changed how generic callables are resolved. This altered type inference for result_type_many and related functions, fixing the no-matching-overload false positive but slightly changing inferred types elsewhere, producing object | Unknown instead of object in generator expressions.

pytest-robotframework (+1, -1)

Both the old and new errors correctly flag line 64 as a type error (the code comments confirm this is intentional). However, the error message quality has degraded. The old message showed () -> AbstractContextManager[@_] — the parameters were resolved, making it clear the issue is the return type mismatch. The new message shows (ParamSpec(@_)) -> AbstractContextManager[@_] — the ParamSpec is unresolved, making the error message less informative and harder to understand. The @_ in the ParamSpec is an inference artifact that shouldn't be exposed to users. While the error is still correctly reported, the message quality is worse. This is a minor regression in error message quality, though the error detection itself remains correct. Since the error was correct before and is still correct now (just with a worse message), and the structural signal shows @_ types indicating inference issues, this leans toward a neutral-to-slight-regression change. The error is still caught, but the message is less clear.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types reversed the order of checking: when the target callable has quantified variables, it now checks return type before parameters. This changed how the error message is generated for line 64 — instead of resolving the ParamSpec first and showing () -> AbstractContextManager[@_], it now fails on the return type first and shows the unresolved (ParamSpec(@_)) -> AbstractContextManager[@_].

mypy (+9, -9)

This is a neutral change with respect to this project. The PR swapped 9 false positive errors for 9 different false positive errors on the exact same locations in builtins.pyi. All errors involve pyrefly's inability to correctly handle @classmethod decorated methods being matched against classmethod.__init__'s generic Callable[Concatenate[type[_T], _P], _R_co] parameter. Neither mypy nor pyright flag any of these. The old errors had @_/Unknown in the return type position; the new errors have the concrete return type resolved but still fail on parameter matching. The invalid-overload errors similarly changed from classmethod[Unknown, Ellipsis, Unknown] to classmethod[Unknown, Ellipsis, dict[_T, Any | None]] — slightly better inference but still wrong. The root cause (pyrefly can't properly handle classmethod + Concatenate + ParamSpec together) remains unfixed.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types now checks return type before parameters when the target callable has inference variables (want_has_vars). This changes the order of type variable resolution when matching functions passed to classmethod.__init__. Previously, parameters were checked first (failing to resolve _R_co properly, showing @_/Unknown), now return type is checked first (resolving _R_co to the concrete return type like MutableMapping[str, object] or dict[_T, Any | None], but then failing on parameter matching with a different error message). The net effect is the same number of errors (9 added, 9 removed) on the same locations, just with different type representations in the error messages.

werkzeug (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

core (+7, -7)

bad-argument-type with @_ inference failure (new): All 7 new errors contain @_ in the parameter type (**tuple[*@_]) -> bool, indicating unresolved inference variables. The code is correct in terms of type compatibility — Path.exists, Path.is_file, etc. are valid callables for async_add_executor_job. 0/7 co-reported by mypy or pyright. These are false positives. Note: one location (local_source.py:305) has a real bug (missing await) but pyrefly's error is about argument type, not the missing await.
bad-argument-type with @_ inference failure (removed): All 7 removed errors had the same pattern but with (**tuple[*@_]) -> @_ — the return type was also unresolved. These were also false positives. Removing them is good, but they were replaced by equally-wrong errors with slightly different messages.

Overall: This is a neutral change. Both the old and new errors are false positives — pyrefly incorrectly rejects valid code where bound methods like Path.exists and Path.is_file are passed to async_add_executor_job. The PR changed the order of return-type vs parameter checking for generic callables with inference variables, which changed the error message (the return type now resolves to bool instead of @_) but didn't fix or worsen the underlying inference failure. The @_ in (**tuple[*@_]) confirms this is still an inference failure. Neither mypy nor pyright flags any of these locations for the reported error type. The count of errors (7 added, 7 removed) is identical, affecting the same files and lines, with only the error message wording changing.

Note: One location (homeassistant/components/media_source/local_source.py:305) has a real bug — self.hass.async_add_executor_job(media_path.is_file) is missing await, so it checks the truthiness of a coroutine object rather than the result. However, pyrefly's reported error is about argument type compatibility, not the missing await, so the error is still a false positive with respect to what it's actually reporting.
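The missing-await bug noted above is observable at runtime: a coroutine (or Future) object is always truthy, regardless of the boolean the call would eventually produce (a minimal stdlib sketch, not Home Assistant's API):

```python
import asyncio

async def is_file() -> bool:
    return False  # the real answer is False

async def main() -> bool:
    result = is_file()     # missing `await`: result is a coroutine object
    truthy = bool(result)  # always True, masking the False result
    result.close()         # silence the "never awaited" warning
    return truthy

assert asyncio.run(main()) is True
```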

Attribution: The change in pyrefly/lib/solver/subset.rs modified the order of subset checking for callable types when the target has inference variables (want_has_vars). Previously, params were checked before return type; now when the target has inference vars, return type is checked first. This reordering changed the inference failure mode: the return type constraint now resolves differently (from @_ to bool), but the parameter constraint still fails, producing (**tuple[*@_]) -> bool instead of (**tuple[*@_]) -> @_. The fundamental inference failure persists — pyrefly still can't properly match Path.exists (which has signature (self: Path, *, follow_symlinks: bool = True) -> bool) against the generic callable parameter of async_add_executor_job.

trio (+5, -6)

Most errors at same locations with same error kinds — message wording changed with minor residual noise, no significant behavioral impact.

xarray (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

attrs (+2, -2)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

prefect (+1, -1)

This is essentially a message quality improvement on an existing error. Both the old and new errors correctly flag the same real bug: do_run_migrations is typed as taking AsyncEngine but run_sync passes a Connection. The only change is that the expected type in the error message went from (Connection, ParamSpec(@_)) -> @_ to (Connection, ParamSpec(@_)) -> None — the return type inference improved from an unresolved variable to the concrete None type. The error was correct before and remains correct now, with a slightly clearer message. The ParamSpec(@_) still appears in both versions, which is a remaining inference limitation but doesn't affect correctness of the diagnostic.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset method for callable types now checks the return type before parameters when the target type contains quantified variables (want_has_vars). This reordering allows the return type to be resolved first (yielding None instead of @_), which then provides better context for parameter checking. The error message improved from -> @_ to -> None because the return type is now checked and resolved first.

Suggested fixes

Summary: The PR's change to check return type before parameters when the target callable has inference variables fixes itertools.accumulate but breaks overload resolution for pandas operations and other generic callables, causing 97+ pyrefly-only false positives across 7 projects.

1. In the is_subset method for Type::Callable in pyrefly/lib/solver/subset.rs, the current fix unconditionally reverses the checking order when want_has_vars is true. This is too broad: it fixes itertools.accumulate (where the callable argument's return type constrains a type variable) but breaks pandas-stubs and other complex overloaded methods. The fix should be narrowed: check return type first only when the got (left) callable is concrete (no inference variables) AND the want (right) callable has inference variables. When BOTH sides have complexity (e.g., overloaded pandas methods where got also involves generic resolution), keep the original params-first order. Alternatively, a more targeted approach: check return type first only when the got callable's return type is fully concrete (no type variables or unknowns), which would handle the str.format bound-method case from the test while preserving the original behavior for pandas overloads where the got callable's return type involves unresolved generics. Pseudo-code: let got_has_vars = Type::Callable(Box::new(l.clone())).may_contain_quantified_var(); if want_has_vars && !got_has_vars { /* check return type first, then params */ } else { /* check params first, then return type (original order) */ }

Files: pyrefly/lib/solver/subset.rs
Confidence: medium
Affected projects: freqtrade, pandas-stubs, spark, steam.py, hydpy, jax, scipy-stubs
Fixes: unsupported-operation, bad-return, bad-argument-type, no-matching-overload, assert-type, missing-attribute
The itertools.accumulate case works because str.format is a concrete bound method with a known return type str, so checking return type first correctly constrains the type variable. But for pandas overloads, the got side (e.g., Series.sum()) also involves generic type resolution, and checking return type first causes wrong overload selection. By only applying the new order when got is concrete, we fix the accumulate case while preserving correct pandas behavior. This should eliminate ~97 pyrefly-only errors across freqtrade (30), pandas-stubs (33), spark (31), steam.py (1), hydpy (1), and jax (1).

2. Alternative approach: In the is_subset method for Type::Callable in pyrefly/lib/solver/subset.rs, instead of a simple if/else on want_has_vars, try both orders and pick the one that succeeds. First try params-then-return (the original order) on a cloned solver state; if that trial fails, fall back to return-then-params. This is more expensive but would handle both the accumulate fix and preserve pandas behavior. Pseudo-code: if want_has_vars { let mut trial = self.clone(); let params_first_ok = trial.is_subset_params(&l.params, &u.params).is_ok() && trial.is_subset_eq(&l.ret, &u.ret).is_ok(); if params_first_ok { self.is_subset_params(&l.params, &u.params)?; self.is_subset_eq(&l.ret, &u.ret) } else { self.is_subset_eq(&l.ret, &u.ret)?; self.is_subset_params(&l.params, &u.params) } }

Files: pyrefly/lib/solver/subset.rs
Confidence: low
Affected projects: freqtrade, pandas-stubs, spark, steam.py, hydpy, jax, scipy-stubs
Fixes: unsupported-operation, bad-return, bad-argument-type, no-matching-overload, assert-type, missing-attribute
This fallback approach would preserve the original behavior for all cases where params-first works (including pandas) while enabling the return-first path only when needed (like itertools.accumulate). However, it may have performance implications and could mask real errors. It's a safer but less principled fix.


Was this helpful? React with 👍 or 👎

Classification by primer-classifier (9 heuristic, 20 LLM)
