ON-5590: Sub-sector impact mapping and clearer prioritization explanations #2569
mircorudolph wants to merge 2 commits into
Conversation
… be more readable for intended users
piotrnowakowski
left a comment
Two review comments from the branch review.
```diff
 total_reduction_amount = sum(
-    city_emissions_by_gpc_ref[gpc_ref] * reduction_multiplier
-    for gpc_ref in matched_city_gpc_refs
+    city_emissions_by_subsector_key[subsector_key] * reduction_multiplier
```
This makes Impact scores leave the documented [0,1] range when valid AFOLU negative emissions are present. Because AFOLU negatives are allowed and counted as matchable, a matched V.* subsector contributes a negative total_reduction_amount, while the same negative value is also included in total_city_emissions. For example, with {"I.1": 100, "V.2": -50}, an AFOLU-targeting action can score -0.64, and an action targeting I.1 can score 1.48. We should normalize the denominator and reduction contribution so removals do not produce negative or >1 Impact scores before this ranking change ships.
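A minimal sketch of one possible normalization, using illustrative names (`impact_share`, plain dicts) rather than the branch's actual helpers: gross (absolute) emissions in the denominator, removals treated as zero reduction potential, and a final clamp to [0, 1].

```python
def impact_share(city_emissions: dict[str, float],
                 matched_keys: list[str],
                 multiplier: float) -> float:
    """Reduction share that stays in [0, 1] even with AFOLU negatives.

    Illustrative sketch only: names and the exact normalization policy
    (gross denominator, removals floored at zero) are assumptions, not
    the branch's implementation.
    """
    # Denominator uses gross emissions so a negative V.* entry cannot
    # shrink the total and inflate other subsectors' shares.
    gross_total = sum(abs(v) for v in city_emissions.values())
    if gross_total == 0:
        return 0.0
    # Removals (negative entries) contribute no reduction potential.
    reduction = sum(
        max(city_emissions[k], 0.0) * multiplier for k in matched_keys
    )
    # Clamp defensively so the score honors the documented [0, 1] range.
    return min(max(reduction / gross_total, 0.0), 1.0)

# With the example from the review comment:
emissions = {"I.1": 100.0, "V.2": -50.0}
impact_share(emissions, ["V.2"], 0.64)  # 0.0 instead of -0.64
impact_share(emissions, ["I.1"], 0.74)  # ~0.493 instead of 1.48
```

Whether removals should count as zero potential or as their absolute value is a product decision; the sketch only shows that either choice must be applied to the numerator and denominator consistently.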
```
- Do not mention raw numeric scores, equations, decimals, or hidden scoring logic.
- Do not invent city preferences, legal constraints, policy support, or implementation facts that are not present.
- Do not infer extra benefits or implementation facts from the action ID or action theme alone.
- Avoid meta phrases like `the evidence shows`, `the payload indicates`, `mixed profile`, or `grounded in`.
```
The example output later in this file still conflicts with this new instruction because it uses `the evidence shows` and `the payload indicates`. Since examples tend to steer model style strongly, we should rewrite that sample explanation in the plain-language style the prompt requests, so it does not teach the model the exact phrases we are trying to suppress.
piotrnowakowski
left a comment
One inline note from Codex.
```python
strengths.append("Fits the city's preferred implementation timeframe.")

policy_signals_count = int(alignment_evidence.get("policy_signals_count", 0))
if policy_signals_count > 0:
```
This turns any non-zero policy_signals_count into "Has supportive policy context...", but the alignment block derives actual support from policy_support_score, not from raw signal count. That means we can generate a positive explanation even when policy support is 0.0, which contradicts the scored evidence. I'd gate this on policy_component_score > 0 (or another explicit support threshold) instead of the presence of policy rows alone.
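A minimal sketch of the suggested gating. The key `policy_component_score` is an assumed name for illustration; the branch may expose the scored support under a different field.

```python
def policy_strength_lines(alignment_evidence: dict) -> list[str]:
    """Emit the policy-context strength only when the scored evidence
    actually shows support, not merely because policy rows exist.

    `policy_component_score` is a hypothetical field name; substitute
    whatever the alignment block actually derives from policy_support_score.
    """
    strengths: list[str] = []
    policy_component_score = float(
        alignment_evidence.get("policy_component_score", 0.0)
    )
    # Gate on the scored support, not on raw signal count, so an action
    # with policy rows but 0.0 support gets no positive explanation.
    if policy_component_score > 0:
        strengths.append("Has supportive policy context for this action.")
    return strengths

# Policy rows present but zero scored support: no strength is emitted.
policy_strength_lines({"policy_signals_count": 3,
                       "policy_component_score": 0.0})  # []
```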
```
</input>

<output>
Return exactly the fields requested in the user prompt.
```
Can't we specify the fields here? Or do they change on each request?
piotrnowakowski
left a comment
One doc note from Codex.
```diff
 - `impact_block_score = (0.80 × reduction_share_of_city_emissions) + (0.20 × timeline_score)`
-- `reduction_share_of_city_emissions` is computed from matched action `gpc_reference_number` keys only.
+- `reduction_share_of_city_emissions` is computed from matched action `sector.subsector` keys.
```
This hunk updates the active Impact join to sector.subsector, but the output description a few lines above still says "top contributing activities and multipliers". The branch now emits subsector-level evidence (subsector_contributors), not activity-level contributors, so that bullet looks stale relative to the current payload.
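The documented formula is small enough to sketch directly. The 0.80/0.20 weights come from the doc bullet in the hunk above; the helper name and signature are illustrative, not the branch's `impact.py` API.

```python
# Weights as documented in the Impact doc bullet (assumed current).
W_REDUCTION, W_TIMELINE = 0.80, 0.20

def impact_block_score(reduction_share: float, timeline_score: float) -> float:
    """impact_block_score = 0.80 * reduction_share + 0.20 * timeline_score.

    Both inputs are expected in [0, 1]; with the normalization fix
    discussed earlier in this review, the output then stays in [0, 1].
    """
    return W_REDUCTION * reduction_share + W_TIMELINE * timeline_score

impact_block_score(0.5, 1.0)  # ~0.6
```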
piotrnowakowski
left a comment
One doc note from Codex.
```diff
 CC->>API: POST /v1/prioritize PrioritizerApiRequest (meta + requestData.cityDataList)
 Note over API: FastAPI validates request body (Pydantic)
-API->>Orch: run_prioritization(locode, city_emissions_by_gpc_ref, clients, per_city_options...)
+API->>Orch: run_prioritization(locode, city_emissions_context, clients, per_city_options...)
```
This parameter rename is correct, but the same sequence block still documents CityCatalyst calling POST /v1/prioritize with PrioritizerApiRequest. The app code in this repo still posts the legacy /prioritizer/v1/start_prioritization and /prioritizer/v1/start_prioritization_bulk payloads from app/src/backend/hiap/HiapApiService.ts, so if this diagram is meant to describe current end-to-end behavior it is ahead of the actual integration. If this is target-state documentation, it would help to label it that way explicitly.
Summary

This work aligns MEED prioritization impact scoring with sub-sector level inventory mapping, tightens emissions validation, and refreshes ranking explanations so they read more clearly for city users. It also removes committed `logs_temp` request artifacts, updates mocks and docs, and adds or extends tests (including E2E mock coverage).

Changes

- Impact scoring and sub-sector mapping (`impact.py`, `subsector_mapping.py`), model/API/orchestrator updates, explanation prompts and `explanations` service, config and README
- Removes committed `logs_temp` JSON payloads from the tree

Commits (2)