ci: LLM code review #1429
Conversation
📊 Package size report
[llama3]
Bugs:
Performance:
Bad Practices:
Recommendations:
[mistral]
[mistral + new prompt]
LGTM for the rest of the code.
[Gemma4 + thinking] Thinking...
Let's analyze the workflow step by step. This appears functionally correct for simple text. The most glaring bug related to execution flow is in Step 3. Focusing on the most critical bug (high confidence): the background process management in Step 3 is a classic race condition / lifecycle issue. The workflow starts Ollama's server in the background and then proceeds immediately. Revisiting the goal constraints: I must find a real, high-confidence issue, so I will report the service lifecycle issue. Self-correction/refinement: should I combine the service start and pulling? I will report the service management bug.
diff --git a/.github/workflows/ai-review.yml b/.github/workflows/ai-review.yml
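The race Gemma4 describes above (starting `ollama serve` in the background and immediately pulling a model) is usually fixed by polling the service until it answers. A minimal sketch of that pattern, assuming Ollama's default endpoint; the port and model name are illustrative, not taken from the actual workflow:

```shell
# Retry a readiness probe until it succeeds or attempts run out.
wait_until_ready() {
  local attempts="$1"
  shift
  for _ in $(seq 1 "$attempts"); do
    "$@" && return 0  # probe succeeded; service is up
    sleep 1
  done
  return 1  # service never became ready
}

# In the workflow this would look roughly like:
#   ollama serve &
#   wait_until_ready 30 curl -sf http://localhost:11434/
#   ollama pull <model>
```

This keeps the server start and the model pull as separate steps while removing the race between them.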
[Gemma4]
[ChatGPT] What caused the regression?

```tsx
- if (behavior !== "height") {
-   frame = { x: left, y: top, width, height: bottom };
- }
```

Because the component no longer has this guard, the interpolation below runs against a zero-height frame:

```tsx
const progress = interpolate(bottom, {
  inputRange: [0, frame.value.height], // <- frame.value.height is 0
  outputRange: [0, 1],
  extrapolateRight: Extrapolate.CLAMP,
});
```

Proposed fix: keep the original guard, or make the interpolation robust to a zero height.

```tsx
// KeyboardAvoidingView.tsx
onLayoutWorklet((x, y, width, height, bottom, top, left) => {
  // The original guard - keep it!
  if (behavior !== "height") {
    frame = { x: left, y: top, width, height: bottom };
  }
  // ... rest of the code unchanged
});
```

If you want to keep the more concise version, add a fallback to avoid zero-height interpolation:

```tsx
// KeyboardAvoidingView.tsx
onLayoutWorklet((x, y, width, height, bottom, top, left) => {
  // always update frame, but avoid zero height
  if (bottom > 0) {
    frame = { x: left, y: top, width, height: bottom };
  }
  // Guard against zero height in interpolations
  const h = frame.value.height || 1; // avoid 0
  const progress = interpolate(bottom, {
    inputRange: [0, h],
    outputRange: [0, 1],
    extrapolateRight: Extrapolate.CLAMP,
  });
  // ... rest of the code unchanged
});
```

Either solution stops the regression by ensuring that the interpolation never receives a zero-height input range.
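To see why the zero-height input range is fatal, here is a standalone sketch in plain TypeScript (not Reanimated's actual `interpolate`, just the same linear-map math): the mapping divides by `inputMax - inputMin`, which degenerates when the frame height is 0.

```typescript
// Clamped linear interpolation, as a plain function for illustration.
function interpolateClamped(
  value: number,
  [inMin, inMax]: [number, number],
  [outMin, outMax]: [number, number],
): number {
  const range = inMax - inMin;
  if (range === 0) {
    return NaN; // effectively what happens when frame.value.height is 0
  }
  // Clamp the normalized position to [0, 1] (Extrapolate.CLAMP behavior).
  const t = Math.min(Math.max((value - inMin) / range, 0), 1);
  return outMin + t * (outMax - outMin);
}

const h = 0;        // the regressed case: frame.value.height === 0
const safeH = h || 1; // the fallback from the proposed fix

console.log(interpolateClamped(120, [0, h], [0, 1]));     // NaN
console.log(interpolateClamped(120, [0, safeH], [0, 1])); // 1
```

With the `|| 1` fallback the output is at least well-defined, which is why the second fix above also resolves the regression.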
[DeepSeek] Here are the key issues identified:

```diff
diff --git a/src/components/KeyboardAvoidingView/index.tsx b/src/components/KeyboardAvoidingView/index.tsx
index ec2ee2961c..c58fba5831 100644
--- a/src/components/KeyboardAvoidingView/index.tsx
+++ b/src/components/KeyboardAvoidingView/index.tsx
@@ -120,7 +120,7 @@ const KeyboardAvoidingView = forwardRef<
   (layout: LayoutRectangle) => {
     "worklet";
-    if (keyboard.isClosed.value || initialFrame.value === null) {
+    if (keyboard.isClosed.value || initialFrame.value === null || behavior !== "height") {
       // eslint-disable-next-line react-compiler/react-compiler
       initialFrame.value = layout;
     }
```
Force-pushed `d8c5994` to `259bb6b`.
[Gemma4 + 500LoC context] LGTM. Here is a review of the changes. 🟢 Review Summary: The changes are minor and seem to involve fixing a redundant condition.
[DeepSeek] The code updates aim to enhance the handling of window insets and edge-to-edge display behavior in a React Native keyboard controller module. Here's a concise summary:
These changes improve responsiveness and layout stability when handling keyboard input across devices.
Force-pushed `259bb6b` to `305828a`.
LGTM
## 📜 Description
Improving AI reviews introduced in #1429

## 💡 Motivation and Context
Tweaked context creation, prompt, and the code for sending comments - see the changelog section for more details.

## 📢 Changelog
### CI
- added concurrency;
- re-worked prompt;
- moved repeated constants into variables;
- don't duplicate content if a file has been added in this PR;
- don't duplicate content if a file has been removed in this PR;
- ignore example/FabricExample code;
- attach only a single comment;

## 🤔 How Has This Been Tested?
Tested via this PR.

## 📸 Screenshots (if appropriate):
<img width="886" height="401" alt="image" src="https://github.com/user-attachments/assets/2ebc20f7-c437-4551-92b0-230daee8eccf" />

## 📝 Checklist
- [x] CI successfully passed
- [x] I added new mocks and corresponding unit-tests if library API was changed
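The "added concurrency" changelog item typically maps to a workflow-level `concurrency` block so that a new push cancels a still-running review of the same branch. A sketch of what that usually looks like; the group name here is an assumption, not the actual value in `ai-review.yml`:

```yaml
# Sketch only: cancel an in-flight AI review when the branch is pushed again.
concurrency:
  group: ai-review-${{ github.ref }}  # group name is illustrative
  cancel-in-progress: true
```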
## 📜 Description
Run Ollama with the DeepSeek:14b model to review changes by LLM.

## 💡 Motivation and Context
This is a basic PoC implementation.

Future improvements:

This baseline doesn't show really good review output but acts more like a "second pair of eyes". Let's enable it for now and see how it behaves. What I like is that I can literally customize many aspects of this flow (context construction, prompts, model selection, etc.) and don't pay for a 3rd-party LLM integration.

Let's see how far it can go.
## 📢 Changelog
### CI

## 🤔 How Has This Been Tested?
Tested via this PR. DeepSeek showed the best results so far.

## 📸 Screenshots (if appropriate):

## 📝 Checklist