addHintText(panel, gbc, "When enabled, DevoxxGenie sends Ollama num_ctx from discovered model metadata; when disabled, Ollama keeps its own runtime default.");
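The hint above describes the setting's behavior: when enabled, the plugin forwards a `num_ctx` value (taken from discovered model metadata) to Ollama; when disabled, no override is sent and Ollama falls back to its runtime default. Ollama accepts this override via the `options` object of its `/api/generate` request body. A minimal sketch of that request construction, with illustrative model and context-size values that are not taken from the plugin:

```java
// Hedged sketch: building an Ollama /api/generate request body that
// optionally carries a num_ctx override in the "options" object.
// Method and class names here are hypothetical, not from DevoxxGenie.
public class OllamaNumCtxSketch {

    static String buildRequestBody(String model, String prompt, Integer numCtx) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"model\":\"").append(model).append("\",")
          .append("\"prompt\":\"").append(prompt).append("\"");
        if (numCtx != null) {
            // Setting enabled: forward the discovered context window size.
            sb.append(",\"options\":{\"num_ctx\":").append(numCtx).append("}");
        }
        // Setting disabled (numCtx == null): omit "options" entirely so
        // Ollama keeps its own runtime default.
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildRequestBody("llama3", "Hello", 8192));
        // {"model":"llama3","prompt":"Hello","options":{"num_ctx":8192}}
        System.out.println(buildRequestBody("llama3", "Hello", null));
        // {"model":"llama3","prompt":"Hello"}
    }
}
```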
src/main/resources/META-INF/plugin.xml (10 additions, 0 deletions)
@@ -45,6 +45,16 @@
 ]]></description>

 <change-notes><![CDATA[
+<h2>v1.4.0</h2>
+<UL>
+<LI>Feat: Add Exo distributed AI cluster as new local LLM provider — run large AI models across multiple Apple Silicon devices via Thunderbolt (#1002)</LI>
+<LI>Feat: Exo auto-discovery of downloaded models from /state API</LI>
+<LI>Feat: Automatic Exo instance creation with placement preview across cluster</LI>
+<LI>Feat: Background progress bar during Exo model loading with cancellation support</LI>
+<LI>Feat: Auto-recovery when Exo instances disconnect or get recycled</LI>
+<LI>Feat: Collapsible cluster status panel showing nodes, memory, GPU usage, temperature, and active instance status</LI>