28 changes: 25 additions & 3 deletions lan-mouse-gtk/src/lib.rs
@@ -98,12 +98,34 @@ fn setup_menu(app: &adw::Application) {
 }
 
 fn build_ui(app: &Application) {
+    // If a window already exists (re-activation), just present it
+    if let Some(window) = app.windows().first() {
+        window.present();
+        return;
+    }
+
     log::debug!("connecting to lan-mouse-socket");
-    let (mut frontend_rx, frontend_tx) = match lan_mouse_ipc::connect() {
+    let (mut frontend_rx, frontend_tx) = match lan_mouse_ipc::try_connect() {
         Ok(conn) => conn,
         Err(e) => {
-            log::error!("{e}");
-            process::exit(1);
+            log::warn!("could not connect to daemon ({e}), spawning a new one");
+            if let Err(spawn_err) = process::Command::new(
+                env::current_exe().expect("could not determine executable path"),
+            )
+            .args(env::args().skip(1))
+            .arg("daemon")
+            .spawn()
+            {
+                log::error!("failed to spawn daemon: {spawn_err}");
+                process::exit(1);
+            }
Comment on lines +108 to +121

Copilot AI Mar 29, 2026

When try_connect() fails, the GTK frontend spawns a daemon process but discards the Child handle, so it cannot be terminated on GUI exit. In the race where main.rs decides not to spawn (because the daemon was running) but the daemon dies before GTK connects, this path will leave a newly spawned daemon running after the GUI closes. Consider centralizing daemon spawning/lifecycle management in main.rs (or returning the child handle/ownership information) so that only daemons spawned by the GUI are cleaned up.

Contributor Author

This is by design. The fallback path only triggers in a rare race condition (daemon dies between main.rs probe and GTK connect). The spawned daemon runs independently, same as if the user had run lan-mouse daemon separately — it manages its own lifecycle and cleans up its socket on drop. Centralizing the lifecycle would require threading a Child handle back through the GTK crate boundary, adding complexity for an edge case where the intended behavior (daemon keeps running) is already correct.
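
For context, the socket cleanup the reply relies on can be as simple as a guard that removes the socket file when the daemon shuts down. A minimal sketch of that pattern, assuming a guard type that owns the socket path (the name and shape are illustrative, not lan-mouse's actual code):

```rust
use std::path::PathBuf;

// Illustrative guard (not the actual lan-mouse code): owns the socket path
// and removes the file when the daemon's listener is dropped.
struct SocketGuard {
    path: PathBuf,
}

impl Drop for SocketGuard {
    fn drop(&mut self) {
        // Best-effort removal; a crash that skips Drop leaves a stale socket
        // file behind, which is why liveness probes connect to the socket
        // instead of just checking that the file exists (see the
        // is_service_running thread below).
        let _ = std::fs::remove_file(&self.path);
    }
}
```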

+            match lan_mouse_ipc::connect() {
+                Ok(conn) => conn,
+                Err(e) => {
+                    log::error!("{e}");
+                    process::exit(1);
+                }
+            }
         }
     };
     log::debug!("connected to lan-mouse-socket");
 
32 changes: 32 additions & 0 deletions lan-mouse-ipc/src/connect.rs
@@ -45,8 +45,40 @@ impl FrontendRequestWriter {
     }
 }
 
+/// Try to connect to the service once without retrying.
+pub fn try_connect() -> Result<(FrontendEventReader, FrontendRequestWriter), ConnectionError> {
+    #[cfg(unix)]
+    let rx = {
+        let socket_path = crate::default_socket_path()?;
+        UnixStream::connect(&socket_path)?
+    };
+    #[cfg(windows)]
+    let rx = TcpStream::connect("127.0.0.1:5252")?;
+    make_connection(rx)
+}
+
 pub fn connect() -> Result<(FrontendEventReader, FrontendRequestWriter), ConnectionError> {
     let rx = wait_for_service()?;
+    make_connection(rx)
+}
+
+#[cfg(unix)]
+fn make_connection(
+    rx: UnixStream,
+) -> Result<(FrontendEventReader, FrontendRequestWriter), ConnectionError> {
+    let tx = rx.try_clone()?;
+    let buf_reader = BufReader::new(rx);
+    let lines = buf_reader.lines();
+    let line_writer = LineWriter::new(tx);
+    let reader = FrontendEventReader { lines };
+    let writer = FrontendRequestWriter { line_writer };
+    Ok((reader, writer))
+}
+
+#[cfg(windows)]
+fn make_connection(
+    rx: TcpStream,
+) -> Result<(FrontendEventReader, FrontendRequestWriter), ConnectionError> {
     let tx = rx.try_clone()?;
     let buf_reader = BufReader::new(rx);
     let lines = buf_reader.lines();
 
17 changes: 16 additions & 1 deletion lan-mouse-ipc/src/lib.rs
@@ -20,7 +20,7 @@ mod connect;
 mod connect_async;
 mod listen;
 
-pub use connect::{FrontendEventReader, FrontendRequestWriter, connect};
+pub use connect::{FrontendEventReader, FrontendRequestWriter, connect, try_connect};
 pub use connect_async::{AsyncFrontendEventReader, AsyncFrontendRequestWriter, connect_async};
 pub use listen::AsyncFrontendListener;
 
@@ -299,3 +299,18 @@ pub fn default_socket_path() -> Result<PathBuf, SocketPathError> {
         .join("Caches")
         .join(LAN_MOUSE_SOCKET_NAME))
 }
+
+/// Check if a lan-mouse service is already running by probing the IPC socket.
+#[cfg(unix)]
+pub fn is_service_running() -> bool {
+    let Ok(socket_path) = default_socket_path() else {
+        return false;
+    };
+    std::os::unix::net::UnixStream::connect(socket_path).is_ok()

Copilot AI Mar 29, 2026

is_service_running() establishes a real IPC connection to probe liveness. In AsyncFrontendListener::poll_next any accepted connection triggers a FrontendRequest::Sync, so this probe will cause the daemon to broadcast a full sync to existing frontends every time it’s called (even though the probe immediately disconnects). Consider using a side-effect-free single-instance mechanism (e.g., lockfile/named mutex) or changing the listener/protocol so that a liveness probe doesn’t register as a frontend connection that triggers Sync.

Suggested change
-    std::os::unix::net::UnixStream::connect(socket_path).is_ok()
+    // Avoid establishing a real IPC connection, which would be interpreted as a
+    // frontend connection by the daemon and trigger a sync broadcast. Checking
+    // for the existence of the socket path is a side-effect-free approximation.
+    std::fs::metadata(socket_path).is_ok()

Contributor Author

The observation about the spurious Sync is valid, but the suggested fix (std::fs::metadata) would break single-instance detection — a stale socket file left behind by a crashed daemon would give a false positive, and we'd skip spawning when no daemon is actually running. The existing listener code in listen.rs already handles this case correctly by attempting a real connection to distinguish stale sockets from live daemons.

The extra Sync is harmless — it's one redundant state broadcast at startup. Eliminating the double-connect would require passing the connection across the crate boundary (main.rs → GTK), which is a larger refactor for marginal gain.

Owner

Yes, this is exactly why I am explicitly making a connection! A sync request also shouldn't cause any issues.
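
To make the stale-socket argument concrete, here is a minimal sketch of the distinction (illustrative, not the actual listen.rs code). A socket file left behind by a crashed daemon still exists on disk, so a metadata check reports a daemon that isn't there, while a connect attempt only succeeds if something is actually listening:

```rust
use std::os::unix::net::UnixStream;
use std::path::Path;

// Probing by connecting: a stale socket file fails to connect (typically
// ECONNREFUSED), so only a live daemon counts as "running".
fn daemon_is_live(socket_path: &Path) -> bool {
    UnixStream::connect(socket_path).is_ok()
}

// Probing by file existence: returns true for a stale socket left behind
// by a crashed daemon, i.e. a false positive.
fn socket_file_exists(socket_path: &Path) -> bool {
    std::fs::metadata(socket_path).is_ok()
}
```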

+}
+
+/// Check if a lan-mouse service is already running by probing the IPC socket.
+#[cfg(windows)]
+pub fn is_service_running() -> bool {
+    std::net::TcpStream::connect("127.0.0.1:5252").is_ok()
+}
 
26 changes: 17 additions & 9 deletions src/main.rs
@@ -73,18 +73,26 @@ fn run() -> Result<(), LanMouseError> {
     // run a frontend
     #[cfg(feature = "gtk")]
     {
-        let mut service = start_service()?;
+        // Only spawn a new daemon if one isn't already running
+        let mut service = if lan_mouse_ipc::is_service_running() {
+            log::info!("daemon already running, connecting to existing instance");
+            None
+        } else {
+            Some(start_service()?)
+        };
         let res = lan_mouse_gtk::run();
Comment on lines +76 to 83

Copilot AI Mar 29, 2026

This path probes the daemon via is_service_running() and then the GTK frontend attempts try_connect() again, resulting in two connections per launch when a daemon is already running. Besides the extra work, the probe connection can trigger a spurious FrontendRequest::Sync broadcast from the daemon (see listener behavior). Consider relying on a single try_connect() attempt in one place to decide whether to spawn, rather than probing in main.rs and reconnecting in GTK.

Contributor Author

Valid observation — there are two connections: the is_service_running() probe in main.rs and try_connect() in GTK. Consolidating would require either passing the IPC connection from main.rs into the GTK crate (different API boundaries) or removing the main.rs probe and letting GTK handle all spawning (losing the clean lifecycle management in main.rs). The cost is one extra harmless Sync broadcast at startup, which doesn't seem worth the refactor.
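
For illustration, the single-probe consolidation the review describes might look roughly like the sketch below. `spawn_daemon` and `run_frontend_with` are hypothetical stand-ins that do not exist in lan-mouse, and as the reply notes, the hard part is that the live connection would have to cross the main.rs/GTK crate boundary:

```rust
use lan_mouse_ipc::{ConnectionError, FrontendEventReader, FrontendRequestWriter};

// Hypothetical helpers, named here for illustration only.
fn spawn_daemon() -> std::io::Result<std::process::Child> {
    todo!("spawn `lan-mouse daemon`")
}

fn run_frontend_with(_reader: FrontendEventReader, _writer: FrontendRequestWriter) {
    todo!("run the GTK frontend on an already-established connection")
}

// A single try_connect() both decides whether to spawn and yields the
// connection, so the daemon never sees a probe-only frontend connection.
fn launch() -> Result<(), ConnectionError> {
    let (reader, writer) = match lan_mouse_ipc::try_connect() {
        Ok(conn) => conn,
        Err(_) => {
            let _daemon = spawn_daemon().expect("failed to spawn daemon");
            // connect() waits until the freshly spawned daemon is up.
            lan_mouse_ipc::connect()?
        }
    };
    run_frontend_with(reader, writer);
    Ok(())
}
```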

-        #[cfg(unix)]
-        {
-            // on unix we give the service a chance to terminate gracefully
-            let pid = service.id() as libc::pid_t;
-            unsafe {
-                libc::kill(pid, libc::SIGINT);
+        if let Some(ref mut service) = service {
+            #[cfg(unix)]
+            {
+                // on unix we give the service a chance to terminate gracefully
+                let pid = service.id() as libc::pid_t;
+                unsafe {
+                    libc::kill(pid, libc::SIGINT);
+                }
+                service.wait()?;
             }
-            service.wait()?;
+            service.kill()?;

Copilot AI Mar 29, 2026

On Unix, this sends SIGINT and then calls wait(), which reaps the child. Calling service.kill()? afterwards is unnecessary and can fail with an OS error (process already exited/reaped), causing run() to error during normal shutdown. Consider only killing if the process is still running (e.g., via try_wait()), or ignore/handle the expected "already exited" error.

Suggested change
-            service.kill()?;
+            #[cfg(not(unix))]
+            {
+                service.kill()?;
+            }

Contributor Author

This is pre-existing behavior — the original code had the same wait() then kill() sequence. This PR just moved it inside the if let Some guard. Fixing the redundant kill is a good cleanup but out of scope for this change.
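
If someone does pick up that cleanup later, a guarded version could look like this sketch (assumed code, not part of this PR):

```rust
use std::process::Child;

// Only force-kill the daemon if it hasn't already exited, avoiding the
// "already reaped" error after the SIGINT + wait() path on unix.
fn stop_service(service: &mut Child) -> std::io::Result<()> {
    if service.try_wait()?.is_none() {
        // Still running: terminate it and reap the exit status.
        service.kill()?;
        service.wait()?;
    }
    Ok(())
}
```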

         }
-        service.kill()?;
         res?;
     }
     #[cfg(not(feature = "gtk"))]