Track proc macro server status in analysis progress #270

Open · wants to merge 2 commits into main
Conversation

Arcticae (Member) commented Feb 7, 2025

Stack:
⬆️ #271

@Arcticae Arcticae requested a review from a team as a code owner February 7, 2025 10:44
@Arcticae Arcticae requested review from mkaput and Draggu and removed request for a team February 7, 2025 10:44
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};

use crate::config::Config;
use crate::lsp::ext::{ServerStatus, ServerStatusEvent, ServerStatusParams};
use crate::server::client::Notifier;

#[derive(Clone, PartialEq)]
pub enum ProcMacroServerStatus {
Contributor:

Crashed is missing

Member Author:

I don't really care whether it's crashed, to be honest; I do care about the startup status.

Member Author:

I did it anyway, for consistency's sake.

@@ -148,6 +148,7 @@ impl ProcMacroClientController {
}

db.set_proc_macro_client_status(ClientStatus::Ready(client));
self.proc_macro_server_tracker.set_server_status(ProcMacroServerStatus::Ready)
Contributor:

Make a function that calls both db.set_proc_macro_client_status and proc_macro_server_tracker.set_server_status; otherwise one of the two will be easy to forget when changing this code.

Also, you can create ProcMacroServerStatus from &ClientStatus
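A hedged sketch of this suggestion: derive the tracker-side status from the db-side ClientStatus via a `From` impl, so the mapping lives in one place. The variant names below (Pending for startup, Ready, Crashed) are assumptions drawn from this thread, not the actual code, and the real `ClientStatus::Ready` carries the client.

```rust
#![allow(dead_code)]

// Simplified stand-in for the db-side status; the real Ready variant
// holds the proc macro client.
enum ClientStatus {
    Pending,
    Ready,
    Crashed,
}

// Simplified stand-in for the tracker-side status.
#[derive(Clone, Debug, PartialEq)]
enum ProcMacroServerStatus {
    Pending,
    Ready,
    Crashed,
}

// The suggested conversion: one canonical mapping from ClientStatus,
// so callers never translate the two statuses by hand.
impl From<&ClientStatus> for ProcMacroServerStatus {
    fn from(status: &ClientStatus) -> Self {
        match status {
            ClientStatus::Pending => ProcMacroServerStatus::Pending,
            ClientStatus::Ready => ProcMacroServerStatus::Ready,
            ClientStatus::Crashed => ProcMacroServerStatus::Crashed,
        }
    }
}

fn main() {
    // The tracker status follows the client status mechanically.
    assert_eq!(
        ProcMacroServerStatus::from(&ClientStatus::Ready),
        ProcMacroServerStatus::Ready
    );
    assert_eq!(
        ProcMacroServerStatus::from(&ClientStatus::Crashed),
        ProcMacroServerStatus::Crashed
    );
    println!("ok");
}
```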

Member:

What about this?

db.set_proc_macro_client_status(ClientStatus::Ready(client));
self.proc_macro_server_tracker.pull_status_from_db(&db);

Contributor:

But you still have to remember to call both.
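A hedged sketch of where this objection leads: one method owns both writes, so a caller cannot update the db status and forget the tracker (or vice versa). AnalysisDatabase, ProcMacroServerTracker, and their fields here are minimal illustrative stand-ins, not the real types.

```rust
#![allow(dead_code)]

#[derive(Clone, Debug, PartialEq)]
enum ClientStatus { Pending, Ready, Crashed }

#[derive(Clone, Debug, PartialEq)]
enum ProcMacroServerStatus { Pending, Ready, Crashed }

impl From<&ClientStatus> for ProcMacroServerStatus {
    fn from(status: &ClientStatus) -> Self {
        match status {
            ClientStatus::Pending => Self::Pending,
            ClientStatus::Ready => Self::Ready,
            ClientStatus::Crashed => Self::Crashed,
        }
    }
}

// Minimal stand-ins for the two stores that must stay in sync.
struct AnalysisDatabase { client_status: ClientStatus }
struct ProcMacroServerTracker { server_status: ProcMacroServerStatus }

struct ProcMacroClientController { tracker: ProcMacroServerTracker }

impl ProcMacroClientController {
    // The single entry point: both stores are updated together, so
    // "remember to call both" becomes "remember to call this".
    fn set_proc_macro_client_status(&mut self, db: &mut AnalysisDatabase, status: ClientStatus) {
        self.tracker.server_status = ProcMacroServerStatus::from(&status);
        db.client_status = status;
    }
}

fn main() {
    let mut db = AnalysisDatabase { client_status: ClientStatus::Pending };
    let mut controller = ProcMacroClientController {
        tracker: ProcMacroServerTracker { server_status: ProcMacroServerStatus::Pending },
    };
    // One call updates both the database and the tracker.
    controller.set_proc_macro_client_status(&mut db, ClientStatus::Ready);
    assert_eq!(db.client_status, ClientStatus::Ready);
    assert_eq!(controller.tracker.server_status, ProcMacroServerStatus::Ready);
    println!("ok");
}
```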

Member:

That's true. I can't think of another way to set up a "watcher" behaviour for this.

Member Author:

Done

@Arcticae Arcticae requested review from Draggu and mkaput February 7, 2025 15:20
proc_macro_server_tracker: ProcMacroServerTracker,
}

impl From<&ClientStatus> for ProcMacroServerStatus {
Contributor:

Shouldn't we rename ClientStatus to ServerStatus at this point? cc @Draggu

@@ -96,6 +107,12 @@ impl ProcMacroClientController {
}
}

fn set_procmacro_client_status(&self, db: &mut AnalysisDatabase, client_status: ClientStatus) {
Contributor:

Suggested change:
- fn set_procmacro_client_status(&self, db: &mut AnalysisDatabase, client_status: ClientStatus) {
+ fn set_proc_macro_client_status(&self, db: &mut AnalysisDatabase, client_status: ClientStatus) {

[nit]

4 participants