Conversation
Summary of Changes

Hello @safaricd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the telemetry system by introducing the tracking of model_path and model_version.
Code Review
The pull request introduces telemetry tracking for model_path and model_version, which is a valuable addition. The changes involve updating version numbers, exporting new functions, and modifying the telemetry decorators and event structures to accommodate these new tracking parameters. The refactoring of _get_context_var to accept a variable name is a good improvement for reusability. However, there are a couple of areas in src/tabpfn_common_utils/telemetry/core/decorators.py that could be improved for robustness and error handling.
brendan-priorlabs
left a comment
One problem with model path is that it can literally be a local machine path and thus contain PII. How about we take the end of the path and check it against a whitelist of our paths. See here: https://github.com/PriorLabs/tabpfn-server/blob/890b2e2f6796636772ec21915085041ec5b303df/packages/vertex_ai/src/vertex_ai/definitions.py#L132
@brendan-priorlabs that's exactly what we're doing: we're just taking the name from the path, which should correspond to the name of the checkpoint, without taking the full path, which I agree might contain PII. https://github.com/PriorLabs/tabpfn_common_utils/pull/45/files#diff-07f11f8e4b5926d243b55396530654190feb5b73399d0d1581a03a74bce59036R59
I like Brendan's suggestion of checking against the list of known checkpoints. People might also put PII in the file name.
@oscarkey I see where this is coming from - please check out the code in the
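The allowlist idea discussed above could be sketched as follows: report only the final path component, and only when it matches a known checkpoint name, so arbitrary local paths (and file names containing PII) never enter telemetry. The checkpoint names below are placeholders; the real list lives in the server repo linked by Brendan:

```python
from pathlib import Path

# Placeholder allowlist of known checkpoint file names (assumed examples,
# not the actual names from the tabpfn-server definitions module).
KNOWN_CHECKPOINTS = {
    "tabpfn-v2-classifier.ckpt",
    "tabpfn-v2-regressor.ckpt",
}


def sanitize_model_path(model_path: str):
    """Return the checkpoint file name if it is allowlisted, else None.

    Only the basename is inspected, so local directory structure
    (a likely source of PII) is never reported.
    """
    name = Path(model_path).name
    return name if name in KNOWN_CHECKPOINTS else None
```

With this check, a path like `/home/jane/models/my-custom.ckpt` would be reported as `None` rather than leaking the user's file name.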
brendan-priorlabs
left a comment
LGTM! Just a couple of cleanup comments.
Co-authored-by: brendan-priorlabs <brendan@priorlabs.ai>
Change Description
`model_version` and `model_path`

We track these using the same mechanics as `extension_name`: contextvars. What's the flow? On `TabPFNClassifier` `fit`, we load the model - this is where the code for that model (presumably running in its own thread) loads the weights from HF.
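The contextvars mechanic described above can be sketched as follows: the fit path sets the model metadata in context variables, and the telemetry layer reads them later in the same context. All names here are illustrative, not the actual package API:

```python
import contextvars

# Context variables holding model metadata for the current context
# (hypothetical names, mirroring the extension_name mechanic).
model_path_var = contextvars.ContextVar("model_path", default=None)
model_version_var = contextvars.ContextVar("model_version", default=None)


def record_model_metadata(path: str, version: str) -> None:
    # Called when fit loads the model: stash metadata for telemetry.
    model_path_var.set(path)
    model_version_var.set(version)


def build_telemetry_event(event_name: str) -> dict:
    # Called by the telemetry decorator: attach whatever is in context.
    return {
        "event": event_name,
        "model_path": model_path_var.get(),
        "model_version": model_version_var.get(),
    }
```

Because `contextvars` values are scoped per context, metadata recorded in one model's thread does not leak into events built in another.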