[Task]: Add logger to write to csv files. #2055
Comments
Please teach me how to modify the code.
Hey @ashwinvaidya17, is this issue being worked on?
Hi @ChaitanyaYeole02, no one is working on this at the moment. Thanks for your interest.
Hey @samet-akcay, what kind of metrics could be written in this CSV file?
Hey @samet-akcay, any suggestions?
@ChaitanyaYeole02 I tried your code locally, and what it writes looks fine to me. Feel free to create a PR. MRE:

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.loggers.csv import CSVLogger
from anomalib.models import Padim


def main():
    # Train Padim on MVTec with the proposed CSV logger attached to the engine.
    logger = CSVLogger(save_dir="csv_test")
    engine = Engine(logger=logger)
    model = Padim()
    datamodule = MVTec()
    engine.train(datamodule=datamodule, model=model)


if __name__ == "__main__":
    main()
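(Assuming the subclass behaves like Lightning's built-in CSVLogger, which the later comments suggest, running this script should leave a metrics.csv with one row per logged step under a versioned subdirectory of csv_test/.)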
@ashwinvaidya17 I will raise the PR, but could you also tell me how I can set this up and test it locally?
Currently, the subclassed logger provides the same features as the base logger. Do you plan to save separate test and train metrics? Epochs 0 and 1 will be a bit confusing. I tested this on Padim, so ideally we should get train metrics, validation metrics, and test metrics separately, rather than having everything show up under an epoch. Otherwise this does not provide any additional features on top of the base class.
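As a rough illustration of the "separate train/validation/test metrics" idea, here is a minimal sketch of a subclass that tags every row with the stage it came from. It assumes Lightning-style stage prefixes on the metric names (train_/val_/test_), and StageAwareCSVLogger is a hypothetical name, not anything in anomalib:

from __future__ import annotations

from lightning.pytorch.loggers import CSVLogger as LightningCSVLogger


class StageAwareCSVLogger(LightningCSVLogger):
    """Hypothetical sketch: record which stage each metrics row came from."""

    def log_metrics(self, metrics: dict[str, float], step: int | None = None) -> None:
        # Metric names logged by Lightning modules typically carry a stage
        # prefix (e.g. "train_loss", "val_image_AUROC"); recover it from the keys.
        stage = "unknown"
        for candidate in ("train", "val", "test"):
            if any(key.startswith(candidate) for key in metrics):
                stage = candidate
                break
        # Write the stage as an extra CSV column alongside the metric values.
        super().log_metrics({**metrics, "stage": stage}, step)

With something like this, rows from training, validation, and testing could be filtered by the stage column instead of being disambiguated by epoch alone.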
Yeah, currently it does not provide any features on top of the base class. Do you have any suggestions for additional features? I was able to replicate the MRE. Should we add more metrics?
@ChaitanyaYeole02 thanks for creating the PR, but apologies, I realized that my issue is ill-defined. The original use case came from the benchmarking script, which prints all the models, dataset categories, and metrics in a single table. I was hoping we could get something similar for general training, but it looks like the available information is not sufficient. With a closer look at the base class, it seems like we can only access the … But it looks like it is not as straightforward as I thought.
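For the benchmarking-style use case, a post-hoc aggregation over the per-run CSV files might already get close to a single table. A sketch, assuming a made-up results/<model>/<category>/metrics.csv layout (the paths and columns are illustrative, not anomalib's actual output structure):

from pathlib import Path

import pandas as pd

# Gather the final row of every per-run metrics.csv into one summary table.
rows = []
for csv_path in Path("results").glob("*/*/metrics.csv"):
    run = pd.read_csv(csv_path)
    final = run.iloc[-1].to_dict()  # last logged row, i.e. the final metrics
    final["model"] = csv_path.parts[1]
    final["category"] = csv_path.parts[2]
    rows.append(final)

summary = pd.DataFrame(rows)
print(summary.to_string(index=False))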
Ohh, interesting. I will take a look at this. How did you get these benchmark results? Is there an MRE?
Thank you, let me take a look at this, and if I am stuck somewhere I will let you know.
What is the motivation for this task?
It is handy to collect logged metrics into a csv file.
Describe the solution you'd like
Add a logger to src/anomalib/loggers that writes metrics to a csv file. This could also be used to refactor the _print_tabular_results method in the benchmark job. (A rough sketch of such a logger is given after this section.)
Additional context
No response
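For reference, here is a rough from-scratch sketch of what a csv-writing logger could look like, built directly on Lightning's abstract Logger interface and csv.DictWriter. SimpleCSVLogger is a hypothetical name, and this is a starting point under stated assumptions, not the code that ended up in anomalib:

from __future__ import annotations

import csv
from pathlib import Path

from lightning.pytorch.loggers import Logger
from lightning.pytorch.utilities import rank_zero_only


class SimpleCSVLogger(Logger):
    """Rough sketch: append every logged metrics dict as a row of a csv file."""

    def __init__(self, save_dir: str, filename: str = "metrics.csv") -> None:
        super().__init__()
        self._save_dir = Path(save_dir)
        self._save_dir.mkdir(parents=True, exist_ok=True)
        self._file = self._save_dir / filename
        self._fieldnames: list[str] | None = None

    @property
    def name(self) -> str:
        return "simple_csv"

    @property
    def version(self) -> str:
        return "0"

    @rank_zero_only
    def log_hyperparams(self, params) -> None:
        # Hyperparameters are ignored in this sketch; a real logger would persist them.
        pass

    @rank_zero_only
    def log_metrics(self, metrics: dict[str, float], step: int | None = None) -> None:
        row = {"step": step, **metrics}
        # Fix the column set on the first write; extra keys in later rows are dropped.
        first_write = self._fieldnames is None
        if first_write:
            self._fieldnames = list(row)
        with self._file.open("a", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=self._fieldnames, extrasaction="ignore")
            if first_write:
                writer.writeheader()
            writer.writerow(row)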