Comet.ml allows you to automatically track your machine learning code, experiments, hyperparameters, and results to achieve reproducibility, transparency, and more efficient iteration cycles.
We built it after watching many data scientists grapple with disjointed scripts, notebooks (both Jupyter and paper ones), and sprawling file structures just to remember what they had run previously.
Comet.ml has native support for popular machine learning frameworks, including TensorFlow, Keras, PyTorch, and MXNet.
We’ve created a Comet.ml cheat sheet for users who want a handy one-page reference, or for those who need an extra push to get started.
Download it here or check out the highlights below!
Experiment Method Highlights
The core class of Comet.ml is the Experiment. An Experiment automatically logs script output (stdout/stderr), code, and command-line arguments for any script, and for supported libraries it also logs hyperparameters, metrics, and model configuration.
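Getting an Experiment running takes only a couple of lines. A minimal sketch, where the project name is a placeholder and the API key can instead be supplied via the COMET_API_KEY environment variable or a config file:

```python
# Import comet_ml before your ML framework (TensorFlow, PyTorch, etc.)
# so that automatic logging can hook in.
from comet_ml import Experiment

# "my-first-project" is a placeholder project name; the API key can
# also come from the COMET_API_KEY environment variable.
experiment = Experiment(api_key="YOUR_API_KEY",
                        project_name="my-first-project")
```

From this point on, the script's stdout/stderr, source code, and command-line arguments are captured without any further calls.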
Note: these methods are also available for existing experiments.
set_name: sets a name for your experiment. The name appears in the Comet experiment table and in the project visualization legends, making it easy to identify specific versions of your model.
log_dataset_hash: computes a hash of whatever you pass it, whether a filename or the entire contents of a file. Comparing the resulting hash against another run's hash is a quick way to confirm that your training runs are using the same training data.
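To see what a dataset hash buys you, here is a stdlib-only sketch of the idea (Comet computes its own hash internally; dataset_hash here is a hypothetical stand-in):

```python
import hashlib

def dataset_hash(data: bytes) -> str:
    """Return a short digest of the raw dataset bytes.

    Illustrative only: Comet's log_dataset_hash does its own hashing;
    this just shows why comparing digests tells you whether two runs
    saw the same training data.
    """
    return hashlib.sha256(data).hexdigest()[:16]

train_a = b"feature1,feature2,label\n1,2,0\n3,4,1\n"
train_b = b"feature1,feature2,label\n1,2,0\n3,4,1\n"
train_c = b"feature1,feature2,label\n9,9,0\n"

# Identical data produces identical digests; any change is visible.
assert dataset_hash(train_a) == dataset_hash(train_b)
assert dataset_hash(train_a) != dataset_hash(train_c)
```

Two experiments reporting the same hash are, by construction, training on byte-identical data.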
log_asset: uploads artifacts from your training run, whether they are model weights or log files.
log_html: reports any HTML blob to the HTML tab on Comet.ml, where it is rendered in an iframe. For example, you can convert .ipynb files to HTML (e.g. with jupyter nbconvert) in order to log notebooks.
log_metric: logs a general metric (accuracy, F1 score, etc.) as well as custom metrics. Comet automatically logs certain metrics for supported frameworks such as TensorFlow and Keras, so you do not need to report those manually.
log_parameter: reports hyperparameters such as learning rate, batch size, and optimizer choice.
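Put together, the methods above look like this inside a training script. A sketch, assuming an Experiment has already been created as experiment; the names, file paths, and train_one_epoch helper are illustrative placeholders:

```python
experiment.set_name("baseline-lr-0.001")

# Hyperparameters for this run.
experiment.log_parameter("learning_rate", 0.001)
experiment.log_parameter("batch_size", 64)

for epoch in range(10):
    acc = train_one_epoch()  # hypothetical training step returning accuracy
    experiment.log_metric("accuracy", acc, step=epoch)

experiment.log_asset("weights.h5")         # upload an artifact from the run
experiment.log_html("<h1>Run notes</h1>")  # rendered in the HTML tab
```

Passing step=epoch to log_metric is what lets Comet plot the metric over the course of training rather than as a single point.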
Once you start logging experiments, you can conduct meta-analyses across multiple experiments with project-level visualizations, code diffs, and more. This list only scratches the surface of the available methods and customization options, but it is a great way to get started logging experiments to Comet.ml!
You can see examples of these methods in action for libraries like PyTorch and Keras in our Comet Examples repo.
Comet.ml helps data scientists and machine learning engineers automatically track their datasets, code, experiments, and results, creating efficiency, visibility, and reproducibility.
Learn more & see a demo at https://www.comet.ml/
Enjoyed this article? Check out these relevant articles:
- Comet.ml Release Notes — updated daily with new features and fixes!
- Implementing ResNet with MXNet Gluon and Comet.ml for image classification
- Monitoring machine learning model results live from Jupyter notebooks