Comet.ml cheat sheet: supercharge your machine learning experiment management

Comet.ml allows you to automatically track your machine learning code, experiments, hyperparameters, and results, giving you reproducibility, transparency, and faster iteration cycles. We built it after watching many data scientists grapple with disjointed scripts, notebooks (both Jupyter and paper ones), and complex file structures just to remember what they had run previously. Comet.ml has…
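
The teaser mentions automatic tracking of code, hyperparameters, and results. As a rough illustration (not taken from the post itself), here is a minimal Comet.ml logging sketch; the API key and project name below are placeholders:

```python
# Minimal Comet.ml tracking sketch. The api_key and project_name values are
# placeholders, not real credentials.
from comet_ml import Experiment

experiment = Experiment(
    api_key="YOUR_API_KEY",       # placeholder: your Comet.ml API key
    project_name="demo-project",  # hypothetical project name
)

# Log hyperparameters so each run is reproducible.
experiment.log_parameters({"learning_rate": 0.01, "batch_size": 64})

# Log a result metric; Comet.ml stores it alongside the run's code and environment.
experiment.log_metric("accuracy", 0.92)

experiment.end()
```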


Monitoring machine learning model results live from Jupyter notebooks

Tracking and saving your model results just got that much easier with Comet.ml. For many data scientists, the Jupyter notebook has become the tool of choice. Its ability to combine software code, computational output, explanatory text, and multimedia into a single document has helped countless users easily create tutorials, iterate more quickly, and showcase their work externally…
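
As a sketch of what live monitoring from a notebook can look like (the training function below is a stand-in, and the key and project name are placeholders), each log_metric call streams a value to the Comet.ml dashboard while the loop runs:

```python
# Hypothetical notebook cell: stream metrics to Comet.ml during training.
import random
from comet_ml import Experiment

def train_one_epoch(epoch):
    """Stand-in for a real training step; returns a fake, decreasing loss."""
    return 1.0 / (epoch + 1) + random.random() * 0.05

experiment = Experiment(api_key="YOUR_API_KEY", project_name="notebook-demo")

for epoch in range(10):
    loss = train_one_epoch(epoch)
    # Each call updates the experiment's charts, so you can watch training
    # progress from the Comet.ml UI while the notebook is still running.
    experiment.log_metric("train_loss", loss, step=epoch)

experiment.end()
```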


Building reliable machine learning models with cross-validation

Cross-validation is a technique used to measure and evaluate the performance of machine learning models. During training, we create a number of partitions of the training set and train and test on different subsets of those partitions. Cross-validation is frequently used to train, measure, and finally select a machine learning model for a given dataset because it helps assess…
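
For a concrete picture of the rotate-and-hold-out idea, here is a short scikit-learn sketch (the model and dataset are illustrative, not from the post): 5-fold cross-validation trains on four partitions and tests on the held-out fifth, five times over.

```python
# 5-fold cross-validation with scikit-learn on an illustrative dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each fold: train on 4/5 of the data, evaluate on the remaining 1/5.
scores = cross_val_score(model, X, y, cv=5)
print("Accuracy per fold:", scores)
print(f"Mean accuracy: {scores.mean():.3f}")
```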


Part I: Conducting Exploratory Data Analysis (EDA) for the Kaggle Home Credit Default Competition

Follow along as the Comet.ml team competes to win the Kaggle Home Credit Default Competition; this is the first in a series of posts on our modeling process! In this first post, we conduct some preliminary exploratory data analysis (EDA) on the datasets provided by Home Credit for their credit default risk Kaggle competition…
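
As a sketch of the kind of preliminary EDA the post describes (assuming the competition's application_train.csv has been downloaded locally, and that it contains the usual TARGET label column):

```python
# First-pass EDA on the Home Credit application data; the file path is a
# placeholder for wherever the Kaggle data was downloaded.
import pandas as pd

df = pd.read_csv("application_train.csv")

print(df.shape)  # rows x columns

# Class balance of the default label (assumed to be the TARGET column).
print(df["TARGET"].value_counts(normalize=True))

# Columns with the most missing values -- a typical first EDA check.
missing = df.isnull().mean().sort_values(ascending=False)
print(missing.head(10))
```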
