
[QUESTION] Where does train.py take the dataset from when we run r.run("train")? #35

serdar-akyol opened this issue Sep 25, 2024 · 0 comments

As the Jupyter notebook shows, I ran r.run("split"), and I assume the data processing started and returned the new dataframe that I want to use for training. However, when I run r.run("train"), I get the error mlflow.exceptions.MlflowException: Error has occurred during training of AutoML model using FLAML: AssertionError('Input data must not be empty.'). Below are my YAML files and custom step code.
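
For reference, the notebook cells are essentially:

from mlflow.recipes import Recipe

r = Recipe(profile="local")
r.run("split")   # completes and produces the processed dataframe
r.run("train")   # raises the MlflowException above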

local.yaml

#
# FIXME::REQUIRED: set an MLflow experiment name to track recipe executions and artifacts.
#
experiment:
  name: "test_food"
  tracking_uri: "sqlite:///metadata/mlflow/mlruns.db"
  artifact_location: "./metadata/mlflow/mlartifacts"
model_registry:
#
# FIXME::OPTIONAL: Set the registry server URI. This property is especially useful if you have a
#                  registry server that’s different from the tracking server.
#   uri: "sqlite:///metadata/mlflow/registry.db"
# FIXME::REQUIRED: Specifies the name of the Registered Model to use when registering a trained
#                  model to the MLflow Model Registry.
  model_name: "random-forest"

INGEST_CONFIG:
  #
  # FIXME::REQUIRED: Specify the format of the training and evaluation dataset. Natively supported
  #                  formats are: parquet, spark_sql, delta.
  using: "csv"
  # FIXME::OPTIONAL: Specify the training and evaluation data location.
  location: "./data/data.csv"
  loader_method: "load_file_as_dataframe" 

# INGEST_SCORING_CONFIG:
#   For different options please read: https://github.com/mlflow/recipes-classification-template#batch-scoring
#   FIXME::OPTIONAL: Specify the format of the scoring dataset. Natively supported formats are:
#                    parquet, spark_sql, delta.
#   using: ""
#   FIXME::OPTIONAL: Specify the scoring data location.
#   location: ""

# PREDICT_OUTPUT_CONFIG:
#   For different options please read: https://github.com/mlflow/recipes-classification-template#predict-step
#   FIXME::OPTIONAL: Specify the format of the scored dataset. Natively supported formats are:
#                    parquet, delta, table.
#   using: ""
#   FIXME::OPTIONAL: Specify the output location of the batch scoring predict step.
#   location: ""
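
For context, the loader_method above points at a function in steps/ingest.py. As far as I can tell from the classification template README, the custom-format loader looks roughly like this (a sketch of that pattern, not necessarily my exact file):

from pandas import DataFrame
import pandas as pd

def load_file_as_dataframe(file_path: str, file_format: str) -> DataFrame:
    """
    Load the dataset file at `file_path` as a pandas DataFrame.
    """
    if file_format == "csv":
        return pd.read_csv(file_path)
    raise NotImplementedError(f"Unsupported file format: {file_format}")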

recipe.yaml

# `recipe.yaml` is the main configuration file for an MLflow Recipe.
# Required recipe parameters should be defined in this file with either concrete values or
# variables such as {{ INGEST_DATA_LOCATION }}.
#
# Variables must be dereferenced in a profile YAML file, located under `profiles/`.
# See `profiles/local.yaml` for example usage. One may switch among profiles quickly by
# providing a profile name such as `local` in the Recipe object constructor:
# `r = Recipe(profile="local")`
#
# NOTE: All "FIXME::REQUIRED" fields in recipe.yaml and profiles/*.yaml must be set correctly
#       to adapt this template to a specific classification problem. To find all required fields,
#       under the root directory of this recipe, type on a unix-like command line:
#       $> grep "# FIXME::REQUIRED:" recipe.yaml profiles/*.yaml
#
# NOTE: YAML does not support tabs for indentation. Please use spaces and ensure that all YAML
#       files are properly formatted.

recipe: "classification/v1"
# FIXME::REQUIRED: Specifies the target column name for model training and evaluation.
target_col: "target"
# FIXME::REQUIRED: Specifies the value of `target_col` that is considered the positive class.
positive_class: "1"
# FIXME::REQUIRED: Sets the primary metric to use to evaluate model performance. This primary
#                  metric is used to select best performing models in MLflow UI as well as in
#                  train and evaluation step.
#                  Built-in primary metrics are: recall_score, precision_score, f1_score, accuracy_score.
primary_metric: "f1_score"
steps:
  # Specifies the dataset to use for model development
  ingest: {{INGEST_CONFIG}}
  split:
    using: split_ratios
    #
    # FIXME::OPTIONAL: Adjust the train/validation/test split ratios below.
    #
    split_ratios: [0.75, 0.125, 0.125]
    #
    #  FIXME::OPTIONAL: Specifies the method to use to "post-process" the split datasets. Note that
    #                   arbitrary transformations should go into the transform step.
    post_split_filter_method: create_dataset_filter
  transform:
    using: "custom"
    #
    #  FIXME::OPTIONAL: Specifies the method that defines an sklearn-compatible transformer, which
    #                   applies input feature transformation during model training and inference.
    transformer_method: transformer_fn
  train:
    #
    # FIXME::REQUIRED: Specifies the method to use for training. Options are "automl/flaml" for
    #                  AutoML training or "custom" for user-defined estimators.
    using: "automl/flaml"
    time_budget_secs: 3000
    predict_scores_for_all_classes: True
    predict_prefix: "predicted_"
  evaluate:
    #
    # FIXME::OPTIONAL: Sets performance thresholds that a trained model must meet in order to be
    #                  eligible for registration to the MLflow Model Registry.
    #
    validation_criteria:
      - metric: f1_score
        threshold: 0.9
  register:
    # Indicates whether or not a model that fails to meet performance thresholds should still
    # be registered to the MLflow Model Registry
    allow_non_validated_model: false
  # FIXME::OPTIONAL: Specify the dataset to use for batch scoring. All params serve the same function
  #                  as in `data`
  # ingest_scoring: {{INGEST_SCORING_CONFIG}}
  # predict:
  #   output: {{PREDICT_OUTPUT_CONFIG}}
  #   model_uri: "models/model.pkl"
  #   result_type: "double"
  #   save_mode: "default"
# custom_metrics:
#   FIXME::OPTIONAL: Defines custom performance metrics to compute during model development.
#     - name: ""
#       function: get_custom_metrics
#       greater_is_better: False

split.py

"""
This module defines the following routines used by the 'split' step:

- ``create_dataset_filter``: Defines customizable logic for filtering the training
  datasets produced by the data splitting procedure. Note that arbitrary transformations
  should go into the transform step.
"""

from pandas import DataFrame, Series
import pandas as pd
import numpy as np
import ast
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm

def create_dataset_filter(dataset: DataFrame) -> Series:
    """
    Mark rows of the split datasets to be additionally filtered. This function will be called on
    the training datasets.

    :param dataset: The {train,validation,test} dataset produced by the data splitting procedure.
    :return: A Series indicating whether each row should be filtered
    """
    # Step 1: Process the dataset
    processed_data = start_preprocessing(dataset)

    # Step 2: Check for NA values and log a warning if found
    print(processed_data.isna().any())
    if processed_data.empty:
        print("Warning: Processed data is empty.")
        return Series(False, index=dataset.index)  # Return False for all rows if processed data is empty

    # Step 3: Create a filtering Series based on your conditions
    # Example: Keep rows that are not null in a specific column (e.g., 'target')
    filter_condition = processed_data['target'].notna()  # Adjust this based on your target column or filtering criteria

    # Optional: Log the number of rows being filtered
    print(f"Filtered rows: {filter_condition.sum()} out of {len(dataset)}")

    return filter_condition

def fill_null_values_with_average_values(df: pd.DataFrame) -> pd.DataFrame:
    """
    This method identifies null values in specific nutritional columns 
    and fills them with the average of their respective categories.
    """
    # Specify the columns to check for null values
    columns_to_fix_nulls = [
        'nutritional_saturated_fat_100g',
        'nutritional_carbohydrates_100g',
        'nutritional_fat_100g',
        'nutritional_sugars_100g',
        'nutritional_proteins_100g',
        'nutritional_fiber_100g',
        'nutritional_energy_100g',
        'nutritional_salt_100g'
    ]
    
    for col in tqdm(columns_to_fix_nulls):
        category_means = df.groupby('category')[col].mean().fillna(0)
        df[col] = df.apply(
            lambda row: category_means[row['category']] if pd.isnull(row[col]) else row[col],
            axis=1
        )
        
    return df

def extract_number_of_ingredients_from_string(datum) -> int:
    return len(ast.literal_eval(datum))

def convert_string_to_list_size(df: DataFrame) -> DataFrame:
    convert = lambda x: extract_number_of_ingredients_from_string(x)
    df['ingredients_ordered'] = df['ingredients_ordered'].apply(convert)
    return df

def encode_category(df: DataFrame) -> DataFrame:
    le = LabelEncoder()
    df['category'] = le.fit_transform(df['category'])
    return df

def start_preprocessing(df: DataFrame) -> DataFrame:
    df_no_null = fill_null_values_with_average_values(df)
    df_ingridients_list = convert_string_to_list_size(df_no_null)
    df_encoded = encode_category(df_ingridients_list)
    return df_encoded.drop(columns=['id', 'category', 'is_liquid', 'nutritional_saturated_fat_100g', 'nutritional_fat_100g', 'nutritional_fiber_100g', 'nutritional_salt_100g'])
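
If I understand the template correctly (my assumption), the Series returned by create_dataset_filter is a boolean mask where True means "keep this row", so a no-op filter for comparison would be:

def keep_all_rows(dataset: DataFrame) -> Series:
    # Keep every row; useful only for ruling the post-split filter out as the cause.
    return Series(True, index=dataset.index)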

train.py

"""
This module defines the following routines used by the 'train' step:

- ``estimator_fn``: Defines the customizable estimator type and parameters that are used
  during training to produce a model recipe.
"""
from typing import Dict, Any
from sklearn.ensemble import RandomForestClassifier

def estimator_fn(estimator_params: Dict[str, Any] = None) -> Any:
    """
    Returns an *unfitted* estimator that defines ``fit()`` and ``predict()`` methods.
    The estimator's input and output signatures should be compatible with scikit-learn
    estimators.
    """
    #
    # FIXME::OPTIONAL: return a scikit-learn-compatible classification estimator with fine-tuned
    #                  hyperparameters.

    if estimator_params is None:
        estimator_params = {
            'n_estimators': 100,
            'max_depth': None,
            'class_weight': 'balanced',
            'random_state': 42,
        }
    return RandomForestClassifier(**estimator_params)
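
If I read the template docs correctly (again my assumption), estimator_fn is only consulted when the train step is configured with using: "custom"; with using: "automl/flaml" the estimator is chosen by FLAML. Calling the function directly at least confirms it returns an unfitted estimator:

est = estimator_fn()
print(est)  # RandomForestClassifier(class_weight='balanced', random_state=42)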

Additionally, when I tested my dataset with the r.get_artifact("training_data").isnull().any() line, I see that there are no null values. Can anyone help me with this?
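
Concretely, the check looks like this (adding a shape print to see whether the split/filter emptied the data):

train_df = r.get_artifact("training_data")
print(train_df.shape)
print(train_df.isnull().any())  # all False in my case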
