Random Forest Regressor for Crop Nutrient Prediction

Overview

This model predicts the nutrient needs (Nitrogen, Phosphorus, Potassium) for various crops based on features like crop type, target yield, field size, and soil properties. It is trained using a Random Forest Regressor.

Training Data

The model was trained on a custom dataset containing the following features:

  • Crop Name
  • Target Yield
  • Field Size
  • pH (water)
  • Organic Carbon
  • Total Nitrogen
  • Phosphorus (M3)
  • Potassium (exch.)
  • Soil moisture

The target variables are:

  • Nitrogen (N) Need
  • Phosphorus (P2O5) Need
  • Potassium (K2O) Need
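
As a point of reference, the sketch below shows how a dataset with these columns could be split into a feature frame and a target frame. The file name crop_nutrients.csv and the exact target column names are assumptions for illustration; the training data itself is not published with this card.

import pandas as pd

# Illustrative only: load a dataset with the columns listed above
df = pd.read_csv("crop_nutrients.csv")

feature_cols = [
    "Crop Name", "Target Yield", "Field Size", "pH (water)", "Organic Carbon",
    "Total Nitrogen", "Phosphorus (M3)", "Potassium (exch.)", "Soil moisture",
]
target_cols = ["Nitrogen (N) Need", "Phosphorus (P2O5) Need", "Potassium (K2O) Need"]

X = df[feature_cols]
y = df[target_cols]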

Model Training

The model was trained using a Random Forest Regressor. Training followed these steps (a code sketch appears after the list):

  1. Data preprocessing: handling missing values, scaling numerical features, and one-hot encoding categorical features.
  2. Splitting the dataset into training and testing sets.
  3. Training the Random Forest model on the training set.
  4. Evaluating the model on the test set.
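
A minimal sketch of steps 1 to 3 with scikit-learn, assuming the X and y frames from the Training Data section; the split ratio, random seed, and forest hyperparameters shown here are illustrative, not the exact values used.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Step 1: one-hot encode the crop name; the published model expects
# already-encoded columns (see the inference example below). Scaling of
# numerical features would also happen here, using a scaler fitted on
# the training split only.
X_encoded = pd.get_dummies(X, columns=["Crop Name"])

# Step 2: split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X_encoded, y, test_size=0.2, random_state=42
)

# Step 3: a Random Forest handles the three targets (N, P2O5, K2O)
# as a single multi-output regression
model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)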

Evaluation Metrics

The model was evaluated using the following metrics:

  • Mean Squared Error (MSE)
  • Mean Absolute Error (MAE)
  • R-squared (R2) Score
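
As a sketch, assuming the model, X_test, and y_test objects from the training example above, these metrics can be computed with scikit-learn (for the three targets they are averaged across outputs by default):

from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_pred = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
print("R2:", r2_score(y_test, y_pred))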

How to Use

Input Format

The model expects input data in JSON format with the following fields:

  • "Crop Name": String
  • "Target Yield": Numeric
  • "Field Size": Numeric
  • "pH (water)": Numeric
  • "Organic Carbon": Numeric
  • "Total Nitrogen": Numeric
  • "Phosphorus (M3)": Numeric
  • "Potassium (exch.)": Numeric
  • "Soil moisture": Numeric

Preprocessing Steps

  1. Load your input data.
  2. Ensure all required fields are present and in the expected format.
  3. Handle any missing values if necessary.
  4. Scale numerical features based on the training data.
  5. One-hot encode categorical features (if applicable).
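
A minimal sketch of steps 1 to 3, assuming the input record is stored in a JSON file; the file name input.json and the fill value for missing numeric fields are illustrative. Steps 4 and 5 (scaling and one-hot encoding) are shown in the inference code below.

import json
import pandas as pd

# Step 1: load your input data ("input.json" is a hypothetical file
# containing a record like the example in the Input Format section)
with open("input.json") as f:
    record = json.load(f)

REQUIRED_FIELDS = [
    "Crop Name", "Target Yield", "Field Size", "pH (water)", "Organic Carbon",
    "Total Nitrogen", "Phosphorus (M3)", "Potassium (exch.)", "Soil moisture",
]

# Step 2: check that every required field is present
missing = [f for f in REQUIRED_FIELDS if f not in record]
if missing:
    raise ValueError(f"Missing required fields: {missing}")

# Step 3: handle missing values, e.g. by filling numeric gaps with 0
input_df = pd.DataFrame([record])
numeric_cols = [c for c in REQUIRED_FIELDS if c != "Crop Name"]
input_df[numeric_cols] = input_df[numeric_cols].fillna(0)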

Inference Procedure

Example Code:

from huggingface_hub import hf_hub_download
import joblib
import pandas as pd

# Download the model file from the Hugging Face hub
model_path = hf_hub_download(repo_id="DNgigi/NPKRecommendation", filename="ModelV2.joblib")

# Load the trained model
model = joblib.load(model_path)

# Example input data
new_data = {
    'Crop Name': 'coffee',
    'Target Yield': 1200.0,
    'Field Size': 1.0,
    'pH (water)': 5.76,
    'Organic Carbon': 12.9,
    'Total Nitrogen': 1.1,
    'Phosphorus (M3)': 1.2,
    'Potassium (exch.)': 1.7,
    'Soil moisture': 11.4
}

# Preprocess the input data
input_df = pd.DataFrame([new_data])

# One-hot encode the crop name, as was done during training
input_df = pd.get_dummies(input_df, columns=['Crop Name'])

# Align the columns with the feature set used during training.
# If the model was fitted on a pandas DataFrame (scikit-learn >= 1.0),
# feature_names_in_ lists the training columns; dummy columns for crops
# not present in this input are filled with 0. If numerical features were
# scaled during training, apply the same fitted scaler here first.
input_df = input_df.reindex(columns=model.feature_names_in_, fill_value=0)

# Make predictions; the output columns are assumed to follow the
# target order N, P2O5, K2O
predictions = model.predict(input_df)

print("Predicted nutrient needs (N, P2O5, K2O):")
print(predictions)