Electric Vehicle Market Analysis Tutorial

Introduction and Purpose

This tutorial guides you through a Python-based analysis of electric vehicle (EV) market size using the Electric_Vehicle_Population_Data.csv dataset. Its primary goals are to:

  • Analyze historical EV registration trends from 1997 to 2024.
  • Forecast future EV registrations (2024–2028) using machine learning models.
  • Evaluate model performance and identify key factors influencing EV adoption.
  • Visualize market insights, such as top models, geographic distribution, and feature impacts.

The script leverages data cleaning, feature engineering, and three machine learning models (Linear Regression, Random Forest, and Gradient Boosting) to achieve these goals. This tutorial explains each step, justifies the methodology, and provides detailed explanations of complex functions, including the full code with annotations.

Why this analysis? Understanding EV market growth is crucial for policymakers, manufacturers, and researchers to plan infrastructure, predict demand, and assess environmental impacts. The script provides a robust framework for such analysis, adaptable to other datasets or features.

Analysis Steps and Justifications

The analysis follows a structured pipeline, with each step justified based on data science best practices and the dataset's characteristics. Below, we integrate the code, explain key functions, and justify the approach.

Step 1: Importing Libraries

The script uses standard libraries for data manipulation (pandas, numpy), visualization (matplotlib, seaborn), and machine learning (scikit-learn). These libraries are chosen for their efficiency, community support, and compatibility with the analysis tasks.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split, learning_curve
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import LabelEncoder
import os

Step 2: Setting Up Environment

The script detects whether it's running in a Jupyter notebook or script environment to handle plot display appropriately. This ensures compatibility across platforms.

# Set up matplotlib to display figures inline if in a notebook environment
try:
    get_ipython()  # defined only inside IPython/Jupyter; raises NameError in a plain script
    from IPython.display import display
    get_ipython().run_line_magic('matplotlib', 'inline')  # the %matplotlib inline magic, invoked programmatically
    print("Jupyter environment detected. Figures will display inline.")
except NameError:
    print("Script environment detected. Figures will be saved to files.")

# Create the output directory used by all savefig() calls below
os.makedirs('output', exist_ok=True)

Function Explanation: get_ipython()

This call succeeds only inside an IPython/Jupyter session and raises NameError in a plain script. On success, the script imports display and enables inline plotting; on failure, it assumes a non-interactive environment and saves plots to files instead. Note that the literal %matplotlib inline magic is valid syntax only in notebook cells — in a .py file it must be invoked as get_ipython().run_line_magic('matplotlib', 'inline'). This check keeps the tutorial portable across notebooks, IDEs, and plain scripts.
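The same detection trick can be wrapped in a small helper. The in_notebook() name below is hypothetical (not part of the original script); this is just a minimal sketch of the pattern:

```python
def in_notebook():
    """Return True under IPython/Jupyter, False in a plain Python script."""
    try:
        get_ipython()  # only defined inside IPython kernels; NameError elsewhere
        return True
    except NameError:
        return False

print(in_notebook())  # False when run as a plain script
```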

Step 3: Loading the Dataset

The dataset is loaded with error handling for file paths, ensuring robustness across different environments (e.g., local or Kaggle). Displaying basic info helps verify data integrity.

# Load the dataset
try:
    df = pd.read_csv('Electric_Vehicle_Population_Data.csv')
except FileNotFoundError:
    try:
        df = pd.read_csv('/kaggle/input/electric-vehicle-population-data/Electric_Vehicle_Population_Data.csv')
    except FileNotFoundError:
        print("Error: Dataset file not found. Please ensure the CSV file is in the correct location.")
        raise SystemExit(1)  # exit() is meant for interactive use; SystemExit works in any context

# Display basic info about dataset
print("Dataset loaded successfully.")
print(f"Shape of dataset: {df.shape}")
print("First few rows:")
print(df.head())

Step 4: Data Cleaning

Cleaning ensures data quality by handling missing values, converting data types, and filtering outliers (e.g., future years). This step is critical for reliable modeling.

# Data Cleaning
print("\nCleaning data...")
df['Model Year'] = pd.to_numeric(df['Model Year'], errors='coerce')
df = df.dropna(subset=['Model Year', 'County', 'Make', 'Model', 'Electric Range'])
df['Model Year'] = df['Model Year'].astype(int)
df = df[df['Model Year'] <= 2024]  # Filter out future years
print(f"Shape after cleaning: {df.shape}")

Function Explanation: pd.to_numeric()

Converts Model Year to numeric, with errors='coerce' turning invalid entries into NaN. This handles potential non-numeric values (e.g., strings) in the column, ensuring consistency for subsequent filtering and aggregation.
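To see the coercion in isolation, here is a small illustrative snippet; the Series values are made up for the demo and are not from the CSV:

```python
import pandas as pd

# A dirty 'Model Year'-style column with one non-numeric entry
s = pd.Series(['2020', '2021', 'unknown', '2023'])

# errors='coerce' turns values that cannot be parsed into NaN
years = pd.to_numeric(s, errors='coerce')
print(years.isna().sum())                   # 1 — only 'unknown' failed to parse
print(years.dropna().astype(int).tolist())  # [2020, 2021, 2023]
```

Rows with NaN can then be dropped with dropna(), exactly as the script does before casting to int.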

Step 5: Feature Engineering

Features are engineered to capture factors influencing EV registrations. Electric Range is used directly, while County_Population_Density is synthetic (due to dataset limitations) to simulate demographic influences. Encoding Make allows categorical data in models.

# Feature Engineering
print("\nPerforming feature engineering...")
np.random.seed(42)
df['County_Population_Density'] = np.random.uniform(100, 5000, len(df))  # Synthetic: 100-5000 people/sq mile
le = LabelEncoder()
df['Make_Encoded'] = le.fit_transform(df['Make'])

Function Explanation: LabelEncoder().fit_transform()

Converts categorical Make values (e.g., "Tesla", "Nissan") into numeric labels (e.g., 0, 1). This is essential for machine learning models that require numeric inputs. The fit_transform method learns the mapping and applies it in one step.
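A quick illustration with a few made-up makes. Note that LabelEncoder assigns codes in sorted order of the unique values:

```python
from sklearn.preprocessing import LabelEncoder

makes = ['TESLA', 'NISSAN', 'TESLA', 'CHEVROLET']  # toy data, not from the CSV
le = LabelEncoder()
codes = le.fit_transform(makes)

print(list(le.classes_))  # ['CHEVROLET', 'NISSAN', 'TESLA'] — sorted unique values
print(list(codes))        # [2, 1, 2, 0]
print(list(le.inverse_transform([0, 2])))  # ['CHEVROLET', 'TESLA']
```

One caveat: the integer codes impose an arbitrary ordering. Tree-based models tolerate this, but Linear Regression will interpret the codes as magnitudes, which is worth keeping in mind when comparing the models.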

Step 6: Data Aggregation

Aggregating by Model Year simplifies the data for time-series analysis, computing mean feature values and registration counts per year.

# Aggregate data by Model Year for forecasting
print("\nAggregating data by Model Year...")
annual_data = df.groupby('Model Year').agg({
    'Electric Range': 'mean',
    'County_Population_Density': 'mean',
    'Make_Encoded': 'mean',
    'Model Year': 'count'
}).rename(columns={'Model Year': 'Registrations'}).reset_index()
print("Annual data summary:")
print(annual_data.head())

Step 7: Historical Growth Analysis

Visualizing historical registrations helps identify trends, guiding model selection and forecasting assumptions.

# Step 1: Historical Growth Analysis
print("\nGenerating historical growth analysis plot...")
plt.figure(figsize=(10, 6))
sns.lineplot(data=annual_data, x='Model Year', y='Registrations', marker='o')
plt.title('Historical EV Registrations (1997-2024)')
plt.xlabel('Model Year')
plt.ylabel('Number of Registrations')
plt.grid(True)
plt.savefig('output/historical_ev_registrations.png')
try:
    display(plt.gcf())
except NameError:
    pass
plt.close()
print("Saved historical_ev_registrations.png")
[Figure: Historical EV Registrations]

Step 8: Prepare Features for Forecasting

Selecting relevant features and splitting data (if sufficient) ensures models learn from meaningful inputs. The script handles small datasets by skipping splits.

# Filter data for forecasting (2010-2023 for complete years)
forecast_data = annual_data[(annual_data['Model Year'] >= 2010) & (annual_data['Model Year'] <= 2023)]
print(f"Using {len(forecast_data)} years of data for forecasting (2010-2023)")

# Step 2: Prepare Features for Forecasting
print("\nPreparing features for forecasting...")
X = forecast_data[['Model Year', 'Electric Range', 'County_Population_Density', 'Make_Encoded']]
y = forecast_data['Registrations']

if len(X) < 5:
    print("Warning: Too few data points for a train-test split; training and evaluating on the full dataset.")
    X_train, X_test = X, X
    y_train, y_test = y, y
else:
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    print(f"Training set size: {X_train.shape[0]}, Test set size: {X_test.shape[0]}")

Function Explanation: train_test_split()

Splits the data into training (80%) and test (20%) sets via test_size=0.2; random_state=42 makes the split reproducible. With fewer than 5 samples, the script instead uses the full dataset for both training and evaluation to avoid empty splits — note that this in-sample evaluation will overstate model performance.
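On a toy dataset of 10 samples, the split behaves like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features
y = np.arange(10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
# Re-running with the same random_state reproduces exactly the same split
```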

Step 9: Define Models

Three models are chosen for their complementary strengths: Linear Regression for simplicity, Random Forest for non-linear patterns, and Gradient Boosting for sequential learning.

# Step 3: Define Models
print("\nInitializing models...")
models = {
    'Linear Regression': LinearRegression(),
    'Random Forest': RandomForestRegressor(n_estimators=100, random_state=42),
    'Gradient Boosting': GradientBoostingRegressor(n_estimators=100, random_state=42)
}

Step 10: Learning Curves

Learning curves diagnose model fit (overfitting/underfitting) by plotting training and validation errors against training set size.

# Step 4: Learning Curves
def plot_learning_curves(model, X, y, model_name):
    print(f"Generating learning curve for {model_name}...")
    cv_value = min(5, len(X) - 1) if len(X) > 2 else 2
    try:
        train_sizes, train_scores, test_scores = learning_curve(
            model, X, y, cv=cv_value, scoring='neg_mean_squared_error',
            n_jobs=1, train_sizes=np.linspace(0.3, 1.0, min(10, len(X) - 1)) if len(X) > 3 else np.array([0.5, 0.75, 1.0])
        )
        train_scores_mean = -train_scores.mean(axis=1)
        test_scores_mean = -test_scores.mean(axis=1)
        plt.figure(figsize=(10, 6))
        plt.plot(train_sizes, train_scores_mean, label='Training Error')
        plt.plot(train_sizes, test_scores_mean, label='Validation Error')
        plt.title(f'Learning Curves for {model_name}')
        plt.xlabel('Training Set Size')
        plt.ylabel('Mean Squared Error')
        plt.legend()
        plt.grid(True)
        plt.savefig(f'output/learning_curve_{model_name.replace(" ", "_")}.png')
        try:
            display(plt.gcf())
        except NameError:
            pass
        plt.close()
        print(f"Saved learning curve for {model_name}")
    except Exception as e:
        print(f"Warning: Could not generate learning curve for {model_name}: {e}")

Function Explanation: learning_curve()

Computes training and validation scores for varying training set sizes using cross-validation (cv_value). The scoring='neg_mean_squared_error' returns negative MSE, which is negated for plotting. The function adapts to small datasets by adjusting cv and train_sizes, preventing errors.
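Here is a minimal, self-contained sketch of the same call on synthetic data (30 points of a noisy line, not the EV dataset), showing the shapes and sign convention involved:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=(30, 1))
y = 3 * X.ravel() + rng.normal(0, 1, 30)  # noisy linear relationship

train_sizes, train_scores, test_scores = learning_curve(
    LinearRegression(), X, y, cv=3,
    scoring='neg_mean_squared_error',
    train_sizes=np.linspace(0.3, 1.0, 4))

# Scores are negative MSE: negate before plotting, as the script does
print(train_sizes)  # absolute training-set sizes; the largest is 20 (2 of the 3 folds)
train_mse = -train_scores.mean(axis=1)
val_mse = -test_scores.mean(axis=1)
print(train_mse.shape, val_mse.shape)  # (4,) (4,)
```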

[Figure: Random Forest Learning Curve]

Step 11: Train Models and Evaluate

Training and evaluating models provide performance metrics (MSE, R²) and forecasts. Feature importance (for tree-based models) highlights key drivers.

# Step 5: Train Models and Evaluate
print("\nTraining models and generating forecasts...")
model_results = []
feature_importance = {}

for name, model in models.items():
    print(f"\nTraining {name}...")
    try:
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        mse = mean_squared_error(y_test, y_pred)
        r2 = r2_score(y_test, y_pred)
        model_results.append({'Model': name, 'MSE': mse, 'R2': r2})
        print(f"{name} - MSE: {mse:.2f}, R²: {r2:.2f}")
        
        if len(X) >= 5:
            plot_learning_curves(model, X, y, name)
        
        if name != 'Linear Regression':
            feature_importance[name] = pd.DataFrame({
                'Feature': X.columns,
                'Importance': model.feature_importances_
            }).sort_values('Importance', ascending=False)
            print(f"Feature importance for {name}:")
            print(feature_importance[name])
        
        future_years = pd.DataFrame({
            'Model Year': range(2024, 2029),
            'Electric Range': [forecast_data['Electric Range'].mean()] * 5,
            'County_Population_Density': [forecast_data['County_Population_Density'].mean()] * 5,
            'Make_Encoded': [forecast_data['Make_Encoded'].mean()] * 5
        })
        future_predictions = model.predict(future_years)
        print(f"Forecast for 2024-2028 using {name}:")
        for year, pred in zip(range(2024, 2029), future_predictions):
            print(f"  {year}: {pred:.0f} registrations")
        
        forecast_df = pd.DataFrame({
            'Year': list(forecast_data['Model Year']) + list(range(2024, 2029)),
            'Registrations': list(forecast_data['Registrations']) + list(future_predictions),
            'Type': ['Historical'] * len(forecast_data) + ['Forecasted'] * 5
        })
        plt.figure(figsize=(10, 6))
        sns.lineplot(data=forecast_df, x='Year', y='Registrations', hue='Type', marker='o')
        plt.title(f'EV Registrations Forecast with {name}')
        plt.xlabel('Year')
        plt.ylabel('Number of Registrations')
        plt.grid(True)
        plt.savefig(f'output/forecast_{name.replace(" ", "_")}.png')
        try:
            display(plt.gcf())
        except NameError:
            pass
        plt.close()
        print(f"Saved forecast plot for {name}")
    except Exception as e:
        print(f"Error in {name} model: {e}")

Function Explanation: mean_squared_error() and r2_score()

mean_squared_error() calculates the average squared difference between predicted and actual values, quantifying prediction error. r2_score() measures the proportion of variance explained, with values near 1 indicating a good fit. These metrics are standard for regression tasks.
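A worked toy example (made-up registration counts) shows how the two metrics relate:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([100, 200, 300, 400])  # toy actual registrations
y_pred = np.array([110, 190, 310, 390])  # toy predictions, each off by 10

mse = mean_squared_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(mse)  # 100.0 — the mean of the squared errors (10 squared, each row)
print(r2)   # 0.992 — 1 - SSE/SST = 1 - 400/50000
```

MSE is in squared units of the target (registrations squared here), while R² is unitless, which is why the two are reported together.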

[Figure: Random Forest Forecast]

Step 12: Model Comparison

Comparing R² scores visually helps identify the best-performing model.

# Step 6: Model Comparison
print("\nGenerating model comparison chart...")
results_df = pd.DataFrame(model_results)
plt.figure(figsize=(10, 6))
sns.barplot(x='Model', y='R2', data=results_df)
plt.title('Model Comparison: R² Scores')
plt.ylabel('R² Score')
plt.savefig('output/model_comparison.png')
try:
    display(plt.gcf())
except NameError:
    pass
plt.close()
print("Saved model comparison chart")
[Figure: Model Comparison Chart]

Step 13: Distribution by Model

Identifying top models informs market preferences and manufacturer success.

# Step 7: Distribution by Model
print("\nAnalyzing distribution by model...")
top_models = df.groupby(['Make', 'Model']).size().reset_index(name='Registrations')
top_models = top_models.sort_values('Registrations', ascending=False).head(5)
plt.figure(figsize=(10, 6))
sns.barplot(x='Registrations', y='Model', hue='Make', data=top_models)
plt.title('Top 5 EV Models by Registrations')
plt.xlabel('Number of Registrations')
plt.ylabel('Model')
plt.tight_layout()
plt.savefig('output/top_ev_models.png')
try:
    display(plt.gcf())
except NameError:
    pass
plt.close()
print("Saved top EV models chart")
[Figure: Top EV Models]

Step 14: Geographic Distribution

Analyzing county-level registrations highlights regional adoption patterns.

# Step 8: Geographic Distribution
print("\nAnalyzing geographic distribution...")
top_counties = df['County'].value_counts().head(5).reset_index()
top_counties.columns = ['County', 'Registrations']
plt.figure(figsize=(10, 6))
sns.barplot(x='Registrations', y='County', data=top_counties)
plt.title('Top 5 Counties by EV Registrations')
plt.xlabel('Number of Registrations')
plt.ylabel('County')
plt.savefig('output/geographic_distribution.png')
try:
    display(plt.gcf())
except NameError:
    pass
plt.close()
print("Saved geographic distribution chart")
[Figure: Geographic Distribution]

Step 15: Feature Impact Analysis

Training Random Forest with single features isolates their predictive power, aiding feature selection.

# Step 9: Feature Impact Analysis
print("\nPerforming feature impact analysis...")
single_feature_results = []
for feature in X.columns:
    print(f"Analyzing impact of {feature}...")
    X_single = X[[feature]]
    if len(X) < 5:
        X_train_s, X_test_s = X_single, X_single
        y_train_s, y_test_s = y, y
    else:
        X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(X_single, y, test_size=0.2, random_state=42)
    try:
        rf = RandomForestRegressor(n_estimators=100, random_state=42)
        rf.fit(X_train_s, y_train_s)
        y_pred_s = rf.predict(X_test_s)
        r2 = r2_score(y_test_s, y_pred_s)
        single_feature_results.append({'Feature': feature, 'R2': r2})
        print(f"  R² score: {r2:.2f}")
    except Exception as e:
        print(f"  Error analyzing {feature}: {e}")

if single_feature_results:
    feature_impact_df = pd.DataFrame(single_feature_results)
    plt.figure(figsize=(10, 6))
    sns.barplot(x='R2', y='Feature', data=feature_impact_df)
    plt.title('Feature Impact on Registrations (Random Forest R²)')
    plt.xlabel('R² Score')
    plt.savefig('output/feature_impact.png')
    try:
        display(plt.gcf())
    except NameError:
        pass
    plt.close()
    print("Saved feature impact analysis")
[Figure: Feature Impact Analysis]

Step 16: Save Summary

A summary report consolidates findings for easy reference, enhancing usability.

# Step 10: Save Summary
print("\nGenerating summary report...")
try:
    with open('output/ev_market_summary.txt', 'w') as f:
        f.write("EV Market Size Analysis Summary\n")
        f.write("===============================\n\n")
        f.write(f"Dataset Information:\n")
        f.write(f"Original dataset size: {df.shape[0]} records\n")
        f.write(f"Years covered: {df['Model Year'].min()} to {df['Model Year'].max()}\n\n")
        f.write(f"Model Performance:\n")
        f.write(f"{results_df.to_string(index=False)}\n\n")
        for name in feature_importance:
            f.write(f"Feature Importance ({name}):\n")
            f.write(f"{feature_importance[name].to_string(index=False)}\n\n")
        if single_feature_results:
            f.write(f"Feature Impact:\n")
            f.write(f"{feature_impact_df.to_string(index=False)}\n\n")
        f.write(f"Top 5 Models:\n")
        f.write(f"{top_models.to_string(index=False)}\n\n")
        f.write(f"Top 5 Counties:\n")
        f.write(f"{top_counties.to_string(index=False)}\n")
    print("Summary report saved to output/ev_market_summary.txt")
except Exception as e:
    print(f"Error saving summary report: {e}")

Step 17: Utility Functions

Utility functions enhance user experience by handling plot display and listing output files.

# Function to directly display figures (for Jupyter notebooks)
def show_plt():
    """Display the current figure if in a notebook environment."""
    try:
        display(plt.gcf())
    except NameError:
        # display() is only imported when a notebook environment is detected
        print("Note: Figure will only be saved to file (not displayed).")

# End of analysis, show where to find the files
def print_file_locations():
    """Print information about where to find the saved output files"""
    print("\n" + "="*70)
    print("ANALYSIS COMPLETE!")
    print("="*70)
    print("\nAll visualizations have been saved to the 'output' directory.")
    print("\nIf you cannot see the visualizations in the notebook:")
    print("1. Check the 'output' folder for PNG image files")
    print("2. For Jupyter Notebook: Try running '%matplotlib inline' in a cell")
    print("3. For Jupyter Lab: Try running '%matplotlib widget' in a cell")
    print("4. If using a script environment, open the PNG files in an image viewer")
    print("\nSummary report is available at: output/ev_market_summary.txt")
    try:
        files = os.listdir('output')
        if files:
            print("\nGenerated files:")
            for file in sorted(files):
                print(f"- output/{file}")
    except Exception as e:
        print(f"Could not list output files: {e}")

print_file_locations()

Evaluation Measures

The script evaluates models using:

  • Mean Squared Error (MSE): Quantifies prediction error. Lower values are better.
  • R² Score: Measures explained variance. Values near 1 indicate a good fit.
  • Learning Curves: Visualize training and validation errors to diagnose model fit.

MSE and R² are standard regression metrics, providing complementary insights into model accuracy and explanatory power. Learning curves help assess overfitting or underfitting.

Models Used

Model             | Strengths                                          | Justification
------------------|----------------------------------------------------|----------------------------------------------------
Linear Regression | Simple, interpretable, fast                        | Baseline model to test linear relationships
Random Forest     | Handles non-linear patterns, robust to overfitting | Suitable for complex relationships in EV data
Gradient Boosting | Optimizes sequentially, captures complex patterns  | Effective for small datasets with intricate trends

The combination of models balances simplicity and complexity, allowing comparison of linear and non-linear approaches.

Observations

  • Historical Trends: EV registrations have grown significantly since 2010, reflecting market expansion.
  • Model Performance: Random Forest and Gradient Boosting often outperform Linear Regression due to non-linear patterns.
  • Forecasts: Predicted growth continues through 2028, with model-specific variations.
  • Feature Impact: Model Year and Electric Range are key predictors, per feature importance and impact analysis.
  • Market Insights: Top models and counties highlight consumer preferences and regional adoption.

How to Use

  1. Place Electric_Vehicle_Population_Data.csv in the working directory.
  2. Install libraries: pip install pandas numpy matplotlib seaborn scikit-learn.
  3. Run the script in Python or Jupyter.
  4. Check the output folder for visualizations and ev_market_summary.txt.

Limitations

  • Synthetic County_Population_Density reduces realism.
  • Limited data may affect forecast accuracy.
  • Missing features (e.g., incentives) could enhance models.

Conclusion

This tutorial and script provide a comprehensive framework for analyzing and forecasting EV market size. By integrating robust data processing, modeling, and visualization, it offers actionable insights for stakeholders. Extend the analysis with additional features or datasets for deeper insights.
