Statistical Modeling with Statsmodels
statsmodels-statistical
  • __pycache__
  • data
  • examples
  • notebooks
  • .gitignore (458 B)
  • CHANGELOG.md (4 KB)
  • FEATURES.md (6.3 KB)
  • LICENSE (1.2 KB)
  • PROJECT_INFO.md (2.2 KB)
  • PROJECT_SUMMARY.md (4.2 KB)
  • README.md (7.4 KB)
  • RELEASE_NOTES_v1.0.0.md (6.5 KB)
  • UNIQUE_FEATURES.md (5.3 KB)
  • advanced_time_series.py (9.8 KB)
  • automated_reporting.py (8.3 KB)
  • bayesian_statistics.py (7.5 KB)
  • data_preprocessing.py (8.2 KB)
  • econometric_modeling.py (9.8 KB)
  • hypothesis_testing.py (12.5 KB)
  • index.html (10.8 KB)
  • model_evaluation.py (9.1 KB)
  • model_persistence.py (6.5 KB)
  • model_selection.py (9.7 KB)
  • panel_data_analysis.py (7.3 KB)
  • performance_benchmarking.py (7.3 KB)
  • regression_analysis.py (9 KB)
  • requirements.txt (361 B)
  • statistical_diagnostics.py (13.8 KB)
  • statsmodels-statistical.png (284 B)
  • time_series_analysis.py (10.3 KB)
  • visualization_utils.py (8.9 KB)
performance_benchmarking.py
"""
Performance Benchmarking and Profiling

Author: RSK World
Website: https://rskworld.in
Email: help@rskworld.in
Phone: +91 93305 39277
"""

import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from functools import wraps
import warnings
warnings.filterwarnings('ignore')


class PerformanceBenchmark:
    """
    Performance Benchmarking Tools
    
    Author: RSK World
    Website: https://rskworld.in
    Email: help@rskworld.in
    Phone: +91 93305 39277
    """
    
    def __init__(self):
        self.benchmarks = []
    
    def benchmark_function(self, func, *args, **kwargs):
        """
        Benchmark a function execution
        
        Parameters:
        -----------
        func : callable
            Function to benchmark
        *args, **kwargs
            Arguments to pass to function
        """
        start_time = time.perf_counter()  # monotonic clock, suited to interval timing
        start_memory = self._get_memory_usage()
        
        result = func(*args, **kwargs)
        
        end_time = time.perf_counter()
        end_memory = self._get_memory_usage()
        
        execution_time = end_time - start_time
        memory_used = end_memory - start_memory
        
        benchmark = {
            'function': func.__name__,
            'execution_time': execution_time,
            'memory_used': memory_used,
            'timestamp': time.time()
        }
        
        self.benchmarks.append(benchmark)
        
        print(f"Benchmark: {func.__name__}")
        print(f"Execution Time: {execution_time:.4f} seconds")
        print(f"Memory Used: {memory_used:.2f} MB")
        
        return result, benchmark
    
    def compare_models(self, models_dict, X, y):
        """
        Compare performance of multiple models
        
        Parameters:
        -----------
        models_dict : dict
            Dictionary of model names and model objects
        X : array-like
            Independent variables
        y : array-like
            Dependent variable
        """
        results = []
        
        for name, model in models_dict.items():
            print(f"\nBenchmarking: {name}")
            print("-" * 70)
            
            # Training time
            start = time.perf_counter()
            if hasattr(model, 'fit'):
                model.fit(X, y)
            training_time = time.perf_counter() - start
            
            # Prediction time
            start = time.perf_counter()
            if hasattr(model, 'predict'):
                model.predict(X)
            prediction_time = time.perf_counter() - start
            
            # Model metrics
            metrics = {}
            if hasattr(model, 'results'):
                metrics['aic'] = getattr(model.results, 'aic', None)
                metrics['bic'] = getattr(model.results, 'bic', None)
                metrics['r_squared'] = getattr(model.results, 'rsquared', None)
            
            results.append({
                'model': name,
                'training_time': training_time,
                'prediction_time': prediction_time,
                'total_time': training_time + prediction_time,
                **metrics
            })
        
        results_df = pd.DataFrame(results)
        
        print("\n" + "=" * 70)
        print("MODEL PERFORMANCE COMPARISON")
        print("=" * 70)
        print(results_df.to_string(index=False))
        
        return results_df
    
    def plot_benchmark_comparison(self, metric='execution_time'):
        """Plot benchmark comparison"""
        if not self.benchmarks:
            print("No benchmarks to plot")
            return
        
        df = pd.DataFrame(self.benchmarks)
        
        plt.figure(figsize=(10, 6))
        plt.bar(df['function'], df[metric])
        plt.xlabel('Function')
        plt.ylabel(metric.replace('_', ' ').title())
        plt.title('Performance Benchmark Comparison')
        plt.xticks(rotation=45, ha='right')
        plt.grid(True, alpha=0.3, axis='y')
        plt.tight_layout()
        plt.show()
    
    def _get_memory_usage(self):
        """Get current memory usage in MB"""
        try:
            import psutil
            process = psutil.Process()
            return process.memory_info().rss / 1024 / 1024  # Convert to MB
        except ImportError:
            return 0
    
    def timeit(self, n_iterations=10):
        """
        Decorator for timing function execution
        
        Parameters:
        -----------
        n_iterations : int
            Number of iterations to average
        """
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                times = []
                result = None  # defined even if n_iterations == 0
                for _ in range(n_iterations):
                    start = time.perf_counter()
                    result = func(*args, **kwargs)
                    times.append(time.perf_counter() - start)
                
                avg_time = np.mean(times)
                std_time = np.std(times)
                
                print(f"{func.__name__}: {avg_time:.4f} ± {std_time:.4f} seconds (avg of {n_iterations} runs)")
                
                return result
            return wrapper
        return decorator


def benchmark_model_complexity(X, y, model_class, complexity_params):
    """
    Benchmark model performance across different complexities
    
    Parameters:
    -----------
    X : array-like
        Independent variables
    y : array-like
        Dependent variable
    model_class : class
        Model class to instantiate
    complexity_params : dict
        Dictionary of complexity parameter names and values
    """
    results = []
    
    for param_name, param_values in complexity_params.items():
        for param_value in param_values:
            model = model_class()
            setattr(model, param_name, param_value)
            
            start = time.perf_counter()
            model.fit(X, y)
            training_time = time.perf_counter() - start
            
            start = time.perf_counter()
            model.predict(X)
            prediction_time = time.perf_counter() - start
            
            results.append({
                param_name: param_value,
                'training_time': training_time,
                'prediction_time': prediction_time,
                'total_time': training_time + prediction_time
            })
    
    return pd.DataFrame(results)


if __name__ == "__main__":
    # Example usage
    print("Performance Benchmarking Example")
    print("=" * 70)
    
    from regression_analysis import LinearRegressionModel
    
    # Generate sample data
    np.random.seed(42)
    X = np.random.randn(1000, 10)
    y = 2 + np.sum(X[:, :3], axis=1) + np.random.randn(1000) * 0.5
    
    # Benchmark
    benchmark = PerformanceBenchmark()
    
    def fit_model():
        model = LinearRegressionModel()
        model.fit(X, y)
        return model
    
    result, bench = benchmark.benchmark_function(fit_model)
    
    # Compare models
    models = {
        'Simple Model': LinearRegressionModel()
    }
    
    comparison = benchmark.compare_models(models, X, y)
