polars-fastdataframes: High-performance DataFrames with Polars
notebooks/04_advanced_queries.ipynb

{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Advanced Queries with Polars\n",
        "\n",
        "<!--\n",
        "Author: RSK World\n",
        "Website: https://rskworld.in\n",
        "Email: help@rskworld.in\n",
        "Phone: +91 93305 39277\n",
        "-->\n",
        "\n",
        "This notebook demonstrates advanced query patterns and optimization techniques in Polars.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Author: RSK World\n",
        "# Website: https://rskworld.in\n",
        "# Email: help@rskworld.in\n",
        "# Phone: +91 93305 39277\n",
        "\n",
        "import polars as pl\n",
        "import numpy as np\n",
        "from datetime import datetime, timedelta\n",
        "\n",
        "print(\"Polars version:\", pl.__version__)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. Complex Window Functions\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create sample sales data\n",
        "sales_data = pl.DataFrame({\n",
        "    'date': pl.date_range(datetime(2023, 1, 1), datetime(2023, 12, 31), '1d', eager=True),\n",
        "    'product': np.random.choice(['Product A', 'Product B', 'Product C'], 365),\n",
        "    'sales': np.random.randint(100, 1000, 365),\n",
        "    'region': np.random.choice(['North', 'South', 'East', 'West'], 365)\n",
        "})\n",
        "\n",
        "# Advanced window functions\n",
        "advanced_window = sales_data.with_columns([\n",
        "    # Running total\n",
        "    pl.col('sales').cumsum().over('product').alias('running_total'),\n",
        "    # Moving average (7 days)\n",
        "    pl.col('sales').rolling_mean(window_size=7).over('product').alias('moving_avg_7d'),\n",
        "    # Rank\n",
        "    pl.col('sales').rank().over('region').alias('rank_in_region'),\n",
        "    # Percentile\n",
        "    pl.col('sales').quantile(0.5).over('product').alias('median_sales')\n",
        "])\n",
        "\n",
        "advanced_window.head(20)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. Conditional Aggregations\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Conditional aggregations\n",
        "conditional_agg = sales_data.group_by('region').agg([\n",
        "    # Sum of sales where sales > 500\n",
        "    pl.col('sales').filter(pl.col('sales') > 500).sum().alias('high_sales_sum'),\n",
        "    # Count of high sales\n",
        "    pl.col('sales').filter(pl.col('sales') > 500).count().alias('high_sales_count'),\n",
        "    # Average of all sales\n",
        "    pl.col('sales').mean().alias('avg_sales'),\n",
        "    # Conditional mean\n",
        "    pl.when(pl.col('sales') > 500)\n",
        "    .then(pl.col('sales'))\n",
        "    .otherwise(None)\n",
        "    .mean()\n",
        "    .alias('conditional_mean')\n",
        "])\n",
        "\n",
        "conditional_agg\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 3. Pivot Operations\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Pivot table\n",
        "pivot_table = sales_data.pivot(\n",
        "    values='sales',\n",
        "    index='date',\n",
        "    columns='region',\n",
        "    aggregate_function='sum'\n",
        ")\n",
        "\n",
        "pivot_table.head(10)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 4. Complex Joins\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create additional DataFrames for joins\n",
        "products = pl.DataFrame({\n",
        "    'product': ['Product A', 'Product B', 'Product C'],\n",
        "    'category': ['Electronics', 'Clothing', 'Food'],\n",
        "    'cost': [50, 30, 10]\n",
        "})\n",
        "\n",
        "regions = pl.DataFrame({\n",
        "    'region': ['North', 'South', 'East', 'West'],\n",
        "    'manager': ['Alice', 'Bob', 'Charlie', 'David'],\n",
        "    'budget': [100000, 120000, 90000, 110000]\n",
        "})\n",
        "\n",
        "# Multiple joins\n",
        "complex_join = (sales_data\n",
        "    .join(products, on='product', how='left')\n",
        "    .join(regions, on='region', how='left')\n",
        "    .with_columns([\n",
        "        (pl.col('sales') - pl.col('cost')).alias('profit')\n",
        "    ])\n",
        ")\n",
        "\n",
        "complex_join.head(10)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 5. Query Optimization with Explain\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Query optimization with explain() - view the optimized query plan\n",
        "# This helps understand how Polars optimizes your queries\n",
        "\n",
        "# Create a sample query\n",
        "sample_query = (sales_data.lazy()\n",
        "    .filter(pl.col('sales') > 500)\n",
        "    .join(products.lazy(), on='product', how='left')\n",
        "    .group_by('category')\n",
        "    .agg([\n",
        "        pl.col('sales').sum().alias('total_sales'),\n",
        "        pl.count().alias('transaction_count')\n",
        "    ])\n",
        "    .sort('total_sales', descending=True)\n",
        ")\n",
        "\n",
        "print(\"Query Plan (optimized):\")\n",
        "print(sample_query.explain())\n",
        "\n",
        "# Execute the query\n",
        "result = sample_query.collect()\n",
        "print(\"\\nQuery Result:\")\n",
        "print(result.head())\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 6. Time Series Operations\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create time series data\n",
        "dates = pl.date_range(datetime(2023, 1, 1), datetime(2023, 12, 31), '1d', eager=True)\n",
        "ts_data = pl.DataFrame({\n",
        "    'date': dates,\n",
        "    'value': np.random.randn(len(dates)).cumsum() * 100 + 1000,\n",
        "    'category': np.random.choice(['A', 'B', 'C'], len(dates))\n",
        "})\n",
        "\n",
        "# Time series operations\n",
        "ts_operations = ts_data.with_columns([\n",
        "    # Lag (previous value)\n",
        "    pl.col('value').shift(1).over('category').alias('lag_1'),\n",
        "    # Lead (next value)\n",
        "    pl.col('value').shift(-1).over('category').alias('lead_1'),\n",
        "    # Difference\n",
        "    pl.col('value').diff().over('category').alias('diff'),\n",
        "    # Percentage change\n",
        "    pl.col('value').pct_change().over('category').alias('pct_change'),\n",
        "    # Rolling statistics\n",
        "    pl.col('value').rolling_mean(window_size=7).over('category').alias('rolling_mean_7d'),\n",
        "    pl.col('value').rolling_std(window_size=7).over('category').alias('rolling_std_7d'),\n",
        "    pl.col('value').rolling_max(window_size=30).over('category').alias('rolling_max_30d'),\n",
        "    # Resample to monthly (using group_by with date truncation)\n",
        "    pl.col('date').dt.truncate('1mo').alias('month')\n",
        "])\n",
        "\n",
        "ts_operations.head(20)\n"
      ]
    },
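    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell above only labels each row with its month. A minimal sketch of actual monthly resampling with `group_by_dynamic` follows (assuming a recent Polars where the per-group keyword is `group_by`).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A minimal sketch: resample to monthly aggregates per category.\n",
        "# group_by_dynamic requires the index column to be sorted within groups.\n",
        "monthly = (ts_data\n",
        "    .sort('category', 'date')\n",
        "    .group_by_dynamic('date', every='1mo', group_by='category')\n",
        "    .agg([\n",
        "        pl.col('value').mean().alias('monthly_mean'),\n",
        "        pl.col('value').last().alias('month_end_value')\n",
        "    ])\n",
        ")\n",
        "\n",
        "monthly.head(10)\n"
      ]
    },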
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 7. Missing Data Handling\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create data with missing values\n",
        "data_with_nulls = pl.DataFrame({\n",
        "    'id': range(1, 11),\n",
        "    'name': ['Alice', None, 'Charlie', 'David', None, 'Frank', 'Grace', None, 'Ivy', 'Jack'],\n",
        "    'age': [25, 30, None, 28, 32, None, 29, 31, None, 33],\n",
        "    'salary': [50000, None, 70000, 55000, 65000, None, 58000, 62000, 51000, None],\n",
        "    'department': ['IT', 'HR', None, 'Finance', 'IT', 'HR', None, 'IT', 'HR', 'Finance']\n",
        "})\n",
        "\n",
        "print(\"Original data with nulls:\")\n",
        "print(data_with_nulls)\n",
        "print(f\"\\nNull counts per column:\")\n",
        "print(data_with_nulls.null_count())\n",
        "\n",
        "# Fill missing values\n",
        "filled = data_with_nulls.with_columns([\n",
        "    pl.col('name').fill_null('Unknown').alias('name_filled'),\n",
        "    pl.col('age').fill_null(pl.col('age').mean()).alias('age_filled'),\n",
        "    pl.col('salary').fill_null(strategy='forward').alias('salary_filled'),\n",
        "    pl.col('department').fill_null('Unassigned').alias('dept_filled')\n",
        "])\n",
        "\n",
        "print(\"\\nAfter filling nulls:\")\n",
        "print(filled)\n",
        "\n",
        "# Drop rows with any nulls\n",
        "dropped = data_with_nulls.drop_nulls()\n",
        "print(f\"\\nRows after dropping nulls: {len(dropped)}\")\n",
        "\n",
        "# Drop rows where specific columns are null\n",
        "dropped_specific = data_with_nulls.drop_nulls(subset=['name', 'salary'])\n",
        "print(f\"Rows after dropping nulls in 'name' or 'salary': {len(dropped_specific)}\")\n",
        "\n",
        "# Interpolate missing values\n",
        "interpolated = data_with_nulls.with_columns([\n",
        "    pl.col('age').interpolate().alias('age_interpolated'),\n",
        "    pl.col('salary').interpolate().alias('salary_interpolated')\n",
        "])\n",
        "print(\"\\nAfter interpolation:\")\n",
        "print(interpolated.select(['id', 'age', 'age_interpolated', 'salary', 'salary_interpolated']))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 8. Advanced String Operations\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create sample text data\n",
        "text_data = pl.DataFrame({\n",
        "    'id': range(1, 11),\n",
        "    'text': [\n",
        "        'Hello World',\n",
        "        'Python Programming',\n",
        "        'Data Science',\n",
        "        'Machine Learning',\n",
        "        'Deep Learning',\n",
        "        'Natural Language Processing',\n",
        "        'Computer Vision',\n",
        "        'Artificial Intelligence',\n",
        "        'Big Data Analytics',\n",
        "        'Cloud Computing'\n",
        "    ],\n",
        "    'email': [\n",
        "        'user1@example.com',\n",
        "        'user2@test.org',\n",
        "        'admin@company.com',\n",
        "        'info@website.net',\n",
        "        'contact@business.io',\n",
        "        'support@help.com',\n",
        "        'sales@store.com',\n",
        "        'marketing@brand.com',\n",
        "        'hr@company.com',\n",
        "        'dev@tech.com'\n",
        "    ]\n",
        "})\n",
        "\n",
        "# Advanced string operations\n",
        "string_ops = text_data.with_columns([\n",
        "    # Case operations\n",
        "    pl.col('text').str.to_uppercase().alias('upper'),\n",
        "    pl.col('text').str.to_lowercase().alias('lower'),\n",
        "    pl.col('text').str.to_titlecase().alias('title'),\n",
        "    # Length and counts\n",
        "    pl.col('text').str.len_chars().alias('char_count'),\n",
        "    pl.col('text').str.count_matches(' ').alias('word_count'),\n",
        "    # Extract patterns\n",
        "    pl.col('email').str.extract(r'@(\\w+)', 1).alias('domain'),\n",
        "    pl.col('email').str.extract_all(r'\\w+').alias('email_parts'),\n",
        "    # Replace\n",
        "    pl.col('text').str.replace(' ', '_').alias('snake_case'),\n",
        "    # Contains/Starts/Ends with\n",
        "    pl.col('text').str.contains('Learning').alias('has_learning'),\n",
        "    pl.col('text').str.starts_with('Data').alias('starts_data'),\n",
        "    pl.col('text').str.ends_with('ing').alias('ends_ing'),\n",
        "    # Slice\n",
        "    pl.col('text').str.slice(0, 5).alias('first_5_chars'),\n",
        "    # Split\n",
        "    pl.col('text').str.split(' ').alias('words_list')\n",
        "])\n",
        "\n",
        "string_ops\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 9. Type Conversions and Casting\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create data with mixed types\n",
        "mixed_data = pl.DataFrame({\n",
        "    'id': ['1', '2', '3', '4', '5'],\n",
        "    'price': ['100.50', '200.75', '300.25', '400.00', '500.50'],\n",
        "    'quantity': [10, 20, 30, 40, 50],\n",
        "    'is_active': ['true', 'false', 'true', 'false', 'true'],\n",
        "    'date_str': ['2023-01-01', '2023-02-15', '2023-03-20', '2023-04-10', '2023-05-05']\n",
        "})\n",
        "\n",
        "print(\"Original data types:\")\n",
        "print(mixed_data.schema)\n",
        "\n",
        "# Type conversions\n",
        "converted = mixed_data.with_columns([\n",
        "    pl.col('id').cast(pl.Int64).alias('id_int'),\n",
        "    pl.col('price').cast(pl.Float64).alias('price_float'),\n",
        "    pl.col('quantity').cast(pl.Float64).alias('quantity_float'),\n",
        "    pl.col('is_active').str.to_lowercase().cast(pl.Boolean).alias('is_active_bool'),\n",
        "    pl.col('date_str').str.to_date().alias('date'),\n",
        "    # Convert to string\n",
        "    pl.col('quantity').cast(pl.Utf8).alias('quantity_str')\n",
        "])\n",
        "\n",
        "print(\"\\nAfter type conversions:\")\n",
        "print(converted.schema)\n",
        "print(\"\\nConverted data:\")\n",
        "print(converted)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 10. Reshaping Operations (Melt, Stack, Unstack)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Wide format data\n",
        "wide_data = pl.DataFrame({\n",
        "    'id': [1, 2, 3],\n",
        "    'Q1_2023': [100, 200, 300],\n",
        "    'Q2_2023': [150, 250, 350],\n",
        "    'Q3_2023': [120, 220, 320],\n",
        "    'Q4_2023': [180, 280, 380]\n",
        "})\n",
        "\n",
        "print(\"Wide format:\")\n",
        "print(wide_data)\n",
        "\n",
        "# Melt (wide to long)\n",
        "long_data = wide_data.melt(\n",
        "    id_vars='id',\n",
        "    value_vars=['Q1_2023', 'Q2_2023', 'Q3_2023', 'Q4_2023'],\n",
        "    variable_name='quarter',\n",
        "    value_name='sales'\n",
        ")\n",
        "\n",
        "print(\"\\nLong format (melted):\")\n",
        "print(long_data)\n",
        "\n",
        "# Pivot (long to wide) - reverse operation\n",
        "wide_again = long_data.pivot(\n",
        "    values='sales',\n",
        "    index='id',\n",
        "    columns='quarter',\n",
        "    aggregate_function='first'\n",
        ")\n",
        "\n",
        "print(\"\\nBack to wide format (pivoted):\")\n",
        "print(wide_again)\n"
      ]
    },
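    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note: recent Polars releases deprecate `melt` in favor of `unpivot`. A sketch of the equivalent call (assuming Polars 1.x) follows.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Equivalent of the melt() call above using the newer unpivot() API\n",
        "long_data_v2 = wide_data.unpivot(\n",
        "    index='id',\n",
        "    on=['Q1_2023', 'Q2_2023', 'Q3_2023', 'Q4_2023'],\n",
        "    variable_name='quarter',\n",
        "    value_name='sales'\n",
        ")\n",
        "\n",
        "print(long_data_v2)\n"
      ]
    },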
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 11. Advanced Join Types\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create sample DataFrames for joins\n",
        "customers = pl.DataFrame({\n",
        "    'customer_id': [1, 2, 3, 4, 5],\n",
        "    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],\n",
        "    'city': ['NYC', 'LA', 'Chicago', 'NYC', 'LA']\n",
        "})\n",
        "\n",
        "orders = pl.DataFrame({\n",
        "    'order_id': [101, 102, 103, 104, 105, 106],\n",
        "    'customer_id': [1, 2, 1, 4, 3, 99],  # Note: 99 doesn't exist in customers\n",
        "    'amount': [100, 200, 150, 300, 250, 400],\n",
        "    'date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05', '2023-01-06']\n",
        "})\n",
        "\n",
        "print(\"Customers:\")\n",
        "print(customers)\n",
        "print(\"\\nOrders:\")\n",
        "print(orders)\n",
        "\n",
        "# Inner join\n",
        "inner = customers.join(orders, on='customer_id', how='inner')\n",
        "print(\"\\n1. Inner Join (only matching records):\")\n",
        "print(inner)\n",
        "\n",
        "# Left join\n",
        "left = customers.join(orders, on='customer_id', how='left')\n",
        "print(\"\\n2. Left Join (all customers, nulls for missing orders):\")\n",
        "print(left)\n",
        "\n",
        "# Right join\n",
        "right = customers.join(orders, on='customer_id', how='right')\n",
        "print(\"\\n3. Right Join (all orders, nulls for missing customers):\")\n",
        "print(right)\n",
        "\n",
        "# Outer join\n",
        "outer = customers.join(orders, on='customer_id', how='outer')\n",
        "print(\"\\n4. Outer Join (all records from both):\")\n",
        "print(outer)\n",
        "\n",
        "# Anti join (rows in left that don't have match in right)\n",
        "anti = customers.join(orders, on='customer_id', how='anti')\n",
        "print(\"\\n5. Anti Join (customers with no orders):\")\n",
        "print(anti)\n",
        "\n",
        "# Semi join (rows in left that have match in right, but don't include right columns)\n",
        "semi = customers.join(orders, on='customer_id', how='semi')\n",
        "print(\"\\n6. Semi Join (customers who have orders, but no order details):\")\n",
        "print(semi)\n"
      ]
    },
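    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One more advanced join worth knowing is the as-of join, which matches each row to the nearest earlier key instead of an exact key. The sketch below (with a hypothetical `quotes` table) matches each order to the most recent price quote at or before its date.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# A minimal sketch of join_asof; both frames must be sorted by the key\n",
        "quotes = pl.DataFrame({\n",
        "    'date': ['2023-01-01', '2023-01-03', '2023-01-05'],\n",
        "    'quote': [10.0, 12.0, 11.0]\n",
        "}).with_columns(pl.col('date').str.to_date())\n",
        "\n",
        "orders_dated = orders.with_columns(pl.col('date').str.to_date()).sort('date')\n",
        "\n",
        "asof = orders_dated.join_asof(quotes.sort('date'), on='date')\n",
        "print(asof)\n"
      ]
    },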
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 12. Complex Expressions and Chaining\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create employee data\n",
        "employees = pl.DataFrame({\n",
        "    'id': range(1, 11),\n",
        "    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve', 'Frank', 'Grace', 'Henry', 'Ivy', 'Jack'],\n",
        "    'base_salary': [50000, 60000, 70000, 55000, 65000, 52000, 58000, 62000, 51000, 68000],\n",
        "    'years_experience': [2, 5, 8, 3, 6, 4, 7, 9, 1, 10],\n",
        "    'department': ['IT', 'HR', 'IT', 'Finance', 'IT', 'HR', 'Finance', 'IT', 'HR', 'Finance'],\n",
        "    'performance_score': [85, 90, 95, 80, 88, 82, 92, 96, 75, 98]\n",
        "})\n",
        "\n",
        "# Complex chained operations with multiple expressions\n",
        "result = (employees\n",
        "    .with_columns([\n",
        "        # Calculate bonus based on performance\n",
        "        (pl.col('base_salary') * 0.1 * (pl.col('performance_score') / 100)).alias('bonus'),\n",
        "        # Experience multiplier\n",
        "        (1 + pl.col('years_experience') * 0.02).alias('exp_multiplier'),\n",
        "        # Total compensation\n",
        "        (pl.col('base_salary') * (1 + pl.col('years_experience') * 0.02)).alias('adjusted_salary')\n",
        "    ])\n",
        "    .with_columns([\n",
        "        # Final compensation\n",
        "        (pl.col('adjusted_salary') + pl.col('bonus')).alias('total_compensation'),\n",
        "        # Category based on salary\n",
        "        pl.when(pl.col('base_salary') > 60000)\n",
        "        .then(pl.lit('High'))\n",
        "        .when(pl.col('base_salary') > 55000)\n",
        "        .then(pl.lit('Medium'))\n",
        "        .otherwise(pl.lit('Low'))\n",
        "        .alias('salary_category')\n",
        "    ])\n",
        "    .filter(pl.col('total_compensation') > 60000)\n",
        "    .group_by(['department', 'salary_category'])\n",
        "    .agg([\n",
        "        pl.col('total_compensation').mean().alias('avg_compensation'),\n",
        "        pl.col('total_compensation').max().alias('max_compensation'),\n",
        "        pl.col('total_compensation').min().alias('min_compensation'),\n",
        "        pl.count().alias('employee_count')\n",
        "    ])\n",
        "    .sort('avg_compensation', descending=True)\n",
        ")\n",
        "\n",
        "print(\"Complex chained operations result:\")\n",
        "print(result)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 13. Data Validation and Quality Checks\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create data with potential quality issues\n",
        "quality_data = pl.DataFrame({\n",
        "    'id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
        "    'age': [25, 30, 150, 28, -5, 32, 27, 200, 26, 33],  # Invalid ages\n",
        "    'email': [\n",
        "        'valid@example.com',\n",
        "        'invalid-email',\n",
        "        'another@test.com',\n",
        "        'not-an-email',\n",
        "        'good@email.org',\n",
        "        'bad format',\n",
        "        'valid2@example.com',\n",
        "        'missing',\n",
        "        'valid3@test.net',\n",
        "        'invalid@'\n",
        "    ],\n",
        "    'score': [85, 90, 105, 80, 88, -10, 92, 150, 75, 98],  # Scores out of range\n",
        "    'status': ['active', 'inactive', 'active', 'pending', 'active', 'inactive', 'active', 'unknown', 'active', 'active']\n",
        "})\n",
        "\n",
        "print(\"Original data:\")\n",
        "print(quality_data)\n",
        "\n",
        "# Data quality checks\n",
        "quality_checks = quality_data.with_columns([\n",
        "    # Age validation (0-120)\n",
        "    pl.when((pl.col('age') >= 0) & (pl.col('age') <= 120))\n",
        "    .then(pl.col('age'))\n",
        "    .otherwise(None)\n",
        "    .alias('age_validated'),\n",
        "    # Email validation (simple check)\n",
        "    pl.when(pl.col('email').str.contains('@') & pl.col('email').str.contains('.'))\n",
        "    .then(pl.col('email'))\n",
        "    .otherwise(None)\n",
        "    .alias('email_validated'),\n",
        "    # Score validation (0-100)\n",
        "    pl.when((pl.col('score') >= 0) & (pl.col('score') <= 100))\n",
        "    .then(pl.col('score'))\n",
        "    .otherwise(None)\n",
        "    .alias('score_validated'),\n",
        "    # Overall validation flag\n",
        "    (\n",
        "        ((pl.col('age') >= 0) & (pl.col('age') <= 120)) &\n",
        "        (pl.col('email').str.contains('@') & pl.col('email').str.contains('.')) &\n",
        "        ((pl.col('score') >= 0) & (pl.col('score') <= 100))\n",
        "    ).alias('is_valid')\n",
        "])\n",
        "\n",
        "print(\"\\nData with validation:\")\n",
        "print(quality_checks)\n",
        "\n",
        "# Summary of data quality\n",
        "print(\"\\nData Quality Summary:\")\n",
        "print(f\"Valid records: {quality_checks.filter(pl.col('is_valid')).height}\")\n",
        "print(f\"Invalid records: {quality_checks.filter(~pl.col('is_valid')).height}\")\n",
        "print(f\"Invalid ages: {quality_checks.filter((pl.col('age') < 0) | (pl.col('age') > 120)).height}\")\n",
        "print(f\"Invalid emails: {quality_checks.filter(~(pl.col('email').str.contains('@') & pl.col('email').str.contains('.'))).height}\")\n",
        "print(f\"Invalid scores: {quality_checks.filter((pl.col('score') < 0) | (pl.col('score') > 100)).height}\")\n"
      ]
    },
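    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `@`/`.` check above is deliberately crude. A slightly stronger sketch using a single regular expression (still far from a full RFC-compliant validator) follows.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# str.contains() treats its pattern as a regex by default, so a single\n",
        "# anchored pattern can replace the two separate checks above\n",
        "email_regex = r'^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$'\n",
        "\n",
        "regex_checked = quality_data.with_columns(\n",
        "    pl.col('email').str.contains(email_regex).alias('email_looks_valid')\n",
        ")\n",
        "print(regex_checked.select(['id', 'email', 'email_looks_valid']))\n"
      ]
    },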
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 14. Advanced Aggregations with Multiple Conditions\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sales data for advanced aggregations\n",
        "sales = pl.DataFrame({\n",
        "    'date': pl.date_range(datetime(2023, 1, 1), datetime(2023, 3, 31), '1d', eager=True),\n",
        "    'product': np.random.choice(['Product A', 'Product B', 'Product C'], 90),\n",
        "    'region': np.random.choice(['North', 'South', 'East', 'West'], 90),\n",
        "    'sales': np.random.randint(100, 1000, 90),\n",
        "    'quantity': np.random.randint(1, 50, 90)\n",
        "})\n",
        "\n",
        "# Advanced aggregations\n",
        "advanced_agg = sales.group_by(['product', 'region']).agg([\n",
        "    # Basic stats\n",
        "    pl.col('sales').sum().alias('total_sales'),\n",
        "    pl.col('sales').mean().alias('avg_sales'),\n",
        "    pl.col('sales').median().alias('median_sales'),\n",
        "    pl.col('sales').std().alias('std_sales'),\n",
        "    pl.col('sales').min().alias('min_sales'),\n",
        "    pl.col('sales').max().alias('max_sales'),\n",
        "    # Quantiles\n",
        "    pl.col('sales').quantile(0.25).alias('q25_sales'),\n",
        "    pl.col('sales').quantile(0.75).alias('q75_sales'),\n",
        "    # Conditional aggregations\n",
        "    pl.col('sales').filter(pl.col('sales') > 500).sum().alias('high_sales_sum'),\n",
        "    pl.col('sales').filter(pl.col('sales') > 500).count().alias('high_sales_count'),\n",
        "    pl.col('sales').filter(pl.col('sales') < 300).count().alias('low_sales_count'),\n",
        "    # Multiple conditions\n",
        "    pl.when(pl.col('sales') > 700)\n",
        "    .then(pl.col('sales'))\n",
        "    .otherwise(None)\n",
        "    .sum()\n",
        "    .alias('premium_sales'),\n",
        "    # Count distinct\n",
        "    pl.col('date').n_unique().alias('days_with_sales'),\n",
        "    # First and last\n",
        "    pl.col('sales').first().alias('first_sale'),\n",
        "    pl.col('sales').last().alias('last_sale')\n",
        "])\n",
        "\n",
        "print(\"Advanced aggregations:\")\n",
        "print(advanced_agg)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 15. Working with Nested Data (Struct and List)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create data with nested structures\n",
        "nested_data = pl.DataFrame({\n",
        "    'id': [1, 2, 3, 4, 5],\n",
        "    'name': ['Alice', 'Bob', 'Charlie', 'David', 'Eve'],\n",
        "    'scores': [[85, 90, 88], [92, 87, 91], [78, 85, 80], [95, 93, 97], [88, 90, 89]],\n",
        "    'address': [\n",
        "        {'street': '123 Main St', 'city': 'NYC', 'zip': '10001'},\n",
        "        {'street': '456 Oak Ave', 'city': 'LA', 'zip': '90001'},\n",
        "        {'street': '789 Pine Rd', 'city': 'Chicago', 'zip': '60601'},\n",
        "        {'street': '321 Elm St', 'city': 'NYC', 'zip': '10002'},\n",
        "        {'street': '654 Maple Dr', 'city': 'LA', 'zip': '90002'}\n",
        "    ]\n",
        "})\n",
        "\n",
        "print(\"Nested data:\")\n",
        "print(nested_data)\n",
        "\n",
        "# Work with lists\n",
        "list_ops = nested_data.with_columns([\n",
        "    # List operations\n",
        "    pl.col('scores').list.mean().alias('avg_score'),\n",
        "    pl.col('scores').list.sum().alias('total_score'),\n",
        "    pl.col('scores').list.max().alias('max_score'),\n",
        "    pl.col('scores').list.min().alias('min_score'),\n",
        "    pl.col('scores').list.len().alias('num_scores'),\n",
        "    pl.col('scores').list.sort().alias('scores_sorted'),\n",
        "    pl.col('scores').list.first().alias('first_score'),\n",
        "    pl.col('scores').list.last().alias('last_score')\n",
        "])\n",
        "\n",
        "print(\"\\nList operations:\")\n",
        "print(list_ops.select(['id', 'name', 'scores', 'avg_score', 'total_score', 'max_score']))\n",
        "\n",
        "# Work with structs\n",
        "struct_ops = nested_data.with_columns([\n",
        "    # Extract struct fields\n",
        "    pl.col('address').struct.field('city').alias('city'),\n",
        "    pl.col('address').struct.field('zip').alias('zip'),\n",
        "    pl.col('address').struct.field('street').alias('street')\n",
        "])\n",
        "\n",
        "print(\"\\nStruct operations:\")\n",
        "print(struct_ops.select(['id', 'name', 'city', 'zip', 'street']))\n"
      ]
    },
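    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "When the per-element list operations above are not enough, `explode` flattens each list element into its own row so ordinary expressions apply. A short sketch:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# explode() turns each list element into its own row\n",
        "exploded = nested_data.explode('scores')\n",
        "print(exploded.head(6))\n",
        "\n",
        "# After exploding, normal group_by aggregations work on the elements\n",
        "print(exploded.group_by('name').agg(pl.col('scores').mean().alias('avg_score')).sort('name'))\n"
      ]
    },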
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 16. Real-World Example: E-Commerce Analytics\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create realistic e-commerce data\n",
        "np.random.seed(42)\n",
        "dates = pl.date_range(datetime(2023, 1, 1), datetime(2023, 12, 31), '1d', eager=True)\n",
        "products = ['Laptop', 'Phone', 'Tablet', 'Headphones', 'Mouse', 'Keyboard', 'Monitor', 'Speaker']\n",
        "categories = ['Electronics', 'Electronics', 'Electronics', 'Audio', 'Accessories', 'Accessories', 'Electronics', 'Audio']\n",
        "regions = ['North', 'South', 'East', 'West']\n",
        "\n",
        "ecommerce = pl.DataFrame({\n",
        "    'order_id': range(1, 1001),\n",
        "    'date': np.random.choice(dates, 1000),\n",
        "    'product': np.random.choice(products, 1000),\n",
        "    'category': [categories[products.index(p)] for p in np.random.choice(products, 1000)],\n",
        "    'region': np.random.choice(regions, 1000),\n",
        "    'quantity': np.random.randint(1, 10, 1000),\n",
        "    'unit_price': np.random.uniform(50, 1000, 1000),\n",
        "    'customer_id': np.random.randint(1, 200, 1000),\n",
        "    'discount': np.random.uniform(0, 0.3, 1000)\n",
        "})\n",
        "\n",
        "# Calculate derived columns\n",
        "ecommerce = ecommerce.with_columns([\n",
        "    (pl.col('unit_price') * pl.col('quantity')).alias('subtotal'),\n",
        "    (pl.col('unit_price') * pl.col('quantity') * pl.col('discount')).alias('discount_amount'),\n",
        "    (pl.col('unit_price') * pl.col('quantity') * (1 - pl.col('discount'))).alias('total')\n",
        "])\n",
        "\n",
        "print(\"E-commerce data sample:\")\n",
        "print(ecommerce.head(10))\n",
        "\n",
        "# Comprehensive analytics\n",
        "analytics = (ecommerce\n",
        "    .with_columns([\n",
        "        pl.col('date').dt.month().alias('month'),\n",
        "        pl.col('date').dt.quarter().alias('quarter'),\n",
        "        pl.col('date').dt.weekday().alias('weekday')\n",
        "    ])\n",
        "    .group_by(['category', 'quarter'])\n",
        "    .agg([\n",
        "        pl.col('total').sum().alias('revenue'),\n",
        "        pl.col('total').mean().alias('avg_order_value'),\n",
        "        pl.col('order_id').n_unique().alias('num_orders'),\n",
        "        pl.col('quantity').sum().alias('total_quantity'),\n",
        "        pl.col('discount').mean().alias('avg_discount'),\n",
        "        (pl.col('total').sum() / pl.col('order_id').n_unique()).alias('revenue_per_order')\n",
        "    ])\n",
        "    .sort(['category', 'quarter'])\n",
        ")\n",
        "\n",
        "print(\"\\nCategory and Quarter Analytics:\")\n",
        "print(analytics)\n",
        "\n",
        "# Top products by revenue\n",
        "top_products = (ecommerce\n",
        "    .group_by('product')\n",
        "    .agg([\n",
        "        pl.col('total').sum().alias('total_revenue'),\n",
        "        pl.col('order_id').n_unique().alias('num_orders'),\n",
        "        pl.col('quantity').sum().alias('total_sold')\n",
        "    ])\n",
        "    .sort('total_revenue', descending=True)\n",
        "    .head(5)\n",
        ")\n",
        "\n",
        "print(\"\\nTop 5 Products by Revenue:\")\n",
        "print(top_products)\n",
        "\n",
        "# Regional performance\n",
        "regional_perf = (ecommerce\n",
        "    .group_by('region')\n",
        "    .agg([\n",
        "        pl.col('total').sum().alias('total_revenue'),\n",
        "        pl.col('order_id').n_unique().alias('num_orders'),\n",
        "        (pl.col('total').sum() / pl.col('order_id').n_unique()).alias('avg_order_value'),\n",
        "        pl.col('customer_id').n_unique().alias('unique_customers')\n",
        "    ])\n",
        "    .with_columns([\n",
        "        (pl.col('total_revenue') / pl.col('total_revenue').sum() * 100).alias('revenue_percentage')\n",
        "    ])\n",
        "    .sort('total_revenue', descending=True)\n",
        ")\n",
        "\n",
        "print(\"\\nRegional Performance:\")\n",
        "print(regional_perf)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 17. Performance Optimization Tips\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create large dataset for performance demo\n",
        "print(\"Creating large dataset for performance comparison...\")\n",
        "large_data = pl.DataFrame({\n",
        "    'id': range(1, 100001),\n",
        "    'category': np.random.choice(['A', 'B', 'C', 'D', 'E'], 100000),\n",
        "    'value1': np.random.randn(100000) * 100,\n",
        "    'value2': np.random.randn(100000) * 50,\n",
        "    'value3': np.random.randint(1, 1000, 100000)\n",
        "})\n",
        "\n",
        "print(f\"Dataset shape: {large_data.shape}\")\n",
        "\n",
        "# Tip 1: Use lazy evaluation for complex queries\n",
        "import time\n",
        "\n",
        "print(\"\\n1. Eager vs Lazy Evaluation:\")\n",
        "start = time.time()\n",
        "eager_result = (large_data\n",
        "    .filter(pl.col('value1') > 50)\n",
        "    .filter(pl.col('value2') < 20)\n",
        "    .group_by('category')\n",
        "    .agg([pl.col('value1').mean(), pl.count()])\n",
        ")\n",
        "eager_time = time.time() - start\n",
        "print(f\"   Eager execution: {eager_time:.4f} seconds\")\n",
        "\n",
        "start = time.time()\n",
        "lazy_result = (large_data.lazy()\n",
        "    .filter(pl.col('value1') > 50)\n",
        "    .filter(pl.col('value2') < 20)\n",
        "    .group_by('category')\n",
        "    .agg([pl.col('value1').mean(), pl.count()])\n",
        "    .collect()\n",
        ")\n",
        "lazy_time = time.time() - start\n",
        "print(f\"   Lazy execution: {lazy_time:.4f} seconds\")\n",
        "print(f\"   Speedup: {eager_time / lazy_time:.2f}x\")\n",
        "\n",
        "# Tip 2: Select only needed columns early\n",
        "print(\"\\n2. Column Selection Optimization:\")\n",
        "start = time.time()\n",
        "all_cols = (large_data\n",
        "    .filter(pl.col('value1') > 50)\n",
        "    .group_by('category')\n",
        "    .agg([pl.col('value1').mean(), pl.col('value2').mean(), pl.col('value3').mean()])\n",
        ")\n",
        "all_cols_time = time.time() - start\n",
        "\n",
        "start = time.time()\n",
        "selected_cols = (large_data\n",
        "    .select(['category', 'value1', 'value2', 'value3'])\n",
        "    .filter(pl.col('value1') > 50)\n",
        "    .group_by('category')\n",
        "    .agg([pl.col('value1').mean(), pl.col('value2').mean(), pl.col('value3').mean()])\n",
        ")\n",
        "selected_cols_time = time.time() - start\n",
        "\n",
        "print(f\"   Without early selection: {all_cols_time:.4f} seconds\")\n",
        "print(f\"   With early selection: {selected_cols_time:.4f} seconds\")\n",
        "\n",
        "# Tip 3: Use appropriate data types\n",
        "print(\"\\n3. Data Type Optimization:\")\n",
        "# Check memory usage\n",
        "print(f\"   Current memory: {large_data.estimated_size() / (1024 * 1024):.2f} MB\")\n",
        "\n",
        "# Convert to more efficient types\n",
        "optimized = large_data.with_columns([\n",
        "    pl.col('category').cast(pl.Categorical).alias('category_cat'),\n",
        "    pl.col('value3').cast(pl.Int32).alias('value3_int32')\n",
        "])\n",
        "\n",
        "print(f\"   Optimized memory: {optimized.estimated_size() / (1024 * 1024):.2f} MB\")\n",
        "print(f\"   Memory saved: {(1 - optimized.estimated_size() / large_data.estimated_size()) * 100:.1f}%\")\n",
        "\n",
        "# Tip 4: View query plan\n",
        "print(\"\\n4. Query Plan Analysis:\")\n",
        "query = (large_data.lazy()\n",
        "    .filter(pl.col('value1') > 50)\n",
        "    .select(['category', 'value1', 'value2'])\n",
        "    .group_by('category')\n",
        "    .agg([pl.col('value1').mean(), pl.col('value2').sum()])\n",
        "    .sort('category')\n",
        ")\n",
        "\n",
        "print(\"   Query plan:\")\n",
        "print(query.explain())\n"
      ]
    },
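    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Beyond `explain()`, lazy queries can also be profiled. A minimal sketch with `LazyFrame.profile()`, which executes the query and returns the result together with per-node timings:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# profile() runs the query and returns (result, timings)\n",
        "profile_result, timings = query.profile()\n",
        "print(timings)\n"
      ]
    },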
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 18. Working with Parquet and CSV Efficiently\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create sample data\n",
        "sample_data = pl.DataFrame({\n",
        "    'id': range(1, 10001),\n",
        "    'name': [f'Product_{i}' for i in range(1, 10001)],\n",
        "    'category': np.random.choice(['A', 'B', 'C', 'D', 'E'], 10000),\n",
        "    'price': np.random.uniform(10, 1000, 10000),\n",
        "    'quantity': np.random.randint(1, 100, 10000)\n",
        "})\n",
        "\n",
        "print(\"Sample data created:\")\n",
        "print(f\"Shape: {sample_data.shape}\")\n",
        "\n",
        "# Save to different formats\n",
        "print(\"\\n1. Saving to different formats:\")\n",
        "sample_data.write_csv('data/sample_output.csv')\n",
        "print(\"   ✓ Saved to CSV\")\n",
        "\n",
        "sample_data.write_parquet('data/sample_output.parquet')\n",
        "print(\"   ✓ Saved to Parquet\")\n",
        "\n",
        "# Read with lazy evaluation (more efficient for large files)\n",
        "print(\"\\n2. Reading with lazy evaluation:\")\n",
        "try:\n",
        "    lazy_csv = pl.scan_csv('data/sample_output.csv')\n",
        "    print(\"   ✓ Lazy CSV reader created\")\n",
        "    print(f\"   Query plan: {lazy_csv.explain()}\")\n",
        "    \n",
        "    lazy_parquet = pl.scan_parquet('data/sample_output.parquet')\n",
        "    print(\"\\n   ✓ Lazy Parquet reader created\")\n",
        "    print(f\"   Query plan: {lazy_parquet.explain()}\")\n",
        "    \n",
        "    # Process without loading full file\n",
        "    result = (lazy_csv\n",
        "        .filter(pl.col('price') > 500)\n",
        "        .group_by('category')\n",
        "        .agg([pl.col('price').mean(), pl.count()])\n",
        "        .collect()\n",
        "    )\n",
        "    print(\"\\n   Processed data without loading full file:\")\n",
        "    print(result)\n",
        "except Exception as e:\n",
        "    print(f\"   Error: {e}\")\n",
        "\n",
        "# Compare file sizes\n",
        "import os\n",
        "if os.path.exists('data/sample_output.csv'):\n",
        "    csv_size = os.path.getsize('data/sample_output.csv') / 1024\n",
        "    print(f\"\\n3. File size comparison:\")\n",
        "    print(f\"   CSV size: {csv_size:.2f} KB\")\n",
        "\n",
        "if os.path.exists('data/sample_output.parquet'):\n",
        "    parquet_size = os.path.getsize('data/sample_output.parquet') / 1024\n",
        "    print(f\"   Parquet size: {parquet_size:.2f} KB\")\n",
        "    if csv_size > 0:\n",
        "        print(f\"   Compression ratio: {csv_size / parquet_size:.2f}x smaller\")\n"
      ]
    },
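    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For results that are too large to hold in memory, a lazy query can be streamed straight to disk. A minimal sketch with `sink_parquet` (assuming the query is supported by the streaming engine):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Stream a filtered scan directly to a new Parquet file without\n",
        "# materializing the full result in memory\n",
        "(pl.scan_csv('data/sample_output.csv')\n",
        "    .filter(pl.col('price') > 500)\n",
        "    .sink_parquet('data/filtered_output.parquet'))\n",
        "\n",
        "print(\"Wrote data/filtered_output.parquet\")\n"
      ]
    },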
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 19. Combining Multiple DataFrames (Concat, Union)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create multiple DataFrames\n",
        "df1 = pl.DataFrame({\n",
        "    'id': [1, 2, 3],\n",
        "    'name': ['Alice', 'Bob', 'Charlie'],\n",
        "    'age': [25, 30, 35]\n",
        "})\n",
        "\n",
        "df2 = pl.DataFrame({\n",
        "    'id': [4, 5, 6],\n",
        "    'name': ['David', 'Eve', 'Frank'],\n",
        "    'age': [28, 32, 27]\n",
        "})\n",
        "\n",
        "df3 = pl.DataFrame({\n",
        "    'id': [7, 8, 9],\n",
        "    'name': ['Grace', 'Henry', 'Ivy'],\n",
        "    'age': [29, 31, 26]\n",
        "})\n",
        "\n",
        "print(\"DataFrame 1:\")\n",
        "print(df1)\n",
        "print(\"\\nDataFrame 2:\")\n",
        "print(df2)\n",
        "print(\"\\nDataFrame 3:\")\n",
        "print(df3)\n",
        "\n",
        "# Concatenate vertically\n",
        "concatenated = pl.concat([df1, df2, df3])\n",
        "print(\"\\n1. Concatenated (vertical):\")\n",
        "print(concatenated)\n",
        "\n",
        "# Concatenate horizontally (must have same number of rows)\n",
        "df4 = pl.DataFrame({\n",
        "    'salary': [50000, 60000, 70000],\n",
        "    'department': ['IT', 'HR', 'Finance']\n",
        "})\n",
        "\n",
        "horizontal = pl.concat([df1, df4], how='horizontal')\n",
        "print(\"\\n2. Concatenated (horizontal):\")\n",
        "print(horizontal)\n",
        "\n",
        "# Union (combines and removes duplicates)\n",
        "df5 = pl.DataFrame({\n",
        "    'id': [1, 2, 10],\n",
        "    'name': ['Alice', 'Bob', 'Jack'],\n",
        "    'age': [25, 30, 33]\n",
        "})\n",
        "\n",
        "union_result = pl.concat([df1, df5]).unique()\n",
        "print(\"\\n3. Union (with duplicates removed):\")\n",
        "print(union_result)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 20. Summary and Best Practices\n",
        "\n",
        "This notebook covered many advanced Polars features:\n",
        "\n",
        "### Key Features Demonstrated:\n",
        "1. ✅ Complex Window Functions\n",
        "2. ✅ Conditional Aggregations\n",
        "3. ✅ Pivot Operations\n",
        "4. ✅ Complex Joins (Inner, Left, Right, Outer, Anti, Semi)\n",
        "5. ✅ Query Optimization with Explain\n",
        "6. ✅ Time Series Operations\n",
        "7. ✅ Missing Data Handling\n",
        "8. ✅ Advanced String Operations\n",
        "9. ✅ Type Conversions\n",
        "10. ✅ Reshaping Operations (Melt, Pivot)\n",
        "11. ✅ Complex Expressions and Chaining\n",
        "12. ✅ Data Validation\n",
        "13. ✅ Advanced Aggregations\n",
        "14. ✅ Nested Data (Struct and List)\n",
        "15. ✅ Real-World E-Commerce Analytics\n",
        "16. ✅ Performance Optimization\n",
        "17. ✅ File I/O (CSV, Parquet)\n",
        "18. ✅ Combining DataFrames\n",
        "\n",
        "### Best Practices:\n",
        "- **Use lazy evaluation** for complex queries and large datasets\n",
        "- **Select columns early** to reduce memory usage\n",
        "- **Use appropriate data types** (Categorical for strings, Int32 vs Int64)\n",
        "- **Leverage query optimization** by using `.explain()` to understand query plans\n",
        "- **Use Parquet format** for better compression and performance\n",
        "- **Chain operations** for better readability and optimization\n",
        "- **Validate data** early in your pipeline\n",
        "- **Use window functions** instead of loops for time series operations\n",
        "\n",
        "### Performance Tips:\n",
        "- Polars is optimized for columnar operations\n",
        "- Lazy evaluation allows query optimization\n",
        "- Use `.collect()` only when you need the results\n",
        "- Prefer Polars operations over Python loops\n",
        "- Use appropriate join types for your use case\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Complex query with lazy evaluation\n",
        "optimized_query = (sales_data.lazy()\n",
        "    .filter(pl.col('sales') > 500)\n",
        "    .join(products.lazy(), on='product', how='left')\n",
        "    .join(regions.lazy(), on='region', how='left')\n",
        "    .group_by(['region', 'category'])\n",
        "    .agg([\n",
        "        pl.col('sales').sum().alias('total_sales'),\n",
        "        pl.col('sales').mean().alias('avg_sales'),\n",
        "        pl.count().alias('transaction_count')\n",
        "    ])\n",
        "    .sort('total_sales', descending=True)\n",
        ")\n",
        "\n",
        "# View optimized query plan\n",
        "print(\"Optimized Query Plan:\")\n",
        "print(optimized_query.explain())\n",
        "\n",
        "# Execute\n",
        "result = optimized_query.collect()\n",
        "result\n"
      ]
    }
  ],
  "metadata": {
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}