📊 NLP Text Analysis Bot

Advanced Natural Language Processing bot for comprehensive text analysis with Python

📖 Project Overview

NLP Text Analysis Bot is a feature-rich Python application for comprehensive text analysis using Natural Language Processing. It offers text preprocessing, sentiment analysis, entity recognition, semantic understanding, text summarization, language detection, emotion detection, readability analysis, text classification, POS tagging, text similarity, advanced keyword extraction, and a Flask web interface. It is well suited to developers who want to build NLP applications or integrate text analysis into their projects.

⚡ Quick Facts

🎯 Type: Python NLP Text Analysis Application
🤖 AI Features: NLP, Sentiment Analysis, Entity Recognition, Text Summarization
⚡ Setup Time: ~10 minutes
📦 Dependencies: 15+ Python packages
🎨 Web Framework: Flask 2.3.0+
🧠 NLP Libraries: NLTK, spaCy, Transformers
📱 Responsive: Mobile, Tablet, Desktop
🌐 Languages: Multi-language Detection

25+ Files · 4000+ Lines of Code · 12+ NLP Features · 15+ Python Modules

Developer: Molla Samser | Website: rskworld.in | Email: help@rskworld.in
Difficulty Level: Intermediate - Perfect for developers learning Python, natural language processing, and text analysis.

✨ Features

Core Features

🧹 Text Preprocessing

Cleaning, tokenization, stopword removal, and lemmatization for clean text analysis.
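
The cleaning, tokenization, and stopword-removal steps above can be sketched with the standard library alone. The version below is deliberately minimal: the real pipeline would use NLTK's tokenizer, its full English stopword corpus, and a WordNetLemmatizer, while the tiny stopword set here is purely illustrative.

```python
import re

# Tiny illustrative stopword list; the real pipeline loads NLTK's full
# English stopword corpus and also lemmatizes the surviving tokens.
STOPWORDS = {"the", "a", "an", "is", "it", "and", "to", "of", "in", "for"}

def preprocess(text):
    """Lowercase, strip punctuation, tokenize, and drop stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("The product is great, and it works in any setting!"))
```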

😊 Sentiment Analysis

Multi-method sentiment detection using VADER and transformer models for accurate sentiment classification.
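
VADER is, at its core, a weighted sentiment lexicon. The toy scorer below illustrates that idea without any model download; the six-word lexicon and its weights are invented for the demo (the real module would call nltk.sentiment.vader.SentimentIntensityAnalyzer).

```python
import re

# Toy lexicon illustrating the idea behind VADER; the entries and
# weights here are invented for the demo, not VADER's real values.
LEXICON = {"love": 2.0, "amazing": 2.5, "great": 1.5,
           "terrible": -2.5, "hate": -2.0, "broken": -1.5}

def sentiment(text):
    """Sum word weights and map the total to a coarse label."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(LEXICON.get(w, 0.0) for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

print(sentiment("I love this amazing product"))
```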

๐Ÿ“ Entity Recognition

Named entity extraction using spaCy to identify people, organizations, locations, and more.

๐Ÿง  Semantic Understanding

Keyword extraction, topic identification, and phrase analysis for deep text understanding.

Advanced Features

๐Ÿ“„ Text Summarization

Extractive and abstractive summarization using transformer models for concise summaries.
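
The extractive side can be illustrated in a few lines: score each sentence by how frequent its words are in the whole text and keep the top scorers. This is a minimal sketch of the technique, not the project's implementation, which would use transformer models for the abstractive side.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Keep the n sentences whose words are most frequent overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the selected sentences in their original order
    return " ".join(s for s in sentences if s in top)

text = "Python is great. Python powers NLP. The weather is mild."
print(extractive_summary(text))
```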

๐ŸŒ Language Detection

Automatic language detection with confidence scores for multi-language text analysis.
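
One classic lightweight approach to language detection is stopword-profile overlap, sketched below. The actual module likely relies on a dedicated detection library; the three tiny profiles here are illustrative only.

```python
import re

# Tiny, illustrative stopword profiles; a real detector uses much larger
# character- or word-frequency models per language.
PROFILES = {
    "en": {"the", "and", "is", "of", "to", "it"},
    "fr": {"le", "la", "et", "est", "vous", "un"},
    "es": {"el", "la", "y", "es", "de", "un"},
}

def detect_language(text):
    """Pick the language whose stopword profile overlaps the text most."""
    words = set(re.findall(r"[a-zàâçéèêëîïôûù]+", text.lower()))
    scores = {lang: len(words & profile) for lang, profile in PROFILES.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return {"language": best, "confidence": scores[best] / total}

print(detect_language("La maison est belle et le jardin est grand"))
```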

๐Ÿท๏ธ Text Classification

Zero-shot text classification into multiple categories with confidence scores.

๐Ÿ˜ข Emotion Detection

Advanced emotion analysis beyond basic sentiment (joy, sadness, anger, fear, etc.).

๐Ÿ“Š Readability Analysis

Multiple readability metrics (Flesch, SMOG, Coleman-Liau, etc.) and grade levels.
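
The Flesch Reading Ease score mentioned above is a simple formula: 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words). Here is a self-contained sketch with a deliberately crude vowel-group syllable counter; real readability libraries count syllables more carefully.

```python
import re

def count_syllables(word):
    """Crude estimate: count groups of consecutive vowels (minimum 1)."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(text):
    """206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short, monosyllabic sentences score high; dense polysyllabic prose scores low
print(round(flesch_reading_ease("The cat sat on the mat."), 1))
```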

🔑 Advanced Keywords

TF-IDF based keyword extraction with n-grams for important term identification.

📝 POS Tagging

Complete part-of-speech analysis with distribution statistics and tagging.

🔗 Text Similarity

Calculate similarity between texts using multiple methods (cosine, Jaccard, etc.).
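
Both of the named methods fit in a few lines over bag-of-words representations. This is a minimal sketch of the two formulas; the project's module presumably works over TF-IDF vectors rather than raw counts.

```python
import math
from collections import Counter

def jaccard_similarity(a, b):
    """|A intersect B| / |A union B| over the two texts' word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cosine_similarity(a, b):
    """Cosine of the angle between bag-of-words count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(jaccard_similarity("i love python", "i love java"))  # 2 shared / 4 total
print(cosine_similarity("i love python", "i love java"))
```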

๐ŸŒ Flask Web Interface

Beautiful and modern Flask-based web application with responsive design and visualizations.

๐Ÿ“Š Data Visualization

Interactive charts and graphs for sentiment distribution, entity types, and analysis metrics.

๐Ÿ”„ Advanced Error Handling

Comprehensive error handling with user-friendly messages and graceful fallbacks.

๐Ÿ“š Well Documented

Complete documentation with examples, quick start guide, and detailed installation instructions.

๐Ÿ› ๏ธ Technologies

Python 3.8+

Modern Python programming language

Language

Flask 2.3.0+

Lightweight Python web framework

Framework

NLTK 3.6+

Natural Language Toolkit for text processing

NLP Library

spaCy 3.4.0+

Advanced NLP library for entity extraction

NLP Library

scikit-learn 1.0.0+

Machine learning library for NLP

ML Library

NumPy 1.21.0+

Numerical computing library

Library

Requests 2.28.0+

HTTP library for API calls

Library

📦 Installation Guide - Step by Step

⏱️ Installation Time: ~10 minutes

Follow these detailed steps to install and set up the NLP Text Analysis Bot on your system.

📋 Prerequisites

🐍 Python 3.8+

Check your Python version: python --version

Download: python.org

📦 pip Package Manager

Usually comes with Python. Verify: pip --version

💻 Terminal/Command Prompt

Windows: PowerShell or CMD
Linux/Mac: Terminal

🌐 Internet Connection

Required for downloading packages and NLP models

🚀 Step-by-Step Installation

📌 Step 1: Download or Clone the Project

Option A: Download ZIP

  1. Download the project ZIP file from the repository
  2. Extract it to your desired location (e.g., C:\Projects\nlp-text-analysis-bot or ~/Projects/nlp-text-analysis-bot)
  3. Open terminal/command prompt in the extracted folder

Option B: Clone with Git

git clone https://github.com/rskworld/nlp-text-analysis-bot.git
cd nlp-text-analysis-bot

✅ Step 2: Create Virtual Environment

Virtual environments isolate project dependencies and prevent conflicts.

Windows:

# Create virtual environment
python -m venv venv

# Activate virtual environment
venv\Scripts\activate

# You should see (venv) in your prompt

Linux/Mac:

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate

# You should see (venv) in your prompt

⚠️ Note: If you see an error, try python3 instead of python on Linux/Mac.

📦 Step 3: Install Python Dependencies

This will install all required packages including NLTK, spaCy, Flask, Transformers, etc.

# Upgrade pip first (recommended)
pip install --upgrade pip

# Install all dependencies
pip install -r requirements.txt

# This may take 5-10 minutes depending on your internet speed

⏳ Expected Time: 5-10 minutes. The installation will download large packages like PyTorch, Transformers, and spaCy models.

🧠 Step 4: Download spaCy English Model

Required for entity recognition and advanced NLP features.

# Download spaCy English model
python -m spacy download en_core_web_sm

# This downloads a ~50 MB model file

📚 Step 5: Download NLTK Data (Automatic)

NLTK data is usually downloaded automatically on first use, but you can download it manually:

# Download required NLTK data
python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet'); nltk.download('vader_lexicon'); nltk.download('averaged_perceptron_tagger')"

✅ Step 6: Verify Installation

Run the validation script to check if everything is installed correctly:

# Run validation script
python validate_project.py

# This will check all dependencies and models

🚀 Step 7: Run the Application

Start the Flask web interface:

# Start Flask web interface
python app.py

# You should see:
# * Running on http://127.0.0.1:5000
# Press CTRL+C to quit

๐ŸŒ Access the Application:

  • Open your web browser
  • Navigate to: http://localhost:5000 or http://127.0.0.1:5000
  • You should see the NLP Text Analysis Bot interface

✅ Installation Complete!

Congratulations! You've successfully installed the NLP Text Analysis Bot. The Flask web interface provides the best user experience with all advanced features including:

  • Text preprocessing and analysis
  • Sentiment analysis with visualizations
  • Entity recognition
  • Text summarization
  • Language detection
  • Emotion detection
  • Readability analysis
  • Interactive charts and graphs

Next Steps:

  1. Open http://localhost:5000 in your browser
  2. Enter some text in the input field
  3. Click "Analyze Text" to see comprehensive analysis results
  4. Explore all the features and visualizations!

🔧 Troubleshooting Installation

❌ ModuleNotFoundError

Solution: Make sure the virtual environment is activated, then run pip install -r requirements.txt again.

❌ spaCy Model Not Found

Solution: Run python -m spacy download en_core_web_sm

❌ NLTK Data Missing

Solution: Run the NLTK download command from Step 5, or let it download automatically on first use.

❌ Port Already in Use

Solution: Change the port in app.py or stop the process currently using port 5000.

📚 Usage Guide - Step by Step

🎯 Getting Started with the Web Interface

Step-by-Step Demo Instructions

📝 Step 1: Start the Application

  1. Open terminal/command prompt in the project directory
  2. Activate virtual environment (if not already activated):
    # Windows: venv\Scripts\activate
    # Linux/Mac: source venv/bin/activate
  3. Run the Flask application:
    python app.py
  4. Wait for the message: * Running on http://127.0.0.1:5000

๐ŸŒ Step 2: Open the Web Interface

  1. Open your web browser (Chrome, Firefox, Safari, or Edge)
  2. Navigate to: http://localhost:5000
  3. You should see the NLP Text Analysis Bot interface with:
    • Text input area
    • Analyze button
    • Results display area
    • Visualization charts section

📊 Step 3: Analyze Your First Text

  1. Enter Sample Text: Type or paste text in the input field. Example:
    "I love this product! It's amazing and works perfectly. The customer service is excellent too."
  2. Click the "Analyze Text" button
  3. Wait for Analysis: The system will process your text (takes 2-5 seconds)
  4. View Results: You'll see comprehensive analysis including:
    • 📈 Sentiment scores and distribution chart
    • 🏷️ Named entities (people, organizations, locations)
    • 📄 Text summary
    • 🌐 Detected language
    • 😊 Emotion analysis
    • 📊 Readability metrics
    • 🔑 Keywords and topics
    • 📋 POS tagging results

📈 Step 4: Explore Visualizations

The interface includes interactive charts and graphs:

  • Sentiment Distribution Chart: Pie or bar chart showing positive/negative/neutral sentiment
  • Entity Type Chart: Bar chart showing different entity types found
  • Emotion Distribution: Visual representation of detected emotions
  • Readability Metrics: Comparison charts for different readability scores
  • Keyword Frequency: Word cloud or bar chart of important keywords

🔄 Step 5: Try Different Text Types

Experiment with various text samples to see different analysis results:

  • Product Reviews: Analyze customer feedback sentiment
  • News Articles: Extract entities and summarize content
  • Social Media Posts: Detect emotions and sentiment
  • Technical Documents: Analyze readability and extract keywords
  • Multi-language Text: Test language detection

💻 Using as Python Module

You can also use the NLP pipeline programmatically in your Python code:

from nlp_pipeline import NLPPipeline

# Initialize pipeline
pipeline = NLPPipeline()

# Analyze text
results = pipeline.analyze("Your text here")

# Access results
print(results['sentiment'])
print(results['entities'])
print(results['summary'])

📊 Features Usage

🧹 Text Preprocessing

Automatic cleaning, tokenization, and normalization of input text

😊 Sentiment Analysis

Get sentiment scores (positive, negative, neutral) with confidence levels

🏷️ Entity Recognition

Automatically extracts people, organizations, locations, dates, and more

📄 Text Summarization

Generate concise summaries of long texts using extractive or abstractive methods

🌐 Language Detection

Automatically detects text language with confidence scores

😢 Emotion Detection

Identifies emotions like joy, sadness, anger, fear beyond basic sentiment

📊 Readability Analysis

Multiple readability metrics (Flesch, SMOG, Coleman-Liau) and grade levels

🔑 Keyword Extraction

TF-IDF based keyword extraction with n-grams for important terms

📊 Data Visualizations & Charts

The NLP Text Analysis Bot includes interactive charts and graphs to visualize analysis results. Here are examples of the visualizations you'll see:

📈 Sentiment Analysis Chart

This pie chart shows the distribution of sentiment in analyzed text (Positive, Negative, Neutral).

🏷️ Entity Type Distribution

Bar chart displaying different types of named entities found in the text (Person, Organization, Location, etc.).

😊 Emotion Distribution

Visual representation of detected emotions (joy, sadness, anger, fear, surprise, etc.).

📊 Readability Metrics Comparison

Comparison of different readability scores (Flesch, SMOG, Coleman-Liau, etc.).

💡 Interactive Charts: All charts in the web interface are interactive. You can hover over data points to see exact values, and charts update automatically when you analyze new text.

💻 Code Examples

Basic Python Usage

from nlp_pipeline import NLPPipeline

# Create pipeline instance
pipeline = NLPPipeline()

# Analyze text
text = "I love this product! It's amazing and works perfectly."
results = pipeline.analyze(text)

# Access results
print(results['sentiment'])
print(results['entities'])
print(results['summary'])

Advanced Features Usage

from nlp_pipeline import NLPPipeline

pipeline = NLPPipeline()

# Complete text analysis
text = "The new AI technology from OpenAI is revolutionizing how we work."
results = pipeline.analyze(text)

# Sentiment Analysis
print(f"Sentiment: {results['sentiment']['label']}")
print(f"Confidence: {results['sentiment']['confidence']}")

# Entity Recognition
for entity in results['entities']:
    print(f"{entity['text']} - {entity['label']}")

# Text Summarization
print(f"Summary: {results['summary']['text']}")

# Language Detection
print(f"Language: {results['language']['language']}")
print(f"Confidence: {results['language']['confidence']}")

# Emotion Detection
print(f"Primary Emotion: {results['emotion']['primary']}")

# Readability Analysis
print(f"Flesch Score: {results['readability']['flesch']}")
print(f"Grade Level: {results['readability']['grade_level']}")

Flask Web Interface

# Run Flask web interface
from app import app

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)

# Or simply run:
# python app.py

Individual Module Usage

# Sentiment Analysis
from sentiment_analysis import SentimentAnalyzer
analyzer = SentimentAnalyzer()
sentiment = analyzer.analyze("I'm feeling great today!")
print(sentiment)  # {'label': 'positive', 'score': 0.95}

# Entity Recognition
from entity_recognition import EntityRecognizer
recognizer = EntityRecognizer()
entities = recognizer.extract("Apple Inc. is located in Cupertino, California")
print(entities)  # [{'text': 'Apple Inc.', 'label': 'ORG'}, ...]

# Text Summarization
from text_summarization import TextSummarizer
summarizer = TextSummarizer()
summary = summarizer.summarize("Long text here...", max_length=100)
print(summary)

# Language Detection
from language_detection import LanguageDetector
detector = LanguageDetector()
lang = detector.detect("Bonjour, comment allez-vous?")
print(lang)  # {'language': 'fr', 'confidence': 0.99}

Configuration

# config.py
ENABLE_SENTIMENT_ANALYSIS = True
ENABLE_ENTITY_RECOGNITION = True
ENABLE_TEXT_SUMMARIZATION = True
ENABLE_EMOTION_DETECTION = True
ENABLE_READABILITY_ANALYSIS = True

# Access in code
from config import ENABLE_SENTIMENT_ANALYSIS
print(ENABLE_SENTIMENT_ANALYSIS)

🔗 API Endpoints

Flask Web API

The application provides REST API endpoints for text analysis through the Flask web framework.

Available API Endpoints

Endpoint          Method   Description
/api/analyze      POST     Complete text analysis with all features
/api/similarity   POST     Calculate similarity between two texts
/api/health       GET      Health check endpoint
/                 GET      Web interface homepage
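
A stripped-down sketch of how the analyze and health routes in the table might be wired up in Flask. The project's real handlers live in app.py and delegate to the NLP pipeline; the word_count field below is a stand-in for the full analysis payload.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/health")
def health():
    # Liveness probe: confirms the server is up
    return jsonify({"status": "ok"})

@app.route("/api/analyze", methods=["POST"])
def analyze():
    data = request.get_json(silent=True) or {}
    text = data.get("text", "")
    if not text:
        return jsonify({"error": "text field is required"}), 400
    # The real app would return NLPPipeline().analyze(text) here
    return jsonify({"original_text": text, "word_count": len(text.split())})

# Start with `python app.py`; the real app calls app.run(port=5000)
```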

API Usage Examples

# POST /api/analyze
import requests

url = "http://localhost:5000/api/analyze"
data = {"text": "I love this product! It's amazing."}

response = requests.post(url, json=data)
results = response.json()
print(results['sentiment'])
print(results['entities'])
print(results['summary'])

# POST /api/similarity
import requests

url = "http://localhost:5000/api/similarity"
data = {
    "text1": "I love Python programming",
    "text2": "Python is my favorite language"
}

response = requests.post(url, json=data)
similarity = response.json()
print(f"Similarity: {similarity['similarity']}")

Response Format

{
  "original_text": "...",
  "preprocessing": {...},
  "sentiment": {
    "label": "positive",
    "score": 0.95,
    "confidence": 0.92
  },
  "entities": [...],
  "semantic": {...},
  "summary": {...},
  "language": {...},
  "emotion": {...},
  "readability": {...}
}

โš™๏ธ Configuration

Configuration in this Python application is handled through:

Configuration File

Edit config.py file in the root directory:

# config.py DEFAULT_LANGUAGE = 'en' ENABLE_ANALYTICS = True ENABLE_SENTIMENT_ANALYSIS = True ENABLE_API_INTEGRATIONS = True # Optional API keys WEATHER_API_KEY = None NEWS_API_KEY = None

Note: API keys are optional. The bot works without them but with limited functionality.

Environment Variables

You can also use environment variables:

export DEFAULT_LANGUAGE=en
export ENABLE_ANALYTICS=true
export WEATHER_API_KEY=your_key_here
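
For the variables above to take effect, config.py would need to read them from the environment. A hedged sketch of that pattern follows; env_flag is a hypothetical helper written for illustration, not part of the project's documented API.

```python
import os

def env_flag(name, default=False):
    """Interpret an environment variable like ENABLE_ANALYTICS as a boolean."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}

# config.py could then fall back to sensible defaults:
DEFAULT_LANGUAGE = os.environ.get("DEFAULT_LANGUAGE", "en")
ENABLE_ANALYTICS = env_flag("ENABLE_ANALYTICS", default=True)
WEATHER_API_KEY = os.environ.get("WEATHER_API_KEY")  # optional, may be None
```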

Runtime Configuration

Configure settings programmatically:

  • Language: Set DEFAULT_LANGUAGE in config.py to change the default language
  • Analytics: Enable/disable through config.py
  • API Integrations: Configure optional API keys in config.py
  • Feature Toggles: Enable or disable individual analysis modules in config.py

Configuration changes require restarting the application.

Web Interface Configuration

The Flask web interface can be configured in app.py. Default port is 5000, but you can change it in the run configuration.

๐Ÿ“ Project Structure

nlp-text-analysis-bot
nlp-text-analysis-bot/
โ”œโ”€โ”€ app.py                     # Flask web interface
โ”œโ”€โ”€ nlp_pipeline.py            # Main NLP pipeline orchestrator
โ”œโ”€โ”€ text_preprocessing.py      # Text cleaning and preprocessing
โ”œโ”€โ”€ sentiment_analysis.py       # Sentiment analysis module
โ”œโ”€โ”€ entity_recognition.py      # Named entity recognition
โ”œโ”€โ”€ semantic_understanding.py  # Semantic analysis module
โ”œโ”€โ”€ text_summarization.py      # Text summarization module
โ”œโ”€โ”€ language_detection.py      # Language detection module
โ”œโ”€โ”€ text_classification.py     # Text classification module
โ”œโ”€โ”€ emotion_detection.py       # Emotion detection module
โ”œโ”€โ”€ readability_analysis.py   # Readability metrics module
โ”œโ”€โ”€ advanced_keywords.py       # TF-IDF keyword extraction
โ”œโ”€โ”€ pos_tagging.py            # Part-of-speech tagging
โ”œโ”€โ”€ text_similarity.py         # Text similarity calculation
โ”œโ”€โ”€ config.py                  # Configuration
โ”œโ”€โ”€ example_usage.py           # Usage examples
โ”œโ”€โ”€ test_analysis.py           # Test suite
โ”œโ”€โ”€ validate_project.py        # Project validation script
โ”œโ”€โ”€ templates/
โ”‚   โ””โ”€โ”€ index.html            # Web interface template
โ”œโ”€โ”€ static/
โ”‚   โ”œโ”€โ”€ css/
โ”‚   โ”‚   โ””โ”€โ”€ style.css         # Custom styles
โ”‚   โ””โ”€โ”€ js/
โ”‚       โ””โ”€โ”€ main.js           # JavaScript functions
โ”œโ”€โ”€ requirements.txt          # Dependencies
โ”œโ”€โ”€ README.md                 # Documentation
โ”œโ”€โ”€ QUICKSTART.md             # Quick start guide
โ”œโ”€โ”€ ADVANCED_FEATURES.md      # Advanced features guide
โ”œโ”€โ”€ LICENSE                    # License file
โ””โ”€โ”€ .gitignore                 # Git ignore

Note: Edit config.py to customize settings and optional API keys.

📄 Detailed File Descriptions

🧠 nlp_pipeline.py

Purpose: Main NLP pipeline orchestrator. Coordinates all NLP modules for comprehensive text analysis.

Key Features:

  • Main pipeline class
  • Orchestrates all NLP modules
  • End-to-end text analysis
  • Error handling and fallbacks
  • Result aggregation
  • Performance optimization

🧹 text_preprocessing.py

Purpose: Text preprocessing module. Cleans, tokenizes, and normalizes text for analysis.

Key Features:

  • Text cleaning
  • Tokenization
  • Stopword removal
  • Lemmatization

😊 sentiment_analysis.py

Purpose: Sentiment analysis module. Multi-method sentiment detection using VADER and transformers.

Key Features:

  • VADER sentiment analysis
  • Transformer-based analysis
  • Sentiment classification
  • Confidence scoring

๐Ÿท๏ธ entity_recognition.py

Purpose: Named entity recognition module. Extracts entities using spaCy.

Key Features:

  • Named entity extraction
  • Entity type classification
  • Location, person, organization detection
  • Date and time extraction

🧠 semantic_understanding.py

Purpose: Semantic analysis module. Keyword extraction and topic identification.

Key Features:

  • Keyword extraction
  • Topic identification
  • Phrase analysis
  • Semantic relationships

📄 text_summarization.py

Purpose: Text summarization module. Generates concise summaries using extractive and abstractive methods.

Key Features:

  • Extractive summarization
  • Abstractive summarization
  • Summary length control
  • Transformer-based models

๐ŸŒ language_detection.py

Purpose: Language detection module. Automatically detects text language with confidence scores.

Key Features:

  • Automatic language detection
  • Confidence scoring
  • Multi-language support
  • Language probability distribution

๐Ÿท๏ธ text_classification.py

Purpose: Text classification module. Zero-shot classification into multiple categories.

Key Features:

  • Zero-shot classification
  • Category assignment
  • Confidence scores
  • Custom categories

😢 emotion_detection.py

Purpose: Emotion detection module. Advanced emotion analysis beyond basic sentiment.

Key Features:

  • Emotion classification
  • Emotion distribution
  • Primary emotion detection
  • Emotion intensity scoring

📊 readability_analysis.py

Purpose: Readability analysis module. Multiple readability metrics and grade levels.

Key Features:

  • Flesch reading ease
  • SMOG index
  • Coleman-Liau index
  • Grade level calculation

โš™๏ธ config.py

Purpose: Configuration module. Contains settings, constants, and configuration options.

Key Features:

  • Application settings
  • Default configurations
  • API key management
  • Feature toggles

๐ŸŒ app.py

Purpose: Flask web interface application. Provides web-based interface for the chatbot.

Key Features:

  • Flask web server
  • REST API endpoints
  • Web interface routes
  • Template rendering

📦 requirements.txt

Purpose: Python dependencies list. Contains all required packages and versions.

Key Features:

  • Dependency management
  • Version specifications
  • Package listings
  • Installation instructions

📖 README.md

Purpose: Project overview and quick start guide. Provides introduction, features, installation instructions, and usage examples.

Contents:

  • Project description
  • Features list
  • Installation guide
  • Usage instructions
  • Project structure
  • Support information

๐Ÿ“ RELEASE_NOTES.md

Purpose: Release notes. Documents features, changes, and updates in the current version.

Contents:

  • Release information
  • Feature list
  • Technical details
  • Changelog

โš–๏ธ LICENSE

Purpose: MIT License file. Contains the full MIT License text and copyright information.

License Type: MIT License

Copyright: ยฉ 2026 RSK World

🚫 .gitignore

Purpose: Git ignore rules. Specifies files and directories that should not be tracked by version control.

Excluded Items:

  • .env files (API keys)
  • node_modules/ directory
  • build/ directory
  • IDE configuration files
  • OS-specific files
  • Log files

📊 File Statistics

15+ Total Files · 15+ Python Modules · 5 Documentation Files · 10+ Core Modules · 3000+ Lines of Code · 4 Directories

📌 File Organization

Core NLP Modules: nlp_pipeline.py, text_preprocessing.py, sentiment_analysis.py, entity_recognition.py, semantic_understanding.py

Advanced Features: text_summarization.py, language_detection.py, emotion_detection.py, readability_analysis.py, text_classification.py, pos_tagging.py, text_similarity.py, advanced_keywords.py

Documentation: README.md, QUICKSTART.md, ADVANCED_FEATURES.md, LICENSE

Configuration: config.py, requirements.txt, .gitignore

Web Interface: app.py, templates/index.html, static/css/style.css, static/js/main.js

🚀 Advanced Features Details

1. Multi-Method Sentiment Analysis

Sentiment is detected with both VADER and transformer models, producing positive, negative, and neutral scores with confidence levels so results can be cross-checked between methods.

2. Entity Recognition & Semantic Understanding

spaCy-based named entity recognition identifies people, organizations, locations, and dates, while the semantic module extracts keywords, topics, and key phrases from the text.

3. Text Summarization

Both extractive and abstractive summarization are supported, using transformer models to produce concise summaries with controllable length.

4. Multi-Language Detection

Automatic language detection with confidence scores supports multi-language text analysis, so input does not need to be tagged with its language in advance.

5. Emotion Detection & Classification

Emotion analysis goes beyond basic sentiment to detect joy, sadness, anger, fear, and more, while zero-shot classification assigns texts to custom categories with confidence scores.

6. Readability & POS Analysis

Multiple readability metrics (Flesch, SMOG, Coleman-Liau) with grade levels, plus complete part-of-speech tagging with distribution statistics.

7. Flask Web Interface

Beautiful Flask-based web interface with responsive design, interactive charts for sentiment, entities, emotions, and readability, and REST API endpoints.

8. Modular & Extensible

Well-organized modular design makes it easy to extend with new features, integrations, and customizations. Each module is independent and can be modified or extended easily.

🔧 Troubleshooting

Installation Issues

  • Make sure you're using Python 3.8 or higher: python --version
  • Install all dependencies: pip install -r requirements.txt
  • Use virtual environment: python -m venv venv then activate it
  • If pip install fails, try: pip install --upgrade pip first

NLTK Data Issues

  • Download required NLTK data: python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords')"
  • If NLTK download fails, check internet connection
  • NLTK data is downloaded automatically on first use

Import Errors

  • Make sure you're in the project directory
  • Activate virtual environment before running
  • Check that all dependencies are installed: pip list
  • If module not found, reinstall: pip install -r requirements.txt --force-reinstall

Common Issues

  • Module not found errors: Run pip install -r requirements.txt to install all dependencies
  • Port already in use: The default Flask port is 5000. Change it in app.py or set PORT environment variable
  • Virtual environment issues: Make sure virtual environment is activated before running
  • API integration errors: API keys are optional. The application works without them but with limited functionality

📋 Requirements

numpy>=1.21.0
scikit-learn>=1.0.0
nltk>=3.6
spacy>=3.4.0
python-dateutil>=2.8.2
colorama>=0.4.4
setuptools>=65.0.0
flask>=2.3.0
requests>=2.28.0

See requirements.txt for the complete list of dependencies.

Python Version: Python 3.8 or higher required.

🎯 Use Cases

💻 Development

Integrate text analysis into applications via the Python API or REST endpoints

📚 Education

Learn NLP techniques from working, modular examples

💼 Business

Analyze customer reviews and feedback sentiment

✏️ Content Analysis

Summarize articles and check readability of written content

💬 Social Media

Detect sentiment and emotions in posts and comments

🌐 Multi-Language

Analyze text in multiple languages with automatic detection

💬 Support

For support, questions, or more projects, contact Molla Samser at help@rskworld.in or visit rskworld.in.

📄 License

This project is provided as-is for educational and development purposes.

MIT License - See LICENSE file for details.

📂 Demo Folder Structure

The demo/ folder contains demonstration and documentation files for this project.

demo/
├── index.html   # This documentation page
├── demo.html    # Interactive demo page
├── style.css    # Stylesheet (optional, styles are inline)
└── script.js    # JavaScript (optional, can be added for interactivity)

Try Interactive Demo

📄 Demo Files Description

📄 index.html

Purpose: Comprehensive project documentation and information page. This HTML file contains all details about the NLP Text Analysis Bot project.

Contents:

  • Complete project overview
  • All features documentation
  • Installation instructions
  • Usage examples
  • Code examples
  • API integration reference
  • Configuration details
  • Detailed file and folder descriptions
  • Project structure
  • Troubleshooting guide
  • Support information

Features:

  • Self-contained HTML with inline CSS
  • Responsive design
  • Modern, beautiful UI
  • Well-organized sections
  • Easy navigation

🎨 style.css

Purpose: External stylesheet file (optional). Currently, all styles are embedded inline in index.html, but this file can be used for additional custom styles if needed.

Status: Empty file - can be used for custom styling

Usage: Add custom CSS styles here if you want to override or extend the inline styles in index.html

📜 script.js

Purpose: External JavaScript file (optional). Can be used to add interactive features to the documentation page.

Status: Empty file - can be used for additional functionality

Potential Uses:

  • Table of contents navigation
  • Smooth scrolling
  • Search functionality
  • Copy code snippets
  • Theme toggle
  • Print functionality
  • Interactive elements

🎮 demo.html

Purpose: Interactive demo page showcasing the NLP Text Analysis Bot features in action.

Features:

  • Live text analysis interface
  • Sentiment analysis display
  • Entity recognition and keyword extraction
  • Emotion and readability results
  • Real-time statistics and analytics
  • Quick action buttons
  • Beautiful, responsive UI

Usage: Open demo.html in your browser to try the interactive demo!

💡 About the Demo Folder

The demo/ folder is separate from the main nlp-text-analysis-bot/ project folder. It contains:

  • Documentation: This comprehensive HTML documentation page that explains the entire project
  • Interactive Demo: demo.html - a live interactive demo showcasing the text analysis features
  • Styling: Optional CSS file for custom styling
  • Scripts: Optional JavaScript file for enhanced interactivity

Note: To view this documentation, simply open demo/index.html in any web browser. To try the interactive demo, open demo.html. Both pages are self-contained and work offline.