Enterprise AI Analysis: Classification of Exoplanetary Light Curves Using Artificial Intelligence

AI in Astrophysics

Unlocking the Cosmos: AI-Driven Stellar Classification

This paper presents a novel approach for classifying exoplanetary light curves using a Bagging-Performance Approach Neural Network (BAPANN). The model achieves high accuracy (up to 97%) across diverse datasets, demonstrating robust performance against noise and efficient learning. It successfully categorizes 9 types of stellar variability, outperforming existing state-of-the-art methods.

In this article, we propose a robust star classification methodology leveraging light curves collected from 15 datasets within the Kepler field in the visible optical spectrum. By employing a Bagging neural network ensemble approach, specifically a Bagging-Performance Approach Neural Network (BAPANN) that integrates three supervised neural network architectures, we successfully classified 760 light-curve samples representing 9 types of stars. Our method achieved a classification accuracy of up to 97% using light curve datasets containing 13, 20, 50, 150, and 450 points per star. The BAPANN reached a minimum error rate of 0.1559 and exhibited efficient learning, requiring an average of 29 epochs. Additionally, nine types of stellar variability were classified across 45 tests, taking into account error margins of 0, 5, and 10 on the light curve samples. These results highlight the BAPANN model's robustness against uncertainty and its ability to converge quickly in terms of the iterations needed for learning, training, and validation.

Keywords: light curve, optical data, star classification, Kepler field, neural network, BAPANN, stellar variability, exoplanetary

Executive Impact & Key Metrics

Our AI-driven classification model delivers unparalleled accuracy and efficiency, setting new benchmarks for astronomical data analysis.

Up to 97% Classification Accuracy
0.1559 Minimum Error Rate
29 Epochs Average Learning Convergence
9 Stellar Types Classified

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Robust Star Classification

The BAPANN model, an ensemble of three neural network architectures, achieves high accuracy in classifying 760 samples across 9 star types. It is robust to noise and uncertainty and converges quickly, requiring an average of 29 epochs; a minimal code sketch of the ensemble follows the list below.

  • Uses a bagging neural network ensemble (BAPANN)
  • Integrates three supervised neural network architectures
  • Classified 760 samples across 9 star types
  • Achieved up to 97% classification accuracy
  • Minimum error rate of 0.1559
  • Efficient learning with an average of 29 epochs
  • Robust to noise and uncertainty, with fast convergence
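
The ensemble logic above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn and a placeholder feature matrix `X` (17 statistics per star) with label vector `y` (9 classes); the bootstrap resampling, the three architectures, and the vote mirror the described approach.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: 760 light-curve samples x 17 statistics,
# with labels for 9 stellar-variability classes (values are placeholders).
rng = np.random.default_rng(0)
X = rng.random((760, 17))
y = rng.integers(0, 9, size=760)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Three backpropagation (MLP) architectures: 17-5-9, 17-10-9, 17-15-9.
members = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=500, random_state=0)
           for h in (5, 10, 15)]

# Bagging: each member trains on a bootstrap resample of the training set.
for clf in members:
    idx = rng.integers(0, len(X_train), size=len(X_train))
    clf.fit(X_train[idx], y_train[idx])

# Performance approach: keep the member with the best held-out accuracy,
# or combine all three by majority vote.
scores = [clf.score(X_test, y_test) for clf in members]
best = members[int(np.argmax(scores))]
print("Member accuracies:", scores, "best architecture:", best.hidden_layer_sizes)

votes = np.stack([clf.predict(X_test) for clf in members])
majority = np.apply_along_axis(lambda v: np.bincount(v, minlength=9).argmax(), 0, votes)
print("Majority-vote accuracy:", (majority == y_test).mean())
```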

Kepler Field Light Curves

Light curves from 15 Kepler field datasets (visible optical spectrum) are used. Data preprocessing includes normalization and feature selection, reducing the 27 initial statistics to 17 discriminant ones for optimal training; a preprocessing sketch follows the list below.

  • 15 datasets from Kepler field (visible optical spectrum)
  • Contains 760 samples covering 9 star types
  • Light curves with 13, 20, 50, 150, and 450 points per star
  • Data normalized between 0 and 1 for neural networks
  • Feature selection reduced 27 statistics to 17 discriminant ones
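
As a concrete illustration of this preprocessing, the sketch below computes a handful of parametric statistics per light curve and scales them to [0, 1]. It assumes NumPy, SciPy, and scikit-learn and uses synthetic placeholder curves; the seven statistics shown are examples, not the paper's full set of 27.

```python
import numpy as np
from scipy import stats as sps
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(3)

def light_curve_statistics(flux):
    """A few example parametric statistics for one curve; the paper's
    full set of 27 statistics is defined there, not here."""
    return [flux.mean(), flux.std(), flux.min(), flux.max(),
            np.median(flux), sps.skew(flux), sps.kurtosis(flux)]

# Hypothetical sample: 760 stars, each observed as a 50-point light curve.
curves = 1.0 + 0.01 * rng.standard_normal((760, 50))
features = np.array([light_curve_statistics(c) for c in curves])

# Normalize every statistic to the [0, 1] range across the sample, as the
# neural networks expect.
scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(features)
print(scaled.shape, scaled.min(), scaled.max())
```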

Optimized BAPANN Configurations

The BAPANN ensemble uses three multilayer perceptron architectures with 5, 10, and 15 hidden nodes. The optimal architecture is the one with the highest classification percentage and the lowest performance (error) value, ensuring stable learning; the selection rule is sketched after the list below.

  • Three multilayer perceptron architectures
  • Configurations: 17-5-9, 17-10-9, 17-15-9 (input-hidden-output nodes)
  • Backpropagation learning algorithm
  • Optimal architecture selected based on highest accuracy and lowest error
  • Careful hidden-node selection prevents overtraining
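
The selection rule, highest classification percentage with the lowest performance (error) value as tie-breaker, can be expressed directly. The run summaries below are placeholder numbers, not results from the paper.

```python
from dataclasses import dataclass

# Hypothetical summary of one training run per architecture; the numbers
# below are placeholders, not results from the paper.
@dataclass
class RunResult:
    hidden_nodes: int    # 5, 10, or 15 (17-h-9 topology)
    accuracy_pct: float  # classification percentage on validation data
    performance: float   # final error / loss value
    epochs: int          # epochs needed to converge

runs = [
    RunResult(hidden_nodes=5,  accuracy_pct=95.2, performance=0.21, epochs=34),
    RunResult(hidden_nodes=10, accuracy_pct=97.1, performance=0.17, epochs=28),
    RunResult(hidden_nodes=15, accuracy_pct=96.0, performance=0.19, epochs=31),
]

# Selection rule: highest classification percentage first, with the lowest
# performance (error) value as the tie-breaker.
best = max(runs, key=lambda r: (r.accuracy_pct, -r.performance))
print(f"Optimal topology: 17-{best.hidden_nodes}-9 "
      f"({best.accuracy_pct}% accuracy, error {best.performance})")
```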

Enterprise Process Flow

Photometric Database
→ Source Selection & Comparison with KIC
→ Light Curves
→ 27 Parametric Statistics
→ Standardization of Statistics
→ Bagging Approach (BAPANN):
    Backpropagation Architecture 1 → Classification 1
    Backpropagation Architecture 2 → Classification 2
    Backpropagation Architecture 3 → Classification 3
→ Evaluation & Comparison of Rankings
→ Cataloging & UV Quantification
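
Read end to end, the flow maps naturally onto a standard machine-learning pipeline. The sketch below is an illustrative stand-in rather than the paper's code: it assumes scikit-learn, uses placeholder data for the 27 statistics, and replaces the custom bagging logic with a hard-voting ensemble over the three backpropagation architectures.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder stand-in for the 27 parametric statistics of 760 sources.
rng = np.random.default_rng(4)
X = rng.random((760, 27))
y = rng.integers(0, 9, size=760)

pipeline = Pipeline([
    # "Standardization of Statistics"
    ("standardize", MinMaxScaler()),
    # "Bagging Approach (BAPANN)": three backpropagation architectures,
    # here combined with a hard vote instead of the paper's custom logic.
    ("ensemble", VotingClassifier(
        estimators=[(f"arch_{h}", MLPClassifier(hidden_layer_sizes=(h,),
                                                max_iter=500, random_state=0))
                    for h in (5, 10, 15)],
        voting="hard")),
])

# "Evaluation & Comparison of Rankings": cross-validated accuracy of the flow.
scores = cross_val_score(pipeline, X, y, cv=5)
print("Cross-validated accuracy:", scores.mean())
```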

BAPANN Performance vs. State-of-the-Art

Feature                    | Previous Methods (e.g., SVM, BNC)    | BAPANN (Our Work)
Accuracy                   | Up to 94.9% (meta assembly)          | Up to 97.8% (with 50 points, 10 hidden nodes)
Stars Classified           | Up to 6,818,181 (NUV)                | 760 (visible spectrum)
Input Features             | Up to 7,000 statistics, 2,000 points | 17 statistics, 50 points
Stellar Variability Groups | Up to 14 categories                  | 9 groups
Robustness to Noise        | Variable                             | High; robust against noise and uncertainty

Impact of Feature Selection

Initial analysis used 27 parametric statistics. However, specific non-discriminant features, identified through visual graphing (Figure 12), hindered neural network performance. Removing them and reducing the input to 17 discriminant statistics significantly improved classification accuracy to 97.8%. This highlights the critical role of informed feature engineering in achieving optimal AI model performance in astrophysics; a programmatic sketch of this selection step follows the list below.

  • Initial dataset contained 27 statistics, some non-discriminant.
  • Visual analysis helped identify and remove features that did not vary across samples.
  • Reducing features to 17 discriminant statistics improved training efficiency.
  • Resulted in a classification accuracy of 97.8% for 9 stellar variability types.
  • Demonstrates the importance of feature selection for AI model robustness.
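
A programmatic counterpart to the visual screening, assuming scikit-learn: a variance threshold flags statistics that barely vary across samples, which is the same notion of a non-discriminant feature. The data and the threshold below are placeholders.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Hypothetical 760 x 27 matrix of normalized statistics; columns 3 and 20
# are made (nearly) constant to mimic non-discriminant features.
rng = np.random.default_rng(2)
stats_27 = rng.random((760, 27))
stats_27[:, 3] = 0.5
stats_27[:, 20] = 0.5 + rng.normal(scale=1e-4, size=760)

# The paper identifies non-discriminant statistics by visual graphing; a
# programmatic proxy is a variance threshold that drops features which
# barely vary across samples.
selector = VarianceThreshold(threshold=1e-3)
stats_reduced = selector.fit_transform(stats_27)

dropped = np.where(~selector.get_support())[0]
print("Dropped statistics:", dropped, "remaining:", stats_reduced.shape[1])
```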

Advanced ROI Calculator

Estimate the potential efficiency gains and cost savings for your enterprise by implementing AI-driven astronomical data analysis.


Your AI Implementation Roadmap

Our phased approach ensures a seamless integration of AI solutions for astronomical data processing.

Phase 1: Discovery & Data Audit

Duration: 2-4 Weeks

Comprehensive review of existing data pipelines, data quality, and specific classification challenges. Define project scope and success metrics.

Phase 2: Model Adaptation & Training

Duration: 4-8 Weeks

Customization of BAPANN architecture, integration with your Kepler-like datasets, and iterative training to optimize for your specific stellar variability types. Refinement of feature selection.

Phase 3: Deployment & Validation

Duration: 2-3 Weeks

Deployment of the validated AI model into your observational workflow. Rigorous A/B testing and performance monitoring to ensure accuracy and efficiency in real-world scenarios.

Phase 4: Ongoing Optimization & Support

Duration: Continuous

Regular model recalibration, performance tuning, and technical support. Exploration of expanding AI capabilities to other astronomical datasets and phenomena.

Ready to Transform Your Astronomical Data Analysis?

Connect with our AI specialists to discuss how custom AI solutions can revolutionize your research and operational efficiency.

Ready to Get Started?

Book Your Free Consultation.
