How Our Aurora AI Learns From the Sky

Aurora forecasts usually depend on satellite data that track solar eruptions, magnetic fields, and charged particles — the physics behind the northern lights. Yet these forecasts only describe what should happen overhead, not what people actually see. Local factors such as cloud cover and moonlight can change everything.

Our Aurora AI bridges that gap. It uses the same scientific data — the Kp index, the Bz component of the interplanetary magnetic field, solar-wind speed, and atmospheric conditions — but enhances them with real photos uploaded by people around the world. Each image is linked to the exact time, place, and space-weather conditions when it was taken, creating a feedback loop that shows whether forecasts matched reality.
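As an illustration, a single crowd-sourced observation of this kind could be stored as a record like the sketch below. The field names and the example values are hypothetical, not our production schema; they simply show how one photo gets tied to the space-weather conditions at the moment of capture.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuroraObservation:
    """One user photo matched with space-weather data at capture time."""
    photo_id: str
    captured_at: datetime    # exact capture time (UTC)
    latitude: float
    longitude: float
    kp_index: float          # planetary K-index (0-9)
    bz_nt: float             # IMF Bz component in nanotesla (negative favors aurora)
    solar_wind_kms: float    # solar-wind speed in km/s
    cloud_cover_pct: float   # local atmospheric conditions
    aurora_visible: bool     # ground truth: did the photo show aurora?

# A hypothetical observation from Tromsø during a strong storm.
obs = AuroraObservation(
    photo_id="img_0001",
    captured_at=datetime(2024, 5, 10, 22, 30, tzinfo=timezone.utc),
    latitude=69.65, longitude=18.96,
    kp_index=7.0, bz_nt=-12.4, solar_wind_kms=650.0,
    cloud_cover_pct=10.0, aurora_visible=True,
)
```

Because every record carries both the forecast inputs and the observed outcome, each upload is a labeled training example.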

With every new photo, the model becomes more accurate. It learns to recognize patterns between what’s predicted and what truly appears in the sky, evolving into a living, self-improving forecast that grows smarter over time.

We’re building this system step by step:

  • Phase 1: Collect real-world aurora images and match them with live space-weather data.

  • Phase 2: Train our AI to identify visual and magnetic patterns that correlate with true sightings.

  • Phase 3: Continuously refine the model with new user uploads, turning community observations into measurable learning.

  • Phase 4: Integrate these insights into a global, adaptive forecasting network that improves automatically as more people contribute.
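The feedback loop behind Phases 1–3 can be sketched in miniature: compare each forecast with its matched observation and track how often forecast and sky agreed. The function name, threshold, and sample numbers below are illustrative assumptions, not the actual model.

```python
def forecast_hit_rate(records, threshold=0.5):
    """Fraction of observations where the forecast agreed with the photo.

    records: list of (predicted_probability, aurora_visible) pairs, where
    predicted_probability is the forecast's visibility estimate and
    aurora_visible is the ground truth from the matched user photo.
    """
    hits = sum(
        1 for prob, seen in records
        if (prob >= threshold) == seen  # forecast "likely" matched reality
    )
    return hits / len(records) if records else 0.0

# Four matched records: the forecast and the sky agreed three times.
records = [(0.8, True), (0.7, False), (0.9, True), (0.2, False)]
print(forecast_hit_rate(records))  # → 0.75
```

A metric like this, recomputed as new uploads arrive, is what lets the system measure whether its forecasts are actually improving over time.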

Built entirely in-house, our system is transparent and secure, with full control over how data is used and stored. It’s not a generic AI add-on — it’s a world-first forecasting model that learns directly from human experience.