Methodology
Data Pipeline
Every event passes through an 11-stage normalization pipeline before it appears on the platform. This ensures consistent, comparable data regardless of the original source format.
1. Source Ingestion — Fetch from 250 APIs with tiered polling (45s to 6h)
2. Format Normalization — Convert heterogeneous formats to unified schema
3. Coordinate Validation — Verify lat/lng, anti-meridian normalization
4. Temporal Alignment — Normalize timestamps to Unix epoch milliseconds
5. Type Classification — Map source categories to 16 standard disaster types
6. Severity Mapping — Compute severity (critical/high/medium/low) with type-specific thresholds
7. Cross-Source Deduplication — Representative-based clustering to merge same events from different sources
8. Population Impact — Gaussian decay model for exposure estimation using city population data
9. Cascade Detection — 20 interaction models for secondary risk analysis
10. CalamityScore Computation — Composite 0-100 score with 4 weighted components
11. Quality Gate — Minimum data completeness check for archival and SEO indexing
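Two of the stages above (coordinate validation and temporal alignment) can be sketched as pure functions. The stage behavior follows the list; the function names and the assume-UTC fallback are illustrative, not the production code.

```python
from datetime import datetime, timezone

def normalize_longitude(lng: float) -> float:
    """Stage 3 (sketch): wrap longitudes across the anti-meridian into [-180, 180)."""
    return ((lng + 180.0) % 360.0) - 180.0

def to_epoch_ms(iso_ts: str) -> int:
    """Stage 4 (sketch): normalize an ISO-8601 timestamp to Unix epoch milliseconds."""
    dt = datetime.fromisoformat(iso_ts)
    if dt.tzinfo is None:
        # Assumption: sources that omit a zone report UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

def normalize_event(event: dict) -> dict:
    """Apply the coordinate and temporal stages in order, as the pipeline would."""
    out = dict(event)
    out["lng"] = normalize_longitude(event["lng"])
    out["time_ms"] = to_epoch_ms(event["time"])
    return out
```
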
CalamityScore (0-100)
The CalamityScore is a composite metric that provides a single, comparable measure of event severity. It combines four weighted components:
- Type-specific magnitude (e.g., Richter scale, wind speed, AQI, fire radiative power)
- Weighted population exposure using a Gaussian decay model with source confidence factors
- Secondary hazard probability from 20 interaction models (e.g., earthquake → landslide)
- Comparison against a regional baseline for the same disaster type
Scores are normalized to 0-100 where 0 is negligible and 100 is catastrophic. A confidence metric (0.0-1.0) accompanies each score, reflecting data completeness and source reliability.
CalamityScore formula
CalamityScore = clamp(round(w_i * I(e) + w_p * P(e) + w_c * C(e) + w_h * H(e)), 0, 100)
where:
- I(e) = intensity score (0-100), type-specific metric mapping
- P(e) = Population Exposure Index, KD-tree over 33K cities, Gaussian/linear/step decay
- C(e) = Cascading risk: max(probValue) * 60 + min(count,4)/4 * 40
- H(e) = Historical anomaly: 80 if top 10% intensity for type, else I(e)*0.5
- Weights: w_i=0.4, w_p=0.3, w_c=0.2, w_h=0.1
Confidence = 0.6 × source_reliability + 0.2 (if population data) + 0.1 (if cascades) + verification_boost + 0.05 (base), capped at 1.0
where source_reliability is data-driven (0.65-0.95), verification_boost = 0.1 if 3+ sources, 0.05 if 2 sources
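The score and confidence formulas above translate directly into code. This is a minimal sketch: parameter names are illustrative, and the summation order of the confidence terms is assumed.

```python
def calamity_score(I: float, P: float, C: float, H: float) -> int:
    """CalamityScore = clamp(round(0.4*I + 0.3*P + 0.2*C + 0.1*H), 0, 100)."""
    raw = 0.4 * I + 0.3 * P + 0.2 * C + 0.1 * H
    return max(0, min(100, round(raw)))

def confidence(source_reliability: float, has_pop: bool = False,
               has_cascades: bool = False, n_sources: int = 1) -> float:
    """Confidence formula from above; verification boost tiers by source count."""
    boost = 0.1 if n_sources >= 3 else 0.05 if n_sources == 2 else 0.0
    c = (source_reliability * 0.6
         + (0.2 if has_pop else 0.0)
         + (0.1 if has_cascades else 0.0)
         + boost
         + 0.05)  # base term
    return min(c, 1.0)
```
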
Intensity score table by disaster type
| Type | Primary metric | Score mapping |
|---|---|---|
| Earthquake | Magnitude (Mw/ML) | M9+=100, M8+=90, M7+=75, M6+=55, M5+=35, M4+=20 |
| Fire | FRP (MW) / Area (ha) | 10Kha+=95, 5K+=80, 1K+=60, 200+=40 |
| Cyclone | Wind (kt, Saffir-Simpson) | Cat5=100, Cat4=85, Cat3=70, Cat2=55, Cat1=40 |
| Flood | GDACS alert level | Red=85, Orange=50, else=20 |
| Volcano | Alert level (VONA) | Red=95, Orange=60, else=30 |
| Air Quality | PM2.5 (µg/m³) | 300+=95, 150+=75, 55+=50, 35+=30 |
| Tsunami | Wave height (m) | 5m+=100, 2m+=80, 0.5m+=55 |
| Solar Storm | NOAA G/S/R scale | L5=100, L4=80, L3=60, L2=40 |
| Radiation | µSv/h | 10+=95, 3+=70, 1+=45 |
| Asteroid | Torino/Palermo scale | T8+=95, T5+=80, T2+=55, P>0=60 |
| Avalanche | European danger level | L5=95, L4=75, L3=55, L2=35 |
| River Flood | NWIS flood category | Major=90, Moderate=60, Minor=35 |
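As one worked row of the table, a threshold walk for earthquakes might look like the following sketch. The sub-M4 fallback score is an assumption, since the table does not specify one.

```python
def earthquake_intensity(magnitude: float) -> int:
    """Map earthquake magnitude to the 0-100 intensity score per the table row."""
    for threshold, score in [(9, 100), (8, 90), (7, 75), (6, 55), (5, 35), (4, 20)]:
        if magnitude >= threshold:
            return score
    return 10  # sub-M4 fallback: assumed, not stated in the table
```
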
Calamity Forecast Index (CFI)
The CFI is a deterministic 0–100 composite score computed for each country × hazard type combination. It provides a forward-looking risk estimate based on current conditions, historical patterns, and cascade probabilities — with zero machine-learning components, making every score fully reproducible.
The score blends five signals:
- Likelihood of secondary hazards triggered by active events in the region
- Frequency-severity analysis from 20+ years of historical data
- A month-of-year baseline derived from 90-day rolling SQLite counts
- Short-term deviation from the expected event rate for the region
- Surge detection from GDELT and Google News coverage signals
Scores are classified into five levels: critical (80–100), high (60–79), elevated (40–59), moderate (20–39), low (0–19).
CFI computation details
CFI = clamp(0.30 × cascade + 0.25 × return_period + 0.20 × seasonal + 0.15 × trend + 0.10 × news, 0, 100)
Updated every T2+ polling cycle (~5 minutes). 90-day rolling baseline computed from SQLite historical event counts per country × hazard type.
LRU cache: 100 countries. Auto-generated driver strings explain the dominant contributing factor.
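The weighted sum and the five-level classification above can be sketched in a few lines; function names are illustrative, and all five components are assumed to arrive pre-normalized to 0-100.

```python
def cfi(cascade: float, return_period: float, seasonal: float,
        trend: float, news: float) -> float:
    """CFI = clamp(0.30*cascade + 0.25*return_period + 0.20*seasonal
                   + 0.15*trend + 0.10*news, 0, 100)."""
    raw = 0.30 * cascade + 0.25 * return_period + 0.20 * seasonal + 0.15 * trend + 0.10 * news
    return max(0.0, min(100.0, raw))

def cfi_level(score: float) -> str:
    """Classify a CFI score into the five named levels."""
    if score >= 80:
        return "critical"
    if score >= 60:
        return "high"
    if score >= 40:
        return "elevated"
    if score >= 20:
        return "moderate"
    return "low"
```
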
Country Risk Rating
Every monitored country receives a continuously updated structural grade (A–E) and an operational status (green / yellow / orange / red). The rating reflects both chronic risk exposure and acute event conditions.
The computation considers:
- Aggregate severity of active events weighted by CalamityScore
- Active cascade chains and their cumulative probability
- Population exposure from the PEI model for all active events
- A structural vulnerability proxy from historical event frequency
Delta detection: the engine compares the previous rating state to the current computation on every cycle. When a change is detected, it generates change drivers and a brief text explaining what shifted. For D or E downgrades, a CAP XML alert is automatically produced.
Rating grades and operational status
Structural grades
- A — Minimal risk. No significant active events.
- B — Low risk. Minor events, no cascades.
- C — Moderate risk. Notable events or active cascades.
- D — High risk. Severe events, multiple cascades, significant population exposure.
- E — Critical risk. Catastrophic conditions. CAP XML auto-generated.
Operational status
- Green — Stable, no deterioration trend.
- Yellow — Watch. Conditions may worsen.
- Orange — Elevated. Active deterioration detected.
- Red — Alert. Rapid deterioration or new critical event.
LRU cache: 100 countries. Max 500 stored deltas. Runs on every polling cycle.
Population Exposure Index (PEI)
Population impact is estimated using a Gaussian decay model. For each event with known coordinates, we calculate exposure based on proximity to populated areas using city-level data.
PEI = Σ (city_pop × decay(distance) × source_confidence)
decay(d) = exp(-d²/2σ²), where σ varies by disaster type
source_confidence ∈ [0.7, 1.0] based on monitoring source reliability
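The PEI formula reduces to a short sum. In this sketch the city tuples stand in for the platform's spatial lookup; the input shape is an assumption.

```python
import math

def gaussian_decay(d_km: float, sigma_km: float) -> float:
    """decay(d) = exp(-d^2 / (2*sigma^2)), sigma varying by disaster type."""
    return math.exp(-(d_km ** 2) / (2 * sigma_km ** 2))

def pei(event_cities, sigma_km: float) -> float:
    """PEI = sum(city_pop * decay(distance) * source_confidence).
    event_cities: iterable of (population, distance_km, source_confidence)."""
    return sum(pop * gaussian_decay(d, sigma_km) * conf
               for pop, d, conf in event_cities)
```
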
Decay profiles by disaster type
| Type | Decay profile | Rationale |
|---|---|---|
| Earthquake | Gaussian | Shaking attenuates with distance (Wald et al. 1999) |
| Fire | Linear | Direct threat is localized; burns a finite perimeter |
| Cyclone | Gaussian | Wind field attenuates from eye (Holland 2008) |
| Flood / River Flood | Step / Linear | Floodplain is binary or diminishes from river |
| Volcano | Gaussian | Pyroclastic and ash falloff (Sparks et al. 1997) |
| Tsunami | Linear | Run-up diminishes inland from coast |
| Air Quality | Gaussian | PM2.5 plume disperses (Gaussian plume model) |
| Radiation | Gaussian | Inverse-square + wind dispersion |
| Landslide / Avalanche | Step | Very localized, binary threat zone |
| Disease / Drought / Weather | Step | Regional reporting; alert zone is binary |
City database: 33,352 cities (pop ≥ 15,000) from GeoNames. KD-tree spatial index for O(log n) nearest-neighbor queries. Source confidence: single-sensor 0.7, 2-3 sensors 0.85, official agency 1.0.
Cross-Source Verification
Every event is assigned one of four verification levels based on independent source confirmation:
- 3+ independent sources confirm the event. Highest confidence level.
- 2 independent sources agree. Strong confidence.
- Single official primary agency (USGS, NOAA, etc.). Reliable but unconfirmed.
- Single non-primary source. Treat with caution.
Magnitude reconciliation
When multiple sources report different magnitudes for the same earthquake, we compute a weighted average:
M_reconciled = Σ(w_i × M_i) / Σ(w_i)
where w_i = 1.5 for primary agencies (USGS, EMSC, etc.), 1.0 for other sources.
Magnitude uncertainty = standard deviation across reported values. Exposed as source_agreement.magnitude in the API.
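A sketch of the reconciliation, assuming a small illustrative set of primary agencies and using the population standard deviation for the uncertainty (the document does not specify sample vs. population):

```python
import statistics

PRIMARY_AGENCIES = {"USGS", "EMSC"}  # illustrative subset, not the full list

def reconcile_magnitude(reports):
    """reports: list of (source, magnitude).
    M_reconciled = sum(w_i * M_i) / sum(w_i), w_i = 1.5 for primary agencies else 1.0.
    Returns (weighted mean, std dev across reported values)."""
    weights = [1.5 if src in PRIMARY_AGENCIES else 1.0 for src, _ in reports]
    mags = [m for _, m in reports]
    m = sum(w * mag for w, mag in zip(weights, mags)) / sum(weights)
    sigma = statistics.pstdev(mags) if len(mags) > 1 else 0.0
    return m, sigma
```
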
Risk Score v2 — Frequency-Severity Analysis
The location-based risk score uses historical event data to estimate real return periods and expected annual losses. This goes beyond simple event counting to provide actuarially meaningful metrics that insurers and governments can use for decision-making.
Return Period Estimation
Annual Exceedance Frequency
For each hazard type and severity threshold:
Annual frequency: λ = N_events / T_years
Return period: RP = 1/λ
Exceedance probability (t years): P = 1 - e^(-λt)
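The three formulas above compose directly under a Poisson arrival assumption:

```python
import math

def annual_frequency(n_events: int, t_years: float) -> float:
    """lambda = N_events / T_years."""
    return n_events / t_years

def return_period(lam: float) -> float:
    """RP = 1 / lambda."""
    return 1.0 / lam

def exceedance_probability(lam: float, t_years: float) -> float:
    """P = 1 - exp(-lambda * t): probability of at least one exceedance in t years."""
    return 1.0 - math.exp(-lam * t_years)
```

For example, 5 qualifying events over a 50-year record give λ = 0.1/yr, a 10-year return period, and roughly a 63% chance of at least one exceedance per return period.
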
Gutenberg-Richter (Earthquakes)
For earthquake-prone locations, we fit the Gutenberg-Richter relation:
log₁₀(N) = a − bM
b-value estimated via Maximum Likelihood (Aki 1965): b = log₁₀(e) / (M_mean − M_min)
σ_b = b / √n (Shi & Bolt 1982)
Quality: good (n≥100, σ_b<0.1), fair (n≥30), poor (n<30)
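The Aki estimator, its Shi & Bolt uncertainty, and the quality grading can be sketched as:

```python
import math

def gr_b_value(magnitudes, m_min: float):
    """Gutenberg-Richter b-value via Maximum Likelihood (Aki 1965):
    b = log10(e) / (mean(M) - M_min), with sigma_b = b / sqrt(n) (Shi & Bolt 1982).
    Returns (b, sigma_b, quality)."""
    n = len(magnitudes)
    m_mean = sum(magnitudes) / n
    b = math.log10(math.e) / (m_mean - m_min)
    sigma_b = b / math.sqrt(n)
    if n >= 100 and sigma_b < 0.1:
        quality = "good"
    elif n >= 30:
        quality = "fair"
    else:
        quality = "poor"
    return b, sigma_b, quality
```
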
Expected Annual Loss (EAL)
Calibrated EAL Formula
EAL = Σ(λ_severity × loss_ratio_severity × exposure_value)
Loss ratios calibrated from aggregate data (Munich Re NatCatSERVICE, EM-DAT):
| Hazard | Critical | High | Medium | Low |
|---|---|---|---|---|
| Earthquake | 8% | 2% | 0.3% | 0.05% |
| Cyclone | 6% | 1.5% | 0.3% | 0.03% |
| Flood | 4% | 1% | 0.2% | 0.02% |
| Fire | 5% | 1.2% | 0.2% | 0.03% |
| Tsunami | 10% | 3% | 0.5% | 0.1% |
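Using the earthquake row of the calibration table, the EAL sum might be sketched as follows. The annual frequencies and exposure value in the test are invented for illustration.

```python
# Loss ratios from the calibration table above (earthquake row shown).
LOSS_RATIOS = {
    "earthquake": {"critical": 0.08, "high": 0.02, "medium": 0.003, "low": 0.0005},
}

def eal(hazard: str, annual_freqs: dict, exposure_value: float) -> float:
    """EAL = sum(lambda_severity * loss_ratio_severity * exposure_value).
    annual_freqs: {severity: events per year} for this hazard at the location."""
    ratios = LOSS_RATIOS[hazard]
    return sum(lam * ratios[sev] * exposure_value
               for sev, lam in annual_freqs.items())
```
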
Comparison with existing indices
How Calamity Risk Score compares:
- INFORM Risk Index — annual update, 191 countries. Calamity: real-time, location-level (50km resolution)
- WorldRiskIndex (Birkmann et al.) — exposure + vulnerability + coping. Calamity: frequency-severity + cascades
- DREF Risk — humanitarian focus, qualitative. Calamity: quantitative, API-accessible
- Swiss Re CatNet — commercial, proprietary. Calamity: transparent methodology, open loss ratios
Key advantage: Calamity combines real-time monitoring (250 sources) with historical analysis (20+ years USGS/FIRMS) and cascade modeling (20 rules), updated continuously rather than annually.
Source Reliability Scoring
Each of our 250 data sources receives a data-driven reliability score based on cross-source confirmation rates, reporting latency, and event volume. This replaces hardcoded confidence values with empirically calibrated weights.
Reliability computation
Base reliability by category:
- Official agency (USGS, NOAA, etc.): 0.95
- Scientific network (ISC, IRIS, etc.): 0.88
- Automated sensor: 0.75
- Crowdsourced (SafeCast, PurpleAir): 0.65
Adjusted by: cross-match rate (events confirmed by other sources), reporting latency (faster = bonus up to +0.05), and event volume (more events = more trustworthy, up to +0.05).
Blending: accuracy = base × (1 - w) + empirical × w, where w = min(1, events/100)
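The blending rule as a one-liner:

```python
def blended_accuracy(base: float, empirical: float, n_events: int) -> float:
    """accuracy = base * (1 - w) + empirical * w, where w = min(1, events/100).
    Sources with little history stay near their category base; high-volume
    sources converge to their empirically measured accuracy."""
    w = min(1.0, n_events / 100.0)
    return base * (1 - w) + empirical * w
```
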
Cascade Detection
Our cascade engine models 20 interactions between disaster types using a Directed Acyclic Graph (DAG) with max depth 3. When a primary event is detected, the system evaluates conditional probabilities for secondary hazards based on event characteristics, spatial proximity, and regional vulnerability profiles. Chain probability decays multiplicatively (0.7 at depth 1, 0.5 at depth 2+).
Examples:
- Earthquake → Landslide (amplifier: 0.3 × (M-4), clamped [0.5, 2.0])
- Earthquake → Tsunami (for submarine M7.0+ events)
- Cyclone → Flood (precipitation accumulation model)
- Volcanic eruption → Air quality degradation (ash cloud dispersion)
- Fire → Debris flow (post-fire hydrophobic soil, up to 6 months)
Complete H1-H20 interaction table
| ID | Trigger | Target | P(base) | Radius | Window | Reference |
|---|---|---|---|---|---|---|
| H1 | Earthquake M7+ | Tsunami | 0.30 | 500km | 1h | Kanamori 1972 |
| H2 | Earthquake M5+ | Landslide | 0.25 | 150km | 24h | Keefer 1984 |
| H3 | Earthquake M6+ | Flooding | 0.10 | 100km | 48h | Seed & Idriss 1971 |
| H4 | Earthquake M7.5+ | Volcanic unrest | 0.05 | 200km | 30d | Manga & Brodsky 2006 |
| H5 | Volcano + glacier | Lahar | 0.60 | 80km | 48h | Major & Newhall 1989 |
| H6 | Volcano coastal | Volcanic tsunami | 0.08 | 300km | 6h | Paris et al. 2014 |
| H7 | Volcano VEI4+ | Air quality | 0.70 | 1000km | 72h | Robock 2000 |
| H8 | Cyclone Cat1+ | Flooding | 0.80 | 300km | 72h | Emanuel 2005 |
| H9 | Cyclone Cat2+ | Landslide | 0.30 | 200km | 72h | Caine 1980 |
| H10 | Cyclone landfall | Tornado outbreak | 0.40 | 500km | 48h | McCaul 1991 |
| H11 | Flood severe | Landslide | 0.20 | 50km | 72h | Iverson 2000 |
| H12 | Landslide major | River blockage | 0.35 | 100km | 7d | Costa & Schuster 1988 |
| H13 | Fire 500ha+ | Debris flow | 0.45 | 30km | 180d | Cannon et al. 2001 |
| H14 | Fire major | Air quality | 0.75 | 200km | 72h | Reid et al. 2016 |
| H15 | Solar storm G3+ | Infrastructure | 0.50 | Global | 72h | Pulkkinen et al. 2017 |
| H16 | Solar storm S3+ | Radiation | 0.40 | Global | 48h | Shea & Smart 2012 |
| H17 | Solar storm R3+ | Comms disruption | 0.50 | Global | 24h | Berdermann et al. 2018 |
| H18 | Tsunami | Coastal flooding | 0.90 | 10km | 6h | Synolakis & Bernard 2006 |
| H19 | Flood tropical | Disease outbreak | 0.15 | 100km | 14d | Watson et al. 2007 |
| H20 | Avalanche L4+ | Burial risk | 0.50 | 5km | 24h | Schweizer et al. 2003 |
Spatial conditions: 82 coastal bounding boxes, 146 active volcanoes (36 glacier + 110 notable), tropical band 23.5°S-23.5°N. Chain decay: depth 1 = 0.7×, depth 2+ = 0.5× multiplicative attenuation.
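One plausible reading of the chain-decay rule above (each link's base probability is attenuated by 0.7 at depth 1 and 0.5 at deeper links, multiplied along the chain; the exact composition of base probabilities is an assumption):

```python
def chain_probability(base_probs) -> float:
    """Multiplicative chain probability with depth attenuation.
    base_probs: conditional base probabilities of each link, in trigger order.
    Chains are capped at max depth 3 per the DAG."""
    p = 1.0
    for depth, base in enumerate(base_probs[:3], start=1):
        attenuation = 0.7 if depth == 1 else 0.5
        p *= base * attenuation
    return p
```

For example, an H1 earthquake-to-tsunami link (P=0.30) alone yields 0.21; extending it with an H18 tsunami-to-coastal-flooding link (P=0.90) yields 0.21 × 0.45 ≈ 0.094.
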
SEO Quality Gate
Not all events generate individual pages. Our quality gate requires at least 5 of 8 core data points (type, coordinates, timestamp, severity, source, city, score, title) plus type-specific severity thresholds, ensuring each published page provides meaningful, valuable content.
| Type | Minimum Threshold |
|---|---|
| Earthquake | M ≥ 2.5 |
| Air Quality | AQI ≥ 100 |
| Wildfire | Severity ≥ high |
| Asteroid | Always indexed |
| All others | Severity ≠ low |
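A sketch of the gate combining the completeness check and the threshold table; the field names and threshold encoding are assumptions, not the production schema.

```python
CORE_FIELDS = ("type", "coordinates", "timestamp", "severity",
               "source", "city", "score", "title")

def passes_quality_gate(event: dict) -> bool:
    """At least 5 core fields present, then the type-specific threshold."""
    present = sum(1 for f in CORE_FIELDS if event.get(f) is not None)
    if present < 5:
        return False
    t = event.get("type")
    if t == "earthquake":
        return event.get("magnitude", 0) >= 2.5
    if t == "air_quality":
        return event.get("aqi", 0) >= 100
    if t == "wildfire":
        return event.get("severity") in ("high", "critical")
    if t == "asteroid":
        return True  # always indexed
    return event.get("severity") != "low"
```
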
Cross-Source Deduplication
When the same event is reported by multiple agencies (e.g., USGS, EMSC, INGV for the same earthquake), our representative-based clustering algorithm merges them while preserving the most data-rich instance. Same-source events are never merged to avoid collapsing genuinely distinct events.
Dedup algorithm details
Algorithm: Representative-Based Clustering
- Grid-cell spatial index (latitude-corrected longitude neighbor search)
- For each event, search neighbor grid cells within type-specific radius
- Match candidates: same type, different source, within temporal + spatial + magnitude thresholds
- Select representative: highest priority source (350+ source priorities)
- Merge metadata from all cluster members into the representative
Default thresholds per type:
- Earthquake: 50km, 10min, 0.5 magnitude tolerance
- Fire: 25km, 6h (satellite revisit time)
- Cyclone: 300km, 12h (advisory cycle)
- Flood/River Flood: 100km, 24h
- Air Quality: 30km, 3h (sensor reporting)
16 regional overrides for seismically dense regions (Japan, Italy, Indonesia, etc.) where events cluster more tightly.
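The match predicate for two candidate events might be sketched with a haversine distance and the default thresholds above. Only two types are shown, and the field names are assumptions; the production grid-cell index and regional overrides are omitted.

```python
import math

# Default thresholds from the list above: (radius_km, window_seconds, mag_tolerance or None)
THRESHOLDS = {
    "earthquake": (50, 10 * 60, 0.5),
    "fire": (25, 6 * 3600, None),
}

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def is_duplicate(a: dict, b: dict) -> bool:
    """Same type, different source, within temporal/spatial/magnitude thresholds."""
    if a["type"] != b["type"] or a["source"] == b["source"]:
        return False  # same-source events are never merged
    radius_km, window_s, mag_tol = THRESHOLDS[a["type"]]
    if abs(a["time"] - b["time"]) > window_s:
        return False
    if haversine_km(a["lat"], a["lng"], b["lat"], b["lng"]) > radius_km:
        return False
    if mag_tol is not None and abs(a.get("mag", 0) - b.get("mag", 0)) > mag_tol:
        return False
    return True
```
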
Data Accuracy & Scientific Metadata
Event data accuracy depends on the upstream monitoring source. We do not modify source measurements (magnitudes, wind speeds, etc.) — we normalize, deduplicate, and enrich. Our confidence metric reflects the completeness of available data, not the accuracy of the source measurement itself.
Where available, events include scientific metadata: magnitude type (Mw, ML, Mb), measurement uncertainties (location, depth, magnitude error), processing status (automatic vs. reviewed), and timestamp provenance (event time, report time, ingest time). This metadata is sourced from USGS, EMSC, and other agencies and exposed via the API for authenticated users.
References
- Gill, J.C. & Malamud, B.D. (2014). Reviewing and visualizing the interactions of natural hazards. Reviews of Geophysics, 52(4), 680-722.
- Keefer, D.K. (1984). Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421.
- Kanamori, H. (1972). Mechanism of tsunami earthquakes. Physics of the Earth and Planetary Interiors, 6(5), 346-359.
- Wald, D.J. et al. (1999). Relationships between PGA, PGV, and MMI in California. Earthquake Spectra, 15(3), 557-564.
- Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686-688.
- Cannon, S.H. et al. (2001). Wildfire-related debris flow from a hazards perspective. USGS Fact Sheet 103-01.
- Iverson, R.M. (2000). Landslide triggering by rain infiltration. Water Resources Research, 36(7), 1897-1910.
- Costa, J.E. & Schuster, R.L. (1988). The formation and failure of natural dams. GSA Bulletin, 100(7), 1054-1068.
- Manga, M. & Brodsky, E.E. (2006). Seismic triggering of eruptions in the far field. Annual Review of Earth and Planetary Sciences, 34, 263-291.
- Pulkkinen, A. et al. (2017). Geomagnetically induced currents: Science, engineering, and applications readiness. Space Weather, 15(7), 828-856.
- Synolakis, C.E. & Bernard, E.N. (2006). Tsunami science before and beyond Boxing Day 2004. Phil. Trans. R. Soc. A, 364, 2231-2265.
- Holland, G.J. (2008). A revised hurricane pressure-wind model. Monthly Weather Review, 136, 3432-3445.
How to Cite
Platform
Calamity.live (2026). Multi-Source Natural Disaster Monitoring Platform with Cascading Hazard Detection. https://calamity.live
Dataset
Calamity.live Event Archive (2026). Available at: https://calamity.live/events
BibTeX
@misc{calamitylive2026,
  author = {{Calamity.live}},
  title  = {Multi-Source Natural Disaster Monitoring Platform with Cascading Hazard Detection},
  year   = {2026},
  url    = {https://calamity.live}
}
Quotable summary: According to Calamity.live, a platform that aggregates real-time data from 250 scientific monitoring sources across 16 disaster types, the CalamityScore algorithm combines intensity (40%), population exposure (30%), cascading hazard risk (20%), and historical context (10%) to produce a composite 0-100 severity metric. The platform processes events through an 11-stage normalization pipeline with cross-source deduplication and 20 peer-reviewed cascade interaction models (Gill & Malamud 2014).
Disclaimer: This platform provides algorithmic aggregation of scientific monitoring data for informational purposes. It is not a replacement for official emergency warnings from national meteorological services, civil protection agencies, or other authorized sources. Always follow local emergency services guidance.