Calamity

Methodology

Data Pipeline

Every event passes through an 11-stage normalization pipeline before it appears on the platform. This ensures consistent, comparable data regardless of the original source format.

  1. Source Ingestion — Fetch from 250 APIs with tiered polling (45s to 6h)
  2. Format Normalization — Convert heterogeneous formats to a unified schema
  3. Coordinate Validation — Verify lat/lng, anti-meridian normalization
  4. Temporal Alignment — Normalize timestamps to Unix epoch milliseconds
  5. Type Classification — Map source categories to 16 standard disaster types
  6. Severity Mapping — Compute severity (critical/high/medium/low) with type-specific thresholds
  7. Cross-Source Deduplication — Representative-based clustering to merge the same event reported by different sources
  8. Population Impact — Gaussian decay model for exposure estimation using city population data
  9. Cascade Detection — 20 interaction models for secondary risk analysis
  10. CalamityScore Computation — Composite 0-100 score with 4 weighted components
  11. Quality Gate — Minimum data completeness check for archival and SEO indexing

CalamityScore (0-100)

The CalamityScore is a composite metric that provides a single, comparable measure of event severity. It combines four weighted components:

Intensity (40%)

Type-specific magnitude (e.g., Richter scale, wind speed, AQI, fire radiative power)

Population Impact (30%)

Weighted population exposure using a Gaussian decay model with source confidence factors

Cascade Risk (20%)

Secondary hazard probability from 20 interaction models (e.g., earthquake → landslide)

Historical Context (10%)

Comparison against regional baseline for the same disaster type

Scores are normalized to 0-100 where 0 is negligible and 100 is catastrophic. A confidence metric (0.0-1.0) accompanies each score, reflecting data completeness and source reliability.

CalamityScore formula

CalamityScore = clamp(round(w_i * I(e) + w_p * P(e) + w_c * C(e) + w_h * H(e)), 0, 100)

where:

  • I(e) = intensity score (0-100), type-specific metric mapping
  • P(e) = Population Exposure Index, KD-tree over 33K cities, Gaussian/linear/step decay
  • C(e) = Cascading risk: max(probValue) * 60 + min(count,4)/4 * 40
  • H(e) = Historical anomaly: 80 if top 10% intensity for type, else I(e)*0.5
  • Weights: w_i=0.4, w_p=0.3, w_c=0.2, w_h=0.1

Confidence = source_reliability × 0.6 + 0.2 (if pop data) + 0.1 (if cascades) + verification_boost + 0.05 (base), capped at 1.0

where source_reliability is data-driven (0.65-0.95), verification_boost = 0.1 if 3+ sources, 0.05 if 2 sources
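The score and confidence formulas above can be sketched in a few lines of Python (the function and parameter names are ours for illustration, not the platform's API):

```python
def calamity_score(intensity, population, cascade, historical):
    """Weighted composite of the four 0-100 components, clamped to [0, 100]."""
    raw = 0.4 * intensity + 0.3 * population + 0.2 * cascade + 0.1 * historical
    return max(0, min(100, round(raw)))

def confidence(source_reliability, has_pop_data, has_cascades, n_sources):
    """Reliability-weighted base plus completeness bonuses, capped at 1.0."""
    verification_boost = 0.1 if n_sources >= 3 else (0.05 if n_sources == 2 else 0.0)
    c = (source_reliability * 0.6
         + (0.2 if has_pop_data else 0.0)
         + (0.1 if has_cascades else 0.0)
         + verification_boost
         + 0.05)  # base term
    return min(1.0, c)
```

For example, an event with I=75, P=60, C=40, H=37.5 scores round(30 + 18 + 8 + 3.75) = 60.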

Intensity score table by disaster type
Type | Primary metric | Score mapping
Earthquake | Magnitude (Mw/ML) | M9+=100, M8+=90, M7+=75, M6+=55, M5+=35, M4+=20
Fire | FRP (MW) / Area (ha) | 10K ha+=95, 5K+=80, 1K+=60, 200+=40
Cyclone | Wind (kt, Saffir-Simpson) | Cat5=100, Cat4=85, Cat3=70, Cat2=55, Cat1=40
Flood | GDACS alert level | Red=85, Orange=50, else=20
Volcano | Alert level (VONA) | Red=95, Orange=60, else=30
Air Quality | PM2.5 (µg/m³) | 300+=95, 150+=75, 55+=50, 35+=30
Tsunami | Wave height (m) | 5m+=100, 2m+=80, 0.5m+=55
Solar Storm | NOAA G/S/R scale | L5=100, L4=80, L3=60, L2=40
Radiation | µSv/h | 10+=95, 3+=70, 1+=45
Asteroid | Torino/Palermo scale | T8+=95, T5+=80, T2+=55, P>0=60
Avalanche | European danger level | L5=95, L4=75, L3=55, L2=35
River Flood | NWIS flood category | Major=90, Moderate=60, Minor=35
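Each row of the table is a threshold ladder: the value is compared against descending cutoffs and takes the first matching score. A minimal sketch, using the published earthquake and PM2.5 ladders (the `LADDERS` structure and `intensity` function are ours):

```python
# Threshold ladders as (minimum value, score) pairs, highest first.
# Earthquake and PM2.5 values are taken from the table above.
LADDERS = {
    "earthquake": [(9.0, 100), (8.0, 90), (7.0, 75), (6.0, 55), (5.0, 35), (4.0, 20)],
    "air_quality": [(300, 95), (150, 75), (55, 50), (35, 30)],  # PM2.5 µg/m³
}

def intensity(event_type, value, floor=0):
    """Return the score of the first threshold the value meets, else the floor."""
    for threshold, score in LADDERS[event_type]:
        if value >= threshold:
            return score
    return floor
```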

Calamity Forecast Index (CFI)

The CFI is a deterministic 0–100 composite score computed for each country × hazard type combination. It provides a forward-looking risk estimate based on current conditions, historical patterns, and cascade probabilities — with zero machine-learning components, making every score fully reproducible.

Cascade Probability (30%)

Likelihood of secondary hazards triggered by active events in the region

Return Period (25%)

Frequency-severity analysis from 20+ years of historical data

Seasonal Pattern (20%)

Month-of-year baseline derived from 90-day rolling SQLite counts

Trend Anomaly (15%)

Short-term deviation from the expected event rate for the region

News Velocity (10%)

Surge detection from GDELT and Google News coverage signals

Scores are classified into five levels: critical (80–100), high (60–79), elevated (40–59), moderate (20–39), low (0–19).

CFI computation details

CFI = clamp(0.30 × cascade + 0.25 × return_period + 0.20 × seasonal + 0.15 × trend + 0.10 × news, 0, 100)

Updated every T2+ polling cycle (~5 minutes). 90-day rolling baseline computed from SQLite historical event counts per country × hazard type.

LRU cache: 100 countries. Auto-generated driver strings explain the dominant contributing factor.
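Because the CFI has no learned components, the whole computation reduces to a weighted sum plus a level lookup. A minimal sketch of the published formula and classification bands (function names are ours):

```python
def cfi(cascade, return_period, seasonal, trend, news):
    """Deterministic weighted sum; each component is pre-scaled to 0-100."""
    raw = (0.30 * cascade + 0.25 * return_period + 0.20 * seasonal
           + 0.15 * trend + 0.10 * news)
    return max(0.0, min(100.0, raw))

def cfi_level(score):
    """Map a 0-100 CFI score onto the five published risk levels."""
    if score >= 80: return "critical"
    if score >= 60: return "high"
    if score >= 40: return "elevated"
    if score >= 20: return "moderate"
    return "low"
```

The same inputs always produce the same score, which is what makes every CFI value reproducible.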

Country Risk Rating

Every monitored country receives a continuously updated structural grade (A–E) and an operational status (green / yellow / orange / red). The rating reflects both chronic risk exposure and acute event conditions.

Severity (40%)

Aggregate severity of active events weighted by CalamityScore

Cascade (25%)

Active cascade chains and their cumulative probability

Population (20%)

Population exposure from the PEI model for all active events

Vulnerability (15%)

Structural vulnerability proxy from historical event frequency

Delta detection: the engine compares the previous rating state to the current computation on every cycle. When a change is detected, it generates change drivers and a brief text explaining what shifted. For D or E downgrades, a CAP XML alert is automatically produced.

Rating grades and operational status

Structural grades

  • A — Minimal risk. No significant active events.
  • B — Low risk. Minor events, no cascades.
  • C — Moderate risk. Notable events or active cascades.
  • D — High risk. Severe events, multiple cascades, significant population exposure.
  • E — Critical risk. Catastrophic conditions. CAP XML auto-generated.

Operational status

  • Green — Stable, no deterioration trend.
  • Yellow — Watch. Conditions may worsen.
  • Orange — Elevated. Active deterioration detected.
  • Red — Alert. Rapid deterioration or new critical event.

LRU cache: 100 countries. Max 500 stored deltas. Runs on every polling cycle.

Population Exposure Index (PEI)

Population impact is estimated using type-specific decay models (Gaussian, linear, or step; see the profile table below). For each event with known coordinates, we calculate exposure based on proximity to populated areas using city-level data.

PEI = Σ (city_pop × decay(distance) × source_confidence)

decay(d) = exp(-d²/2σ²), where σ varies by disaster type

source_confidence ∈ [0.7, 1.0] based on monitoring source reliability
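A minimal sketch of the PEI sum for the Gaussian case (the per-type σ values here are illustrative placeholders, since the platform's actual values are not published):

```python
import math

# Illustrative sigmas in km; the platform's per-type values are not published.
SIGMA_KM = {"earthquake": 100.0, "cyclone": 150.0}

def gaussian_decay(d_km, sigma_km):
    """decay(d) = exp(-d² / (2σ²))"""
    return math.exp(-(d_km ** 2) / (2 * sigma_km ** 2))

def pei(event_type, cities, source_confidence):
    """cities: iterable of (population, distance_km) pairs near the event."""
    sigma = SIGMA_KM.get(event_type, 50.0)
    return sum(pop * gaussian_decay(d, sigma) * source_confidence
               for pop, d in cities)
```

At d = σ the decay factor is exp(-0.5) ≈ 0.61, so a city one sigma away contributes about 61% of its population to the index.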

Decay profiles by disaster type
Type | Decay profile | Rationale
Earthquake | Gaussian | Shaking attenuates with distance (Wald et al. 1999)
Fire | Linear | Direct threat is localized; burns a finite perimeter
Cyclone | Gaussian | Wind field attenuates from eye (Holland 2008)
Flood / River Flood | Step / Linear | Floodplain is binary or diminishes from river
Volcano | Gaussian | Pyroclastic and ash falloff (Sparks et al. 1997)
Tsunami | Linear | Run-up diminishes inland from coast
Air Quality | Gaussian | PM2.5 plume disperses (Gaussian plume model)
Radiation | Gaussian | Inverse-square + wind dispersion
Landslide / Avalanche | Step | Very localized, binary threat zone
Disease / Drought / Weather | Step | Regional reporting; alert zone is binary

City database: 33,352 cities (pop ≥ 15,000) from GeoNames. KD-tree spatial index for O(log n) nearest-neighbor queries. Source confidence: single-sensor 0.7, 2-3 sensors 0.85, official agency 1.0.

Cross-Source Verification

Every event is assigned a verification level based on independent source confirmation:

Verified

3+ independent sources confirm the event. Highest confidence level.

Confirmed

2 independent sources agree. Strong confidence.

Reported

Single official primary agency (USGS, NOAA, etc.). Reliable but unconfirmed.

Unconfirmed

Single non-primary source. Treat with caution.

Magnitude reconciliation

When multiple sources report different magnitudes for the same earthquake, we compute a weighted average:

M_reconciled = Σ(w_i × M_i) / Σ(w_i)

where w_i = 1.5 for primary agencies (USGS, EMSC, etc.), 1.0 for other sources.

Magnitude uncertainty = standard deviation across reported values. Exposed as source_agreement.magnitude in the API.
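A minimal sketch of the reconciliation rule (we use the population standard deviation for the uncertainty; the document does not specify which variant):

```python
import statistics

PRIMARY_AGENCIES = {"USGS", "EMSC"}  # weighted 1.5 per the rule above

def reconcile_magnitude(reports):
    """reports: list of (agency, magnitude) tuples from independent sources.
    Returns the weighted mean and the spread across reported values."""
    weights = [1.5 if agency in PRIMARY_AGENCIES else 1.0 for agency, _ in reports]
    mags = [m for _, m in reports]
    m_rec = sum(w * m for w, m in zip(weights, mags)) / sum(weights)
    sigma = statistics.pstdev(mags) if len(mags) > 1 else 0.0
    return m_rec, sigma
```

For example, USGS reporting M6.0 and a non-primary network reporting M6.4 reconciles to (1.5·6.0 + 1.0·6.4) / 2.5 = M6.16.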

Risk Score v2 — Frequency-Severity Analysis

The location-based risk score uses historical event data to estimate real return periods and expected annual losses. This goes beyond simple event counting to provide actuarially meaningful metrics that insurers and governments can use for decision-making.

Return Period Estimation

Annual Exceedance Frequency

For each hazard type and severity threshold:

Annual frequency: λ = N_events / T_years

Return period: RP = 1/λ

Exceedance probability (t years): P = 1 - e^(-λt)
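The three formulas above assume Poisson event arrivals; they translate directly to code (function names are ours):

```python
import math

def annual_frequency(n_events, t_years):
    """λ = N_events / T_years"""
    return n_events / t_years

def return_period(lam):
    """RP = 1/λ"""
    return 1.0 / lam

def exceedance_probability(lam, t_years):
    """P(at least one event in t years) = 1 - e^(-λt), under Poisson arrivals."""
    return 1.0 - math.exp(-lam * t_years)
```

For example, 10 qualifying events over 20 years gives λ = 0.5/yr, a 2-year return period, and roughly a 39% chance of at least one event in any single year.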

Gutenberg-Richter (Earthquakes)

For earthquake-prone locations, we fit the Gutenberg-Richter relation:

log₁₀(N) = a - bM

b-value estimated via Maximum Likelihood (Aki 1965): b = log₁₀(e) / (M_mean - M_min)

σ_b = b / √n (Shi & Bolt 1982)

Quality: good (n≥100, σ_b<0.1), fair (n≥30), poor (n<30)
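The Aki estimator and its error term are short enough to sketch directly (function names are ours):

```python
import math

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood b-value and Shi & Bolt (1982) standard error."""
    n = len(magnitudes)
    m_mean = sum(magnitudes) / n
    b = math.log10(math.e) / (m_mean - m_min)
    return b, b / math.sqrt(n)

def fit_quality(n, sigma_b):
    """Quality bands from the thresholds above."""
    if n >= 100 and sigma_b < 0.1:
        return "good"
    return "fair" if n >= 30 else "poor"
```

Note that b = 1 exactly when the mean magnitude exceeds the catalogue minimum by log₁₀(e) ≈ 0.434, a useful sanity check.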

Expected Annual Loss (EAL)

Calibrated EAL Formula

EAL = Σ(λ_severity × loss_ratio_severity × exposure_value)

Loss ratios calibrated from aggregate data (Munich Re NatCatSERVICE, EM-DAT):

Hazard | Critical | High | Medium | Low
Earthquake | 8% | 2% | 0.3% | 0.05%
Cyclone | 6% | 1.5% | 0.3% | 0.03%
Flood | 4% | 1% | 0.2% | 0.02%
Fire | 5% | 1.2% | 0.2% | 0.03%
Tsunami | 10% | 3% | 0.5% | 0.1%

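A minimal sketch of the EAL sum, with loss ratios copied from the calibration table (the data structure and function names are ours):

```python
# Loss ratios (fraction of exposure value) from the calibration table above.
LOSS_RATIOS = {
    "earthquake": {"critical": 0.08, "high": 0.02, "medium": 0.003, "low": 0.0005},
    "tsunami":    {"critical": 0.10, "high": 0.03, "medium": 0.005, "low": 0.001},
}

def expected_annual_loss(hazard, annual_frequencies, exposure_value):
    """annual_frequencies: dict mapping severity -> λ (events/year)."""
    ratios = LOSS_RATIOS[hazard]
    return sum(lam * ratios[sev] * exposure_value
               for sev, lam in annual_frequencies.items())
```

For example, a location with λ=0.01/yr for critical earthquakes and λ=0.1/yr for high-severity ones, holding $1M of exposure, carries an EAL of $800 + $2,000 = $2,800/yr.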
Comparison with existing indices

How Calamity Risk Score compares:

  • INFORM Risk Index — annual update, 191 countries. Calamity: real-time, location-level (50km resolution)
  • WorldRiskIndex (Birkmann et al.) — exposure + vulnerability + coping. Calamity: frequency-severity + cascades
  • DREF Risk — humanitarian focus, qualitative. Calamity: quantitative, API-accessible
  • Swiss Re CatNet — commercial, proprietary. Calamity: transparent methodology, open loss ratios

Key advantage: Calamity combines real-time monitoring (250 sources) with historical analysis (20+ years USGS/FIRMS) and cascade modeling (20 rules), updated continuously rather than annually.

Source Reliability Scoring

Each of our 250 data sources receives a data-driven reliability score based on cross-source confirmation rates, reporting latency, and event volume. This replaces hardcoded confidence values with empirically calibrated weights.

Reliability computation

Base reliability by category:

  • Official agency (USGS, NOAA, etc.): 0.95
  • Scientific network (ISC, IRIS, etc.): 0.88
  • Automated sensor: 0.75
  • Crowdsourced (SafeCast, PurpleAir): 0.65

Adjusted by: cross-match rate (events confirmed by other sources), reporting latency (faster = bonus up to +0.05), and event volume (more events = more trustworthy, up to +0.05).

Blending: accuracy = base × (1 - w) + empirical × w, where w = min(1, events/100)
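The blending rule shifts weight from the hardcoded category prior to the empirically observed rate as a source accumulates events, becoming fully empirical at 100+ events. A direct transcription:

```python
def blended_accuracy(base, empirical, n_events):
    """accuracy = base × (1 - w) + empirical × w, where w = min(1, events/100)."""
    w = min(1.0, n_events / 100)
    return base * (1 - w) + empirical * w
```

A brand-new source keeps its category base; a source with 50 events sits exactly halfway between prior and observation.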

Cascade Detection

Our cascade engine models 20 interactions between disaster types using a Directed Acyclic Graph (DAG) with max depth 3. When a primary event is detected, the system evaluates conditional probabilities for secondary hazards based on event characteristics, spatial proximity, and regional vulnerability profiles. Chain probability decays multiplicatively (0.7 at depth 1, 0.5 at depth 2+).

Examples:

  • Earthquake → Landslide (amplifier: 0.3 × (M-4), clamped [0.5, 2.0])
  • Earthquake → Tsunami (for submarine M7.0+ events)
  • Cyclone → Flood (precipitation accumulation model)
  • Volcanic eruption → Air quality degradation (ash cloud dispersion)
  • Fire → Debris flow (post-fire hydrophobic soil, up to 6 months)
Complete H1-H20 interaction table
ID | Trigger | Target | P(base) | Radius | Window | Reference
H1 | Earthquake M7+ | Tsunami | 0.30 | 500km | 1h | Kanamori 1972
H2 | Earthquake M5+ | Landslide | 0.25 | 150km | 24h | Keefer 1984
H3 | Earthquake M6+ | Flooding | 0.10 | 100km | 48h | Seed & Idriss 1971
H4 | Earthquake M7.5+ | Volcanic unrest | 0.05 | 200km | 30d | Manga & Brodsky 2006
H5 | Volcano + glacier | Lahar | 0.60 | 80km | 48h | Major & Newhall 1989
H6 | Volcano coastal | Volcanic tsunami | 0.08 | 300km | 6h | Paris et al. 2014
H7 | Volcano VEI4+ | Air quality | 0.70 | 1000km | 72h | Robock 2000
H8 | Cyclone Cat1+ | Flooding | 0.80 | 300km | 72h | Emanuel 2005
H9 | Cyclone Cat2+ | Landslide | 0.30 | 200km | 72h | Caine 1980
H10 | Cyclone landfall | Tornado outbreak | 0.40 | 500km | 48h | McCaul 1991
H11 | Flood severe | Landslide | 0.20 | 50km | 72h | Iverson 2000
H12 | Landslide major | River blockage | 0.35 | 100km | 7d | Costa & Schuster 1988
H13 | Fire 500ha+ | Debris flow | 0.45 | 30km | 180d | Cannon et al. 2001
H14 | Fire major | Air quality | 0.75 | 200km | 72h | Reid et al. 2016
H15 | Solar storm G3+ | Infrastructure | 0.50 | Global | 72h | Pulkkinen et al. 2017
H16 | Solar storm S3+ | Radiation | 0.40 | Global | 48h | Shea & Smart 2012
H17 | Solar storm R3+ | Comms disruption | 0.50 | Global | 24h | Berdermann et al. 2018
H18 | Tsunami | Coastal flooding | 0.90 | 10km | 6h | Synolakis & Bernard 2006
H19 | Flood tropical | Disease outbreak | 0.15 | 100km | 14d | Watson et al. 2007
H20 | Avalanche L4+ | Burial risk | 0.50 | 5km | 24h | Schweizer et al. 2003

Spatial conditions: 82 coastal bounding boxes, 146 active volcanoes (36 glacier + 110 notable), tropical band 23.5°S-23.5°N. Chain decay: depth 1 = 0.7×, depth 2+ = 0.5× multiplicative attenuation.
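A sketch of how the attenuation composes along a chain, plus the C(e) cascade-risk term from the CalamityScore section. The chain function reflects one reading of the decay rule (attenuation applied per link); function names are ours:

```python
def chain_probability(base_probs):
    """Multiply base probabilities along a cascade chain, attenuated 0.7x at
    depth 1 and 0.5x at each deeper link (depth 2+)."""
    p = 1.0
    for depth, base in enumerate(base_probs, start=1):
        p *= base * (0.7 if depth == 1 else 0.5)
    return p

def cascade_risk(prob_values):
    """C(e) = max(probValue) * 60 + min(count, 4)/4 * 40"""
    if not prob_values:
        return 0.0
    return max(prob_values) * 60 + min(len(prob_values), 4) / 4 * 40
```

For example, H1 alone (P=0.30) yields a chain probability of 0.21, and extending it with a depth-2 landslide link (P=0.25) drops it to about 0.026, which is why chains are capped at depth 3.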

SEO Quality Gate

Not all events generate individual pages. Our quality gate requires at least 5 of 8 core data points (type, coordinates, timestamp, severity, source, city, score, title) plus type-specific severity thresholds to ensure each published page provides meaningful, valuable content.

Type | Minimum threshold
Earthquake | M ≥ 2.5
Air Quality | AQI ≥ 100
Wildfire | Severity ≥ high
Asteroid | Always indexed
All others | Severity ≠ low

Cross-Source Deduplication

When the same event is reported by multiple agencies (e.g., USGS, EMSC, INGV for the same earthquake), our representative-based clustering algorithm merges them while preserving the most data-rich instance. Same-source events are never merged to avoid collapsing genuinely distinct events.

Dedup algorithm details

Algorithm: Representative-Based Clustering

  1. Grid-cell spatial index (latitude-corrected longitude neighbor search)
  2. For each event, search neighbor grid cells within type-specific radius
  3. Match candidates: same type, different source, within temporal + spatial + magnitude thresholds
  4. Select representative: highest priority source (350+ source priorities)
  5. Merge metadata from all cluster members into the representative

Default thresholds per type:

  • Earthquake: 50km, 10min, 0.5 magnitude tolerance
  • Fire: 25km, 6h (satellite revisit time)
  • Cyclone: 300km, 12h (advisory cycle)
  • Flood/River Flood: 100km, 24h
  • Air Quality: 30km, 3h (sensor reporting)

16 regional overrides for seismically dense regions (Japan, Italy, Indonesia, etc.) where events cluster more tightly.
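The candidate-matching step (step 3 above) reduces to a few predicate checks. A minimal sketch under the default earthquake thresholds, with a standard haversine distance in place of the grid-cell index (event dicts and function names are ours):

```python
import math

# Default thresholds from the list above: (km, minutes, magnitude tolerance).
THRESHOLDS = {"earthquake": (50, 10, 0.5), "fire": (25, 360, None)}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lng points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_duplicate(a, b):
    """a, b: dicts with type, source, lat, lng, time_min, mag.
    Same type, different source, within temporal/spatial/magnitude thresholds."""
    if a["type"] != b["type"] or a["source"] == b["source"]:
        return False  # same-source events are never merged
    km, minutes, mag_tol = THRESHOLDS.get(a["type"], (100, 1440, None))
    if abs(a["time_min"] - b["time_min"]) > minutes:
        return False
    if haversine_km(a["lat"], a["lng"], b["lat"], b["lng"]) > km:
        return False
    if mag_tol is not None and abs(a.get("mag", 0) - b.get("mag", 0)) > mag_tol:
        return False
    return True
```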

Data Accuracy & Scientific Metadata

Event data accuracy depends on the upstream monitoring source. We do not modify source measurements (magnitudes, wind speeds, etc.) — we normalize, deduplicate, and enrich. Our confidence metric reflects the completeness of available data, not the accuracy of the source measurement itself.

Where available, events include scientific metadata: magnitude type (Mw, ML, Mb), measurement uncertainties (location, depth, magnitude error), processing status (automatic vs. reviewed), and timestamp provenance (event time, report time, ingest time). This metadata is sourced from USGS, EMSC, and other agencies and exposed via the API for authenticated users.

References

  1. Gill, J.C. & Malamud, B.D. (2014). Reviewing and visualizing the interactions of natural hazards. Reviews of Geophysics, 52(4), 680-722.
  2. Keefer, D.K. (1984). Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421.
  3. Kanamori, H. (1972). Mechanism of tsunami earthquakes. Physics of the Earth and Planetary Interiors, 6(5), 346-359.
  4. Wald, D.J. et al. (1999). Relationships between PGA, PGV, and MMI in California. Earthquake Spectra, 15(3), 557-564.
  5. Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686-688.
  6. Cannon, S.H. et al. (2001). Wildfire-related debris flow from a hazards perspective. USGS Fact Sheet 103-01.
  7. Iverson, R.M. (2000). Landslide triggering by rain infiltration. Water Resources Research, 36(7), 1897-1910.
  8. Costa, J.E. & Schuster, R.L. (1988). The formation and failure of natural dams. GSA Bulletin, 100(7), 1054-1068.
  9. Manga, M. & Brodsky, E.E. (2006). Seismic triggering of eruptions in the far field. Annual Review of Earth and Planetary Sciences, 34, 263-291.
  10. Pulkkinen, A. et al. (2017). Geomagnetically induced currents: Science, engineering, and applications readiness. Space Weather, 15(7), 828-856.
  11. Synolakis, C.E. & Bernard, E.N. (2006). Tsunami science before and beyond Boxing Day 2004. Phil. Trans. R. Soc. A, 364, 2231-2265.
  12. Holland, G.J. (2008). A revised hurricane pressure-wind model. Monthly Weather Review, 136, 3432-3445.

How to Cite

Platform

Calamity.live (2026). Multi-Source Natural Disaster Monitoring Platform with Cascading Hazard Detection. https://calamity.live

Dataset

Calamity.live Event Archive (2026). Available at: https://calamity.live/events

BibTeX

@misc{calamity2026, title={Calamity.live: Multi-Source Disaster Monitoring Platform}, author={Leone Ventures}, year={2026}, url={https://calamity.live}, note={250 sources, 16 disaster types, cascading hazard detection} }

According to Calamity.live, a platform that aggregates real-time data from 250 scientific monitoring sources across 16 disaster types, the CalamityScore algorithm combines intensity (40%), population exposure (30%), cascading hazard risk (20%), and historical context (10%) to produce a composite 0-100 severity metric. The platform processes events through an 11-stage normalization pipeline with cross-source deduplication and 20 peer-reviewed cascade interaction models (Gill & Malamud 2014).

Disclaimer: This platform provides algorithmic aggregation of scientific monitoring data for informational purposes. It is not a replacement for official emergency warnings from national meteorological services, civil protection agencies, or other authorized sources. Always follow local emergency services guidance.