Any single market-data feed lies a small percentage of the time: stale quotes, frozen fundamentals, broken fields, transient outages. Poll it continuously and that small percentage guarantees a bad data point every few days. AI Stock Monitor pairs multi-source voting with an LLM reasonableness check so that one source's bad day cannot become your bad trade. This article explains the two layers and how they work together.

Layer 1 — Mechanical Vote Across Sources

For every number that matters — price, P/E, dividend yield, implied volatility — the system queries multiple independent providers, throws out garbage values, and takes the median of the majority cluster. Outliers are flagged in the audit trail rather than silently averaged in. The number of providers behind any given field is open-ended — the system is designed so new sources can be added without changing how downstream code consumes the elected value.

  • Most agree → clean election. Sources cluster within tolerance; the majority median is taken and outliers are recorded in the trail. The downstream signal sees a single trustworthy number.
  • One off → outlier discarded. One source disagrees materially; the cluster wins and the outlier is flagged but not used. Common cause: one provider serving a stale cache.
  • Single source → election degraded. Only one source returns a value, so no election is possible; the number is marked degraded in the audit trail and treated with extra care by Layer 2.

Layer 2 — LLM Reasonableness Check

After voting picks a number, an LLM auditor reviews the full picture — that field plus the other fields on the same ticker plus the account context — and asks the questions a human analyst would: does this dividend yield make sense given the price action? Does this P/E square with the recent earnings? Does this option premium look like a stale snapshot from yesterday's close?
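
One way to hand the auditor that full picture is to assemble the elected field, its sibling fields, and the question into a single prompt. The function below is purely illustrative: the name `build_audit_prompt`, the field names, and the REASONABLE/SUSPECT reply format are assumptions, not the product's actual prompt.

```python
import json

def build_audit_prompt(ticker: str, fields: dict[str, float], field_under_review: str) -> str:
    # Give the auditor the whole picture: the suspect field plus its siblings.
    return (
        f"You are auditing market data for {ticker}.\n"
        f"Value under review: {field_under_review} = {fields[field_under_review]}\n"
        f"All fields on this ticker: {json.dumps(fields, sort_keys=True)}\n"
        "Does the value under review fit the rest of the data? "
        "Reply REASONABLE or SUSPECT, with one sentence of justification."
    )
```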

When the LLM flags an inconsistency, the system clears the suspect value, re-fetches from the sources, and re-runs the vote. A field that survives both layers is treated as ready for downstream use. A field that still fails the LLM check after the retry is surfaced to the user as "data quality uncertain" and is never silently consumed by a rule.
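
The clear/re-fetch/re-vote loop reduces to a few lines. This is a sketch under stated assumptions: `fetch_and_vote` and `llm_is_reasonable` are hypothetical callbacks standing in for Layer 1 and Layer 2, and the single-retry default mirrors the behavior described above.

```python
from typing import Callable

def audit_with_retry(
    fetch_and_vote: Callable[[], float],          # re-runs the Layer 1 election from fresh fetches
    llm_is_reasonable: Callable[[float], bool],   # Layer 2 verdict on the elected value
    max_retries: int = 1,
) -> tuple[float, str]:
    for _ in range(max_retries + 1):
        value = fetch_and_vote()
        if llm_is_reasonable(value):
            return value, "ok"                    # survived both layers
    # Still failing after the retry: surface to the user, never feed a rule.
    return value, "data_quality_uncertain"
```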

The most dangerous data bugs are not 404s — they are the reasonable-looking numbers served from a frozen cache. Voting catches the cases where sources disagree; the LLM auditor catches the cases where sources happen to all be wrong in the same way (e.g. all reading from the same stale upstream).

Why Two Layers, Not One

  • Voting alone fails when sources are correlated. Three providers all pulling from the same upstream vendor will all return the same wrong number. The vote sees consensus; the LLM sees that the number does not fit the rest of the data.
  • LLM alone fails on speed and cost. Running an LLM check on every field for every ticker every minute is slow and expensive. Voting cheaply screens out the easy cases so the LLM can focus where it actually adds value.
  • Together they cover the full failure surface. Vote handles single-source failures; LLM handles correlated-source and semantic-anomaly failures.
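
The correlated-failure case in the first bullet can be made concrete. All values below are illustrative, and the arithmetic stands in for one question the LLM auditor can ask; the real check is a judgment over the full context, not a single formula.

```python
from statistics import median

# Three providers all read from the same frozen upstream: the vote sees consensus.
quotes = {"src_a": 80.0, "src_b": 80.0, "src_c": 80.0}
elected_price = median(quotes.values())            # unanimous, so Layer 1 passes it

# Layer 2 cross-checks the elected price against sibling fields on the same ticker:
# does the yield the feed reports square with the price the vote elected?
annual_dividend = 2.00                             # illustrative feed values
reported_yield = 0.020                             # feed says 2.0%
implied_yield = annual_dividend / elected_price    # elected price implies 2.5%

suspect = abs(implied_yield - reported_yield) / reported_yield > 0.10
# suspect is True: the unanimous price does not fit the rest of the data.
```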

Layer 1: vote across N independent sources, take the majority-cluster median
Layer 2: LLM auditor checks the elected number against context · re-fetches if it looks off
Together: catches both single-source failures and correlated-source mistakes

See voted + audited data on the dashboard →