
AI in getbased — what's powered, what isn't

This page is the canonical map of where AI runs in the app. Three sections:

  1. Light & Sun verdicts — ten per-state and per-event surfaces inside the Light & Sun lens. All auto-fire when you have data.
  2. AI elsewhere in the app — the chat panel, Interpretive Lens, context-card dots, PDF + photo import, EMF interpretation, and others.
  3. What is NOT AI — the deterministic-math layer (channel doses, %MED, IU yields, trend alerts, PhenoAge, calculated markers). This section is the largest because the math is auditable and reproducible — those numbers are NOT generated by an LLM.

Every surface that DOES use AI requires a configured provider in Settings → AI. The chat panel + Interpretive Lens read your full assembled lab context; per-row verdicts read just their own data slice. Privacy gates (e.g., body regions opt-in) are honored across all surfaces.

Light & Sun verdict shape

Every Light & Sun verdict has the same shape:

  • A colored dot — green / yellow / red / gray (more on these below)
  • A tip — one sentence (≤14–18 words) summarizing the verdict
  • A detail — 1–4 sentences of context citing your specific numbers and the biology behind them
  • A refresh button (↻) — re-runs the analysis with a fresh API call

Dots:

| Dot | Meaning |
| --- | --- |
| 🟢 green | On-protocol — the surface is doing what your goals say it should |
| 🟡 yellow | Mostly OK with one or two specific gaps to address |
| 🔴 red | Counterproductive or unsafe in the context of your goals |
| ⚪ gray | Not enough data to judge |
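In practice, a stored verdict can be pictured as a small record like the sketch below. The field names and values are illustrative, not the app's actual schema:

```js
// Illustrative shape of a per-row verdict — field names are hypothetical.
const verdict = {
  dot: 'yellow',        // 'green' | 'yellow' | 'red' | 'gray'
  tip: 'Solid morning sun; evening bedroom light is still too cool.',
  detail: 'Your 09:10 session covered the body-clock channel well, but the ' +
          'bedroom reading of 6500 K after 21:00 works against your sleep goal.',
  fingerprint: 'a41f9c…',                 // hash of the data slice the verdict was generated from
  generatedAt: '2026-05-12T14:03:00Z',    // timestamp of the generating API call
};
```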

Where verdicts appear

All ten Light & Sun verdicts auto-fire when you have data — no buttons to remember. Each one has a manual ↻ refresh on the verdict block if you want a fresh read.

Per-event verdicts (fire after a discrete action)

  • Sun session row — fires when you tap Stop & save on a sun session, or log a completed session after the fact. Lives at the bottom of each session row + at the top of the session detail modal.
  • PBM device-session row — fires when you stop a live therapy timer or log a finished session on a panel / SAD lamp / dawn simulator / UVB phototherapy. Same placement as sun-session verdicts.
  • Light Tool measurement — fires when you save any reading from a measurement tool (Lux Meter, Flicker Detector, CCT Meter, Spectrum Classifier, Sleep Darkness, Glass Transmission). Lives below the reading row in the room panel.
  • Audit verdict — fires when you save a Light Audit (a frozen snapshot of your environment). Appears at the top of the audit detail. A small colored dot also appears in the audit card header for at-a-glance status across multiple audits.
  • Onboarding plan — fires when you complete the Light & Sun setup card (skin type, eyewear, home lighting, Ott burden audit). Generates a personalized starting plan with three concrete first-week actions. Appears below the saved-setup chips.

Per-state verdicts (fire on render when you have enough data)

These surfaces don't have a clean "I'm done" trigger — you might be mid-edit in a chip-picker, adding rooms, or watching the daily rollup shift as new sessions land. Before 2026-05-08 they had a manual Analyze button to avoid burning API calls during edit sessions; the per-render auto-fire now uses a fingerprint of the underlying state plus a per-tab-session guard, so it fires once per meaningful state shift rather than on every keystroke.
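A minimal sketch of that guard, assuming a stable hash of the relevant state slice (all names here are hypothetical):

```js
// Illustrative only — not the app's actual implementation.
const firedThisTab = new Map(); // surfaceId → fingerprint last fired in this tab session

function maybeAutoFire(surfaceId, stateSlice, runVerdict) {
  const fingerprint = hashStable(stateSlice);               // assumed helper: hash of canonicalized state
  if (firedThisTab.get(surfaceId) === fingerprint) return;  // same state already fired this session
  firedThisTab.set(surfaceId, fingerprint);
  runVerdict(surfaceId, stateSlice, fingerprint);           // may still hit the fingerprint cache (see below)
}
```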

  • Light Today daily hero (+ dashboard chip) — fires the first time you visit the Light & Sun page each day, and re-fires when the cache is stale. Synthesizes your day's full picture (sun + devices + tools + environment + recent biomarker context) into a single verdict at the top of the page. The same verdict appears as a compact chip on the dashboard's Light Today strip.
  • Light Environment room — auto-fires when a room has a primary source set OR at least one measurement. Empty rooms skip auto-fire so a freshly added blank room doesn't burn an API call before you finish editing. Lives inside the room's expanded body.
  • Per-screen — auto-fires when a device type is set (default 'phone'). Lives inside the expanded screen card.
  • Indoor-burden summary — auto-fires when you have at least one room or screen mapped. Lives at the top of the Light Environment block (above the audit list).
  • Channel-mix synthesis — auto-fires when at least one channel has non-zero exposure in the rolling 7 days. Brand-new users with no logs skip until they have data worth interpreting. Lives in the "Your light, by what it does" section.

Caching and force-refresh

Every verdict is cached against a fingerprint of the underlying data. If the data hasn't changed, the cached verdict is returned without a fresh API call — the refresh button (↻) is also a no-op in that case (preserves your verdict text + saves the API call).

When the data has changed (you edited a room, logged a new session, completed a measurement), the fingerprint mismatch invalidates the cache. The next render shows a "refresh AI verdict — your setup changed" CTA.
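In code terms, the cache lookup can be sketched like this (hypothetical names; the real storage layer may differ):

```js
// Illustrative fingerprint-keyed cache — not the actual storage code.
async function getOrRefreshVerdict(surfaceId, stateSlice) {
  const fingerprint = hashStable(stateSlice);
  const cached = await verdictCache.get(surfaceId);
  if (cached && cached.fingerprint === fingerprint) {
    return cached;            // data unchanged: no API call; ↻ also returns this
  }
  // fingerprint mismatch → show the "refresh AI verdict — your setup changed" CTA
  return { stale: cached, needsRefresh: true, fingerprint };
}
```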

Cross-device sync

Verdicts live on the same row as the data they describe — sun-session verdicts on the session row, room verdicts on the room row, and so on. They sync to your other devices via the same per-row CRDT path the rest of your data uses. Latency is typically sub-10 seconds (the engine pushes immediately after writing, skipping the usual debounce).
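A sketch of the write path, assuming a sync layer that exposes both a debounced push and an immediate flush (hypothetical names):

```js
// Illustrative — the actual CRDT engine API will differ.
function saveVerdict(row, verdict) {
  row.verdict = verdict;   // the verdict lives on the same row as the data it describes
  syncEngine.write(row);   // normal per-row CRDT write
  syncEngine.flush();      // push immediately, skipping the usual debounce
}
```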

What if the verdict seems wrong?

The AI is reasoning over your inputs, including measurement context. A few common failure modes worth knowing about:

  • A webcam at your monitor, pointed at you, is fine for the lux meter (it sees the light hitting your face), but it underreads CCT, biases the spectrum classifier toward "warm LED" regardless of the actual ceiling source, and attenuates flicker amplitude. The aiming guide inside each tool modal calls this out per tool. For anything other than lux, use a phone and aim the camera at the source rather than measuring from a fixed position.
  • Cached vs current: if you edited a room recently and the verdict still references old numbers, the fingerprint should have invalidated and the CTA should say "your setup changed". Click ↻ to regenerate.
  • Brand-name endorsement: the AI is instructed to never name specific brand products, only categories ("DC-dimmable LED", "incandescent or halogen"). If a verdict mentions a specific product brand, that's a regression — please open an issue.

Hardware advice

Verdicts that recommend lighting hardware all share a load-bearing prompt block of caveats. The most important: do not recommend a generic "dimmable LED" as a fix for measured flicker — most consumer LED dimming uses pulse-width modulation, which IS the flicker source. The recommendation has to qualify ("DC-dimmable", "high-frequency PWM ≥2 kHz", "filament at fixed low warmth") or pivot to a non-dimming fix (multiple low-wattage warm bulbs on separate switches, candles for the lowest evening setting, or true incandescent / halogen for bedside fixtures).
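Conceptually this is a single block of caveat text shared across those verdict prompts. The wording below is a paraphrase of the rules described above, not the literal prompt:

```js
// Paraphrased illustration of the shared caveat block — not the actual prompt text.
const HARDWARE_CAVEATS = `
When recommending lighting hardware:
- Never suggest a generic "dimmable LED" as a fix for measured flicker;
  most consumer LED dimming is PWM, which is itself the flicker source.
- Qualify LED suggestions: "DC-dimmable", "high-frequency PWM ≥ 2 kHz",
  or "filament at a fixed low warmth".
- Otherwise pivot to non-dimming fixes: multiple low-wattage warm bulbs on
  separate switches, candles for the lowest evening setting, or true
  incandescent / halogen for bedside fixtures.
- Recommend categories, never specific brand products.
`;
```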

Provider requirements

Verdicts require an AI provider configured in Settings → AI. Supported providers: OpenRouter, PPQ, Routstr, Venice, Local AI (Ollama / LM Studio / Jan), Custom. Cost per verdict is roughly $0.003–0.01 on commercial providers (about 600–1,500 input + 100–300 output tokens). Local AI is free.
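As a sanity check on that range, here is the arithmetic under a hypothetical commercial price of $2 per million input tokens and $10 per million output tokens (actual provider rates vary):

```js
// Hypothetical pricing — substitute your provider's actual per-token rates.
const inputTokens = 1500, outputTokens = 300;                // upper end of a typical verdict
const usd = inputTokens * (2 / 1e6) + outputTokens * (10 / 1e6);
console.log(usd.toFixed(4));                                 // "0.0060" — inside the $0.003–0.01 range
```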

Disabling verdicts

If you want to keep your AI provider configured for the chat panel and Interpretive Lens but pause the per-row verdicts (e.g., during a budget-sensitive month):

window.DISABLE_AI_VERDICTS = true

Run that in DevTools Console. All ten surfaces will short-circuit until you remove the flag (or reload — the flag doesn't persist).
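Internally you can picture every verdict surface checking the flag before doing any work — an illustrative guard, not the actual code:

```js
// Illustrative short-circuit at the top of each verdict surface.
function requestVerdict(surfaceId, stateSlice) {
  if (window.DISABLE_AI_VERDICTS) return null;              // all ten surfaces bail out here
  return maybeAutoFire(surfaceId, stateSlice, runVerdict);  // hypothetical helper from the sketch above
}
```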

AI elsewhere in the app

The ten Light & Sun verdicts above are one corner of the app's AI surface. Below is every place AI runs in the rest of the app. All of them require a configured provider in Settings → AI.

Text-generation surfaces

| Surface | Where | What it does |
| --- | --- | --- |
| AI chat panel | Slide-out at bottom-right, every page | Free-form conversation grounded in your full lab data, context cards, supplements, wearables, sun + light, genetics. Streaming, with personalities + thread history. |
| Interpretive Lens | Dashboard top, "Lens" pill | Routes a focused query through your knowledge base (research papers, notes, books). Uses the on-device transformers.js RAG by default, or an external server you configure. |
| Focus Card | Dashboard, top of the page | A 1–3 sentence "what to focus on right now" rollup synthesized across your goals + recent biomarker drift + active light/wearable signals. Auto-fires when its fingerprint changes. |
| Context-card AI dots | 9 dashboard cards (Health Goals, Diet, Exercise, Sleep, Light & Circadian, Stress, Love Life, Environment, Medical Conditions) | A health-dot + 8-word tip per card, summarizing whether the card's filled-in content supports or undermines your stated goals. Manual ↻ refresh. |
| Onboarding chat wizard | First-time visitors, before any data | 5-stage AI-driven setup flow (profile → API key → extras → context cards → has-data nudge). |
| Custom-marker description | Marker detail modal, when AI is configured | One-sentence (≤30 words) explanation of what the marker measures + why it matters clinically. Cached per-marker so it only fires the first time. |
| EMF assessment interpretation | EMF assessment editor → Interpret | Synthesizes a room-by-room read of your EMF measurements with severity, source attribution, and mitigation recommendations. Manual trigger. |
| EMF comparison interpretation | EMF editor → Compare two snapshots | Side-by-side delta read between two saved assessments — what got better, what got worse. Manual trigger. |
| Recommendations / supplement context | Supplement detail | "Mito context" button on a supplement opens an AI synthesis of how the ingredient interacts with the user's mitochondrial / energy biomarkers. Manual trigger. |

Vision-AI surfaces (image-in)

| Surface | Where | What it does |
| --- | --- | --- |
| PDF lab import | Drop a PDF on the dashboard or sidebar | Extracts text → obfuscates PII (regex + optional local-AI streaming sanitizer) → AI parses values + maps to schema markers. Specialty labs (OAT, fatty acids) flow through the custom-marker path. (See the pipeline sketch below the table.) |
| Photo lab import | Same drop-zone, accepts JPG / PNG / HEIC | Vision-extract path — same downstream as PDF, but starts from a phone snap of a printed lab report. |
| Custom-marker suggestion (from PDF) | Inside the PDF import flow | When a parsed value can't be matched to a known marker, AI suggests a key + name + unit + reference range based on the surrounding PDF context. |
| Supplement label scan | Supplements → "📷 Scan label" | Vision-extract product name + active ingredients (with dosage per serving) from a photo of a supplement bottle. Skips fillers + excipients. |
| Spectrum Classifier | Light Tools → "What is this light?" | Camera-based light-source classifier (LED cool / LED warm / fluorescent / incandescent / daylight). Uses RGB ratios + flicker variance — the deterministic part is the classifier itself; the AI only kicks in for the verdict on the saved measurement (covered in Light & Sun above). |
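The PDF and photo import rows share the same downstream path once text is extracted. A rough sketch, with hypothetical function names (the real import code is organized differently):

```js
// Hypothetical pipeline sketch — function names are illustrative, not the app's API.
async function importLabReport(file) {
  const rawText = file.type === 'application/pdf'
    ? await extractPdfText(file)        // text layer of the PDF
    : await visionExtractText(file);    // vision call on a photo (JPG / PNG / HEIC)

  const sanitized = obfuscatePII(rawText);            // regex pass + optional local-AI streaming sanitizer
  const parsed = await aiParseLabValues(sanitized);   // AI maps values to schema markers

  for (const value of parsed.unmatched) {
    suggestCustomMarker(value);         // unmatched values go through the custom-marker suggestion path
  }
  return parsed;
}
```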

What is NOT AI (so you can trust the numbers)

A lot of the app is deterministic math, not AI inference. If you see a number on a chart or a tier on a pill, it came from one of these:

| Surface | Source |
| --- | --- |
| Channel pills (Vitamin D / Body clock / Cellular repair / Cardiovascular / Mood & hormones / Outdoor eye light) | Bird-Riordan spectrum reconstruction → action-spectrum convolution → per-channel dose. Reproducible photobiology math, citations in /docs/contributor/sun-spectrum-model. |
| % MED burn dose | Erythemal action spectrum × Fitzpatrick skin-type threshold. CIE S 007 / ISO 17166. |
| Vit-D IU yield | Calibrated against Webb 2018 + dminder + NIWA. Not AI-generated. |
| Reference ranges (refMin / refMax) on lab charts | Whatever the lab printed on the PDF, OR your own override. Never AI-generated. |
| Trend alerts ("dropped 25%" / "trending up") | Linear regression slope + R² thresholds in data.js. Algorithmic, not LLM. |
| PhenoAge biological age | Levine 2018 formula over 9 biomarkers + age. Closed-form math. |
| Calculated markers (Free Water Deficit, BUN/Creatinine ratio, HOMA-IR) | Published formulas with conversion math for unit systems. (See the worked example below the table.) |
| Channel doses on dashboard / per-session | Same Bird-Riordan reconstruction; no AI. |
| Trend sparklines on context cards | Direct plot of stored values. |
| Severity dots on light-environment rooms | Heuristic tier from measured flicker / lux / CCT thresholds (getRoomSeverity in light-env.js). The AI verdict that follows the severity dot is AI; the dot itself is not. |
| Custom marker creation, sidebar edits, chip-picker changes, manual-value entry | Local data writes. No AI call fires; the AI sees them on the next chat turn. |
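As an example of what "published formulas" means in practice, here is the standard HOMA-IR calculation — an illustrative implementation, not necessarily how the app's calculated-marker code is written:

```js
// HOMA-IR per the published formula (Matthews et al., 1985): glucose × insulin / 405,
// with fasting glucose in mg/dL and fasting insulin in µIU/mL. Illustrative only.
function homaIR(glucoseMgDl, insulinUiuMl) {
  return (glucoseMgDl * insulinUiuMl) / 405;
}

homaIR(90, 8); // ≈ 1.78 — plain arithmetic, no model call anywhere
```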

What runs on-device vs cloud

| Path | Where it runs |
| --- | --- |
| Knowledge Base (Interpretive Lens default) | On-device transformers.js + OPFS — your knowledge documents never leave the browser. |
| Local AI provider (Ollama / LM Studio / Jan, configured in Settings → AI → Local AI) | Your machine. No network calls beyond your localhost. |
| PII obfuscation (local-AI streaming sanitizer) | Optional, runs through your Local AI before any cloud call. |
| Cloud providers (PPQ / Routstr / OpenRouter / Venice / Custom) | Whichever endpoint you configured. Venice uses an E2EE branch (ECDH + AES-GCM) so the provider doesn't see your prompts in plaintext. (See the generic sketch below the table.) |
| Camera frames (light tools, label scan) | Captured + processed in the browser, then either kept on-device (light-tools metric extraction) or sent as a single base64 image to the configured AI provider for the vision call (label scan, photo lab import). |
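For readers unfamiliar with the ECDH + AES-GCM pattern, here is what it looks like in browser WebCrypto in general terms. This is a generic sketch, not the app's actual E2EE implementation — the real curve, key derivation, and message framing will differ:

```js
// Generic WebCrypto sketch: ECDH key agreement, then AES-GCM encryption.
async function encryptForPeer(peerPublicKey, plaintext) {
  const myKeys = await crypto.subtle.generateKey(
    { name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey']);

  const aesKey = await crypto.subtle.deriveKey(
    { name: 'ECDH', public: peerPublicKey }, myKeys.privateKey,
    { name: 'AES-GCM', length: 256 }, false, ['encrypt']);

  const iv = crypto.getRandomValues(new Uint8Array(12));   // fresh 96-bit IV per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, aesKey, new TextEncoder().encode(plaintext));

  return { ciphertext, iv, senderPublicKey: myKeys.publicKey };
}
```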

Caching + privacy notes

  • Every per-row verdict caches against a fingerprint of the underlying data. If nothing's changed, the cached verdict is returned without a fresh API call.
  • The chat panel + Interpretive Lens both pull from buildLabContext() — the assembled section block of your full data minus anything you've gated off (e.g., the per-profile "Share body regions in Sun & Light context" toggle in Settings → AI, which keeps coverage fraction + preset names in but strips the anatomical region list).
  • The body-regions toggle is the only privacy gate that affects the agent slice + chat context shape. All other data flows through whenever it exists, same as the rest of the app.
  • Custom-marker creation, manual-value entry, sidebar edits, and chip-picker changes do NOT trigger AI calls — they update local data, AI sees them on the next chat turn or the next per-state verdict render.
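A rough sketch of how that body-regions gate shapes the Sun & Light context block (hypothetical names; the real buildLabContext() assembles many more sections):

```js
// Hypothetical sketch — the actual buildLabContext() covers far more than this.
function buildSunLightSection(profile, sunData) {
  const section = {
    coverageFraction: sunData.coverageFraction,   // always included
    coveragePreset: sunData.presetName,           // always included
  };
  if (profile.shareBodyRegions) {                 // the Settings → AI per-profile opt-in
    section.bodyRegions = sunData.regionList;     // anatomical region list only with consent
  }
  return section;
}
```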
