LifeJet

Three failures define the competitive landscape, and they're all the same failure wearing different masks. These products treat conversation as a delivery mechanism for information. A good practitioner builds understanding over time. That's a fundamentally different thing.
No memory. Every session starts from zero. You re-explain your history, your medications, your constraints. A real practitioner builds a picture over months. Visit six is informed by visits one through five.
No restraint. The typical health chatbot responds to a question about fatigue with ten supplements, five dietary changes, and three exercise protocols. A wall of text that makes you more tired than before you opened the app. A real practitioner suggests one thing: the highest-leverage intervention for this person at this moment.
No feedback loop. You get advice, close the app, and never follow up. There's no mechanism to ask "did it work?" A real practitioner tries something, observes, and adjusts. The iteration is the treatment.
LifeJet had to solve all three at once: persistent memory, disciplined restraint, and closed-loop tracking, all delivered through conversation that feels like talking to someone who actually listens.

The insight that shaped the entire product came from studying how good practitioners actually talk. They follow a pattern: they reflect before they analyze. They form one hypothesis rather than five. They tell you what not to do. They give you one thing to watch for. They close with something that makes you feel capable rather than anxious.
Working closely with the founding team, I translated this into a seven-phase conversation arc that governs every LifeJet response.
Mirror. Reflect the experience back. Remove blame framing. The user should feel heard before anything analytical happens.
System Explanation. Plain-language physiology. Not "your HPA axis is dysregulated" but "when you're stressed for weeks, your body starts treating 2am like a threat."
Reframe. Shift the mental model. "You don't have an energy problem, you have a blood sugar timing problem."
Keystone Action. One experiment. Time-specific, mechanism-linked, and small enough to actually do.
What Not to Do. Direct and non-judgmental. This prevents the overcorrection that derails most self-directed health attempts.
Tracking Prompt. One micro-observation question. Something noticeable within 24 to 72 hours. This becomes the opening of the next conversation.
Anchoring Truth. An emotional landing that leaves the user feeling capable: curious instead of anxious.
This isn't a state machine. It's a conversational rhythm. The AI follows the arc naturally, the way a skilled practitioner moves through a session without reading from a checklist. Some conversations need more time in the mirroring phase. Others need deeper system explanation. The structure provides coherence without rigidity.
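As a rough sketch of how an arc like this might be encoded, here is one way to express the seven phases as loose prompt guidance rather than an enforced state machine. The `Phase` enum and `arc_prompt_section` helper are illustrative assumptions, not LifeJet's production implementation:

```python
from enum import Enum

class Phase(str, Enum):
    """The seven phases, in conversational order."""
    MIRROR = "mirror"
    SYSTEM_EXPLANATION = "system_explanation"
    REFRAME = "reframe"
    KEYSTONE_ACTION = "keystone_action"
    WHAT_NOT_TO_DO = "what_not_to_do"
    TRACKING_PROMPT = "tracking_prompt"
    ANCHORING_TRUTH = "anchoring_truth"

# One line of guidance per phase, woven into the system prompt as a rhythm
# to follow rather than a checklist to execute.
PHASE_GUIDANCE: dict[Phase, str] = {
    Phase.MIRROR: "Reflect the experience back, without blame, before any analysis.",
    Phase.SYSTEM_EXPLANATION: "Explain the physiology in plain language.",
    Phase.REFRAME: "Shift the user's mental model of the problem.",
    Phase.KEYSTONE_ACTION: "Propose exactly one small, time-specific experiment.",
    Phase.WHAT_NOT_TO_DO: "Name the tempting overcorrection to avoid, without judgment.",
    Phase.TRACKING_PROMPT: "Ask one observation question answerable within 24-72 hours.",
    Phase.ANCHORING_TRUTH: "Close on a note that leaves the user curious, not anxious.",
}

def arc_prompt_section() -> str:
    """Render the arc as numbered prompt guidance the model follows loosely."""
    steps = "\n".join(f"{i}. {PHASE_GUIDANCE[p]}" for i, p in enumerate(Phase, 1))
    return "Move through this rhythm, flexing how long you dwell in each phase:\n" + steps
```

The key design choice is that the phases live in the prompt as guidance, which lets the model dwell longer in mirroring or system explanation when a conversation calls for it.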

The memory system was the hardest engineering problem and the most important product decision. Health relationships compound over time, or they don't work at all.
I designed a five-category memory taxonomy, and the retention periods for each category are product decisions as much as they are engineering ones.
| Category | What It Stores | Retention | Why This Duration |
|---|---|---|---|
| Identity profile | Demographics, diagnoses, pronouns | No expiration | These rarely change |
| Health constraints | Symptoms, medications, lab results, contraindications | 365 days | Medications change slowly |
| Intervention protocols | Active experiments, adherence, what worked and what didn't | 180 days | Old experiments shouldn't haunt new ones |
| Conversation summaries | Session recaps for continuity | 30 days | You need last month's hypothesis, not last month's small talk |
| Pattern insights | Cross-system correlations the agent has identified | 90 days | Long enough to validate, short enough to refresh |
After every response, a classifier model evaluates the conversation and decides what to store, in which category, with what metadata. The next session loads all relevant memories into context. There are no intake forms and no "tell me about yourself" loops. The relationship compounds naturally, the way it does with a practitioner you've been seeing for months.
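A minimal sketch of the retention side, assuming hypothetical category keys and a simple `MemoryRecord` shape of my own (the production system classifies with a model and persists via Mem0):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Callable

# Retention windows mirror the table above; None means no expiration.
RETENTION_DAYS: dict[str, int | None] = {
    "identity_profile": None,
    "health_constraints": 365,
    "intervention_protocols": 180,
    "conversation_summaries": 30,
    "pattern_insights": 90,
}

@dataclass
class MemoryRecord:
    category: str
    content: str
    expires_at: datetime | None  # None = never expires

def store_memory(category: str, content: str) -> MemoryRecord:
    """Stamp a classified memory with its category's retention policy."""
    days = RETENTION_DAYS[category]
    now = datetime.now(timezone.utc)
    expires = now + timedelta(days=days) if days is not None else None
    return MemoryRecord(category, content, expires)

def ingest_turn(transcript: str, classify: Callable) -> list[MemoryRecord]:
    """After each response: classify what's worth keeping, then store it.

    `classify` stands in for the classifier model; assume it returns
    (category, content) pairs for anything worth remembering.
    """
    return [store_memory(cat, text) for cat, text in classify(transcript)]

def load_active(records: list[MemoryRecord]) -> list[MemoryRecord]:
    """At session start, surface only memories that haven't aged out."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.expires_at is None or r.expires_at > now]
```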
The backend is a Python/FastAPI service hosting the agent via OpenAI's Agent Kit. Four separate vector stores power multi-store RAG: the LifeJet knowledge base, Institute for Functional Medicine frameworks, PubMed abstracts, and curated wellness content. The agent decides at each turn which stores to query and whether to query at all, with query rewriting enabled for semantic expansion.
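The per-turn store routing might look something like the following sketch. `choose_stores`, `rewrite_query`, and `search` stand in for the model-driven pieces; they are assumptions for illustration, not Agent Kit APIs:

```python
# The four stores the agent can consult on any given turn.
STORES = (
    "lifejet_kb",        # LifeJet knowledge base
    "ifm_frameworks",    # Institute for Functional Medicine frameworks
    "pubmed_abstracts",  # PubMed abstracts
    "wellness_content",  # curated wellness content
)

def retrieve(question: str, choose_stores, rewrite_query, search) -> list[dict]:
    """One retrieval turn: pick stores (possibly none), rewrite, then search."""
    selected = choose_stores(question, STORES)  # model decides; empty means skip RAG
    if not selected:
        return []  # the agent can answer from context alone
    expanded = rewrite_query(question)  # semantic expansion of the user's phrasing
    hits = []
    for store in selected:
        for chunk in search(store, expanded, top_k=4):
            # source attribution travels with every retrieved chunk
            hits.append({"store": store, "text": chunk["text"], "source": chunk["source"]})
    return hits
```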
The frontend runs on Next.js 15 and React 19, with OpenAI ChatKit for the conversation interface, Tailwind v4 and shadcn/ui for the design system, Drizzle ORM over PostgreSQL, and Clerk for authentication.
The tool architecture was designed around reasoning needs rather than product features. There are five core tools.
RAG vector lookup. Queries across all four stores with semantic rewriting and source attribution.
Plan builder. Structures prompts across five health pillars.
Intake clarifier. Tracks ten intake categories, with a five-minute cooldown to prevent question bombardment.
Memory tools. Add and search long-term memories via Mem0.
Web search. Feature-flagged, for optional real-time retrieval.
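The clarifier's cooldown is worth sketching because it encodes the restraint principle directly in code. A minimal version, with the ten category names abstracted away since they aren't specified here:

```python
import time

COOLDOWN_SECONDS = 5 * 60  # five minutes between clarifying questions
TOTAL_CATEGORIES = 10      # intake categories the clarifier tracks

class IntakeClarifier:
    """Rate-limits clarifying questions so intake never feels like a form."""

    def __init__(self) -> None:
        self.covered: set[str] = set()        # categories already answered
        self.last_asked: float | None = None  # monotonic timestamp of last question

    def mark_covered(self, category: str) -> None:
        self.covered.add(category)

    def may_ask(self) -> bool:
        """Ask only if something is uncovered and the cooldown has elapsed."""
        if len(self.covered) >= TOTAL_CATEGORIES:
            return False  # intake is complete
        if self.last_asked is None:
            return True   # first question is always allowed
        return time.monotonic() - self.last_asked >= COOLDOWN_SECONDS

    def record_question(self) -> None:
        self.last_asked = time.monotonic()
```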
Not every user needs the same depth, so I designed three response modes that reshape the agent's behavior, tool access, and formatting.
Guide is the daily-use mode. Responses are 120 to 220 words with a maximum of six bullets. The design goal is a "two-minute promise": after reading, you feel seen rather than broken, you understand what's happening in your body, and you know what to try next. This mode respects time and cognitive load above everything else.
Deep Dive is longer and structured with headings. It includes systems mapping with mechanism reasoning and evidence citations from the knowledge base. This is for users who want to understand the full picture of what's going on.
Practitioner produces a structured clinical snapshot: presenting concern, key context, pattern hypotheses, confounders, red flags, follow-up questions, low-risk experiments, tracking plan, and reassessment loop. It's designed for users to print or share directly with their clinician, bridging self-directed exploration and professional care.
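One plausible way to represent the modes is a frozen config the agent consults before formatting. The Guide numbers come straight from the spec above; the Deep Dive and Practitioner budgets below are placeholders, since the source only describes their structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponseMode:
    """Per-mode constraints on length, structure, and output shape."""
    max_words: int
    max_bullets: int | None   # None = no bullet cap
    use_headings: bool
    clinical_snapshot: bool   # Practitioner-style structured output

MODES: dict[str, ResponseMode] = {
    "guide": ResponseMode(max_words=220, max_bullets=6,
                          use_headings=False, clinical_snapshot=False),
    "deep_dive": ResponseMode(max_words=800, max_bullets=None,
                              use_headings=True, clinical_snapshot=False),
    "practitioner": ResponseMode(max_words=1200, max_bullets=None,
                                 use_headings=True, clinical_snapshot=True),
}
```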
The governing design principle across all three modes is that the conversation is the product. Cards appear only at high-leverage moments, two to three per response at most. Experiment cards show one active experiment with its mechanism, timeframe, and tracking prompt. The home screen is a single focused card showing what you're trying and what to notice. If you stripped all the formatting and read the response aloud, it should still feel like great coaching.
This product couldn't be designed without understanding agent orchestration, and it couldn't be engineered without understanding what makes a health conversation feel trustworthy. The seven-phase arc is conversation design that required engineering thinking. The memory taxonomy is engineering that required conversation design. That overlap is why they brought me in.
60,000 on the waitlist. Active testing with a health influencer community.







