TLKA Build Plan
pronounced "Tel-ka"

Thought Leadership Knowledge Assistant

An internal, private AI tool for the PMI thought leadership team. Ask questions across our publications in plain language. Check whether a draft aligns with the B.O.S.S.'s documented views. Surface what we have already said about a topic.

Not a content generator. Research support, alignment aid, and knowledge exploration only. Every answer cites its sources.

Five agents, ten epics, twenty-five user stories, six phased increments, and one rule that holds across all of them: every answer cites its source.

The brief in one paragraph

TLKA gives the team a single place to upload published PMI content (reports, articles, transcripts, slides) and then ask questions about it in plain language. It also maintains a growing library of the key stakeholders' (the B.O.S.S.'s) documented views, built collectively by the whole team. Every answer cites its sources. Outputs are advisory only and require human validation before any downstream use.

What TLKA does

Q&A
Ask in plain language.

Type a question. TLKA reads relevant uploaded documents and writes a cited summary. By default it searches current documents only; flip the "Include historical" toggle to widen the scope. Filter by topic or person before asking.

B.O.S.S. MODE
See what stakeholders think.

Switch into B.O.S.S. MODE to query only the stakeholder snippet library. Optionally narrow to a single stakeholder.

Alignment
Check your draft.

Paste a draft and select a comparison target. Get a verdict (aligned, neutral, or contradicts) plus the exact agreements and disagreements, with citations.

Themes
Explore recurring framing.

Cosine-based clustering surfaces what we keep saying about a topic over time, with representative excerpts and labels.
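The plan names only "cosine-based clustering"; one illustrative way to do that is a greedy pass that assigns each excerpt embedding to the first cluster whose centroid clears a similarity threshold. This is a minimal sketch, not TLKA's actual pipeline, and the function names (`cosine_sim`, `greedy_cluster`) and the 0.8 threshold are hypothetical:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def greedy_cluster(embeddings, threshold=0.8):
    """Assign each embedding to the best existing cluster whose running
    centroid it matches above `threshold`; otherwise start a new cluster."""
    clusters = []    # list of lists of excerpt indices
    centroids = []   # running mean vector per cluster
    for i, vec in enumerate(embeddings):
        best, best_sim = None, threshold
        for c, centroid in enumerate(centroids):
            sim = cosine_sim(vec, centroid)
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([i])
            centroids.append(np.array(vec, dtype=float))
        else:
            clusters[best].append(i)
            n = len(clusters[best])
            # Incremental mean update keeps the centroid current.
            centroids[best] = centroids[best] + (vec - centroids[best]) / n
    return clusters

# Two near-duplicate vectors and one orthogonal one fall into two clusters.
vecs = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]
print(greedy_cluster(vecs))  # → [[0, 1], [2]]
```

A production version would more likely use a library clusterer (e.g. agglomerative clustering with a cosine metric), but the thresholding idea is the same.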

Collaboration
Discuss in context.

Thread comments on any document or snippet. Keep private notes for personal interpretation. Mention teammates with @.

Oversight
Admin transparency.

The activity log records every upload, query, alignment run, and edit. Admins see metadata, never message content.

What TLKA does not do

Out of scope by design

It does not generate new content. It does not access the internet or any external database. It does not produce official PMI positions. It is not open to the public. Each user's chat history is private, even from admins. The Governance page lists the full out-of-scope set.

Cost at pilot scale

Estimates for 5, 10, and 15 active team members during the pilot. Numbers cover the running cost of TLKA only (infrastructure plus model API); development effort is separate. Two retrieval architectures are referenced below: Pilot RAG (cheaper, smaller context) and Hybrid RAG (richer context, introduced at Phase 4.5). The Tech page covers the difference.

Pilot constraint

The pilot accepts only public-classification documents. Object storage (S3 / R2), server-side encryption, and Sentry are deferred to Phase 5 when uploads may include internal or restricted material. Until then, documents live on the OS-level encrypted VPS disk and exceptions land in the rotated log file.

One-time costs

| Item | Cost | Notes |
| --- | --- | --- |
| Initial corpus embedding | ~$1 | One pass over 300k tokens at $0.13 / 1M. Re-runs at Phase 4.5 cost the same. |
| Forge VPS provisioning, domain, SSL | ~$15 | SSL via Let's Encrypt is free. Forge setup is one-off admin time. Treat as negligible after month 1. |
| Development effort | Out of scope | Not modelled here. Phasing is in the Roadmap. |
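The embedding line item follows directly from the stated rate; a quick arithmetic check shows the one-pass cost is actually well under the ~$1 budget figure:

```python
TOKENS = 300_000        # initial corpus size from the plan
PRICE_PER_M = 0.13      # $ per 1M embedding tokens, as quoted above
cost = TOKENS / 1_000_000 * PRICE_PER_M
print(f"${cost:.3f}")   # → $0.039
```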

Recurring monthly costs

Infrastructure is flat at every pilot team size (one VPS, local disk storage, no Sentry). Only the model API cost scales with query volume.

| Team size | Queries / month | Pilot RAG (model) | Hybrid RAG (model) | Infrastructure (flat) | Total, Pilot RAG | Total, Hybrid RAG |
| --- | --- | --- | --- | --- | --- | --- |
| 5 members | 550 | ~$7 | ~$55 | ~$40 | ~$47 / mo | ~$95 / mo |
| 10 members | 1,100 | ~$14 | ~$110 | ~$40 | ~$54 / mo | ~$150 / mo |
| 15 members | 1,650 | ~$21 | ~$165 | ~$40 | ~$61 / mo | ~$205 / mo |

The infrastructure line is the Forge VPS only ($40 / mo for a Hetzner CPX31 or DigitalOcean 4 vCPU node). The initial embedding cost (less than $1) is amortised into month 1 and not shown separately. No bucket fee. No Sentry fee during pilot.

Per-user, per-year (rough)

| Team size | Pilot RAG / user / year | Hybrid RAG / user / year |
| --- | --- | --- |
| 5 members | ~$113 | ~$228 |
| 10 members | ~$65 | ~$180 |
| 15 members | ~$49 | ~$164 |

Per-user cost falls as the team grows because infrastructure is fixed. Doubling query volume per user is a linear bump on the model line only, not the infrastructure line. Switching from OpenAI to Anthropic Claude shifts these numbers by about ±10%.
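The per-user figures fall out of the monthly totals: model cost scales linearly with queries (110 per member per month at the pilot rates), infrastructure stays flat, and the sum is annualised and divided by headcount. A minimal sketch of that arithmetic, with illustrative per-query rates derived from the 5-member row:

```python
INFRA = 40  # flat VPS cost, $/mo
# Model $/query, backed out of the 5-member row ($7 and $55 over 550 queries).
PER_QUERY = {"pilot": 7 / 550, "hybrid": 55 / 550}

def per_user_per_year(members, queries_per_member=110, arch="pilot"):
    model = members * queries_per_member * PER_QUERY[arch]  # scales with volume
    total_monthly = model + INFRA                           # infra is fixed
    return round(total_monthly * 12 / members)

for n in (5, 10, 15):
    print(n, per_user_per_year(n, arch="pilot"), per_user_per_year(n, arch="hybrid"))
# → 5 113 228 / 10 65 180 / 15 49 164, matching the table above
```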

Cost added back at rollout (Phase 5)

Items deferred during pilot return when sensitive uploads become possible:

| Item | Approx. monthly | Reason |
| --- | --- | --- |
| S3 / R2 object storage with server-side encryption | ~$5 | Required for internal and restricted material. |
| Sentry team plan | ~$26 | Exception tracking once user volume justifies a paid plan. |
| Optional CDN / backups to second region | ~$5 | DR posture for production rollout. |

Headline number for budgeting

At 10 members on Pilot RAG, TLKA's full running cost is approximately $54 per month, or about $650 per year. Phase 4.5 migration to Hybrid RAG raises it to roughly $150 per month. Phase 5 rollout adds another ~$36 per month for storage + Sentry once internal classifications open up.

Where to go next