Phil Goodman
Well-Known Member
Been thinking about this for a while and might actually go through with the project. As someone in the tech field who has been interested in health and research for decades, I've started to consider combining the two now that tech has reached its current state (and it's only getting better). This seems like one of the best use cases for AI agents right now (combing through TONS of research and other data and building models), so I'm interested in applying it to the world of TRT. But obviously you run the risk of the dreaded garbage-in/garbage-out results... so I'd be interested in feedback on what kind of material should be included and how it should be modeled. For one, I want to reduce the risk of my own bias creeping in. Also, there are lots of smart people here, so I'm sure I can get plenty of good suggestions and perspectives I wouldn't consider on my own. To note, though: while I use AI regularly for lots of analysis and am comfortable with the technology (and tech in general), there will be a learning curve starting out... though a lot of that can be smoothed out by proper use of the AI itself. For example, below I'll share some of the guidance so far. It seems like a good starting point, but again, there will probably be some bumps and tweaking along the way.
Literature review & critique — AI is excellent at this right now. Claude, GPT-4, and others can ingest hundreds of papers, identify methodological weaknesses (small n, short duration, lack of controls, industry funding bias, etc.), and synthesize findings.
“Running millions of simulations” — this is the part to think carefully about. AI doesn’t run simulations on its own. What you’d actually be doing is statistical modeling and Monte Carlo simulation — which is very real and powerful, but requires defining a model first. The AI helps you build and run those simulations, not conjure them from raw literature.
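To make "define a model first" concrete, here's a minimal sketch of what one Monte Carlo run actually is. The effect size and standard deviation below are made-up placeholder numbers, not from any real study; the point is only that the simulation draws from a distribution *you* defined.

```python
import random

# Hypothetical model: suppose a study reported a mean symptom-score
# improvement of 2.1 points with SD 1.4. These numbers are illustrative only.
MEAN_EFFECT = 2.1
SD_EFFECT = 1.4
N_SIMS = 100_000

random.seed(42)

# Each simulation draws one plausible effect size from the defined model.
draws = [random.gauss(MEAN_EFFECT, SD_EFFECT) for _ in range(N_SIMS)]

# Estimate the probability that the effect is actually positive.
p_positive = sum(d > 0 for d in draws) / N_SIMS
print(f"P(effect > 0) ≈ {p_positive:.3f}")
```

Everything interesting happens in choosing MEAN_EFFECT and SD_EFFECT, which is exactly why the literature database matters more than the simulation code itself.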
SETTING UP CLAUDE PRO + CLAUDE CODE FOR TRT RESEARCH — COST & SETUP GUIDE
COST
Quick heads up: There was a pricing controversy on April 22, 2026. Anthropic briefly moved Claude Code to the $100-$200/month Max plans only, which caused a firestorm online — but they reversed it within hours. Claude Code is still included in the $20/month Pro plan.
Claude Pro — $20/month — includes Claude Code, 5x usage vs free, priority access, and projects. Claude Max 5x — $100/month — only needed if you hit Pro limits regularly. Start with Pro at $20/month and upgrade only if you hit usage limits during heavy simulation work.
SETUP GUIDE: AVOIDING “GARBAGE IN, GARBAGE OUT”
The quality of your output depends entirely on the quality of your inputs. Here’s how to do it right.
STEP 1 — SET UP CLAUDE PRO & A PROJECT. Go to claude.ai and upgrade to Pro. Create a Project (sidebar > New Project) — this gives Claude persistent context across all conversations. In the Project instructions, paste a system prompt like this: “You are a research assistant specializing in endocrinology literature analysis. Always cite study limitations. Flag conflicts of interest, small sample sizes (under 50), study duration under 12 weeks, and industry funding. Never extrapolate beyond what data supports.” This primes every conversation with your quality standards automatically.
STEP 2 — BUILD A CLEAN LITERATURE DATABASE. This is the most critical phase. Bad data here poisons everything downstream. Where to get papers: PubMed (pubmed.ncbi.nlm.nih.gov) is free and indexes peer-reviewed literature. The Cochrane Library is the gold standard for systematic reviews. ClinicalTrials.gov has raw trial data with less spin than published papers. Avoid blog posts, supplement company sites, and non-peer-reviewed sources. When uploading each paper to Claude, use this prompt: “Analyze this study and extract: (1) Sample size and demographics, (2) Duration, (3) Protocol/dosing, (4) Primary outcomes measured, (5) Funding source, (6) Conflicts of interest, (7) Methodology grade (RCT/observational/case study), (8) Key findings, (9) Limitations acknowledged by authors, (10) Limitations NOT acknowledged by authors. Format as JSON.” This builds a clean, consistent database rather than vague summaries.
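One way to keep that extraction consistent across hundreds of papers is to validate each JSON record against a fixed field list before it enters the database. The record below is entirely hypothetical (invented study details, just matching the ten-point prompt above); the field names are one possible naming scheme, not a standard.

```python
import json

# Hypothetical extraction record matching the ten-point prompt; all values invented.
record = {
    "sample_size": 84,
    "demographics": "men 45-65, hypogonadal (<300 ng/dL)",
    "duration_weeks": 24,
    "protocol_dosing": "testosterone cypionate 100 mg/week IM",
    "primary_outcomes": ["total testosterone", "hematocrit"],
    "funding_source": "university grant",
    "conflicts_of_interest": "none declared",
    "methodology_grade": "RCT",
    "key_findings": "TT normalized by week 8; hematocrit +3.1%",
    "limitations_acknowledged": ["short duration"],
    "limitations_unacknowledged": ["no placebo washout"],
}

REQUIRED_KEYS = {
    "sample_size", "demographics", "duration_weeks", "protocol_dosing",
    "primary_outcomes", "funding_source", "conflicts_of_interest",
    "methodology_grade", "key_findings",
    "limitations_acknowledged", "limitations_unacknowledged",
}

def missing_fields(rec: dict) -> list[str]:
    """Return the fields a record is missing, so bad extractions are caught early."""
    return sorted(REQUIRED_KEYS - rec.keys())

assert missing_fields(record) == []     # complete record passes
json.dumps(record)                      # and is JSON-serializable
```

A check like this catches the most common failure mode: the AI quietly skipping a field (usually funding source or unacknowledged limitations) on some fraction of papers.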
STEP 3 — INSTALL CLAUDE CODE FOR SIMULATIONS. Install Node.js from nodejs.org. Open your terminal and run: npm install -g @anthropic-ai/claude-code. Then run “claude” and log in with your Pro account. Ask Claude Code to build a Python script that reads your JSON literature database, constructs statistical distributions from real study data (effect sizes, variance, etc.), runs Monte Carlo simulations using those distributions, and flags which variables most robustly predict outcomes across simulations. Key GIGO safeguard: tell Claude Code to source every model assumption directly from your literature database, not from general knowledge. This keeps simulations grounded in actual data, not AI assumptions.
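A bare-bones version of the script described above might look like this. The three study entries are invented placeholders standing in for records you'd load from your JSON files, and the pooling method shown (fixed-effect, inverse-variance weighting) is one common choice, not the only one.

```python
import random

# Hypothetical per-study extractions. In the real pipeline these would be
# loaded from your JSON database, e.g. json.load(open("studies/trial_a.json")).
studies = [
    {"id": "trial_a", "effect": 1.8, "se": 0.6, "grade": "RCT"},
    {"id": "trial_b", "effect": 2.4, "se": 0.9, "grade": "RCT"},
    {"id": "trial_c", "effect": 3.0, "se": 1.5, "grade": "observational"},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/se^2,
# so precise studies count for more than noisy ones.
weights = [1 / s["se"] ** 2 for s in studies]
pooled_mean = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

# Monte Carlo over the pooled estimate. Every parameter above comes from
# the study records, not from the model's general knowledge (the GIGO safeguard).
random.seed(0)
draws = [random.gauss(pooled_mean, pooled_se) for _ in range(50_000)]
p_meaningful = sum(d > 1.0 for d in draws) / len(draws)

print(f"pooled effect ≈ {pooled_mean:.2f} ± {pooled_se:.2f}")
print(f"P(effect > 1.0) ≈ {p_meaningful:.3f}")
```

The value of Claude Code here is iterating on a script like this: swapping in random-effects pooling, adding covariates, or re-running as new papers enter the database.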
STEP 4 — QUALITY CONTROL LAYERS. Cross-reference everything: when Claude summarizes a paper, spot-check 2-3 claims against the actual PDF. Ask for uncertainty: prompt Claude to always include confidence levels (“How confident are you, and what would change your assessment?”). Run the same question multiple ways: slightly rephrase key questions and compare answers — inconsistencies reveal weak data or hallucination. Separate tiers of evidence: have Claude tag every finding as RCT, meta-analysis, observational, or case study, and weight simulations accordingly.
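The "weight simulations accordingly" part of the evidence-tier idea can be sketched in a few lines. The tier weights below are an arbitrary illustration, a judgment call you'd have to defend; the mechanism is what matters: lower-grade evidence moves the estimate less.

```python
# Hypothetical tier weights; the exact values are a judgment call.
TIER_WEIGHTS = {
    "meta-analysis": 1.0,
    "RCT": 0.9,
    "observational": 0.4,
    "case study": 0.1,
}

# Two invented findings: a modest RCT result and a dramatic case study.
findings = [
    {"effect": 2.0, "grade": "RCT"},
    {"effect": 3.5, "grade": "case study"},
]

def weighted_effect(items: list[dict]) -> float:
    """Pool effect sizes, downweighting lower tiers of evidence."""
    w = [TIER_WEIGHTS[f["grade"]] for f in items]
    return sum(wi * f["effect"] for wi, f in zip(w, items)) / sum(w)

# The case study barely moves the estimate despite its large effect size.
print(weighted_effect(findings))
```

An unweighted average of those two findings would be 2.75; the tiered version stays close to the RCT's 2.0, which is exactly the behavior you want from a GIGO defense.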
REALISTIC TIMELINE AND COST. Literature collection takes 1-2 weeks using PubMed (free) and Claude Pro ($20). Structured extraction takes 2-3 weeks using Claude Pro ($20). Simulation build takes 1-2 weeks using Claude Code which is included in Pro at no extra cost. Simulation runs and analysis are ongoing using Python (free) and Claude Pro ($20/month). Total cost is $20/month. The heavy computation runs locally in Python on your own computer for free — Claude Code just writes and iterates the code for you.