
Friday, April 24, 2026

How to Use Claude to Analyze Any Crypto Project in 5 Minutes

Most people using AI for crypto research are doing it completely wrong — and they're proud of it.

They paste a whitepaper into ChatGPT, get a cheerful summary that sounds like the project's own marketing copy, and walk away thinking they did their homework. A 2024 study from the Stanford Internet Observatory found that large language models, when given no specific prompting structure, tend to reproduce the framing of their source material rather than critically evaluate it. In crypto, that framing is almost always bullish. You just paid $0 for someone to hype you into a bad investment.

Claude — Anthropic's AI — is different. Not because it's magic. Because it responds differently to adversarial prompting. If you know how to ask the right questions, you can build a legitimate due diligence framework around it that takes five minutes and catches the kind of red flags that take most people five hours to find. Or never.

I've been running bots and AI-assisted setups since 2017. I've burned money on garbage projects and made money on solid ones. I now use Claude as a first-pass filter on every project I consider. Here's exactly how I do it.


Why Most People's AI Research Is Just Expensive Laziness

The default behavior when someone discovers a new altcoin is to Google it, read the website, maybe skim the whitepaper, and look at a price chart. That takes 20 minutes and tells you almost nothing useful. The website was designed to make you bullish. The whitepaper was written to sound technical. The price chart is the last thing that reflects fundamentals.

The mistake with AI tools is treating them like a smarter search engine. You ask "What is [Project X]?" and you get a Wikipedia-style overview. Useless. What you actually need is a structured interrogation — one where you're specifically trying to break the project, not understand it.

Claude is particularly good at this because it handles nuanced, multi-part prompts well and doesn't flatten complexity into confidence. When you tell it to steelman and then attack an argument, it does both. That dual-processing is exactly what crypto due diligence requires.

Bitcoin doesn't need this kind of scrutiny for its fundamentals — BTC's network security, supply schedule, and 15+ years of uptime speak for themselves. But anything outside BTC deserves a hard look, and Claude is the fastest way I've found to give it one.


The Five-Minute Framework: Exactly What to Paste and Ask

This is the actual workflow. Not a theoretical one.

Step 1: Collect your raw materials first (2 minutes)

Before you even open Claude, grab:

  • The project's homepage copy (just copy-paste the text)
  • The tokenomics section of the whitepaper or docs
  • The team page (or lack thereof)
  • One recent announcement or blog post from the project

You don't need the full whitepaper. You need the claims they're making publicly.

Step 2: Open Claude with an adversarial prompt, not a neutral one (30 seconds)

This is where 90% of people fail. They ask: "Can you summarize this project?"

Don't do that.

Ask this instead:

"I'm evaluating [Project Name] as a potential investment. I'm going to give you their homepage copy, tokenomics, and a recent announcement. Your job is not to summarize it — your job is to identify every claim that lacks evidence, every red flag in the tokenomics, and any narrative technique they're using to obscure weakness. Be specific. If something looks legitimate, say so and explain why."

Then paste your materials.

That single framing change produces completely different output. Claude stops being a summarizer and starts being an analyst.

Step 3: Follow up with the tokenomics interrogation (1 minute)

After the first output, paste the tokenomics section specifically and ask:

"Analyze this token distribution. What percentage goes to insiders versus the public? What are the vesting schedules? Is there any mechanism that allows the team to dump on retail? Compare this to what a clean tokenomics structure would look like."

This question alone has saved me from three projects in the past year that looked legitimate on the surface but had 40%+ team allocations with 6-month cliffs — essentially a calendar reminder for a rug.
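The insider-allocation heuristic above can be encoded as a quick sanity check before you even open Claude. This is a minimal sketch, not a standard: the field names, the set of "insider" labels, and the 30%/12-month thresholds are my own illustration of the rule of thumb described here.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    label: str          # e.g. "team", "advisors", "public sale"
    percent: float      # share of total token supply
    cliff_months: int   # months before any tokens unlock
    vest_months: int    # vesting duration after the cliff

def tokenomics_red_flags(allocations: list[Allocation]) -> list[str]:
    """Flag insider-heavy distributions and short cliffs.

    Thresholds mirror the heuristic in the text: insider share above
    ~30% is a warning, and an insider cliff under 12 months means the
    team can start selling while retail is still holding the bag.
    """
    flags = []
    insider_labels = {"team", "advisors", "insiders", "foundation"}
    insider_pct = sum(a.percent for a in allocations
                      if a.label in insider_labels)
    if insider_pct > 30:
        flags.append(f"insider allocation is {insider_pct:.0f}% (> 30%)")
    for a in allocations:
        if a.label in insider_labels and a.cliff_months < 12:
            flags.append(f"{a.label} cliff is only {a.cliff_months} months")
    return flags
```

A 40% team slice with a 6-month cliff — the pattern described above — trips both checks at once. Treat the output as a prompt for further questions, not a verdict.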

Step 4: The "explain why this fails" question (1 minute)

Ask Claude to play devil's advocate on the core value proposition:

"Assume this project fails completely within 18 months. What are the three most likely reasons for that failure based on what you've read? Be specific to this project, not generic."

The answers here are often the most valuable part of the whole exercise. When the failure modes are obvious and the team hasn't addressed them anywhere in their public materials, that's a signal.

Step 5: Cross-reference what it can't know (30 seconds)

Claude's training has a knowledge cutoff, and it has no live market data. Ask it directly:

"What would I need to verify manually that you can't confirm — team identities, contract audits, on-chain activity, social sentiment?"

This gives you a checklist of what to check next. It also stops you from treating the AI output as a complete picture.
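The four prompts in Steps 2 through 5 can be packaged as a reusable sequence, so you run the same interrogation on every project instead of improvising. The prompt wording below follows the article's framework; the function name and structure are my own. Paste each prompt (plus your collected materials) into Claude in order, within one conversation.

```python
def build_prompt_sequence(project: str) -> list[str]:
    """Return the five-minute framework's prompts, in order.

    Step 2 is adversarial framing, Step 3 the tokenomics
    interrogation, Step 4 the pre-mortem, Step 5 the manual
    verification checklist.
    """
    return [
        # Step 2: adversarial framing, not "summarize this"
        (f"I'm evaluating {project} as a potential investment. "
         "I'm going to give you their homepage copy, tokenomics, and a "
         "recent announcement. Your job is not to summarize it — your "
         "job is to identify every claim that lacks evidence, every red "
         "flag in the tokenomics, and any narrative technique they're "
         "using to obscure weakness. Be specific. If something looks "
         "legitimate, say so and explain why."),
        # Step 3: tokenomics interrogation
        ("Analyze this token distribution. What percentage goes to "
         "insiders versus the public? What are the vesting schedules? "
         "Is there any mechanism that allows the team to dump on "
         "retail? Compare this to what a clean tokenomics structure "
         "would look like."),
        # Step 4: the pre-mortem
        ("Assume this project fails completely within 18 months. What "
         "are the three most likely reasons for that failure based on "
         "what you've read? Be specific to this project, not generic."),
        # Step 5: what the model cannot verify
        ("What would I need to verify manually that you can't confirm "
         "— team identities, contract audits, on-chain activity, "
         "social sentiment?"),
    ]
```

You can work through the list by hand in the Claude web UI, or feed the prompts sequentially to the Anthropic API if you want to script the first pass across a watchlist.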


A Real Example: How I Used This on a Layer-2 Project in April

I'm not going to name the project because I'm not here to torpedo anyone's bags, but in early April I was looking at a Layer-2 scaling solution that had decent GitHub activity and an interesting ZK-rollup implementation. The price chart looked attractive and a few accounts I follow had been talking about it.

I ran the full five-minute Claude framework on it.

First output flagged three things: the "partnerships" listed on the homepage were actually just API integrations with no commercial agreement language, the "audited by" claim linked to a firm I'd never heard of, and the roadmap used future tense for things that were supposedly already live.

The tokenomics interrogation found a 35% team/advisor allocation with a 12-month cliff and 24-month vest — aggressive, but not disqualifying on its own.

The "why does this fail" question is what killed it for me. Claude correctly identified that their core technical claim — throughput superiority over existing L2s — had no third-party benchmarks backing it. Every performance number came from their own blog. In crypto, self-reported performance metrics are basically fiction.

I didn't buy. Two weeks later the project's lead developer went quiet and the Discord started filling up with angry holders. I don't know if it was an outright scam or just a failing project, but I didn't need to find out the hard way.

Bitcoin's fundamentals, by contrast, are publicly verifiable at every layer — hashrate, transaction volume, supply issuance, node count. There's no equivalent opacity. That's the standard everything else should be held to, and almost nothing meets it.


The Contrarian Insight Most Crypto Blogs Won't Tell You

Here's what I never see discussed: Claude is more useful for exposing what a project doesn't say than for analyzing what it does say.

Most AI-powered research tools are optimized to extract and summarize information. But in crypto, the absence of information is often the signal. A project that doesn't mention its smart contract audit in any public document. A team page with no LinkedIn links. A whitepaper that describes the problem space in great detail but glosses over the technical solution in two paragraphs.

Claude, when prompted correctly, identifies these gaps. The question "What important information is conspicuously missing from these materials?" produces output that no keyword search, no token scanner, and no price chart can give you.

I've started treating Claude's gap analysis as a higher-value signal than its positive findings. When it struggles to find red flags, that's mildly encouraging. When it keeps noting what the project didn't address, that's actionable.

The irony is that the projects with the most polished marketing materials often produce the most concerning gap analysis. The glossier the pitch, the more carefully structured the omissions tend to be.


What Claude Cannot Do — And What You Still Need

Claude has no access to live blockchain data. It can't check whether a contract has been deployed, whether whale wallets are accumulating or distributing, or what the current liquidity depth looks like. It can't tell you if a team member's LinkedIn was created three weeks ago.

For on-chain verification, I use block explorers and a handful of scanner tools. For actual position execution on Bitcoin and the handful of alts I trade, I use Kraken — it's where I've had the best execution and the least nonsense over the years. Not the flashiest UI in crypto, but reliable when volatility is high and that's when reliability matters.

For anything I'm holding long-term, especially BTC, it goes on a Trezor. No AI tool, no matter how good, changes the fundamental rule that if it's not in your wallet, it's not yours.


Key Takeaways

  • Adversarial prompting beats neutral prompting — tell Claude to find problems, not summarize features, and you get completely different and more valuable output
  • Tokenomics interrogation is the fastest single-variable filter — insider allocations above 30% with short vesting schedules are a reliable red flag that Claude surfaces quickly
  • Gap analysis is underused — what a project omits from its public materials is often more revealing than what it includes
  • Claude has hard limits — it can't verify on-chain data, current team activity, or live market conditions; treat its output as a first filter, not a final answer
  • Five minutes of structured AI analysis beats thirty minutes of casual browsing — the framework matters more than the time spent

Frequently Asked Questions

Can Claude actually replace proper due diligence on a crypto project? No, and don't treat it that way. Claude is a first-pass filter that identifies claims worth scrutinizing and gaps worth investigating. You still need to verify team identities, check smart contract audits from reputable firms, and look at on-chain data manually. Think of it as the pre-screening round, not the final interview.

Does it matter which version of Claude I use for this? Yes, in practice. The more capable models handle multi-part adversarial prompts significantly better than the baseline versions. Claude 3.5 Sonnet or the most current equivalent will produce more nuanced gap analysis than a lite model. Free tiers work for basic queries, but if you're making serious investment decisions, use the full model.

What if Claude gives me a positive assessment — does that mean the project is safe? A positive assessment from Claude means it didn't find obvious red flags in the materials you provided. That's not the same as safe. The quality of your output depends entirely on the quality of what you paste in. If the project's public materials are carefully crafted to hide problems, Claude works with what you gave it. Always cross-reference with on-chain data and independent community analysis.


Try This First

If you take nothing else from this post, run one test before the end of the week: take any altcoin you currently hold or are considering, collect its homepage text and tokenomics, and ask Claude what important information is conspicuously missing from the public-facing materials.

Don't ask it to summarize. Don't ask if it's a good investment. Ask what's missing.

The answer will either give you confidence or give you pause. Either way, you'll know more in five minutes than most people figure out after months of holding.


Follow BitBrainers — we only write about tools we would actually use ourselves.
